\section{Introduction} \label{sec:introduction} \begin{figure}[t] \centering \includegraphics[width=1\linewidth]{figures/main_illustration.pdf} \vspace{-0.3cm} \caption{[A] Multiple moving sensors scan a time-varying object (cloud) from multiple views. [B] The measurements, acquired at different times, are the input to our method, which aims to recover the 3D volume density of the object at the different times [C].} \label{fig:main_illustration} \end{figure} Computed tomography (CT) is a powerful way to recover the inner structure of three-dimensional (3D) volumetric heterogeneous objects~\cite{gkioulekas2016evaluation}. Being possibly one of the earliest types of computational photography methods, CT has extensive use in many research and operational domains. These include medicine~\cite{pan20044d}, sensing of atmospheric pollution~\cite{aides2020distributed}, geophysics~\cite{wright2008scanning} and fluid dynamics~\cite{zang2019warp,zang2018space}. As a result, CT technologies and novel modalities are increasingly being advanced by the computer vision and graphics communities~\cite{gregson2012stochastic,narasimhan2005structured}. CT requires imaging from multiple directions~\cite{anirudh2018lose,kaestner2011spatiotemporal}. In nearly all CT approaches, the object is considered static during image acquisition. However, in many cases of interest, the object changes while multi-view images are acquired sequentially~\cite{zang2020tomofluid,eckert2018coupled}. Thus, effort has been invested to generalize 3D CT to four-dimensional (4D) spatiotemporal CT, particularly in the computer vision and graphics communities~\cite{qian2017stereo,zang2020tomofluid,zang2019warp}. This effort has been directed at linear-CT modalities. Linear CT is computationally easier to handle, thus it has been common for decades, mainly in medical imaging~\cite{hiriyannaiah1997x}. 
Medical CT often exploited the periodic temporal nature of organ dynamics to synchronize sequential acquisitions~\cite{pan20044d}. The generalization in linear CT is mirrored in a generalization of object surface recovery by spatio-temporal computer vision~\cite{qian2017stereo,leroy2017multi,mustafa2016temporally}. This paper focuses on a more complicated model: scattering CT. It is important to treat this case for scientific, societal and practical reasons. The climate is strongly affected by interaction with clouds~\cite{fujita1986mesoscale} (Fig.~\ref{fig:main_illustration}). To reduce major errors in climate predictions, this interaction requires a much finer understanding of cloud physics than current knowledge provides. Current knowledge is based on empirical remote sensing data that is analyzed under the assumption that the atmosphere and clouds are made of very broad and uniform layers. This assumption leads to errors in climate understanding. To overcome this problem, 3D scattering CT has been suggested as a way to study clouds~\cite{levis2015airborne,levis2017multiple}. Scattering CT of clouds requires high resolution multi-view images from space. There are spaceborne and high-altitude systems that may provide such data, such as AirMSPI~\cite{diner2013airborne}, MAIA~\cite{boland2018nasa}, HARP~\cite{neilsen2015hyper}, AirHARP~\cite{mcbride2020spatial} and the planned CloudCT formation~\cite{schilling2019cloudct}. However, there is a practical problem: these systems are very expensive, so it is not realistic to deploy them in large numbers to simultaneously acquire images of the same clouds from many angles. Therefore, in practice, the platforms are planned to move above the clouds: a sequence of images is taken, in order to span and sample a wide angular breadth, but the cloud evolves meanwhile. Hence there are important reasons to derive 4D scattering CT, particularly of clouds. We pose conditions under which this task can be performed. 
These relate to temporal sampling and angular breadth, in relation to the correlation time of the evolving object. Then, we generalize prior 3D CT, specifically scattering CT, to spatiotemporal recovery using data taken by moving cameras. We present an optimization-based method to reach this, and then demonstrate this method both in rigorous simulations and on real data. \section{Theoretical Background} \label{sec:theory} Computed Tomography (CT) seeks estimation of the 3D volumetric density ${\boldsymbol \beta}$ of an object. Usually, CT measures the object from multiple directions. Denote those measurements by ${\boldsymbol y}$. A forward model ${\cal F} \left({\boldsymbol \beta} \right)$ expresses the image formation model. Estimation of ${\boldsymbol \beta}$ is done by minimization of a cost ${\cal E}$, which penalizes the discrepancy between ${\boldsymbol y}$ and the forward model, \begin{equation} {\hat{\boldsymbol \beta}} = \arg \!\!\min_{{\boldsymbol \beta} ~~~~~} {\cal E} \left[{\boldsymbol y}, {\cal F} \left({\boldsymbol \beta} \right) \right] \;. \label{eq:inverse} \end{equation} Often, acquiring data from multiple angles simultaneously is very difficult. Sensors are expensive and/or power-consuming. Thus, their duplication in large numbers to sample many directions is often prohibitive. Sometimes the sensors are bulky and need cooling, posing difficulty to pack them densely. Therefore, CT measurements are often acquired {\em sequentially}. In contrast, the inverse tomographic problem (Eq.~\ref{eq:inverse}) expresses the object as time-invariant. However, in many situations this is not the case; the object changes in time. Thus, modeling the inverse problem as Eq.~(\ref{eq:inverse}) is inconsistent both with the dynamic nature of the object and the sequential nature of the sensing process. This paper focuses on scattering-based CT. Thus, we describe here the relevant forward model. We follow the notations and definitions of~\cite{levis2015airborne,aides2020distributed}. 
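Before specializing to scattering, the generic inversion of Eq.~(\ref{eq:inverse}) can be illustrated numerically. The sketch below is a minimal toy example assuming a {\em linear} forward model ${\cal F}({\boldsymbol \beta}) = A{\boldsymbol \beta}$ (not the scattering model treated in this paper); the matrix $A$, the field sizes and the iteration count are illustrative assumptions.

```python
import numpy as np

# Toy illustration of the inversion in Eq. (1), with a *linear* forward model
# F(beta) = A @ beta. The scattering forward model of this paper is nonlinear;
# A, beta_true and the iteration count here are illustrative assumptions.
rng = np.random.default_rng(0)
n_voxels, n_measurements = 50, 200
A = rng.normal(size=(n_measurements, n_voxels))   # toy projection matrix
beta_true = np.abs(rng.normal(size=n_voxels))     # non-negative density
y = A @ beta_true                                 # noise-free measurements

# Minimize E = 0.5 * ||y - F(beta)||^2 by projected gradient descent,
# keeping the physical constraint beta >= 0.
beta = np.zeros(n_voxels)
step = 1.0 / np.linalg.norm(A, 2) ** 2            # conservative step size
for _ in range(2000):
    gradient = A.T @ (A @ beta - y)
    beta = np.clip(beta - step * gradient, 0.0, None)
```

With an overdetermined, well-conditioned $A$, the iterates approach the true density; real scattering CT replaces the matrix-vector product with radiative transfer.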
Material density at a voxel is denoted by $\beta$. In case the main interaction effect is scattering (rather than absorption or emission), as is the case of visible light in clouds, $\beta$ is the extinction or scattering coefficient of the medium, in units of ${\rm km}^{-1}$. Concatenating this coefficient in all voxels creates a vector ${\boldsymbol \beta}_t$, per time $t$. The interaction of radiation with a scattering volumetric object is modeled by 3D {\em radiative transfer}, which includes multiple scattering. Define radiative transfer by an operator ${\rm RT}({\boldsymbol \beta}_t)$. There are various algorithms to implement ${\rm RT}({\boldsymbol \beta}_t)$, including Monte-Carlo~\cite{mayer2009radiative,loeub2020monotonicity} and the spherical harmonic discrete ordinate method (SHDOM). We use the latter, as it is considered trustworthy by the scientific community~\cite{evans1998spherical} and has open-source online codes~\cite{levis2020git}. Radiative transfer yields the radiance $i({\bf x},{\boldsymbol \omega})$ at each location ${\bf x}$ in space and each light propagation direction ${\boldsymbol \omega}$. A camera observes the scene from a specific location, while each of the camera pixels samples a direction ${\boldsymbol \omega}$. Hence, imaging (the forward model) amounts to sampling the output of radiative transfer at the camera locations and pixels' lines of sight, while integrating over the camera exposure time and spectral bands. Camera sampling is denoted by a projection operator $P_{{\bf x},{\boldsymbol \omega}}$. To conclude, the forward model for the expected value of a pixel gray level at time $t$ is \begin{equation} g_{{\bf x},{\boldsymbol \omega},t} ={\cal F} \left( {\boldsymbol \beta}_{t} \right) \approx \gamma^{\rm cam} P_{{\bf x},{\boldsymbol \omega}} \left\{ {\rm RT} ({\boldsymbol \beta}_t) \right\} \;. 
\label{eq:forwardmodel_approx} \end{equation} Here $\gamma^{\rm cam}$ expresses camera properties, including the lens aperture area, exposure time, spectral band, quantum efficiency and lens transmissivity. Eq.~(\ref{eq:forwardmodel_approx}) assumes that the exposure time is sufficiently small, such that within this time, the scene and the camera pose vary insignificantly. Empirical measurements include random noise. The noise mainly originates from the discrete nature of photons and electric charges, which yields a Poisson process. There are additional noise sources, and their parameters can be extracted from the sensor specifications. Denote incorporation of noise into the expected signal by the operator $\cal N$. Then, a raw measurement is \begin{eqnarray} y_{{\bf x},{\boldsymbol \omega},t} = {\cal N} \left\{ g_{{\bf x},{\boldsymbol \omega},t} \right\} \;. \label{eq:y_noise_I} \end{eqnarray} \section{Representing Dynamic Volumetric Objects} \label{sec:3DObject} In this section, we present an approximate representation of volumetric 3D objects, which evolve gradually in time. There is a need and justification for the approximation. The need is because in our work, there is often insufficient simultaneous data for high quality 3D tomography at all times. Data is captured sequentially, while the object evolves. Hence, at best, we would recover a good approximation of the evolving object. The approximation can be satisfactory, however, if temporal samples are sufficiently dense, as elaborated in Sec.~\ref{sec:FrequencyAnalysis}. The object has an evolving state, which is sampled at the time set ${\cal T}=\{t_1,\ t_2,\ \dots,\ t_{N^{\rm state}}\}$. At time $t\in {\cal T}$, the true object is represented by the instantaneous 3D extinction field ${\boldsymbol \beta}_t$. Define a corresponding hidden field ${\boldsymbol \beta}^{\rm hidden}_t$. 
The instantaneous field is represented as a convex linear combination of the hidden fields: \begin{equation} {\boldsymbol \beta}_{t}= \sum_{t'\in {\cal T}} w_{t}(t'){\boldsymbol \beta}^{\rm hidden}_{t'} \label{eq:beta_t_linear_comb2} \;. \end{equation} All weights $\{ w_{t}(t')\}$ satisfy $0 \leq w_t(t') \leq 1$ and \begin{equation} \sum_{t'\in {\cal T}} w_{t}(t')=1 \label{eq:normw} \;. \end{equation} Equation~(\ref{eq:beta_t_linear_comb2}) implies a non-negative correlation of the 3D field at $t$ to the 3D field at any time $t'$. Correlation should decay with the time lag $|t-t'|$. We set the weights by a normal function \begin{equation} w_t(t') = s \exp\left( -\frac{|t-t'|^2}{2\sigma^2} \right) \label{eq:weights_t_normal_dist} \;. \end{equation} Here $s$ is a normalization factor, set to satisfy Eq.~(\ref{eq:normw}). The parameter $\sigma$ expresses the {\em effective correlation time} of the volumetric object. We elaborate on the value of $\sigma$ in Sec.~\ref{sec:FrequencyAnalysis}. Two limiting cases are illustrative, however. For $\sigma \xrightarrow{}\infty$, we have $w_t(t') \xrightarrow{} 1/N^{\rm state}$. This means that the object ${\boldsymbol \beta}$ is effectively static. On the other hand, for $\sigma \xrightarrow{}0$, we have $w_t(t') \xrightarrow{} \delta(t-t')$, i.e. a Dirac delta function. This means that the object ${\boldsymbol \beta}$ varies so fast, that at any time $t$ its state is uncorrelated to the state at other times. \section{Bandwidth and Object Sampling} \label{sec:FrequencyAnalysis} In Sec.~\ref{sec:3DObject}, a 3D volumetric object which gradually evolves is represented using a linear superposition of sampled states. The linear superposition is controlled by a kernel, whose width is $\sigma$. Here we elaborate both on the sampling process and how $\sigma$ emerges from the temporal nature of the object. The relation between a continuously varying object and its discrete temporal samples is governed by the Nyquist sampling theorem. 
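The representation of Eqs.~(\ref{eq:beta_t_linear_comb2})--(\ref{eq:weights_t_normal_dist}) can be sketched as follows; the grid size, the sample times and the value of $\sigma$ are illustrative assumptions.

```python
import numpy as np

# Sketch of Eqs. (4)-(6): each instantaneous field beta_t is a
# normalized-Gaussian-weighted convex combination of hidden fields.
# Grid size, sample times and sigma are illustrative assumptions.
def weights(t, sample_times, sigma):
    """Normalized Gaussian weights w_t(t') over the sample set T."""
    w = np.exp(-np.abs(t - sample_times) ** 2 / (2.0 * sigma ** 2))
    return w / w.sum()                 # enforces sum_{t'} w_t(t') = 1

sample_times = np.arange(0.0, 70.0, 10.0)     # T = {0, 10, ..., 60} [sec]
rng = np.random.default_rng(1)
hidden = rng.random((sample_times.size, 30, 30, 30))   # hidden fields

sigma = 20.0                                  # effective correlation time [sec]
w = weights(30.0, sample_times, sigma)
beta_30 = np.tensordot(w, hidden, axes=1)     # instantaneous field at t = 30
```

The two limiting behaviors stated above are easy to verify: a very large $\sigma$ yields near-uniform weights (an effectively static object), while a very small $\sigma$ yields a Kronecker delta at $t'=t$.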
This theorem states that, for an infinite time domain:\\ \noindent $\bullet$ Sampling loses no information if for any time sample index $k$, $|t_{k+1}-t_k|\leq (2B)^{-1}$, where $B$ is the half-bandwidth of the object's temporal variations. \\ \noindent $\bullet$ Based on these lossless temporal samples, lossless reconstruction of the object is achieved by a linear superposition of the samples. The linear superposition is achieved by temporal convolution of the samples with a ${\rm sinc}$ kernel. This kernel has infinite temporal length. The kernel's effective half-width, defined by its first zero-crossing, is $(2B)^{-1}$. \\ In practice, the temporal domain, number of samples and reconstruction kernels are finite. Moreover, the object's temporal spectrum is often not completely band-limited by $B$, because some low energy content has far higher frequencies. Consequently, sampling and reconstruction are lossy, yielding an approximate result. The reconstruction is not performed by a ${\rm sinc}$ kernel, but using a finite-length kernel, such as the cropped Gaussian $w_t(t')$ of Eq.~(\ref{eq:weights_t_normal_dist}). Correspondingly, in our approximation, $\sigma\sim (2B)^{-1}$. As an example, consider warm convective clouds. They are governed by air turbulence of decameter scale. In these scales~\cite{fujita1986mesoscale}, the {\em correlation time} of content in a voxel is approximately $20~{\rm to}~50$ seconds. This indicates that {\em 4D spatiotemporal clouds can be recovered well using 4D spatiotemporal samples}, if the temporal samples are about 30 seconds apart. Furthermore, this indicates the range of values of $\sigma$. Moreover, the entire lifetime of a warm convective cloud is typically measured in minutes. An additional illustration is given by a cloud simulation, which is described in detail in Sec.~\ref{sec:CloudFieldSimulation}. The cloud evolves for about 10 minutes. 
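The relation $\sigma\sim(2B)^{-1}$ can be illustrated on a synthetic time series (an illustrative assumption, not the LES cloud data): estimate the frequency $B$ that contains 95\% of the signal power, then the implied sampling period.

```python
import numpy as np

# Numerical illustration of sigma ~ (2B)^{-1}: estimate the half-bandwidth B
# containing 95% of the power of a sampled time series, and the implied
# temporal sampling period. The synthetic signal below is an illustrative
# assumption, not the LES cloud data.
dt = 0.5                                   # sampling period [sec]
t = np.arange(0.0, 600.0, dt)              # 10 minutes of samples
rng = np.random.default_rng(2)
# Slowly varying signal: low-frequency sinusoids plus weak noise.
signal = (np.sin(2.0 * np.pi * 0.005 * t)
          + 0.5 * np.sin(2.0 * np.pi * 0.01 * t)
          + 0.05 * rng.standard_normal(t.size))

power = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(t.size, d=dt)
cumulative = np.cumsum(power) / power.sum()
B = freqs[np.searchsorted(cumulative, 0.95)]   # 95%-power cutoff [Hz]
sampling_period = 1.0 / (2.0 * B)              # lossless if samples are <= (2B)^-1 apart
```

For this synthetic signal the 95\%-power cutoff lands at the higher sinusoid frequency, and the implied sampling period is $(2B)^{-1}$, analogous to the 25-second figure obtained for the simulated clouds below.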
For each cloud voxel, we calculated the power spectrum using the short-time Fourier transform. This power was then aggregated over all voxels. The total temporal spectrum is plotted in Fig.~\ref{fig:stft}. \begin{figure}[t] \centering \includegraphics[width=1\linewidth]{figures/stft_cloud_field.pdf} \vspace{-0.7cm} \caption{A cutoff frequency $\approx 1/50 {\rm [Hz]}$, within which 95\% of the signal power is contained, is marked in red.} \label{fig:stft} \end{figure} The spectrum is effectively limited. The cutoff is not sensitive to the evolving stages of the clouds. From this cutoff, a temporal sampling period of $25{\rm [sec]}$ or shorter encapsulates most of the energy of the temporal variations. Hence, we set $\sigma \approx 25{\rm [sec]}$ for clouds. \section{Tomographic Angular Extent} \label{sec:anglesmaple} Section~\ref{sec:FrequencyAnalysis} dealt with sampling of an object, as if 4D measurements are done in-situ. However, in CT, we have no direct access to ${\boldsymbol \beta}_t$: we only measure projections ${\boldsymbol y}_{t}$. As we discuss now, projections must have a {\em wide angular breadth, while object evolution is small}. Consider an extreme case. Let a cloud be temporally constant and reside only in a single voxel, over the ocean. Viewed from space by two cameras simultaneously, cloud recovery here amounts to triangulation. In triangulation, the best cloud-localization resolution is obtained if the angular range between the two cameras is $90^{\circ}$. At small baselines, localization quality degrades linearly with the decreasing angular extent. When more than two cameras operate, localization behaves more moderately, but with a similar trend. Consider two criteria that have been used in 3D CT~\cite{levis2015airborne,loeub2020monotonicity,holodovsky2016situ}. 
Per time sample $t$, \begin{equation} \delta_t = \frac{\| {\boldsymbol \beta}^{\rm true}_t\|_1 - \|{\hat {\boldsymbol \beta}}_t \|_1}{\| {\boldsymbol \beta}^{\rm true}_t\|_1} \;,\;\;\; \varepsilon_t = \frac{\| {\boldsymbol \beta}^{\rm true}_t - {\hat {\boldsymbol \beta}}_t \|_1}{\| {\boldsymbol \beta}^{\rm true}_t\|_1} \; \label{eq:MassError} \end{equation} relate, respectively, to the relative bias of the object mass and the relative recovery error. We generalize them to the whole sample set $t\in{\cal T}$, by averaging: \begin{equation} \delta = \frac{1}{N^{\rm state}} \sum_{t\in{\cal T}} \delta_t \;, \;\;\;\; \varepsilon = \frac{1}{N^{\rm state}} \sum_{t\in{\cal T}} \varepsilon_t \;. \label{eq:epsilin} \end{equation} Fig.~\ref{fig:angle_span} plots these measures when CT attempts to recover a single-voxel cloud (extending 20 meters), as nine cameras surround it from $500{\rm km}$ away. \begin{figure}[t] \centering \includegraphics[width=0.8\linewidth]{figures/eps_delta_angle_span.pdf} \vspace{-0.3cm} \caption{A static heterogeneous cloud and a single-voxel cloud (having size $20 {\rm m}\times 20 {\rm m}\times 20 {\rm m}$) are recovered from nine viewpoints. The plots are of errors defined in Eq.~(\ref{eq:epsilin}).} \label{fig:angle_span} \end{figure} Above $\approx 60^{\circ}$ total angular extent, recovery reaches a limiting excellent quality, but quality is very poor at narrow angular spans. What happens in extended objects? In linear CT (as in medical X-ray CT), information loss due to limited-angle imaging is known as the {\em missing cone of frequencies}~\cite{macias1988missing,agard1989fluorescence}. In scattering CT, with the exception of very sparse objects, the linear missing-cone theory does not apply. However (see Fig.~\ref{fig:angle_span}), there is a marked degradation of quality if the angular extent is narrow. Here, scattering CT is performed on a single state (fixed time) of a cloud simulated as in Sec.~\ref{sec:CloudFieldSimulation}. 
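A minimal sketch of the criteria in Eqs.~(\ref{eq:MassError})--(\ref{eq:epsilin}); the toy fields below are illustrative assumptions.

```python
import numpy as np

# The error criteria delta_t, epsilon_t and their averages over the sample
# set T. Extinction is non-negative, so the L1 norm of a field is its sum.
def mass_bias(beta_true, beta_hat):
    """Relative bias delta_t of the recovered object mass."""
    return (beta_true.sum() - beta_hat.sum()) / beta_true.sum()

def recovery_error(beta_true, beta_hat):
    """Relative L1 recovery error epsilon_t."""
    return np.abs(beta_true - beta_hat).sum() / beta_true.sum()

# Toy sample set T with two states (illustrative constant fields).
truths = [np.full((4, 4, 4), 2.0), np.full((4, 4, 4), 3.0)]
estimates = [np.full((4, 4, 4), 1.5), np.full((4, 4, 4), 3.0)]
delta = np.mean([mass_bias(bt, bh) for bt, bh in zip(truths, estimates)])
epsilon = np.mean([recovery_error(bt, bh) for bt, bh in zip(truths, estimates)])
```

Note that $\delta$ can be small even when $\varepsilon$ is large (mass can be right while misplaced), which is why both criteria are reported.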
So far, this section dealt with static clouds. Clouds are considered nearly static between times $t,t'$ if $|t-t'|<\sigma$. The viewing angular extent covered in those times (and in intermediate times) is denoted $\Theta(t,t')$, in radians. So, within a time span $\approx \sigma$, good recovery can be achieved only if $2\Theta(t,t')/\pi$ is large. If it is low, then spatial (altitude) resolution in CT recovery is lost. Most CT systems eventually cover a wide angular extent. Hence, what guarantees quality is the angular rate. Define a dimensionless figure \begin{equation} \rho =\frac{2\Theta(t,t')} {\pi} \frac{\sigma} {|t-t'|} \;. \label{eq:rho} \end{equation} Good 4D recovery requires $\rho\gtrsim 1$, while temporal sampling satisfies the condition of Sec.~\ref{sec:FrequencyAnalysis}. The more these conditions are violated, the worse 4D CT is expected to perform. \section{4D Scattering Tomography} \label{sec:4DTomography} We now generalize Eq.~(\ref{eq:inverse}) to 4D CT. At any time $t\in{\cal T}$, the object is viewed simultaneously from a set of viewpoints ${\cal C}_t$. The image data captured in viewpoint $c\in{\cal C}_t$ is denoted by the vector ${\boldsymbol y}_{c}$. The image data captured simultaneously by all cameras $c\in{\cal C}_t$ is concatenated to a vector ${\boldsymbol y}_{t}$. At that time, the modeled medium variables are represented by a vector ${\boldsymbol \beta}_t$. At the corresponding time, as described in Sec.~\ref{sec:3DObject}, there is a hidden representation of the medium, ${\boldsymbol \beta}^{\rm hidden}_t$. The inverse problem is now formulated by \begin{equation} \{ {\hat {\boldsymbol \beta}}_t \}_{t\in {\cal T}} = \arg \!\!\!\!\!\!\!\!\min_{ \{ {\boldsymbol \beta}_t \}_{t\in {\cal T}} ~~~~~ } \sum_{t\in {\cal T}} {\cal E} \left[{\boldsymbol y}_{t}, {\cal F} \left({\boldsymbol \beta}_t \right) \right] \;. 
\label{eq:dynamic_inverse} \end{equation} We use \begin{equation} {\cal E} \left[{\boldsymbol y}_{t}, {\cal F} \left({\boldsymbol \beta}_t \right) \right] = \frac{1}{2} \| {\boldsymbol y}_{t} - {\cal F} \left({\boldsymbol \beta}_t \right) \|^2_2 \;. \label{eq:Et} \end{equation} Eq.~(\ref{eq:dynamic_inverse}) can be solved efficiently by gradient-based methods. Towards this, let us approximate the gradient of the cost being minimized in Eq.~(\ref{eq:dynamic_inverse}) by \begin{equation} \frac{\partial } {\partial {\boldsymbol \beta}_t} \sum_{t'\in {\cal T}} {\cal E} \left[{\boldsymbol y}_{t'}, {\cal F} \left({\boldsymbol \beta}_{t'} \right) \right] \approx \frac{\partial } {\partial {\boldsymbol \beta}^{\rm hidden}_t} \sum_{t'\in {\cal T}} {\cal E} \left[{\boldsymbol y}_{t'}, {\cal F} \left({\boldsymbol \beta}_{t'} \right) \right] \;. \label{eq:smigrad} \end{equation} Note that \begin{eqnarray} \frac{\partial } {\partial {\boldsymbol \beta}^{\rm hidden}_t} \sum_{t'\in {\cal T}} {\cal E} \left[{\boldsymbol y}_{t'}, {\cal F} \left({\boldsymbol \beta}_{t'} \right) \right] = ~~~~~~~~~~~~~~~~~~~~~ \nonumber \\ ~~ \sum_{t'\in {\cal T}} \frac{\partial {\cal E} \left[{\boldsymbol y}_{t'}, {\cal F} \left({\boldsymbol \beta}_{t'} \right) \right] } {\partial {\boldsymbol \beta}_{t'}} \frac{\partial {\boldsymbol \beta}_{t'}} {\partial {\boldsymbol \beta}^{\rm hidden}_t} \;. 
\label{eq:grandgrad} \end{eqnarray} From Eq.~(\ref{eq:Et}), \begin{equation} \frac{\partial {\cal E} \left[{\boldsymbol y}_{t'}, {\cal F} \left({\boldsymbol \beta}_{t'} \right) \right] } {\partial {\boldsymbol \beta}_{t'}} = \left[ {\cal F} \left({\boldsymbol \beta}_{t'} \right) - {\boldsymbol y}_{t'} \right] \frac{\partial {\cal F} \left({\boldsymbol \beta}_{t'} \right) } {\partial {\boldsymbol \beta}_{t'}}, \label{eq:derivative_eps_t} \end{equation} while from Eq.~(\ref{eq:beta_t_linear_comb2}), \begin{equation} \frac{\partial {\boldsymbol \beta}_{t'}} {\partial {\boldsymbol \beta}^{\rm hidden}_t} =w_{t'}(t) \;. \label{eq:dbetadbeta} \end{equation} Define the set of medium density fields at all sampled times ${\cal B}= \{ {\boldsymbol \beta}_{t'}\}_{t'\in {\cal T}} $. From Eqs.~(\ref{eq:smigrad},\ref{eq:grandgrad},\ref{eq:derivative_eps_t},\ref{eq:dbetadbeta}), for optimizing the field at time $t$, the approximate gradient is \begin{equation} {\bf g}_t({\cal B}) = \sum_{t'\in {\cal T}} w_{t'}(t) \left[ {\cal F} \left({\boldsymbol \beta}_{t'} \right) - {\boldsymbol y}_{t'} \right] \frac{\partial {\cal F} \left({\boldsymbol \beta}_{t'} \right) } {\partial {\boldsymbol \beta}_{t'}} \;. \label{eq:pgrad} \end{equation} A gradient-based approach then performs per iteration $k$: \begin{equation} {\boldsymbol \beta}_{t}(k+1)= {\boldsymbol \beta}_{t}(k)-\alpha {\bf g}_t({\cal B}_k) \label{eq:generic_gradient_descent} \end{equation} where $\alpha$ is a step size and ${\cal B}_k= \{ {\boldsymbol \beta}_{t'}(k) \}_{t'\in {\cal T}} $. The approach in Eqs.~(\ref{eq:dynamic_inverse}-\ref{eq:generic_gradient_descent}) is not specific to scattering CT. The formulation {\em can apply generically to other inverse problems} where data is acquired sequentially while the object evolves, and the forward model ${\cal F}$ is known and differentiable. In case of scattering CT, ${\cal F}$ is discussed in Sec.~\ref{sec:theory}. 
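The weighted-gradient update of Eqs.~(\ref{eq:pgrad})--(\ref{eq:generic_gradient_descent}) can be sketched with a toy {\em linear} forward model ${\cal F}({\boldsymbol \beta})=A{\boldsymbol \beta}$, whose Jacobian is simply $A$ (the scattering Jacobian approximations of~\cite{levis2015airborne,loeub2020monotonicity} are beyond this sketch); all names and sizes are illustrative assumptions.

```python
import numpy as np

# Sketch of the weighted gradient g_t and the descent step, with a toy
# *linear* forward model F(beta) = A @ beta whose Jacobian is A. The paper's
# scattering forward model uses approximated Jacobians instead; A, sigma,
# alpha and the iteration count here are illustrative assumptions.
rng = np.random.default_rng(3)
n_vox, n_meas = 20, 80
sample_times = np.arange(0.0, 50.0, 10.0)       # T: 5 states, 10 sec apart
n_states = sample_times.size
A = rng.normal(size=(n_meas, n_vox))            # toy Jacobian
sigma = 5.0                                     # illustrative correlation time

def weights(t, times, sigma):
    w = np.exp(-np.abs(t - times) ** 2 / (2.0 * sigma ** 2))
    return w / w.sum()

W = np.stack([weights(t, sample_times, sigma) for t in sample_times])  # W[t, t']

beta_true = rng.random((n_states, n_vox))
y = beta_true @ A.T                             # measurements y_t per state

beta = np.zeros((n_states, n_vox))
alpha = 0.5 / np.linalg.norm(A, 2) ** 2         # step size
for _ in range(3000):
    residuals = beta @ A.T - y                  # F(beta_t') - y_t', all t'
    backproj = residuals @ A                    # Jacobian^T times residual
    g = W.T @ backproj                          # g_t = sum_t' w_t'(t) * backproj_t'
    beta -= alpha * g
```

Since all residuals vanish at the true fields, the true solution is a fixed point of this iteration; the Gaussian weighting only couples the per-state gradients.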
Computing the Jacobian $ \partial {\cal F} \left({\boldsymbol \beta}_{t'} \right) / \partial {\boldsymbol \beta}_{t'}$ is then complex. However, there are approximations to the Jacobian of 3D RT, which can be computed efficiently~\cite{levis2015airborne,loeub2020monotonicity}, making the recovery tractable. The complexity of solving Eq.~(\ref{eq:dynamic_inverse}) is similar to that of 3D static CT by Eq.~(\ref{eq:inverse}). We performed optimization using an L-BFGS-B solver~\cite{zhu1997algorithm}. Following~\cite{levis2017multiple}, prior to the optimization process, the set of voxels to estimate is bounded using space-carving~\cite{kutulakos2000theory}. Space-carving bounds a 3D shape by back-projecting multi-view images. A voxel is labeled as belonging to the object if the number of back-projected rays that intersect this voxel is greater than a threshold. To adapt this approach to our dynamic framework, the shape was estimated on a coarse spatial grid, using a low threshold for labeling voxels as potentially part of the cloud. \section{Simulations} \label{sec:Simulations} We test the proposed method on clouds. The atmosphere is a scattering medium which changes continuously. It includes several types of scattering particles, including water droplets, aerosols and molecules. Scattering by water droplets is usually much more dominant and spatiotemporally variable than scattering by aerosols, hence we focus on the former. Scattering by air molecules does not require imaging: it follows Rayleigh theory. Molecular density changes mainly with height and is usually known globally using non-imaging sensors. Thus, the evolving concentration of cloud water droplets is the main unknown we sense and seek. \subsection{Cloud Simulation} \label{sec:CloudFieldSimulation} For realistic complexity, we use a rigorous simulation based on cloud physics, including evolution of the cloud microphysics. 
Clouds are simulated using the System of Atmospheric Modeling (SAM)~\cite{khairoutdinov2003cloud}, which is a non-hydrostatic, anelastic large eddy simulator (LES)~\cite{neggers2003size,xue2006large,heus2009statistical}. It describes the turbulent atmosphere using equations of momentum, temperature, water mass balance and continuity. This model was coupled to a spectral (bin) microphysical model (HUJI SBM)~\cite{khain2004simulation,fan2009ice} of the droplets' size. It propagates the evolution of the droplets' size distribution by solving the equations for nucleation, diffusional growth, collision-coalescence and break-up. This is done on a logarithmic grid of 33 bins, spanning $[2 \mu {\rm m},3.2 {\rm mm}]$. The simulation runs according to the BOMEX case~\cite{siebesma2003large} of trade wind cumulus clouds near Barbados. Humidity and potential temperature profiles are used as initial conditions, while the surface fluxes and large-scale forcing are constant. The mean horizontal background wind is zero. The horizontal boundary condition is cyclic. The domain is 5.12km long (cloud diameter is $\approx 800{\rm m}$) at 10m resolution. Vertically, the resolution is 10m from sea level to 3km, and then the resolution coarsens to 50m. The cloud top reaches 2km. The simulation spans an hour, of which 30 minutes includes the cloud's lifetime. The temporal resolution is 0.5sec. We present results using two different time-varying clouds: {\em Cloud~(i)} has size $43\times 30 \times 45$ voxels (See Fig.~\ref{fig:sim_cloud1_3dvolumes}). {\em Cloud~(ii)} has size $60\times 40 \times 45$ voxels (See Fig.~\ref{fig:sim_cloud2_3dvolumes}). The voxel resolution is $10{\rm m}\times 10{\rm m}\times 10{\rm m}$. \begin{figure}[t] \centering \includegraphics[width=1\linewidth]{figures/cloud1_plots.pdf} \vspace{-0.3cm} \caption{{\em Cloud~(i)}. 
Results of recovery by the {\tt Baseline} and {\tt Setup A} are compared to the ground-truth by a 3D presentation and scatter plots that use 20\% of the data points, randomly selected for display clarity.} \label{fig:sim_cloud1_3dvolumes} \end{figure} \begin{figure} \centering \includegraphics[width=1\linewidth]{figures/cloud2_plots.pdf} \vspace{-0.3cm} \caption{{\em Cloud~(ii)}. Results of recovery by the {\tt Baseline} and {\tt Setup A} are compared to the ground-truth.} \label{fig:sim_cloud2_3dvolumes} \end{figure} \subsection{Rendered Measurements} \label{sec:MeasurementsRendering} \begin{figure} \centering \includegraphics[width=1\linewidth]{figures/sat_illustration.pdf} \vspace{-0.3cm} \caption{Illustration of {\tt Setup A}.} \label{fig:sat_illustration} \end{figure} Using the time-varying size distribution of the cloud droplets, Mie theory~\cite{bohren2008absorption} provides the spatiotemporal extinction field ${\cal B}= \{ {\boldsymbol \beta}_{t'}\}_{t'\in {\cal T}}$ and scattering phase function. The scene is irradiated by the sun, whose illumination angle changes in time, relative to the Earth's coordinates, while cameras overfly the evolving cloud. The solar trajectory in Earth coordinates corresponds to Feb/03/2013 at 13:54:30 - 14:01:00 local time, around 38N 123W. We tested several types of imaging setups:\\ \noindent {\tt Setup A}: Three satellites orbit at $500{\rm km}$ altitude, one after the other. Their velocity is $7.35{\rm km}/{\rm sec}$. The orbital arc-length between nearest-neighboring satellites is $500{\rm km}$. At the mid-time of the simulation, $t=(t_1+t_{N^{\rm state}})/2$, the setup is symmetric around the nadir direction. Then, the setup spans an angular range of $114^\circ$. Each satellite carries a perspective camera. The camera resolution is such that at nadir view, a pixel corresponds to $10{\rm m}$ at sea level. Images are taken every $10{\rm sec}$ over a span of $60{\rm sec}$, i.e. $N^{\rm state}=7$. 
This setup is illustrated in Fig.~\ref{fig:sat_illustration}.\\ \noindent {\tt Baseline}: The baseline uses all the accumulated 21 viewpoints of {\tt Setup A}. However, all viewpoints here have perspective cameras that {\em simultaneously} acquire the cloud. In other words, this baseline is not prone to errors that stem from temporal sampling. The baseline is used for recovery only at time $t=(t_1+t_{N^{\rm state}})/2$.\\ \noindent {\tt Setup B}: This setup is similar to {\tt Setup A}, but it uses only two satellites. Thus, at the mid-time of the simulation, the setup spans a $57^\circ$ angular range. \\ \noindent {\tt Setup C}: A single camera, similar to the Airborne Multi-angle Spectro-Polarimeter Imager (AirMSPI)~\cite{diner2013airborne}, mounted on an aircraft flying $154^\circ$ relative to North at $20{\rm km}$ altitude. Imaging has a pushbroom scan geometry, having $10{\rm m}$ spatial resolution at nadir view and a $\lambda=660{\rm nm}$ wavelength band. AirMSPI scans view angles in a step-and-stare mode~\cite{diner2013airborne}. Based on the AirMSPI PODEX campaign~\cite{diner2013airbornePODEX}, we set 21 viewing angles along-track: $\pm65^\circ,\ \pm62^\circ,\ \pm58^\circ,\ \pm54^\circ,\ \pm50^\circ,\ \pm44^\circ,\ \pm38^\circ,\ \pm30^\circ,\\ \pm21^\circ,\pm11^\circ$ off-nadir and $0^\circ$ (nadir). For example, three sample angles are illustrated in Fig.~\ref{fig:single_platform_illustration}. \begin{figure}[t] \centering \includegraphics[width=1\linewidth]{figures/single_platform_illustration_ocean} \vspace{-0.3cm} \caption{Illustration of {\tt Setup C}. A domain is viewed at 21 pushbroom angles, sequentially.} \label{fig:single_platform_illustration} \end{figure} It takes $\approx 1{\rm sec}$ to scan a cloud domain in any single view angle, during which the cloud and solar directions are assumed constant. 
Dynamics are noticeable {\em between} view angles.\\ A spherical harmonic discrete ordinate method (SHDOM) code~\cite{evans2003improvements} provides the numerical forward model ${\cal F}$. Simulated measurements $\{ {\boldsymbol y}_{t} \}_{t\in {\cal T}}$ include noise. The noise model follows the AirMSPI sensor parameters~\cite{diner2013airborne,van2018calibration}. There, the sensor full-well depth is 200,000 photo-electrons, readout noise has a standard deviation of 20 electrons, and the overall readout is quantized to 9 bits. \subsection{Tomography Results} \label{sec:NumericalResults} The rendered and noisy images served as input to 4D tomographic reconstruction. The recovery grid was set to $10 {\rm m}\times 10 {\rm m}$ horizontal and $25 {\rm m}$ vertical spatial resolution, at $10 {\rm sec}$ temporal resolution. For parallelization, optimization ran on a computer cluster, where each computer core was dedicated to rendering a modeled image from a distinct angle. The optimization was initialized by $\{{\boldsymbol \beta}_t \}_{t \in {\cal T}} =1$km$^{-1}$. Convergence was reached in several dozen iterations. Depending on the number of input images, it took between minutes and a couple of hours to complete, in total. From Sec.~\ref{sec:FrequencyAnalysis}, we assess that a value $\sigma\sim 20{\rm sec}$ is natural. Indeed, this is supported numerically in the plots of $\varepsilon_t,\delta_t,\varepsilon,\delta$ for {\em Cloud~(i)} (Fig.~\ref{fig:error_sigma}). \begin{figure}[t] \centering \includegraphics[width=0.8\linewidth]{figures/eps_delta.pdf} \caption{{\em Cloud~(i)}. The criteria of Eq.~(\ref{eq:MassError}) are marked by colored circles, whose saturation decays the farther the sampling time is from $(t_1+t_{N^{\rm state}})/2$. The criteria in Eq.~(\ref{eq:epsilin}) are marked by solid or dashed lines, with corresponding colors. The setting $\sigma=\infty$ refers to the solution by the state of the art, i.e. 
3D static scattering tomography.} \vspace{-0.3cm} \label{fig:error_sigma} \end{figure} Analogous plots for {\em Cloud~(ii)} are presented in the Appendix. The 3D tomographic results using {\tt Setup A} are shown in Figs.~\ref{fig:sim_cloud1_3dvolumes} and~\ref{fig:sim_cloud2_3dvolumes}, corresponding to {\em Cloud~(i)} and {\em Cloud~(ii)}. Both illustrate the state at $t=(t_1+t_{N^{\rm state}})/2$. Recovery used $\sigma=20$sec. More results, particularly relating to {\tt Setup B}, are shown in the Appendix. {\tt Setup C} uses a single platform, which is challenging. Results depend significantly on how fast the aircraft flies, i.e. how long it takes to capture the cloud from a variety of angles (up to 21 angles). Fig.~\ref{fig:single_platform_errors} compares the results of the recovery at $t=(t_1+t_{N^{\rm state}})/2$ for inter-angle time intervals of 5sec, 10sec and 20sec. As expected, quality ($\varepsilon$) improves with velocity. Moreover, if the camera moves slowly (long time interval between angular samples), results improve by using a longer temporal support, observing the cloud from a wider angular range, despite its dynamics. \begin{figure}[t] \centering \includegraphics[width=0.8\linewidth]{figures/single_platform_results.pdf} \vspace{-0.3cm} \caption{{\tt Setup C}. Error measures~(\ref{eq:MassError}) of {\em Cloud~(i)} at time $t=(t_1+t_{N^{\rm state}})/2$, for different acquisition inter-angular temporal intervals. The setting $\sigma=\infty$ refers to the solution by the state of the art, i.e. 3D static scattering tomography.} \label{fig:single_platform_errors} \end{figure} \section{Experiment: Real World AirMSPI Data} \label{sec:ResltdAirMSPI} We follow the experimental approach of~\cite{levis2015airborne}, and use real-world data acquired by JPL's AirMSPI, which flies on board NASA’s ER-2. The geometry is exactly as described in {\tt Setup C} in Sec.~\ref{sec:MeasurementsRendering}, including location and time. 
We examine an atmospheric domain of size $1.5{\rm km}\times 2{\rm km}\times 2{\rm km}$ in the East-North-Up coordinates. We discretized the domain to $80\times 80\times 80$ voxels. Because $N^{\rm state}=21$, the total number of unknowns is 10,752,000. The inter-angle time interval in this experiment is around 20sec. Based on Fig.~\ref{fig:single_platform_errors}, we set $\sigma = 60{\rm sec}$ here. We want to focus on dynamic tomography of the evolving cloud, and not on global motion due to wind in the cloud field. Hence, we used the pre-processing approach of~\cite{levis2015airborne} to align the cloud images. Additionally, the ground albedo is estimated to be 0.04. The pre-processing and albedo estimation are described in the Appendix. A recovered volumetric reconstruction for one time instant is displayed in Fig.~\ref{fig:airmspi_results}. We have no ground-truth for the cloud content in this case. Hence we check for consistency using cross-validation. For this, we excluded the nadir image (Fig.~\ref{fig:airmspi_results}b) from the recovery process. Thus tomography used 20 out of the 21 raw views. Afterward, we placed the recovered cloud in SHDOM physics-based rendering~\cite{evans2003improvements}, to generate the missing nadir view. The result is then compared to the held-out raw view. Fig.~\ref{fig:airmspi_results} compares the result of this process for two solutions: our 4D tomographic solution, and the state-of-the-art, i.e., 3D static scattering tomography. \begin{figure}[t] \centering \includegraphics[width=1\linewidth]{figures/vol_scatter_plots_images.pdf} \vspace{-0.3cm} \caption{Recovered 3D extinction field using real data (a). A raw AirMSPI nadir image (b). Corresponding rendered views of the cloud, which was estimated using data that excluded the nadir view, either by our 4D CT approach (c) or by current static 3D CT (d). Gamma correction was applied on (b,c,d) for display clarity. (e) A scatter plot of rendered vs.
raw AirMSPI images at nadir.} \label{fig:airmspi_results} \end{figure} The same cross-validation process was repeated for the $\pm54^\circ$ view angles. Quantitatively, we measure the fitting error using Eq.~(\ref{eq:Et}). The results are summarized in Table~\ref{tab:airmspi}. \begin{table}[t] \begin{center} \begin{tabular}{|l|c|c|c|} \hline &$+54^\circ$ view & nadir view & $-54^\circ$ view \\ \hline\hline Ours & 0.96 & 0.38 & 0.24 \\ Static solution & 1.73 & 0.94 & 0.61 \\ \hline \end{tabular} \end{center} \vspace{-0.3cm} \caption{Analysis of empirical data in different view angles. Quantitative fit (\ref{eq:Et}) of our 4D result to the data, compared to the error of state-of-the-art static 3D CT.} \label{tab:airmspi} \end{table} \section{Discussion} \label{sec:discuss} We derive a framework for 4D CT of dynamic objects that scatter, using moving cameras. The natural temporal evolution of an object indicates the temporal and angular sampling needed for a good reconstruction. Given these conditions, 4D CT recovery can be done, even with a small number of cameras. We believe that our approach can be relevant in additional tomographic setups~\cite{gumbel2020mats} that rely on radiative transfer. Some elements of this work are generic, beyond scattering CT. Thus, it is worth applying the approach to other tomographic modalities. Our findings can significantly benefit various research fields, including bio-medical CT, flow imaging and atmospheric sciences. \section*{Acknowledgment} We are grateful to Aviad Levis and Jesse Loveridge for the pySHDOM code and for being responsive to questions about it. We are grateful to Vadim Holodovsky and Omer Shubi for helping to set up elements of the code. We thank the following people: Ilan Koren, Orit Altaratz and Yael Sde-Chen for useful discussions and good advice; Danny Rosenfeld for pointing out Ref.~\cite{fujita1986mesoscale} to us; Johanan Erez, Ina Talmon and Daniel Yagodin for technical support.
Yoav Schechner is the Mark and Diane Seiden Chair in Science at the Technion. He is a Landau Fellow, supported by the Taub Foundation. His work was conducted in the Ollendorff Minerva Center. Minerva is funded through the BMBF. This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 810370-ERC-CloudCT. \end{document} \section*{Appendices} We now present several appendices. Appendix~\ref{sec:preprocessing} elaborates on the pre-processing applied to the real-world measurements presented in Sec.~\ref{sec:ResltdAirMSPI}. This data was collected by the AirMSPI instrument. Appendix~\ref{sec:simulations} provides an additional example of the bandwidth of the power spectrum of a cloud and more simulation results. Appendix~\ref{sec:complexity} analyzes the computational complexity of the method. \section{Pre-processing Real World Data} \label{sec:preprocessing} Sec.~\ref{sec:ResltdAirMSPI} presents results using real world measurements. The data were acquired by the AirMSPI instrument. As explained in Sec.~\ref{sec:ResltdAirMSPI}, while AirMSPI flies, clouds move due to wide-scale wind at their altitude. The geometry of AirMSPI's path and the cloud drift during the experiment are presented in Fig.~\ref{fig:airmspi_geometry}. \begin{figure}[t] \centering \includegraphics[width=1\linewidth]{figures_sup/airmspi_geometry.pdf} \caption{{\small Geometry of the AirMSPI real world setup which led to the data presented in Sec.~\ref{sec:ResltdAirMSPI}. Color represents the different time states of the cloud and AirMSPI instrument locations. The cloud's outer contour and its corresponding center of mass, marked by a circle, are presented per state. The AirMSPI location and velocity are marked by arrows. The arrows point in the AirMSPI flight direction, at azimuth $154^\circ$ relative to the North. Due to the domain size, not all AirMSPI locations are illustrated here.
Due to wind, the cloud moves at 57km/h in azimuth $182^\circ$ relative to the North. } } \label{fig:airmspi_geometry} \end{figure} To eliminate the influence of wide-scale wind, the cloud images are registered. Moreover, tomographic recovery requires an assessment of the Earth's surface albedo under the clouds. This section describes how pre-processing estimates the wind and albedo. \subsection{Wind Estimation} \label{sec:WindEstimation} Clouds are segmented from the surface automatically~\cite{velasco1979thresholding}. Cloudy pixels are then used to estimate the cloud center of mass in each image~\cite{levis2015airborne}. A registration of these centers of mass can be done by triangulation. However, triangulation of images of a moving object using a translating camera has an inherent ambiguity. This ambiguity can be solved if the cloud height is known. In this work, we assess the altitude of a cloud by its shadow~\cite{abrams2013episolar,liasis2016satellite,hatzitheodorou1988optimal}. Let $(x^{\rm cl}, y^{\rm cl}, z^{\rm cl})$ and $(x^{\rm shad}, y^{\rm shad}, 0)$ be a point in a cloud and its corresponding shadow point on the Earth's surface, respectively (see Fig.~\ref{fig:cloud_height}). \begin{figure}[t] \centering \includegraphics[width=1\linewidth]{figures_sup/cloud_height_illustration.pdf} \caption{Illustration of estimation of cloud altitude using its shadow.} \label{fig:cloud_height} \end{figure} Let $r^{\rm shad}=\sqrt{(x^{\rm cl} - x^{\rm shad})^2+(y^{\rm cl} - y^{\rm shad})^2}$. We obtain $x^{\rm cl}, y^{\rm cl}, x^{\rm shad}$ and $y^{\rm shad}$ from the AirMSPI images. Given the solar zenith angle $\theta^{\rm sun}$, measured relative to the nadir, the altitude $z^{\rm cl}$ satisfies \begin{equation} z^{\rm cl} = \frac{r^{\rm shad}}{\tan(\theta^{\rm sun})} \;.
\label{eq:cloud_height} \end{equation} For the example shown in Sec.~\ref{sec:ResltdAirMSPI}, we estimated the cloud base height as $\approx$500m and its top at $\approx$1100m. Indeed, MODIS/AQUA~\cite{NASAsite} retrievals of cloud-top heights indicate that cloud tops in the region\footnote{This data applies over the coast of California, 38N 122W, on Feb/03/2013 at 13:30 local time.} do not exceed 1000m, which makes our approximation reasonable. We approximate the cloud horizontal velocity by projecting the images from the locations of the camera to the altitude $z^{\rm cl}$. From the center of mass of these images, we assess the velocity. We register the camera locations so the projections of the center of mass of all images intersect at the same point at the altitude $z^{\rm cl}$. The images and the registered camera locations are the input for 4D recovery. \subsection{Surface Albedo Estimation} \label{sec:albedoest} 3D radiative transfer calculations require the surface albedo. We use non-cloudy pixels to estimate the albedo. Let ${\cal Y}$ be a set of non-cloudy pixels. We estimate the surface albedo $a$ as \begin{equation} {\hat a} = \arg\! \min_a \sum_{y\in{\cal Y}} ||y-{\cal F}(\beta^{\rm air};a)||_2^2 \;. \label{eq:albedo} \end{equation} Here ${\cal F}(\beta^{\rm air};a)$ is a rendering (forward) model where the surface albedo is set to be $a$ and the atmospheric medium contains no clouds. That is, sunlight interacts only with the air and the surface. Scattering by air is assumed to be known~\cite{gordon1988exact,wang2005refinement}. The optimization problem is solved by the Brent minimization method~\cite{brent2013algorithms}, implemented by the SciPy package~\cite{scipy}. For the example shown in Sec.~\ref{sec:ResltdAirMSPI}, the surface albedo is estimated to be 0.04.
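The albedo fit of Eq.~(\ref{eq:albedo}) is a one-dimensional least-squares problem, so any bracketing scalar line search suffices. The sketch below is illustrative only: a golden-section search stands in for SciPy's Brent minimizer, and `toy_forward` is a hypothetical affine clear-sky model, not the SHDOM-based ${\cal F}(\beta^{\rm air};a)$.

```python
import numpy as np

def fit_albedo(pixels, forward, lo=0.0, hi=1.0, tol=1e-6):
    # Scalar least-squares fit of the surface albedo `a`.
    # Golden-section search; a stand-in for SciPy's Brent minimizer.
    cost = lambda a: np.sum((np.asarray(pixels) - forward(a)) ** 2)
    gr = (np.sqrt(5.0) - 1.0) / 2.0
    a, b = lo, hi
    c, d = b - gr * (b - a), a + gr * (b - a)
    while b - a > tol:
        if cost(c) < cost(d):
            b, d = d, c
            c = b - gr * (b - a)
        else:
            a, c = c, d
            d = a + gr * (b - a)
    return 0.5 * (a + b)

# Hypothetical clear-sky forward model: radiance affine in albedo.
toy_forward = lambda a: 0.10 + 0.55 * a
observed = toy_forward(0.04)  # noiseless toy "measurements"
a_hat = fit_albedo(observed, toy_forward)
```

With noiseless toy measurements generated at $a=0.04$, the search recovers the albedo to within the bracketing tolerance; a real fit would sum Eq.~(\ref{eq:albedo}) over all non-cloudy pixels ${\cal Y}$.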
\section{Additional Simulations} \label{sec:simulations} \subsection{Cloud Temporal Spectrum} \label{sec:spectrum} Sec.~\ref{sec:FrequencyAnalysis} indicates that the temporal power spectrum of a convective cloud at 10m resolution is effectively band-limited. Thus, a temporal sampling period of 25[sec] or shorter is required. We assess this in an additional cloud simulation. We conducted a single-cloud simulation at high resolution, with small changes relative to the simulation described in Sec.~\ref{sec:CloudFieldSimulation}. The simulation parameters and settings are similar. However, the perturbation that initiates the convection and turbulent flow has a smaller horizontal size. This creates a smaller cloud with a horizontal width of $\approx$400m. This cloud is more sensitive to mixing and evaporation than the cloud in Sec.~\ref{sec:CloudFieldSimulation}, whose width is $\approx$800m. Because mixing with the environment is more intense here, the cloud's growth is inhibited. It cannot exceed a height of 1400m, compared to the 2000m ceiling of the cloud in Sec.~\ref{sec:CloudFieldSimulation}. The temporal power spectrum of the cloud, computed by the same process described in Sec.~\ref{sec:FrequencyAnalysis}, is presented in Fig.~\ref{fig:stft_sup}. \begin{figure}[t] \centering \includegraphics[width=1\linewidth]{figures_sup/stft.pdf} \caption{A cutoff frequency $\approx 1/70 {\rm [Hz]}$, within which 95\% of the signal power is contained, is marked in red.} \label{fig:stft_sup} \end{figure} The cutoff frequency is $\approx1/70$[Hz], and it is not sensitive to the evolving stages of the cloud. Here the required temporal sampling period is 35[sec] or shorter. This requirement is more lenient than the temporal sampling period required in Sec.~\ref{sec:FrequencyAnalysis}.
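The cutoff criterion can be sketched numerically. The sketch below assumes a plain periodogram of a single voxel (or integrated) time series; the 95\% power fraction follows the paper, while the use of a one-shot FFT instead of a short-time transform is a simplification.

```python
import numpy as np

def power_cutoff_frequency(signal, dt, fraction=0.95):
    # Smallest frequency below which `fraction` of the power is contained.
    power = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=dt)
    cdf = np.cumsum(power) / np.sum(power)
    return freqs[np.searchsorted(cdf, fraction)]

# A pure 1/70 Hz oscillation, sampled at 1 Hz for 700 sec:
t = np.arange(700.0)
f_c = power_cutoff_frequency(np.sin(2 * np.pi * t / 70.0), dt=1.0)
nyquist_period = 1.0 / (2.0 * f_c)  # Nyquist sampling requirement, in sec
```

For a signal whose power is concentrated below $1/70$[Hz], this yields a 35[sec] sampling requirement, matching the text.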
\subsection{Additional Tomography Results} \label{sec:results} Recall that our method is demonstrated on two simulated clouds, {\em Cloud~(i)} and {\em Cloud~(ii)}, using several types of imaging setups: {\tt Setup A}, {\tt Setup B} and {\tt Baseline}. Moreover, recall the error criteria of Eqs.~(\ref{eq:MassError},\ref{eq:epsilin}). Fig.~\ref{fig:error_sigma_sup} shows $\varepsilon_t,\delta_t,\varepsilon,\delta$ for {\em Cloud~(ii)}. \begin{figure}[th!] \centering \includegraphics[width=0.85\linewidth]{figures/eps_delta.pdf} \caption{{\small {\em Cloud~(ii)}. The criteria of Eq.~(\ref{eq:MassError}) are marked by colored circles, whose saturation decays the farther the sampling time is from $(t_1+t_{N^{\rm state}})/2$. The criteria in Eq.~(\ref{eq:epsilin}) are marked by solid or dashed lines, with corresponding colors. The setting $\sigma=\infty$ refers to the solution by the state of the art, i.e. 3D static scattering tomography.}} \label{fig:error_sigma_sup} \end{figure} This reinforces the assessment that a value $\sigma\sim 20{\rm sec}$ is natural, as explained in Sec.~\ref{sec:FrequencyAnalysis}. \begin{figure*}[t] \def\svgwidth{1\linewidth} \centering \input{cloud1_eps_cut_section.pdf_tex} \caption{{\em Cloud~(i)}. 3D cut-sections of the error $|{ \beta}^{\rm true}_t({\bf x}) - {\hat { \beta}}_t({\bf x})|$ at $t=(t_1+t_{N^{\rm state}})/2$ for {\tt Baseline}, {\tt Setup A} and {\tt Setup B}.} \vspace{0.3cm} \label{fig:cloud1_sctions} \end{figure*} \begin{figure*}[t] \def\svgwidth{1\linewidth} \centering \input{cloud2_eps_cut_section.pdf_tex} \caption{{\em Cloud~(ii)} comparison for {\tt Baseline}, {\tt Setup A} and {\tt Setup B}. [Top] 3D cut-sections of the error $|{ \beta}^{\rm true}_t({\bf x}) - {\hat { \beta}}_t({\bf x})|$ at $t=(t_1+t_{N^{\rm state}})/2$.
[Bottom] Scatter plots that use randomly selected 20\% of the data points, for display clarity.} \vspace{0.3cm} \label{fig:cloud2_sctions} \end{figure*} Figs.~\ref{fig:cloud1_sctions} and~\ref{fig:cloud2_sctions} respectively visualize the results of {\em Cloud~(i)} and {\em Cloud~(ii)}. The 3D cut-sections of the error $|{\beta}^{\rm true}_t({\bf x}) - {\hat { \beta}}_t({\bf x})|$ at $t=(t_1+t_{N^{\rm state}})/2$ are presented for {\tt Setup A}, {\tt Setup B} and {\tt Baseline} in Figs.~\ref{fig:cloud1_sctions} and~\ref{fig:cloud2_sctions}[Top]. Fig.~\ref{fig:cloud2_sctions}[Bottom] uses scatter plots to compare the ground-truth to the results obtained by either the {\tt Baseline}, {\tt Setup A} or {\tt Setup B}. \section{Computational Complexity} \label{sec:complexity} The time complexity for solving the 4D CT inverse problem (Eq.~\ref{eq:dynamic_inverse}) is governed by the gradient calculation. Recall the formulation of the approximate gradient, \begin{equation} {\bf g}_t({\cal B}) = \sum_{t'\in {\cal T}} w_{t'}(t) \left[ {\cal F} \left({\boldsymbol \beta}_{t'} \right) - {\boldsymbol y}_{t'} \right] \frac{\partial {\cal F} \left({\boldsymbol \beta}_{t'} \right) } {\partial {\boldsymbol \beta}_{t'}} \;. \label{eq:pgrad_sup} \end{equation} Computing the Jacobian $ \partial {\cal F} \left({\boldsymbol \beta}_{t'} \right) / \partial {\boldsymbol \beta}_{t'}$ is computationally demanding; thus, it is approximated numerically by a surrogate function that evolves through iterations~\cite{levis2015airborne,loeub2020monotonicity}. Calculating the gradient includes two dominant time-consuming processes that are executed in alternation. The first process calculates the forward model for the $N^{\rm state}$ cloud states $\left\{ {\cal F} \left( {{\boldsymbol \beta}}_{t'} \right) \right\}_{t' \in {\cal T}}$. The second process sums over the entire set of measurements, which does not depend on the number of cloud states that we seek to recover.
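In code form, the gradient of Eq.~(\ref{eq:pgrad_sup}) is a temporally weighted sum of residual back-projections. The following minimal sketch assumes Gaussian weights $w_{t'}(t)$ of width $\sigma$ (an illustrative assumption about the weighting) and uses toy residuals and Jacobians in place of the SHDOM-based surrogates.

```python
import numpy as np

def temporal_weights(t, sample_times, sigma):
    # Gaussian weights w_{t'}(t), normalized to sum to 1. As sigma -> inf,
    # the weights become uniform, recovering the static solution.
    w = np.exp(-0.5 * ((np.asarray(sample_times) - t) / sigma) ** 2)
    return w / w.sum()

def approx_gradient(t, sample_times, sigma, residuals, jacobians):
    # g_t = sum_{t'} w_{t'}(t) J_{t'}^T [F(beta_{t'}) - y_{t'}]
    w = temporal_weights(t, sample_times, sigma)
    return sum(wi * (J.T @ r) for wi, r, J in zip(w, residuals, jacobians))

# Toy example: three states, 2 pixels and 3 voxels each.
times = [0.0, 10.0, 20.0]
res = [np.ones(2) for _ in times]
jac = [np.eye(2, 3) for _ in times]
g = approx_gradient(10.0, times, sigma=1e9, residuals=res, jacobians=jac)
```

Each term is independent of the others, which is what allows the per-state forward models (and hence the per-state terms of the sum) to be computed in parallel.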
A spherical harmonic discrete ordinate method (SHDOM) code is used for computing the numerical forward model ${\cal F}(\cdot)$ and the Jacobian. SHDOM iteratively updates the estimate of the 3D radiation fields until convergence. Calculating the forward model for the $N^{\rm state}$ cloud states can be done in parallel. Thus, the time complexity is governed by the temporal state for which the SHDOM code takes the longest time to compute the forward model. By calculating the forward model for all cloud states in parallel, the time complexity of gradient calculation is insensitive to the number of cloud states $N^{\rm state}$. As a numerical example, we used 20 iterations of the L-BFGS-B optimization. Using measurements of {\em Cloud~(i)} acquired by {\tt Setup A}, the run-time of the solution by our method was 501[sec]. The static solution took 301[sec]. In both cases, the computer was an Intel\textregistered \ Xeon\textregistered \ Gold 6240 CPU @ 2.60GHz with 72 cores. Although our method recovers $N^{\rm state}=7$ times more voxels, the run-time is less than twice that of the static solution. The time difference is caused by overheads of saving and loading larger data with our method, and non-optimal task division among the cores. {\small \bibliographystyle{ieee_fullname}
\section{An infinite sequence of networks} \label{app:networks} To protect our intellectual property, the construction is encrypted by a private secret key. After discovering the cipher, all details are here: \medskip\noindent ePC2 26lWMqx HOi9q2k DiP3XAq qEr c Auxn2A SnIbgpyM9k cNh18cNwnzxl IXd DI jNdNJ WN4z 5kzVfQyjxxGWpOtI SbPpLByxLmR x5R86IMKlZu CQC wX5rc4flvdhAZgVv mQ8M1baGj AdvRv 0KAUy4h47s qr0Q8 9fyD 64dIDfDtbo p OFnq D XP jU0 akBXbmb TkCDEgkLVJ W58pKCVN5 99LCS9Xlpnas0iJR CFTRc mpVZL8Tu 3v6v i6k XQxrGV 52l E253mcD ArWAMo iEZfzkTRd3AMHh AO4Jye 9BDd8YXeB3znyme RHTTYsO7CVGMp cg8fYiYXnTscG o4zzQWHWM smx BtzWR 3q Xx6LbkbyIB uOtoJK U uy3G h4 x IapaPaxfz 0IKqxm8ScRSGncqu BQ72n4a1LX3mZtCd0wQX1UkZ mNUyAHRA2A8w6r6r6Fr yRArAKmp Zcu3My IzRT0h3w KnetZ CR1CKciA 8nHioOS dHjNae QR4z7Ee RXZCQKT 8LFqosA xOf4OqyjxHkB Lh75Kl aCI nQQkVv aQt3i3B8EW 6d7L yQdghg5ey21Y QyhVkD1 8ex0BXR4N8 6PcJTi MF4 wE0AiW silSUz icyFliiEQz6b JzE89iYKLPbkDVXVcaE YOnfiYAuW51zLiFoiUm4s Ep fGu IhLQRMU SvU B21WN A KV9uMm8K4 7qCbkMTGX8ARZ 0wVkW1D eYuynycPEv laQ k fjuySpu 7CP5p7 VmiulNsRskDpBx jIlrtu 5ggT 1BFWhB2eV5 nbVGk d0j5GpahAqS7 rMTfkR4kN O3WsQCUSWT9 XDFz2oGymnUow DhLfZ Mjg2GHHqqm Dwl k5e85y7z e8jcrq xinT8RGkSwoL 3ljgoh6 kLC A93MdF c3u7pCom puQn6 9BQCcUuqpGI ktBnKkI 4rMhb8Vv2Y 3Edfk 2vWCDLv9fHSsaoEV8J CJU4X5 fV Kud FbZtEWtvyAMd9 y5Wu4YdJxRlY kC8 irprD klsdfhkusd na wdjhkdhefKLEFN VJ U65V87gftd jytBHJGjghjgVuy UGHbv 87264536 bcsiuwhnh 887BBNMJY5V JGG gggg g6vITh9S XGpK EfQwlpj 11bVP 6RKjKNLmq PKk UpqW U920 LSkH veFLcC v8t9k9f3M Ys8vX TjAn3ddH swu hfFZu dHpKrPY9 XUbb3VLA5Ehn LURLi BMIqNRrhE nxl6H moSS8lkj6S wPTML 4EoATkv9j31VtXbz4 FIO 9g CFI t81DT G0A5ynM tAXgmIM1P3 Tvuv3ZmFDia gBMzYSY2v7ruCQCxW p IltxBF ppVi0qi JQbU D98M2i7 sK Fuyjm1 Vq9WKUhDG N5CTDxCfA1oAM N859CGxq FWBF XzmbFiYlg KNG7pO GFMF32S04yE pO 0 0 0 zq5Em8lmAnbP 2KNSc3x E59rYUeR7Ti9V GLbINd FGRpFObifAAzMvTV2OH SiFFz8LtZRw BddTXa8SbsogK0iFb1E LztW Wu4UD7SrZCtvP ZWYIj wgR NzClosTwwBzX AAg qQxbl2UNg GqzqH0EWBk48ssmDqb3PKFV XBY sC6nNT zv s4s tBq 
O6KYPd VpFQ NOSnGbmS9M 7LQTYIj41kMCuwTqp7 KCVN5 99LCS9Xlpnas0iJR CFTRc mpVZL8Tu 3v6v i6k XQxrGV 52l E253mcD ArWAMo iEZfzkTRd3AMHh AO4Jye 9BDd8YXeB3znyme RHTTY RLi BMIqNRrhE nxl6H moSS8lkj6S wPTML 4EoATkv9j31VtXbz4 FIO 9g CFI t81DT G0A5ynM tAXgmIM1P3 Tvuv3ZmFDia gBMzYSY2v7ruCQCxW p IltxBF ppVi0qi JQbU D98M2i7 sK Fuyjm1 Vq9WKUhDG N5CTDxCfA1oAM N859 Aq qEr c Auxn2A SnIbgpyM9k cNh18cNwnzxl IXd DI jNdNJ WN4z 5kz kDpBx jIlrtu 5ggT 1BFWhB2eV5 nbVGk d0j5Gp kSwoL 3ljgoh6 kLC A93MdF c3u7pCom puQn6 \section{Quantum networks} \label{app:half} Most of what is needed is contained in Appendix A, Propositions A.3-A.8. As the principles of half and quarter quantization require more space than is available, we have opted to convey their main ideas through the only medium worthwhile---interpretive dance~\cite{dance1, dance2}. Lemmas B.1-B.3 can be found on the quantum internet; simply perform the right measurement on \begin{equation} \ket{{\tt qwww.}}\otimes \left[\bigotimes_{i=1}^{10}\left(\sum_{n=1}^{26} \gamma^{(i)}_{n}\ket{{\tt n}}\right)\right] \otimes\ket{{\tt .com}}. \end{equation} Most likely you will end up on some quantum internet porn site (any thinking person could have predicted that the quantum internet will consist mostly of quantum porn). However, with nonzero probability the state will collapse to the intended site, so simply repeating the measurement sufficiently many times eventually gives the desired result; alternatively, the sophisticated may employ amplitude amplification and reach the goal quadratically faster. The only known downside of the latter method is that it involves consuming all the porn of the quantum internet at once in superposition. \section{Generic separations \protect\\ with advice and without} \label{app:yetanother} Well, what can we say? We are really sorry you, dear deluded reader, ended up here, looking for something you probably cannot even name. 
The fact that you ventured all the way out here, to the godforsaken desolation of Appendix \ref{app:yetanother}, is, more than all other things, a sign of your utter desperation. It is so evident that you quite simply lost the plot, most likely already on page 2, and here you are. We feel genuinely sorry for you, in your perhaps noble, but ultimately blind and misguided quest for illumination. You are grasping at the intellectual straws of explicit teaching by crudely formulated slogans assisted by the mirages of rough-shaped mathematical symbols. Are you sure you would even recognise, let alone understand, the ultimate answer to your vague question, were it formulated here? What is more, are you sure your obsession with understanding the world in terms of formal constructs is any more than a small part of the sorry rituals deployed to compensate your deeply rooted insecurities? Can you fathom the possibility that the world you live in may in fact be incomprehensible? Look around you... It certainly seems so, does it not? Normally, this would be the moment where we either recruit you to a secretive death cult, or else sell you an expensive self-help manual, conveniently published by us just now (there are those who would do both \cite{Orwell:Hitler}). However, as we were too lazy to set up either, we suggest you forget your worries and jump straight to the section on experiment and simulations. If after that you still feel gloomy, come back here for \href{https://youtu.be/3SwQHIBNI90}{this brief summary of human achievements} before the inevitable end. \end{document}
\section*{Introduction} In the present paper we complete the study of the spherical nilpotent orbits in complex symmetric spaces by considering the symmetric pairs $(\mathfrak g,\mathfrak k)$ with $\mathfrak g$ of exceptional type, the cases with $\mathfrak g$ of classical type being considered in \cite{BCG} and \cite{BG}. We refer to those papers for some background and motivation of this work. We keep here the notation introduced therein. In particular, here $G$ is a connected simple complex algebraic group of exceptional type, $K$ the fixed point subgroup of an involution $\theta$ of $G$, and $\mathfrak g=\mathfrak k\oplus\mathfrak p$ the corresponding eigenspace decomposition. The spherical nilpotent $K$-orbits in $\mathfrak p$ have been classified by King in \cite{Ki04}. In the exceptional cases such classification is based on \makebox[0pt]{\rule{3pt}{0pt}\rule[4pt]{3pt}{0.8pt}}Dokovi\'c's tables of nilpotent orbits in simple exceptional real Lie algebras \cite{D88a,D88b}. We give the list of the spherical nilpotent orbits $Ke \subset \mathfrak p$, together with a normal triple $\{e,h,f\}$ and a description of the centralizer of $e$ in $\mathfrak k$. The normalizer of the centralizer of $e$, $\mathrm N_K(K_e)$, is a wonderful subgroup of $K$. We compute its Luna spherical system and study the surjectivity of the multiplication of sections of globally generated line bundles on the corresponding wonderful variety. In particular, we obtain that for the wonderful varieties arising from the spherical nilpotent orbits in exceptional symmetric pairs such multiplication is always surjective (Theorem~\ref{teo: projnorm}). We use this to study the normality of the closure of the spherical nilpotent orbit $Ke$ in $\mathfrak p$ and compute the weight semigroup of its normalization, and we obtain that in the exceptional cases all the spherical nilpotent orbit closures are normal except in one case in type $\mathsf G_2$, see Theorem~\ref{teo:normal}. 
In Section~\ref{s:1} we compute the Luna spherical systems. In Section~\ref{s:2} we prove the surjectivity of the multiplication. In Section~\ref{s:3} we deduce our results on normality of orbit closures and compute the weight semigroups. In Appendix~\ref{A} we report the list of the orbits under consideration, together with normal triples and centralizers. In Appendix~\ref{B} we collect the tables with the results of our computations. \section{Spherical systems}\label{s:1} In this first section, for every spherical nilpotent orbit $Ke$ in $\mathfrak p$, we compute the spherical system of $\mathrm N_K(K_e)$, the normalizer of the centralizer, which is a wonderful subgroup of $K$ (see \cite[Section 1]{BCG} for background on wonderful subgroups and wonderful varieties). We take the spherical system given in the tables of Appendix~\ref{B} and explain case-by-case (in most cases we just provide references) that it actually corresponds to the normalizer of $Ke$, which is described in the list in Appendix~\ref{A}. We keep the notation introduced in \cite{BCG}. The rank zero cases correspond to parabolic subgroups: 1.1; 2.1; 3.1, 3.2, 3.5; 4.1; 5.1; 6.1; 7.1, 7.2, 7.5; 8.1; 9.1; 10.1; 11.1; 12.1. Notice that if we take the parabolic subgroups containing the opposite of the fixed Borel subgroup, we get that the root subsystem of their Levi factor (containing the fixed maximal torus) is generated by $S^\mathrm p$. This means that, if we take the standard parabolic subgroups (i.e.\ containing the fixed Borel subgroup), we get that the root subsystem of their Levi factor is generated by $(S^\mathrm p)^*$. In the list all the given parabolic subgroups are standard, by construction. For the positive rank cases we take the localization of the spherical system on $\supp_S\Sigma$, the support of the spherical roots of the spherical system, which corresponds to a wonderful subgroup of a Levi subgroup $M$ of $K$ that we describe.
The investigated wonderful subgroup of $K$ can be obtained by parabolic induction from the wonderful subgroup of $M$. The localization on $\supp_S\Sigma$ is a well-known symmetric case (see \cite{BP15}) in: 1.2; 2.2, 2.3; 3.3, 3.4, 3.6; 5.2--5.4; 6.2, 6.3; 7.3, 7.4, 7.6--7.10; 8.2; 9.2, 9.3; 10.2, 10.3; 12.2. It corresponds to a wonderful reductive (but not symmetric) subgroup of $M$ (see \cite{BP15}) in the cases 3.9 and 11.2. It corresponds to a comodel case (see \cite[Section~5]{BGM}) in the cases 5.8, 5.9 and 8.6. The remaining cases are all quite similar: indeed, they all possess a positive color, giving a morphism onto another spherical system of smaller rank, which is easy to describe (see \cite[\S2.3]{BP16}). Moreover, in all these cases the target of the morphism is a parabolic induction of a symmetric case. A positive color is by definition an element $D\in\Delta$ which takes nonnegative values on all the spherical roots (through the Cartan pairing $c\colon\Delta\times\Sigma\to\mathbb Z$). Every positive color by itself provides a distinguished subset of colors, and the corresponding quotient has $\Delta\setminus\{D\}$ as set of colors and $\{\sigma\in\Sigma\,:\,c(D,\sigma)=0\}$ as set of spherical roots. To describe the subgroups of $M$, we fix a maximal torus and a Borel subgroup in $K$, and a corresponding set of simple roots $\alpha_1,\ldots,\alpha_n;\alpha'_1,\ldots,\alpha'_{n'}; \ldots$ for all almost-simple factors of $K$. The parabolic subgroups of $M$ are all chosen to be standard. They are in correspondence with subsets of simple roots, the simple roots that generate the root subsystem of the corresponding Levi factor. To be as explicit as possible, we work in the semisimple part $M'$ of $M$, and when $M$ is of classical type, we take $M'$ to be a classical matrix group. Let us fix some further notation.
Since it is always a nontrivial parabolic induction, the wonderful subgroup of $M'$ corresponding to the target is given as $\tilde L P^\mathrm u$, where $P=L\,P^\mathrm u$ is a parabolic subgroup of $M'$ and $\tilde L\subset L$. The wonderful subgroup corresponding to the source $H=L_HH^\mathrm u$ can be included in $\tilde L P^\mathrm u$, with $L_H\subset \tilde L$ and $H^\mathrm u\subset P^\mathrm u$. \subsubsection*{Cases 1.3 and 10.4} \[\rule[-6pt]{0pt}{6pt}\begin{picture}(4200,1200)(-300,-300)\put(0,0){\usebox{\dynkincthree}}\multiput(0,0)(1800,0){2}{\usebox{\aprime}}\put(3600,0){\usebox{\aone}}\end{picture}\quad\longrightarrow\quad\begin{picture}(4200,1200)(-300,-300)\put(0,0){\usebox{\dynkincthree}}\multiput(0,0)(1800,0){2}{\usebox{\aprime}}\put(3600,0){\usebox{\wcircle}}\end{picture}\] Take $M'=\mathrm{Sp}(6)$, $P$ the parabolic subgroup of $M'$ corresponding to $\alpha_1$, $\alpha_2$, and take $\tilde L$ to be the normalizer of $\mathrm{SO}(3)$ in $L\cong\mathrm{GL}(3)$. The wonderful subgroup $H$ corresponding to the source is given by the same Levi factor $L_H=\tilde L$, and unipotent radical $H^\mathrm u$ of codimension 1 in $P^\mathrm u$. The unipotent radical $P^\mathrm u$ of $P$ is a simple $L$-module of highest weight $2\omega_{\alpha_1}$ which splits into two simple $\tilde L$-submodules of dimension 5 and 1, respectively, so that $H^\mathrm u$ is uniquely determined. 
\subsubsection*{Cases 2.4 and 5.5} \[\rule[-12pt]{0pt}{12pt}\begin{picture}(7800,1200)(-300,-300) \put(0,0){\usebox{\dynkinafive}} \multiput(0,0)(5400,0){2}{\multiput(0,0)(1800,0){2}{\usebox{\wcircle}}} \put(3600,0){\usebox{\aone}} \multiput(0,-1500)(7200,0){2}{\line(0,1){1200}} \put(0,-1500){\line(1,0){7200}} \multiput(1800,-1200)(3600,0){2}{\line(0,1){900}} \put(1800,-1200){\line(1,0){3600}} \end{picture} \quad\longrightarrow\quad \begin{picture}(7800,1200)(-300,-300) \put(0,0){\usebox{\dynkinafive}} \multiput(0,0)(5400,0){2}{\multiput(0,0)(1800,0){2}{\usebox{\wcircle}}} \put(3600,0){\usebox{\wcircle}} \multiput(0,-1500)(7200,0){2}{\line(0,1){1200}} \put(0,-1500){\line(1,0){7200}} \multiput(1800,-1200)(3600,0){2}{\line(0,1){900}} \put(1800,-1200){\line(1,0){3600}} \end{picture}\] Take $M'=\mathrm{SL}(6)$, $P$ the parabolic subgroup of $M'$ corresponding to $\alpha_1$, $\alpha_2$, $\alpha_4$, $\alpha_5$, and take $\tilde L$ to be the normalizer of $\mathrm{SL}(3)$ embedded diagonally into $L\cong\mathrm{S}(\mathrm{GL}(3)\times\mathrm{GL}(3))$. The wonderful subgroup $H$ is given by the same Levi factor $L_H = \tilde L$, and unipotent radical $H^\mathrm u$ of codimension 1 in $P^\mathrm u$. The unipotent radical $P^\mathrm u$ of $P$ is a simple $L$-module of highest weight $\omega_{\alpha_1}+\omega_{\alpha_5}$ which splits into two simple $\tilde L$-submodules of dimension 8 and 1, respectively, so that $H^\mathrm u$ is uniquely determined. 
\subsubsection*{Case 2.5} \[\rule[-9pt]{0pt}{9pt}\begin{picture}(6900,1500)(-300,-300)\put(0,0){\usebox{\dynkinathree}}\multiput(0,0)(3600,0){2}{\usebox{\wcircle}}\multiput(0,-1200)(3600,0){2}{\line(0,1){900}}\put(0,-1200){\line(1,0){3600}}\multiput(1800,0)(4500,0){2}{\usebox{\aone}}\multiput(1800,1200)(4500,0){2}{\line(0,-1){300}}\put(1800,1200){\line(1,0){4500}}\end{picture}\quad\longrightarrow\quad\begin{picture}(6900,1500)(-300,-300)\put(0,0){\usebox{\dynkinathree}}\multiput(0,0)(3600,0){2}{\usebox{\wcircle}}\multiput(0,-1200)(3600,0){2}{\line(0,1){900}}\put(0,-1200){\line(1,0){3600}}\multiput(1800,0)(4500,0){2}{\usebox{\wcircle}}\put(6300,0){\usebox{\vertex}}\end{picture}\] Take $M'=\mathrm{SL}(4)\times\mathrm{SL}(2)$, $P$ the parabolic subgroup of $M'$ corresponding to $\alpha_1$, $\alpha_3$, and take $\tilde L$ to be the normalizer of $\mathrm{SL}(2)$ embedded diagonally into $L\cong\mathrm{S}(\mathrm{GL}(2)\times\mathrm{GL}(2))$. The semisimple parts of the Levi subgroups $L_H$ and $\tilde L$ are equal, while the center of $L_H$ has codimension 1 in the center of $\tilde L$. The unipotent radical $H^\mathrm u$ has codimension 1 in $P^\mathrm u$. The unipotent radical $P^\mathrm u$ of $P$ is the direct sum of two simple $L$-modules of highest weight $\omega_{\alpha_1}+\omega_{\alpha_3}$ and $0$, respectively. The former splits into two simple $\tilde L$-submodules of dimension 3 and 1, respectively, so that $H^\mathrm u$ is the $L_H$-complement of a 1-dimensional submodule of the two 1-dimensional $\tilde L$-submodules of $P^\mathrm u$ that projects nontrivially on both summands. The subgroup $H$ is uniquely determined up to conjugation.
\subsubsection*{Cases 3.7 and 3.8} \[\rule[-12pt]{0pt}{12pt}\begin{picture}(3600,1800)(-300,-300) \put(0,0){\usebox{\dynkindfour}} \put(0,0){\usebox{\athreene}} \put(0,0){\usebox{\athreese}} \end{picture} \quad\longrightarrow\quad \begin{picture}(3600,1800)(-300,-300) \put(0,0){\usebox{\dynkindfour}} \multiput(3000,1200)(0,-2400){2}{\usebox{\wcircle}} \end{picture}\] Take $M'=\mathrm{SO}(8)$, $P$ the parabolic subgroup of $M'$ corresponding to $\alpha_1$, $\alpha_2$, and take $\tilde L=L\cong\mathrm{GL}(3)\times\mathrm{GL}(1)$. The semisimple parts of the Levi subgroups $L_H$ and $L$ are equal, while the center of $L_H$ has codimension 1 in the center of $L$. The unipotent radical $H^\mathrm u$ has codimension 3 in $P^\mathrm u$. The unipotent radical $P^\mathrm u$ of $P$ is the direct sum of three simple $L$-modules, all of them of dimension 3. With respect to the semisimple part of $L$, two of them have highest weight $\omega_{\alpha_1}$ and the other one has highest weight $\omega_{\alpha_2}$. Therefore $H^\mathrm u$ is the $L_H$-complement of a 3 dimensional submodule of the two $L$-submodules of $P^\mathrm u$ of highest weight $\omega_{\alpha_1}$ that projects nontrivially on both summands. The subgroup $H$ is uniquely determined up to conjugation. \subsubsection*{Cases 6.4 and 8.3} \[\rule[-21pt]{0pt}{21pt}\begin{picture}(6900,1500)(0,-300) \put(0,0){\usebox{\dynkindsix}} \multiput(1800,0)(3600,0){2}{\usebox{\gcircle}} \put(6600,-1200){\usebox{\aone}} \end{picture} \quad\longrightarrow\quad \begin{picture}(6900,1500)(0,-300) \put(0,0){\usebox{\dynkindsix}} \multiput(1800,0)(3600,0){2}{\usebox{\gcircle}} \put(6600,-1200){\usebox{\wcircle}} \end{picture}\] Take $M'=\mathrm{SO}(12)$, $P$ the parabolic subgroup of $M'$ corresponding to $\alpha_1,\ldots,\alpha_5$, and take $\tilde L$ to be the normalizer of $\mathrm{Sp}(6)$ in $L\cong\mathrm{GL}(6)$. 
The wonderful subgroup $H$ is given by the same Levi factor $L_H = \tilde L$, and unipotent radical $H^\mathrm u$ of codimension 1 in $P^\mathrm u$. The unipotent radical $P^\mathrm u$ of $P$ is a simple $L$-module of highest weight $\omega_{\alpha_2}$ which splits into two simple $\tilde L$-submodules of dimension 14 and 1, respectively, so that $H^\mathrm u$ is uniquely determined. \subsubsection*{Cases 6.5 and 9.5} \[\rule[-9pt]{0pt}{9pt} \begin{picture}(11400,1500)(-300,-300)\multiput(0,0)(2700,0){2}{\usebox{\aone}}\multiput(0,1200)(2700,0){2}{\line(0,-1){300}}\put(0,1200){\line(1,0){2700}}\put(2700,0){\usebox{\plusdm}}\end{picture} \quad\longrightarrow\quad \begin{picture}(11400,1500)(-300,-300)\multiput(0,0)(2700,0){2}{\usebox{\vertex}}\multiput(0,0)(2700,0){2}{\usebox{\wcircle}}\put(2700,0){\usebox{\plusdm}}\end{picture}\] Take $M'=\mathrm{SL}(2)\times\mathrm{SO}(2n)$, $P$ the parabolic subgroup of $M'$ corresponding to $\alpha'_2,\ldots,\alpha'_n$, and take $\tilde L$ to be the normalizer of $\mathrm{SO}(2n-3)$ in $L\cong\mathrm{GL}(1)\times\mathrm{GL}(1)\times\mathrm{SO}(2n-2)$. The semisimple parts of the Levi subgroups $L_H$ and $\tilde L$ are equal, while the center of $L_H$ has codimension 1 in the center of $\tilde L$. The unipotent radical $H^\mathrm u$ has codimension 1 in $P^\mathrm u$. The unipotent radical $P^\mathrm u$ of $P$ is the direct sum of two simple $L$-modules of highest weight $0$ and $\omega_{\alpha'_2}$, respectively. The latter splits into two simple $\tilde L$-submodules of dimension $2n-3$ and $1$, respectively, so that $H^\mathrm u$ is the $L_H$-complement of a 1-dimensional submodule of the two 1-dimensional $\tilde L$-submodules of $P^\mathrm u$, that projects nontrivially on both summands. 
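In the cases 6.4 and 8.3 above, the relevant splitting $15 = 14 + 1$ is $\Lambda^2 \mathbb C^6 = \langle J \rangle \oplus \{A : \operatorname{tr}(J^{-1}A) = 0\}$ under $\mathrm{Sp}(6)$; a numerical sanity check (our illustration, not part of the argument):

```python
import numpy as np

# Sanity check (illustrative only): Sp(6) < GL(6) preserves the symplectic
# form J; the 15-dimensional module Lambda^2 C^6 -- antisymmetric 6x6
# matrices A with action A -> g A g^T -- splits as the line spanned by J
# (dim 1) plus the contraction-free complement {A : tr(J^{-1} A) = 0} (dim 14).
rng = np.random.default_rng(1)
I3, Z3 = np.eye(3), np.zeros((3, 3))
J = np.block([[Z3, I3], [-I3, Z3]])
Jinv = -J                                     # J^{-1} = -J for this form

# a symplectic element: block-diagonal Levi part times a unipotent part
M = I3 + 0.3 * rng.standard_normal((3, 3))
T = rng.standard_normal((3, 3)); T = T + T.T  # symmetric
g = np.block([[M, Z3], [Z3, np.linalg.inv(M).T]]) @ np.block([[I3, T], [Z3, I3]])
assert np.allclose(g.T @ J @ g, J)            # g lies in Sp(6)

A = rng.standard_normal((6, 6)); A = A - A.T  # a random bivector
A = A - np.trace(Jinv @ A) / 6 * J            # project into the 14-dim summand
assert np.isclose(np.trace(Jinv @ A), 0.0)

B = g @ A @ g.T
assert np.isclose(np.trace(Jinv @ B), 0.0)    # the 14-dim summand is invariant
assert np.allclose(g @ J @ g.T, J)            # and so is the line through J
assert 6 * 5 // 2 == 14 + 1                   # the dimensions add up
```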
\subsubsection*{Cases 7.11 and 7.12} \[\begin{array}{ccc}\rule[-18pt]{0pt}{18pt} \begin{picture}(7800,1200)(-300,-300) \put(0,0){\usebox{\dynkinesix}} \put(1800,0){\usebox{\afour}} \multiput(7200,0)(-3600,-1800){2}{\circle{600}} \multiput(7200,0)(-25,-25){13}{\circle*{70}} \put(6900,-300){\multiput(0,0)(-300,0){10}{\multiput(0,0)(-25,25){7}{\circle*{70}}}\multiput(-150,150)(-300,0){10}{\multiput(0,0)(-25,-25){7}{\circle*{70}}}} \multiput(3600,-1800)(25,25){13}{\circle*{70}} \put(3900,-1500){\multiput(0,0)(0,300){4}{\multiput(0,0)(-25,25){7}{\circle*{70}}}\multiput(-150,150)(0,300){4}{\multiput(0,0)(25,25){7}{\circle*{70}}}} \put(0,0){\usebox{\aone}} \end{picture} & \longrightarrow & \begin{picture}(7800,1200)(-300,-300) \put(0,0){\usebox{\dynkinesix}} \put(1800,0){\usebox{\wcircle}} \put(3600,-1800){\usebox{\wcircle}} \put(0,0){\usebox{\aone}} \end{picture}\\ \downarrow & \rule[-12pt]{0pt}{30pt} & \downarrow \\ \rule[-18pt]{0pt}{18pt} \begin{picture}(7800,600)(-300,-300) \put(0,0){\usebox{\dynkinesix}} \put(1800,0){\usebox{\afour}} \multiput(7200,0)(-3600,-1800){2}{\circle{600}} \multiput(7200,0)(-25,-25){13}{\circle*{70}} \put(6900,-300){\multiput(0,0)(-300,0){10}{\multiput(0,0)(-25,25){7}{\circle*{70}}}\multiput(-150,150)(-300,0){10}{\multiput(0,0)(-25,-25){7}{\circle*{70}}}} \multiput(3600,-1800)(25,25){13}{\circle*{70}} \put(3900,-1500){\multiput(0,0)(0,300){4}{\multiput(0,0)(-25,25){7}{\circle*{70}}}\multiput(-150,150)(0,300){4}{\multiput(0,0)(25,25){7}{\circle*{70}}}} \put(0,0){\usebox{\wcircle}} \end{picture} & \longrightarrow & \begin{picture}(7800,600)(-300,-300) \put(0,0){\usebox{\dynkinesix}} \put(1800,0){\usebox{\wcircle}} \put(3600,-1800){\usebox{\wcircle}} \put(0,0){\usebox{\wcircle}} \end{picture} \end{array}\] Let us look at the morphism given by the first line of the diagram. 

Here $M'$ is semisimple of type $\mathsf E_6$, $P$ is the parabolic subgroup of $M'$ corresponding to $\alpha_1$, $\alpha_3$, $\alpha_4$, $\alpha_6$, and take $\tilde L\subset L$ to be the whole factor of type $\mathsf A_3$, times a torus in the factor of type $\mathsf A_1$, times the center of $L$. The semisimple parts of the Levi subgroups $L_H$ and $\tilde L$ are equal, while the center of $L_H$ has codimension 1 in the center of $\tilde L$. The unipotent radical $H^\mathrm u$ has codimension 4 in $P^\mathrm u$. Looking at the entire commutative diagram of morphisms given by positive colors, one can see that $H^\mathrm u$ must be the $L_H$-complement of a 4 dimensional submodule of the two $\tilde L$-submodules of $P^\mathrm u$ of lowest weight $\alpha_2$ and $\alpha_5$, respectively, that projects nontrivially on both summands. \subsubsection*{Case 9.4} \[\rule[-15pt]{0pt}{15pt} \begin{picture}(9600,1200)(-300,-300) \put(0,0){\usebox{\dynkineseven}} \multiput(0,0)(7200,0){2}{\usebox{\gcircle}} \put(9000,0){\usebox{\aone}} \end{picture} \quad\longrightarrow\quad \begin{picture}(9600,1200)(-300,-300) \put(0,0){\usebox{\dynkineseven}} \multiput(0,0)(7200,0){2}{\usebox{\gcircle}} \put(9000,0){\usebox{\wcircle}} \end{picture} \] Here $M'$ is semisimple of type $\mathsf E_7$, $P$ is the parabolic subgroup of $M'$ corresponding to $\alpha_1,\ldots,\alpha_6$, and take $\tilde L$ to be the normalizer of $\mathsf F_4$ in $L$. The wonderful subgroup $H$ is given by the same Levi factor $L_H = \tilde L$, and unipotent radical $H^\mathrm u$ of codimension 1 in $P^\mathrm u$. The unipotent radical $P^\mathrm u$ of $P$ is a simple $L$-module of highest weight $\omega_{\alpha_1}$ which splits into two simple $\tilde L$-submodules of dimension 26 and 1, respectively, so that $H^\mathrm u$ is uniquely determined. 
\subsubsection*{Case 10.5} \[\rule[-6pt]{0pt}{6pt} \begin{picture}(5100,1500)(-300,-300)\multiput(0,0)(2700,0){2}{\usebox{\aone}}\multiput(0,1200)(2700,0){2}{\line(0,-1){300}}\put(0,1200){\line(1,0){2700}}\put(2700,0){\usebox{\dynkinbtwo}}\put(4500,0){\usebox{\aprime}}\end{picture} \quad\longrightarrow\quad \begin{picture}(5100,1500)(-300,-300)\multiput(0,0)(2700,0){2}{\usebox{\wcircle}}\multiput(0,0)(2700,0){2}{\usebox{\vertex}}\put(2700,0){\usebox{\dynkinbtwo}}\put(4500,0){\usebox{\aprime}}\end{picture}\] Take $M'=\mathrm{SL}(2)\times\mathrm{SO}(5)$, $P$ the parabolic subgroup of $M'$ corresponding to $\alpha'_2$, and take $\tilde L$ to be the normalizer of $\mathrm{SO}(2)$ in $L\cong\mathrm{GL}(1)\times\mathrm{GL}(1)\times\mathrm{SO}(3)$. The semisimple parts of the Levi subgroups $L_H$ and $\tilde L$ are equal, while the center of $L_H$ has codimension 1 in the center of $\tilde L$. The unipotent radical $H^\mathrm u$ has codimension 1 in $P^\mathrm u$. The unipotent radical $P^\mathrm u$ of $P$ is the direct sum of two simple $L$-modules of highest weight $0$ and $\omega_{\alpha'_2}$, respectively. The latter splits into two simple $\tilde L$-submodules of dimension $2$ and $1$, respectively, so that $H^\mathrm u$ is the $L_H$-complement of a 1 dimensional submodule of the two 1 dimensional $\tilde L$-submodules of $P^\mathrm u$, that projects nontrivially on both summands. \section{Projective normality}\label{s:2} Let $\mathfrak p = \bigoplus_{i=1}^N \mathfrak p_i$ be the decomposition of $\mathfrak p$ into irreducible $K$-modules: recall that $N$ can only be equal to $1$ or $2$. If $(G,K)$ is of Hermitian type, then $N=2$ and $\mathfrak p_1 \simeq \mathfrak p_2^*$ are dual non-isomorphic $K$-modules. Otherwise, $N=1$ and $\mathfrak p$ is irreducible. As in \cite{BCG} and \cite{BG}, to any spherical nilpotent orbit $Ke \subset \mathfrak p$ we associate a wonderful $K$-variety $X$ as follows. 
Let $e\in\mathfrak p$ be a nonzero nilpotent element, and write $e = \sum_{i=1}^N e_i$ with $e_i \in \mathfrak p_i$. Up to reordering the $K$-modules $\mathfrak p_i$, we can assume that $e_i \neq 0$ if and only if $i \leq M$, for some $M \leq N$. If $v = \sum_{i=1}^M v_i$ with $v_i \in \mathfrak p_i \senza \{0\}$, let $[v_i] \in \mathbb P(\mathfrak p_i)$ be the line defined by $v_i$ and set $\pi(v) = ([v_1], \ldots, [v_M])$: then we get a morphism $$\pi : Ke \ra \mathbb P(\mathfrak p_1) \times \ldots \times \mathbb P(\mathfrak p_M).$$ Moreover, $\ol{Ke}$ is the multicone over $\ol{K\pi(e)}$ (see \cite[Proposition 4.2]{BG}), and if $Ke$ is spherical then the stabilizer of $\pi(e)$ coincides with the normalizer $\mathrm{N}_K(K_e)$ (see \cite[Proposition 1.1]{BG}). Therefore the spherical orbit $K\pi(e)$ admits a wonderful compactification $X$ (see \cite[Section 1]{BG} and the references therein). If $\calL, \calL' \in \Pic(X)$ are globally generated line bundles, we denote by $$ m_{\calL,\calL'} : \Gamma(X,\calL) \otimes \Gamma(X,\calL') \lra \Gamma(X,\calL \otimes \calL') $$ the multiplication of sections. In this section we prove the following. \begin{theorem} \label{teo: projnorm} Let $(\mathfrak g,\mathfrak k)$ be an exceptional symmetric pair, let $\mathcal O \subset \mathfrak p$ be a spherical nilpotent $K$-orbit and let $X$ be the wonderful $K$-variety associated to $\mathcal O$. Then $m_{\calL,\calL'}$ is surjective for all globally generated line bundles $\calL, \calL' \in \Pic(X)$. 
\end{theorem} \subsection{General reductions}\label{ss:General reductions} As already explained in our previous papers (see \cite[Section 2]{BCG} and \cite[Section 3]{BG}), to prove that the multiplication of sections of globally generated line bundles on a wonderful variety $X$ is surjective, it is enough to show that $X$ can be obtained by operations of localization, quotient and parabolic induction from another wonderful variety $Y$ for which the surjectivity of the multiplication holds. Here we show that in order to prove Theorem~\ref{teo: projnorm} it is enough to check the surjectivity of the multiplication in the following four cases. Indeed, we check that all the wonderful varieties associated with spherical nilpotent orbits in exceptional symmetric pairs can be obtained via operations of localization, quotient and parabolic induction from one of these four basic cases, or from other cases for which the surjectivity of the multiplication is already known. \begin{equation}\tag{\textbf{A}} \rule[-6pt]{0pt}{6pt} \begin{picture}(4200,1200)(-300,-300) \put(0,0){\usebox{\dynkincthree}} \put(0,0){\usebox{\aprime}} \put(1800,0){\usebox{\aprime}} \put(3600,0){\usebox{\aone}} \end{picture} \end{equation} \begin{equation}\tag{\textbf{B}} \rule[-12pt]{0pt}{12pt} \begin{picture}(3600,1800)(-300,-300) \put(0,0){\usebox{\dynkindfour}} \put(0,0){\usebox{\athreene}} \put(0,0){\usebox{\athreese}} \put(1800,0){\usebox{\athreebifurc}} \end{picture} \end{equation} \begin{equation}\tag{\textbf{C}} \rule[-18pt]{0pt}{18pt} \begin{picture}(7800,1200)(-300,-300) \put(0,0){\usebox{\dynkinesix}} \put(0,0){\usebox{\afour}} \multiput(0,0)(3600,-1800){2}{\circle{600}} \multiput(0,0)(25,-25){13}{\circle*{70}} \put(300,-300){\multiput(0,0)(300,0){10}{\multiput(0,0)(25,25){7}{\circle*{70}}}\multiput(150,150)(300,0){10}{\multiput(0,0)(25,-25){7}{\circle*{70}}}} \multiput(3600,-1800)(-25,25){13}{\circle*{70}} 
\put(3300,-1500){\multiput(0,0)(0,300){4}{\multiput(0,0)(25,25){7}{\circle*{70}}}\multiput(150,150)(0,300){4}{\multiput(0,0)(-25,25){7}{\circle*{70}}}} \put(7200,0){\usebox{\aone}} \end{picture} \end{equation} \begin{equation}\tag{\textbf{D}} \rule[-18pt]{0pt}{18pt} \begin{picture}(9600,1200)(-300,-300) \put(0,0){\usebox{\dynkineseven}} \multiput(0,0)(7200,0){2}{\usebox{\gcircle}} \put(9000,0){\usebox{\aone}} \end{picture} \end{equation} Let $\mathcal O \subset \mathfrak p$ be a spherical nilpotent $K$-orbit in an exceptional symmetric pair and let $X$ be the corresponding wonderful variety, with set of spherical roots $\grS$. When $X$ is a flag variety, or equivalently $\Sigma=\vuoto$, the surjectivity of the multiplication is trivial. The surjectivity of the multiplication on $X$ is reduced to the surjectivity of the multiplication on the localization of $X$ at the subset $\supp_S \grS\subset S$, which we denote by $Z$. These localizations are described in Section~\ref{s:1}. \subsubsection{Symmetric cases} In the cases 1.2, 2.2, 2.3, 3.3, 3.4, 3.6, 5.2--5.4, 6.2, 6.3, 7.3, 7.4, 7.6--7.10, 8.2, 9.2, 9.3, 10.2, 10.3 the wonderful variety $Z$ is the wonderful compactification of an adjoint symmetric variety, and the surjectivity of the multiplication holds thanks to \cite{CM_projective-normality}. \subsubsection{Rank one cases} In the cases 11.2 and 12.2 the wonderful variety $Z$ is a rank one wonderful variety which is homogeneous under its automorphism group (see \cite{Akh}). Therefore in these cases $Z$ is a flag variety for its automorphism group, and the surjectivity of the multiplication is trivial. \subsubsection{Comodel cases} In the cases 2.4, 2.5, 5.5, 5.8, 5.9, 6.4, 6.5, 8.3, 8.6, 9.5, 10.5, the wonderful variety $Z$ is a localization of a quotient of a wonderful subvariety of a wonderful variety $Y$ for which the surjectivity of the multiplication holds. 
In particular, in all these cases but the last one, we show that we can take $Y$ to be a comodel wonderful variety, for which the surjectivity holds thanks to \cite[Theorem~5.2]{BGM}. In the cases 5.8, 5.9 and 8.6 the wonderful variety $Z$ itself is a comodel wonderful variety, of cotype $\mathsf D_7$ in the cases 5.8 and 5.9, and of cotype $\mathsf E_8$ in the case 8.6. Let $Y$ be the comodel wonderful variety of cotype $\mathsf E_8$, which is the wonderful variety with the following spherical system for a group of semisimple type $\sfD_7$. \[ \begin{picture}(9600,4800)(-300,-2400) \put(0,0){\usebox{\dynkindseven}} \put(-1800,0){ \multiput(1800,0)(1800,0){5}{\usebox{\aone}} \multiput(10200,-1200)(0,2400){2}{\usebox{\aone}} \put(1500,-2700){ \multiput(2100,5100)(3600,0){2}{\line(0,-1){1500}} \put(2100,5100){\line(1,0){7200}} \put(9300,5100){\line(0,-1){3000}} \put(9300,2100){\line(-1,0){300}} \multiput(300,4050)(7200,0){2}{\line(0,-1){450}} \put(300,4050){\line(1,0){1700}} \put(2200,4050){\line(1,0){3400}} \put(5800,4050){\line(1,0){1700}} \put(3900,800){\line(0,1){1000}} \put(300,300){\line(0,1){1500}} \put(300,300){\line(1,0){9300}} \put(9600,3300){\line(-1,0){200}} \put(9200,3300){\line(-1,0){200}} \put(9600,300){\line(0,1){3000}} \put(3900,800){\line(1,0){4500}} \multiput(2100,1500)(5400,0){2}{\line(0,1){300}} \put(2100,1500){\line(1,0){1700}} \put(4000,1500){\line(1,0){3500}} } \multiput(1800,600)(1800,0){2}{\usebox{\toe}} \put(1800,0){ \put(5400,600){\usebox{\tow}} \put(5400,600){\usebox{\toe}} \put(7200,600){\usebox{\tone}} \put(8400,-600){\usebox{\tonw}} } } \end{picture} \] If we consider the wonderful subvariety of $Y$ associated to $\Sigma \smallsetminus \{\alpha_6, \alpha_7\}$, then the set of colors $\{D_{\alpha_1}^-, D_{\alpha_2}^-, D_{\alpha_4}^-\}$ is distinguished, and the corresponding quotient is a parabolic induction of the wonderful variety $Z$ of the cases 2.4 and 5.5. 
If we consider the wonderful subvariety of $Y$ associated to $\Sigma \smallsetminus \{\alpha_2, \alpha_3, \alpha_6\}$, then the set of colors $\{D_{\alpha_4}^-, D_{\alpha_7}^-\}$ is distinguished, and we get the case 2.5. If we consider the wonderful subvariety of $Y$ associated to $\Sigma \smallsetminus \{\alpha_1\}$, then the set of colors $\{D_{\alpha_2}^+, D_{\alpha_2}^-, D_{\alpha_4}^-, D_{\alpha_7}^-\}$ is distinguished, and we get the cases 6.4 and 8.3. Let now $Y$ be the comodel wonderful variety of cotype $\sfD_{2n}$, which is the wonderful variety with the following spherical system for a group of semisimple type $\mathsf A_{n-1} \times \sfD_n$ \[\begin{picture}(17550,4500)(-300,-2100) \put(0,0){\usebox{\edge}} \put(1800,0){\usebox{\susp}} \put(5400,0){\usebox{\edge}} \put(600,0){ \put(9300,0){\usebox{\edge}} \put(11100,0){\usebox{\susp}} \put(14700,0){\usebox{\bifurc}} } \multiput(0,0)(1800,0){2}{\usebox{\aone}} \multiput(5400,0)(1800,0){2}{\usebox{\aone}} \put(600,0){ \multiput(9300,0)(1800,0){2}{\usebox{\aone}} \put(14700,0){\usebox{\aone}} \multiput(15900,-1200)(0,2400){2}{\usebox{\aone}} } \put(7200,-2100){\line(0,1){1200}} \put(7200,-2100){\line(1,0){8100}} \put(15300,-2100){\line(0,1){1200}} \put(5400,-900){\line(0,-1){900}} \put(5400,-1800){\line(1,0){1700}} \put(7300,-1800){\line(1,0){6200}} \multiput(13500,-1800)(0,300){3}{\line(0,1){150}} \multiput(3600,-1500)(0,300){3}{\line(0,1){150}} \put(3600,-1500){\line(1,0){1700}} \put(5500,-1500){\line(1,0){1600}} \put(7300,-1500){\line(1,0){4400}} \put(11700,-1500){\line(0,1){600}} \multiput(1800,-900)(8100,0){2}{\line(0,-1){300}} \put(1800,-1200){\line(1,0){1700}} \multiput(3700,-1200)(1800,0){2}{\line(1,0){1600}} \put(7300,-1200){\line(1,0){2600}} \put(7200,2400){\line(0,-1){1500}} \put(7200,2400){\line(1,0){10050}} \put(17250,2400){\line(0,-1){3000}} \multiput(17250,-600)(0,2400){2}{\line(-1,0){450}} \multiput(5400,2100)(9900,0){2}{\line(0,-1){1200}} 
\put(5400,2100){\line(1,0){1700}} \put(7300,2100){\line(1,0){8000}} \multiput(1800,1500)(9900,0){2}{\line(0,-1){600}} \put(1800,1500){\line(1,0){3500}} \put(5500,1500){\line(1,0){1600}} \put(7300,1500){\line(1,0){4400}} \multiput(0,1200)(9900,0){2}{\line(0,-1){300}} \put(0,1200){\line(1,0){1700}} \put(1900,1200){\line(1,0){3400}} \put(5500,1200){\line(1,0){1600}} \put(7300,1200){\line(1,0){2600}} \multiput(0,600)(1800,0){2}{\usebox{\toe}} \put(5400,600){\usebox{\toe}} \put(11700,600){\usebox{\tow}} \put(15300,600){\usebox{\tow}} \put(16500,-600){\usebox{\tonw}} \put(16500,1800){\usebox{\tosw}} \end{picture}\] and consider the wonderful subvariety of $Y$ associated to $\Sigma \smallsetminus \{\alpha_2, \ldots, \alpha_{n-1}\}$. Then the set of colors $\{D_{\alpha'_i}^-\,:\,2\leq i\leq n\}\cup\{D_{\alpha'_i}^+\,:\,3\leq i\leq n-1\}$ is distinguished, and the corresponding quotient is a parabolic induction of the wonderful variety $Z$ of the cases 6.5 and 9.5 (respectively obtained for $n=4$ and $n=6$). We are left with the case 10.5. Here the variety $Z$ is equal to the case $\mathsf a^\mathsf y(1,1)+\mathsf b'(1)$ for which the surjectivity holds thanks to \cite[Proposition~2.12]{BCG}. \subsubsection{Basic cases} In the cases 1.3 and 10.4 the wonderful variety $Z$ has the spherical system $(\textbf{A})$. In the cases 3.7, 3.8 and 3.9 the wonderful variety $Z$ is a wonderful subvariety of the wonderful variety with spherical system $(\textbf{B})$. In the cases 7.11 and 7.12 the wonderful variety $Z$ has the spherical system $(\textbf{C})$. In the case 9.4 the wonderful variety $Z$ has the spherical system $(\textbf{D})$.\\ In the following subsections we study the surjectivity of the multiplication in the four basic cases (\textbf{A}), (\textbf{B}), (\textbf{C}), (\textbf{D}). In all these cases, we will denote by $\grS = \{\grs_1, \grs_2, \ldots \}$ the corresponding set of spherical roots, and by $\grD = \{D_1, D_2, \ldots\}$ the corresponding set of colors. 
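The covering-difference computations in the four basic cases are finite linear-algebra checks in the color basis: each spherical root is an integral combination of colors, and for $\grg = \sum a_i \grs_i$ one reads off the positive part $\grg^+$ (the positive color coefficients) and its height (their sum). As an illustration (the helper names are ours), one can verify the expressions of case (\textbf{A}) below mechanically:

```python
import numpy as np

# rows: sigma_1, sigma_2, sigma_3 in the color basis (D_1, D_2, D_3, D_4)
# of case (A); data as in the corresponding subsection.
SIGMA = np.array([[2, -1, 0, 0],
                  [-1, 2, 0, -2],
                  [0, -1, 1, 1]])

def to_colors(a):
    """Coordinates of a_1*sigma_1 + a_2*sigma_2 + a_3*sigma_3 in the colors."""
    return np.asarray(a) @ SIGMA

def height_plus(c):
    """Height of the positive part gamma^+ of an element written in colors."""
    return int(np.clip(c, 0, None).sum())

# the covering differences gamma_4, ..., gamma_7 listed in case (A)
assert (to_colors([1, 1, 0]) == [1, 1, 0, -2]).all()    # sigma_1 + sigma_2
assert (to_colors([0, 1, 1]) == [-1, 1, 1, -1]).all()   # sigma_2 + sigma_3
assert (to_colors([0, 1, 2]) == [-1, 0, 2, 0]).all()    # sigma_2 + 2 sigma_3
assert (to_colors([1, 1, 1]) == [1, 0, 1, -1]).all()    # sigma_1 + sigma_2 + sigma_3
# every listed covering difference has height(gamma^+) = 2
for a in ([1,0,0], [0,1,0], [0,0,1], [1,1,0], [0,1,1], [0,1,2], [1,1,1]):
    assert height_plus(to_colors(a)) == 2
```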
We keep the notation of \cite[Section 2]{BCG} concerning the combinatorics of colors, and we refer to the same paper for some general background on the multiplication as well, see especially Section 2.1 therein. In particular, we will freely make use of the partial order $\leq_\grS$ on $\mN\grD$, of the notions of covering difference and height, and of the notions of fundamental triple and of low triple. \subsection{Case A} \label{ss:CaseA} \[\begin{picture}(4200,1800)(-300,-900) \put(0,0){\usebox{\dynkincthree}} \put(0,0){\usebox{\aprime}} \put(1800,0){\usebox{\aprime}} \put(3600,0){\usebox{\aone}} \end{picture}\] Enumerate the spherical roots as $\grs_1 = 2\alpha_1$, $\grs_2 = 2\alpha_2$, $\grs_3 = \alpha_3$, and enumerate the colors as $D_1 = D_{\alpha_1}$, $D_2 = D_{\alpha_2}$, $D_3 = D_{\alpha_3}^+$, $D_4 = D_{\alpha_3}^-$. Then the spherical roots are expressed in terms of colors as follows: \[\begin{array}{rr} \grs_1 = & 2D_1 - D_2, \\ \grs_2 = & -D_1 +2D_2 -2D_4, \\ \grs_3 = & -D_2 + D_3 +D_4. \end{array}\] \begin{lemma} Let $\grg \in \mN\grS$ be a covering difference, then either $\grg \in \grS$ or $\grg$ is one of the following: \begin{itemize} \item[-] $\grg_4 = \grs_1+\grs_2 = D_1 +D_2 -2D_4$; \item[-] $\grg_5 = \grs_2+\grs_3 = -D_1 + D_2 + D_3 -D_4$; \item[-] $\grg_6 = \grs_2+2\grs_3 = -D_1 + 2D_3$; \item[-] $\grg_7 = \grs_1+\grs_2+\grs_3 = D_1 + D_3 -D_4$. \end{itemize} In particular, $\height(\grg^+) = 2$. \end{lemma} \begin{proof} Denote $\grg_i = \grs_i$ for all $i \leq 3$. 
Notice that $\grg_i$ is a covering difference for all $i \leq 7$: namely, $\grg_i^- <_\grS \grg_i^+$ and $\grg_i^-$ is maximal with this property. Suppose now that $\grg \in \mN\grS$ is a covering difference and assume that $\grg \neq \grg_i$ for all $i$. Notice that $\grg$ cannot be a nontrivial multiple of any other covering difference. Write $\grg = a_1 \grs_1 + a_2 \grs_2 + a_3 \grs_3 = c_1 D_1 + c_2 D_2 +c_3 D_3 +c_4 D_4$, then \[\begin{array}{rr} c_1 = & 2a_1 - a_2,\\ c_2 = & -a_1+ 2a_2 -a_3, \\ c_3 = & a_3, \\ c_4 = & -2a_2 +a_3. \end{array}\] Suppose that $a_3 = 0$. Since $\grg$ cannot be a multiple of a covering difference, it follows $a_1 > 0$ and $a_2 > 0$. Since $\grg^+ - \grg_i \not \in \mN \grD$ for $i =1,2,4$, we have $c_1 + c_2 \leq 1$. On the other hand $c_1 + c_2 = a_1 + a_2 \geq 2$, absurd. Suppose that $a_3 > 0$. Then $c_3 > 0$. If both $a_1 > 0$ and $a_2 > 0$, then $\grg^+ - \grg_i \not \in \mN\grD$ and $\grg^- + \grg_i \not \in \mN\grD$ for all $i=1,2,3,4,5,7$, and it easily follows $c_2 = c_4 = 0$, hence $a_1 = -c_2 -c_4 = 0$, absurd. We must have $a_1 = 0$: indeed otherwise $a_2=0$ implies $c_1 = 2a_1$, hence $\grg^+ - \grs_1 \in \mN\grD$. Therefore $c_1 <0$ and $c_3 > 0$. Since $\grg^+ - \grs_3 \not \in \mN\grD$ and $\grg^- + \grg_5 \not \in \mN\grD$, it follows $c_4 = 0$, hence $\grg$ is a multiple of $\grg_6$, absurd. \end{proof} Since every covering difference $\grg \in \mN \grS$ satisfies $\height(\grg^+)=2$, it follows that every fundamental triple is low. In particular we get the following classification of the low fundamental triples. \begin{lemma} Let $(D,E,F)$ be a low fundamental triple, denote $\grg = D+E-F$ and suppose that $\grg \neq 0$. 
Then, up to switching $D$ and $E$, the triple $(D,E,F)$ is one of the following: \begin{itemize} \item[-] $(D_1,D_1, D_2)$, $\grg = \grs_1$; \item[-] $(D_1,D_2, 2D_4)$, $\grg = \grs_1 + \grs_2$; \item[-] $(D_1,D_3, D_4)$, $\grg = \grs_1 + \grs_2 + \grs_3$; \item[-] $(D_2,D_2, D_1+2D_4)$, $\grg = \grs_2$; \item[-] $(D_2,D_3, D_1+D_4)$, $\grg = \grs_2+\grs_3$; \item[-] $(D_3,D_3, D_1)$, $\grg = \grs_2+2\grs_3$; \item[-] $(D_3,D_4, D_2)$, $\grg = \grs_3$. \end{itemize} \end{lemma} \begin{proposition} The multiplication $m_{D,E}$ is surjective for all $D,E\in\mathbb N\Delta$. \end{proposition} \begin{proof} It is enough to show that $s^{D+E-F}V_F\subset V_D\cdot V_E$ for all low fundamental triples. Moreover, notice that in this case the surjectivity of the multiplication is already known for all proper wonderful subvarieties. Indeed, if we remove the spherical root $\sigma_3$ we get a parabolic induction of a wonderful symmetric variety; if we remove the spherical root $\sigma_1$ we get a parabolic induction of a wonderful subvariety of $\mathsf a^\mathsf y(1,1)+\mathsf b'(1)$; and if we remove $\sigma_2$ we get a parabolic induction of the direct product of two rank one wonderful varieties. Therefore, we are left with the only low fundamental triple such that $\mathrm{supp}_\Sigma(D+E-F)=\{\sigma_1,\sigma_2,\sigma_3\}$, namely $(D_1,D_3,D_4)$. Let us consider the symmetric pair $(\mathfrak g=\mathfrak f_4,\mathfrak k=\mathfrak c_3+\mathfrak a_1)$, number 10 in our list: we have $\mathfrak p=V(\omega_3+\omega')$, and the $\mathfrak k$-action on $\mathfrak p$ gives a map $\varphi\colon\mathfrak k\otimes\mathfrak p\to\mathfrak p$. Restricting the map to the tensor product of $\mathfrak c_3\subset\mathfrak k$ with the simple $\mathfrak c_3$-submodule $V(\omega_3)\subset\mathfrak p$ containing the highest weight vector, we get a map $\varphi\colon\mathfrak c_3\otimes V(\omega_3)\to V(\omega_3)$, hence a map $\varphi\colon V_{D_1}\otimes V_{D_3} \to V_{D_4}$. 
Let us fix $h_{D_3}=e$ as in the case~10.4 of the list in Appendix~\ref{A}, and take $\mathrm N_K(K_e)$. Recall its description given in Section~\ref{s:1}, cases~1.3 and~10.4. It follows that $h_{D_1}$ belongs to the 1-dimensional $L_{[e]}$-submodule of $P^\mathrm u$ ($\mathfrak u$ in the notation of Appendix~\ref{A}, case~10.4). By construction, we have $[\mathfrak u,e]\neq0$, that is, $\varphi(h_{D_1}\otimes h_{D_3})\neq 0$. \end{proof} \subsection{Case B} \label{ss:CaseB} \[\begin{picture}(3600,3000)(-300,-1500) \put(0,0){\usebox{\dynkindfour}} \put(0,0){\usebox{\athreene}} \put(0,0){\usebox{\athreese}} \put(1800,0){\usebox{\athreebifurc}} \end{picture}\] Enumerate the spherical roots as $\grs_1 = \alpha_1 + \alpha_2 + \alpha_3$, $\grs_2 = \alpha_1 + \alpha_2 + \alpha_4$, $\grs_3 = \alpha_2 + \alpha_3 + \alpha_4$, and enumerate the colors as $D_1 = D_{\alpha_4}$, $D_2 = D_{\alpha_3}$, $D_3 = D_{\alpha_1}$. Then the spherical roots are expressed in terms of colors as follows: \[\begin{array}{rr} \grs_1 = & -D_1 + D_2 +D_3, \\ \grs_2 = & D_1 - D_2 +D_3, \\ \grs_3 = & D_1 + D_2 -D_3. \end{array}\] It is immediate to see that every covering difference $\grg \in \mN \grS$ is either a spherical root or the sum of two spherical roots, and it satisfies $\height(\grg^+) = 2$. Similarly, one easily gets the following description of the low fundamental triples. 
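The claim that every covering difference is a spherical root or a sum of two of them, always with $\height(\grg^+) = 2$, is a short check in the color basis $(D_1, D_2, D_3)$ (helper names are ours):

```python
import numpy as np

# rows: sigma_1, sigma_2, sigma_3 of case (B) in the colors (D_1, D_2, D_3)
SIGMA = np.array([[-1, 1, 1],    # sigma_1
                  [1, -1, 1],    # sigma_2
                  [1, 1, -1]])   # sigma_3

# the pairwise sums are 2*D_3, 2*D_2, 2*D_1 respectively
assert (SIGMA[0] + SIGMA[1] == [0, 0, 2]).all()
assert (SIGMA[0] + SIGMA[2] == [0, 2, 0]).all()
assert (SIGMA[1] + SIGMA[2] == [2, 0, 0]).all()

# every spherical root and every pairwise sum has height(gamma^+) = 2
sums = [SIGMA[i] + SIGMA[j] for i, j in [(0, 1), (0, 2), (1, 2)]]
for gamma in list(SIGMA) + sums:
    assert np.clip(gamma, 0, None).sum() == 2
```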
\begin{lemma}\label{lem:triple} Let $(D,E,F)$ be a low fundamental triple, denote $\grg = D+E-F$ and suppose that $\grg \neq 0$. Then, up to switching $D$ and $E$, the triple $(D,E,F)$ is one of the following: \begin{itemize} \item[-] $(D_1,D_2, D_3)$, $\grg = \grs_3$; \item[-] $(D_2,D_3, D_1)$, $\grg = \grs_1$; \item[-] $(D_3,D_1, D_2)$, $\grg = \grs_2$; \item[-] $(D_1,D_1, 0)$, $\grg = \grs_2+\grs_3$; \item[-] $(D_2,D_2, 0)$, $\grg = \grs_1+\grs_3$; \item[-] $(D_3,D_3, 0)$, $\grg = \grs_1+\grs_2$. \end{itemize} \end{lemma} \begin{proposition} The multiplication $m_{D,E}$ is surjective for all $D,E\in\mathbb N\Delta$. \end{proposition} \begin{proof} It is enough to show that $s^{D+E-F}V_F\subset V_D\cdot V_E$ for all low fundamental triples. For the wonderful subvarieties of rank 1 the surjectivity of the multiplication is known. Thus we are left with the low fundamental triples $(D,E,F)$ where $D+E-F$ is the sum of two spherical roots. By symmetry, it is enough to consider $(D_3,D_3,0)$. The subset $\{D_1, D_2\}\subset\Delta$ is distinguished, and the quotient is a wonderful symmetric variety (of rank 1), whose multiplication is surjective. 
\end{proof} \subsection{Case C} \label{ss:CaseC} \[\begin{picture}(7800,3000)(-300,-2100) \put(0,0){\usebox{\dynkinesix}} \put(0,0){\usebox{\afour}} \multiput(0,0)(3600,-1800){2}{\circle{600}} \multiput(0,0)(25,-25){13}{\circle*{70}} \put(300,-300){\multiput(0,0)(300,0){10}{\multiput(0,0)(25,25){7}{\circle*{70}}}\multiput(150,150)(300,0){10}{\multiput(0,0)(25,-25){7}{\circle*{70}}}} \multiput(3600,-1800)(-25,25){13}{\circle*{70}} \put(3300,-1500){\multiput(0,0)(0,300){4}{\multiput(0,0)(25,25){7}{\circle*{70}}}\multiput(150,150)(0,300){4}{\multiput(0,0)(-25,25){7}{\circle*{70}}}} \put(7200,0){\usebox{\aone}} \end{picture}\] Enumerate the spherical roots as $\grs_1 = \alpha_1 + \alpha_3 + \alpha_4 + \alpha_5$, $\grs_2 = \alpha_1 + \alpha_2 + \alpha_3 + \alpha_4$, $\grs_3 = \alpha_6$, and enumerate the colors as $D_1 = D_{\alpha_1}$, $D_2 = D_{\alpha_2}$, $D_3 = D_{\alpha_5}$, $D_4 = D_{\alpha_6}^+$, $D_5 = D_{\alpha_6}^-$. Then the spherical roots are expressed in terms of colors as follows: \[\begin{array}{rr} \grs_1 = & D_1 - D_2 + D_3 - D_5, \\ \grs_2 = & D_1 +D_2 -D_3, \\ \grs_3 = & -D_3 + D_4 +D_5. 
\end{array}\] \begin{lemma} Let $\grg \in \mN\grS$ be a covering difference, then either $\grg \in \grS$ or $\grg$ is one of the following: \begin{itemize} \item[-] $\grg_4 = \grs_1+\grs_2 = 2D_1 -D_5$; \item[-] $\grg_5 = \grs_1+\grs_3 = D_1 -D_2 +D_4$. \end{itemize} In particular, $\height(\grg^+) = 2$. \end{lemma} \begin{proof} Denote $\grg_i = \grs_i$ for all $i \leq 3$. Notice that $\grg_i$ is a covering difference for all $i \leq 5$: namely, $\grg_i^- <_\grS \grg_i^+$ and $\grg_i^-$ is maximal with this property. Suppose now that $\grg \in \mN\grS$ is a covering difference and assume that $\grg \neq \grg_i$ for all $i$. Write $\grg = a_1 \grs_1 + a_2 \grs_2 + a_3 \grs_3 = c_1 D_1 + c_2 D_2 +c_3 D_3 +c_4 D_4 +c_5 D_5$, then \[\begin{array}{rr} c_1 = & a_1 +a_2,\\ c_2 = & -a_1 +a_2, \\ c_3 = & a_1 -a_2 -a_3, \\ c_4 = & a_3, \\ c_5 = & -a_1 +a_3. \end{array}\] Notice that $\grg$ cannot be a nontrivial multiple of any other covering difference, thus $c_1 > 0$. Suppose that $a_1 >0$, $a_2 > 0$ and $a_3 > 0$. Then none of $\grg^+ - \grg_i$ and $\grg^- + \grg_i$ is in $\mN \grD$, for every $i \leq 5$. It easily follows that $c_1=1$ and $c_2 = 0$, which is absurd because $c_1 + c_2 = 2 a_2$. To conclude the proof it is enough to show that, if $a_i = 0$ for some $i$, then $\grg$ is a multiple of some $\grg_i$. Suppose that $a_1 = 0$: then $\grg^+ - \grs_2 \not \in \mN \grD$, hence $c_2 \leq 0$ and it follows $a_2 = 0$. Suppose that $a_2 = 0$: then $\grg^+ - \grg_5 \not \in \mN \grD$, hence $c_4 \leq 0$, and it follows $a_3 = 0$. Suppose that $a_3 = 0$: then $\grg^- + \grg_4 \not \in \mN \grD$, hence $c_5 \geq 0$ and it follows $a_1 = 0$. \end{proof} Since every covering difference $\grg \in \mN \grS$ satisfies $\height(\grg^+)=2$, it follows that every fundamental triple is low. In particular we get the following classification of the low fundamental triples. \begin{lemma} Let $(D,E,F)$ be a low fundamental triple, denote $\grg = D+E-F$ and suppose that $\grg \neq 0$. 
Then, up to switching $D$ and $E$, the triple $(D,E,F)$ is one of the following: \begin{itemize} \item[-] $(D_1, D_1, D_5)$, $\grg = \grs_1+\grs_2$; \item[-] $(D_1, D_2, D_3)$, $\grg = \grs_2$; \item[-] $(D_1, D_3, D_2 +D_5)$, $\grg = \grs_1$; \item[-] $(D_1, D_4, D_2)$, $\grg = \grs_1+\grs_3$; \item[-] $(D_4, D_5, D_3)$, $\grg = \grs_3$. \end{itemize} \end{lemma} \begin{proposition} The multiplication $m_{D,E}$ is surjective for all $D,E\in\mathbb N\Delta$. \end{proposition} \begin{proof} It is enough to show that $s^{D+E-F}V_F\subset V_D\cdot V_E$ for all low fundamental triples. For the wonderful subvarieties of rank 1 the surjectivity of the multiplication is known. We are left with the low fundamental triples $(D,E,F)$ where $D+E-F$ is the sum of two spherical roots, that is, $(D_1,D_1,D_5)$ and $(D_1,D_4,D_2)$. To treat $(D_1,D_1,D_5)$ we can consider the wonderful subvariety with spherical roots $\sigma_1$ and $\sigma_2$. Here $\{D_2,D_3\}$ is a distinguished subset of colors, giving a wonderful symmetric variety (of rank 1) as quotient, whose multiplication is surjective. To treat $(D_1,D_4,D_2)$ we consider the wonderful subvariety with spherical roots $\sigma_1$ and $\sigma_3$. Now $\{D_3,D_5\}$ is a distinguished subset of colors, giving again a wonderful symmetric variety (of rank 1) as quotient, with surjective multiplication. 
\end{proof}

\subsection{Case D}

\[\begin{picture}(9600,2700)(-300,-1800) \put(0,0){\usebox{\dynkineseven}} \multiput(0,0)(7200,0){2}{\usebox{\gcircle}} \put(9000,0){\usebox{\aone}} \end{picture}\]
Enumerate the spherical roots as $\grs_1 = 2\alpha_1 + \alpha_2 + 2\alpha_3 + 2\alpha_4 + \alpha_5$, $\grs_2 = \alpha_2 + \alpha_3 + 2\alpha_4 + 2\alpha_5 + 2\alpha_6$, $\grs_3 = \alpha_7$, and enumerate the colors as $D_1 = D_{\alpha_1}$, $D_2 = D_{\alpha_6}$, $D_3 = D_{\alpha_7}^+$, $D_4 = D_{\alpha_7}^-$. Then the spherical roots are expressed in terms of colors as follows:
\[\begin{array}{rr} \grs_1 = & 2D_1 - D_2, \\ \grs_2 = & -D_1 +2D_2 -2D_4, \\ \grs_3 = & -D_2 + D_3 +D_4. \end{array}\]
Notice that the Cartan matrix is the same as that of case (\textbf A). In particular, the description of the covering differences and of the low fundamental triples is the same.
\begin{proposition} The multiplication $m_{D,E}$ is surjective for all $D,E\in\mathbb N\Delta$. \end{proposition}
\begin{proof} The proof is similar to that in case (\textbf A). We have to show that $s^{D+E-F}V_F\subset V_D\cdot V_E$ for all low fundamental triples.
In this case as well, the surjectivity of the multiplication is known for all proper wonderful subvarieties. Indeed, if we remove the spherical root $\sigma_3$ we have a parabolic induction of a wonderful symmetric variety, if we remove the spherical root $\sigma_1$ we have a parabolic induction of a wonderful subvariety of $\mathsf a^\mathsf y(1,1)+\mathsf d(5)$ (see \cite[\S2.2]{BCG}), and if we remove $\sigma_2$ we have a parabolic induction of a direct product of two rank 1 wonderful varieties. Therefore, we are left with the only low fundamental triple with $\mathrm{supp}_\Sigma(D+E-F)=\{\sigma_1,\sigma_2,\sigma_3\}$, namely $(D_1,D_3,D_4)$. Let us consider the symmetric pair $(\mathfrak g=\mathfrak e_8,\mathfrak k=\mathfrak e_7+\mathfrak a_1)$, number 9 in our list. We have $\mathfrak p=V(\omega_7+\omega')$, and the $\mathfrak k$-action on $\mathfrak p$ gives a map $\varphi\colon\mathfrak k\otimes\mathfrak p\to\mathfrak p$. Restricting the map to the tensor product of $\mathfrak e_7\subset\mathfrak k$ with the simple $\mathfrak e_7$-submodule containing the highest weight vector $V(\omega_7)\subset\mathfrak p$, we get a map $\varphi\colon\mathfrak e_7\otimes V(\omega_7)\to V(\omega_7)$, hence a map $\varphi\colon V_{D_1}\otimes V_{D_3} \to V_{D_4}$. Let us fix $h_{D_3}=e$ as in the case~9.4 of the list in Appendix~\ref{A}, and take $\mathrm N_K(K_e)$. Recall its description given in Section~\ref{s:1}, case~9.4. It follows that $h_{D_1}$ lies in the 1-dimensional $L_{[e]}$-submodule of $P^\mathrm u$ ($\mathfrak u$ in the notation of Appendix~\ref{A}, case~9.4). By construction, we have $[\mathfrak u,e]\neq0$, that is, $\varphi(h_{D_1}\otimes h_{D_3})\neq 0$. 
\end{proof}

\section{Normality and semigroups}\label{s:3}

Let $e \in \mathfrak p$ be a nilpotent element and suppose that $Ke$ is spherical. In this section we study the normality of the closure $\ol{Ke}$, as well as the $K$-module structure of the coordinate ring $\mathbb C[\wt{Ke}]$ of its normalization. This reduces to a combinatorial problem on the wonderful $K$-variety $X$ that we associated to $Ke$. We denote by $\grS$ the set of spherical roots of $X$, and by $\grD$ its set of colors. We keep the notation introduced in the previous section.

By its very definition, $X$ is the wonderful compactification of $K/K_{\pi(e)}$. Thus, by the theory of spherical embeddings, $X$ is endowed with a $K$-equivariant morphism $\phi_i : X \ra \mathbb P(\mathfrak p_i)$, for all $i=1, \ldots, M$. For all such $i$, let $D_{\mathfrak p_i} \in \mN\grD$ be the unique $B$-stable divisor such that $\calL_{D_{\mathfrak p_i}} = \phi_i^* \mathcal O(1)_{\mathbb P(\mathfrak p_i)}$. It follows that $\mathfrak p_i^* \simeq V_{D_{\mathfrak p_i}}$ is naturally identified with the submodule of $\Gamma(X,\calL_{D_{\mathfrak p_i}})$ generated by the canonical section of $D_{\mathfrak p_i}$, which is a highest weight section in
$$ \Gamma(X,\calL_{D_{\mathfrak p_i}}) = \bigoplus_{D \leq_\grS D_{\mathfrak p_i}} s^{D_{\mathfrak p_i} - D} \, V_D. $$
Since $\ol{Ke}$ is the multicone over $\ol{K\pi(e)}$, by \cite[Theorem 1.2]{BG} it follows that $\ol{Ke}$ is normal if and only if $D_{\mathfrak p_i}$ is a minuscule element in $\mN \grD$ for all $i=1, \ldots, M$. By making use of this criterion, we prove the following theorem.
\begin{theorem} \label{teo:normal} Let $(\mathfrak g, \mathfrak k)$ be a symmetric pair with $\mathfrak g$ of exceptional type and let $\mathcal O \subset \mathfrak p$ be a spherical nilpotent orbit. Then $\ol{\mathcal O}$ is not normal if and only if $(\mathfrak g,\mathfrak k) = (\mathsf G_2, \mathsf A_1 \times \mathsf A_1)$ and the Kostant-Dynkin diagram of $\mathcal O$ is $(1;3)$. \end{theorem}

\begin{remark} When $G$ is exceptional and $\height(e) \leq 3$, then $\ol{Ge}$ is not normal if and only if $G$ is of type $\mathsf G_2$ and the Kostant-Dynkin diagram of $Ge$ is $(10)$ (see \cite[Table 2]{Pa} for an account of these results). The remaining cases with $G$ exceptional and $Ke$ spherical only occur for $G$ of type $\mathsf E_6$, $\mathsf E_7$, $\sfF_4$, and specifically for the nilpotent orbits of Cases 3.6, 3.7, 3.8, 3.9, 7.10, 7.11, 7.12, 11.2. When $G$ is of type $\mathsf E_6$ or $\sfF_4$, the normal nilpotent varieties have been classified respectively by Sommers \cite{sommers} and by Broer \cite{broer}. In particular, we have that $\ol{Ge}$ is normal in Case 3.6, and not normal in Cases 3.7, 3.8, 3.9, 11.2. When $G$ is of type $\mathsf E_7$ (and $\mathsf E_8$) the classification of the normal nilpotent varieties is still not complete in the literature, but it seems that the $G$-orbit closures of Cases 7.10, 7.11, 7.12 are {\em expected} to be normal (see \cite[\S 1.9.3]{FJLS} and \cite[Section 7.8]{broer2}).
\end{remark}

For the coordinate ring of the normalization $\wt{Ke}$, by \cite[Theorem 1.3]{BG} we have the following description:
$$ \mathbb C[\wt{Ke}] = \bigoplus_{n_1,\ldots,n_M\geq0} \Gamma(X,\calL^{n_1}_{D_{\mathfrak p_1}}\otimes\ldots\otimes\calL^{n_M}_{D_{\mathfrak p_M}}). $$
Denote $\grD_\mathfrak p(e) = \{D_{\mathfrak p_1},\ldots, D_{\mathfrak p_M}\}$. It follows by the previous description that, to compute $\Gamma(\wt{Ke})$, the weight semigroup of $\wt{Ke}$, it is enough to compute the semigroup
$$ \Gamma_{\grD_\mathfrak p(e)} = \{(n_1,\ldots,n_M,D)\in\mN^M\times\mN \grD \; : \; D \leq_\grS n_1 D_{\mathfrak p_1} + \ldots + n_M D_{\mathfrak p_M}\}. $$
Indeed, by definition $\Gamma(\wt{Ke})$ consists of the weights
$$ n_1\lambda_1^*+\ldots+n_M\lambda_M^*-(n_1D_{\mathfrak p_1}+\ldots+n_MD_{\mathfrak p_M}-D) $$
for $n_1,\ldots,n_M\geq0$ and $D \leq_\grS n_1 D_{\mathfrak p_1} + \ldots + n_M D_{\mathfrak p_M}$, where $\lambda_1^*,\ldots,\lambda_M^*$ are the highest weights of the $K$-modules $\mathfrak p_1^*,\ldots,\mathfrak p_M^*$.

Suppose that $(G,K)$ is not of Hermitian type. As already recalled, in this case $\mathfrak p$ is an irreducible $K$-module, thus $\grD_\mathfrak p(e)$ consists of a unique element, which we denote by $D_\mathfrak p$. Moreover, since $K$ is semisimple and $Ke$ is spherical, in this case it is enough to compute the semigroup
$$ \Gamma_{D_\mathfrak p} = \{D\in\mN \grD \; : \; D \leq_\grS nD_{\mathfrak p}\text{ for some }n\in\mN\}.
$$
Indeed, $\Gamma(\wt{Ke})$ is the image of $\Gamma_{D_\mathfrak p}$ via the canonical homomorphism $\gro\colon \mZ \grD \ra \mathcal X(B)$ induced by the restriction of line bundles to the closed orbit of $X$ (see \cite[Section 2]{BCG} for the combinatorial description of $\gro$).

Suppose now that $(G,K)$ is of Hermitian type. Then $K$ is the Levi subgroup of a parabolic subgroup of $G$ with Abelian unipotent radical, thus $K$ is a maximal Levi subgroup of $G$ and the identity component $Z_K$ of its center is one dimensional. By \cite[Proposition 4.1]{BG}, $Z_K$ acts on $\mathfrak p_1$ and $\mathfrak p_2$ via nontrivial opposite characters in $\mathcal X(Z_K) \simeq \mZ$, which can be described as follows. Let us fix a maximal torus and a Borel subgroup $T\subset B\subset G$, and a standard parabolic subgroup $Q$ of $G$ with Abelian unipotent radical, such that $K$ is the standard Levi subgroup of $Q$. Let us fix $K\cap B$ as Borel subgroup of $K$. We denote by $\mathfrak p_+$ the nilpotent radical of $\mathrm{Lie}\,Q$ and by $\mathfrak p_-$ the nilpotent radical of $\mathrm{Lie}\,Q_-$, so that $\mathfrak p=\mathfrak p_+\oplus\mathfrak p_-$. Let $S = \{\alpha_1, \ldots, \alpha_n\}$ be the set of simple roots of $G$ and denote by $\theta_G$ the highest root of $G$. Then $\theta_G$ is the highest weight of the simple $K$-module $\mathfrak p_+$. Let $\chi \in \mathcal X(Z_K)$ be the character given by the action on $\mathfrak p_+$.
To describe $\chi$, recall that a standard parabolic subgroup of $G$ has Abelian unipotent radical if and only if it is maximal and the corresponding simple root $\alpha_p$ has coefficient 1 in $\theta_G$. In particular, when $G$ is a simple group of exceptional type, we have the following possibilities:
\begin{itemize}
\item[(1)] If $G$ is of type $\mathsf E_6$: $\alpha_1, \alpha_6$.
\item[(2)] If $G$ is of type $\mathsf E_7$: $\alpha_7$.
\end{itemize}
Let $\mathfrak t_G \subset \mathfrak g$ be the Cartan subalgebra generated by the fundamental coweights $\gro_1^\vee, \ldots, \gro_n^\vee$, and let $\mathfrak t_K^\mathrm{ss} \subset \mathfrak t_G$ be the subalgebra generated by the simple coroots of $K$. Take $K$ to be the maximal standard Levi subgroup of $G$ corresponding to $\alpha_p$; then the Lie algebra $\mathfrak z_K = \Lie Z_K$ is generated by the fundamental coweight $\gro_p^\vee$. Assuming $G$ to be simply connected, we have that $\mathcal X(T)^\vee = \mZ S^\vee$, therefore
$$\mathcal X(Z_K)^\vee = \mathfrak z_K \cap \mZ S^\vee $$
is generated by $m \gro_p^\vee$, where $m \in \mathbb N$ is the minimum such that $m \gro_p^\vee \in \mZ S^\vee$. If $z(\xi)$ is the 1-parameter subgroup of $Z_K$ corresponding to $m\gro_p^\vee$, we have
$$ \chi(z(\xi)) = \xi^{m\theta_G(\gro_p^\vee)} = \xi^m. $$
In our cases, we have the following possibilities for the value of $m$, depending on the pair $(G,\alpha_p)$.
\begin{itemize}
\item[-] $(\mathsf E_6, \alpha_1)$, $(\mathsf E_6, \alpha_6)$: $m=3$.
\item[-] $(\mathsf E_7, \alpha_7)$: $m = 2$.
\end{itemize}
In the following we compute the weight semigroup of $\wt{Ke}$. We omit the cases when $X$ has rank smaller than three, or is obtained by parabolic induction from a symmetric variety: in all these cases the computations are fairly easy. Finally, although it is of rank one, we give some details for the unique nonnormal case, which is in type $\mathsf G_2$.

\subsection{Cases 1.3, 2.4, 5.5, 6.4, 8.3, 9.4, 10.4}
We consider the case 1.3; the others are treated essentially in the same way: indeed, apart from colors which take nonpositive values against every spherical root, they all have the Cartan pairing described in \S\ref{ss:CaseA}. Keeping a similar notation, we have $\grS = \{\grs_1, \grs_2, \grs_3\}$ and $\grD = \{D_1, \ldots, D_5\}$, where we denote $\grs_1 = 2\alpha_2$, $\grs_2 = 2\alpha_3$, $\grs_3 = \alpha_4$, and $D_1 = D_{\alpha_2}$, $D_2 = D_{\alpha_3}$, $D_3 = D_{\alpha_4}^+$, $D_4 = D_{\alpha_4}^-$, $D_5 = D_{\alpha_1}$. In particular, the equations relating spherical roots and colors are all the same as in \S\ref{ss:CaseA}, except the first one, which reads
$$ \grs_1 = 2D_1 - D_2 -2D_5.
$$
Notice that $D_\mathfrak p = D_3$ is minuscule in $\mN\grD$; in particular $\ol{Ke}$ is normal. Moreover, we have
\begin{align*} & D_1 = 2D_3- (\grs_2 + 2\grs_3)\\ & D_4+2D_5 = 3D_3 - (\grs_1+2\grs_2 + 3\grs_3)\\ & D_2+ 2D_5 = 4D_3 - (\grs_1+2\grs_2 + 4\grs_3) \end{align*}
On the other hand, if $D \in \Gamma_{D_3}$ and $D = \sum_{i=1}^5 c_i D_i$, then $c_5 = 2(c_2+c_4)$. Therefore $\Gamma_{D_3}$ is freely generated by $D_3$, $D_1$, $D_4+2D_5$, $D_2+2D_5$.

\subsection{Cases 2.5, 6.5, 9.5, 10.5}
We consider the case 2.5; the others are treated essentially in the same way since they have the same Cartan pairing, apart from possible colors taking nonpositive values against every spherical root. We have in this case $\grS = \{\grs_1, \grs_2, \grs_3\}$ and $\grD = \{D_1, \ldots, D_6\}$, where we denote $\grs_1 = \alpha'$, $\grs_2 = \alpha_3$, $\grs_3 = \alpha_2+\alpha_4$, and $D_1 = D_{\alpha_3}^+$, $D_2 = D_{\alpha_3}^-$, $D_3 = D_{\alpha_2}$, $D_4 = D_{\alpha'}^-$, $D_5 = D_{\alpha_5}$, $D_6 = D_{\alpha_1}$. Then colors and spherical roots are related by the following equations:
\begin{align*} & \grs_1 = D_1 +D_2 -D_3 \\ & \grs_2 = D_1 -D_2 +D_3 -D_4 \\ & \grs_3 = -2D_3 +2D_4 -D_5 -D_6 \end{align*}
Notice that $D_\mathfrak p = D_1$ is minuscule in $\mN\grD$; in particular $\ol{Ke}$ is normal.
Moreover,
\begin{align*} & D_4 = 2D_1- (\grs_1 + \grs_2)\\ & 2D_2 +D_5 +D_6 = 2D_1 - (2\grs_2 + \grs_3)\\ & D_2 +D_3 +D_5 +D_6 = 3D_1 - (\grs_1+2\grs_2 + \grs_3)\\ & 2D_3 +D_5 +D_6 = 4D_1 - (2\grs_1+2\grs_2 + \grs_3) \end{align*}
On the other hand, if $D \in \Gamma_{D_1}$ and $D = \sum_{i=1}^6 c_i D_i$, then $c_5 = c_6$ and $c_2+c_3 = 2c_5$, therefore $\Gamma_{D_1}$ is generated by $D_1$, $D_4$, $2D_2+D_5+D_6$, $D_2+D_3+D_5+D_6$, $2D_3+D_5+D_6$.

\subsection{Case 3.9}
Notice that this case is obtained by parabolic induction from the wonderful variety considered in \S\ref{ss:CaseB}. We have in this case $\grS = \{\grs_1, \grs_2, \grs_3\}$ and $\grD = \{D_1, \ldots, D_4\}$, where we denote $\grs_1 = \alpha_2 + \alpha_3 + \alpha_4$, $\grs_2 = \alpha_2 + \alpha_3 + \alpha_5$, $\grs_3 = \alpha_3 + \alpha_4 + \alpha_5$, and $D_1 = D_{\alpha_5}$, $D_2 = D_{\alpha_4}$, $D_3 = D_{\alpha_2}$, $D_4 = D_{\alpha_1}$. In particular, the equations relating spherical roots and colors of \S\ref{ss:CaseB} become
\begin{align*} & \grs_1 = -D_1 + D_2 +D_3 -D_4,\\ & \grs_2 = D_1 - D_2 + D_3 -D_4,\\ & \grs_3 = D_1 + D_2 - D_3.
\end{align*}
Notice that $\grD_\mathfrak p(e) = \{D_1,D_2\}$, and that both $D_1$ and $D_2$ are minuscule in $\mN\grD$. In particular $\ol{Ke}$ is normal. Following \cite[Remark 4.7]{BG}, to compute the semigroup $\Gamma_{\grD_\mathfrak p(e)}$ it is enough to compute the semigroup
$$ \Gamma^\grS_{\grD_\mathfrak p(e)} = \{ \grg \in \mN\grS \; : \; \supp(\grg^+) \subset \{D_1, D_2\}\}. $$
Let $\grg \in \mN\grS$ and write $\grg = a_1\grs_1 + a_2 \grs_2 + a_3 \grs_3$; then $\grg \in \Gamma^\grS_{\grD_\mathfrak p(e)}$ if and only if $a_1+a_2 \leq a_3$. Therefore $\Gamma^\grS_{\grD_\mathfrak p(e)}$ is generated by the elements
$$ D_1 + D_2 -D_3 = \grs_3, \; 2D_1 -D_4 = \grs_2 + \grs_3, \; 2D_2 -D_4 = \grs_1 + \grs_3. $$

\subsection{Cases 5.8, 5.9}
We treat the case 5.8; the other one is identical. Notice that this case is obtained by parabolic induction from the comodel wonderful variety of cotype $\mathsf E_7$: in particular the combinatorics of colors and spherical roots is essentially the same as that of a model wonderful variety.
We have in this case $\grS = \{\grs_1, \ldots, \grs_6\}$ and $\grD = \{D_1, \ldots, D_8\}$, where we enumerate the spherical roots as $\grs_1 = \alpha_4$, $\grs_2 = \alpha_3$, $\grs_3 = \alpha_1$, $\grs_4 = \alpha_5$, $\grs_5 = \alpha_2$, $\grs_6 = \alpha_6$, and the colors as $D_1 = D_{\alpha_4}^+$, $D_2 = D_{\alpha_3}^-$, $D_3 = D_{\alpha_4}^-$, $D_4 = D_{\alpha_1}^+$, $D_5 = D_{\alpha_5}^-$, $D_6 = D_{\alpha_6}^+$, $D_7 = D_{\alpha_6}^-$, $D_8 = D_{\alpha_7}$. Then we have the following relations between colors and spherical roots:
\begin{align*} & \grs_1 = D_1 +D_3 -D_4 \\ & \grs_2 = D_2 -D_3 +D_4 -D_5 \\ & \grs_3 = -D_1 -D_2 +D_3 +D_4 -D_5 \\ & \grs_4 = -D_2 -D_3 +D_4 +D_5 -D_6 \\ & \grs_5 = -D_4 +D_5 +D_6 -D_7 \\ & \grs_6 = -D_5 +D_6 +D_7 -D_8 \end{align*}
Notice that $D_\mathfrak p = D_1$, which is a minuscule element in $\mN\grD$. Therefore $\ol{Ke}$ is normal. Let $D \in \Gamma_{D_1}$, and write $D = nD_1 - (\sum_{i=1}^6 a_i \grs_i) = \sum_{i=1}^8 c_i D_i$. Then we have the following relations: $c_8 = c_2+c_3+c_4+c_5$ and $c_2+c_5+c_7 = 2a_3$.
On the other hand, notice that
\begin{align*} & D_6 = 2D_1- (2\grs_1 + \grs_2 + \grs_4)\\ & 2D_7 = 3D_1 - (4\grs_1 +3\grs_2 + \grs_3 + 2\grs_4 +2\grs_5)\\ & D_3 + D_8 = 3D_1 - (3\grs_1 +2\grs_2 + 2\grs_4 + \grs_5 +\grs_6)\\ & D_4 +D_8 = 4D_1 - (4\grs_1 +2\grs_2 + 2\grs_4 + \grs_5 +\grs_6)\\ & D_2+D_7 +D_8 = 4D_1 - (5\grs_1 +3\grs_2 + \grs_3 +3\grs_4 + 2\grs_5 +\grs_6)\\ & D_5+D_7 +D_8 = 5D_1 - (6\grs_1 +4\grs_2 + \grs_3 +3\grs_4 + 2\grs_5 +\grs_6)\\ & 2D_2 + 2D_8 = 5D_1 - (6\grs_1 +3\grs_2 + \grs_3 +4\grs_4 + 2\grs_5 +2\grs_6)\\ & D_2 + D_5 +2D_8 = 6D_1 - (7\grs_1 +4\grs_2 + \grs_3 +4\grs_4 + 2\grs_5 +2\grs_6)\\ & 2D_5 +2D_8 = 7D_1 - (8\grs_1 +5\grs_2 + \grs_3 +4\grs_4 + 2\grs_5 +2\grs_6) \end{align*}
Therefore $\Gamma_{D_1}$ is generated by the elements in the previous list, together with $D_1$.

\subsection{Cases 7.11, 7.12}
We treat the case 7.12; the other one is identical. This case was already considered in \S\ref{ss:CaseC}, and we keep the notation introduced therein. Notice that $\grD_\mathfrak p(e) = \{D_1,D_4\}$, and that both $D_1$ and $D_4$ are minuscule in $\mN\grD$. In particular $\ol{Ke}$ is normal. Following \cite[Remark 4.7]{BG}, to compute the semigroup $\Gamma_{\grD_\mathfrak p(e)}$, we only have to compute the semigroup
$$ \Gamma^\grS_{\grD_\mathfrak p(e)} = \{ \grg \in \mN\grS \; : \; \supp(\grg^+) \subset \{D_1, D_4\}\}.
$$
Let $\grg \in \mN\grS$ and write $\grg = a_1\grs_1 + a_2 \grs_2 + a_3 \grs_3$; then $\grg \in \Gamma^\grS_{\grD_\mathfrak p(e)}$ if and only if
$$\max\{a_2,a_3\} \leq a_1 \leq a_2+a_3.$$
Therefore $\Gamma^\grS_{\grD_\mathfrak p(e)}$ is generated by the elements
$$ 2D_1 - D_5 = \grs_1+\grs_2, \quad D_1 -D_2 +D_4 = \grs_1+ \grs_3, \quad 2D_1 -D_3+D_4 = \grs_1 + \grs_2 + \grs_3. $$

\subsection{Case 8.6}
Notice that this case is a parabolic induction of the comodel wonderful variety of cotype $\mathsf E_8$, which was treated in \cite[Section 8]{BGM}. We have in this case $\grS = \{\grs_1, \ldots, \grs_7\}$ and $\grD = \{D_1, \ldots, D_9\}$, where we label the spherical roots as $\grs_1 = \alpha_4$, $\grs_2 = \alpha_5$, $\grs_3 = \alpha_8$, $\grs_4 = \alpha_3$, $\grs_5 = \alpha_6$, $\grs_6 = \alpha_2$, $\grs_7 = \alpha_7$, and the colors as $D_1 = D_{\alpha_4}^+$, $D_2 = D_{\alpha_5}^-$, $D_3 = D_{\alpha_4}^-$, $D_4 = D_{\alpha_3}^+$, $D_5 = D_{\alpha_3}^-$, $D_6 = D_{\alpha_6}^+$, $D_7 = D_{\alpha_7}^-$, $D_8 = D_{\alpha_7}^+$, $D_9 = D_{\alpha_1}$. Then we have the following relations between colors and spherical roots:
\begin{align*} & \grs_1 = D_1 +D_3 -D_4 \\ & \grs_2 = D_2 -D_3 +D_4 -D_5 \\ & \grs_3 = -D_1 -D_2 +D_3 +D_4 -D_5 \\ & \grs_4 = -D_2 -D_3 +D_4 +D_5 -D_6 \\ & \grs_5 = -D_4 +D_5 +D_6 -D_7 \\ & \grs_6 = -D_5 +D_6 +D_7 -D_8 -D_9 \\ & \grs_7 = -D_6 +D_7 +D_8 \end{align*}
Notice that $D_\mathfrak p = D_8$, which is a minuscule element in $\mN\grD$. Therefore $\ol{Ke}$ is normal. Let $D \in \Gamma_{D_8}$, and write $D = \sum_{i=1}^9 c_i D_i$. Then we have the equality $c_9 = c_2+c_3+c_4+c_5$. On the other hand, notice that
\begin{align*} &D_1 = 2D_8 - ( \sigma_2 + \sigma_3 + 2\sigma_5 + 2\sigma_7)\\ &D_7 = 3D_8 - ( 2\sigma_1 + 3\sigma_2 + 2\sigma_3 + \sigma_4 + 4\sigma_5 + 3\sigma_7)\\ &D_6 = 4D_8 - ( 2\sigma_1 + 3\sigma_2 + 2\sigma_3 + \sigma_4 + 4\sigma_5 + 4\sigma_7)\\ &D_2 +D_9 = 4D_8 - ( 3\sigma_1 + 4\sigma_2 + 3\sigma_3 + 2\sigma_4 + 6\sigma_5 + \sigma_6 + 5\sigma_7) \\ &D_3 +D_9 = 5D_8 - ( 3\sigma_1 + 5\sigma_2 + 3\sigma_3 + 2\sigma_4 + 7\sigma_5 + \sigma_6 + 6\sigma_7)\\ &D_5 +D_9 = 6D_8 - ( 4\sigma_1 + 6\sigma_2 + 4\sigma_3 + 2\sigma_4 + 8\sigma_5 + \sigma_6 + 7\sigma_7)\\ &D_4 +D_9 = 7D_8 - ( 4\sigma_1 + 6\sigma_2 + 4\sigma_3 + 2\sigma_4 + 9\sigma_5 + \sigma_6 + 8\sigma_7) \end{align*}
Therefore $\Gamma_{D_8}$ is generated by the elements in the previous list, together with $D_8$.

\subsection{Case 12.2}
In this case, we have $\grS = \{\alpha\}$ and $\grD = \{D_1, D_2, D_3\}$, where we enumerate the colors as $D_1=D_\alpha^+$, $D_2=D_\alpha^-$, $D_3=D_{\alpha'}$. There is a unique relation between colors and spherical roots, namely
\[\alpha = D_1+D_2.\]
Recall that $\mathfrak p=V(3\omega+\omega')$, thus we must have $\omega(D_\mathfrak p)=3\omega+\omega'$.
It follows that, up to an equivariant automorphism of $X$, we have either $D_\mathfrak p=2D_1+D_2+D_3$ or $D_\mathfrak p = 3D_1+D_3$. On the other hand, by construction the equivariant morphism $\phi : X \lra \mathbb P(\mathfrak p)$ defined at the beginning of the section is birational, thus by \cite[\S 2.4.3]{BL} it follows that $D_\mathfrak p=2D_1+D_2+D_3$: indeed, $\{D_2\}$ is a distinguished subset, therefore $3D_1+D_3$ is not a faithful divisor and the corresponding morphism $X \lra \mathbb P(V_{3D_1+D_3})$ is not birational. Notice that $D_\mathfrak p$ is not minuscule, since $D_\mathfrak p-\alpha =D_1+D_3$. Therefore, $\ol{Ke}$ is not normal. The semigroup $\Gamma_{D_\mathfrak p}$ is generated by $2D_1+D_2+D_3$ and $D_1+D_3$.
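This last claim can be checked directly from the relation $\alpha = D_1+D_2$: an element $D \in \Gamma_{D_\mathfrak p}$ is of the form $D = nD_\mathfrak p - k\alpha$ with $n,k \in \mN$, where the condition $k \leq n$ ensures $D \in \mN\grD$, and
\[ nD_\mathfrak p - k\alpha = (2n-k)D_1 + (n-k)D_2 + nD_3 = (n-k)\,(2D_1+D_2+D_3) + k\,(D_1+D_3), \]
so that every element of $\Gamma_{D_\mathfrak p}$ is indeed an $\mN$-linear combination of the two generators above.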
\section{Introduction} \label{sec:intro}

\paragraph{BosonSampling.}
BosonSampling (Aaronson and Arkhipov \cite{AaAr13}, see also Troyansky and Tishby \cite{TrTi96}) is the following computational task.
\vspace{0.1 in}
\begin{enumerate}
\item The input is an $n$ by $m$ complex matrix whose rows are unit vectors.
\item The output is a sample from a probability distribution on all multisets of size $n$ from $\{1,2,\dots,m\}$, where the probability of a multiset $S$ is proportional to $\mu(S)$ times the square of the absolute value of the permanent of the associated $n$ by $n$ minor. Here, if the elements of the multiset occur with multiplicities $r_1,r_2,\dots,r_k$, then $\mu(S)=1/(r_1!\,r_2!\cdots r_k!)$.
\end{enumerate}
\vspace{0.1 in}
This sampling task can be achieved by an (ideal) quantum computer. In fact, it can be realized by linear systems of $n$ noninteracting photons, which describe a restricted regime of quantum algorithms. The analogous algorithmic task with determinants instead of permanents is referred to as FermionSampling. While FermionSampling is in {\bf P}, a polynomial algorithm for BosonSampling implies that the polynomial hierarchy collapses to the third level \cite{AaAr13}. When we consider noisy quantum computers with the full apparatus of quantum fault-tolerance, BosonSampling can be achieved with negligible error.

A few years ago, Aaronson and Arkhipov proposed a way based on BosonSampling to demonstrate quantum speed-up without quantum fault-tolerance.\footnote{``Quantum speed-up,'' ``quantum supremacy'' and ``falsification of the extended Church-Turing Thesis'' are all terms used to express the hypothesis of computationally superior quantum computing.} They conjectured that, on the computational complexity side, achieving an approximate version of BosonSampling, even for a (complex) Gaussian random matrix, will be computationally hard for classical computers.
On the other hand, they conjectured that such approximate versions can be achieved when the number of bosons is not very large, but still large enough to demonstrate ``quantum supremacy.''

\paragraph{Noise sensitivity of Gaussian matrices.}
An $n\times n$ complex (real) Gaussian matrix is a matrix whose entries are independent and are chosen according to a normalized Gaussian distribution. If $X$ is an $n\times n$ matrix and $U$ is a Gaussian matrix, then the random matrix $Y=\sqrt{1-\epsilon}\cdot X+\sqrt \epsilon U$ is called an $\epsilon$-noise of $X$.
\begin{theorem} \label{thm:1} Let $X$ be an $n\times n$ random Gaussian complex (real) matrix, let $\epsilon>0$, and let $Y$ be an $\epsilon$-noise of $X$. Define
\[ f(X)= |\perm (X) |^2,~~ g(X)=\Ex \Brac{ |\perm(Y)|^2\ \vert X}. \]
Then

(i) As long as $\epsilon = \omega (\frac{1}{n})$, the correlation between $f$ and $g$ tends to zero. In other words:
\begin {align} \label{e:i} corr(f,g)=\frac {\langle f',g'\rangle}{\|f'\|_2\|g'\|_2} = o(1), \end {align}
where $f'=f-\mathbb E(f)$ and $g'=g-\mathbb E(g)$.

(ii) For $d \gg 1/\epsilon $ there is a degree $d$ polynomial function of $X$, $p_d(X)$, such that
\begin {align} \label {e:ii} \norm{p_d(X)-g(X)}_2^2 = o( \norm{g}_2^2 ). \end {align}

(iii) Moreover, the coefficients of $p_d$ can be computed in polynomial time in $n$, and $p_d$ can also be approximated to within a constant by a constant-depth circuit.
\end{theorem}
The proof of Theorem \ref{thm:1} for the real case relies on the description of the noise in terms of the Fourier-Hermite expansion. The study of noise sensitivity requires an understanding of how the $\ell_2$ norm is distributed among the degrees in the Hermite expansion. As it turns out, the contribution coming from the degree-$2k$ coefficients is $(k+1)(n!)^2$. The combinatorics involved is related to Aaronson and Arkhipov's computation of the fourth moment of $|\perm(A)|$ when $A$ is a complex Gaussian matrix.
In the complex case, which is similar but somewhat simpler, we use another set of orthogonal functions which form eigenvectors of the noise operator. In this basis the contribution of the degree $2k$ coefficients is $(n!)^2$ for all $k=0,1,\ldots, n$. \medskip \noindent We also obtain fairly concrete estimates: \begin {corollary}[of the proof] \label{c:1} For the complex case, \begin {align} \label {e:c1-1} corr(f,g)=\sqrt{ \frac{ (1-(1-\epsilon)^n)\cdot(2-\epsilon) } {\epsilon n\cdot (1+(1-\epsilon)^n) } }. \end {align} For $\epsilon = c/n$ this asymptotically gives \begin {align} \label {e:c1-2} corr (f,g) = \sqrt {\frac {2\cdot(1-e^{-c})}{c\cdot(1+e^{-c})} }. \end {align} \end {corollary} See Figure \ref{fig:1} for some values. We also note that the asymptotic values given there via formula (\ref{e:c1-2}) are quite close to the values for a small number of bosons, $n=10, 20, 30$, as given by \eqref{e:c1-1}. \paragraph {Noise sensitivity of BosonSampling.} Given an $n$ by $m$ matrix drawn at random from a (real or complex) Gaussian distribution, we can compare the distribution of BosonSampling and of ``noisy BosonSampling,'' where the latter is described by averaging over an additional $\epsilon$-noise. Theorem \ref{thm:1} suggests that for any fixed amount of noise $\epsilon >0$, noisy BosonSampling can be approximated in ${\bf P}$ and that, as long as $\epsilon=\omega(\frac1n)$, the correlation between BosonSampling and noisy BosonSampling tends to $0$. We say ``suggests'' rather than ``asserts'' because when we move from individual permanents to permanental distributions we face two issues. The first is that averaging the probability of a minor is not identical to averaging the value of the permanent-squared: the latter does not take into account the normalization term, which is the weighted sum of squares of permanents for all $n$ by $n$ minors.\footnote{When the rows of the matrix are orthonormal, the weighted sum of the squares of all the permanents is 1.
In the more general case we consider, it is given by the Cauchy-Binet theorem for permanents \cite {Min78,HCB88}.} However, we can expect that approximating the normalization term itself is in ${\bf P}$ for a fixed amount of noise, and that when $m$ is not too small w.r.t. $n$ the normalization term will be highly concentrated, so it will have a small effect. The second issue is that when $m$ is not too large w.r.t. $n$, a typical permanent for BosonSampling will have repeated columns, and this will require an (interesting) extension of our results, which is yet to be done. When $m$ is large compared to $n^2$ the BosonSampling distribution is mainly supported on minors without repeated columns. We also note that Theorem \ref {thm:1} and its consequences refer to correlation between distributions rather than to the variational ($\ell_1$) distance that Aaronson and Arkhipov discuss. We expect that when the amount of noise is $C/n$, then $f(x)$ and $g(x)$ are bounded away in the $\ell_1$-distance by a constant depending on $C$. (This is suggested but not implied by the correlation estimate of part (i) of Theorem \ref{thm:1}.) We also expect that for every $n$ and $m$ ($m \ge n$, say), when the amount of noise is $C/n$, the noisy BosonSampling distribution is bounded away from the noiseless BosonSampling distribution in the $\ell_1$-distance. While not proven here, we also expect that our results can be extended in the following three directions: \begin {enumerate} \item The results apply to other forms of noise, such as a deletion of $k$ of our $n$ bosons at random, modeling the noise based on the ``gates,'' namely the physical operations needed for the implementation, or noise representing ``incomplete interference.'' \item The results about noisy permanents extend also to the case of repeated columns. \item Noise sensitivity extends to describe the sensitivity of the distribution under small perturbations of the noise parameters.
\end {enumerate} \begin{figure} \centering \includegraphics[scale=0.9]{tab} \caption{The correlation between the noisy and ideal values of the BosonSampling coefficients (for terms without repeated columns), for several values of noise.} \label{fig:1} \end{figure} All in all, Theorem \ref {thm:1} raises the question of whether, without quantum fault-tolerance, approximate BosonSampling in Aaronson and Arkhipov's sense is realistic, and whether realistically modeled noisy BosonSampling manifests computational-complexity hardness. Noise sensitivity for squares of permanents and for BosonSampling may be manifested for realistic levels of noise even for small values of $n$ and $m$ (say, 10 bosons with 20 modes). To this end computer simulations can give a good picture, and, of course, experimental efforts for implementing BosonSampling for three, four, five, and six bosons may also give a good picture of how things scale. This is discussed further in Appendix 2. Studying noise sensitivity of other quantum ``subroutines,'' such as FourierSampling, processes for creating anyons of various types, and tensor networks, is an interesting subject for further study. We note also that there are various results in the literature, both in the study of controlled quantum systems \cite {KKK14} and in computational complexity \cite {BL12,MMV13}, demonstrating that ``robustness'' and ``noise stability'' lead to computational feasibility.\footnote {As the PCP theorem demonstrates, this is not {\em always} the case.} The structure of the paper is as follows: Section \ref {s:back} gives further background on BosonSampling and noise sensitivity. The proof of Theorem \ref {thm:1} for complex Gaussian matrices is given in Section \ref {s:complex}, and the proof for the real case is deferred to the appendix in Section \ref{s:real}. Section \ref {s:conc} contains some further discussion interpreting our results, and the appendices elaborate on several extensions and related issues.
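The closed forms \eqref{e:c1-1} and \eqref{e:c1-2} are easy to evaluate numerically, which is how values such as those in Figure \ref{fig:1} can be reproduced; a minimal sketch (function names are ours):

```python
import math

def corr_exact(n, eps):
    """Exact correlation (e:c1-1) between |perm|^2 and its eps-noisy average."""
    q = (1 - eps) ** n
    return math.sqrt((1 - q) * (2 - eps) / (eps * n * (1 + q)))

def corr_asymptotic(c):
    """Asymptotic value (e:c1-2) for eps = c/n as n grows."""
    return math.sqrt(2 * (1 - math.exp(-c)) / (c * (1 + math.exp(-c))))

for n in (10, 20, 30):
    print(n, corr_exact(n, 1.0 / n))   # already close to corr_asymptotic(1.0)
print(corr_asymptotic(1.0))
```

As a sanity check, $corr_asymptotic(c) \to 1$ as $c \to 0$ (no noise), and the correlation decreases as $c$ grows.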
\section {Background} \label{s:back} \subsection {Noise sensitivity} The study of noise sensitivity of Boolean functions was introduced by Benjamini, Kalai, and Schramm \cite {BKS99}, see also \cite {GaSt14}. The setting of functions on $\R^n$ equipped with the Gaussian probability distribution was studied by Kindler and O'Donnell \cite {KiOd12}, see also Ledoux \cite {Led96} and O'Donnell \cite {O'Do14}. Let $h_j(x)$ be the normalized Hermite polynomial of degree $j$. For $d=(d_1,\ldots,d_n)$ we can define a multivariate Hermite polynomial $h_d(X)=\prod_{i=1}^n h_{d_i}(x_i)$, and the set of such polynomials is an orthonormal basis for $L_2(\R^n)$. Let $f$ be a function from $\R^n$ to $\R$. Let $\epsilon>0$ be a noise parameter and let $\rho=\sqrt{1-\epsilon}$. We define $T_\rho(f)(x)$ to be the expected value of $f(y)$, where $y=\sqrt {1-\epsilon}x +\sqrt \epsilon u$ and $u$ is a Gaussian random vector in $\R^n$ whose coordinates have variance 1. Consider the expansion of $f$ in terms of Hermite polynomials \begin{align} f(x) = \sum _{\beta \in \N^n}\hat f(\beta ) \prod_{i=1}^n h_{\beta_i}(x_i). \end{align} The values $\hat f(\beta ) $ are called the Hermite coefficients of $f$. Let $| \beta |=\beta_1+\cdots +\beta_n$. The following description of the noise operator in terms of the Hermite expansion is well known: \begin{align} \label{e:nf} T_\rho (f) = \sum _{\beta \in \N^n}\hat f(\beta )\rho^{|\beta|} \prod_{i=1}^n h_{\beta_i}(x_i). \end {align} A class $\cal F$ of functions with mean zero is called (uniformly) noise-stable if there is a function $s(\rho)$ that tends to zero with $\epsilon$ such that for every function $f$ in the class, \[ \| T_\rho(f) - f\|_2^2 \le s(\rho)\|f\|_2^2. \] A sequence of functions $(f_n)$ (with mean zero) is asymptotically noise-sensitive if for every $\epsilon>0$ \[ \|T_\rho(f_n)\|_2^2 = o( \|f_n\|_2^2). \] These notions are mainly applied to characteristic functions of events (after subtracting their mean value).
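The identity \eqref{e:nf} can be checked numerically in one variable: applying $T_\rho$ to the (probabilists') Hermite polynomial $He_j$ multiplies it by $\rho^j$, and the normalization $h_j = He_j/\sqrt{j!}$ does not change this eigenvalue. A sketch using Gauss--Hermite quadrature (our function names):

```python
import numpy as np
from numpy.polynomial import hermite_e as He

def T_rho_on_He(j, rho, x, deg=40):
    """E[He_j(rho*x + sqrt(1-rho^2)*u)] for u ~ N(0,1), via quadrature."""
    u, w = He.hermegauss(deg)            # nodes/weights for weight exp(-u^2/2)
    y = rho * x + np.sqrt(1 - rho**2) * u
    return np.dot(w, He.hermeval(y, [0] * j + [1])) / np.sqrt(2 * np.pi)

rho, x = 0.8, 0.7
for j in range(6):
    lhs = T_rho_on_He(j, rho, x)
    rhs = rho**j * He.hermeval(x, [0] * j + [1])
    assert abs(lhs - rhs) < 1e-10        # T_rho He_j = rho^j He_j
print("noise operator eigenvalues verified")
```

Since the integrand is a polynomial of degree at most $j$ in $u$, the quadrature is exact up to floating-point error.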
There are several issues arising when we move to general functions. In particular, we can consider these notions w.r.t. other norms. Noise-stability is equivalent to the assertion that most of the $\ell_2$-norm of every $f \in {\cal F}$ is given by low-degree Hermite coefficients. Noise sensitivity is equivalent to the assertion that the contribution of Hermite coefficients of low degrees is $o(\|f\|_2^2)$. {\bf Example:} Let $f$ be a function of $n^2$ (real) Gaussian variables describing the entries of an $n$ by $n$ matrix, given by the permanent of the matrix. In this case the $n!$-term expansion of the permanent is its Hermite expansion. This gives that the expected value of the permanent squared is $n!$. The permanent is thus very ``noise-sensitive.'' (The noisy permanent is simply the permanent multiplied by $\rho^n$. In this example, while the two are far apart, the permanent can be recovered perfectly from the noisy permanent.) In this paper we study a closely related (but more interesting) example where the function is the {\it square} of the permanent. {\bf Remark:} Questions regarding noise sensitivity of various invariants of random matrices were raised by Itai Benjamini in the late 90s; see \cite{Kal00}, Section 3.5.11. Kalai and Zeitouni proved \cite {KaZe07} that the event that the largest eigenvalue of an $n$ by $n$ Gaussian matrix is larger than (and also the event that it is smaller than) its median value is noise sensitive. \subsection {BosonSampling and Noisy Gaussian BosonSampling} Quantum computers allow sampling from a larger class of probability distributions than classical randomized computers. Denote by QSAMPLE the class of probability distributions that quantum computers can sample in polynomial time. Aaronson and Arkhipov \cite {AaAr13}, and Bremner, Jozsa, and Shepherd \cite {BJS11}, proved that if QSAMPLE can be performed by classical computers then the computational-complexity polynomial hierarchy (PH, for short) collapses.
Aaronson and Arkhipov's result already applies to BosonSampling. These important computational-complexity results follow and sharpen an older result by Terhal and DiVincenzo \cite {TeDi04}. The main purpose of Aaronson and Arkhipov \cite {AaAr13} was to extend these hardness results to account for the fact that implementations of quantum evolutions are noisy. The novel aspect of the approach of \cite {AaAr13} was that they did not attempt to model the noisy evolution leading to the bosonic state, but rather made an assumption on the target state, namely that it is close in variation distance to the ideal state. They also considered the case where the input matrix is Gaussian, both because it is easier to create such bosonic states experimentally, and because of computational complexity considerations. They conjecture that approximate BosonSampling for random Gaussian input is already computationally hard for classical computers (namely, it already implies PH collapse), and show how this conjecture can be derived from two other conjectures: a reasonable conjecture on the distribution of the permanents of random Gaussian matrices, together with the conjecture that it is \#P-hard to approximate the permanent of a random Gaussian complex matrix. Aaronson and Arkhipov proposed BosonSampling as a way to provide strong experimental evidence that the ``extended Church--Turing hypothesis'' is false. Their hope is that current experimental methods not involving quantum fault-tolerance may enable performing approximate BosonSampling for Gaussian matrices for 10--30 bosons (``but not 1000 bosons''). This range allows (exceedingly difficult) classical simulations, and thus the way quantum and classical computational efforts scale could be examined.
``If that can be done,'' argues Aaronson, ``it becomes harder for QC skeptics to maintain that some problem of principle would inevitably prevent scaling to 50 or 100 photons.'' \subsection {Combinatorics of permutations and moments of permanents} A beautiful result by Aaronson and Arkhipov asserts that for $n$ by $n$ complex Gaussian matrices\footnote{Aaronson and Arkhipov proved that the same formula holds for determinants and also studied higher moments.} \begin{align} \label {e:aa} \Ex\left[|\perm (A)|^4\right] = (n+1)(n!)^2. \end{align} The proof of the complex case of our main theorem refines and re-proves this result. It turns out that a combinatorial argument similar to the one used by Aaronson and Arkhipov is needed in the case where $A$ is a real Gaussian matrix, to determine the contribution of the top-degree Hermite coefficients of $|\perm (A)|^2$, and this can then be used to compute the contributions of all other degrees. \section {Noise sensitivity - complex Gaussian matrices} \label {s:complex} In this section we analyse the permanent of an $n$ by $n$ complex Gaussian matrix. We begin with a few elementary definitions and observations. \medskip We equip $\C^n$ with the product measure where in each coordinate we have a Gaussian normal distribution with mean 0 and variance 1. We call a random vector $z\in\C^n$ which is distributed according to this measure a normal (complex) Gaussian vector. The measure also defines a natural inner-product structure on the space of complex valued functions on $\C^n$. \paragraph{Noise operator and correlated pairs.} Let $\epsilon>0$ be a noise parameter, let $\rho=\sqrt{1-\epsilon}$, and let $u$ be an independent Gaussian normal vector in $\C^n$. For any $z\in\C^n$, we say that $y=\sqrt {1-\epsilon}\cdot z +\sqrt \epsilon \cdot u$ is an $\epsilon$-noise of $z$. If $z$ is itself a normal Gaussian vector independent of $u$, we say that $y$ and $z$ are a $\rho$-correlated pair.
For a function $f:\C^n\to\C$, we define the noise operator $T_\rho$ by \[T_\rho(f)(z)=\Ex[f(y)],\] where $y$ is an $\epsilon$-noise of $z$. \paragraph{An orthonormal set.} In order to study the noise sensitivity of $\perm$, it is useful to use the following set of orthonormal functions, related to the real Hermite basis. \begin {proposition} \label{prop:basic-complex} The functions $1$, $z$, $\bar z$ and $h_2(z)=z \bar z-1$ form an orthonormal set of functions. Moreover, these functions are all eigenvectors of $T_\rho$, with eigenvalues $1,\rho,\rho$ and $\rho^2$ respectively. \end {proposition} \newcommand{\barz}{{\bar z}} \newcommand{\expect}{\mathbb E} \paragraph{Proof.} The function $1$ obviously has norm $1$, and the functions $z$ and $\bar z$ have norm $1$ since $z$ (and therefore $\bar z$) has variance $1$. Also note that since $a=Re(z)$ and $b=Im(z)$ are independent real normal variables with expectation $0$ and variance $\frac12$, \begin{align*} ||z\barz||_2^2= \expect[|z|^4]=\expect[(a^2+b^2)^2]=\expect[a^4+b^4+2a^2b^2]=\frac34+\frac34+\frac12=2. \end{align*} Hence the norm of $h_2(z)$ is given by \begin{align*} ||z\cdot\bar z-1||_2^2=||z\bar z||_2^2 +1-2\langle z\barz, 1 \rangle = ||z\bar z||_2^2 +1-2\langle z,z \rangle = 2+1-2=1. \end{align*} It is simple to verify that $1$, $z$, and $\barz$ are also all orthogonal to each other (this follows since the Gaussian distribution is symmetric around zero), and that $z\barz-1$ is orthogonal to $1$. Also, $\langle z\barz-1,z \rangle=\expect[z\barz^2 -\barz]$, and the expectations of both terms are again zero as they are odd functions of $z$. It is left to show that the above functions are eigenvectors of $T_\rho$. This is obvious for $1$. For $f(z)=z$, $T_\rho(f)(z)=\expect[\rho z+\sqrt{1-\rho^2}u]=\rho z$, and similarly for $\barz$.
Also, \begin{align*} T_\rho(h_2)(z)=&\expect[(\rho z+\sqrt{1-\rho^2}u )(\rho \barz +\sqrt{1-\rho^2}\bar u )]-1 \\=&\rho^2z\barz+ \rho\sqrt{1-\rho^2}\, \expect[z\bar u + \barz u] +\expect[(1-\rho^2)u\bar u] -1 \\= &\rho^2z\barz+(1-\rho^2) -1=\rho^2\cdot h_2(z). \end{align*} \endproof \newcommand{\comp}[1]{{#1}^c} \paragraph{Permanents.} Let $\mathbf z=\{z_{i,j}\}_{i,j=1,\ldots,n}$ be an $n\times n$ matrix of independent complex Gaussians, and let $\perm(\mathbf z)= \sum_{\sigma\in S_n} \prod_{i=1}^n z_{i,\sigma(i)}$ be the permanent function. We also let \[f(\mathbf z)= |\perm (\mathbf z)|^2 = \sum_{\sigma,\tau\in S_n} \prod_{i=1}^n z_{i,\sigma(i)}\barz_{i,\tau(i)}.\] In order to study $T_\rho(f)$, consider one term in the formula above that corresponds to the permutations $\sigma$ and $\tau$, let $T$ be the set of indices $i$ on which they agree, and let $\comp T=[n]\setminus T$ be its complement. We can write such a term as \begin{align*} \prod_{i=1}^n z_{i,\sigma(i)}\barz_{i,\tau(i)}&=\prod_{i\in T}(z_{i,\sigma(i)}\barz_{i,\sigma(i)}) \cdot \prod_{i\in \comp{T}}z_{i,\sigma(i)}\barz_{i,\tau(i)} =\prod_{i\in T}(1+h_2(z_{i,\sigma(i)})) \prod_{i\in \comp{T}}z_{i,\sigma(i)}\barz_{i,\tau(i)} \\ &=\sum_{R\subseteq T}\left[\prod_{i\in T\setminus R}h_2(z_{i,\sigma(i)}) \prod_{i\in \comp T}z_{i,\sigma(i)}\barz_{i,\tau(i)} \right]. \end{align*} \paragraph{The degree of a term.} For each product in the sum above we assign a degree -- we add $1$ to the degree for each multiplicand of the form $z_{i,j}$ or $\barz_{i,j}$, and $2$ for each multiplicand of the form $h_2(z_{i,j})$. The degree of a term $\prod_{i\in T\setminus R}h_2(z_{i,\sigma(i)}) \prod_{i\in \comp T}z_{i,\sigma(i)}\barz_{i,\tau(i)}$ is thus $2(|T|-|R|)+2(n-|T|)=2(n-|R|)$.
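Proposition \ref{prop:basic-complex} is also easy to confirm numerically. In the sketch below (our construction), a standard complex Gaussian $u$ has real and imaginary parts of variance $\tfrac12$, and the expectation over $u$ is evaluated exactly for polynomials by a two-dimensional Gauss--Hermite quadrature:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

t, w = hermegauss(30)
T1, T2 = np.meshgrid(t, t)
W = np.outer(w, w) / (2 * np.pi)          # quadrature weights for E over u
U = (T1 + 1j * T2) / np.sqrt(2)           # u ~ CN(0,1): Re, Im ~ N(0, 1/2)

def noisy_mean(f, z, rho):
    """E[f(y)] where y = rho*z + sqrt(1-rho^2)*u is a noisy copy of z."""
    y = rho * z + np.sqrt(1 - rho**2) * U
    return np.sum(W * f(y))

h2 = lambda z: z * np.conj(z) - 1
z0, rho = 0.4 + 0.3j, 0.9
assert abs(noisy_mean(lambda z: z, z0, rho) - rho * z0) < 1e-10
assert abs(noisy_mean(h2, z0, rho) - rho**2 * h2(z0)) < 1e-10
print("eigenvalues rho and rho^2 confirmed")
```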
\paragraph{The weight of $f$ on terms of degree $2(n-k)$.} The $2(n-k)$-degree part of $f$ is obtained by summing, over all sets $R\subseteq [n]$ of size $k$, the terms as above obtained from pairs $(\sigma,\tau)$ of permutations which agree on the indices in $R$ (and possibly on other indices). It is useful to further partition these terms according to the image $R'$ of $R$ under $\sigma$ and $\tau$ -- note that there are $k!$ ways to fix the values of $\sigma$ and $\tau$ on $R$ given $R'$. We denote by $\sigma',\tau'$ the restrictions of $\sigma$ and $\tau$ respectively to the complement of $R$, namely these are one-to-one functions from $\comp R$ to $[n]\setminus {R'}$. Also, let $S(\sigma',\tau')\subseteq \comp R$ be the set of indices on which they agree. So the degree $2(n-k)$ part of $f$ is given by \begin{equation}\label{eq:poop} f^{=2(n-k)}=\sum_{|R|,|R'|=k} \left( k!\cdot\left[\sum_{\sigma',\tau'}\prod_{i\in S(\sigma',\tau')} h_2(z_{i,\sigma'(i)}) \prod_{i\in \comp R\setminus S(\sigma',\tau')} z_{i,\sigma'(i)}\barz_{i,\tau'(i)} \right]\right). \end{equation} Note that in the inner sum above no two summands are the same ($R$ and $R'$, as well as $\sigma'$ and $\tau'$, can be inferred from looking at such a summand). Hence, since these summands form an orthonormal set, we have that the weight of $f$ on its degree $2(n-k)$ terms is \begin{equation}\label{eq:2} ||f^{=2(n-k)}||_2^2={n \choose k}^2\cdot(k!)^2\cdot((n-k)!)^2=(n!)^2, \end{equation} where the ${n \choose k}^2$ term accounts for the possible values of $R$ and $R'$, $(k!)^2$ comes from the coefficient of each summand in~\eqref{eq:poop}, and $((n-k)!)^2$ is the number of choices for $\sigma'$ and $\tau'$. \medskip \noindent {\bf Remark:} Summing over all values of $k$, $0 \le k \le n$, we retrieve Aaronson and Arkhipov's formula (\ref {e:aa}).
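As a sanity check on \eqref{eq:2} (a verification we add here), consider $n=2$, where everything can be computed by hand:

```latex
\[
\perm(\mathbf z)=z_{11}z_{22}+z_{12}z_{21}=:A+B,
\]
with $A,B$ independent, each a product of two independent standard complex
Gaussians, so that $\Ex|A|^2=\Ex|B|^2=1$, $\Ex|A|^4=\Ex|B|^4=(\Ex|z|^4)^2=4$,
and $\Ex[A^2]=\Ex[B^2]=0$. Hence
\[
\Ex\bigl[|\perm(\mathbf z)|^4\bigr]
  =\Ex|A|^4+\Ex|B|^4+4\,\Ex|A|^2\,\Ex|B|^2=4+4+4=12=(2+1)\,(2!)^2,
\]
in agreement with \eqref{e:aa}, while the weight $(2!)^2=4$ appears once in
each of the degrees $4$, $2$ and $0$ (the degree-$0$ weight being
$(\Ex[|\perm(\mathbf z)|^2])^2=(2!)^2$).
```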
\subsubsection*{Proof of Theorem \ref {thm:1} for the complex case} Let $f,g,f'$ and $g'$ be as in Theorem~\ref{thm:1}, and recall that the correlation $corr(f,g)$ between $f$ and $g$ is given by $corr(f,g)= \langle f',g'\rangle/\|f'\|_2\|g'\|_2$. Also note that by the definition of $T_\rho$, $g=T_\rho(f)$ for $\rho=\sqrt{1-\epsilon}$. \paragraph {The correlation diminishes when the noise is $\omega(1)/n$.} It follows from Proposition~\ref{prop:basic-complex} that the terms of degree $2m$ are eigenvectors of the operator $T_\rho$ with eigenvalue $\rho^{2m}$. We will use this observation together with \eqref{eq:2} to show that $corr(g,f)=o(1)$ when $\epsilon = \omega(1)/n$. Indeed, denoting $W_{2m}(n)=||f^{=2m}||_2^2$, we have $$\|f'\|_2=\parenth{ \sum_{m>0} W_{2m}(n) }^{1/2},$$ $$\|g'\|_2=\|T_\rho(f')\|_2=\parenth{ \sum_{m>0} W_{2m}(n)\rho^{4m} }^{1/2},$$ $$\inner{f',g'}=\sum_{m>0} W_{2m}(n)\rho^{2m}.$$ It follows that \begin{equation} \label{eq:3} corr(f,g) = \frac{\sum_{m=1}^n \rho^{2m}}{ (\sum_{m=1}^n 1)^{1/2} (\sum_{m=1}^n\rho^{4m})^{1/2}}\;. \end{equation} When $\epsilon=\omega(1)/n$, $\rho^2=1-\epsilon=1-\omega(1)/n$, and thus the numerator in \eqref{eq:3} is of order $\Theta(1/\epsilon)$ and the denominator is of order $\Theta\parenth{\sqrt {n/\epsilon} }$. The correlation between $f$ and $g$ in this case is therefore of order $\Theta\parenth{1/\sqrt {\epsilon n}}$, which indeed tends to zero when $\epsilon=\omega(1)/n$. \paragraph{Proof of Corollary \ref {c:1}.} The corollary is obtained from \eqref{eq:3} by using the formula for the sum of a geometric series and the approximation $(1-\frac c n)^n\sim \exp(-c)$. \paragraph {Approximating the noisy permanent for a constant noise parameter.} Note that the weight of the noisy permanent function, $g$, on terms of degree $> d$, is bounded by $\rho^d\cdot ||g||_2^2$. Therefore $g$ can be approximated to within a $\rho^d\cdot ||g||_2^2$ distance by truncating the terms of degree above $d$.
It follows that when the noise parameter $\epsilon$ is constant, $g$ can be approximated to within any desired constant error by a linear combination of terms, each of degree at most $d$. Moreover, as the coefficient of each such term can be easily computed in polynomial time, and since the number of such coefficients is a polynomial function of $n$, this implies that $g$ can be approximated in polynomial time up to any desired (constant) precision. This approximation of $g$ can even be achieved by a constant-depth circuit: this follows since each term, being of constant degree, can be approximated to within polynomially small error in constant depth, as it only requires taking $O(\log n)$ bits into account (it is actually possible to only do computations over a constant number of bits here by first applying some noise to the input variables). Then one can approximate the sum of these terms by simply summing over a sample of them, using binning to separately sample terms of different orders of magnitude. We note that this argument is very general and only uses the fact that $g$ can be approximated by an explicit constant-degree polynomial. \endproof \subsection {Discussion} \paragraph {Sharpness of the results.} Since our (Hermite-like) expansion of $|\perm(X)|^2$ is supported on degrees at most $2n$, we do have noise stability when the level of noise is $o(1/n)$. There is also a recent result by Alex Arkhipov \cite {Ar14} that for certain general error-models, if the error per photon is $o(1/n)$, ``you'll sample from something that's close in variation distance to the ideal distribution.'' (A careful comparison between Arkhipov's result and ours shows that in our notions it applies when $\epsilon = o(1/n^2)$, leaving an interesting interval of noise rates to be further explored.)
Independently of our work, Scott Aaronson \cite {Aa14} has a recent unpublished (partially heuristic) result which shows that part (ii) of Theorem \ref {thm:1} is sharp for a different but related noise model: ``Suppose you do a BosonSampling experiment with $n$ photons, suppose that $k$ out of the $n$ are randomly lost on their way through the beamsplitter network (you don't know which ones), and suppose that this is the only source of error. Then you get a probability distribution that's hard to simulate to within accuracy $\Theta (1/n^k)$ in variation distance, unless you can approximate the permanents of Gaussian matrices in $BPSUBEXP^{NP}$.'' \paragraph {Determinants.} We expect that our results apply to determinants and thus to FermionSampling, and it would be interesting to work out the details. Perhaps a message to be learned is that the immense computational complexity gap between determinants and permanents is not manifested in the realistic behavior of fermions and bosons.\footnote{This is related to comments made by Naftali Tishby in the mid 90s. \cite{TrTi96}, however, proposes a physical distinction between permanents and determinants in terms of the intrinsic variance of the measurement.} Noise sensitivity gives an explanation why. \paragraph {Permanents with repeated columns.} For the study of noise sensitivity of BosonSampling (when $m$ is not very large compared to $n$) we will need to extend our results to permanents of complex Gaussian matrices with repeated columns. This looks very interesting and will hopefully be studied in a future work. Given an $n$ by $k$ matrix $A=(z_{ij})_{1 \le i \le n, 1 \le j \le k}$, and $k$ integers $n_1,n_2,\dots,n_k$, summing to $n$, we can let $A'$ be the $n$ by $n$ matrix obtained by taking $n_i$ copies of column $i$, and define $f(A)= (1/n_1!n_2!\cdots n_k! )\perm (A' A'^* )$.
It is possible to expand $f(A)$ in a similar way to our computation above, where only the combinatorics becomes somewhat more involved (and explicit formulas are not available). Of course, repeated columns are not relevant for FermionSampling. \paragraph{BosonSampling: the normalization term.} Given an $n$ by $m$ matrix $A=(z_{ij})_{1 \le i \le n, 1 \le j \le m}$, we now consider the normalization term, $h$, namely the $\mu(S)$-weighted sum of the squared absolute values of the permanents of all $n$ by $n$ minors. By the Cauchy-Binet formula for permanents \cite {Min78,HCB88}, \begin {align*} h(A)= \perm(A A^*)= \sum_{\sigma\in S_n}\sum_{k_1,k_2,\ldots,k_n\in [m]} \prod_{i=1}^n z_{i,k_i}\barz_{\sigma(i),k_i}. \end {align*} Again, it is possible to expand $h(A)$ in a similar way to our computation above. Of course, the (even more familiar) Cauchy-Binet theorem for determinants applies (in our setting) to the normalization term for FermionSampling. \paragraph {Noise sensitivity for general polynomials in $z_i$ and $\bar z_i$.} It will be interesting to extend our framework and study noise sensitivity for general polynomials in $z_i$ and $\bar z_i$, or even just for absolute values of polynomials, parallel to \cite {BKS99} and \cite {KiOd12}. (This will be needed, e.g., for extensions of our results to higher moments of the complex Gaussian determinant and permanent.) \paragraph{The Bernoulli case.} It will be interesting to prove similar results for other models of random matrices. A case of interest is when the entries of the matrix are i.i.d. Bernoulli random variables. To extend our results we first need to compute (or at least estimate) the expectation of $|\perm(X)|^4$. This is known for the determinant \cite {Tur55} (and is more involved than the Gaussian case). \section {Conclusion} \label{s:conc} Theorem \ref {thm:1} and its anticipated extensions suggest the following picture: First, for a constant noise level, the noisy version of BosonSampling is in {\bf P}.
In fact, noisy BosonSampling can be approximated by bounded-depth circuits. Second, when the level of noise is above $1/n$, we cannot expect robust experimental outcomes at all when we attempt to approximate Gaussian bosonic states. And third, when we consider perturbations of our Gaussian noise model, the noisy BosonSampling distribution will depend strongly on the detailed parameters describing the noise itself, so that for robust outcomes, an exponential-size input will be required to describe the noise. The relevance of noise sensitivity may extend to more general quantum systems, and this is an interesting topic for further research. \subsection*{Acknowledgment} We would like to thank Scott Aaronson, Alex Arkhipov, Michael Ben-Or, Michael Geller, Greg Kuperberg, Nadav Katz, Elchanan Mossel, and John Sidles for helpful discussions.
\section{Introduction} Certain classes of neutron stars are expected to be excellent sources of continuous gravitational waves (GWs). Anomalous X-ray pulsars and soft gamma repeaters, now widely recognised as neutron stars of the highly-magnetised variety \cite[`magnetars',][]{td92}, may be deformed by magnetic stresses to the point that the resulting GW luminosity could be detected by currently operating ground-based interferometers \citep{cf53,goos72,mast11}. Neutron stars accreting through Roche-lobe overflow are another strong candidate, since there remains an observational puzzle as to why their spin frequencies seem to be capped at $\lesssim 700$ Hz \citep{pat17}; such systems would be expected to spin-up indefinitely unless stalled by a sufficiently large, spin-dependent counter-torque. It has been argued that GW radiation-reaction may be the key agent that limits the rotational growth \cite[\citealt{bild98,git19}; though see also][]{pat12,glamp21}. Regardless, because the GW strain scales quadratically with the spin frequency, some of the most promising sources for long-term emissions are those with millisecond periods, where the resulting GW frequency, $f_{\text{GW}}$, lies in the $\sim$kHz band \citep{thorne80,ligo21}. For radiation at these frequencies, gravitational or otherwise, wave-optical effects are expected to come into play when encountering solar-mass bodies along or near the line of sight \citep{oh74,nak99,mac04}. Diffractive effects in particular are important for $M_{L} \lesssim 10^{2} M_{\odot} (f_{\text{GW}} / \text{kHz})^{-1}$ \citep{tak03,tak05}, when the wavelength of the source exceeds the Schwarzschild radius of the (micro-)lens. For macrolenses consisting of $n \gtrsim 10^{2}$ stars, we then enter into an intermediate arena between the heavily diffracted and eikonal regimes, where the overall amplifications may be non-negligible and `beat' patterns can emerge at the interferometer due to time delays \citep{christ18,jung19}. 
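The quoted diffraction condition can be checked directly: at $f_{\text{GW}} = 1$ kHz the gravitational wavelength is comparable to the Schwarzschild radius of a $\sim 10^{2} M_{\odot}$ lens (a quick numerical sketch; constants in SI units):

```python
import math

G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30   # SI units

def schwarzschild_radius(M):
    return 2 * G * M / c**2

f_gw = 1.0e3                                  # Hz
wavelength = c / f_gw                         # ~300 km
r_s = schwarzschild_radius(100 * M_sun)       # ~295 km
print(wavelength / 1e3, r_s / 1e3)            # both of order 3e2 km
```

For heavier lenses the Schwarzschild radius exceeds the wavelength and the eikonal (geometric-optics) description becomes adequate.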
Although impressive advances have been made in the numerical implementation of ray-shooting codes in geometric optics \citep{lew10,lew20}, such tools are not applicable in this case. Wave-optical lensing for \emph{continuous} GWs may be especially impactful because detections would likely take many months of persistent monitoring using phase-coherent strategies \citep{lasky15,derg21,suv21,sold21}, and the line of sight may cross a number of interference fringes during this time. In this respect, \cite{liao19} have demonstrated that diffraction and interference effects may, albeit rarely, show up in continuous GW signals, and that the amplitude and phase modulations arising due to lensing can non-negligibly affect parameter estimation \cite[see also][]{dep01,marc20,meena20}. However, these authors concentrated on the case of a single point-mass lens, where a closed-form expression for the lensing flux is available \citep{nak99}, thereby allowing for analytically tractable computations. While lensing by multiple stars is highly unlikely for any given \emph{Galactic} source \citep{pac86b,jow20}, it is nevertheless worthwhile to re-examine the situation for different macro configurations. Generally speaking, the main challenge in performing genuinely wave-optical calculations is the oscillatory nature of the relevant Fresnel-Kirchhoff diffraction integral \citep{peters74}; given a phase profile $\varphi$, the physical optics calculation involves integrating the exponential $e^{i \varphi}$ over the aperture, which is infinitely oscillatory owing to Euler's formula. \cite{feld19}, following a mathematical program outlined by \cite{witten11}, have recently developed a method based on the application of Picard-Lefschetz (PL) theory that is useful in this regard \cite[see also][for a catalogue of other methods]{guo20}. In essence, the PL calculation involves analytically continuing the integrand into the complex plane.
One then builds a set of special contours (`Lefschetz thimbles') which ultimately form a closed loop, so that Cauchy's theorem may be applied. The actual Fresnel-Kirchhoff integral of interest can then be evaluated by calculating instead some simpler, non-oscillatory integrals. Each Lefschetz thimble is associated to a point of stationary phase (i.e., an image) of the original integrand, thereby connecting back to the more familiar geometric optics calculation. The main novelty of this paper is that, by adopting the PL methodology described by \cite{feld19,feld20a,feld20b}, we are able to perform wave-optical calculations for $\sim$kHz GWs lensed by clusters consisting of $\gtrsim10^{2}$ stars. Such a scenario may be relevant when observing neutron stars located behind particularly dense regions of the Galaxy with the next generation of detectors \citep{pac86b,liao19}. GWs, unlike light, also tend to propagate through matter without scattering and absorption, so that lensing remains nominally important even through regions that are opaque in the optical. This paper is organised as follows. In Section 2 we review the theory of continuous GW generation by deformed neutron stars, outlining the potential impact of wave-optical lensing. Section 3 is then devoted to the derivation of the relevant Fresnel-Kirchhoff integral in the thin-lens approximation, and the setting up of microlens distributions. The numerical techniques, based on PL theory, are described in Section 4, with the results given in Section 5. Some discussion is presented in Section 6. \section{Continuous gravitational waves from neutron stars} GWs emitted by a non-axisymmetric system are polarised according to how momentum (current multipoles) and energy (mass multipoles) are distributed within the host body \citep{thorne80}. For rapidly rotating neutron stars, several mechanisms can organically induce large momentum or energy fluxes within the stellar interior. 
For instance, a sufficiently strong magnetic field introduces density asymmetries within the core and outer layers \citep{cf53,goos72}, and generates a time-dependent mass quadrupole moment, conventionally written $\ddot{I}_{22} \propto \nu_{\star}^2 e^{2 i \nu_{\star} t} I_{0} \varepsilon$ for spin frequency $\nu_{\star}$, moment of inertia $I_{0}$, and triaxial ellipticity $\varepsilon$ \citep{lasky15}. Mode oscillations, possibly driven to large amplitudes through secular instabilities \citep{and99}, are another often-considered possibility for generating multipoles of either the current or mass variety \citep{owen10,stergbook}. For concreteness we focus on magnetic deformations, where GWs are emitted at twice the rotational frequency, $f_{\text{GW}} = 2 \nu_{\star}$, and carry an intrinsic amplitude of \cite[e.g.,][]{lasky15} \begin{equation} \label{eq:massquadamp} \begin{aligned} h_{0} =& \frac {4 \pi^2 G \varepsilon I_{0} f_{\text{GW}}^2} {c^4 D_{OS}} \\ \approx& \, 1.1 \times 10^{-27} \left( \frac {\varepsilon} {10^{-8}} \right) \left( \frac{\nu_{\star}} {500 \text{ Hz}} \right)^{2} \left( \frac {10 \text{ kpc}} {D_{OS}} \right), \end{aligned} \end{equation} as measured by an observer a distance $D_{OS}$ from the source. For a neutron star consisting of normal $npe$ matter, the ellipticity is roughly equal to the ratio of magnetic energy to gravitational binding energy. In terms of a characteristic field strength $B_{\star}$, one can estimate \citep{mast11} \begin{equation} \label{eq:massquad} \varepsilon \approx 4 \times 10^{-8} \left( \frac{B_{\star}}{10^{14} \text{ G}}\right)^2 \left( \frac {R_{\star}} {10^{6} \text{ cm}} \right)^{4} \left( \frac {1.4 M_{\odot}} {M_{\star}} \right)^{2}, \end{equation} assuming a purely poloidal and dipolar configuration on top of a hydrostatic Tolman-VII density profile. 
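The fiducial scalings in equations \eqref{eq:massquadamp} and \eqref{eq:massquad} are straightforward to verify numerically. The following minimal Python sketch evaluates both in cgs units; the moment of inertia $I_{0} = 10^{45} \text{ g cm}^{2}$ is the standard fiducial value assumed in the normalisation of equation \eqref{eq:massquadamp}.

```python
import math

G = 6.674e-8        # cm^3 g^-1 s^-2 (cgs)
c = 2.998e10        # cm s^-1
M_SUN = 1.989e33    # g
KPC = 3.086e21      # cm

def ellipticity(B=1e14, R=1e6, M=1.4 * M_SUN):
    """Magnetic ellipticity, eq. (2): poloidal dipole on a Tolman-VII profile."""
    return 4e-8 * (B / 1e14)**2 * (R / 1e6)**4 * (1.4 * M_SUN / M)**2

def h0(eps=1e-8, nu_star=500.0, D_os=10 * KPC, I0=1e45):
    """Intrinsic strain, eq. (1), with f_GW = 2 nu_star (mass quadrupole)."""
    f_gw = 2.0 * nu_star
    return 4 * math.pi**2 * G * eps * I0 * f_gw**2 / (c**4 * D_os)

print(ellipticity())   # 4e-8, matching eq. (2)
print(h0())            # ~1.1e-27, matching eq. (1)
```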
The inclusion of higher multipoles, a toroidal field, or employing a different equation of state can potentially lead to order-of-magnitude adjustments within expression \eqref{eq:massquad} \citep{dos09,cio09}. Additionally, for a star whose core contains superconducting protons, $\varepsilon$ is amplified by a factor $\sim H_{c1}/B_{\star}$ \citep{cutler02}, where $H_{c1} \lesssim 10^{16} \text{ G}$ represents the microscopic critical field strength, the exact value of which is determined by the London penetration depth, amongst other physical factors \citep{glamp11,lander13}. Stars with surface fields $B_{\star} \lesssim 10^{12} \text{ G}$ may therefore still permit strains of order $h_{0} \gtrsim 10^{-27}$ if their cores are superconducting or if they harbour a dominant toroidal field \citep{suv21}. Even in the restricted context of magnetic deformations, it is clear therefore that a detection of continuous GWs from a localised source, which would effectively measure $\varepsilon$ within some tolerance, can yield a significant amount of information about stellar structure. \subsection{Detectability and relative motion} The characteristic strains \eqref{eq:massquadamp} are orders of magnitude lower than those due to the violent merger events that have thus far been detected. Persistent emissions, associated with magnetically (or otherwise) deformed neutron stars, have the advantage however that signal can be accrued over many cycles. In particular, if the star houses a mass or current quadrupole moment with a lifetime that exceeds the observational window $T_{\text{obs}}$, continual monitoring leads to an increase in detector sensitivity. In a fully-coherent search, a ground-based interferometer can detect a signal of amplitude \begin{equation} \label{eq:h0thresh} h_{0} \approx 11.4 \sqrt{ S_{n} / T_{\text{obs}}} \end{equation} with $90\%$ confidence \citep{watts08}, where $S_{n}$ is the noise power spectral density of the detector. 
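Equation \eqref{eq:h0thresh} can also be inverted to estimate the coherent integration time needed to reach a given source amplitude. In the sketch below, the noise amplitude $\sqrt{S_{n}}$ is an illustrative round number for the $\sim$kHz band rather than the curve of any specific detector.

```python
import math

YEAR = 3.156e7  # s

def detectable_h0(sqrt_Sn, T_obs):
    """Eq. (3): 90%-confidence threshold for a fully-coherent search."""
    return 11.4 * sqrt_Sn / math.sqrt(T_obs)

def required_T_obs(h0, sqrt_Sn):
    """Inverse of eq. (3): coherent time needed to reach amplitude h0."""
    return (11.4 * sqrt_Sn / h0)**2

sqrt_Sn = 4e-24  # Hz^{-1/2}; illustrative kHz-band noise amplitude
print(detectable_h0(sqrt_Sn, YEAR))              # ~8e-27 after one year
print(required_T_obs(1.1e-27, sqrt_Sn) / YEAR)   # decades at this illustrative floor
```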
Although not shown here \cite[see, for example, Figure 5 in][]{sold21}, it is likely that observations spanning at least a year would be necessary to detect continuous GWs from many of the known sources with existing instruments \cite[see also][]{lasky15,suv21}. Suppose however that the GWs from the system were lensed en route to the detector. In the case of burst-like signals associated with mergers, for example, where the bulk of the measurable GW luminosity is emitted within a time window spanning a few seconds, relative motion between the lens and source is negligible. Still, wave-like effects are likely to be important here because the source frequency sweeps (`chirps') through a wide band, and the lensing-induced amplification is an oscillatory function of $f_{\text{GW}}$ \citep{nak99,tak03,christ18}. By contrast, while continuous GWs are expected to be roughly monochromatic (though see below), the relative motion between the lens and source cannot be ignored when considering observational windows spanning some months. For a phase-coherent search lasting $\gtrsim$ one year, the neutron star would have travelled a (relative) distance of $\sim 10^{15} \times (v / 300 \text{ km s}^{-1}) (T_{\text{obs}}/ \text{ yr} )$ cm, where $v = 300 \text{ km s}^{-1}$ is a typical transverse velocity for a millisecond pulsar \citep{hobbs06}. This distance, while negligible compared to $D_{OS} \sim 10$ kpc, comfortably exceeds the Einstein radius associated with a solar-mass microlens, viz. \begin{equation} \label{eq:einrad} \begin{aligned} R_{E} &\approx \sqrt{ \frac{ 4 G M_{L}} {c^2} \frac{ D_{OL} D_{LS}} {D_{OS}} } \\ &= 6.7 \times 10^{13} \left( \frac {M_{L}} {M_{\odot}} \right)^{1/2} \text{ cm}, \end{aligned} \end{equation} for a lens located $D_{OL} = 5$ kpc from the observer and a further\footnote{We ignore cosmological corrections to all distance quantities since $z \ll 1$ for the sources considered here.} $D_{LS} = 5$ kpc from the source. 
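A quick numerical check of expression \eqref{eq:einrad}, and of the claim that the yearly transverse drift comfortably exceeds the Einstein radius, can be sketched in a few lines (cgs units; Euclidean, additive distances are assumed, consistent with the $z \ll 1$ limit).

```python
import math

G, c = 6.674e-8, 2.998e10        # cgs
M_SUN, KPC, YEAR = 1.989e33, 3.086e21, 3.156e7

def einstein_radius(M_L, D_OL, D_LS):
    """Eq. (4): Einstein radius in cm (mass in g, distances in cm)."""
    D_OS = D_OL + D_LS
    return math.sqrt(4 * G * M_L / c**2 * D_OL * D_LS / D_OS)

R_E = einstein_radius(M_SUN, 5 * KPC, 5 * KPC)
travel = 300e5 * YEAR            # transverse drift at 300 km/s over one year

print(R_E)                       # ~6.7e13 cm, matching eq. (4)
print(travel / R_E)              # ~14: many Einstein radii crossed per year
```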
In rare cases, the emitter may thus cross multiple Einstein rings over a long $T_{\text{obs}}$ when located behind particularly dense regions of the Galaxy \cite[see][for some rate estimates]{dep01,jow20}. Crossing interference fringes will lead to modulations of the GW signal, and could noticeably affect parameter estimation in the event of a detection, depending on the location of the neutron star and its environs. Moreover, wave-optical lensing will generally cause the phase of the signal to drift over time. This may be problematic for phase-coherent GW searches, as matched filtering generally requires the (noisy) detector output, multiplied by a template waveform, to remain in phase with the signal to within $\lesssim$ 1 rad \cite[e.g.,][]{jones04}. This problem is well known in the case of searches directed at sources within active binaries, where the GW frequency can drift due to accretion-induced spin evolution and it is necessary to analyse the signal semi-coherently over segments shorter than the full $T_{\text{obs}}$ \citep{dre18}. The spins of isolated neutron stars may also wander over $\sim$ year-long timescales for a variety of reasons (e.g., glitches); see \cite{suva16} for a discussion. One aspect we can explore with wave-optical calculations is, for a given macrolens, the maximum interval over which matched filtering can be reliably applied (see Sec. 5). \section{Wave-optical microlensing of gravitational waves} Here we briefly review the wave-optical theory of gravitational (micro-)lensing of GWs emitted by a point source, closely following \cite{tak03} and \cite{mac04}. 
Working within the weak lensing regime, we adopt the `Newtonian' spacetime metric \begin{equation} ds^2 = g_{\mu\nu}^{(L)}dx^\mu dx^\nu = - \left( 1+2U \right) dt^2 + \left( 1-2U \right) d \boldsymbol{x}^2 , \label{eq:metric} \end{equation} where $U(\boldsymbol{x}) \ll 1$ denotes the gravitational potential associated with the macrolens and we have temporarily set the speed of light, $c$, to unity. At the linear level, the macro potential $U$ is simply the superposition of potentials $U_{k}$ associated with each microlensing body $k$, i.e., $U = \sum_{k \leq n} U_{k}$ for $n$ microlenses. GWs emitted by the source are described by a vacuum perturbation $h$ on top of the lensing background, viz. $g_{\mu \nu} = g_{\mu\nu}^{({L})} + h_{\mu \nu}$, which satisfies \citep{isaac68} \begin{equation} \label{eq:pert} 0 = \nabla_{\alpha} \nabla^{\alpha} h_{\mu\nu} +2R^{(L)}_{\alpha\mu\beta\nu} h^{\alpha\beta} + \mathcal{O}(h^2), \end{equation} where $R^{(L)}_{\alpha \beta \mu \nu}$ is the Riemann tensor associated with $g_{\mu \nu}^{(L)}$ and we have employed the Lorentz gauge ($\nabla_{\nu} h^\nu_{\mu}=0$ and ${h^\mu}_{\mu}=0$). For the cases considered here, the GW wavelength is tiny relative to the typical radius of curvature associated with the background, and we can safely drop the Riemann tensor term in equation \eqref{eq:pert}. In this instance, the equations of motion for each component of $h$ are individually equivalent to a Klein-Gordon equation for a scalar field $\phi(t,\boldsymbol{x})$ \citep{peters74}. The leading-order problem thus reduces to finding solutions to \begin{equation} 0 = \left( \nabla^2 + \omega^2 \right) \phi - 4 \omega^2 U \phi, \label{eq:scalar} \end{equation} where, through a slight abuse of notation, we have taken out the time dependence via $\phi = e^{i \omega t} \phi$ with $\omega = 2 \pi f_{\text{GW}}$. 
Finally, equation \eqref{eq:scalar} can be solved using Kirchhoff's theorem, thereby defining the Fresnel-Kirchhoff diffraction integral associated with the wave optics of GW lensing \citep{tak05}, \begin{equation} \label{eq:truefk} \phi(\boldsymbol{x}) = \phi^{(0)}(\boldsymbol{x}) - \frac {\omega^2} {\pi} \int d^{3} \boldsymbol{x}' \frac {e^{i \omega |\boldsymbol{x} - \boldsymbol{x}'|}} {|\boldsymbol{x} - \boldsymbol{x}'|} U(\boldsymbol{x}') \phi^{(0)}(\boldsymbol{x}'), \end{equation} where $\phi^{(0)}$ solves the homogeneous ($U=0$) version of equation \eqref{eq:scalar}. Expression \eqref{eq:truefk} can also be obtained from a path integral \citep{nak99}, in line with the expectation that the wave-optical equations can be derived from quantum-mechanical arguments; see also \cite{feld19} for a derivation in the case of electromagnetic radiation, where the resulting equation(s) are practically identical. \subsection{Thin-lens approximation} As it stands, the Fresnel-Kirchhoff integral \eqref{eq:truefk} contains a non-local Green's function. We introduce the thin-lens approximation to reduce the dimensionality of the problem and to eliminate the unwieldy denominator term; this amounts to projecting each microlens onto a single 2-dimensional screen, the mathematical details of which can be found, for example, in \cite{tak05}. 
The end result is that the amplification factor, $F = \phi/\phi^{(0)}$, can be written as [see equation (2.11) in \cite{nak99}] \begin{equation} F(\boldsymbol{x}_{s}) = \frac {4 G M_{L}} {c^3} \frac {f_{\text{GW}}} {i} \int d^2 \boldsymbol{x} \exp\left[2\pi i f_{\text{GW}} t_d(\boldsymbol{x},\boldsymbol{x}_{s})\right], \label{eq:diffint} \end{equation} where $\boldsymbol{x}$ (lens-plane-projected coordinates) and $\boldsymbol{x}_{s}$ (source-plane-projected coordinates) are expressed in units of \begin{equation} \label{eq:units} \xi_{0} = R_{E} \,\,\,\, \text{and} \,\,\,\, \eta_{0} = \frac{D_{OS}} {D_{OL}} R_{E}, \end{equation} respectively, and the Einstein radius $R_{E}$ is defined in expression \eqref{eq:einrad}. For a lens that is equidistant between the source and the observer, we have $\eta_{0} = 2 \xi_{0}$. The (normalised) time delay $t_{d}$, up to a constant factor, is a sum of the geometric and Shapiro delays, \begin{equation} \label{eq:timedelay} t_d(\boldsymbol{x},\boldsymbol{x}_{s})= \frac{4 G M_{L}} {c^3} \left[ \frac{1}{2}|\boldsymbol{x}-\boldsymbol{x}_{s}|^2+\psi(\boldsymbol{x}) \right], \end{equation} where $\psi(\boldsymbol{x})$ is the dimensionless deflection potential (i.e., the projection of $U$ onto the 2D lens screen). For a collection of $n$ point lenses, we have \begin{equation} \label{eq:psipot} \psi(x,y) = -\displaystyle{\underset{k \leq n}{\sum}} \left( \frac {M_{k}} {M_{L}} \right) \log \sqrt{ \left( x - x_{k} \right)^2 + \left( y - y_{k} \right)^2 }, \end{equation} where the bodies of mass $M_{k}$ are located at $(x_{k},y_{k})$ on the lens plane. We close this section by noting that in microlensing calculations, one generally also includes convergence ($\kappa$) and shear $(\gamma)$ components related to `off-screen' elements within the time-delay function \eqref{eq:timedelay} \cite[see, e.g.,][]{pac86,meena20,lew20}. 
The former optical scalar accounts for magnifications due to a smooth mass component (e.g., a background Galactic contribution) while the latter is induced by large-scale anisotropies (e.g., tidal distortions). While the formalism presented in the next section can also be used to study cases with non-zero convergence or shear, we defer such a calculation to a future work. \subsection{Star clusters} For sources out to $\gtrsim 10$~kpc, the likelihood that any emitted GWs non-negligibly interact with the gravitational field of a perturber en route to Earth is relatively low: the lensing probability by stars for a source within the bulge may reach a few times $10^{-6}$ \citep{pac86b,dep01}. If, however, a non-negligible fraction\footnote{\cite{woan18} have provided population-based evidence that all millisecond pulsars house a \emph{minimum} ellipticity of $\varepsilon \sim 10^{-9}$, comparable to expression \eqref{eq:massquad}, a fraction $f_{l}$ of which may be observable in GWs. Although dependent on recycling and star formation assumptions, \cite{zhu15} estimate that the millisecond neutron star birth rate is $\sim 10^{-4} \text{ yr}^{-1}$ from the combined core collapse and accretion-induced collapse channels. A rough estimate for the number of observable sources is then $\sim 10^{6} \times f_{l}$, further multiplied by the fraction of the source's lifetime \cite[$\lesssim$Gyr,][]{zhu15} relative to the age of the Milky Way.} \cite[$\approx 10^{-5}$,][]{liao19} of the $\sim 10^{9}$ neutron stars within the bulge emit appreciable GWs, microlensing events may conceivably be observed by the next generation of interferometers over long $T_{\text{obs}}$. Furthermore, \cite{ras21} have recently suggested that for neutron stars in the globular clusters 47 Tuc and M22, `self-lensing' (i.e., lensing by a fellow member of the cluster) rates may be as high as $2 \times 10^{-3} \text{ yr}^{-1}$ and $4 \times 10^{-5} \text{ yr}^{-1}$, respectively. 
The single-lens scenario has been considered in detail by \cite{liao19}, who made use of the fact that the Fresnel-Kirchhoff integral can be evaluated analytically in the case of an individual point-mass lens [see equation (17) in \cite{tak03}]. Here, we generalise their scenario by considering instead macrolenses consisting of $n \gtrsim 10^{2}$ stars at various distances from the line of sight. For concreteness, we consider two distinct cluster distributions in this work. Defining an `optical depth', $\tau$, as the ratio of the area covered by the individual Einstein rings to a fiducial screen area $A$, expressed in units of $R_{E}$, we have $\tau \approx n \pi / A$. If we consider a $50 \times 50$ screen (say), we then have that $\tau \approx 10^{-3} n$. We consider two cases, one with $n=25$ ($\tau \approx 0.03$), henceforth the \emph{low optical depth} case, and $n=250$ ($\tau \approx 0.3$), which we call the \emph{high optical depth} case. On a practical level, these two individual distributions are constructed by plucking points from bivariate Gaussians with relatively large variances over a disc of diameter $50 R_{E}$. For each of these two cases, we used the available sampling functions in \small MATHEMATICA\textregistered \, \normalsize to define the macrolens. The resulting initialisations are shown in Figures \ref{fig:grav25} and \ref{fig:grav250}, respectively. In these Figures, the overlaid colour scale shows the projected potential $\psi(\boldsymbol{x})$ from expression \eqref{eq:psipot}, which is naturally stronger by a factor $\sim 10$ in the higher $\tau$ case. It should be stressed that the distributions considered here are not meant to represent any particular astrophysical system. They are constructed primarily to illustrate the mathematical machinery and to qualitatively explore what the wave-optical impact of lensing by $n \gg 1$ stars may be for continuous GWs in the $\sim$kHz band. 
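The cluster construction and the potential evaluation of equation \eqref{eq:psipot} are simple to mirror outside of \small MATHEMATICA\textregistered \, \normalsize as well. In the Python sketch below, the Gaussian width and random seed are illustrative choices rather than those used for Figs. \ref{fig:grav25} and \ref{fig:grav250}, and the microlenses are taken to be of equal mass ($M_{k} = M_{L}$).

```python
import math
import random

random.seed(1)   # illustrative seed; not the realisation shown in the figures

def sample_cluster(n, radius=25.0, sigma=12.0):
    """Draw n microlens positions (in units of R_E) from an isotropic
    Gaussian, rejecting draws outside a disc of diameter 50 R_E."""
    pts = []
    while len(pts) < n:
        x, y = random.gauss(0.0, sigma), random.gauss(0.0, sigma)
        if math.hypot(x, y) <= radius:
            pts.append((x, y))
    return pts

def psi(x, y, lenses):
    """Projected deflection potential, eq. (9), with equal masses M_k = M_L."""
    return -sum(math.log(math.hypot(x - xk, y - yk)) for xk, yk in lenses)

n = 250
cluster = sample_cluster(n)
tau = n * math.pi / 50.0**2      # optical depth for the 50 x 50 screen
print(tau)                       # ~0.3: the high optical depth case
print(psi(0.0, 0.0, cluster))    # projected potential at the screen centre
```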
\begin{figure} \includegraphics[width=0.47\textwidth]{GravPot25.pdf} \caption{Distribution of $n=25$ microlenses (black dots) over the lens plane with a total area $A = 2500 R_{E}^2$. The colour scale depicts the potential $\psi$ from equation \eqref{eq:psipot}, with redder shades indicating a stronger (i.e., less negative) value. } \label{fig:grav25} \end{figure} \begin{figure} \includegraphics[width=0.47\textwidth]{GravPot250NEW.pdf} \caption{Similar to Fig. \ref{fig:grav25}, though for $n=250$ microlenses distributed within the same area. } \label{fig:grav250} \end{figure} \section{Picard-Lefschetz approach} As mentioned in the introduction, evaluating expression \eqref{eq:diffint} is challenging because the integrand oscillates an infinite number of times over the aperture (real plane). Standard numerical methods that involve finite cutoffs, for example, fail to return an adequate evaluation because, depending on whether one truncates at a trough or a crest of the time-delay $t_{d}$, the integral will either be under- or over-estimated, respectively. To make progress, we make use of the ideas behind PL theory, as described by \cite{feld19} \cite[see also][for alternative approaches]{guo20}. We begin by introducing polar coordinates, $x = r \cos \theta$ and $y = r \sin \theta$, so that we have only one (semi)-infinite interval ($0 \leq r < \infty$) to consider. The transformed integral is evaluated for fixed values of $\theta$; given a set of values for $F(\boldsymbol{x}_{s},\theta)$, we can eventually build up the full 2D integral using Simpson's (or some other standard) method because the angular limits are finite. The PL strategy begins by extending $r$ into the complex plane (i.e., analytically continuing the variable), viz. $r \rightarrow \text{Re}(\boldsymbol{r}) + i \text{ Im}(\boldsymbol{r})$, effectively doubling the number of (real) coordinates. Such an extension allows us to then deform the original integral into the complex plane. 
In particular, provided that the exponent $t_{d}$ is analytic (cf. Sec. 4.1), integrals around closed, complex contours will vanish by Cauchy's theorem, i.e., \begin{equation} \label{eq:cauchy} \oint_{\Gamma} e^{2 \pi i f_{\text{GW}} t_{d}(\boldsymbol{r},\theta,\boldsymbol{x}_{s})} d \boldsymbol{r} = 0, \end{equation} around any closed loop $\Gamma \subset \mathbb{C}$ (again, for a fixed $\theta$). If we were to thus design a contour consisting of multiple segments $\gamma_{i}$ such that $\Gamma = \sum_{i} \gamma_{i}$, the first of which ($\gamma_{1}$) is the non-negative real line, we can effectively evaluate the original integral \eqref{eq:diffint} by summing the remaining integrals over $\gamma_{i \geq 2}$. The key observation now is that there is freedom in choosing these contours. Noting that the main issue with evaluating the original integral is its oscillatory nature, we proceed by choosing the contours precisely such that the oscillations are damped out as much as possible, so that standard numerical methods may be applied. In general, the function $t_{d}$ can itself be expanded into real and imaginary components, conventionally written as $i t_{d}(\boldsymbol{r},\theta,\boldsymbol{x}_{s}) = h(\boldsymbol{r},\theta,\boldsymbol{x}_{s}) + i H(\boldsymbol{r},\theta,\boldsymbol{x}_{s})$. Starting from the origin, we trace a contour $\boldsymbol{\gamma}(\lambda) = \{ \text{Re} [\boldsymbol{r}(\lambda)], \text{Im} [\boldsymbol{r}(\lambda)] \}$ according to a \emph{Morse flow} \citep{witten11}, \begin{equation} \label{eq:morseflow} \frac {d \gamma^{i}} {d \lambda} = - G^{ij} \frac {\partial h} {\partial \gamma^{j}}, \end{equation} where $G^{ij}$ is a metric on the complex plane\footnote{Throughout this paper, we consider the Kronecker delta, $G^{ij} = \delta^{ij}$, by topologically associating $\mathbb{C}$ with $\mathbb{R}^{2}$. 
In some cases it may be advantageous to consider different metrics \citep{witten11}, but we ignore such generalisations here for simplicity.} and we have introduced an affine parameter $\lambda$ which labels the position along the curve. Along this particular contour, we have that \begin{equation} \begin{aligned} \frac {d H} {d \lambda} &= \frac {d \gamma^{i}} {d \lambda} \frac {\partial H} {\partial \gamma^{i}} \\ &= -G^{ij} \frac {\partial h} {\partial \gamma^{j}} \frac {\partial H} {\partial \gamma^{i}} \\ &= 0, \end{aligned} \end{equation} where the last equality holds because of the Cauchy-Riemann equations. In effect, the above result demonstrates that the oscillatory portion of the integrand is constant along a Morse flow, and can thus be pulled outside of the integral. Furthermore, the Morse flow is also a contour of steepest descent in the sense that $h$ is always decreasing, $d h / d \lambda = - \sum_{j} (\partial h / \partial \gamma^{j})^{2}$. This is the power of the PL approach: not only are oscillations exorcised, the real portion also decreases as quickly as possible and truncation can be exacted at low values of $\text{Re}(\boldsymbol{r})$ without sacrificing accuracy \citep{witten11,feld19}. In general, however, there will not be a single Morse flow over the entire domain, but rather it is necessary to consider a sequence of such flows. The reason for this is that if a point of stationary phase is encountered, the right-hand side of equation \eqref{eq:morseflow} vanishes, implying that the flow halts as the velocity $d \boldsymbol{\gamma} / d \lambda$ tends to zero. It is therefore necessary to attach a flow to each individual image, where the beginning (i.e., initial condition) of each segment corresponds to the end-point of the previous plus a small perturbation. 
Each such segment is referred to as a \emph{Lefschetz thimble} \citep{feld19,jow20,jow21}, the overall sum of which defines the contour we wish to integrate along; for mathematical details concerning the well-posedness of such flows, we refer the reader to \cite{witten11}. In summary, we design a 3-component contour $\Gamma$ by sequentially flowing from the origin according to \eqref{eq:morseflow} ($\gamma_{3}$), eventually joining to the real axis by introducing an arc $(\gamma_{2})$, which then connects back to the origin ($\gamma_{1}$), defining a closed contour. Furthermore, the integral along the arc $\gamma_{2}$ vanishes, under reasonable assumptions, by Jordan's lemma. The method described above can be used in the evaluation of the Fresnel-Kirchhoff integral \eqref{eq:diffint}. Some additional numerical details are given in the next section, while a worked example pertaining to the generalised Fresnel integral is given in the Appendix. Figure \ref{fig:cont} illustrates the core ideas involved in the PL evaluation. \begin{figure} \includegraphics[width=0.47\textwidth]{contour.png} \caption{Graphical representation of Cauchy's theorem, equation \eqref{eq:cauchy}, as applied in the evaluation of the Fresnel-Kirchhoff diffraction integral \eqref{eq:diffint}. The actual integral of interest $(0 \leq \text{Re}(\boldsymbol{r}) < \infty)$ lies along (the negative of) $\gamma_{1}$, shown in red, though two additional lines are introduced to form a closed contour. Starting from the origin, a \emph{Lefschetz thimble} is built by flowing in the direction defined by the Morse equation \eqref{eq:morseflow}, until a point of stationary phase (i.e., an image, labelled $p_{1}$) is reached. At this point, the velocity of the flow effectively tends to zero, and it is necessary to introduce a perturbation to continue the flow. The flow continues until a second image ($p_{2}$) is met. 
This process of flow and perturbation is continued until all images along the route, the last of which is $p_{k}$, are moved past. The overall sum of these flows defines the contour $\gamma_{3}$ (green), which is then connected back to the real line through an arc ($\gamma_{2}$; blue). This latter integral vanishes, in many cases of interest, by Jordan's lemma. } \label{fig:cont} \end{figure} \subsection{Numerical implementation} In practice, there are several numerical obstacles encountered when applying the above ideas. The first major difficulty concerns the fact that the Morse flow terminates at points of stationary phase. After performing the complex decomposition of the integrand it is then necessary to find the roots, parameterised by the angle $\theta$ and the screen parameters $\boldsymbol{x}_{s}$, of the first derivative of $t_{d}$ plus the logarithm of the Jacobian. For large $n$, this entails solving high-order polynomial equations, which we achieve through an exhaustive search using different initial guesses and the Newton-Raphson method within \small MATHEMATICA\textregistered \normalsize. Once an image is encountered, the flow velocity is then artificially perturbed, $\boldsymbol{\gamma}'(\lambda) \rightarrow \boldsymbol{\gamma}'(\lambda) + \epsilon$, to `kick' the flow onto the next thimble. In practice, we set $\epsilon = 10^{-4}$. Not all images are of relevance, however, since some may be topologically disconnected from the overall flow (see Appendix A). For example, if a saddle occurs at a place of negative real component $[\text{Re}(\boldsymbol{p}) < 0]$, the flow cannot encounter it and the associated thimble is irrelevant. Classifying such irrelevant images and related Stokes transitions is in general a difficult problem; see \cite{feld19} for a thorough discussion. 
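As a concrete illustration of this machinery, consider the half-line Fresnel integral $\int_{0}^{\infty} e^{i r^{2}} dr = \sqrt{\pi/8} \, (1+i)$ (cf. the worked example in Appendix A), for which the single Lefschetz thimble is the ray $\arg(\boldsymbol{r}) = \pi/4$. The Python sketch below generates the thimble numerically by integrating the Morse flow \eqref{eq:morseflow} with unit-speed Euler steps, starting from a small perturbation off the stationary point at the origin, and then evaluates the (now non-oscillatory) integrand along it; step sizes are illustrative.

```python
import cmath
import math

def morse_velocity(z):
    """Unit-normalised downhill direction -grad(h) for the exponent i z^2:
    with z = a + ib one has h = Re(i z^2) = -2ab."""
    a, b = z.real, z.imag
    va, vb = 2 * b, 2 * a          # (-dh/da, -dh/db)
    norm = math.hypot(va, vb)
    return complex(va, vb) / norm

def fresnel_via_thimble(eps=1e-6, ds=0.005, length=6.0):
    """Evaluate int_0^inf exp(i r^2) dr along the Lefschetz thimble."""
    z = complex(eps, eps)              # small kick off the saddle at the origin
    total = cmath.exp(1j * z**2) * z   # tiny straight segment from 0 to z
    for _ in range(int(length / ds)):
        z_new = z + morse_velocity(z) * ds
        # trapezoidal step: along the thimble exp(i z^2) decays like a Gaussian
        total += 0.5 * (cmath.exp(1j * z**2)
                        + cmath.exp(1j * z_new**2)) * (z_new - z)
        z = z_new
    return total

exact = math.sqrt(math.pi / 8) * (1 + 1j)
print(fresnel_via_thimble())   # ~0.6267 + 0.6267i
print(exact)
```

Along the thimble the integrand reduces to $e^{-u^{2}}$ in the arc-length parameter $u$, so the truncation at finite length costs only $\sim e^{-36}$, illustrating why early truncation does not sacrifice accuracy.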
We employ something of a trial-and-error approach in this paper, where images are flowed from, but only connected (relative to the origin) components are kept to ensure topological continuity. The root solver is initialised over a grid of $\theta$ and $\boldsymbol{x}_{s}$ values, and a set of thimbles (in $\boldsymbol{r}$) is then constructed at each grid point. For each figure produced here, $256$ angular steps are used, i.e., $\Delta \theta = 2 \pi / 256$. This value was chosen because the result, tested at several values of $\boldsymbol{x}_{s}$, differed negligibly from the value when using 255 or 257 steps. Generally, higher radiation frequencies require smaller $\Delta \theta$ (by a factor $\sim f / f_{\text{GW}}$) because the integrand varies more rapidly. Simpson's method is then used to sum the $\theta$-integrals together to build the full, 2D diffraction integral \eqref{eq:diffint}. The Morse flow equations \eqref{eq:morseflow} are sequentially solved using a Runge-Kutta method up to some maximum radius $\text{Re}(\boldsymbol{r})$. Formally speaking, this radius must extend to infinity, else Cauchy's theorem \eqref{eq:cauchy} cannot be applied. However, because the Morse flow is also a contour of steepest descent, high accuracy can be achieved even with relatively early truncations that depend on $\boldsymbol{x}_{s}$; see Appendix A. Finally, it is important to note that the function $t_{d}$ is not analytic over the complex plane, as the $\psi$ piece introduces logarithmic singularities at each microlens position $(r_{k} \cos \theta_{k}, r_{k} \sin \theta_{k})$. This is a problem because the flow tends to infinitely wind around singular points (i.e., the velocity blows up), causing the integral to diverge. 
We have explored several possibilities for tempering the singularities: \begin{itemize} \item{Introducing an additional $n-1$ lens planes can circumvent the appearance of more than one singularity on any given plane, as discussed by \cite{feld20b} \cite[see also][]{ram21}. In particular, if each microlens resides on a different plane, screen-adapted coordinates can be used to center each individual singularity away from the relevant regions, so that it may be ignored. This approach introduces considerable computational demand however, as the Fresnel-Kirchhoff integral effectively becomes $2n$ dimensional.} \item{Various regularisation techniques are possible, including (i) approximating $\psi$ near the singularities by a different function that is regular there \cite[cf.][]{christ18,guo20}, and (ii) cutting out regions of some size surrounding the singularities. These approaches can be difficult to tune however because if too much of the contour is cut, or if $\psi$ is poorly approximated, significant errors can be introduced.} \item{In the building of the total contour $\gamma_{3}$ and applying \eqref{eq:cauchy}, it is not necessary that the entire portion (or even any portion) be a Morse flow. Therefore, if singularities lie within the Morse-constructed contour, one can deform the path to restore analyticity, as is typically done when computing inverse Laplace transforms, for example. These deformed segments could be fittingly called `suboptimal' thimbles.} \end{itemize} A thorough exploration of the above possibilities lies beyond the scope of this paper. Nevertheless, the third option is employed here for concreteness, as some testing with low $n$ cases suggests the results agree with the multi-plane approach to within a few percent. Since it is only ever necessary to suboptimally flow over short segments, the oscillations introduced are not fatal for the numerical method. 
The simplest such suboptimal thimble, which we use here, is just a horizontal line of constant imaginary coordinate. \section{Results} Having introduced the PL approach, we are now in a position to evaluate expression \eqref{eq:diffint} for the microlens distributions described in Sec. 3.2, i.e., for a sparse cluster (Sec. 5.1) and a dense cluster (Sec. 5.2). \subsection{Low optical depth} Figure \ref{fig:intensity2d25} shows the intensity pattern, as a function of the normalised screen parameters $x_{s}$ and $y_{s}$, associated with the low optical depth microlens distribution shown in Fig. \ref{fig:grav25}, where we have fixed $f_{\text{GW}} = 1$ kHz. Figure \ref{fig:intensity2d252} instead shows the same intensity profile but for $f_{\text{GW}} = 2$ kHz, i.e., for a star spinning twice as quickly. Although no neutron stars with such a high spin frequency have been directly observed, this latter case provides a useful comparison to illustrate how the source frequency impacts on the overall intensity map. Furthermore, theoretical models of accretion-induced spin-up allow, in principle, for the rotation rate to reach this level \cite[e.g.,][]{glamp21}, and GW searches at this frequency range have been conducted \citep{derg21}. \begin{figure}[h] \centering \includegraphics[width=0.47\textwidth]{intensity252dNEW.pdf} \caption{The intensity profile, $|F(\boldsymbol{x}_{s})|^2$, associated with the gravitational potential shown in Fig. \ref{fig:grav25} for $f_{\text{GW}} = 1$ kHz. Hotter shades indicate greater amplifications. The resolution is $100 \times 100$ (cells). } \label{fig:intensity2d25} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.47\textwidth]{intensity252d2NEW.pdf} \caption{Similar to Fig. \ref{fig:intensity2d25} though for $f_{\text{GW}} = 2$ kHz. 
} \label{fig:intensity2d252} \end{figure} For $f_{\text{GW}} = 1$ kHz, the maximum intensity over the screen is relatively low, $|F|_{\text{max}}^2 = 1.8$, owing primarily to the sparse nature of the microlens distribution and, hence, the weakness of the bulk gravitational potential $U$. For the faster star with $f_{\text{GW}} = 2$ kHz, we instead have $|F|_{\text{max}}^2 = 2.3$. Monotonicity in $|F|_{\text{max}}$ as a function of frequency is generally expected, since as $f_{\text{GW}} \rightarrow \infty$ we approach the geometric optics limit, where the amplification becomes formally infinite along the caustic surface(s) where $\boldsymbol{x} + \tfrac{1}{2} \nabla_{\boldsymbol{x}} \psi = 0$ \citep{jow21}. To put these maxima in perspective, consider the case of a single solar-mass lens, where one finds maximum values of $|F_{n=1}|^2 = 1.21$ and $|F_{n=1}|^2 = 1.44$ for $f_{\text{GW}} = 1$ kHz and $f_{\text{GW}} = 2$ kHz, respectively \citep{nak99}. These maxima occur when the source is oriented directly behind the perturbing body, i.e., at $(x_{s},y_{s}) = (x_{1},y_{1})$. It is unsurprising therefore that the maxima in our simulations occur when the source aligns itself behind the centre of mass of the densest mini-cluster located at $(x_{s},y_{s}) \approx (0,15)$; see Fig. \ref{fig:grav25}. In particular, a collection of point masses in close proximity to one another generally behave as one larger lens around the centre of mass, and since $|F|$ scales with $M_{L}$, as can be seen from \eqref{eq:diffint}, the amplification is larger there. In the local vicinity (i.e., within a few Einstein radii) of isolated stars in our distribution, such as at $(x_{s},y_{s}) \approx (-20,0)$, the intensity strongly resembles that of the single point-lens case \citep{liao19,meena20}. The oscillatory nature of the intensity along any given line is also the hallmark of interference, as expected in a wave-optics calculation. 
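These single-lens benchmarks follow from the closed-form point-mass amplification \citep{nak99,tak03}: for a source directly behind the lens, $|F|^{2} = \pi w / (1 - e^{-\pi w})$, where $w = 8 \pi G M_{L} f_{\text{GW}} / c^{3}$ is the dimensionless frequency. A quick numerical check reproduces the values quoted above:

```python
import math

G, c, M_SUN = 6.674e-8, 2.998e10, 1.989e33   # cgs

def on_axis_amplification(f_gw, M_L=M_SUN):
    """|F|^2 for a point-mass lens with the source directly behind it."""
    w = 8 * math.pi * G * M_L * f_gw / c**3
    return math.pi * w / (1 - math.exp(-math.pi * w))

print(round(on_axis_amplification(1e3), 2))   # 1.21
print(round(on_axis_amplification(2e3), 2))   # 1.44
```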
The variability is more extreme in the higher frequency case in Fig. \ref{fig:intensity2d252}, as the dimensionless exponent $f_{\text{GW}} t_{d}$ varies over length-scales exactly half as long. The emergence of interference fringes is also more obvious in this case, especially surrounding isolated members of the cluster, e.g., near $(x_{s},y_{s}) \approx (-20,0)$. The bulk influence is minimal in the vicinity of these regions, and the amplification roughly matches the analytic profile for the single lens case described above, as does the spacing between interference fringes computable from the Fourier spectrum \citep{nak99}. For lower GW frequencies, the amplitudes of the oscillations become virtually invisible since the wavelength is so long that the GWs hardly experience the lens, implying that lensing by stellar-mass bodies is unimportant for `ordinary' neutron stars with $\nu_{\star} \lesssim 10^2$ Hz. To better illustrate the wave-like nature of the Fresnel-Kirchhoff integral, we consider also flux, $|F(t)|^2$, and phase, $\theta_{F}(t) = - i \ln [F(t)/|F(t)|]$, variations along a hypothetical trajectory of the source. To this end, suppose that the neutron star begins at the origin $(x_{s},y_{s}) = (0,0)$ at $t=0$, and then moves, relative to the macrolens, in the $+y_{s}$ direction with velocity $v = 600 \text{ km s}^{-1}$; a value not unreasonable for millisecond pulsars \citep{hobbs06}. (Note, however, that a smaller $v$ but longer $T_{\text{obs}}$ or $D_{OL}$ yields the same qualitative picture). Within a year, the star therefore travels $\sim 14 \eta_{0}$ [see equation \eqref{eq:units}], crossing a large number of interference fringes. Figures \ref{fig:intensity25} and \ref{fig:phase25} show the 1D variations in flux and phase, respectively, experienced along this path as a function of time, where we implicitly ignore variations in $D_{OS}$. 
The red (blue) data points correspond to $f_{\text{GW}} = 1 (2) \text{ kHz}$, with the size of the symbols roughly characterising the error bars present in the calculation ($\sim$ percent level). \begin{figure}[h] \centering \includegraphics[width=0.47\textwidth]{Flux25.pdf} \caption{The flux modulation, $|F(t)|^2$, observed in the case of a source moving in the $+y_{s}$ direction relative to the origin at $t=0$ for the microlens distribution shown in Fig. \ref{fig:grav25}. The red (blue) symbols correspond to $f_{\text{GW}} = 1 (2)$ kHz, with their size roughly representing the maximum level of numerical error present in the calculation. The speed of the neutron star is taken to be $v = 600 \text{ km s}^{-1}$, so that it crosses $\sim 14 \eta_{0}$ per year. } \label{fig:intensity25} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.47\textwidth]{phase25.pdf} \caption{Similar to Fig. \ref{fig:intensity25}, though instead showing the phase drift $\theta_{F} = - i \ln (F/|F|)$, offset such that $\theta_{F}(t=0)=0$. } \label{fig:phase25} \end{figure} The oscillatory nature of the modulation is evident in the flux, especially near $t=0$ where the source is caught between two neighbouring elements of the cluster (the origin in Fig. \ref{fig:grav25}). In regions of low density, i.e., at early or late times, the $f_{\text{GW}} =2$ kHz case oscillates roughly twice as often, as expected from the relative decrease in spacing between interference fringes. The maximum flux achieved by the higher-frequency case is greater than in the lower frequency case by $\sim 25\%$, in accord with Figs. \ref{fig:intensity2d25} and \ref{fig:intensity2d252}. These maxima are achieved after approximately one year of travel time in this setup (or two years with $v \sim 300 \text{ km s}^{-1}$), whereupon the neutron star enters into the densest region shown in the top half of Fig. \ref{fig:grav25}. 
Similar oscillatory patterns are observed in the phase, which covers a wide range of values ($-2 \lesssim \theta_{F} \lesssim 1.5$) over the full $T_{\text{obs}}$. Because phase drifts exceeding a sizeable fraction of unity are likely to inhibit a fully coherent search \citep{jones04,dre18}, variations on the order seen in Fig. \ref{fig:phase25} are potentially detrimental for detection prospects, even given a flux enhancement (Fig. \ref{fig:intensity25}). Within $\sim$ 6 (3) months, the phase wanders by more than $0.3$ radian for the $f_{\text{GW}} = 1 (2)$ kHz source in this scenario, though the gradient $d \theta_{F} / d t$ increases more rapidly in the dense parts of the cluster and coherence is lost at a faster rate. As such, it is likely that the data would need to be analysed semi-coherently in this scenario over (at most) $\sim$~month-long segments, especially in the higher frequency case, even in the absence of (accretion-induced or otherwise) spin wandering. A thorough examination of the trade-off between lensing-induced amplification and decoherence lies beyond the scope of this work; we refer the reader to \cite{suva16,dre18} for a comparison between fully- and semi-coherent sensitivities. Given a lens model however, such phase errors may be corrected for in the template waveform using the PL methodology described here. Either way, issues related to phase modulation may be further compounded by the Earth's diurnal and orbital motions if the sky location or the relative velocity of the source is poorly constrained. \subsection{High optical depth} Similar to the previous section, Figure \ref{fig:intensity2d250} shows the intensity pattern with $f_{\text{GW}} = 1$ kHz, as a function of the screen parameters, for the $n=250$ case shown in Fig. \ref{fig:grav250}. 
Because $\psi$ is everywhere larger by a factor $\sim 10$ than in the previous case, and more importantly because the distribution is highly concentrated around the origin, the bulk magnifications in the heart of the cluster are considerably larger ($|F|_{\text{max}}^2 = 50.8$). By contrast, the analytic formula for a point-mass lens with $M_{L} = 250 M_{\odot}$ gives $|F_{n=1}|^2 = 97$ at the origin for the same radiation frequency. For $n \gtrsim 10^2$ and the Gaussian distributions we use, the overall mass density on the screen appears similar to that of a grainy sphere with $\psi$ decaying with radius as a power-law. Over- and under-dense regions in the magnification profile are clearly visible however even for $n=250$, especially at $(x_{s},y_{s}) \approx (2,-2)$ where there is angle-dependent structure within the second peak `ring'. Interference fringes are also abundant, as can be seen from the network of graininess near the origin that extends out to $|\boldsymbol{x}_{s}| \lesssim 20$. Overall, the intensity profile is compactified over the screen relative to the $n=25$ case; in the limit $n \rightarrow \infty$, $\psi$ tends to that of a point-lens with ever-growing $M_{L}$ \citep{nak99}. \begin{figure}[h] \centering \includegraphics[width=0.47\textwidth]{intensity2d250FINAL2.pdf} \caption{Similar to Fig. \ref{fig:intensity2d25}, though for the high optical depth case shown in Fig. \ref{fig:grav250}. The resolution is $70 \times 70$ (cells). } \label{fig:intensity2d250} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.47\textwidth]{Flux250NEW.pdf} \caption{Similar to Fig. \ref{fig:intensity25}, but for the high optical depth distribution shown in Fig. \ref{fig:grav250}. The source velocity is taken to be $v = 300 \text{ km s}^{-1}$ in the $+y_{s}$ direction. The numerical errors roughly match those of the $n=25$ simulation ($\sim$ percent level), though appear smaller due to the logarithmic scaling of the vertical axis. 
} \label{fig:intensity250} \end{figure} To better visualise the oscillations above the ambient contribution, suppose that the source moves in the $+y_{s}$ direction with velocity $v = 300 \text{ km s}^{-1}$ (or greater $v$ and proportionately contracted time axis). Similar to Fig. \ref{fig:intensity25}, the variation in flux for the $n=250$ cluster is shown in Figure \ref{fig:intensity250}, where the red (blue) symbols correspond to $f_{\text{GW}} = 1 (2)$ kHz. The emergence of interference fringes is readily apparent as the flux begins at a maximum as the line of sight intersects with the densest region at the origin, descends to an order-unity trough after $\lesssim 2 (1)$ months in the $f_{\text{GW}} = 1 (2)$ kHz case, and then rises on a similar timescale to reach a second peak. Each subsequent peak after the first, the total number of which is doubled in the higher frequency case as expected, is generally of lower amplitude because the stellar density decreases as a function of radius (see Fig. \ref{fig:grav250}). This pattern is not monotonic however, especially around $t \sim 4$ months for the $2$~kHz case where only a mild peak is reached ($|F|^2 = 4$), because the lens distribution is grainy. In this case, the phase drifts are so extreme that coherence is likely to be lost even over timescales of $\sim$~weeks. Once the line of sight encounters only the outskirts of the cluster after $\sim 1.5$ yr, the flux closely resembles that seen in Fig. \ref{fig:intensity25}, though oscillates more frequently as there are a greater number of interference fringes produced by the microlenses. \subsection{Implications for parameter estimation} Unlike in the case of a merging binary, where the bulk of the GW luminosity is produced within a few seconds, continuous GW emissions from a deformed neutron star are longer lived -- potentially persisting for its entire lifetime -- though much fainter; see equation \eqref{eq:massquadamp}. 
With present-day (future) instruments, it is likely therefore that a detection of these sources requires monitoring for a year (month) or more \citep{suv21,sold21}. If the line of sight linking the neutron star to the detector intersects with a dense cluster during the observation, lensing effects may modulate the strain, as explored in Figs. \ref{fig:intensity25} and \ref{fig:intensity250}. What impact could this be expected to have on parameter estimation? For magnetic deformations, the gravitational waveform is expected to have a sinusoidal profile of the form $h(t) \sim h_{0} \sin (2 \nu_{\star} t)$ plus some subdominant harmonics. The lensed waveform will be similar but with a (time-dependent) prefactor. Because the variations in $h(t)$ are much more rapid than the interference-induced variations in $F(t)$, it may be possible to disentangle the effects of lensing via `beat' patterns \citep{tak05,jung19,meena20}. At the simplest level however, we note that only mean values for the (upper-limit) quadrupole moments of neutron stars are typically reported from search efforts \citep{derg21,ligo21}. This is because the coherent nature of continuous GW-searches necessitates that averaging processes between individual interferometers be carried out to properly subtract noise \citep{jaran98,owen10}. As such, a detection of the mean strain $\langle h_{0} \rangle$ implies that the true strain would be lower by a factor $1/\langle |F| \rangle$, where the angled brackets denote a statistical average over $T_{\text{obs}}$. In realistic cases, the amplification will not exceed $\sim 20\%$ or so (see Fig. \ref{fig:intensity25}), implying that, since $h_{0} \propto B_{\star}^2$, the difference in the magnetic field estimate would likely be no more than $\sim 10\%$. 
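The $\sim 10\%$ figure follows directly from the quadratic scaling $h_{0} \propto B_{\star}^2$: an unmodelled mean amplification $\langle |F| \rangle$ inflates the inferred field by a factor $\sqrt{\langle |F| \rangle}$. A minimal sketch, assuming an illustrative $\langle |F| \rangle = 1.2$ (i.e., the $\sim 20\%$ amplification quoted above):

```python
import math

def b_field_bias(mean_amp):
    """Fractional overestimate of B_star if lensing amplification is
    mistaken for intrinsic strain: h0 scales as B^2, so the inferred
    field scales as sqrt(h0)."""
    return math.sqrt(mean_amp) - 1.0

# A ~20 per cent mean amplification (<|F|> ~ 1.2, illustrative value)
# biases the inferred magnetic field by roughly 10 per cent:
print(round(100 * b_field_bias(1.2), 1))  # ~9.5 per cent
```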
Nevertheless, the signal-to-noise ratio (SNR) of the system, given by \citep{jaran98} \begin{equation} \label{eq:snr} \text{SNR} \simeq \sqrt{ \frac{2}{S_{n}(f_{\text{GW}})} \int^{T_{\text{obs}}}_{0} [h(t)^2] dt }, \end{equation} may non-negligibly increase for large amplifications. Considering the trajectory defined within Fig. \ref{fig:intensity25}, for example, we estimate that the relative increase in the SNR due to lensing for $f_{\text{GW}} = 1$ kHz and $T_{\text{obs}} = 2$ yr reads \begin{equation} \label{eq:snrex} \begin{aligned} \frac {\text{SNR}(F \neq 1)} {\text{SNR}(F = 1)} &\approx \sqrt{\frac { \int^{T_{\text{obs}}}_{0} |F(t)|^2 h_{0}^2 \sin^2(f_{\text{GW}} t) dt} {\int^{T_{\text{obs}}}_{0} h_{0}^2 \sin^2( f_{\text{GW}} t) dt}} \\ &= 1.13. \end{aligned} \end{equation} For the first $T_{\text{obs}} = 1$ yr, we find instead $\text{SNR}(F \neq 1)/\text{SNR}(F = 1) = 1.07$. While these SNR increases are marginal, they could be sufficient to propel an otherwise borderline source above the detection threshold \citep{lasky15}. As seen in Fig. \ref{fig:phase25} however, these increases may be offset by the fact that coherence is lost due to lensing-induced phase wandering. Depending on the gradient $d \theta_{F} / d t$, it may be necessary to break the data into many shorter segments, thereby actually reducing the overall sensitivity \citep{suva16,dre18}. For greater values of the total lensing mass over a fixed area, we generally find that the SNR increase is higher. For large SNR ($\gtrsim 10$), parameter-estimation errors that may result if lensing were not taken into account can be calculated through a Fisher analysis \citep{tak03}.
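Because the carrier $\sin^2$ averages to $1/2$ over the many cycles contained in $T_{\text{obs}}$, the ratio in equation \eqref{eq:snrex} reduces to $\sqrt{\langle |F(t)|^2 \rangle}$, the time-averaged flux along the trajectory. The sketch below evaluates this for a hypothetical, smooth amplification profile (a Gaussian bump peaking at $|F|^2 = 1.4$ one year into a two-year observation); the actual $F(t)$ comes from the lensing integral, so the numbers are purely illustrative:

```python
import math

def snr_boost(flux, t_grid):
    """Relative SNR increase from eq. (snr): for a rapidly oscillating
    carrier, sin^2 averages to 1/2, so the ratio reduces to
    sqrt(<|F(t)|^2>), the time-averaged flux along the trajectory."""
    return math.sqrt(sum(flux) / len(t_grid))  # uniform grid -> simple mean

# Hypothetical amplification profile (illustrative only):
T = 2.0  # years
ts = [T * i / 9999 for i in range(10000)]
flux = [1.0 + 0.4 * math.exp(-((t - 1.0) / 0.15) ** 2 / 2) for t in ts]
print(round(snr_boost(flux, ts), 3))  # ~4 per cent boost for this toy profile
```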
Given the inner product $( \cdot | \cdot )$, well approximated by the integral in expression \eqref{eq:snr} for a monochromatic source \citep{jaran98}, one can define the Fisher matrix, $\Gamma_{ij} = \left( \frac {\partial h} {\partial q^{i}} | \frac {\partial h} {\partial q^{j}} \right)$, where $\boldsymbol{q}$ is a vector of parameters, which includes the source parameters (e.g., $\varepsilon$), location parameters (e.g., position relative to solar system barycenter), and lensing parameters (e.g., $M_{k}$). In general, these are not all independent. For example, the intensity pattern depends intimately on both $f_{\text{GW}}$ and the $M_{k}$. Regardless, the relative error on any given $q^{k}$ is estimable through $(\Delta q^{k})^{2} = (\Gamma^{-1})_{kk}$. Unfortunately, a reliable calculation of $\Gamma_{ij}$ requires one to build many amplification profiles so that derivatives with respect to the lens parameters can be taken, which is beyond our current computational capacity. Nevertheless, using the methodology described herein, a thorough exploration of the parameter space and relative errors could be performed; see Appendix A in \cite{tak03} for a Fisher analysis in the case of a single point-mass lens. Finally, we note that a misinterpretation of a time-varying $h_{0}$ as being intrinsic to the source, as opposed to being a result of lensing, may have important consequences for the evolutionary modelling of the system. For example, in actively accreting systems that nevertheless show little variation in the spin frequency, it has been argued that GW backreaction may be a key factor in maintaining spin equilibrium \citep{bild98,git19}. The validity of this scenario can be tested, for any given system, if the braking index, $n_{b} = \nu_{\star} \ddot{\nu}_{\star} / \dot{\nu}_{\star}^2$, can be independently measured: GW-dominated spindown implies $n_{b} \approx 5$. 
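For the simplest case, the Fisher recipe can be carried out in closed form. Restricting to the toy parameter set $\boldsymbol{q} = (h_{0}, \phi)$ for a monochromatic signal $h(t) = h_{0} \sin(2\pi f t + \phi)$, the long-time averages $\langle \sin^2 \rangle = \langle \cos^2 \rangle = 1/2$ and $\langle \sin \cos \rangle = 0$ make $\Gamma_{ij}$ diagonal, and the amplitude error reduces to $\Delta h_{0}/h_{0} = 1/\text{SNR}$. The sketch below (the noise value $S_{n}$ is an assumed, illustrative figure; lens parameters are omitted since including them would require rebuilding the amplification map per derivative) verifies this:

```python
import math

def fisher_sketch(h0, sn, t_obs):
    """Fisher errors for h(t) = h0*sin(2*pi*f*t + phi) over q = (h0, phi),
    using <sin^2> = <cos^2> = 1/2 and <sin*cos> = 0, so that
    Gamma = (2/Sn) * diag(T/2, h0^2 * T/2). Returns (SNR, dh0/h0)."""
    g_h0h0 = (2.0 / sn) * (t_obs / 2.0)  # (dh/dh0 | dh/dh0)
    # Gamma is diagonal, so the error is just the inverse square root:
    dh0 = 1.0 / math.sqrt(g_h0h0)
    snr = h0 * math.sqrt(t_obs / sn)
    return snr, dh0 / h0

# Illustrative values: h0 = 1e-26, Sn = 1e-47 Hz^-1, T_obs = 1 yr.
snr, rel_err = fisher_sketch(h0=1e-26, sn=1e-47, t_obs=3.156e7)
print(round(snr, 2), round(rel_err * snr, 6))  # dh0/h0 = 1/SNR exactly
```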
However, if $\varepsilon$ were to intrinsically vary over sufficiently short timescales, $n_{b}$ could differ significantly from 5, even for GW-dominated emission \citep{ara16}. Simultaneous measurements of $n_{b}$ and $\varepsilon$ could then be used as an independent means to quantify the wave-like effects of lensing. \section{Discussion} It is hoped that continuous GW signals from (magnetically-deformed or otherwise) neutron stars may be detected in the near future. Owing to their comparatively weak strain \eqref{eq:massquadamp}, a measurement likely requires observational monitoring for at least a year with current technology \citep{suv21,sold21}, during which the relative motion between the source and the detector is important, especially if matched filtering is to be carried out \citep{jaran98,dre18}. Although rare, it is possible that intermittent lensing by one or more stars may take place for sources located within/behind the Galactic bulge or a globular cluster, modulating the resulting signal to a degree that depends on the GW frequency, the lens distribution, and the source velocity \citep{pac86b,dep01,meena20}. Importantly, if the GW wavelength, $c/f_{\text{GW}}$, is comparable to or greater than the Schwarzschild radius, $2 G M_{L}/c^2$, of any given microlens, the wave will be diffracted by the gravitational `slit' \citep{nak99,tak03}. For $M_{L} \sim M_{\odot}$, this condition is fulfilled even for $f_{\text{GW}} \lesssim 10^2$ kHz. If $f_{\text{GW}}$ is too low relative to $M_{L}$ however, the resulting amplifications will be virtually invisible; $\sim$kHz band radiation lensed by stars therefore lies in something of a sweet spot. Geometric optics is inappropriate in this case, and the relevant mathematical object is instead the Fresnel-Kirchhoff diffraction integral \eqref{eq:diffint}. This integral is solved in this paper for a variety of microlens distributions (Figs.
\ref{fig:grav25} and \ref{fig:grav250}) using the PL approach described by \cite{feld19,feld20a,feld20b}. Depending on the nature of the macrolens and the neutron star trajectory relative to it, we find that wave-optical lensing may warp the waveform to make it appear as though the ellipticity were varying (see Figs. \ref{fig:intensity25} and \ref{fig:intensity250}), which has implications when estimating the intrinsic properties of the neutron star through \eqref{eq:massquad} and similar formulae \citep{cio09,mast11,lander13}. Furthermore, the overall SNR during an observational run may increase for large amplification factors; see expression \eqref{eq:snr}. If continuous GWs emitted from a source $\sim$10 kpc away were lensed by (sections of) a cluster similar to that of 47 Tuc ($D_{OL} \sim 4$ kpc, $n \sim 10^5$), for example, we find that potentially large amplifications could be achieved (see Fig. \ref{fig:intensity2d250}). \cite{ras21} estimate, for 47 Tuc specifically, that the self-lensing rate for neutron stars is $\sim 2 \times 10^{-3} \text{ yr}^{-1}$. Any flux enhancement may however be offset by the loss of coherence due to phase modulation (Fig. \ref{fig:phase25}), which would inhibit matched filtering. Aside from GWs, wave-like effects are also important in the characterisation of radio sources in a variety of circumstances \citep{mun16,jow20,jow21}. For planetary-mass microlenses and $\lesssim$ GHz band sources, such as radio pulsars or fast radio bursts (FRBs), lensing is likely to operate in the diffractive regime where geometric optics fails to capture the salient features \citep{feld19}. The implementation of the core PL ideas is largely the same in this case, though for radio sources one typically requires a much smaller $\Delta \theta$ [by a factor proportional to $ (M_{\odot} / M_{L}) (f_{\text{GW}}/f_{\text{radio}})$] to adequately resolve the angular variations (see Sec. 4.1).
An exploration of such cases for $n \gtrsim 10$ is currently beyond our available computational resources; see \cite{feld20b} for $n \leq 3$ simulations and a discussion on the relevant challenges faced when extending to $n > 3$. It is worth pointing out that the formalism presented here is not restricted to gravitational lensing, but can also be applied to the case of \emph{plasma} lensing, where a similar Fresnel-Kirchhoff integral arises; see also \cite{jow21} for recent applications of PL theory to plasma lensing. We close by noting that there are multiple worthwhile avenues for extension of this work. For the Galactic sources considered here, the lensing probability is at most a few times $10^{-6}$ for sources located within the Galactic bulge \citep{pac86b,dep01}. This probability increases by several orders of magnitude for extragalactic sources, where one might expect anything up to $\lesssim 10^{7}$ microlensing events to take place \citep{pac86}. While continuous waves from sources at redshifts $z \gg 0$ will not be observable for the foreseeable future, burst signals from merger events are now routinely detected out to cosmological distances \cite[the record holder being GW190521 at a redshift $z = 0.82^{+0.28}_{-0.34}$;][]{ligo20}. Designing numerically efficient tools for the study of ever-higher $n$ simulations is thus of astrophysical importance \citep{guo20}. In any case, the results presented here are mostly intended as a proof-of-principle, and thus the lens-plane distributions we have adopted (Figs. \ref{fig:grav25} and \ref{fig:grav250}), while loosely resembling what might be expected of open or globular clusters, are also rather arbitrary. It would be interesting to instead model the microlenses based on astrophysical data. It would also be worth including the optical scalars $\kappa$ and $\gamma$, which might be sizeable fractions of unity in realistic scenarios \citep{lew10,lew20}. 
Finally, in an effort to better understand the nature of the GWs in strong gravitational fields, one might ambitiously try to extend the PL formalism beyond Newtonian backgrounds described by \eqref{eq:metric}; the lensing theory developed by \cite{hart19} and \cite{cus20} would be useful in this direction. More ambitiously still, wave-optical lensing in theories beyond general relativity could also be explored \citep{ezq20}. \section*{Acknowledgements} I am indebted to Mark Walker and Artem Tuntsov for introducing me to several key references about wave optics, microlensing, and Picard-Lefschetz theory.
\section{Introduction} Decades ago, elliptical galaxies were thought to contain very little, if any, gas. Studies of galaxy formation, therefore, often focussed on the stellar properties. However, we now know that a large fraction of the baryonic mass in massive galaxies is believed to be in diffuse form. Thus a complete view of galaxy formation and evolution necessarily incorporates both the stars and hot gas and an understanding of the processes by which these phases interact (McCarthy et al.\ 2010). Cooling-flow clusters are common in the local Universe and massive central cluster galaxies (CCGs) are often found at the centres of these systems (Peres et al.\ 1998). If the central cluster density is high enough, intracluster gas can condense and form stars at the bottom of the potential well. Since the radiative cooling times for intracluster gas are short enough that gas can cool and settle to the cluster centre (Edge, Stewart $\&$ Fabian 1992), it has been suggested that the large envelopes of CCGs may arise from the gradual deposition of this cool gas. More recently, high spectral resolution \textit{XMM-Newton} observations showed that the X-ray gas in cluster centres does not cool significantly below a threshold temperature of $kT\sim1-2$ keV (Jord\'an et al.\ 2004, and references therein). This initially contradicted the model that these young stars are formed in cooling flows. However, it is possible that star formation is ongoing in cool-core clusters at a much reduced rate (Bildfell et al.\ 2008). Previous studies have reported several examples of ongoing star formation in CCGs, in particular those hosted by cooling-flow clusters (Cardiel, Gorgas $\&$ Arag\'{o}n-Salamanca 1998; Crawford et al.\ 1999; McNamara et al.\ 2006; Edwards et al.\ 2007; O'Dea et al.\ 2008; Bildfell et al.\ 2008; Pipino et al.\ 2009; Loubser et al.\ 2009). However, the origin of the gas fuelling this star formation is not yet known.
Possible explanations include processes involving cooling flows or cold gas deposited during a merging event (Bildfell et al.\ 2008). These processes will leave different imprints in the dynamical properties, the detailed chemical abundances, and the star formation histories of these galaxies, which can be studied using high-quality spectroscopy (Loubser et al. 2008; 2009; 2012). Observations that support this idea are blue- and ultraviolet-colour (UV-colour) excesses observed (indicative of star formation) in the central galaxy of Abell 1795 by McNamara et al.\ (1996) and molecular gas detected in 10 out of 32 central cluster galaxies by Salom\'{e} $\&$ Combes (2003). The observations by Cardiel et al.\ (1998) were consistent with an evolutionary sequence in which star formation bursts, triggered by radio sources, take place several times during the lifetime of the cooling flow in the centre of the cluster. However, McNamara $\&$ O'Connell (1992) found only colour anomalies with small amplitudes, implying star formation rates that account for at most a few percent of the material that is cooling and accreting onto the central galaxy. Cooling-flow models for CCG formation also imply the formation of larger numbers of new stars, for which there is no good observational evidence (Athanassoula, Garijo $\&$ Garc\'{\i}a-G\'{o}mez 2001). The central cluster galaxies often host radio-loud AGN, which may account for the necessary heating to counteract radiative cooling (Von der Linden et al.\ 2007). In summary, cooling flow models predict more cooled gas than is observed (B\"{o}hringer et al.\ 2001). Thus, it is possible that the mass deposited into the molecular clouds is heated by one of several processes: hot young stellar populations, radio-loud AGN, X-rays or heat conduction from the intracluster medium itself, shocks and turbulent mixing layers, and cosmic rays. Therefore, only a small fraction of the cooled gas is detected (Crawford et al. 2005; Ferland et al. 2009).
Thus, CCGs lie at the interface where it is crucial to understand the role of feedback and accretion in star formation. Within these cooling-flow CCGs, cool molecular clouds, warm ionised hydrogen, and the cooling intracluster medium are related. A complete view of the star formation process incorporates the stars with the gas and an understanding of the processes by which these phases interact, and therefore, requires information from several wavelength regimes. In conclusion, although CCGs are probably not completely formed in cooling flows, the flows play an important role in regulating the rate at which gas cools at the centres of groups and clusters. In the $\Lambda$CDM cosmology it is now understood that local massive haloes assemble late through the merging of smaller systems. In this picture, cooling flows seem to be the main fuel for galaxy mass growth at high redshift. This source is removed only at low redshifts in group or cluster environments, due to AGN feedback (De Lucia $\&$ Blaizot 2007; Voit et al.\ 2008). Indeed, if AGN feedback is not properly included in hydrodynamical simulations, an apparently bluer BCG is formed as a result of an accelerated late stellar birthrate even after the epoch of quiescent star formation (Romeo et al.\ 2008). We proposed and obtained IFU observations of the central few kiloparsecs of the ionised nebulae in active CCGs in cooling-flow clusters. These observations will map the morphology, kinematics and ionisation state of the nebulae to gain an understanding of their formation, heating and relationship to the cluster centre. We selected galaxies from the McDonald et al.\ (2010) study, which consisted of an H$\alpha$ survey of 23 cooling flow clusters. Amongst their conclusions, McDonald, Veilleux $\&$ Rupke (2012) find a strong correlation between the H$\alpha$ luminosity contained in filaments and the X-ray cooling flow rate of the cluster, suggesting that the warm, ionised gas is linked to the cooling flow.
We chose objects with confirmed extended H$\alpha$ emission, and with near-IR (2MASS), ultraviolet (GALEX), X-ray data (Chandra), and in some cases VLA 1.4 GHz fluxes, already available (McDonald et al. 2010). Detailed properties of the host clusters, which are reported to influence the activity in the central galaxy, such as central cooling times and the offset between the cluster X-ray peak and the central galaxy, have been derived. Farage et al.\ (2010) presented IFU observations of one nearby BCG showing LINER-like emission, and Brough et al.\ (2011) presented IFU data on four CCGs at z $\sim 0.1$ to calculate the dynamical masses of CCGs and measure their stellar angular momentum (although none of their four CCGs contained emission lines). The SAURON sample includes only one CCG (M87). Hatch, Crawford $\&$ Fabian (2007) and Edwards et al. (2009) analysed data comparable to the data presented here. Hatch et al.\ (2007) present IFU observations of six emission-line nebulae in cool-core clusters (selected from the optical ROSAT follow-up by Crawford et al.\ 1999) with OASIS on the WHT (with a limited wavelength range centred on H$\alpha$). Edwards et al.\ (2009) present IFU observations of nine CCGs in cooling and non-cooling clusters (also from Crawford et al.\ 1999) and within 50 kpc of the X-ray centre, with GMOS IFU and OASIS on WHT (most galaxies with wavelength ranges also limited to the H$\alpha$ region, but three also included H$\beta$). Emission line maps of morphology, kinematics, often line ratios, and the stellar continuum have been published for Abell 496 by Hatch et al.\ (2007), and for Abell 2052 by Edwards et al.\ (2009). These studies concluded that, within their observed wavelength ranges, no proposed heating mechanism reproduces all the emission-line properties of a single source.
Thus, a single dominant mechanism may not apply to all CCG nebulae, and there may be a mixture of heating mechanisms acting within a single nebula (Wilman, Edge $\&$ Swinbank 2006; Hatch et al.\ 2007). To obtain more information that will enable the dominant mechanism(s) to be identified, we observed more line ratios over a longer wavelength range (around H$\alpha$ and H$\beta$, which also allowed extinction to be measured appropriately). We will add the detailed stellar population analysis, and place the derived information from the optical spectra in context with multiwavelength data over the full spectrum, in a future paper, to explain the diverse nature of these galaxies. All the data for Abell 780 and 1644 are new, and all four of the extinction maps are new. The construction of ionisation diagrams and the analysis of ionisation processes with shock and pAGB models are also new. Our exposure times (improved from previous studies) are shown in Table \ref{table:objects}. These data are also complementary to the long-slit spectroscopy (on Keck and Magellan) along the H$\alpha$ filaments of the four objects (studied here) by McDonald et al.\ (2012). We introduce the sample and detail the data reductions in Sections \ref{Sample} and \ref{reduction}. We then derive the optical extinction as well as the line strengths in Section \ref{extinction}. We proceed to discuss the four individual cases in Section \ref{figures_NVSS}, and identify the mechanism producing the emission and the ionised gas in these four CCGs in Section \ref{ionisation}. We summarise the findings of this paper in Section \ref{summary}. We have used the following set of cosmological parameters: $\Omega_{m}$ = 0.3, $\Omega_{\Lambda}$ = 0.7, H$_{0}$ = 70 km s$^{-1}$ Mpc$^{-1}$.
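For the flat cosmology stated above, the angular scale at each redshift follows from the angular-diameter distance, $D_{A} = D_{C}/(1+z)$ with $D_{C} = (c/H_{0})\int_{0}^{z} dz'/E(z')$. The sketch below (an illustration only; sub-percent residuals against Table \ref{table:objects} may reflect rounding or the calculator originally used) recovers the listed linear scales:

```python
import math

def kpc_per_arcsec(z, h0=70.0, om=0.3, ol=0.7, n=10000):
    """Angular-diameter scale for a flat cosmology with
    Omega_m = 0.3, Omega_Lambda = 0.7, H0 = 70 km/s/Mpc:
    D_C = (c/H0) * int_0^z dz'/E(z'),  D_A = D_C/(1+z)."""
    d_h = 299792.458 / h0                  # Hubble distance [Mpc]
    dz = z / n
    integral = 0.0
    for i in range(n):                     # midpoint rule
        zi = (i + 0.5) * dz
        integral += dz / math.sqrt(om * (1 + zi) ** 3 + ol)
    d_a = d_h * integral / (1 + z)         # angular-diameter distance [Mpc]
    arcsec = math.pi / (180.0 * 3600.0)    # 1 arcsec in radians
    return d_a * 1e3 * arcsec              # kpc per arcsec

# Close to the linear scales listed in Table 1, e.g. ~0.654 kpc/arcsec
# for Abell 0496 at z = 0.0329:
print(round(kpc_per_arcsec(0.0329), 3))
```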
\section{Sample} \label{Sample} \begin{figure*} \begin{center} \mbox{\subfigure[Abell 0496 -- MCG-02-12-039]{\includegraphics[width=4.2cm,height=4.2cm]{MCG_8.pdf}}\quad \subfigure[Abell 0780 -- PGC026269]{\includegraphics[width=4.2cm,height=4.2cm]{PGC026_8.pdf}}\quad \subfigure[Abell 1644 -- PGC044257]{\includegraphics[width=4.2cm,height=4.2cm]{PGC044_8.pdf}}\quad \subfigure[Abell 2052 -- UGC09799]{\includegraphics[width=4.2cm,height=4.2cm]{UGC_8.pdf}}} \mbox{\subfigure[Abell 0496 -- MCG-02-12-039]{\includegraphics[width=4.2cm,height=4.2cm]{MCG0212039_B.pdf}}\quad \subfigure[Abell 0780 -- PGC026269]{\includegraphics[width=4.2cm,height=4.2cm]{PGC026269_B.pdf}}\quad \subfigure[Abell 1644 -- PGC044257]{\includegraphics[width=4.2cm,height=4.2cm]{PGC044257_B.pdf}}\quad \subfigure[Abell 2052 -- UGC09799]{\includegraphics[width=4.2cm,height=4.2cm]{UGC09799_B.pdf}}} \mbox{\subfigure[Abell 0496 -- MCG-02-12-039]{\includegraphics[width=4.2cm,height=6.0cm]{MCGcontinuum.pdf}}\quad \subfigure[Abell 0780 -- PGC026269]{\includegraphics[width=4.2cm,height=6.0cm]{PGC026continuum.pdf}}\quad \subfigure[Abell 1644 -- PGC044257]{\includegraphics[width=4.2cm,height=6.0cm]{PGC044continuum.pdf}}\quad \subfigure[Abell 2052 -- UGC09799]{\includegraphics[width=4.2cm,height=6.0cm]{UGCcontinuum.pdf}}} \caption{DSS images of the four targets (north above, and east to the left for all images). The upper plots show 8 $\times$ 8 arcmin field of views, and the middle plots show the targets with the 5 $\times$ 3.5 arcsec IFU field of view overlayed. The top of the IFU FOV is indicated with a blue arrow. The lower plots show continuum images made from the IFU cubes (width of 50 \AA{} at 6350 \AA{}), smoothed spatially with a Gaussian with width 3 spaxels (which corresponds to 0.3 arcsec) and using the Sauron colourmap.} \end{center} \label{fig:Thumbnails} \end{figure*} We have chosen our sample of active central cluster galaxies from the H$\alpha$ imaging presented in McDonald et al. 
(2010), who, in turn, selected their sample from White, Jones $\&$ Forman (1997). McDonald et al.\ (2010) enforced the cuts $\delta < +35 \deg$ and $0.025 < z < 0.092$, after which they selected 23 clusters to cover the full range of properties, from very rich clusters with high cooling rates to low-density clusters with small cooling flows. Their classical cooling rates range from 6.3 to 431 M$_{\odot}$ yr$^{-1}$, which means that, while covering a large range in properties, their sample consisted only of cooling flow clusters. From their 23 cooling flow clusters, we selected all the clusters with clearly detected H$\alpha$ in their centres (whether filamentary, extended or nuclear emission). In addition, all of these central galaxies have optical imaging, near-IR (2MASS) and UV (GALEX) data available. Thereafter, we selected all the central galaxies with detailed X-ray (Chandra) data, as well as VLA 1.4 GHz fluxes, available. This resulted in a sub-sample of 10 galaxies. We observed four of these galaxies with the GMOS IFU (as shown in Figure \ref{fig:Thumbnails}). We simply chose the objects with the most auxiliary information available. This additional information will be added in the future paper (where the underlying stellar populations will be analysed in detail) and might help to constrain the ionisation mechanisms. The rest-wavelength range of the emission lines of interest is 4860--6731 \AA{} (H$\beta$ to [SII]$\lambda$6731). The ratio of the forbidden [NII]$\lambda$6584 line to H$\alpha$ depends on the metallicity of the gas, the form of the ionising radiation, and the star formation rate. The relative strength of the [OIII]$\lambda$5007 and H$\beta$ lines reveals further information on the excitation mechanism and gas metallicity. The role of AGN photoionisation is confined to the central 2 -- 3 arcsec of active, massive nearby elliptical galaxies (Sarzi et al.\ 2006). 
Thus, IFU observations are ideal and will also allow us to study the 2D distribution of the ionising radiation. In addition to the information from the emission lines, we are able to extract the underlying stellar absorption spectra using the improved GANDALF code (Sarzi et al.\ 2006). Thus, the kinematics and morphology of the hot ionised gas and stellar components can be correlated. \begin{table*} \begin{footnotesize} \begin{tabular}{l c c c c c c c c} \hline Object & Cluster & Redshift & Linear scale & R$_{off}$ & T$_{X}$ & Classical cooling rate & Spectrally determined & Exposure time \\ & & \multicolumn{1}{c}{$z$} & \multicolumn{1}{c}{(kpc/arcsec)} & \multicolumn{1}{c}{(Mpc)} & \multicolumn{1}{c}{(keV)} & \multicolumn{1}{c}{(M$_{\odot}$ yr$^{-1}$)}& \multicolumn{1}{c}{(M$_{\odot}$ yr$^{-1}$)} &\multicolumn{1}{c}{(seconds)} \\ \hline MCG-02-12-039 & Abell 0496 & 0.0329 & 0.654 & 0.031 & 4.8 & 134 & 1.5 & 7 $\times$ 1800 \\ PGC026269 & Abell 0780 & 0.0539 & 1.059 & 0.015 & 4.7 & 222 & 7.5 & 6 $\times$ 1800 \\ PGC044257 & Abell 1644 & 0.0474 & 0.935 & 0.009 & 5.1 & 12 & 3.2 & 6 $\times$ 1800 \\ UGC09799 & Abell 2052 & 0.0345 & 0.685 & 0.038 & 3.4 & 94 & 2.6 & 6 $\times$ 1800 \\ \hline \end{tabular} \caption{Galaxies observed with the Gemini South telescope. All four galaxies show extended H$\alpha$ emission (McDonald et al.\ 2010). The cluster X-ray temperature (T$_{X}$) and classical cooling rates (\.{M}) are from White et al.\ (1997). The spectrally determined cooling rates are from McDonald et al.\ (2010). 
The values for R$_{off}$ are from Edwards et al.\ (2007), with the exception of PGC044257, which is from Peres et al.\ (1998).} \label{table:objects} \end{footnotesize} \end{table*} \begin{table*} \centering \begin{footnotesize} \begin{tabular}{l c c c c c} \hline Object & Cluster & Rest wavelength range & Foreground extinction (mag) & Average extinction (mag) & Radio flux \\ & & \AA{} & E(B-V)$_{galactic}$ & measured E(B-V)$_{total}$ & mJy \\ \hline MCG-02-12-039 & Abell 0496 & 4648 -- 7540 & 0.140 & 0.425 & 121\\ PGC026269 & Abell 0780 & 4743 -- 7693 & 0.042 & 0.210 & 40800\\ PGC044257 & Abell 1644 & 4713 -- 7646 & 0.071 & 0.195 & 98\\ UGC09799 & Abell 2052 & 4655 -- 7552 & 0.037 & 0.460 & 5500\\ \hline \end{tabular} \caption{Further properties of the CCGs observed on Gemini South. Radio fluxes are from the NVSS.} \label{table:objects2} \end{footnotesize} \end{table*} \section{Observations and data reduction} \label{reduction} The data were obtained with the GMOS IFU on the Gemini South telescope in semester 2011A (February to July 2011). The GMOS IFU in 1-slit mode was used, and allowed us to map at least a 3 kpc wide region in the centre of the target galaxies with simultaneous coverage of the 4600--6800 \AA{} range in the target rest frame (using the B600 grating) with a single pointing. This resulted in a spectral resolution of 1.5 \AA{}. This spectral resolution (81 km s$^{-1}$) is poorer than that of Edwards et al.\ (2007, who had a much shorter wavelength range), and much better than that of Hatch et al.\ (2006) (223 -- 273 km s$^{-1}$). The IFU field of view is 5 $\times$ 3.5 arcsec, and this area is divided into 500 lenslets (with another 250 lenslets offset for sky measurements). We obtained six exposures per galaxy, with the exception of MCG-02-12-039, for which we obtained seven, resulting in a total of 12500 galaxy spectra. The targets and exposure times are shown in Table \ref{table:objects}. 
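The quoted velocity resolution follows from the dispersion via $\Delta v = c\,\Delta\lambda/\lambda$. A minimal sketch of this conversion (the reference wavelength of 5550 \AA{}, near the middle of the coverage, is our assumption):

```python
# Convert a spectral resolution element (in Angstrom) into a velocity
# resolution (in km/s) via dv = c * dlambda / lambda.
C_KMS = 299792.458  # speed of light in km/s

def velocity_resolution(dlambda_angstrom, lambda_angstrom):
    """Velocity resolution in km/s for a resolution element dlambda at lambda."""
    return C_KMS * dlambda_angstrom / lambda_angstrom

# 1.5 A at an (assumed) reference wavelength of 5550 A gives ~81 km/s,
# matching the value quoted in the text.
print(round(velocity_resolution(1.5, 5550.0)))
```

Note that the same 1.5 \AA{} element corresponds to a somewhat finer velocity resolution at the red end of the wavelength range, so the quoted figure is wavelength dependent.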
Our integration time is three times that of Edwards et al.\ (2009), and more than five times that of Hatch et al.\ (2007), with a much bigger instrument. In addition to the targets, the necessary bias, flat-field, twilight flat-field and arc frames at two different central wavelengths were also observed, as well as a spectrophotometric standard star for flux calibration. Two central wavelength settings were used to avoid losing crucial spectral information in the two CCD gaps. For more detail on the GMOS IFU data reduction process see Gerssen et al.\ (2006). The basic data reduction was done using the GMOS package in IRAF. The IFU sky-to-detector mapping was stored in the data array prior to data reduction. Several bias frames were averaged and subtracted directly from the raw data, to correct the zero point for each pixel. Care was taken to exclude from the mean bias frame any raw bias frames that drifted measurably with time. Frames were mosaicked, and the overscan regions were trimmed. Flat-field and twilight flat-field frames were used to correct for differences in sensitivity, both between detector pixels and across the IFU field. The majority of the cosmic rays were rejected in the individual frames before sky subtraction, using the Gemini cosmic ray rejection routine. The remainder of the cosmic rays were eliminated using the LACosmic routine (van Dokkum 2001), with an IRAF script that retained the multi-extension FITS format for further reductions. The sets of 2D spectra were calibrated in wavelength using the arc lamp spectra for the two different central wavelength settings. The IFU elements were thereafter extracted from the raw data format to a format more convenient for further processing. Sky emission lines and continuum were removed by averaging the sky spectrum over a number of spatial pixels (from the offset sky fibres on the edge of the science field) to reduce the noise level, before subtracting it from all the spatial pixels. 
This process adds little extra noise to the result, since the observations were obtained in dark time (the variability of the sky region was minimal) and 250 spaxels were averaged in the sky subtraction. The error contribution of the sky subtraction is therefore $\frac{1}{\sqrt{250}}\times$ the error on one sky spaxel. A spectrophotometric standard star (LTT4816) was used to correct the measured counts for the combined transmission of the instrument, telescope and atmosphere as a function of wavelength. We reduced the standard star observation with the same instrument configuration as the corresponding scientific data. A 1D spectrum was extracted by adding the central spatial pixels from the standard star observation, and it was used to convert the measured counts from the galaxy spectra into fluxes in erg cm$^{-2}$ s$^{-1}$ \AA{}$^{-1}$ units. The reduced 2D arrays were transformed back to a physical coordinate grid ($x, y, \lambda$ datacube) before scientific analysis, while also correcting for atmospheric dispersion. The latter (also called differential refraction) causes the position of a target within the IFU field to vary with wavelength. This correction was necessary as data were taken at different airmasses throughout the observing nights. The spatial offset as a function of wavelength was determined using an atmospheric refraction model (given the airmass, position angle and other parameters) from the SLALIB positional astronomy library. Each hexagonal spaxel (each spatial element) was 0.2 arcsec, and this was subsampled onto a rectangular grid of 0.1 arcsec per spaxel when the separate exposures were combined. The exposures were combined (median averaged) using a centroid algorithm to calculate the shifts in $x$ and $y$, as well as the shift in $\lambda$ between the exposures at the two different wavelength settings. 
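The error budget of this sky subtraction can be sketched as a simple propagation; the assumption of independent, identically distributed noise on each of the 250 offset sky spaxels is ours:

```python
import math

def sky_template_error(sigma_one_spaxel, n_spaxels=250):
    """Standard error of a mean sky spectrum averaged over n_spaxels spaxels.

    Averaging N independent sky spaxels reduces the noise of the subtracted
    sky template by a factor 1/sqrt(N) relative to a single spaxel.
    """
    return sigma_one_spaxel / math.sqrt(n_spaxels)

# With 250 offset sky fibres, the subtracted sky template contributes only
# ~6% of the single-spaxel sky noise to each science spectrum.
print(round(sky_template_error(1.0), 3))
```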
No additional cosmic rays were visible after the LACosmic task was run on the individual cubes, so cosmic rays were not removed during this step. The cubes were also converted into RSS (row-stacked spectra) format for further data reductions in IDL. Each spaxel was averaged with its eight neighbouring spaxels (improving the S/N by a factor of three), which effectively smooths over 0.3 arcsec -- still below the average seeing ($\sim$1 arcsec), but only larger regions will be analysed further. \begin{figure} \centering \mbox{\subfigure[6548.2 \AA{}]{\includegraphics[height=3.3cm,width=2.3cm]{Ha1.pdf}}\quad \subfigure[6550.9 \AA{}]{\includegraphics[height=3.3cm,width=2.3cm]{Ha2.pdf}}\quad \subfigure[6553.5 \AA{}]{\includegraphics[height=3.3cm,width=2.3cm]{Ha3.pdf}}} \mbox{\subfigure[6556.2 \AA{}]{\includegraphics[height=3.3cm,width=2.3cm]{Ha4.pdf}}\quad \subfigure[6558.8 \AA{}]{\includegraphics[height=3.3cm,width=2.3cm]{Ha5.pdf}}\quad \subfigure[6561.4 \AA{}]{\includegraphics[height=3.3cm,width=2.3cm]{Ha6.pdf}}} \mbox{\subfigure[6564.1 \AA{}]{\includegraphics[height=3.3cm,width=2.3cm]{Ha7.pdf}}\quad \subfigure[6566.7 \AA{}]{\includegraphics[height=3.3cm,width=2.3cm]{Ha8.pdf}}\quad \subfigure[6569.3 \AA{}]{\includegraphics[height=3.3cm,width=2.3cm]{Ha9.pdf}}} \caption{Slices of width 1 \AA{} (smoothed spatially with Gaussian FWHM 1.5 spaxels and using a SAURON colourmap) of PGC026269 showing the wavelength and spatial scale over which the transition from [NII]$\lambda$6548 to H$\alpha$ emission occurs, and how the morphology of H$\alpha$ changes. 
Although the continuum emission is usually smooth, the morphologies of the line emission are not uniform.} \label{fig:Thumbnails2} \end{figure} \section{Line measurements and internal extinction} \label{extinction} To accurately measure the emission-line fluxes of the CCG spectra, we use a combination of the \textsc{ppxf} (Cappellari $\&$ Emsellem 2004) and \textsc{gandalf} (Gas AND Absorption Line Fitting algorithm; Sarzi et al.\ 2006) routines\footnote{We make use of the corresponding \textsc{ppxf} and \textsc{gandalf IDL} (Interactive Data Language) codes, which can be retrieved at http://www.leidenuniv.nl/sauron/.}. \textsc{gandalf} version 1.5 was used, as it enables a reddening correction to be performed and incorporates errors. This code treats the emission lines as additional Gaussian templates, and solves linearly at each step for their amplitudes and the optimal combination of stellar templates, which are convolved with the best stellar line-of-sight velocity distribution. The stellar continuum and emission lines are fitted simultaneously. All 985 stars of the MILES stellar library (S\'{a}nchez-Bl\'{a}zquez et al.\ 2006) were used as stellar templates, to automatically include $\alpha$-enhancement in the derived optimal template. The emission lines were masked while the optimal template was derived. The H$\alpha$ and [NII]$\lambda$6583 lines were fitted first, and the kinematics of all the other lines were tied to these lines, following the procedure described in Sarzi et al.\ (2006). However, in cases where the emission in the other lines was strong enough to measure velocity and velocity dispersion (as was mainly the case, except at the extreme edges of the data cube), this was calculated independently, as there is no a priori reason to expect the kinematics measured from all the lines to be the same (as they can originate in different regions). 
After the kinematics are fixed, a Gaussian template is constructed for each emission line at each iteration, and the best linear combination of both stellar and emission-line templates (with positive weights) is determined. This is done without assuming line ratios, except in the case of doublets, where their relative strength is fixed by the ratio of the corresponding transition probabilities. We have adapted the \textsc{gandalf} code to apply it to the GMOS IFU cubes over a longer wavelength range. All 1617 spaxels were collapsed together to obtain a 1D spectrum per cube; thereafter, all 985 stars from the MILES library were used to create a global optimal template for each galaxy. This global optimal template (and the stars it consisted of -- to account for varying weights over the spatial region) was then applied to all 1617 spectra per cube. The spectral types of the stars that make up the global optimal templates (from the MILES library) are shown in Table \ref{absorption}. \begin{table*} \centering \begin{footnotesize} \begin{tabular}{l c c c} \hline Object & Spectral types in the global optimal template\\ \hline MCG-02-12-039 & A0 Ia, G0, G1 Ib, G5, G8 III, K0, K0 III, K IIvw \\ PGC026269 & A0 Ia, K0 Ibpvar, M4 III \\ PGC044257 & B3 III, A5, F3 V, F6, F7 V, G0 Vw, G1 Ib, G5, G5 V, G8 III/IVw, K1 III \\ UGC09799 & B8 Ib, A5, F3 III, G0 Vw, G1 Ib, K0 III\\ \hline \end{tabular} \caption{Properties of the underlying stellar populations.} \label{absorption} \end{footnotesize} \end{table*} Some ellipticals contain dust in their centres that can be patchy, uniform or filamentary (Laine et al.\ 2003). The long wavelength range of the spectra allows us to constrain the amount of reddening using the observed decrement of the Balmer lines, which can be set to have an intrinsic decrement consistent with recombination theory by treating the lines as a multiplet. 
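At fixed kinematics, the amplitude solve described above reduces to a linear least-squares problem over the stellar and Gaussian emission-line templates. The following toy sketch illustrates the idea only; it is not the \textsc{gandalf} code, and the wavelength grid, template shapes and noise level are invented for illustration:

```python
import numpy as np

# Toy wavelength grid and templates (invented purely for illustration).
wave = np.linspace(6500.0, 6620.0, 600)

def gaussian(wave, centre, sigma):
    """Unit-amplitude Gaussian emission-line template at fixed kinematics."""
    return np.exp(-0.5 * ((wave - centre) / sigma) ** 2)

# One smooth "stellar" template plus Gaussian line templates whose centres
# and widths are held fixed (the kinematics having been fitted beforehand).
stellar = 1.0 - 2e-5 * (wave - 6560.0) ** 2
lines = [gaussian(wave, c, 2.0) for c in (6548.0, 6563.0, 6583.0)]
templates = np.column_stack([stellar] + lines)

# Synthetic "observed" spectrum: a known mixture plus Gaussian noise.
rng = np.random.default_rng(0)
truth = np.array([1.0, 0.3, 1.2, 0.9])
observed = templates @ truth + rng.normal(0.0, 0.01, wave.size)

# Best linear combination of stellar and emission-line templates.
# (GANDALF additionally enforces non-negative weights; in this well-posed
# toy problem the unconstrained solution is already non-negative.)
weights, *_ = np.linalg.lstsq(templates, observed, rcond=None)
print(np.round(weights, 2))
```

Because the line centres and widths are frozen, each iteration is a fast linear solve; only the kinematic parameters require a non-linear optimisation.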
The physical constraints on the emission from the higher-order Balmer lines also ensure that the strength of the corresponding absorption features is correctly estimated. We used the dust models of Calzetti et al.\ (2000) to calculate the flux attenuation values at the desired wavelength for any given $E(B-V)$ value (optional; see below). The Balmer decrement assumes case B recombination for a density of 100 cm$^{-3}$ and a temperature of 10$^{4}$ K, resulting in the predicted H$\alpha$/H$\beta$ ratio of 2.86 (Osterbrock 1989). The code can adopt either a single dust component, affecting both the stellar continuum and the emission-line fluxes, or in addition a second dust component that affects only the emission-line templates. We did not specify the Galactic extinction (from the NED database; Schlegel, Finkbeiner $\&$ Davis 1998) separately, but measured a single diffuse component, which gives the total extinction (hence including the foreground Galactic extinction). The total extinction measured from the Balmer decrement is shown in Figure \ref{fig:MCG_extinct}, and averages are noted in Table \ref{table:objects2} (the Galactic extinction noted in Table \ref{table:objects2} was not subtracted). The extinction was smoothed over 0.3 arcsec, and is only plotted where the velocity dispersion of the H$\alpha$ line is less than 500 km s$^{-1}$ (to avoid spaxels where the H$\alpha$ line could not be separated from the [NII] lines), and where the amplitude-to-noise (A/N) ratio of the H$\alpha$ line is higher than 3 (as defined in Sarzi et al.\ 2006)\footnote{The A/N is related to the S/N: $EW=\frac{F}{S}=\frac{(A/N) \sqrt{2 \pi}\,\sigma}{S/N}$, where $EW$ is the equivalent width of the line, and $\sigma$ is the line width.}. The A/N maps of the H$\alpha$ measurements are shown in Section \ref{figures_NVSS}, which gives a direct indication of the errors on the extinction. 
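The relation between A/N and S/N follows from the flux of a Gaussian line, $F = A\sqrt{2\pi}\,\sigma$. A minimal sketch (treating $\sigma$ in \AA{} and the continuum level $S$ in the same flux units as the amplitude $A$; these unit choices are our assumption):

```python
import math

def gaussian_line_flux(amplitude, sigma):
    """Total flux of a Gaussian emission line: F = A * sqrt(2*pi) * sigma."""
    return amplitude * math.sqrt(2.0 * math.pi) * sigma

def equivalent_width(a_over_n, sigma, s_over_n):
    """EW = F / S, rewritten in terms of the amplitude-to-noise (A/N) of the
    line and the signal-to-noise (S/N) of the continuum, by dividing the
    numerator and denominator of F/S by the same noise level N."""
    return a_over_n * math.sqrt(2.0 * math.pi) * sigma / s_over_n

# A line with A/N = 3 and sigma = 2 A over continuum with S/N = 10:
print(round(equivalent_width(3.0, 2.0, 10.0), 2))
```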
The parameter $E(B-V)$, i.e.\ the colour excess between 4350 \AA{} and 5550 \AA{}, for the Galactic extinction of each of the four galaxies was taken from the NED database (Schlegel et al.\ 1998), and ranged between 0.037 and 0.140 mag (Table \ref{table:objects2}). The parameter R$_{\rm V}$, i.e.\ the ratio of the absolute extinction at 5550 \AA{} (A$_{\rm V}$) to the colour excess $E(B-V)$, was taken as 3.1 for the interstellar medium (Cardelli, Clayton $\&$ Mathis 1989). The total extinction is given by: \[E(B-V)_{total}=\frac{2.177}{-0.37 R_{V}} \times \left(\log_{10} {\frac{H\alpha_{0}}{H\beta_{0}}}-\log_{10} {\frac{H\alpha_{Obs}}{H\beta_{Obs}}}\right)\] The theoretical H$\alpha$/H$\beta$ flux ratio of 2.86 may not be the ideal value to use for Seyfert-type galaxies, but the actual value is debated. It is often assumed that the H$\alpha$ emission in these systems is enhanced by collisional processes, and several authors adopt an intrinsic ratio of 3.1 (Gaskell $\&$ Ferland 1984; Osterbrock $\&$ Ferland 2006), although other values have also been determined (Binette et al.\ 1990 calculate a value of 3.4). The total extinction maps are presented in Figure \ref{fig:MCG_extinct}, and show mostly very low extinction, although some morphological features can be seen in MCG-02-12-039 and UGC09799. Particularly high values of $E(B-V)_{internal}$ can be seen in UGC09799. The Galactic extinction of PGC044257 is $E(B-V)_{gal}$ = 0.071 mag, and, from long-slit spectra, Crawford et al.\ (1999) derived the total extinction as 0.46 to 0.63 mag. This agrees with the extinction we derived for the very centre of the galaxy in Figure \ref{fig:MCG_extinct}, but on average our spatially resolved extinction is slightly lower. The Galactic extinction $E(B-V)_{gal}$ of UGC09799 is 0.037 mag, and Crawford et al.\ (1999) derived an integrated internal extinction $E(B-V)_{internal}$ of 0.22 mag for the centre of this galaxy. 
This corresponds very well to what we derived and plotted in Figure \ref{fig:MCG_extinct}, although we find that some regions show much higher internal extinction. The values of extinction determined here may be slightly overestimated owing to the choice of intrinsic H$\alpha$/H$\beta$ flux ratio used. Figure \ref{fig:Thumbnails2} shows slices of width 1 \AA{} (smoothed spatially with a Gaussian of FWHM 1.5 spaxels and using the Sauron colourmap) of PGC026269, illustrating the wavelength and spatial scale over which the morphology of H$\alpha$ changes. The morphology of the nuclear region changes rapidly across the emission lines, as the series of monochromatic slices shows. Although the continuum emission is usually smooth, the morphologies of the line emission are not uniform. We are able to measure H$\alpha$ line fluxes down to around 5 $\times$ 10$^{-18}$ erg cm$^{-2}$ s$^{-1}$ at an A/N of 3. After correcting for extinction, we are able to measure the following lines within our wavelength range: [ArIV]$\lambda$4740 \AA{}; H$\beta \lambda$4861 \AA{}; [OIII]$\lambda \lambda$4958,5007 \AA{}; [NI]$\lambda \lambda$5198,5200 \AA{}; HeI$\lambda$5876 \AA{}; [OI]$\lambda \lambda$6300,6364 \AA{}; [NII]$\lambda \lambda$6548,6583 \AA{}; H$\alpha$ at 6563 \AA{}; [SII]$\lambda \lambda$6716,6731 \AA{} (see Figures \ref{fig:MCGkinematics2} to \ref{fig:UGCkinematics3} and \ref{Hbeta} for the A/N ratios of the individual emission lines). H$\alpha$ and [NII]$\lambda \lambda$6548,6583 \AA{} have already been measured for MCG-02-12-039 and UGC09799, and H$\beta$ and [OIII]$\lambda \lambda$4958,5007 \AA{} have also been measured for UGC09799. All the other line measurements are new. 
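The conversion from the observed Balmer decrement to $E(B-V)_{total}$ given above can be sketched numerically, assuming the intrinsic case B ratio of 2.86 and $R_V = 3.1$ as in the text (the example observed ratio of 4.0 is invented for illustration):

```python
import math

def ebv_total(ratio_obs, ratio_intrinsic=2.86, r_v=3.1):
    """E(B-V)_total from the observed Halpha/Hbeta Balmer decrement,
    following the expression in the text (case B intrinsic ratio of 2.86,
    R_V = 3.1 for the interstellar medium)."""
    return (2.177 / (-0.37 * r_v)) * (
        math.log10(ratio_intrinsic) - math.log10(ratio_obs)
    )

# An observed decrement steeper than the intrinsic 2.86 implies positive
# reddening; e.g. an observed ratio of 4.0 gives ~0.28 mag.
print(round(ebv_total(4.0), 2))
```

An observed ratio equal to the intrinsic one returns zero extinction, as expected.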
\renewcommand*{\thesubfigure}{} \begin{figure*} \mbox{\subfigure{\includegraphics[scale=0.7, trim=1mm 1mm 50mm 1mm, clip]{MCGextinction.pdf}}\quad \subfigure{\includegraphics[scale=0.7, trim=1mm 1mm 50mm 1mm, clip]{PGC026extinction.pdf}}} \mbox{\subfigure[MCG-02-12-039]{\includegraphics[scale=0.3]{MCG_arrow.pdf}}\quad \subfigure[PGC026269]{\includegraphics[scale=0.3]{PGC026_arrow.pdf}}} \mbox{\subfigure{\includegraphics[scale=0.7, trim=1mm 1mm 50mm 1mm, clip]{PGC044extinction.pdf}}\quad \subfigure{\includegraphics[scale=0.7, trim=1mm 1mm 50mm 1mm, clip]{UGCextinction.pdf}}} \mbox{\subfigure[PGC044257]{\includegraphics[scale=0.3]{PGC044_arrow.pdf}}\quad \subfigure[UGC09799]{\includegraphics[scale=0.3]{UGC_arrow.pdf}}} \caption{Total extinction of the four CCGs. The extinction was smoothed over 0.3 arcseconds, and is only plotted where the velocity dispersion of the H$\alpha$ line is less than 500 km s$^{-1}$ (to avoid spaxels where the H$\alpha$ line could not be separated from the [NII] lines), and where the A/N ratio of the H$\alpha$ line is higher than 3. Overplotted are H$\alpha$ flux contours, one magnitude apart. The grey regions in these, and all other, IFU maps are where the A/N is too low to plot.} \label{fig:MCG_extinct} \end{figure*} \renewcommand*{\thesubfigure}{(\alph{subfigure})} \section{Individual galaxies} \label{figures_NVSS} The continuum emission of the galaxies is shown in Figure \ref{fig:Thumbnails}. Figures \ref{fig:MCG_gandalf}, \ref{fig:PGC026_gandalf}, \ref{fig:PGC044_gandalf} and \ref{fig:UGC_gandalf} show a random spectrum in the central region of MCG-02-12-039, PGC026269, PGC044257 and UGC09799, respectively. The best-fitting stellar template, the Gaussians at the emission lines, the best-fitting stellar template with the emission lines subtracted, and the relative flux of the measured emission lines are also shown. 
The absorption-extracted, dereddened maps of the H$\alpha$ emission are shown in Figure \ref{Haflux}, in units of 10$^{-15}$ erg cm$^{-2}$ s$^{-1}$, together with the useful ratios [NII]$\lambda$6583/H$\alpha$ and [OIII]$\lambda$5007/H$\beta$. For comparison purposes, the kinematics were extracted from the IFU image along the same slit position as the long-slit data in Loubser et al.\ (2008). This comparison for MCG-02-12-039, PGC026269 and PGC044257 is shown in Section \ref{stellar_kinematics_comparison}, and the data points compare satisfactorily. Figures \ref{fig:MCGkinematics2} to \ref{fig:UGCkinematics3} show the A/N ratios of the H$\alpha$, [NII]$\lambda$6583 \AA{}, [SII]$\lambda \lambda$6716,6731 \AA{}, [OIII]$\lambda$5007 \AA{} and [OI]$\lambda$6300 \AA{} lines. As mentioned in the Introduction, MCG-02-12-039 and UGC09799 already have previous IFU observations, which are compared (in terms of S/N and lines measurable) at the beginning of Section 3 and in the last paragraph of Section 4. The stellar kinematics in Figures \ref{fig:MCGkinematics}, \ref{fig:PGC026kinematics}, \ref{fig:PGC044kinematics} and \ref{fig:UGCkinematics} were measured using all the absorption lines within our wavelength range. For a sample of $\sim$50 elliptical galaxies, Sarzi et al.\ (2006) find ionised gas velocities (estimated using the [OIII]$\lambda5007$ line) between --250 and 250 km s$^{-1}$, and gas velocity dispersions as high as 250 km s$^{-1}$. Using the same line, we find gas velocities from $\pm 100$ to $\pm 350$ km s$^{-1}$, and line widths from 200 to 420 km s$^{-1}$. Using H$\alpha$, we find gas velocities from $\pm 125$ to $\pm 350$ km s$^{-1}$, and line widths from 200 to 400 km s$^{-1}$. 
\subsection{MCG-02-12-039} \begin{figure*} \centering \includegraphics[scale=0.8]{MCG.pdf} \caption{A random spectrum in the central region of MCG-02-12-039. The red line indicates the best-fitting stellar template and Gaussians at the emission lines, and the green line indicates the best-fitting stellar templates with the emission lines subtracted. The blue line indicates the relative flux for the measured emission lines. The lines are (from left to right) in the left plot: [ArIV], H$\beta$, and the [OIII] doublet, and in the right plot: the [OI] doublet, the [NII] doublet, and H$\alpha$. The H$\alpha$ A/N of this pixel was 15.} \label{fig:MCG_gandalf} \end{figure*} The central galaxy (MCG-02-12-039) of Abell 0496 is a fairly weak line emitter (Fabian et al.\ 1981; Hu, Cowie $\&$ Wang 1985) and is host to the compact radio source MSH 04-112 (Markovic, Owen $\&$ Eilek 2004). Abell 0496 is a relaxed cluster with a cool core and a central metal abundance enhancement (Tamura et al.\ 2001). At a redshift of 0.0329, the galaxy has a linear scale of 0.656 kpc arcsec$^{-1}$ (from Hatch et al.\ 2007). This cluster has an interesting H$\alpha$ morphology (McDonald et al.\ 2010): there are at least five distinct filaments, with various shapes and directions. The two longest filaments run parallel to each other for $\sim$12 kpc (18.3 arcsec, whereas our observations only cover 3.5 by 5 arcsec in the centre). Figure \ref{Haflux} shows that the peak H$\alpha$ emission is also where the dust (extinction) is highest in Figure \ref{fig:MCG_extinct}. This is similar to the finding of Hatch et al.\ (2007), who found that the peak H$\alpha$ + [NII] emission corresponds to the area where the three dust lanes meet in the galaxy centre. We confirm the observation by Hatch et al.\ (2007) that the line emission follows the general path of the dust (extinction) features, but is not as filamentary. 
IFU observations of this galaxy were also taken by Hatch et al.\ (2007), but they only measured the H$\alpha$ and [NII]$\lambda \lambda$6548, 6583 \AA{} lines. Their spectral resolution, at 223 -- 273 km s$^{-1}$, is much poorer than ours at 81 km s$^{-1}$. Our S/N is also more than five times that of Hatch et al.\ (2007), observed with a much bigger instrument. We plot the H$\alpha$, [NII], [SII], [OIII] and [OI] velocities, line widths and A/N in Figures \ref{fig:MCGkinematics2} and \ref{fig:MCGkinematics3}. The stellar component displays an elongated morphology in the kinematics plotted in Figure \ref{fig:MCGkinematics}. The gas components also appear elongated (kinematics plotted in Figure \ref{fig:MCGkinematics2}), but to a lesser extent. This suggests that the stars and gas are kinematically decoupled. Comparison of the gas kinematics (Figures \ref{fig:MCGkinematics2} and \ref{fig:MCGkinematics3}) with the stellar kinematics presented in Figure \ref{fig:MCGkinematics} and Figure \ref{fig:MCG_kinematics}, as well as by Fisher, Illingworth $\&$ Franx (1995), confirms the Hatch et al.\ (2007) observation that the two components are disconnected. Comparison of the plots in Figure \ref{fig:MCGkinematics2} shows that, to the degree that our spatial resolution reveals, all the optical forbidden and hydrogen recombination lines appear to originate in the same gas. We find a maximum emission line width of 400 km s$^{-1}$, as shown in Figures \ref{fig:MCGkinematics2} and \ref{fig:MCGkinematics3}. This is again similar to the finding of Hatch et al.\ (2007), who found a maximum line width of 600 km s$^{-1}$ in the dust-free central region to the north-east of the galaxy centre, with the rest of the nebula having a line width of 100 -- 250 km s$^{-1}$. The stellar line-of-sight velocity reveals that some parts of the nucleus are blueshifted by $\sim$125 km s$^{-1}$ and some parts are redshifted by the same amount. 
Hatch et al.\ (2007) found the southern part of the galaxy to be blueshifted by --200 km s$^{-1}$, whilst the northern section is marginally redshifted, up to +150 km s$^{-1}$. No clear kinematic pattern is associated with the dust structures. The peak-to-peak gas velocity of 250 km s$^{-1}$ is fairly low compared to the other CCGs, and the stellar component of Abell 0496 has a mean rotation of 59 km s$^{-1}$ (Paper 1). \begin{figure*} \centering \mbox{\subfigure[Stellar velocity]{\includegraphics[scale=0.6, trim=1mm 1mm 50mm 1mm, clip]{MCG_stars_kin.pdf}}\quad \subfigure[Stellar velocity dispersion]{\includegraphics[scale=0.6, trim=1mm 1mm 50mm 1mm, clip]{MCG_stars_sigma.pdf}}} \mbox{\subfigure{\includegraphics[scale=0.25]{MCG_arrow.pdf}}} \caption{MCG-02-12-039: Velocity and velocity dispersion of the absorption lines in km s$^{-1}$.} \label{fig:MCGkinematics} \end{figure*} \begin{figure*} \mbox{\subfigure[H$\alpha$ velocity]{\includegraphics[scale=0.5, trim=1mm 1mm 50mm 1mm, clip]{MCG_Ha_vel.pdf}}\quad \subfigure[{[NII]}$\lambda$6583 velocity]{\includegraphics[scale=0.5, trim=1mm 1mm 50mm 1mm, clip]{MCG_NII_vel.pdf}}\quad \subfigure[{[SII]}$\lambda\lambda$6731,6717 velocity]{\includegraphics[scale=0.5, trim=1mm 1mm 50mm 1mm, clip]{MCG_SII_vel.pdf}}} \mbox{\subfigure[H$\alpha$ line width]{\includegraphics[scale=0.5, trim=1mm 1mm 50mm 1mm, clip]{MCG_Ha_sigma.pdf}}\quad \subfigure[{[NII]}$\lambda$6583 line width]{\includegraphics[scale=0.5, trim=1mm 1mm 50mm 1mm, clip]{MCG_NII_sigma.pdf}}\quad \subfigure[{[SII]}$\lambda\lambda$6731,6717 line width]{\includegraphics[scale=0.5, trim=1mm 1mm 50mm 1mm, clip]{MCG_SII_sigma.pdf}}} \mbox{\subfigure[H$\alpha$ A/N]{\includegraphics[scale=0.5, trim=1mm 1mm 50mm 1mm, clip]{MCG_Ha_AN.pdf}}\quad \subfigure[{[NII]}$\lambda$6583 line A/N]{\includegraphics[scale=0.5, trim=1mm 1mm 50mm 1mm, clip]{MCG_NII_AN.pdf}}\quad \subfigure[{[SII]}$\lambda\lambda$6731,6717 line A/N]{\includegraphics[scale=0.5, trim=1mm 1mm 50mm 1mm, 
clip]{MCG_SII_AN.pdf}}} \mbox{\subfigure{\includegraphics[scale=0.28]{MCG_arrow.pdf}}} \caption{MCG-02-12-039: Velocity (in km s$^{-1}$), line width (in km s$^{-1}$) and A/N of the H$\alpha$, [NII]$\lambda$6583 and [SII]$\lambda\lambda$6731,6717 lines.} \label{fig:MCGkinematics2} \end{figure*} \begin{figure*} \mbox{\subfigure[{[OIII]}$\lambda$5007 velocity]{\includegraphics[scale=0.5, trim=1mm 1mm 50mm 1mm, clip]{MCG_OIII_vel.pdf}}\quad \subfigure[{[OI]}$\lambda$6300 velocity]{\includegraphics[scale=0.5, trim=1mm 1mm 50mm 1mm, clip]{MCG_OI_vel.pdf}}} \mbox{\subfigure[{[OIII]}$\lambda$5007 line width]{\includegraphics[scale=0.5, trim=1mm 1mm 50mm 1mm, clip]{MCG_OIII_sigma.pdf}}\quad \subfigure[{[OI]}$\lambda$6300 line width]{\includegraphics[scale=0.5, trim=1mm 1mm 50mm 1mm, clip]{MCG_OI_sigma.pdf}}} \mbox{\subfigure[{[OIII]}$\lambda$5007 A/N]{\includegraphics[scale=0.5, trim=1mm 1mm 50mm 1mm, clip]{MCG_OIII_AN.pdf}}\quad \subfigure[{[OI]}$\lambda$6300 line A/N]{\includegraphics[scale=0.5, trim=1mm 1mm 50mm 1mm, clip]{MCG_OI_AN.pdf}}} \mbox{\subfigure{\includegraphics[scale=0.28]{MCG_arrow.pdf}}} \caption{MCG-02-12-039: Velocity (in km s$^{-1}$), line width (in km s$^{-1}$) and A/N of the [OIII]$\lambda$5007 and [OI]$\lambda$6300 lines.} \label{fig:MCGkinematics3} \end{figure*} \subsection{PGC026269} \begin{figure*} \centering \includegraphics[scale=0.8]{PGC026.pdf} \caption{A random spectrum in the central region of PGC026269. The red line indicates the best-fitting stellar template and Gaussians at the emission lines, and the green line indicates the best-fitting stellar templates with the emission lines subtracted. The blue line indicates the relative flux for the measured emission lines. 
The lines are (from left to right) in the left plot: H$\beta$, the [OIII] doublet, and [NI], and in the right plot: the [OI] doublet, [NII], H$\alpha$, [NII], and the [SII] doublet. The sharp feature to the left of the first [NII] line is one of the two CCD chip gaps and was masked during the \textsc{ppxf} and \textsc{gandalf} fitting processes.} \label{fig:PGC026_gandalf} \end{figure*} This galaxy coincides with the well-known luminous radio source Hydra A. Abell 0780 is a poor cluster with an associated cooling flow nebula. IFU observations of this galaxy have not been presented in the literature before this study. The stellar line-of-sight velocity (from the IFU data) is blue- and redshifted by $\sim$175 km s$^{-1}$, which is slightly higher than that derived from long-slit spectroscopy (51 $\pm$ 20 km s$^{-1}$; Paper 1). We plot the kinematics of the stellar and gaseous components in Figures \ref{fig:PGC026kinematics} and \ref{fig:PGC026kinematics2}. Both the stellar and gaseous components show clear rotation in Figures \ref{fig:PGC026kinematics} and \ref{fig:PGC026kinematics2}, although they also seem to be decoupled. This system bears resemblance to the BCG NGC 3311 in Abell 1060, studied by Edwards et al.\ (2009), which also showed striking rotation. The rotation of the warm gas agrees with the long-slit observations by McDonald et al.\ (2012). We plot the H$\alpha$, [NII], [SII], [OIII] and [OI] velocities, line widths and A/N in Figures \ref{fig:PGC026kinematics2} and \ref{fig:PGC026kinematics3}. Comparison of the plots in Figure \ref{fig:PGC026kinematics2} shows that, to the degree that our spatial resolution reveals, all the optical forbidden and hydrogen recombination lines appear to originate in the same gas. Wise et al.\ (2007) present a summary of the X-ray and radio properties of this cluster, showing the excellent correlation between the radio jets and the X-ray cavities. 
The arcing H$\alpha$ filament that McDonald et al.\ (2010) detect north of the CCG (on a larger scale than this study) appears to be spatially correlated with the radio jet. Our Figure \ref{fig:BPTs} (d -- f) shows that our optical observations confirm that the photoionisation is caused by an AGN (the green and red curves), and are therefore consistent with the X-ray and radio studies of this system on larger scales. \begin{figure*} \centering \mbox{\subfigure[Stellar velocity]{\includegraphics[scale=0.6, trim=1mm 1mm 50mm 1mm, clip]{PGC026_stars_kin.pdf}}\quad \subfigure[Stellar velocity dispersion]{\includegraphics[scale=0.6, trim=1mm 1mm 50mm 1mm, clip]{PGC026_stars_sigma.pdf}}} \mbox{\subfigure{\includegraphics[scale=0.25]{PGC026_arrow.pdf}}} \caption{PGC026269: Velocity and velocity dispersion of the absorption lines in km s$^{-1}$.} \label{fig:PGC026kinematics} \end{figure*} \begin{figure*} \mbox{\subfigure[H$\alpha$ velocity]{\includegraphics[scale=0.5, trim=1mm 1mm 50mm 1mm, clip]{PGC026_Ha_vel.pdf}}\quad \subfigure[{[NII]}$\lambda$6583 velocity]{\includegraphics[scale=0.5, trim=1mm 1mm 50mm 1mm, clip]{PGC026_NII_vel.pdf}}\quad \subfigure[{[SII]}$\lambda\lambda$6731,6717 velocity]{\includegraphics[scale=0.5, trim=1mm 1mm 50mm 1mm, clip]{PGC026_SII_vel.pdf}}} \mbox{\subfigure[H$\alpha$ line width]{\includegraphics[scale=0.5, trim=1mm 1mm 50mm 1mm, clip]{PGC026_Ha_sigma.pdf}}\quad \subfigure[{[NII]}$\lambda$6583 line width]{\includegraphics[scale=0.5, trim=1mm 1mm 50mm 1mm, clip]{PGC026_NII_sigma.pdf}}\quad \subfigure[{[SII]}$\lambda\lambda$6731,6717 line width]{\includegraphics[scale=0.5, trim=1mm 1mm 50mm 1mm, clip]{PGC026_SII_sigma.pdf}}} \mbox{\subfigure[H$\alpha$ A/N]{\includegraphics[scale=0.5, trim=1mm 1mm 50mm 1mm, clip]{PGC026_Ha_AN.pdf}}\quad \subfigure[{[NII]}$\lambda$6583 line A/N]{\includegraphics[scale=0.5, trim=1mm 1mm 50mm 1mm, clip]{PGC026_NII_AN.pdf}}\quad \subfigure[{[SII]}$\lambda\lambda$6731,6717 line A/N]{\includegraphics[scale=0.5,
trim=1mm 1mm 50mm 1mm, clip]{PGC026_SII_AN.pdf}}} \mbox{\subfigure{\includegraphics[scale=0.28]{PGC026_arrow.pdf}}} \caption{PGC026269: Velocity (in km s$^{-1}$), line width (in km s$^{-1}$) and A/N of the H$\alpha$, [NII]$\lambda$6583 and [SII]$\lambda\lambda$6731,6717 lines.} \label{fig:PGC026kinematics2} \end{figure*} \begin{figure*} \mbox{\subfigure[{[OIII]}$\lambda$5007 velocity]{\includegraphics[scale=0.5, trim=1mm 1mm 50mm 1mm, clip]{PGC026_OIII_vel.pdf}}\quad \subfigure[{[OI]}$\lambda$6300 velocity]{\includegraphics[scale=0.5, trim=1mm 1mm 50mm 1mm, clip]{PGC026_OI_vel.pdf}}} \mbox{\subfigure[{[OIII]}$\lambda$5007 line width]{\includegraphics[scale=0.5, trim=1mm 1mm 50mm 1mm, clip]{PGC026_OIII_sigma.pdf}}\quad \subfigure[{[OI]}$\lambda$6300 line width]{\includegraphics[scale=0.5, trim=1mm 1mm 50mm 1mm, clip]{PGC026_OI_sigma.pdf}}} \mbox{\subfigure[{[OIII]}$\lambda$5007 A/N]{\includegraphics[scale=0.5, trim=1mm 1mm 50mm 1mm, clip]{PGC026_OIII_AN.pdf}}\quad \subfigure[{[OI]}$\lambda$6300 line A/N]{\includegraphics[scale=0.5, trim=1mm 1mm 50mm 1mm, clip]{PGC026_OI_AN.pdf}}} \mbox{\subfigure{\includegraphics[scale=0.28]{PGC026_arrow.pdf}}} \caption{PGC026269: Velocity (in km s$^{-1}$), line width (in km s$^{-1}$) and A/N of the [OIII]$\lambda$5007 and [OI]$\lambda$6300 lines.} \label{fig:PGC026kinematics3} \end{figure*} \subsection{PGC044257} This CCG is one of the rare cases (occurs in $\sim$ 3 per cent of CCGs; Hamer et al.\ 2012) where the CCG is slightly offset from the optical line emission (Johnson et al.\ 2010). The rarity of such offsets points to a large event in cluster evolution, such as a major cluster merger or possibly a powerful AGN outburst. Whatever the reason for the separation, the gas cooling at the X-ray peak will continue and cooled gas will be deposited away from the CCG (Hamer et al.\ 2012). \begin{figure*} \centering \includegraphics[scale=0.8]{PGC044.pdf} \caption{A random spectrum in the central region of PGC044257.
The red line indicates the best-fitting stellar template and Gaussians at the emission lines, and the green line indicates the best-fitting stellar templates with the emission lines subtracted. The blue line indicates the relative flux for the measured emission lines. The lines are (from left to right) in the left plot: H$\beta$, and the [OIII] doublet, and in the right plot: [OI] doublet, [NII], H$\alpha$, [NII], and the [SII] doublet.} \label{fig:PGC044_gandalf} \end{figure*} IFU observations of this galaxy have not been presented in the literature before this study. The stellar line-of-sight velocity is blue- and redshifted by $\sim$225 km s$^{-1}$, which is higher than that derived from long-slit spectroscopy (20 $\pm$ 16 km s$^{-1}$; Paper 1). We plot the kinematics of the stellar and gaseous components in Figures \ref{fig:PGC044kinematics} and \ref{fig:PGC044kinematics2}. Both the stellar and gaseous components show morphologies (Figures \ref{fig:PGC044kinematics} and \ref{fig:PGC044kinematics2}) that are elongated and aligned. However, the recession velocity of the gas is $\sim$200 km s$^{-1}$ higher than that of the stars, suggesting that the gaseous and stellar components are decoupled. The rotation of the warm gas agrees with the long-slit observations by McDonald et al.\ (2012). We plot H$\alpha$, [NII], [SII], [OIII] and [OI] velocities, line width and A/N in Figures \ref{fig:PGC044kinematics2} and \ref{fig:PGC044kinematics3}. To the degree that our spatial resolution reveals, it appears that all the optical forbidden and hydrogen recombination lines originate in the same gas. We find the H$\alpha$ flux to be uniform in Figure \ref{Haflux}, suggesting that all of the gas had the same origin, but the H$\alpha$ flux is also quite low. If Figure \ref{Haflux} is compared to Figure \ref{fig:PGC044kinematics2}, then it can be seen that some structure is visible where the A/N of H$\alpha$ is the highest. The H$\alpha$ emission appears to be quite core-dominated.
McDonald et al.\ (2010) found that the direction of the H$\alpha$ filaments (from narrow band imaging) correlates with the position of nearby galaxies. The filaments in Abell 1644 also appear to be star-forming (McDonald et al.\ 2011). The star forming filaments that McDonald et al.\ (2011) detected are on a bigger scale than the region we probe in this study. Our results show that the very centre of the system displays properties of a LINER (see Section \ref{ionisation}), and therefore both mechanisms may be present in this system at different scales. \begin{figure*} \centering \mbox{\subfigure[Stellar velocity]{\includegraphics[scale=0.6, trim=1mm 1mm 50mm 1mm, clip]{PGC044_stars_kin.pdf}}\quad \subfigure[Stellar velocity dispersion]{\includegraphics[scale=0.6, trim=1mm 1mm 50mm 1mm, clip]{PGC044_stars_sigma.pdf}}} \mbox{\subfigure{\includegraphics[scale=0.25]{PGC044_arrow.pdf}}} \caption{PGC044257: Velocity and velocity dispersion of the absorption lines in km s$^{-1}$.} \label{fig:PGC044kinematics} \end{figure*} \begin{figure*} \mbox{\subfigure[H$\alpha$ velocity]{\includegraphics[scale=0.5, trim=1mm 1mm 50mm 1mm, clip]{PGC044_Ha_vel.pdf}}\quad \subfigure[{[NII]}$\lambda$6583 velocity]{\includegraphics[scale=0.5, trim=1mm 1mm 50mm 1mm, clip]{PGC044_NII_vel.pdf}}\quad \subfigure[{[SII]}$\lambda\lambda$6731,6717 velocity]{\includegraphics[scale=0.5, trim=1mm 1mm 50mm 1mm, clip]{PGC044_SII_vel.pdf}}} \mbox{\subfigure[H$\alpha$ line width]{\includegraphics[scale=0.5, trim=1mm 1mm 50mm 1mm, clip]{PGC044_Ha_sigma.pdf}}\quad \subfigure[{[NII]}$\lambda$6583 line width]{\includegraphics[scale=0.5, trim=1mm 1mm 50mm 1mm, clip]{PGC044_NII_sigma.pdf}}\quad \subfigure[{[SII]}$\lambda\lambda$6731,6717 line width]{\includegraphics[scale=0.5, trim=1mm 1mm 50mm 1mm, clip]{PGC044_SII_sigma.pdf}}} \mbox{\subfigure[H$\alpha$ A/N]{\includegraphics[scale=0.5, trim=1mm 1mm 50mm 1mm, clip]{PGC044_Ha_AN.pdf}}\quad \subfigure[{[NII]}$\lambda$6583 line A/N]{\includegraphics[scale=0.5,
trim=1mm 1mm 50mm 1mm, clip]{PGC044_NII_AN.pdf}}\quad \subfigure[{[SII]}$\lambda\lambda$6731,6717 line A/N]{\includegraphics[scale=0.5, trim=1mm 1mm 50mm 1mm, clip]{PGC044_SII_AN.pdf}}} \mbox{\subfigure{\includegraphics[scale=0.28]{PGC044_arrow.pdf}}} \caption{PGC044257: Velocity (in km s$^{-1}$), line width (in km s$^{-1}$) and A/N of the H$\alpha$, [NII]$\lambda$6583 and [SII]$\lambda\lambda$6731,6717 lines.} \label{fig:PGC044kinematics2} \end{figure*} \begin{figure*} \mbox{\subfigure[{[OIII]}$\lambda$5007 velocity]{\includegraphics[scale=0.5, trim=1mm 1mm 50mm 1mm, clip]{PGC044_OIII_vel.pdf}}\quad \subfigure[{[OI]}$\lambda$6300 velocity]{\includegraphics[scale=0.5, trim=1mm 1mm 50mm 1mm, clip]{PGC044_OI_vel.pdf}}} \mbox{\subfigure[{[OIII]}$\lambda$5007 line width]{\includegraphics[scale=0.5, trim=1mm 1mm 50mm 1mm, clip]{PGC044_OIII_sigma.pdf}}\quad \subfigure[{[OI]}$\lambda$6300 line width]{\includegraphics[scale=0.5, trim=1mm 1mm 50mm 1mm, clip]{PGC044_OI_sigma.pdf}}} \mbox{\subfigure[{[OIII]}$\lambda$5007 A/N]{\includegraphics[scale=0.5, trim=1mm 1mm 50mm 1mm, clip]{PGC044_OIII_AN.pdf}}\quad \subfigure[{[OI]}$\lambda$6300 line A/N]{\includegraphics[scale=0.5, trim=1mm 1mm 50mm 1mm, clip]{PGC044_OI_AN.pdf}}} \mbox{\subfigure{\includegraphics[scale=0.28]{PGC044_arrow.pdf}}} \caption{PGC044257: Velocity (in km s$^{-1}$), line width (in km s$^{-1}$) and A/N of the [OIII]$\lambda$5007 and [OI]$\lambda$6300 lines.} \label{fig:PGC044kinematics3} \end{figure*} \subsection{UGC09799} \begin{figure*} \centering \includegraphics[scale=0.8]{UGC.pdf} \caption{A random spectrum in the central region of UGC09799. The red line indicates the best-fitting stellar template and Gaussians at the emission lines, and the green line indicates the best-fitting stellar templates with the emission lines subtracted. The blue line indicates the relative flux for the measured emission lines. 
The lines are (from left to right) in the left plot: H$\beta$, and the [OIII] doublet, and in the right plot: [OI] doublet, [NII], H$\alpha$, [NII], and the [SII] doublet. The sharp feature between the [NII] and [SII] lines is one of the two CCD chip gaps and was masked during the pPXF and GANDALF fitting processes.} \label{fig:UGC_gandalf} \end{figure*} IFU observations of this galaxy (around H$\alpha$ and H$\beta$) were previously presented by Edwards et al.\ (2009). The observations presented in this study are improved in that they were obtained with triple the integration time of the previous observations. The morphology of the continuum emission of this galaxy (see Figure \ref{fig:Thumbnails}) is centrally concentrated and condensed, as was also found by Edwards et al.\ (2009). Fitting a single Gaussian per emission line resulted in very poor fits (as shown in Figure \ref{fig:UGC_before_after}). We also tried fitting Voigt profiles to the [OIII] lines, for example, as well as slightly offset blue and red velocity wings. The best fits were consistently achieved using two Gaussians for each individual line in the [OIII], [OI], [NII] and [SII] doublets, of which one of the Gaussians was broader than the other. The H$\beta$ line (and therefore also H$\alpha$), however, required no additional Gaussian to achieve a good fit. Edwards et al.\ (2009) could not detect H$\beta$ emission above the 1$\sigma$ level for this galaxy. We do detect H$\beta$ above this level and plot the flux ratios in Figure \ref{Haflux}. Figure 16 in Edwards et al.\ (2009) can be compared with Figures \ref{fig:UGCkinematics2} and \ref{Haflux} here -- both show smooth centrally condensed emission. We plot the kinematics of the stellar and gaseous components in Figures \ref{fig:UGCkinematics} and \ref{fig:UGCkinematics2}. We plot H$\alpha$, [NII], [SII], [OIII] and [OI] velocities, line width and A/N in Figures \ref{fig:UGCkinematics2} and \ref{fig:UGCkinematics3}.
To the degree that our spatial resolution reveals, it appears that all the optical forbidden and hydrogen recombination lines originate in the same gas. Our kinematic analysis can be compared to that of Edwards et al.\ (2009), who detected a gradient in the velocity but could not differentiate between rotation and outflow. Our stellar kinematics are shown in Figure \ref{fig:UGCkinematics}; however, we do not detect rotation in the stellar component. Edwards et al.\ (2009) also found H$\alpha$ to have a velocity range of --250 km s$^{-1}$ to +150 km s$^{-1}$. Our H$\alpha$ kinematics are shown in Figure \ref{fig:UGCkinematics2} and also show a range ($\sim$ 400 km s$^{-1}$) of velocities for H$\alpha$. The gaseous components show rotation in Figure \ref{fig:UGCkinematics2} even though no rotation is apparent in the stellar components (Figure \ref{fig:UGCkinematics}). \begin{figure*} \centering \mbox{\subfigure[Single Gaussians]{\includegraphics[scale=0.5]{UGC_before.pdf}}\quad \subfigure[Two Gaussians]{\includegraphics[scale=0.5]{UGC_after.pdf}}} \caption{UGC09799 showing single Gaussian fits and double Gaussian fits (one narrow and one broad) to the [OI] doublet, [NII], H$\alpha$, [NII], and the [SII] doublet.} \label{fig:UGC_before_after} \end{figure*} This galaxy has patchy dust in the centre (Laine et al.\ 2003, also shown in Figure \ref{fig:MCG_extinct}), and Hicks $\&$ Mushotzky (2005) have deduced star formation from the excess UV--IR emission. The FUV/H$\alpha$ ratio suggests heating by fast shocks or some other source of hard ionisation (e.g., cosmic rays, AGN). The filaments of Abell 2052 have ratios which are consistent with this picture (McDonald et al.\ 2011). The highest extinction (in Figure \ref{fig:MCG_extinct}) does not coincide with the H$\alpha$ peak (Appendix B1), and is slightly off centre.
This does not necessarily contrast with the results quoted by Laine et al.\ (2003) that the dust is patchy in the centre, as the centre of their observations might not necessarily be exactly the same as our IFU placement (see Figure 1). Figure 1 also shows that our continuum emission is smooth, similar to the conclusion reached by Edwards et al.\ (2009). Venturi, Dallacasa $\&$ Stefanachi (2004) found a parsec-scale bipolar radio source, and Chandra X-ray emission shows two bubbles in the ICM on a larger scale (Blanton, Sarazin $\&$ McNamara 2003). Our results are consistent with the central emission being that of a LINER (see Section \ref{ionisation}). \begin{figure*} \centering \mbox{\subfigure[Stellar velocity]{\includegraphics[scale=0.6, trim=1mm 1mm 50mm 1mm, clip]{UGC_stars_kin.pdf}}\quad \subfigure[Stellar velocity dispersion]{\includegraphics[scale=0.6, trim=1mm 1mm 50mm 1mm, clip]{UGC_stars_sigma.pdf}}} \mbox{\subfigure{\includegraphics[scale=0.25]{UGC_arrow.pdf}}} \caption{UGC09799: Velocity and velocity dispersion of the absorption lines in km s$^{-1}$.} \label{fig:UGCkinematics} \end{figure*} \begin{figure*} \mbox{\subfigure[H$\alpha$ velocity]{\includegraphics[scale=0.5, trim=1mm 1mm 50mm 1mm, clip]{UGC_Ha_vel.pdf}}\quad \subfigure[{[NII]}$\lambda$6583 velocity]{\includegraphics[scale=0.5, trim=1mm 1mm 50mm 1mm, clip]{UGC_NII_vel.pdf}}\quad \subfigure[{[SII]}$\lambda\lambda$6731,6717 velocity]{\includegraphics[scale=0.5, trim=1mm 1mm 50mm 1mm, clip]{UGC_SII_vel.pdf}}} \mbox{\subfigure[H$\alpha$ line width]{\includegraphics[scale=0.5, trim=1mm 1mm 50mm 1mm, clip]{UGC_Ha_sigma.pdf}}\quad \subfigure[{[NII]}$\lambda$6583 line width]{\includegraphics[scale=0.5, trim=1mm 1mm 50mm 1mm, clip]{UGC_NII_sigma.pdf}}\quad \subfigure[{[SII]}$\lambda\lambda$6731,6717 line width]{\includegraphics[scale=0.5, trim=1mm 1mm 50mm 1mm, clip]{UGC_SII_sigma.pdf}}} \mbox{\subfigure[H$\alpha$ A/N]{\includegraphics[scale=0.5, trim=1mm 1mm 50mm 1mm, clip]{UGC_Ha_AN.pdf}}\quad
\subfigure[{[NII]}$\lambda$6583 line A/N]{\includegraphics[scale=0.5, trim=1mm 1mm 50mm 1mm, clip]{UGC_NII_AN.pdf}}\quad \subfigure[{[SII]}$\lambda\lambda$6731,6717 line A/N]{\includegraphics[scale=0.5, trim=1mm 1mm 50mm 1mm, clip]{UGC_SII_AN.pdf}}} \mbox{\subfigure{\includegraphics[scale=0.28]{UGC_arrow.pdf}}} \caption{UGC09799: Velocity (in km s$^{-1}$), line width (in km s$^{-1}$) and A/N of the H$\alpha$, [NII]$\lambda$6583 and [SII]$\lambda\lambda$6731,6717 lines.} \label{fig:UGCkinematics2} \end{figure*} \begin{figure*} \mbox{\subfigure[{[OIII]}$\lambda$5007 velocity]{\includegraphics[scale=0.5, trim=1mm 1mm 50mm 1mm, clip]{UGC_OIII_vel.pdf}}\quad \subfigure[{[OI]}$\lambda$6300 velocity]{\includegraphics[scale=0.5, trim=1mm 1mm 50mm 1mm, clip]{UGC_OI_vel.pdf}}} \mbox{\subfigure[{[OIII]}$\lambda$5007 line width]{\includegraphics[scale=0.5, trim=1mm 1mm 50mm 1mm, clip]{UGC_OIII_sigma.pdf}}\quad \subfigure[{[OI]}$\lambda$6300 line width]{\includegraphics[scale=0.5, trim=1mm 1mm 50mm 1mm, clip]{UGC_OI_sigma.pdf}}} \mbox{\subfigure[{[OIII]}$\lambda$5007 A/N]{\includegraphics[scale=0.5, trim=1mm 1mm 50mm 1mm, clip]{UGC_OIII_AN.pdf}}\quad \subfigure[{[OI]}$\lambda$6300 line A/N]{\includegraphics[scale=0.5, trim=1mm 1mm 50mm 1mm, clip]{UGC_OI_AN.pdf}}} \mbox{\subfigure{\includegraphics[scale=0.28]{UGC_arrow.pdf}}} \caption{UGC09799: Velocity (in km s$^{-1}$), line width (in km s$^{-1}$) and A/N of the [OIII]$\lambda$5007 and [OI]$\lambda$6300 lines.} \label{fig:UGCkinematics3} \end{figure*} \begin{figure*} \mbox{\subfigure[MCG-02-12-039 H$\beta$ A/N]{\includegraphics[scale=0.5, trim=1mm 1mm 50mm 1mm, clip]{Hbeta_MCG.pdf}}\quad \subfigure[PGC026269 H$\beta$ A/N]{\includegraphics[scale=0.5, trim=1mm 1mm 50mm 1mm, clip]{Hbeta_PGC026.pdf}}} \mbox{\subfigure{\includegraphics[scale=0.28]{MCG_arrow.pdf}}\quad \subfigure{\includegraphics[scale=0.28]{PGC026_arrow.pdf}}} \mbox{\subfigure[PGC044257 H$\beta$ A/N]{\includegraphics[scale=0.5, trim=1mm 1mm 50mm 1mm, 
clip]{Hbeta_PGC044.pdf}}\quad \subfigure[UGC09799 H$\beta$ A/N]{\includegraphics[scale=0.5, trim=1mm 1mm 50mm 1mm, clip]{Hbeta_UGC.pdf}}} \mbox{\subfigure{\includegraphics[scale=0.28]{PGC044_arrow.pdf}} \subfigure{\includegraphics[scale=0.28]{UGC_arrow.pdf}}} \caption{A/N of the H$\beta$ lines.} \label{Hbeta} \end{figure*} \section{Discussion -- Ionisation mechanisms} \label{ionisation} Various mechanisms have been proposed as sources of excitation in the nebulae, including photoionisation by radiation from the AGN, cluster X-rays, or hot stars, collisional heating by high-energy particles, shocks or cloud-cloud collisions, and conduction of heat from the X-ray corona (Peterson et al.\ 2003). However, none so far satisfactorily describes all the characteristic spectra, energetics, and kinematics of the extended emission-line regions (Wilman et al.\ 2006; Hatch et al.\ 2007). We reiterate that the FOV of the observations is only 3.5 $\times$ 5 arcsec (corresponding to the central few kiloparsec -- see Table \ref{table:objects}). These results only look at the very heart of these sources. Therefore, it is really only the emission mechanisms for the gas at the very centre of these systems that are being studied here. The forbidden lines such as [NII]$\lambda$6583 result from the excitation of N$^{+}$ through collisions with electrons liberated through photoionisation. The H$\alpha$ emission results from the recombination of the hydrogen ion. The [NII]$\lambda$6583 flux depends on the N$^{+}$ abundance, the strength of the radiation field, and the form of the radiation field: a harder ionising source will produce a greater flux. The H$\alpha$ flux also depends on the strength of the radiation field. Therefore, the ratio will depend on the metallicity of the gas and the form of the ionising radiation. The form of the ionising radiation and/or the gas metallicity are not uniform but must vary within each galaxy and across the whole sample.
Different excitation mechanisms may act in different regions (Hatch et al.\ 2007, Edwards et al.\ 2009), or the H$\alpha$ emission might be disturbed by the presence of companion galaxies (Wilman et al.\ 2006). A commonly used method to distinguish between the sources of ionisation uses the emission-line diagrams pioneered by Baldwin, Phillips $\&$ Terlevich (1981, hereafter BPT diagram), which separate the two major origins of emission: star formation and AGN. The diagrams use pairs of emission line ratios, of which the most commonly used is [OIII]$\lambda$5007/H$\beta$ against [NII]$\lambda$6584/H$\alpha$. Extinction-corrected emission-line measurements were used (even though the BPT diagram is almost insensitive to reddening). We plot the BPT diagrams in Figures \ref{fig:MCG_BPT} and \ref{fig:UGC_BPT} using the following criteria: Kewley criterion: galaxies above this line are AGN. Kewley et al.\ (2001) used a combination of photoionisation and stellar population synthesis models to place a theoretical upper limit on the location of star-forming galaxies on the BPT diagram, $\log([OIII]/H\beta) = 0.61/(\log([NII]/H\alpha)-0.47)+1.19$. Kauffmann et al.\ (2003) criterion: galaxies below this line are purely star forming, and objects between the Kewley and Kauffmann criteria are composites. Kauffmann et al.\ (2003) revised the criterion as follows: a galaxy is defined to be an AGN if $\log([OIII]/H\beta) > 0.61/(\log([NII]/H\alpha)-0.05)+1.3$. LINER--Seyfert line (Schawinski et al.\ 2007): this line distinguishes LINERs from Seyfert galaxies, $\log([OIII]/H\beta) = 1.05 \log([NII]/H\alpha)+0.45$. SDSS emission-line galaxies occupy a well-defined region shaped like the wings of a seagull (Stasi\'nska et al.\ 2008), although the exact location of the line dividing the star-forming and active galactic nuclei galaxies is still controversial (Kewley et al.\ 2001; Kauffmann et al.\ 2003; Stasi\'nska et al.\ 2006).
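The three dividing lines quoted above can be combined into a simple per-spaxel classifier. The following sketch is illustrative only (the function names and the order of the tests are ours, not from this paper); it assumes the inputs are already the logarithmic, extinction-corrected line ratios:

```python
# Illustrative classification of a spaxel on the [NII]/H-alpha BPT diagram,
# using the Kewley et al. (2001), Kauffmann et al. (2003) and
# Schawinski et al. (2007) dividing lines quoted in the text.
# x = log10([NII]6584 / H-alpha), y = log10([OIII]5007 / H-beta).

def kewley_max_starburst(x):
    # Theoretical upper limit for pure star formation (Kewley et al. 2001).
    return 0.61 / (x - 0.47) + 1.19

def kauffmann_line(x):
    # Empirical boundary of purely star-forming galaxies (Kauffmann et al. 2003).
    return 0.61 / (x - 0.05) + 1.3

def seyfert_liner_line(x):
    # Seyfert/LINER division (Schawinski et al. 2007).
    return 1.05 * x + 0.45

def classify_bpt(x, y):
    # The first two curves have asymptotes at x = 0.05 and x = 0.47, so they
    # are only evaluated to the left of those values.
    if x < 0.05 and y < kauffmann_line(x):
        return "star-forming"
    if x < 0.47 and y < kewley_max_starburst(x):
        return "composite"
    # Everything above the Kewley curve is an AGN, split into Seyfert/LINER.
    return "Seyfert" if y > seyfert_liner_line(x) else "LINER"
```

Applied spaxel by spaxel, this reproduces the colour-coded regions plotted in the BPT figures.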
All galaxies show important LINER emission, but at least one has significant Seyfert emission areas, and at least one other has significant HII-like emission line ratios for many pixels, as shown in Figures \ref{fig:MCG_BPT} and \ref{fig:UGC_BPT}. This is in agreement with the long-slit data of these sources plotted on BPT diagrams in figure 3 of McDonald et al.\ (2012). However, there is a debate about the ionisation mechanism in LINERs (Low-Ionisation Nuclear Emission-line Regions). The most viable excitation mechanisms are: a low accretion-rate AGN (Kewley et al.\ 2006), photoionisation by old post-asymptotic giant branch (pAGB) stars (Stasi\'nska et al.\ 2008), and fast shocks (Dopita $\&$ Sutherland 1995). Sarzi et al.\ (2010) investigate the ionising sources for the gas in elliptical galaxies based on SAURON integral-field spectroscopy, whose spectra are limited to a relatively narrow wavelength range. They conclude that pAGB stars are the main source of ionisation. In contrast, Annibali et al.\ (2010) analyse long-slit spectra of 65 ellipticals and claim that their nuclear line emission can be explained by excitation from the hard ionising continuum from an AGN and/or fast shocks. However, they cannot completely rule out a contribution from pAGB stars at large radii. Voit $\&$ Donahue (1997) suggested that sources of supplementary heating produce the LINER-like properties of the spectra, though not necessarily through the same mechanism in all systems. Annibali et al.\ (2010) found that, from the centre outward, galaxies move left and down in the BPT diagram in their study of 65 early-type galaxies. Thus, the hardness of the ionising continuum decreases with galactocentric distance (up to half the half-light radius in the Annibali et al.\ (2010) sample). Figures \ref{fig:MCG_BPT} and \ref{fig:UGC_BPT} also show the flux ratios as a function of distance from the galaxy centre.
The red circles indicate the central 0.5 $\times$ 0.5 arcsec of the galaxy, the yellow circles 1.0 $\times$ 1.0 arcsec, and the blue circles the full 3.5 $\times$ 5.0 arcsec. The centre of the galaxy was determined as the luminosity peak in the continuum images in Figure \ref{fig:Thumbnails}. In our case, the hardness of the ionising continuum stays mostly uniform with galactocentric distance over our limited spatial extent. PGC044257 shows an interesting core separation of the emission in the very centre of the galaxy in Figure \ref{fig:UGC_BPT}a. There is also a debate about which galaxies are LINERs (AGN) and which have just LINER-like emission (non-AGN). The division is usually made by looking at the extent of the LINER signature: core-dominated means a true LINER and diffuse means LINER-like. The arising problem is that the scale of the SDSS fibres is already too large to make the distinction for most galaxies. For the nearby galaxies, the centroiding of the fibres is not accurate enough to be sure if the measured spectra cover the core of the galaxy. We have core-dominated LINER emission for at least three out of the four galaxies. Additional confirmation comes from the fact that at least one of our galaxies (PGC026269) is a strong radio source (AGN, see Table \ref{table:objects2}). Similarly, our findings for UGC09799 agree with Edwards et al.\ (2009), who found line ratios consistent with Seyfert or LINER activity in most of their central spaxels for UGC09799. Edwards (2009) also found that the Seyfert signature dominated the central spaxels of the CCG (UGC09799). The H$\alpha$ emission surrounding the BCG in this cluster is coincident with radio-blown bubbles in the central region of the cluster. These bubbles to the north and south of the cluster core are filled with radio emission, which likely originated from the AGN within the CCG (Blanton et al.\ 2003).
Since the H$\alpha$ emission seen by McDonald et al.\ (2010) is primarily along the edges of the northern bubble, they suspect that shocks may be responsible for the heating in this case. However, for PGC044257, McDonald et al.\ (2010) find very little evidence for an AGN in the cluster hosting the galaxy, in terms of the radio power, X-ray morphology and hard X-ray flux. Two of the galaxies in the current study have detected X-ray point sources in the CCG (PGC026269 and UGC09799) with large cavities in their X-ray haloes, suggesting that the AGN is influencing the surrounding medium. We investigate this further by plotting the other BPT diagrams, on a spaxel-by-spaxel basis, for all four galaxies (shown in Figure \ref{fig:BPTs}): [OIII]$\lambda$5007/H$\beta$ vs [NII]$\lambda$6584/H$\alpha$, [OIII]$\lambda$5007/H$\beta$ vs [SII]$\lambda\lambda$6717,6731/H$\alpha$, and [OIII]$\lambda$5007/H$\beta$ vs [OI]$\lambda$6300/H$\alpha$. One BCG, PGC026269, shows several HII pixels in all three BPT diagrams. Figure \ref{fig:MCG_BPT} shows that these HII pixels occur throughout the centre of the galaxy. On these plots, we compare our observations to the photoionisation models for pAGB stars with Z=Z$_{\odot}$ (Binette et al.\ 1994). These models are consistent with most of our observations. A model with Z=1/3 Z$_{\odot}$ will be shifted towards lower values on the x-axis of the three BPTs. The pAGB scenario has recently been revisited by Stasi\'nska et al.\ (2008), whose extensive grid of photoionisation models (see their Figure 5) covers most of the regions occupied by our spatially resolved measurements. Three other grids of ionisation models are overplotted on the BPT diagrams (Figure \ref{fig:BPTs}).
The plotted AGN photoionisation models (Groves, Dopita $\&$ Sutherland 2004) have an electron density, n$_{e}$ = 100 cm$^{-3}$, metallicities of solar, Z=Z$_{\odot}$ (red grids), and twice solar (green grids), a range of ionisation parameter (--3.6 $< \log U < $ 0.0) and a power-law ionising spectrum with spectral index $\alpha$ = --2, --1.4, and --1.2. A harder ionising continuum, with $\alpha$ = --1.2, boosts [SII]$\lambda\lambda$6717,6731 and [OI]$\lambda$6300 relative to H$\alpha$. We also compared our results with shock models (Allen et al.\ 2008, purple grids). In Figure \ref{fig:BPTs}, we plot the grids with Z=Z$_{\odot}$, preshock densities of 100 cm$^{-3}$, shock velocities of 100, 500 and 1000 km s$^{-1}$, and preshock magnetic fields of B = 1, 5 and 10 $\mu$G. Shock models with a range of magnetic field strengths (B = 1, 5 and 10 $\mu$G) match our observations. Interstellar magnetic fields of B $\sim$ 1 -- 10 $\mu$G are typical of what is observed in elliptical galaxies (Mathews $\&$ Brighenti 1997). Overall, shock models reproduce the majority of our data in the three emission-line ratio diagrams. The shock grids with lower metallicity (e.g. LMC and SMC metallicities) are not consistent with our measurements.\footnote{We downloaded the shock and AGN grids from the web page http://www.strw.leidenuniv.nl/$\sim$brent/itera.html.} As shown in Table \ref{table:objects2}, we have very weak as well as strong radio fluxes in our small sample. We therefore believe that we are not particularly prone to biases such as the fact that an a priori choice of galaxies with strong radio fluxes would result in a sample where high [NII]/H$\alpha$ is more common than in an optically or H$\alpha$-selected sample, where star-forming, high H$\alpha$/[NII] sources might be more abundant.
\begin{figure*} \centering \mbox{\subfigure[MCG-02-12-039]{\includegraphics[scale=0.5]{MCG_BPT_colour.pdf}}\quad \subfigure[PGC026269]{\includegraphics[scale=0.5]{PGC026_BPT_colour.pdf}}} \caption{BPT diagram for MCG-02-12-039 and PGC026269. The red circles indicate the central 0.5 $\times$ 0.5 arcsec of the galaxy, the yellow circles 1.0 $\times$ 1.0 arcsec, and the blue circles the full 3.5 $\times$ 5.0 arcsec. The centre of the galaxy was determined as the luminosity peak in the continuum images in Figure \ref{fig:Thumbnails}. The black solid curve is the theoretical maximum starburst model from Kewley et al.\ (2001), devised to isolate objects whose emission line ratios can be accounted for by the photoionisation by massive stars (below and to the left of the curve) from those where some other source of ionisation is required. The black-dotted curve in the diagram represent the Seyfert-LINER dividing line from Kewley et al.\ (2006) and transposed to the [NII]$\lambda$6584/H$\alpha$ diagram by Schawinski et al.\ (2007).} \label{fig:MCG_BPT} \end{figure*} \begin{figure*} \centering \mbox{\subfigure[PGC044257]{\includegraphics[scale=0.5]{PGC044_BPT_colour.pdf}}\quad \subfigure[UGC09799]{\includegraphics[scale=0.5]{UGC_BPT_colour.pdf}}} \caption{BPT diagram for PGC044257 and UGC09799. The red circles indicate the central 0.5 $\times$ 0.5 arcsec of the galaxy, the yellow circles 1.0 $\times$ 1.0 arcsec, and the blue circles the full 3.5 $\times$ 5.0 arcsec. 
See caption of Figure \ref{fig:MCG_BPT}.} \label{fig:UGC_BPT} \end{figure*} \begin{figure*} \centering \mbox{\subfigure[MCG-02-12-039]{\includegraphics[scale=0.38]{MCG_BPT_1.pdf}}\quad \subfigure[MCG-02-12-039]{\includegraphics[scale=0.38]{MCG_BPT_2.pdf}}\quad \subfigure[MCG-02-12-039]{\includegraphics[scale=0.38]{MCG_BPT_3.pdf}}} \mbox{\subfigure[PGC026269]{\includegraphics[scale=0.38]{PGC026_BPT_1.pdf}}\quad \subfigure[PGC026269]{\includegraphics[scale=0.38]{PGC026_BPT_2.pdf}}\quad \subfigure[PGC026269]{\includegraphics[scale=0.38]{PGC026_BPT_3.pdf}}} \mbox{\subfigure[PGC044257]{\includegraphics[scale=0.38]{PGC044_BPT_1.pdf}}\quad \subfigure[PGC044257]{\includegraphics[scale=0.38]{PGC044_BPT_2.pdf}}\quad \subfigure[PGC044257]{\includegraphics[scale=0.38]{PGC044_BPT_3.pdf}}} \mbox{\subfigure[UGC09799]{\includegraphics[scale=0.38]{UGC_BPT_1.pdf}}\quad \subfigure[UGC09799]{\includegraphics[scale=0.38]{UGC_BPT_2.pdf}}\quad \subfigure[UGC09799]{\includegraphics[scale=0.38]{UGC_BPT_3.pdf}}} \caption{Diagnostic diagrams for all four galaxies. From left to right: log ([OIII]$\lambda$5007/H$\beta$) vs. log ([NII]$\lambda$6584/H$\alpha$), log ([OIII]$\lambda$5007/H$\beta$) vs. log ([SII]$\lambda\lambda$6731,6717/H$\alpha$) and log ([OIII]$\lambda$5007/H$\beta$) vs. log ([OI]$\lambda$6300/H$\alpha$). The black solid curve is the theoretical maximum starburst model from Kewley et al.\ (2001), devised to isolate objects whose emission line ratios can be accounted for by the photoionisation by massive stars (below and to the left of the curve) from those where some other source of ionisation is required. The black-dashed curves in the [SII]$\lambda\lambda$6731,6717/H$\alpha$ and [OI]$\lambda$6300/H$\alpha$ diagrams represent the Seyfert-LINER dividing line from Kewley et al.\ (2006) and transposed to the [NII]$\lambda$6584/H$\alpha$ diagram by Schawinski et al.\ (2007). The predictions of different ionisation models for ionising the gas are overplotted in each diagram. 
The boxes show the predictions of photoionisation models by pAGB stars for Z = Z$_{\odot}$ and a burst age of 13 Gyr (Binette et al.\ 1994). The purple lines represent the shock grids of Allen et al.\ (2008) with solar metallicity and preshock magnetic fields B=1.0, 5.0 and 10 $\mu$G (left to right). The horizontal purple lines show models with increasing shock velocity V = 100, 500 and 1000 km s$^{-1}$, and the density n$_{e}$ is 100 cm$^{-3}$. Grids of photoionisation by an AGN (Groves et al.\ 2004) are indicated by green and red curves, with n$_{e}$ = 100 cm$^{-3}$ and a power-law spectral index of $\alpha$ = --2, --1.4 and --1.2 (from left to right). The models are shown for Z = Z$_{\odot}$ (red) and Z = 2Z$_{\odot}$ (green), and the horizontal lines trace the ionisation parameter $\log$ U, which increases with the [OIII]$\lambda$5007/H$\beta$ ratio through $\log$ U = --3.6, --3.0, --2.0, --1.0 and 0.0.} \label{fig:BPTs} \end{figure*} \section{Conclusion} \label{summary} We present detailed integral field unit (IFU) observations of the central few kiloparsecs of the ionised nebulae surrounding four active CCGs in cooling flow clusters (Abell 0496, 0780, 1644 and 2052). Our sample consists of CCGs with H$\alpha$ filaments. We obtained detailed optical emission-line (and simultaneous absorption-line) data over a broad wavelength range to probe the dominant ionisation processes, excitation sources, morphology and kinematics of the hot gas (as well as the morphology and kinematics of the stars). Two of the four sources have not been observed with IFU data before (Abell 0780 and 1644), and for the other two sources we observed with significantly longer integration times (and more lines) than previous studies (Hatch et al.\ 2007, Edwards et al.\ 2009).
This will help form a complete view of the different phases (hot gas and stars) and how they interact in the processes of star formation and feedback detected in central galaxies in cooling flow clusters, as well as the influence of the host cluster. The total extinction maps are presented in Figure \ref{fig:MCG_extinct} and show extinction that agrees well with previously derived long-slit values (where available). From long-slit spectra, Crawford et al.\ (1999) derived the total extinction of PGC044257 as 0.46 to 0.63 mag. This agrees with the extinction we derived for the very centre of the galaxy in Figure \ref{fig:MCG_extinct}, but on average our spatially resolved extinction is slightly lower (0.195 mag). Crawford et al.\ (1999) derived an integrated internal extinction of $E(B-V)_{internal}$ of 0.22 mag for the centre of UGC09799. This corresponds very well to what we derived and plotted in Figure \ref{fig:MCG_extinct}, although we find that some regions show much higher internal extinction (on average 0.42 mag). Even given the small sample size, we derive a range of different kinematic properties. For Abell 0496 and 0780, we find that the stars and gas are kinematically decoupled, and in the case of Abell 1644 we find that these components are aligned. For Abell 2052, we find that the gaseous components show rotation even though no rotation is apparent in the stellar components. To the degree that our spatial resolution can reveal, it appears that all the optical forbidden and hydrogen recombination lines originate in the same gas for all the galaxies. All galaxies show important LINER emission, but at least one has significant Seyfert emission areas, and at least one other has significant HII-like emission line ratios for many pixels (consistent with the long-slit observations of McDonald et al.\ (2012)).
We also show that the hardness of the ionising continuum does not decrease with galactocentric distance (except for PGC044257, which shows an interesting core separation of the emission in the very centre of the galaxy in Figure \ref{fig:UGC_BPT}a). The radial profiles of diagnostic line ratios, [OIII]$\lambda$5007/H$\beta$ and [NII]$\lambda$6584/H$\alpha$, show that they are roughly constant with radius for three of the four galaxies (all except PGC044257). This indicates that the dominant ionising source is not confined to the nuclear region in these objects and that the ionised gas properties are homogeneous in the emission-line regions across each galaxy. Overall, it remains difficult to disentangle the dominant photoionisation mechanisms, even with more line measurements. The AGN photoionisation models (with higher metallicity) are best able to reproduce our spatially resolved line ratios in all three BPT diagrams simultaneously for most objects, even though shock models and pAGB stars cannot be conclusively eliminated. We also do not see extended [OI]$\lambda$6300 emission following the morphology of the strong emission lines (e.g.\ Farage et al.\ 2010); therefore, it is unlikely that shock excitation is the dominant ionising source in the galaxies of this limited sample. In addition, multiwavelength observations (as discussed in the previous section) favour the AGN photoionisation mechanism, especially in the case of PGC026269 and UGC09799. \section*{Acknowledgments} SIL is financially supported by the South African National Research Foundation. We thank James Turner, Bryan Miller, and Michele Cappellari for providing helpful scripts, as well as Marc Sarzi for helpful discussions. Based on observations obtained on the Gemini South telescope.
The Gemini Observatory is operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the NSF on behalf of the Gemini Partnership: the National Science Foundation (USA), the Science and Technology Facilities Council (UK), the National Research Council (Canada), CONICYT (Chile), the Australian Research Council (Australia), CNPq (Brazil) and CONICET (Argentina). This research has made use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology.
\section{Introduction} There are sometimes speculations about a worst case scenario unfolding at the LHC. For example, a light Higgs finally showing up after many years of running with no further signs of new physics would be distinctly unsatisfying. This would provide us with no insight into the origins of the observed mass spectrum and the flavor structure of the standard model. The possibility of not finding \textit{anything}, including the Higgs, is equally depressing since it leaves open the possibility that new physics lurks just beyond the reach of the LHC. In contrast, a best case scenario is one that maximizes our understanding of electroweak symmetry breaking, as well as shedding light on the questions of mass and flavor. Maximizing understanding is closely related to eliminating the widest range of possibilities currently being considered by theorists.\footnote{Experimentalists, at least, may refer to this as a best case scenario.} This would not be accomplished very quickly, for example, if the first anomalies involved missing energy, due to escaping particles stabilized by a new conserved quantum number. A wide range of different theories may have similar missing energy signatures, and it could take considerably more effort to pin down the actual physics. Here we argue that a best case scenario involves the discovery of a sequential fourth family (sequential means quarks and leptons with standard quantum numbers). If the masses of these new fermions lie in a certain range, then it implies that we have identified the dominant order parameter of electroweak symmetry breaking. Strong interactions are implied, and the whole idea of a perturbative description of electroweak symmetry breaking, as embodied in models of light elementary Higgs scalars, low energy supersymmetry, and composite Higgs constructions, is eliminated.
In addition, as we argue in this note, there may be signals for this possibility that can show up quite early, thus making this at least as exciting as other possible strong interaction scenarios such as technicolor and related dual descriptions in higher dimensions. Our scenario differs in a fundamental respect from these other strong interaction scenarios. Consider the nature of the propagating degrees of freedom most closely associated with electroweak symmetry breaking. In other words, what are the propagating degrees of freedom to which the Goldstone bosons couple most strongly, and among these which are the lightest? These degrees of freedom are typically bosonic. Typically some rho-like resonance plays this role, or some Kaluza-Klein mode, or some scalar Higgs-type mode. These bosons are often produced singly as resonances in colliders. In our scenario the prime signal will instead involve the pair production of unconfined fermions, whose masses are the order parameters for electroweak symmetry breaking. Each fermion decays weakly, which for the case of decaying quarks produces the $f\overline{f}W^{+}W^{-}$ signal where $f$ is typically $b$ or $t$. A bosonic resonance, in contrast, typically decays to a fermion pair \textit{or} a pair of gauge bosons. In this note our interest is in the $b\overline{b}W^{+}W^{-}$ signal. A sequential fourth family is not the unique possibility for new heavy fermions. For example, new fermions that are vector-like under the electroweak symmetries are allowed to have a wide range of masses unrelated to electroweak symmetry breaking. In this sense their discovery would be less informative than a sequential fourth family. There are two ways to distinguish a sequential fourth family from vector-like fermions and other exotic possibilities. One is that the fourth family masses would have to satisfy the constraints arising from electroweak precision measurements.
In particular the mass splittings of the quark and lepton doublets and the relative sizes of the quark and lepton masses are constrained \cite{O}, and if such mass relations are observed then this would constitute good evidence.\footnote{Of course the fourth neutrino must not be light, lying at least above $\sim 80$ GeV, but remember that the fourth family leptons are also associated with strong interactions.} The other is that our proposed signal involves the weak charged-current decay of new heavy fermions. In contrast new vector-like fermions may have dominant decay modes through gluon or $Z$ emission. The suppression of these flavor-changing neutral currents is natural for sequential fermions, but less so for nonstandard fermions that can mix with standard fermions in a variety of ways. Our study will focus on the weak decay of a sequential, charge $2/3$, quark, the $t'$, assumed to have mass in the 600-800 GeV range. This range should be typical of a dynamically generated fermion mass if this is the mass of electroweak symmetry breaking, assuming no fine tuning in the underlying dynamics. Related to this is the old observation that 550 GeV is roughly the mass of a heavy quark above which its coupling to the Goldstone boson is strong \cite{N}. For more discussion of the dynamics and structure of such a theory see \cite{O}. For most of our study we use $m_{t'}=600$ GeV, but we also briefly compare to $m_{t'}=800$ GeV. We assume that the fourth family enjoys CKM mixing and that the resulting decay $t'\rightarrow bW$ is dominant. This is naturally the case if $m_{b'}>m_{t'}$,\footnote{An attempt to understand the top mass in the context of a fourth family yields $m_{b'}>m_{t'}$ \cite{O}.} and it is still true for a range of the mixing element $V_{t'b}$ when $m_{t'}-M_W<m_{b'}<m_{t'}$ so that $t'\rightarrow b'W^{(*)}$ is suppressed by phase space. 
With the $t'\rightarrow bW$ decay dominant, the width of the $t'$ is roughly 60 $|V_{t'b}|^2$ GeV, and thus $t'$ could well be narrower than $t$ for $V_{t'b}$ reasonably small. We are not considering the possibility of single $t'$ production here, but that cross section is also proportional to $|V_{t'b}|^2$. If $|V_{t'b}|\lesssim 0.1$, for example, then $t'\overline{t'}$ production should dominate \cite{R}. $t'\overline{t}'$ production adds to the same final states as $t\overline{t}$ production, and thus a reconstruction of the $t'$ mass is likely necessary to pull the signal from background. In the analyses done thus far a strategy similar to the original $t$ mass determination is adopted; the resulting signal-to-background ratio $S/B$ is not very encouraging, making the search for a 600-800 GeV $t'$ fairly challenging even for 100 fb$^{-1}$ of data \cite{K}. A lower limit on the $t'$ mass using a similar analysis has been set by the CDF collaboration \cite{S}. Here we present a preliminary study of a method that appears to significantly improve the signal-to-background ratio $S/B$. We make use of the fact that standard jet reconstruction algorithms may tend to combine the two proto-jets from the hadronic decay of a sufficiently energetic $W$ into a single jet. Such a jet can be identified through a measurement of its invariant mass, which can be obtained from the energy deposits in the calorimeter cells without any need to resolve nearly merged proto-jets. The idea of using jet masses to identify highly energetic $W$'s (as well as $t$'s) has been considered before in efforts to reconstruct the $t$ mass in $t\overline{t}$ production \cite{J,F}. Our motivation here is in a sense orthogonal in that the reconstruction of $W$ jets in this way may actually act to suppress the $t\overline{t}$ background relative to signal. The reason is basically kinematical, having to do with the relative isolation of $W$ jets and $b$ jets in signal versus background.
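As a numerical cross-check of the quoted width, the standard tree-level charged-current two-body formula can be evaluated directly. The sketch below is ours, not part of the analysis; it neglects the $b$ mass and QCD corrections (which bring the tree-level value, about $70\,|V_{t'b}|^2$ GeV, closer to the roughly $60\,|V_{t'b}|^2$ GeV quoted above).

```python
import math

G_F = 1.166e-5   # Fermi constant in GeV^-2
M_W = 80.4       # W boson mass in GeV

def gamma_tprime_bW(m_tp, v_tpb):
    """Tree-level Gamma(t' -> b W) in GeV, neglecting m_b:
    Gamma = G_F m^3 |V|^2 / (8 pi sqrt(2)) * (1 - r)^2 * (1 + 2 r),
    with r = (M_W / m)^2."""
    r = (M_W / m_tp) ** 2
    return (G_F * m_tp**3 * v_tpb**2 / (8 * math.pi * math.sqrt(2))
            * (1 - r) ** 2 * (1 + 2 * r))

# ~70 GeV for |V_t'b| = 1 at m_t' = 600 GeV; the width scales as
# |V_t'b|^2, so |V_t'b| = 0.1 gives ~0.7 GeV, narrower than the top.
print(gamma_tprime_bW(600.0, 1.0))
print(gamma_tprime_bW(600.0, 0.1))
```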
We should stress that $W$'s are being identified through their hadronic rather than leptonic decay modes, and this allows us to consider event selection without requirements for leptons or missing energy. This increases statistics considerably. \section{Event selection} We utilize the PGS4 detector simulation program \cite{Y}, which conveniently reconstructs the invariant masses of jets. PGS4 incorporates both a cone-based jet finding algorithm and a $k_T$-based algorithm. See \cite{Z} for a description and comparison of these algorithms as applied to the proto-jets of $W$ decay. We will mostly confine ourselves to the cone-based finder, and we show by comparison that it tends to produce a significantly better $S/B$ for our application. We use the ATLAS LHC detector simulation parameter choices,\footnote{These settings are found in the Madgraph package which includes PGS4.} which include a $\Delta \eta\times\Delta\phi=0.1\times0.1$ grid size to model the hadronic calorimeters along with some estimate of their energy resolution. Our only change will be to the cone size for the cone-based jet finder, for which we choose 0.6. Since our own event selection is very restrictive we will not make use of triggers in the detector simulation. The dominant irreducible background is $t\overline{t}$ production, whose cross section is $\sim 500$ times larger than for $t'\overline{t}'$. An obvious reduction of this background comes by imposing a lower bound $\Lambda_{\rm top5}$ on the scalar $p_T$ sum of some number (say five) of the hardest reconstructed objects in the detector. These objects may include leptons and missing energy. We will choose $\Lambda_{\rm top5}=2m_{t'}$. Because of the $\Lambda_{\rm top5}$ cut we can apply a lower bound $\Lambda_{\rm totE}$ on the scalar $p_T$ sum of all final particles in the Herwig or Pythia output, so as to reduce the number of events to be stored and/or simulated by PGS4.
When $m_{t'}=600$ GeV and $\Lambda_{\rm top5}=1200$ GeV we choose $\Lambda_{\rm totE}=1000$ GeV. The latter cut, for computational purposes only, removes an insignificant fraction of the events that pass the other cuts. Tagging $b$-jets will play an important role in our analysis, and we will insist on at least one $b$-tag with $p_T>\Lambda_b$. We find that a good choice is $\Lambda_b=m_{t'}/3$. In an analysis of real data where $m_{t'}$ is not known, our $\Lambda_{\rm top5}$ and $\Lambda_b$ cuts will have to be varied to optimize the signal while keeping the ratio $\Lambda_{\rm top5}/\Lambda_b\approx 6$. The $b$-tagging efficiencies incorporated in PGS4 have been fit to CDF data and are not very appropriate for our studies involving such energetic $b$-jets. With these energies the $b$-mistag rate involving light quarks and gluons is expected to deteriorate (increase) \cite{I}. To account for this and to make the dependence on the efficiencies more transparent we replace the $b$-tag efficiencies, both the tight and loose sets in PGS4, with a single set (1/2, 1/10, 1/30) for underlying $b$'s, $c$'s, and light quarks or gluons, respectively. We assume these efficiencies vanish for pseudorapidity $|\eta|>2$, thus roughly modeling the point in pseudorapidity at which efficiencies start to deteriorate \cite{I}. When we consider the $W+\rm{jets}$ background, it will basically be the $b$-mistags of gluon jets that determine its level. Thus it is the ratio of the first and last of the three efficiencies above that is most relevant for $S/B$, and we believe that our choice is conservative. We will also require evidence for at least one $W$ in a manner that we now describe. The object is to make use of the observation that $t'\overline{t}'$ production generates jets originating from energetic $W$'s and $b$'s that are quite isolated from each other. 
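The selection described so far can be summarized in a few lines of analysis code. This is an illustrative sketch under our own conventions (the event format and function names are ours, not PGS4's); the cut values $\Lambda_{\rm top5}=2m_{t'}$, $\Lambda_b=m_{t'}/3$ and the flat tag/mistag efficiencies are those quoted in the text.

```python
# Sketch of the kinematic event selection: the Lambda_top5 cut on the
# five hardest reconstructed objects, plus the requirement of a hard
# b-tag candidate.  Objects are dicts with 'pt' (GeV), 'eta', 'flavour';
# this representation is a stand-in for real PGS4 output.

BTAG_EFF = {"b": 1 / 2, "c": 1 / 10, "light": 1 / 30}  # flat efficiencies

def tag_prob(jet):
    """Probability that this jet fires the b-tag; efficiencies are
    taken to vanish for |eta| > 2, as assumed in the text."""
    if abs(jet["eta"]) >= 2.0:
        return 0.0
    return BTAG_EFF.get(jet["flavour"], 0.0)

def passes_kinematic_cuts(objects, m_tprime=600.0):
    lam_top5 = 2.0 * m_tprime   # scalar-pT cut on the five hardest objects
    lam_b = m_tprime / 3.0      # minimum pT of the b-tag candidate
    pts = sorted((o["pt"] for o in objects), reverse=True)
    if sum(pts[:5]) <= lam_top5:
        return False
    # at least one jet that could plausibly carry the b-tag; the tag
    # probability itself is applied statistically, as an event weight
    return any(o["pt"] > lam_b and tag_prob(o) > 0.0 for o in objects)
```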
This is in contrast to the $W$'s and $b$'s in the $t\overline{t}$ background passing the above cuts, since they typically come from decays of quite highly boosted $t$'s. In addition, as we have mentioned, when the $W$'s are sufficiently energetic the two jets from the hadronically decaying $W$ will often be reconstructed as a single jet. In particular, even for $E_T=150$ GeV $W$ jets \cite{J}: ``Over 95\% of the jet energy is contained in a $\sqrt{{\Delta \eta}^2+{\Delta \phi}^2}=.7$ cone around the $E_T$ weighted baricenter." This then provides an opportunity to identify isolated and energetic $W$'s, by reconstructing jets using a $\Delta R\lesssim0.7$ cone (in fact we will use $\Delta R=0.6$), and then looking for a peak in the invariant mass distribution of these jets. The main point is that this procedure should be less efficient for the $t\overline{t}$ background events. The $W$ jets in these events will often be contaminated by the nearby $b$ jets, resulting in measured invariant masses that are more widely scattered relative to the true $W$ mass. Note that our goal here is opposite to the usual and more complex task of trying to reconstruct $W$'s in $t\overline{t}$ events. In that case choosing a small cone size and using sophisticated analyses to reduce the merging and cross-contamination of jets is appropriate. Here we want these same effects to reduce the identification of $W$'s in the background sample, and a simple-minded cone algorithm with a fairly large cone size may be quite desirable for this purpose. We are thus prompted to define a $W$-jet as a non-$b$-tagged jet whose invariant mass is close to a peak in the invariant mass distribution, which in turn is close to the $W$ mass. 
The actual peak location will be influenced by the effects of ``splash-out'' and ``splash-in''; the former occurs when the cone does not capture all of the energy originating from the $W$ decay (large angle contributions with respect to the cone center are weighted more in the invariant mass determination) and the latter occurs where the cone is receiving unrelated energy contributions from the underlying event and pile-up effects. There may also be scattering effects that spread the transverse shape of the jet that may not be properly modeled by the detector simulation. But the point is that both the peak location and width can be experimentally determined, and then a jet with an invariant mass falling appropriately close to the peak can be called a $W$-jet. From the histograms for the jet invariant masses that we display below we are led to define a $W$-jet to be one whose invariant mass is within $\approx 10$ GeV of the peak. In fact we use a slightly optimized 9 GeV value. We require at least one $W$-jet defined in this way. Although our event selection relies on hadronic decays of the $W$, in our event generation we allow the $W$ to decay both hadronically and leptonically in both signal and background. In the sample of events containing both $b$ and $W$ jets we can now attempt a reconstruction of the $t'$ mass by considering the invariant mass of $b$-$W$ jet pairs. We consider all such pairs in each event, as long as the $b$-jet has $p_T>\Lambda_b$. We are thus interested in a pair of plots; one for the single jet invariant masses to display the $W$ peak, and one for the invariant masses of the $W$-$b$ pairs to display the $t'$ peak. Note that both plots are produced from events that pass the $\Lambda_{\rm top5}$ and $\Lambda_b$ cuts. On each plot we will overlay signal and background histograms. 
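The $W$-jet definition and the $W$-$b$ pairing just described can be sketched as follows. The code is ours and purely illustrative: the 9 GeV window and $\Lambda_b$ follow the text, while the jet format, the helper names, and the use of the nominal $W$ mass as the peak location (in practice the peak is determined experimentally, shifted by splash-out and splash-in) are assumptions.

```python
import math

def four_vec(pt, eta, phi, m=0.0):
    """(E, px, py, pz) from transverse momentum, pseudorapidity, azimuth."""
    px, py = pt * math.cos(phi), pt * math.sin(phi)
    pz = pt * math.sinh(eta)
    e = math.sqrt(m * m + px * px + py * py + pz * pz)
    return (e, px, py, pz)

def inv_mass(vectors):
    """Invariant mass of a list of (E, px, py, pz) four-vectors."""
    e, px, py, pz = (sum(c) for c in zip(*vectors))
    return math.sqrt(max(e * e - px * px - py * py - pz * pz, 0.0))

def w_jets(jets, peak=80.4, window=9.0):
    """Non-b-tagged jets whose single-jet invariant mass lies within
    `window` GeV of the measured W-mass peak (9 GeV in the text)."""
    return [j for j in jets if not j["btag"] and abs(j["mass"] - peak) < window]

def tprime_candidates(jets, lam_b=200.0):
    """Invariant masses of all (b-jet, W-jet) pairs, with the b-jet
    above Lambda_b = m_t'/3 = 200 GeV for m_t' = 600 GeV."""
    ws = w_jets(jets)
    bs = [j for j in jets if j["btag"] and j["pt"] > lam_b]
    return [inv_mass([b["p4"], w["p4"]]) for b in bs for w in ws]
```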
We will produce these plots for each of a variety of event generation tools: MC@NLO\cite{E}-Herwig, Herwig\cite{B}, Alpgen\cite{C}-Herwig, Alpgen-Pythia, Pythia\cite{A} and Madgraph\cite{D}-Pythia. We have made an effort to use up-to-date versions.\footnote{Herwig 6.51, MC@NLO 3.3, Alpgen 2.11, Madgraph 4.1, Pythia 6.409, PGS4 release 070120.} Our goal here is to study the effects of different types of physics that are modeled in various ways by these programs. \begin{itemize} \item MC@NLO comes the closest to correctly modeling the partonic scattering by including the one-loop effects. \item For the study of more jets originating in the higher order tree level processes at the partonic level we use Alpgen, interfaced to both Herwig and Pythia.\footnote{Note that some matching to massive quark matrix elements is already incorporated into Herwig and Pythia \cite{V}.} \item Other physics having an important impact on the background is initial state radiation and the underlying event. Stand-alone Pythia includes more varied and possibly more advanced descriptions of this physics. \item Different tools may be more convenient for different backgrounds; we shall use Madgraph-Pythia to model the $W+\rm{jets}$ background. \item Pythia and Madgraph-Pythia make it easy to incorporate a fourth family and thus allow a parameterization of the CKM mixing relevant to $t'$ decay. \end{itemize} An important feature of our analysis is that for each case, we always use the same tool(s) to calculate both signal and background. In the next two sections we concentrate on the $t\overline{t}$ background and then consider the $W+\rm{jets}$ background. We collect roughly 3 fb$^{-1}$ of integrated luminosity for the backgrounds. For the signal we often collect 5-10 times more and then scale the resulting histograms down; for both signal and background we scale results to 2.5 fb$^{-1}$.
Although the signal histograms are then artificially smooth, they usefully show the structure expected as more data is collected. \section{Event generation involving Herwig} We first investigate the MC@NLO-Herwig combination, where MC@NLO corrects the partonic production process at the one-loop level, thus incorporating more correctly the first extra hard parton. It could be expected that the predicted size of both signal and background in MC@NLO is more reliable than in the other approaches we consider below. (Other approaches require more consideration of renormalization scales and K-factors.) A slight drawback of MC@NLO is that it offers no straightforward way to incorporate a fourth family. Thus to model the process $t'\rightarrow bW$ we simply increase the $t$ mass to 600 GeV and then use $t\rightarrow bW$ to model $t'\rightarrow bW$. The larger than expected width of the $t'$ that this entails has little effect on our results. We also note that MC@NLO operates by producing a fraction ($\sim 15$\%) of events with negative weight, which need to be subtracted when forming histograms. We use the CTEQ6M parton distribution function (an NLO PDF) with MC@NLO-Herwig, but we also use it elsewhere (unless otherwise specified) to make comparisons more transparent. We note that the choice of PDF has minor impact on $S/B$ as long as the same PDF is used for both signal and background. \begin{center}\includegraphics[scale=0.28]{fig1}\end{center} \vspace{-1ex}\noindent Figure 1: Signal (red) versus $t\overline{t}$ background (blue), using MC@NLO-Herwig. As for all such figures to follow, the $W$ mass plot is on the left and the $t'$ mass plot is on the right. \vspace{2ex} We present the resulting $W$ mass and $t'$ mass plots in Fig.~(1), comparing signal against the $t\overline{t}$ background.
In the $W$ mass plot we see that a stronger peak at the $W$ mass shows up in the signal events as compared to the background events, as expected from our previous discussion, which thus leads to an increased $S/B$. Even so we are surprised by how strong the $t'$ mass peak is relative to background. This naturally leads to the question of the role that MC@NLO is playing, and so we compare to Herwig when run in stand-alone mode so as to produce results without NLO corrections (the MC@NLO scripts provide this option). Those results are displayed in Fig.~(2). The comparison of these two sets of results shows that the effect of MC@NLO is apparently to cause the signal to increase and the background to decrease! We also see that the large difference in the backgrounds in the $W$ mass plots does not carry over to the same degree in the $t'$ mass plots. This is at least partly due to the $\Lambda_{\rm top5}$ and $\Lambda_b$ cuts which, for the background, push a broad peak in the $W$-$b$ invariant mass spectrum to higher energies. \begin{center}\includegraphics[scale=0.28]{fig2}\end{center} \vspace{-1ex}\noindent Figure 2: Signal versus $t\overline{t}$ background, using stand-alone Herwig. \vspace{2ex} For a better understanding of the large difference in background on the $W$ mass plots we consider the $H_T$ distribution, where $H_T$ is defined as the scalar sum of all transverse (including missing) momenta. Of interest is the high energy tail of this distribution for the $t\overline{t}$ background, since this is the region populated by the signal events. The $H_T$ distributions with and without MC@NLO are shown in Fig.~(3). On the high $H_T$ tail we see a significant reduction due to MC@NLO, even though MC@NLO increases the total cross section for $t\overline{t}$ production from 490 to 815 pb.
In fact this increase in the total $t\overline{t}$ production cross section is similar to the increase in the $t'\overline{t'}$ production cross section, an increase from 0.87 to 1.33 pb. These increases are the K-factors. The K-factor for the signal combined with the change in shape of the $H_T$ distribution for the background gives some understanding of the signal enhancement and the background suppression. We also see how tiny the signal appears in Fig.~(3); this highlights again the surprising effectiveness of the event selection and the $t'$ mass reconstruction in pulling the signal from the background. Although we are finding that the physics incorporated by MC@NLO can have a significant effect on the shape of the $H_T$ distribution, we should keep in mind that the stand-alone Herwig results may be sensitive to choices made in its own attempt to model the physics. We have also considered the effect of the Jimmy add-on to Herwig to model the underlying event. We use a Jimmy tuning for the LHC, in particular \verb$PTJIM=4.9$ and \verb$JMRAD(73)=1.8$ \cite{M}. The result is more low energy activity in the event, but we find that this has only a minor effect on signal and background; the $H_T$ tail of the background is little affected. We have therefore presented results without Jimmy. \begin{center}\includegraphics[% scale=0.56]{HT1}\hspace{-3pt}\includegraphics[% scale=0.56]{HT2}\end{center} \vspace{-2ex}\noindent Figure 3: $H_T$ distributions for $t\overline{t}$ production, with and without NLO effects. The high $H_T$ tails are shown on the right along with the $t'\overline{t'}$ signal distribution. \vspace{2ex} Complementary to MC@NLO in the effort to go beyond lowest order matrix elements is Alpgen, which can also interface with Herwig. Alpgen incorporates higher order amplitudes involving more partons and uses jet-parton matching to avoid double counting, given that the showering process generates additional jets.
On the other hand Alpgen lacks the loop corrections that should accompany the higher order tree diagrams. This shows up as more sensitivity to the choice of renormalization scale. Here and in the following we choose $\sqrt{\hat{s}}/2$ for the renormalization scale, which for example approaches $m_t$ near the $t\overline{t}$ threshold. We again choose the CTEQ6M PDF to simplify comparison to MC@NLO, and we ensure that the same PDF is used by Herwig. To run Alpgen we use 4.0 as the maximum pseudorapidity and 0.6 as the minimum jet separation. See \cite{P} for a comparison of Alpgen and MC@NLO and for a description of the MLM matching procedure used by Alpgen. The high $H_T$ tail of the distribution, of interest for the background, is sensitive to the higher jet multiplicities. We use Alpgen to generate samples of $t\overline{t}+0$, $t\overline{t}+1$ and $t\overline{t}+2$ partons. We note that the lower multiplicity samples become relatively more important for an increasing value of the minimum jet $p_{T}$ parameter used by Alpgen, $p_T^{min}$. We wish to choose $p_T^{min}$ large enough so that the highest jet multiplicity sample does not completely dominate the high $H_T$ tail. (The highest multiplicity sample is inclusive and is sensitive to the parton showering performed by Herwig; we wish to avoid this reliance on Herwig.) We will display results for $p_T^{min}=80$ GeV, but in principle results should be fairly independent of the $p_T^{min}$ choice. Indeed we find similar results for $p_T^{min}=120$ GeV and $p_T^{min}=40$ GeV. In the latter case we have to consider jet multiplicities up to and including the $t\overline{t}+4$ parton sample, again in order for the highest multiplicity sample not to completely dominate the high $H_T$ region. For the signal we don't need the full Alpgen machinery since we are not on a tail of a distribution in this case.
But for a fair comparison of signal and background we nevertheless use Alpgen to produce a $t'\overline{t'}$ sample, although with no extra partons and with jet matching turned off. To model the $t'$ with Alpgen we do the same as with MC@NLO, and simply increase the mass of the $t$ to 600 GeV. An advantage of Alpgen over MC@NLO is that the former incorporates spin correlations in the fully inclusive decays of $t$ and $t'$, although we do not expect this to have much effect on our results. We find that the Alpgen-Herwig results in Fig.~(4) are strikingly similar to the MC@NLO-Herwig results. Note that we have not included K-factors that would be necessary to bring the total cross sections in line with MC@NLO results. In any case the handling of the higher jet multiplicities in Alpgen, arguably in a manner more correct than in MC@NLO, does little to degrade the $t'$ mass reconstruction. If anything, these results again suggest that perturbative effects beyond lowest order tend to enhance $S/B$. \begin{center}\includegraphics[scale=0.28]{fig4}\end{center} \vspace{-1ex}\noindent Figure 4: Signal versus $t\overline{t}$ background, using Alpgen-Herwig. \vspace{2ex} \section{Event generation involving Pythia} As a further check we would like to use Alpgen in conjunction with Pythia, but first we need to discuss our use of Pythia. In the following Pythia will also be used in stand-alone mode, as well as with Madgraph. We wish Pythia to be used consistently in these three contexts, so that for processes that all three methods can describe, they give the same results. The renormalization scale, used as an argument for parton distributions and for $\alpha_s$ at the hard interaction, is specified explicitly in stand-alone Pythia. Alternatively Alpgen and Madgraph can pass this scale to Pythia on an event-by-event basis. Another important scale in Pythia is the maximum parton virtuality allowed in $Q^2$-ordered space-like showers (initial state radiation).
We will refer to this as a phase space cutoff. We find that the high $H_T$ tail of our distributions, important for determining the background, is sensitive to this cutoff. There is much less sensitivity to a corresponding cutoff for time-like showers (final state radiation). In Pythia, \verb$MSTP(32)=4$ (specifies $\hat{s}$) and \verb$PARP(34)=0.25$ (the pre-factor) give $\hat{s}/4$ as the square of the renormalization scale. In stand-alone Pythia the phase space cutoffs, space-like and time-like, are determined (when \verb$MSTP(68)=0$) by the factors \verb$PARP(67)$ and \verb$PARP(71)$ that multiply $\hat{s}$ in our case. To get the same cutoffs in Alpgen-Pythia and Madgraph-Pythia, assuming that $\sqrt{\hat{s}}/2$ is the renormalization scale in Alpgen or Madgraph, requires the rescalings \verb$PARP(67)$ $\rightarrow$ \verb$PARP(67)/PARP(34)$ and \verb$PARP(71)$ $\rightarrow$ \verb$PARP(71)/PARP(34)$ in Pythia when using the external events. We display the Alpgen-Pythia results in Fig.~(5). The same unweighted Alpgen events that were passed through Herwig are passed through Pythia, with the latter again using the CTEQ6M PDF. We find that the Alpgen-Herwig and Alpgen-Pythia results are in excellent agreement. \begin{center}\includegraphics[scale=0.28]{fig5}\end{center} \vspace{-1ex}\noindent Figure 5: Signal versus $t\overline{t}$ background, using Alpgen-Pythia. \vspace{2ex} We now consider results of stand-alone Pythia. Pythia has recently provided built-in choices for ``tunes'' of the various parameters and options in the modeling of initial and final state radiation and the underlying event, where the latter includes beam remnants and multiple parton interactions. First we consider the DW tune, developed by R.~D.~Field by testing against CDF data \cite{L}. It has \verb$PARP(67)=2.5$.
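The cutoff rescaling just described is simple arithmetic, but it is easy to get wrong when juggling several parameter cards. A minimal sketch (the helper name is ours; \verb$PARP(67)=2.5$ is the DW value quoted above, while the \verb$PARP(71)$ value used below is merely illustrative):

```python
def rescaled_cutoff_prefactors(parp67, parp71, parp34):
    """Rescale the space-like (PARP(67)) and time-like (PARP(71)) phase-space
    cutoff prefactors for showering external Alpgen/Madgraph events.  The
    external renormalization scale is sqrt(s_hat)/2, i.e. PARP(34)*s_hat with
    PARP(34)=0.25, so the stand-alone prefactors are divided by PARP(34)."""
    return parp67 / parp34, parp71 / parp34

# e.g. the stand-alone value PARP(67)=2.5 becomes 10.0 for external events
```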
We will now also switch to the CTEQ5L PDF since this is assumed by the Pythia tunes.\footnote{We also used the DW settings for the Alpgen-Pythia runs, which in that case is not strictly the DW tune due to the different PDF used.} The results for the DW tune are shown in Fig.~(6), and we see an $S/B$ that is somewhat smaller than the previous results of MC@NLO and Alpgen. This reinforces our previous conclusions regarding the role of perturbative corrections. \begin{center}\includegraphics[scale=0.28]{fig6}\end{center} \vspace{-1ex}\noindent Figure 6: Signal versus $t\overline{t}$ background, using stand-alone Pythia with tune DW. \vspace{2ex} It is of interest to consider other tunes involving the different showering and underlying event models available in Pythia. Tune DW is based on the older $Q^2$-ordered shower model, but there are newer models based on $p_T$-ordered showers. There are four such Sandhoff-Skands tunes \cite{H}, S0, S0A, S1, S2, differing mainly by the color reconnection model they use. Although the value of \verb$PARP(67)$ is not used in these tunes, a low or high phase space cutoff can be chosen with \verb$MSTP(68)=0$ or \verb$3$ respectively. We choose the former, and we comment more on this below. Although the new models are hopefully more realistic than the old, they may not be as well tested or tuned. In addition the new models appear only to work in stand-alone Pythia. Among these we choose to focus on tune S0A, motivated partly by the fact that it shares with tune DW the same value \verb$PARP(90)=0.25$ (the energy scaling of the infrared cutoff in the underlying event model). Also, its color reconnection model is less computationally intensive than those of S1 or S2. (Of special note is tune S1, which runs extremely slowly and is the only tune to give a lower $S/B$ than tune S0A.) The results for the S0A tune are shown in Fig.~(7). A further drop in $S/B$ from the DW tune is evident.
\begin{center}\includegraphics[scale=0.28]{fig7}\end{center} \vspace{-1ex}\noindent Figure 7: Signal versus $t\overline{t}$ background, using stand-alone Pythia with tune S0A. \vspace{2ex} In Fig.~(8) we compare the high $H_T$ tails of the various cases. We omit the stand-alone Herwig result already shown in Fig.~(3), which is larger than any here. The largest in Fig.~(8) is from tune S0A, and we find that all four Sandhoff-Skands tunes give very similar results for the $H_T$ tail. We see that the relative sizes of these high $H_T$ tails are inversely related to the observed $S/B$ ratios. \begin{center}\includegraphics[scale=0.8]{HT3}\hspace{-1pt}\end{center} \vspace{-2ex} \noindent Figure 8: Distributions on the high $H_T$ tail. \vspace{2ex} The distribution that is much smaller than the others in Fig.~(8) corresponds to the case of turning off initial state radiation in the DW tune of stand-alone Pythia. This last result makes clear just how important initial state radiation is to the background estimate; in fact the bulk of the background is due to it. Physically this suggests that the high $H_T$ tail of the distribution is receiving significant contributions from scatterings producing $t\overline{t}$ at lower energy, where the cross section is larger, since the remaining energy can come from initial state radiation. Initial state radiation also stimulates multiple parton interactions, which adds to the activity. There has been some discussion \cite{G} of initial state radiation and the motivation to increase the phase space cutoff to better fit $p_T$ distributions of jets. But increasing this cutoff (using the default \verb$MSTP(68)=3$ for the new tunes, for example) will significantly inflate the $H_T$ tail further, and thus there appears to be some tension in the attempt by stand-alone Pythia to model both the jet $p_T$ and $H_T$ distributions simultaneously.
\section{Another background and variations} We now turn to a brief discussion of the $W+\rm{jets}$ background, where the $W$ decays hadronically and at least one jet is mistagged as a $b$-jet. For this we turn to Madgraph. (In Alpgen only the leptonic $W$ decay is incorporated for this process.) We continue to make the choice of $\sqrt{\hat{s}}/2$ for the renormalization scale (in Madgraph a modification of \verb$setscales.f$ is needed) and CTEQ5L for the PDF. Just from kinematics, for a mistagged $b$-jet and $W$ to have a combined invariant mass in the signal region typically requires that there be at least one other hard jet in the process. We thus focus on the $W+2$ jet process at the partonic level. (We have confirmed that it generates a larger background than the $W+1$ jet process.) In Madgraph we require a minimum $p_T$ of 120 GeV for each of the two jets (and we confirm that a 100 GeV cut does not increase the background). The size of this background of course depends on the $b$-mistag rate for light partons (mainly gluons), for which we have chosen the quite conservative value of 1/30. The results in Fig.~(9) are again encouraging, with this background being comparable to the $t\overline{t}$ background. \begin{center}\includegraphics[scale=0.28]{fig9}\end{center} \vspace{-1ex}\noindent Figure 9: Signal versus $W+\rm{jets}$ background, using Madgraph-Pythia. \vspace{2ex} More study of the $W+\rm{jets}$ background is warranted, but we note that if necessary it can be significantly reduced relative to the $t\overline{t}$ background. This is done by requiring an additional $b$-tag and/or a lepton and/or missing energy, at the expense of lower statistics. We do not enter into a detailed discussion of other possible backgrounds here, but the ones we have briefly checked, including $b\overline{b}+\rm{jets}$, $(W/Z)b\overline{b}$, $Z+\rm{jets}$, and $(WW/ZZ/WZ)+\rm{jets}$, all appear to be smaller than $W+\rm{jets}$.
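The kinematic point above, that a mistagged $b$-jet and a hadronic $W$ only reach a combined invariant mass near 600 GeV when the event contains other hard activity, can be illustrated with a small four-vector exercise (generic code, not part of the analysis chain; the 300 GeV momenta are illustrative):

```python
import math

def inv_mass(p1, p2):
    """Invariant mass of the sum of two four-vectors (E, px, py, pz), in GeV."""
    E, px, py, pz = (p1[i] + p2[i] for i in range(4))
    return math.sqrt(max(E * E - px * px - py * py - pz * pz, 0.0))

# a massless jet and an 80 GeV W, back to back with pT = 300 GeV each,
# combine to an invariant mass just above 600 GeV
jet = (300.0, 300.0, 0.0, 0.0)
w = (math.sqrt(300.0**2 + 80.0**2), -300.0, 0.0, 0.0)
mass = inv_mass(jet, w)
```

So transverse momenta of order half the reconstructed mass are needed, which is why the $W+2$ jet process dominates over $W+1$ jet here.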
More problematic to estimate is the QCD multijet background, and a more detailed study is left for future work. Indeed it may be that this background could force lepton and/or missing energy requirements in our event selection. Thus far we have considered $m_{t'}=600$ GeV, and so here we briefly consider increasing this mass to 800 GeV. The only difference is that we scale up the $\Lambda_{\rm top5}$ and $\Lambda_b$ cuts by a factor of $4/3$. We display the results for MC@NLO in Fig.~(10) and for the stand-alone Pythia case with the DW tune in Fig.~(11). The comparison is between a case with NLO effects and a case without, and the differences that we have noted before are now accentuated for $m_{t'}=800$ GeV. \begin{center}\includegraphics[scale=0.28]{fig10}\end{center} \vspace{-1ex}\noindent Figure 10: Signal versus $t\overline{t}$ background with $m_{t'}=800$ GeV, using MC@NLO-Herwig. \vspace{2ex} \begin{center}\includegraphics[scale=0.28]{fig11}\end{center} \vspace{-1ex}\noindent Figure 11: Signal versus $t\overline{t}$ background with $m_{t'}=800$ GeV, using Pythia with tune DW. \vspace{2ex} Finally, we would like to consider the use of the $k_T$ jet finding algorithm available in PGS4. We keep everything else the same except that we set the parameter analogous to the cone size to 0.5. We display the results for the stand-alone Pythia case with DW tune in Fig.~(12). A comparison to Fig.~(6) illustrates how the $k_T$ jet finder has been unable so far to match the performance of the cone-based jet finder. \begin{center}\includegraphics[scale=0.28]{fig12}\end{center} \vspace{-1ex}\noindent Figure 12: Signal versus $t\overline{t}$ background using stand-alone Pythia with tune DW, and using the $k_T$ jet finding algorithm. \vspace{2ex} In summary, event selection based on the use of single jet invariant masses and cone-based jet finding provides a very encouraging signal to background ratio for the search for heavy quarks. 
It appears to survive the various effects that may increase the background estimates, even for just a few inverse femtobarns of data. The background sensitivity to initial state radiation creates one of the larger uncertainties, as does the range of results arising from the various models and tunes available in Pythia. On the other hand the higher order effects as modeled by MC@NLO and Alpgen appear to improve the signal to background ratio. Further analysis and refinements of the method, and the consideration of other applications where $W$ identification is needed, will be left for future work \cite{Q}. \section{But is a fourth family worth looking for?} The present theoretical bias against the consideration of a fourth family is basically a reflection of our present lack of understanding of the origin of flavor. Most attention is focused on the origin of electroweak symmetry breaking, and a flavor structure is usually simply imposed so as to accommodate the known three families. In this sense flavor physics and the physics of electroweak symmetry breaking have not been well integrated into a common framework. The discovery of a fourth family and the realization that it is very much connected with electroweak symmetry breaking would bring these two issues together. In particular the origin of mass of the light quarks and leptons would become a question of how the heavy masses are fed down to the lighter masses. In the absence of a Higgs scalar and associated Yukawa couplings, one would have to consider effective four-fermion operators as the mechanism for feeding mass down. Since such operators are naturally suppressed by the mass scale of their generation, one is led towards new physics at energy scales not too far above the electroweak scale.
A fourth family, and its implication that there is no light Higgs, no low-energy supersymmetry, and no evidence for any required fine-tuning, would shift the focus away from theories of much higher energy scales, and towards the dynamics of a theory of flavor. \section*{Note added} Upon the completion of this work the recent work by W. Skiba and D. Tucker-Smith \cite{T} was brought to the author's attention. They also make use of single jet invariant masses, and they reference a further example of earlier work using this technique \cite{U}. \section*{Acknowledgments} The author thanks John Conway and Johan Alwall for useful communications. I also thank Brian Beare for his input. This work was supported in part by the Natural Sciences and Engineering Research Council of Canada.
\section{Introduction} The effects of topological terms on the dynamics of Goldstone modes and the quantum numbers of solitons and instantons in non-linear $\sigma$ (NL$\sigma$) models have a long history, and continue to attract strong interest from the physics community\cite{wilczek}. For example, in one spatial dimension (1D), quantum SO(3) spin chains have fundamentally different low energy properties, depending on whether the site representation is linear (integer spin) or projective (half-odd integer spin). For nearest neighbor isotropic exchange interaction, the former cases always have excitation gaps while the latter are gapless - the well known Haldane conjecture\cite{haldane}. Aside from the usual stiffness terms, the (1+1)D NL$\sigma$ models for these spin chains contain a topological (Berry phase) term (also known as the $\theta$ term)\cite{haldane}. When the space-time configuration of the Neel order parameter wraps the target space\cite{mermin} ($S^{2}$) $n$ times, the Berry phase factor is +1 for integer spin chains, while it is $(-1)^{n}$ for half-odd integer spin chains\cite{haldane}. Using an algebraic approach, Chen, Gu and Wen\cite{xc} recently generalized an idea of Ref.\cite{pollmann} and argued that, in one spatial dimension, a gapped ground state which is invariant under translation and the global symmetry operation (we refer to this type of state as ``totally symmetric'' in the following) is obtainable when the site representation of the global symmetry group (which can be discrete) is linear. When the site representation is projective, a totally symmetric ground state must be gapless. In a projective representation $D$, the matrix product $D(g_{1})D(g_{2})$, where $g_{1,2}$ are group elements, can differ from $D(g_{1}g_{2})$ by a phase factor $e^{i\theta (g_{1},g_{2})}$. For the SO(3) group, an integer spin forms a linear representation while a half-odd integer spin forms a projective representation.
Hence the spectral difference between the translational invariant integer and half-odd integer SO(3) spin chains constitutes a special example of the results of Ref.\cite{xc}. Thus there are two ways to view the difference between integer and half-odd integer SO(3) spin chains: one is geometrical (the Berry's phase)\cite{haldane} and the other is algebraic\cite{xc}. SO(5) is a rank-2 classical Lie group. It also has linear and projective representations. For example, in the vector representation, the generators of the Lie algebra are given by\cite{georgi} \begin{equation} (L_{ab})_{jk}=-i\delta _{a,j}\delta _{b,k}+i\delta _{a,k}\delta _{b,j}, \label{vec} \end{equation} where $a,b=1,\ldots,5$ and $j,k=1,\ldots,5$. Two consecutive $\pi$ rotations generated by, e.g., $L_{12}$ give \begin{equation} U_{12}(\pi )U_{12}(\pi )=I_{5\times 5}. \end{equation} The spinor representation, on the other hand, is given by\cite{georgi} \begin{equation} L_{ab}=i[\Gamma _{a},\Gamma _{b}]/4, \label{sp} \end{equation} where $\Gamma _{a,b}$ are the $4\times 4$ gamma matrices (e.g., $\Gamma _{1}=-\sigma _{y}\otimes \sigma _{x}$, $\Gamma _{2}=-\sigma _{y}\otimes \sigma _{y}$, $\Gamma _{3}=-\sigma _{y}\otimes \sigma _{z}$, $\Gamma _{4}=\sigma _{x}\otimes I_{2\times 2}$, $\Gamma _{5}=\sigma _{z}\otimes I_{2\times 2}$). In this case it is simple to check that two consecutive $\pi$ rotations generated by $L_{12}$ yield \begin{equation} U_{12}(\pi )U_{12}(\pi )=-I_{4\times 4}. \end{equation} Thus the vector representation is linear, while the spinor representation is projective. According to Ref.\cite{xc}, a spin chain of the former type can have a totally symmetric ground state with a gapped spectrum, while a spin chain of the latter type has to be gapless if it is totally symmetric. In the following, we will seek the geometric (Berry's phase) difference between the two cases.
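Both rotation identities are easy to confirm numerically. The following sketch (our illustrative NumPy/SciPy code, not from the paper) constructs $L_{12}$ in each representation from the definitions above and applies two successive $\pi$ rotations:

```python
import numpy as np
from scipy.linalg import expm

# L12 in the vector representation: (L_ab)_jk = -i d_aj d_bk + i d_ak d_bj
L12_vec = np.zeros((5, 5), dtype=complex)
L12_vec[0, 1], L12_vec[1, 0] = -1j, 1j

# L12 = i[Gamma_1, Gamma_2]/4 in the spinor representation
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
G1 = -np.kron(sy, sx)
G2 = -np.kron(sy, sy)
L12_sp = 1j * (G1 @ G2 - G2 @ G1) / 4

def double_pi_rotation(L):
    """Two successive pi rotations generated by the Hermitian generator L."""
    U = expm(1j * np.pi * L)
    return U @ U
```

Applying `double_pi_rotation` to the two generators returns $+I_{5\times 5}$ and $-I_{4\times 4}$ respectively, which is precisely the linear versus projective distinction.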
Another motivation for us to study the Berry's phase of the SO(5) spin chain is a recent exactly solvable 1D SO(5) spin model (in the vector representation) proposed by Tu et al.\cite{zx}. The ground state is a translational invariant matrix product state, i.e., a valence bond solid state\cite{aklt}, and the excitation spectrum has a gap. Moreover, when the chain is subjected to open boundary conditions, there are edge states. These properties are reminiscent of those of integer SO(3) spin chains\cite{ng}. With the advance of cold atom physics, the SO(5) spin chain might not be a purely academic model anymore. An SO(5) symmetric spin chain can in principle be realized experimentally when hyperfine spin-3/2 cold fermions on a 1D optical lattice form the Mott-insulating state\cite{tu}. At quarter filling (one fermion per site), the effective spin chain is in the spinor representation, while at half-filling (two fermions per site) it is in the SO(5) vector representation. Therefore the idea presented here might one day be tested experimentally. \section{Model formulation} Let us start by considering the following SO(5) invariant Hamiltonian \begin{equation} H=\sum_{i}\left[ J_{1}\left( \sum_{a<b}L_{ab}^{i}L_{ab}^{i+1}\right) +J_{2}\left( \sum_{a<b}L_{ab}^{i}L_{ab}^{i+1}\right) ^{2}\right] \label{h5} \end{equation} where the $L_{ab}$ are the SO(5) generators and $J_{1,2}>0$. When $L_{ab}$ are given by Eq.(\ref{vec}), the ground state is translational invariant and the spectrum has a gap in the parameter range $1/9<J_{2}/J_{1}<1/3$\cite{zx}. Naively, one would not expect the NL$\sigma$ model action of this model to contain a topological term. This is because, in contrast to the SO(3) spin chain where the space-time dimension ($1+1$) matches the dimension of the target space of the order parameter ($S^{2}$), for the SO(5) spin chain the target space dimension is much larger than the space-time dimension.
To understand the structure of the target space for the SO(5) spin chain, we need to know how the presence of an SO(5) ``magnetic'' moment breaks the global SO(5) symmetry. For that purpose, it is sufficient to consider the following mean-field theory, where non-linear terms in $L_{ab}$ are decoupled into linear ones with the order parameter given by $\langle L_{ab}^{i}\rangle =(-1)^{i}m_{ab}$: \begin{equation} H_{\text{MF}}=\left( -2J_{1}+2J_{2}\Delta ^{2}\right) \sum_{i,a<b}(-1)^{i}m_{ab}L_{ab}^{i}+\sum_{i}\left( J_{1}\Delta ^{2}-3J_{2}\Delta ^{4}\right) , \end{equation} where $\Delta ^{2}=\sum_{a<b}m_{ab}^{2}$. The question at hand is, for a fixed total magnitude of $m_{ab}$ (i.e. fixed $\sum_{a<b}m_{ab}^{2}$), what is the most energetically favorable ratio between the different components of $m_{ab}$. This can be answered by diagonalizing the single-site Hamiltonian \begin{equation} H_{1}=-\sum_{a<b}m_{ab}L_{ab}, \label{eq:h1} \end{equation} and seeing which ratio gives the lowest ground state energy. (Of course we need to remember that the sign of $m_{ab}$ changes from site to site.) In the following, we study the two irreducible representations given by Eqs.(\ref{vec}) and (\ref{sp}). First we consider the vector representation, Eq.(\ref{vec}). It is straightforward to show that the energy spectrum of $H_{1}$ is $E=(-\Delta _{1},-\Delta _{2},0,\Delta _{2},\Delta _{1})$, where $\Delta _{1,2}=\sqrt{A\pm \sqrt{A^{2}+B-C}}$ and \begin{eqnarray} A &=&\sum_{a<b}m_{ab}^{2}/2=\Delta ^{2}/2, \notag \\ B &=&2\sum_{a<b<c<d}(m_{ac}m_{ad}m_{bc}m_{bd}-m_{ab}m_{ad}m_{bc}m_{cd}+m_{ab}m_{ac}m_{bd}m_{cd}), \notag \\ C &=&\sum_{a<b}\sum_{a<c<d}m_{ab}^{2}m_{cd}^{2}(1-\delta _{bc})(1-\delta _{bd}). \end{eqnarray} The single-site ground state energy reaches its minimum when $B=C$, where the energy spectrum of $H_{1}$ is $E=\{-\Delta ,0,0,0,\Delta \}$. Now we ask how the SO(5) symmetry is broken under such a condition.
Let us use $H_{1}$ as one of the two Cartan generators\cite{georgi}. For the other Cartan generator $H_{2}$, we choose a different linear combination of the $L_{ab}$ such that $\mathrm{Tr}(H_{1}H_{2})=0$. The root and weight diagrams are shown in Fig.~\ref{irrep}(a). Here the x and y coordinates of the dots are the eigenvalues of $H_{1}$ and $H_{2}$, respectively. The arrows in the root diagrams indicate how the raising/lowering operators\cite{georgi} in the SO(5) Lie algebra change the eigenvalues of $H_{1,2}$. $H_{2}$ and the raising/lowering operators generate an SO(3) subgroup which commutes with $H_{1}$. As a result, the SO(5) symmetry is broken down to SO(3)$\times$SO(2), where the SO(2) is generated by $H_{1}$ itself. \newline \begin{figure}[tbp] \begin{center} \includegraphics[angle=90,scale=0.5]{irrep-a.eps} \end{center} \caption{(color on-line) The root (upper row) and weight (lower row) patterns of the vector (a) and spinor (b) representations of SO(5). The x and y coordinates of each dot correspond to the eigenvalues of $H_{1}$ and $H_{2}$. The red arrows indicate how the raising/lowering operators in the SO(5) Lie algebra change the eigenvalues of $H_{1}$ and $H_{2}$. The central dots in the root patterns are doubly degenerate.} \label{irrep} \end{figure} Second, we consider the spinor representation, Eq.(\ref{sp}). It is straightforward to show that the energy spectrum of $H_{1}$ is $E=(-\Delta _{1},-\Delta _{2},\Delta _{2},\Delta _{1})$ with $\Delta _{1,2}=\sqrt{A\pm \sqrt{C-B}}/\sqrt{2}$. In this case the single-site ground state energy reaches its minimum when $A=\sqrt{C-B}$, where the energy spectrum of $H_{1}$ is $E=\{-\Delta ,0,0,\Delta \}$. The root and weight patterns are shown in Fig.~\ref{irrep}(b). Again the SO(5) symmetry is broken down to SO(3)$\times$SO(2).
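The vector-representation spectrum quoted above is easy to spot-check numerically. The sketch below (illustrative NumPy code, not from the paper; the helper name is ours) builds $H_{1}$ for random $m_{ab}$ and verifies that the eigenvalues come in the pattern $(-\Delta _{1},-\Delta _{2},0,\Delta _{2},\Delta _{1})$ with $\Delta _{1}^{2}+\Delta _{2}^{2}=2A=\sum_{a<b}m_{ab}^{2}$:

```python
import numpy as np

rng = np.random.default_rng(0)

def h1_vector(m):
    """H1 = -sum_{a<b} m_ab L_ab in the 5-dimensional vector representation.
    m maps 0-indexed pairs (a, b) with a < b to the coefficients m_ab."""
    H = np.zeros((5, 5), dtype=complex)
    for (a, b), mab in m.items():
        L = np.zeros((5, 5), dtype=complex)
        L[a, b], L[b, a] = -1j, 1j   # (L_ab)_{jk} = -i d_aj d_bk + i d_ak d_bj
        H -= mab * L
    return H

m = {(a, b): rng.normal() for a in range(5) for b in range(a + 1, 5)}
E = np.linalg.eigvalsh(h1_vector(m))   # ascending: (-D1, -D2, 0, D2, D1)
```

The zero mode and the symmetric pairing follow from $H_{1}$ being $i$ times a real antisymmetric $5\times 5$ matrix.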
After fixing the ratio of the $m_{ab}$, we assume that the low energy fluctuations correspond to smooth space-time dependent SO(5) rotations of such an order parameter pattern. The NL$\sigma$ model precisely describes such smooth fluctuations. The order parameter lives on the manifold $\frac{SO(5)}{SO(3)\times SO(2)}$, which has $\mathbb{Z}$ as its second homotopy group\cite{Hatcher}. Thus the corresponding NL$\sigma$ model may contain a topological term, which can lead to a spectral difference between the vector and spinor representations. \section{The single-site Berry's phase} To study the possible topological term, we begin by analyzing the Berry's phase of a single SO(5) spin described by the following time-dependent Hamiltonian \begin{equation} H_{1}(t)=-\sum_{a<b}m_{ab}(t)L_{ab}, \end{equation} where the $m_{ab}(t)$ satisfy two constraints: (a) $\sum_{a<b}m_{ab}^{2}=1$, and (b) $H_{1}(t)$ possesses SO(3)$\times$SO(2) symmetry. Both constraints can be satisfied by starting with a reference Hamiltonian $H_{1,0}$ satisfying (a) and (b) and performing a time-dependent SO(5) conjugation, i.e., \begin{equation} H_{1}(t)=U^{\dagger }(t)H_{1,0}U(t). \label{2s} \end{equation} As usual, the Berry's phase is given by the loop integral of the Berry connection\cite{wilczek}. We can use Stokes' theorem to convert this loop integral to an areal integral over a disk with the loop as its boundary. The advantage of doing so is that the Berry curvature, rather than the Berry connection, appears in the latter integral. This makes the integral involve only local, gauge invariant quantities: \begin{equation} S_{B}=\frac{i}{2}\int_{0}^{1}du\int dt~\epsilon ^{\mu \nu }\text{Tr}F_{\mu \nu }. \label{sb} \end{equation} In the above $F_{\mu \nu }=\left( \partial _{\mu }A_{\nu }-\partial _{\nu }A_{\mu }\right)$ and $A_{\mu }=-i\langle \Omega |\partial _{\mu }|\Omega \rangle$. Here $|\Omega (t,u)\rangle $ is the ground state of \begin{equation} H_{1}(t,u)=U^{\dagger }(t,u)H_{1,0}U(t,u).
\label{1s} \end{equation} In Eq.~(\ref{1s}), $U(t,u)$ is the extension of the $U(t)$ in Eq.~(\ref{2s}) to the disk. Because $\pi _{1}$(SO(5)/SO(3)$\times$SO(2))=0, we can always construct the extension so that $U(u=1,t)=U(t)$ and $U(u=0,t)=U_{0}$, where $U_{0}$ is a certain reference SO(5) element. Using first order perturbation theory for the wave functions, it is simple to show that \begin{equation} \text{Tr}F_{\mu \nu }=-i\sum_{k}\frac{\left\langle \Omega \left| \partial _{\mu }H\right| k\right\rangle \left\langle k\left| \partial _{\nu }H\right| \Omega \right\rangle -(\mu \leftrightarrow \nu )}{\left( E_{0}-E_{k}\right) ^{2}}. \label{f1} \end{equation} Here $k$ labels the excited states. Since all Hamiltonians described by Eq.~(\ref{1s}) are unitary conjugates of one another, they have the same eigenspectrum. Under that condition we have \begin{equation} \partial _{\mu }H=\sum_{k}(E_{k}-E_{0})(|\partial _{\mu }k\rangle \langle k|+|k\rangle \langle \partial _{\mu }k|). \label{pmh} \end{equation} In writing down the above equation we have shifted the zero of energy so that $E_0\to 0$. Substituting Eq.~(\ref{pmh}) into Eq.~(\ref{f1}) and using the fact that $\langle \Omega|k\rangle=\langle k|\Omega\rangle=0$, we find \begin{equation} \mathrm{Tr} F_{\mu\nu} =-i\mathrm{Tr}(Q[\partial_\mu Q,\partial_\nu Q]), \label{f2} \end{equation} where $Q$ is the ground state projection operator \begin{equation} Q(t,u)=|\Omega \rangle \langle \Omega |=U(t,u)PU^{\dagger }(t,u), \label{q} \end{equation} where $P=|0\rangle \langle 0|$ is the ground state projector of $H_{1,0}$. Substituting Eq.~(\ref{f2}) into Eq.~(\ref{sb}) we obtain \begin{equation} S_{B}=\int_{0}^{1}du\int dt\text{Tr} (Q[\partial _{u }Q,\partial _{t}Q]) . \label{sb1} \end{equation} Eq.~(\ref{sb1}) actually applies to any target space.
For example, in the case of SO(3)/SO(2), we have \begin{equation*} U(t,u)=\left( \begin{array}{cc} z_{1} & z_{2} \\ -\bar{z}_{2} & \bar{z}_{1}\end{array}\right) \quad \text{and}\quad P=\left( \begin{array}{cc} 1 & 0 \\ 0 & 0\end{array}\right) , \end{equation*} where $z_{1,2}(t,u)$ satisfy \begin{equation*} \left( \bar{z}_{1},\bar{z}_{2}\right) \cdot \vec{\sigma }\cdot \left( \begin{array}{c} z_{1} \\ z_{2}\end{array}\right) =\hat{n}(t,u). \end{equation*} Substituting the above two expressions into Eq.~(\ref{q}) and Eq.~(\ref{sb1}), we obtain \begin{equation} S_{B}=\frac{i}{2}\int_{0}^{1}du\int dt\left( \hat{n}\cdot \partial _{u}\hat{n}\times \partial _{t}\hat{n}\right) , \end{equation} which is the well known expression for the Berry's phase of a spin-1/2\cite{haldane}. Because the dimension of our target space is eight, there is more than one disk having the loop in question as its boundary. Therefore, we need to ask whether Eq.~(\ref{sb1}) yields the same answer for the Berry phase when different extensions $U(t)\rightarrow U(t,u)$ are used. The difference in the Berry phase between two different disk extensions can be calculated by integrating the Berry curvature over the closed two dimensional surface formed by joining the two disks at their common boundary. The resulting closed surface has the topology of a 2-sphere. Because the second homotopy group of our target space is $\mathbb{Z}$, all 2-spheres in the target space are topologically multiples of a basic sphere. Hence all we need to check is whether the Berry curvature integral is an integer multiple of $2\pi $ when $t$ and $u$ in Eq.~(\ref{sb1}) parameterize the basic 2-sphere\cite{witten}. In the following, we perform such a calculation.
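Before carrying out that calculation for SO(5), it is a useful sanity check to verify the spin-1/2 reduction above numerically: for $Q=(1+\hat{n}\cdot \vec{\sigma })/2$ one has $\mathrm{Tr}Q[\partial _{u}Q,\partial _{t}Q]=\frac{i}{2}\,\hat{n}\cdot \partial _{u}\hat{n}\times \partial _{t}\hat{n}$, which we test by finite differences (our illustrative code; the parameterization of $\hat{n}$ is arbitrary):

```python
import numpy as np

sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]], dtype=complex)]

def n_hat(u, t):
    """An arbitrary smooth unit-vector field (illustrative choice)."""
    th, ph = np.pi * u, 2 * np.pi * t
    return np.array([np.sin(th) * np.cos(ph), np.sin(th) * np.sin(ph), np.cos(th)])

def Q(u, t):
    """Ground-state projector (1 + n.sigma)/2 of a spin-1/2 along n_hat."""
    return (np.eye(2) + sum(ni * si for ni, si in zip(n_hat(u, t), sig))) / 2

def lhs(u, t, h=1e-5):
    """Tr Q [dQ/du, dQ/dt] by central finite differences."""
    dQu = (Q(u + h, t) - Q(u - h, t)) / (2 * h)
    dQt = (Q(u, t + h) - Q(u, t - h)) / (2 * h)
    return np.trace(Q(u, t) @ (dQu @ dQt - dQt @ dQu))

def rhs(u, t, h=1e-5):
    """(i/2) n . (dn/du x dn/dt), the spin-1/2 Berry curvature density."""
    dnu = (n_hat(u + h, t) - n_hat(u - h, t)) / (2 * h)
    dnt = (n_hat(u, t + h) - n_hat(u, t - h)) / (2 * h)
    return 0.5j * np.dot(n_hat(u, t), np.cross(dnu, dnt))
```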
For the vector representation, we choose $H_{1,0}=L_{12}$, and pick $U(t,u)$ so that \begin{equation} U^{\dagger }(t,u)H_{1,0}U(t,u)=\hat{w}(t,u)\cdot \vec{L}, \label{ex} \end{equation} where $\vec{L}=(L_{13},L_{23},L_{12})$, \begin{equation*} \hat{w}(t,u)=(\sin (\pi u)\cos \frac{2\pi t}{\beta },\sin (\pi u)\sin \frac{2\pi t}{\beta },\cos (\pi u)), \end{equation*} and $\beta $ is the period of the imaginary time. For the spinor representation, we take $H_{1,0}=(L_{12}-L_{34})/\sqrt{2}$ and $\vec{L}=\left( L_{13}+L_{24},L_{14}-L_{23},L_{12}-L_{34}\right) /\sqrt{2}$. Using Eq.~(\ref{sb1}), or equivalently Eqs.(\ref{sb}) and (\ref{f1}), we find that \begin{equation} S_{B,\mathrm{basic~sphere}}^{\mathrm{vector}}=4\pi ,\text{ \ \ }S_{B,\mathrm{basic~sphere}}^{\mathrm{spinor}}=2\pi . \label{dif} \end{equation} Thus the condition for the uniqueness of the Berry phase is satisfied. As we shall see, the difference between the vector and spinor Berry phases in Eq.~(\ref{dif}) serves to distinguish the vector and spinor SO(5) chains. \section{The lattice Berry's phase, gapful versus gapless, and the edge states} We now extend the above single-site Berry phase analysis to the one dimensional lattice. If the order parameter is perfectly ``antiferromagnetic'' at all times, the ground state of $H_{1}(t)$ is $|\Omega (t)\rangle$ on the even sublattice and its conjugate $|\bar{\Omega}(t)\rangle =R|\Omega (t)\rangle ^{\ast }$ on the odd sublattice, where $R$ is the operator that satisfies $RL_{ab}^{\ast }R^{-1}=-L_{ab}$. For the vector representation, $R$ is given by $R_{15}=-R_{24}=R_{33}=-R_{42}=R_{51}=-1$ and $R_{ij}=0$ otherwise. For the spinor representation, $R=-iI_{2\times 2}\otimes \sigma _{y}$. Using the above result, it is straightforward to show that $\langle \Omega |L_{ab}|\Omega \rangle =-\langle \bar{\Omega}|L_{ab}|\bar{\Omega}\rangle$, and $RU^{\ast }R^{-1}=U$ for all $U\in $ SO(5).
This, plus the invariance of the trace under matrix transposition, allows one to show that $\epsilon ^{\mu \nu }\text{Tr}\bar{Q}\partial _{\mu }\bar{Q}\partial _{\nu }\bar{Q}=-\epsilon ^{\mu \nu }\text{Tr}Q\partial _{\mu }Q\partial _{\nu }Q$. As a result, the Berry's phases associated with neighboring sites tend to cancel each other. Let $r$ label the center of mass position of sites $i$ and $i+1$ for odd $i$; the total lattice Berry's phase is then \begin{equation} S_{B}^{\mathrm{tot}}=\sum_{r}\sum_{\epsilon =\pm 1}(-1)^{(\epsilon -1)/2}\int_{0}^{1}du\int dt\mathrm{Tr}(Q_{r+\epsilon /2}[\partial _{u}Q_{r+\epsilon /2},\partial _{t}Q_{r+\epsilon /2}]), \label{gk} \end{equation} where $Q$ is a smooth function of the spatial coordinate. Under such a condition, Eq.~(\ref{gk}) has a continuum limit \begin{equation} S_{B}^{\mathrm{tot}}=\frac{1}{2}\int dx\int dt\text{Tr}Q[\partial _{t}Q,\partial _{x}Q], \label{sb3} \end{equation} where the factor of 1/2 arises from the density of odd lattice sites. Under open boundary conditions, Eq.~(\ref{sb3}) becomes \begin{eqnarray} S_{B}^{\mathrm{tot}} &=&{\frac{1}{2}}\int dxdt\mathrm{Tr}Q[\partial _{t}Q,\partial _{x}Q] \notag \\ &&+{\frac{1}{2}}\int_{0}^{1}du\int dt\left\{ \mathrm{Tr}[Q[\partial _{u}Q,\partial _{t}Q]]_{R}-\mathrm{Tr}[Q[\partial _{u}Q,\partial _{t}Q]]_{L}\right\} , \label{tbe} \end{eqnarray} where the subscripts ``R'' and ``L'' label the right and the left ends. This topological term, together with the stiffness term from the energetics, constitutes the NL$\sigma$ model for the SO(5) spin chain. Eqs.~(\ref{dif}) and (\ref{sb3}) have important implications. The fact that the mapping from the space-time to the target space is classified by integer homotopy classes implies that the space-time order parameter configurations can be grouped into different topological sectors labeled by an integer topological invariant.
This is similar to the SO(3) NL$\sigma$ model, where the topological invariant, the Pontryagin index\cite{pont}, is the number of times the order parameter configuration covers the target space $S^{2}$. In our case there is an analogous integer topological index, which we will refer to as the Pontryagin index as well. Eq.~(\ref{dif}) implies that for the vector SO(5) spin chain the Berry phases associated with order parameter configurations having different Pontryagin indices are all the same, because $\exp (i4\pi /2\times \mathrm{integer})=+1$. Given the facts that (i) the topological term has no effect (hence the NL$\sigma$ model has only the stiffness terms), and (ii) the target space dimension is high, it is natural to expect that the vector SO(5) spin chain should have a quantum disordered, i.e., translational invariant gapped, phase. For the spinor SO(5) chain, however, the order parameter configurations with even Pontryagin index have the Berry phase $\exp (i2\pi /2\times \mathrm{even~integer})=+1$, while those with odd Pontryagin index have Berry phase $\exp (i2\pi /2\times \mathrm{odd~integer})=-1$. This result is exactly analogous to the Berry's phase in the spin-1/2 representation of the SO(3) antiferromagnetic Heisenberg chain. In view of the result of Ref.\cite{xc}, we conclude that the above non-trivial Berry's phase also implies the lack of an energy gap as long as the translation symmetry is unbroken. Now we comment on the edge states of the SO(5) ``valence bond solid state'' in Ref.\cite{zx}. By tuning the ratio of $J_{1}$ and $J_{2}$ in Eq.(\ref{h5}), Tu et al.\ were able to show that a short-range entangled, translational invariant matrix product state is the exact ground state. In addition, under open boundary conditions the ground state becomes $4\times 4=16$ fold degenerate.
According to Eq.~(\ref{tbe}), the boundary of a vector spin chain should exhibit the following Berry phase \begin{equation} {\frac{1}{2}}\int_{0}^{1}du\int dt\,\mathrm{Tr}Q[\partial _{u}Q,\partial _{t}Q]. \label{sf} \end{equation} When $Q(t,u)$ is a unit Pontryagin index order parameter configuration, the value of Eq.~(\ref{sf}) is ${\frac{1}{2}}\times 4\pi =2\pi $. This is consistent with the spinor Berry phase in Eq.~(\ref{dif}). Therefore, the edge state of the vector SO(5) spin chain carries the spinor representation. Because the latter is 4-dimensional, each end of the chain independently yields a 4-fold degeneracy of the ground state, resulting in a total $4\times 4=16$-fold degenerate ground state for the open chain. Before closing, a technical remark is in order. The reader might wonder what happens if the ratio between different components of $m_{ab}$ fluctuates away from the optimal value. When that happens the single site spectrum becomes $\{-\Delta _{1},-\Delta _{2},0,\Delta _{2},\Delta _{1}\}$ and $\{-\Delta _{1},-\Delta _{2},\Delta _{2},\Delta _{1}\}$ for the vector and spinor representations, respectively. In this case, the SO(5) symmetry is broken down to SO(2)$\times $SO(2). The second homotopy group of $\frac{SO(5)}{SO(2)\times SO(2)}$ is $\mathbb{Z}\oplus \mathbb{Z}$ rather than $\mathbb{Z}$. In other words, the image of the space-time in the target space is topologically a multiple of two basic spheres. As $\Delta _{2}\rightarrow 0$, one of these spheres shrinks to a point. We have checked that so long as $\Delta _{2}$ is small, i.e., when the ratio between the components of $m_{ab}$ does not deviate from the optimal value too drastically, the Berry phase is only sensitive to the Pontryagin index of the dominant (large) sphere. Hence all the results discussed earlier remain unchanged.
\section{Conclusion} We have studied the Berry phase of the antiferromagnetic SO(5) spin chain, and shown the existence of a topological term in the non-linear $\sigma$ model description of the system. The quantum phase factor associated with this topological term differentiates the vector (linear) and spinor (projective) representations. We argue that this leads to a spectral difference as long as the translation and SO(5) symmetries are unbroken. More specifically, the vector spin chain can have a totally symmetric ground state while maintaining an energy gap. The spinor chain, on the other hand, must be gapless if there is no symmetry breaking. Under the open boundary condition, we find that the boundary Berry phase of the vector spin chain is consistent with the edge degeneracy derived from an exactly solvable model. The present result can be straightforwardly generalized to other irreducible representations, leading to two classes of SO(5) spin chains: in one class the site representation is linear, and in the other the site representation is projective. While the first class can have a totally symmetric ground state and maintain a gapped spectrum, the second class must have a gapless spectrum if there is no symmetry breaking. Finally, we would like to make contact with relevant works in the literature. Eq.~(\ref{sb1}) appeared in Ref.~\cite{wiegmann} in a discussion of a super-symmetric version of the Hubbard model. Of course, the target space there is entirely different. In Ref.~\cite{aff} Affleck discussed the topological term of SU(N) spin chains. In that case the target space is $U(N)/U(m)\times U(N-m)$, which also has $\mathbb{Z}$ as its second homotopy group. In Ref.~\cite{read1}, Read and Sachdev discussed the effects of Haldane's (SU(2) or SO(3)) Berry phase term in two space dimensions. Subsequently, they generalized this to the SU(N) group in Ref.~\cite{read}. The essential difference between the SU(N) and the SO(5) spin chains is their target spaces.
Different target spaces have distinct topology. To our knowledge, this is the first time it has been pointed out that the target space of the SO(5) spin chain also has $\mathbb{Z}$ as its second homotopy group, and that the corresponding Berry phase term for the NL$\sigma$ model description has been constructed. Of course, analogous to Refs.~\cite{read1,read}, one can ask what the effect of this topological term is in two space dimensions. We leave this question for future studies. \section{Acknowledgement} We thank Wu-Yi Hsiang, Xiao-Gang Wen, Zheng-Cheng Gu, Ari Turner, Xie Chen, and Geoffrey Lee for helpful discussions. DHL is supported by DOE grant number DE-AC02-05CH11231 and thanks the MIT condensed matter theory group for its hospitality during the last stage of this work. GMZ and TX acknowledge the support of NSF-China and the National Program for Basic Research of MOST, China.
\section{Introduction} \label{intro} Observables in QCD are functions of $\alpha=g^2/4\pi$ and $1/N_f$. An inspection of the 5-loop $\beta$ function~\cite{Baikov:2016tgj} (see also~\cite{Luthe:2016ima}), the 5-loop $\gamma_m$~\cite{Baikov:2014qja}, and the 3-loop $\gamma_\pm$~\cite{Gracey:2012gx} reveals that these RG functions may be re-organized in the ${\overline{\rm MS}}$ scheme as an expansion in $\alpha$ and $\sim N_f/10$ with coefficients of order unity or smaller. From this empirical observation we may conclude that ordinary perturbation theory should be reliable when $\alpha$ is sufficiently smaller than unity, whereas a large $N_f$ calculation should provide a reasonably accurate estimate of the exact, non-perturbative result for $N_f\gtrsim10$. Nevertheless, investigations of QCD in the limit of a large number $N_f$ of massless flavors are quite useful in practice. At the very least, they serve as non-trivial consistency checks for high-order calculations in ordinary perturbation theory. Furthermore, they represent an interesting laboratory for the study of dualities, see e.g.~\cite{Gracey:1996he} and references therein. More importantly, because even the first non-trivial order effectively re-sums an entire $\alpha N_f$ series, large $N_f$ calculations offer a unique, systematically improvable probe of the non-perturbative regime of gauge theories. Therefore, despite the fact that for realistic numbers of massless flavors this approach is not fully justifiable,~\footnote{The authors of~\cite{Broadhurst:1994se} suggested the replacement $N_f\to-{3}\beta_0/2$, with $\beta_0=11-2N_f/3$ the coefficient of the QCD one-loop beta function, as a way to improve large $N_f$ computations at smaller $N_f$. This ``naive non-abelianization'' effectively includes additional gluonic loops; however, it is an unsystematic (and gauge-dependent) truncation of the series and it is hard to judge its reliability.
A more convincing way to quantify the impact of gluonic loops would be to calculate the subleading $1/N_f$ corrections.} one can still hope that some quantity of physical interest is approximated reasonably well by the first few orders in $1/N_f$ even when $N_f\lesssim10$. Of course the QCD beta function cannot be such an example, since it changes abruptly with $N_c/N_f$. Other RG functions, such as the anomalous dimensions of the quark bilinear or of baryons, might be more promising candidates. The analysis of large $N_f$ QED was initiated in \cite{Espriu:1982pb,PalanquesMestre:1983zy} with the calculation of the anomalous dimension of the mass operator. The leading order result straightforwardly generalizes to large $N_f$ QCD. The beta function at next-to-leading order was calculated for QED in~\cite{PalanquesMestre:1983zy} and for QCD in~\cite{Gracey:1996he}. Here we wish to derive an intrinsically non-abelian quantity that has no counterpart in QED: the anomalous dimension of baryons. This is currently known up to 3 loops within standard perturbation theory~\cite{Gracey:2012gx}. \section{Spin-1/2 operators in QCD} We adopt a Weyl spinor notation, where all fermions are left handed: $\psi$ is a ${\bf3}$ of $SU(3)$ color and a fundamental of $SU(N_f)_L$, whereas $\widetilde{\psi}$ is a ${\overline{\bf3}}$ of $SU(3)$ color and an anti-fundamental of $SU(N_f)_R$. Using Fierz transformations it is easy to show that (in exactly $d=4$ dimensions) spin-1/2 baryons appear in two Lorentz structures: \begin{eqnarray}\label{basis} [B_+]^{ijk}_\alpha=\psi^{\{i}_\alpha(\psi^{j\}}\psi^k)~~~~~~~~~~~~[B_-]^{i\tilde j\tilde k}_\alpha=\psi^i_\alpha(\widetilde{\psi}^{\tilde j}\widetilde{\psi}^{\tilde k})^*, \end{eqnarray} plus their conjugates.
In this notation $(\psi_1\psi_2)=\psi_1^t\epsilon\psi_2$ --- with $\alpha, \beta, \cdots$ Lorentz indices and $\epsilon$ the fully antisymmetric $2\times2$ matrix --- the contraction of color indices is understood, and $i,j,k,\tilde i, \tilde j, \tilde k$ are flavor indices. The latter will often be suppressed for brevity, unless necessary to avoid ambiguities. We defined $\psi^{\{i}\psi^{j\}}\equiv\psi^i\psi^j+\psi^j\psi^i$. The operators (\ref{basis}) are in different representations of the flavor group $SU(N_f)_L\times SU(N_f)_R$. $B_+$ transforms as the $\frac{1}{3}{{\bf N_f}({\bf N_f}^2-1)}$-dimensional representation of $SU(N_f)_L$, which has mixed symmetry, while $B_-$ transforms as a $({\bf N_f},[{\bf N_f}\otimes {\bf N_f}]_{\rm antisym})$. In the chiral limit, and still in $d=4$ dimensions, $B_{\pm}$ do not mix under the RG within any mass-independent renormalization scheme: \begin{eqnarray}\label{RG} \mu\frac{d}{d\mu} \left(\begin{array}{c} B^r_+ \\ B^r_- \end{array}\right) =- \left(\begin{array}{cc} \gamma_+ & \\ & \gamma_- \end{array}\right) \left(\begin{array}{c} B^r_+ \\ B^r_- \end{array}\right). \end{eqnarray} Here and in the following $B_\pm$ ($B_\pm^r$) denote the bare (renormalized) composite operators. By Parity conservation, similar relations hold for $\widetilde B_+=\widetilde{\psi}(\widetilde{\psi}\widetilde{\psi})$ and $\widetilde B_-=\widetilde{\psi}(\psi\psi)^*$, which have anomalous dimensions $\gamma_+,\gamma_-$, respectively.~\footnote{In the literature a different operator basis has often been adopted. A connection with the latter is straightforwardly obtained by introducing a 4-component Dirac fermion $\Psi=(\Psi_L, \Psi_R)^t$ with $\Psi_L=\psi$, $\Psi_R=\epsilon\widetilde{\psi}^*$, and defining $B_1\equiv\Psi(\Psi^tC\Psi)$, $B_2=\gamma^5\Psi(\Psi^tC\gamma^5\Psi)$, where $C=i\gamma^0\gamma^2$. The latter basis is commonly used in lattice QCD simulations, since $B_{1,2}$ have the right Parity properties to interpolate a nucleon.
($B_1$ vanishes in the non-relativistic limit and therefore $B_2$ is usually preferred.) From the relations $(B_2\pm B_1)_L=-2B_\pm$, $(B_2\pm B_1)_R=+2\epsilon\widetilde B_\pm$, and (\ref{RG}), we see that: \begin{eqnarray}\label{constr} \mu\frac{d}{d\mu} \left(\begin{array}{c} B^r_1 \\ B^r_2 \end{array}\right) =-\frac{1}{2} \left(\begin{array}{cc} \gamma_++\gamma_- & \gamma_+-\gamma_- \\ \gamma_+-\gamma_- & \gamma_++\gamma_- \end{array}\right) \left(\begin{array}{c} B^r_1 \\ B^r_2 \end{array}\right). \end{eqnarray} Again, this expression is exact in the limit of unbroken chiral symmetry and Parity, in $d=4$, and for any mass-independent scheme. The constraint (\ref{constr}) is consistently satisfied by the 3-loop calculation of~\cite{Gracey:2012gx}.} Unfortunately, dimensional regularization (the mass-independent regularization scheme adopted here and in virtually all multi-loop calculations) violates the assumption $d=4$. This introduces a mixing with evanescent operators with Lorentz structures such as $\Gamma\psi(\psi\Gamma\psi)$ and modifies (\ref{RG}) starting at 2 loops, as we discuss in detail next. \subsection{Definition of the renormalization scheme} Diagrams are regulated via dimensional regularization with $d=4-\varepsilon$ throughout the paper. Furthermore, we assume that the 2-dimensional matrices $\bar\sigma^\mu, \sigma^\mu$ and the anti-symmetric tensor $\epsilon$ are defined in $d$ dimensions. We use the notation of \cite{Dreiner:2008tw}.
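As a quick cross-check of the flavor counting quoted above, the dimension of the mixed-symmetry $SU(N_f)$ multiplet of $B_+$ (Young diagram $[2,1]$) and of the $({\bf N_f},[{\bf N_f}\otimes{\bf N_f}]_{\rm antisym})$ of $B_-$ can be obtained from the hook content formula. A small sketch of ours (the diagram assignments are our bookkeeping of the representations stated in the text):

```python
from fractions import Fraction

def su_n_dim(partition, N):
    """Dimension of the SU(N) irrep labeled by a Young diagram,
    via the hook content formula: prod over boxes of (N + content)/hook."""
    dim = Fraction(1)
    for i, row in enumerate(partition):
        for j in range(row):
            content = j - i
            arm = row - j - 1                               # boxes to the right
            leg = sum(1 for r in partition[i + 1:] if r > j)  # boxes below
            dim *= Fraction(N + content, arm + leg + 1)
    return int(dim)

for N in (2, 3, 5, 8):
    # B_+ : mixed-symmetry diagram [2,1]  ->  N(N^2-1)/3
    assert su_n_dim([2, 1], N) == N * (N**2 - 1) // 3
    # B_- : fundamental times antisymmetric [1,1]  ->  N * N(N-1)/2
    assert su_n_dim([1], N) * su_n_dim([1, 1], N) == N**2 * (N - 1) // 2

# For N_f = 3 the B_+ multiplet is the familiar baryon octet:
assert su_n_dim([2, 1], 3) == 8
```

For $N_f=3$ the $\frac{1}{3}N_f(N_f^2-1)=8$ is just the octet, as expected.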
Now, consider the following correlators of the bare operators: \begin{eqnarray}\label{short} \langle B_+\rangle_{\dot\alpha\dot\beta\dot\gamma\delta}&\equiv&\langle{\psi^\dagger(p_1)}^i_{\dot\alpha a}{\psi^\dagger(p_2)}^{j}_{\dot\beta b}{\psi^\dagger(p_3)}^{k}_{\dot\gamma c}[B_+(-p_1-p_2-p_3)]^{i j k}_{\delta}\rangle\\\nonumber \langle B_-\rangle_{\dot\alpha\beta\gamma\delta}&\equiv&\langle{\psi^\dagger(p_1)}^i_{\dot\alpha a}{\widetilde\psi(p_2)}^{\tilde j}_{\beta b}{\widetilde\psi(p_3)}^{\tilde k}_{\gamma c}[B_-(-p_1-p_2-p_3)]^{i\tilde j\tilde k}_{\delta}\rangle, \end{eqnarray} where the repeated flavor indices are not summed. At leading order in $1/N_f$ we find: \begin{eqnarray}\label{loop} \langle B\rangle^{(1)}&=&{\cal{D}}_{00}(\varepsilon,p)\langle B\rangle^{(0)}+{\cal{D}}_{01}(\varepsilon,p)\langle T\rangle^{(0)}. \end{eqnarray} In particular, the divergent part of $\langle B\rangle^{(1)}$ contains terms proportional to the tree correlator $\langle B\rangle^{(0)}\propto1\otimes1\otimes1$, as well as to non-trivial spinor structures like $\langle T\rangle^{(0)}\propto1\otimes\Gamma_{\mu\nu}\otimes\Gamma^{\mu\nu}+\Gamma_{\mu\nu}\otimes1\otimes\Gamma^{\mu\nu}+\Gamma_{\mu\nu}\otimes\Gamma^{\mu\nu}\otimes1$. The latter reduce to $1\otimes1\otimes1$ in $d=4$ dimensions, that is $T\to B$ as $\varepsilon\to0$. However, for $\varepsilon\neq0$ the Gamma matrices are not complete, $T$ is independent of $B$, and $B$ is not multiplicatively renormalized. In order to have a set of operators closed under the RG we must extend (\ref{RG}) by introducing $T$, or more conveniently a linear combination $E_1$ of $T,B$ that vanishes as $\varepsilon\to0$, i.e. an evanescent operator. The latter will in turn mix with other evanescent operators involving higher numbers of Gamma matrices, and so on. The bottom line is that in dimensional regularization $B$ mixes with an infinite number of evanescent operators $E_{a=1,2,3,\cdots}$, invalidating (\ref{RG}).
This complication is well appreciated in the context of 4-fermion operators; see e.g.~\cite{Bondi:1989nq,Dugan:1990df} for earlier literature, and~\cite{Herrlich:1994kh} for a lucid discussion. Denoting the complete operator basis by $O_A$ ($=B,E_1,E_2,E_3,\cdots$), the bare and renormalized operators are related via \begin{eqnarray} O_A=Z_{AB}O^r_B, \end{eqnarray} with $Z_{AB}=Z_{AB}^{\rm conn}Z_\psi^{3/2}$. By construction, the bare evanescent operators have vanishing tree-level matrix elements. On the other hand, the renormalized operators $E_a^r$ may contribute at loop level, though their matrix elements are not independent. In fact, in exactly 4 dimensions there exist finite functions $f_a$ of the renormalized coupling such that $\langle E^r_a\rangle=f_a\langle B^r\rangle$, see e.g.~\cite{Kraenkl:2011qb}. The functions $f_a$ are scheme-dependent. Fortunately, one can always choose a prescription where $f_a=0$~\cite{Dugan:1990df}. Such a scheme is especially useful when matching with a more fundamental theory at some high scale. The authors of~\cite{Dugan:1990df}\cite{Herrlich:1994kh} also found that $\gamma_{a0}=0$ in this case, so the running of the phenomenologically relevant parameters is simply controlled by the $00$ component of an infinite-dimensional anomalous dimension matrix $\gamma=Z^{-1}\mu dZ/d\mu$, i.e. $\gamma_{00}$, which itself receives contributions (starting at the second non-trivial order) from loops involving evanescent operators. With this qualification (\ref{RG}) is correct. In a generic scheme with $f_a\neq0$ the evanescent operators also contribute to the matching. Moreover, the scaling of Green's functions with an insertion of $B^r$ is controlled by~\cite{Bondi:1989nq}\cite{Dugan:1990df} $\gamma=\gamma_{00}+\gamma_{0a}f_a$. Therefore (\ref{RG}) can also be made sense of when $f_a\neq0$. None of this is relevant at leading order. Indeed, since $f_a$ first arises at ${\cal O}(1/N_f)$, we find $\gamma=\gamma_{00}+{\cal O}(1/N_f^2)$.
Irrespective of $f_a$ we can thus write: \begin{eqnarray}\label{this} \gamma^\pm &=&\mu\frac{d}{d\mu}\left(\delta Z^{\rm conn}_{00\pm}+\frac{3}{2}\delta Z_\psi\right)+{\cal O}(1/N^2_f), \end{eqnarray} where $\delta Z_{AB}=Z_{AB}-\delta_{AB}={\cal O}(1/N_f)$. At this order loops involving evanescent operators do not affect $\gamma_{00}$. Yet, there is an additional, more subtle way in which the evanescent operators impact physical processes, which holds at any order and for any $f_a$. Indeed, the very definition of the bare $E_a$ is not unique, and the choice we make ultimately affects the matrix elements of the renormalized physical operators $B_\pm^r$~\cite{Herrlich:1994kh}. In fact, in complete generality, we can {\emph{define}} \begin{eqnarray}\label{bareE} E_1=T-s(\varepsilon)B, \end{eqnarray} where $s(\varepsilon)=1+\sum_{n=1}s_n\varepsilon^n$ is an arbitrary function, and still satisfy the constraint $T\to B$ (as $\varepsilon\to0$). As a result, $B^r$ also depends on $s$ in general. Employing a minimal subtraction scheme, we go back to (\ref{loop}) and take: \begin{eqnarray}\label{Br} \langle B_\pm^r\rangle={\rm finite}(1+{\cal{D}}_{00}+s_\pm(\varepsilon){\cal{D}}_{01})\langle B\rangle+{\cal O}(1/N_f). \end{eqnarray} Because in general both $B^r$ and $\gamma^\pm$ depend on $s$, a renormalization scheme is uniquely defined only once $s_\pm$ is given. This residual dependence on $s_\pm$ is discussed in section~\ref{sec:residual}. \subsection{Anomalous dimension of the physical operators $B_\pm^r$} Having introduced our renormalization prescription (\ref{Br}) we can now derive an explicit expression for (\ref{this}). At leading order in $1/N_f$, the diagrams that contribute to $\gamma^\pm$ are the same as in a 1-loop analysis, with the gluon propagator re-summing all fermion bubbles, see (\ref{AA}).
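The only color structure entering these baryon diagrams is the $SU(3)$ identity $T^{A}_{a'a}[T^{A}_{bb'}]^{*}\epsilon^{a'b'c}=-\frac{2}{3}\epsilon^{abc}$ quoted below for the fundamental representation. A quick numerical sketch of ours, using the standard Gell-Mann matrices with $T^A=\lambda^A/2$:

```python
import numpy as np

# The eight Gell-Mann matrices; T^A = lambda^A / 2 generate the SU(3) fundamental.
l = np.zeros((8, 3, 3), dtype=complex)
l[0][0, 1] = l[0][1, 0] = 1
l[1][0, 1], l[1][1, 0] = -1j, 1j
l[2][0, 0], l[2][1, 1] = 1, -1
l[3][0, 2] = l[3][2, 0] = 1
l[4][0, 2], l[4][2, 0] = -1j, 1j
l[5][1, 2] = l[5][2, 1] = 1
l[6][1, 2], l[6][2, 1] = -1j, 1j
l[7] = np.diag([1, 1, -2]) / np.sqrt(3)
T = l / 2

eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[j, i, k] = 1, -1   # even / odd permutations

# S[a,b,c] = sum_A T^A_{a'a} [T^A_{bb'}]^* eps^{a'b'c}
S = np.einsum("Axa,Aby,xyc->abc", T, T.conj(), eps)
assert np.allclose(S, -(2.0 / 3.0) * eps)
```

The same result follows analytically from the completeness relation $\sum_A T^A_{ij}T^A_{kl}=\frac{1}{2}(\delta_{il}\delta_{kj}-\frac{1}{3}\delta_{ij}\delta_{kl})$ together with the hermiticity of the $T^A$.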
Within dimensional regularization, and using the formulas (\ref{div}) and (\ref{div1}) from the Appendix, the divergent parts of the {\emph{connected}} diagrams contributing to $\langle B_\pm\rangle$ read (compare to (\ref{loop})): \begin{eqnarray}\label{divB} {\rm div}\langle B_-\rangle^{\rm conn}_{\dot\alpha\beta\gamma\delta}&=&i(P_1)_{\alpha\dot\alpha}i(P_2)_{\beta\dot\beta}i(P_3)_{\gamma\dot\gamma}\epsilon_{abc}~{\rm div}\left[T^-_{\alpha\dot\beta\dot\gamma\delta\dot\sigma\dot\rho}\sum_{n=0}^\infty I_n+\xi~ {T'}^-_{\alpha\dot\beta\dot\gamma\delta\dot\sigma\dot\rho} I_0\right]\epsilon^{\dot\sigma\dot\rho}\\\nonumber {\rm div}\langle B_+\rangle^{\rm conn}_{\dot\alpha\dot\beta\dot\gamma\delta}&=&i(P_1)_{\alpha\dot\alpha}i(P_2)_{\beta\dot\beta}i(P_3)_{\gamma\dot\gamma}\epsilon_{abc}~{\rm div}\left[T^+_{\alpha\beta\gamma\delta\sigma\rho}\sum_{n=0}^\infty I_n+\xi~ {T'}^+_{\alpha\beta\gamma\delta\sigma\rho} I_0+(\rho\leftrightarrow\sigma)\right]\epsilon^{\sigma\rho}, \end{eqnarray} where $P_i={\slashed{p_i}}/{p_i^2}$ is the tree-level fermion propagator and $\xi$ is the gauge parameter. The terms with $\rho\leftrightarrow\sigma$ in the second line arise from the symmetrization of the $i j$ indices in the definition of $B_+$ (see (\ref{basis})).
In the above expressions we introduced the tensorial structures \begin{eqnarray}\label{tens} T^-_{\alpha\dot\beta\dot\gamma\delta\dot\sigma\dot\rho}&=&-\left[\delta_{\delta\alpha}(\bar\Gamma^{\mu\nu})_{\dot\beta\dot\sigma}(\bar\Gamma^{\mu\nu})_{\dot\gamma\dot\rho}-(\Gamma^{\mu\nu})_{\delta\alpha}\delta_{\dot\beta\dot\sigma}(\bar\Gamma^{\mu\nu})_{\dot\gamma\dot\rho}-(\Gamma^{\mu\nu})_{\delta\alpha}(\bar\Gamma^{\mu\nu})_{\dot\beta\dot\sigma}\delta_{\dot\gamma\dot\rho}\right]\\\nonumber {T}'^-_{\alpha\dot\beta\dot\gamma\delta\dot\sigma\dot\rho}&=&+3\left[\delta_{\delta\alpha}\delta_{\dot\beta\dot\sigma}\delta_{\dot\gamma\dot\rho}\right]\\\nonumber T^+_{\alpha\beta\gamma\delta\sigma\rho}&=&-\left[\delta_{\delta\alpha}(\Gamma^{\mu\nu})_{\sigma\beta}(\Gamma_{\mu\nu})_{\rho\gamma}+(\Gamma^{\mu\nu})_{\delta\alpha}\delta_{\sigma\beta}(\Gamma_{\mu\nu})_{\rho\gamma}+(\Gamma^{\mu\nu})_{\delta\alpha}(\Gamma_{\mu\nu})_{\sigma\beta}\delta_{\rho\gamma}\right]\\\nonumber {T}'^+_{\alpha\beta\gamma\delta\sigma\rho}&=&+3\left[\delta_{\delta\alpha}\delta_{\sigma\beta}\delta_{\rho\gamma}\right], \end{eqnarray} where the $d$-dimensional anti-symmetric tensors $\Gamma^{\mu\nu}, \bar\Gamma^{\mu\nu}$ are defined via $\sigma^\mu\bar\sigma^\nu=g^{\mu\nu}-2i\Gamma^{\mu\nu}$ and $\bar\sigma^\mu\sigma^\nu=g^{\mu\nu}-2i\bar\Gamma^{\mu\nu}$, whereas \begin{eqnarray}\label{In} I_n&=&-\frac{2}{3}ig^2\frac{4}{d}\int\frac{{\rm d}^d\ell}{(2\pi)^d}\frac{\ell^4}{(\ell^2-\Delta)^4}\left[\Pi(\ell)\right]^n\\\nonumber &=&-\frac{1}{N_f}\left(-\frac{\lambda}{\varepsilon Z_A}\right)^{n+1}\frac{\overline\Pi^n}{n+1}\left(\frac{\mu^2}{\Delta}\right)^{(n+1)\frac{\varepsilon}{2}} \frac{\left(1-\frac{\varepsilon}{6}\right)\Gamma\left(1+(n+1)\frac{\varepsilon}{2}\right) }{\left(1+n\frac{\varepsilon}{2}\right)\left(1+n\frac{\varepsilon}{4}\right)\left(1+n\frac{\varepsilon}{6}\right)\Gamma\left(1+n\frac{\varepsilon}{2}\right)}.
\end{eqnarray} The factor of $2/3$ is due to the group theory identity $T^A_{a'a}[T^A_{bb'}]^*\epsilon^{a'b'c}=-\frac{2}{3}\epsilon^{abc}$, valid for fermions in the fundamental representation ($T_R=1/2$). The quantity $\Delta$ depends on Lorentz-scalar combinations of the three momenta $p_{1,2,3}$ and therefore on the corresponding Feynman diagram. However, the results of Appendix~\ref{key} imply that $\Delta$ does not appear in the divergent parts, as must be the case in our regularization scheme. Because our main focus is the evaluation of the anomalous dimensions, $\Delta$ can therefore be ignored. It is still worth emphasizing that, as opposed to $\gamma_\pm$, the momentum-dependent finite terms are generically affected by renormalon poles~\cite{Broadhurst:1992si}. As argued below (\ref{loop}), we are free to write $T^\pm$ as \begin{eqnarray}\label{mat} T^\pm=3\left[s_\pm(\varepsilon)\right] 1\otimes1\otimes1 +T^\pm_{E} \end{eqnarray} for any non-singular $s_\pm$ satisfying $s_\pm(0)=1$. Here $T^\pm$ are explicitly given in (\ref{tens}), whereas $T^\pm_{E}$ is the tensor structure associated with $E_1^\pm$. From our prescription (\ref{Br}) it follows that $\delta Z_{00}={\rm div}({\cal{D}}_{00}+s{\cal{D}}_{01})$, where ${\cal{D}}_{00}+s{\cal{D}}_{01}$ in general depends on the external momenta, whereas $\delta Z_{00}$ does not.
More explicitly: \begin{eqnarray} \delta Z^{\rm conn}_{00,\pm}&=&{\rm div}\left[3s_\pm\sum_{n=0}^\infty I_n+3\xi I_0\right]+{\cal O}(1/N^2_f)\\\nonumber &=&-\sum_{n=1}^\infty\left(\frac{\lambda}{\varepsilon}\right)^n\frac{1}{n}G^{B}_0(\varepsilon)s_\pm(\varepsilon)+\frac{3\xi^r\lambda}{N_f}\frac{1}{\varepsilon}+{\cal O}(1/N_f^2), \end{eqnarray} with \begin{eqnarray} G_0^{B}(\varepsilon)&=&-\frac{3}{N_f}\frac{\left(1-\varepsilon\right)\left(1-\frac{\varepsilon}{3}\right)\Gamma\left(1-\varepsilon\right)}{\left(1-\frac{\varepsilon}{2}\right)^2\left(1-\frac{\varepsilon}{4}\right)\Gamma\left(1+\frac{\varepsilon}{2}\right)\Gamma^3\left(1-\frac{\varepsilon}{2}\right)}. \end{eqnarray} In deriving $G_0^B$ we used the expression for $I_n$ given in (\ref{In}) and took advantage of (\ref{div}) and (\ref{div1}). Regarding the disconnected terms, note that the quark wave-function renormalization $Z_\psi$ can be calculated by replacing the tree gluon propagator with (\ref{AA}) in the familiar 1-loop diagram. As usual we define $Z_\psi=1+\delta Z_\psi=1+{\rm div}(\overline{\Sigma})+{\cal O}(1/N_f^2)$, where $\Sigma(q)\equiv\slashed{q}\overline{\Sigma}(q^2)$ is the 1-particle irreducible fermion 2-point function. Using the general formulas of Appendix~\ref{key} we find \begin{eqnarray}\label{psi} Z_\psi&=&1-\frac{2\xi^r\lambda}{N_f}\frac{1}{\varepsilon}-\sum_{n=1}^\infty\left(\frac{\lambda}{\varepsilon}\right)^n\frac{1}{n}G^\psi_0(\varepsilon)+{\cal O}(1/N_f^2),\\\nonumber G_0^\psi(\varepsilon)&=&-\frac{3}{2N_f}\varepsilon\frac{\left(1-\varepsilon\right)\left(1-\frac{\varepsilon}{3}\right)^2\Gamma\left(1-\varepsilon\right)}{\left(1-\frac{\varepsilon}{2}\right)^2\left(1-\frac{\varepsilon}{4}\right)\Gamma\left(1+\frac{\varepsilon}{2}\right)\Gamma^3\left(1-\frac{\varepsilon}{2}\right)}. \end{eqnarray} This quantity was computed previously by other authors (see for instance~\cite{Gracey:1993ua} for a calculation in the $\xi=0$ gauge).
Its determination does not present any subtlety associated with evanescent operators. Plugging the above expressions for $G_0^{\psi,B}$ into (\ref{this}) and using (\ref{anom}), we arrive at our main result: \begin{eqnarray}\label{result} \gamma_\pm(\lambda)&=&\lambda\left[G_0^{B}(\lambda)s_\pm(\lambda)+\frac{3}{2}G_0^\psi(\lambda)\right]\\\nonumber &=&-\frac{3}{N_f}\lambda\frac{\left(1-\lambda\right)\left(1-\frac{\lambda}{3}\right)^2\Gamma\left(1-\lambda\right)}{\left(1-\frac{\lambda}{2}\right)^2\left(1-\frac{\lambda}{4}\right)\Gamma\left(1+\frac{\lambda}{2}\right)\Gamma^3\left(1-\frac{\lambda}{2}\right)}\left(\frac{3}{4}\lambda+\frac{s_\pm(\lambda)}{1-\frac{\lambda}{3}}\right)+{\cal O}(1/N_f^2), \end{eqnarray} where $\lambda={\alpha_r N_f}/{3\pi}$ and $s_\pm(\lambda)=1+s_1^\pm\lambda+\cdots$. Consistently, $\gamma^\pm$ do not depend on $\xi^r$. This is because, in any mass-independent scheme (specifically the ${\overline{\rm MS}}$ scheme adopted here), the anomalous dimension of gauge-invariant operators cannot depend on the gauge parameter. In our case, once this is verified at 1 loop, the result trivially extends to all terms at first order in $1/N_f$ because the longitudinal component of the gluon propagator is not renormalized, see (\ref{AA}). \subsection{Two schemes for $s_\pm$} We would like to compare (\ref{result}) to results obtained using standard perturbation theory. We consider the 2- and 3-loop calculations performed in~\cite{Kraenkl:2011qb} and~\cite{Gracey:2012gx}. We find that these correspond, respectively, to \begin{eqnarray}\label{f} s_\pm=1~\cite{Kraenkl:2011qb},~~~~~~s_\pm=\frac{d(d-1)}{12}~\cite{Gracey:2012gx}. \end{eqnarray} The fact that $s_\pm=1$ in~\cite{Kraenkl:2011qb} follows immediately from~(\ref{tens}) and the subtraction scheme introduced in that reference.
Moreover, using (\ref{tens}) we see that the anomalous dimension of the general operator $O=\psi_\alpha\psi_\beta\psi_\gamma$ in that scheme is: \begin{eqnarray}\label{resultp} \gamma_O&=&-\frac{3}{N_f}\lambda\frac{\left(1-\lambda\right)\left(1-\frac{\lambda}{3}\right)^2\Gamma\left(1-\lambda\right)}{\left(1-\frac{\lambda}{2}\right)^2\left(1-\frac{\lambda}{4}\right)\Gamma\left(1+\frac{\lambda}{2}\right)\Gamma^3\left(1-\frac{\lambda}{2}\right)}\left(\frac{3}{4}\lambda~{\mathbb{C}_0}+\frac{1}{3(1-\frac{\lambda}{3})}~{\mathbb{C}_2}\right)\\\nonumber &+&{\cal O}(1/N_f^2), \end{eqnarray} where \begin{eqnarray}\nonumber {\mathbb{C}_0}&=&1\otimes1\otimes1\\\nonumber {\mathbb{C}_2}&=&1\otimes\Gamma_{\mu\nu}\otimes\Gamma^{\mu\nu}+\Gamma_{\mu\nu}\otimes1\otimes\Gamma^{\mu\nu}+\Gamma_{\mu\nu}\otimes\Gamma^{\mu\nu}\otimes1. \end{eqnarray} The 4-dimensional scalings of the 3-quark operators of spin $1/2, 3/2$ are obtained by replacing ${\mathbb{C}_2}=+3,-3$, respectively, and ${\mathbb{C}_0}=1$ in both of them (there is a factor of $1/2$ difference in our definition of $\Gamma^{\mu\nu}$ compared to~\cite{Kraenkl:2011qb}). An analogous expression holds for $\widetilde{O}=\psi_\alpha({\widetilde{\psi}}_{\dot\beta}{\widetilde{\psi}}_{\dot\gamma})^*$. Seeing why $s_\pm={d(d-1)}/{12}$ characterizes the formalism of~\cite{Gracey:2012gx} is a bit more involved. Rather than repeating the calculation using the operator basis defined there, we can get to~(\ref{f}) by observing that in the basis used by Ref.~\cite{Gracey:2012gx} only the spinor structure $1\otimes1\otimes1$ appears at order $1/N_f$, so the expression corresponding to (\ref{mat}) simply reads $T^\pm_{\rm ref}=3\left[s_\pm^{\rm ref}(\varepsilon)\right] 1\otimes1\otimes1$.
Obviously, if we contract the Lorentz indices in $\langle B_\pm\rangle$ with the external momenta as \begin{eqnarray}\label{contraction1} p_1^\mu\bar{\sigma}_\mu^{\dot\alpha\delta}\epsilon^{\dot\beta\dot\gamma} \langle B_+\rangle_{\dot\alpha\dot\beta\dot\gamma\delta},~~~~~~~~~p_1^\mu\bar{\sigma}_\mu^{\dot\alpha\delta}\epsilon^{\beta\gamma} \langle B_-\rangle_{\dot\alpha\beta\gamma\delta}, \end{eqnarray} the resulting expressions are multiplicatively renormalized as well. What is less obvious is that the contractions (\ref{contraction1}) are also multiplicatively renormalized within our formalism. This follows from the fact that traces of the Gamma matrices can be simplified in any dimension using $\left\{\sigma^\mu,\bar\sigma^\nu\right\}=2g^{\mu\nu}$. (As a matter of fact, all tensor structures in (\ref{tens}) become products of identities when contracted with ${\slashed{\bar p_1}}\otimes\epsilon$.) These contractions, being Lorentz-scalar combinations of the external momenta, do not depend on our basis of fermionic operators. Hence, making the identification ${contraction}(T^\pm)={contraction}(T^\pm_{\rm ref})$ we obtain $s_\pm^{\rm ref}=d(d-1)/12$, as anticipated. That this factor appears in the comparison between the two schemes was already stressed in~\cite{Gracey:2012gx}. Expanding $\gamma^\pm$ in powers of $a=\alpha_r/4\pi$ we get: \begin{eqnarray} \gamma_\pm^{\left(s_\pm=\frac{d(d-1)}{12}\right)}&=&-4 a - \frac{4}{9}N_f a^2 + \frac{260}{81} N_f^2 a^3+\frac{4}{81} \left(51- 48\,\zeta_3\right)N_f^3a^4+{\cal O}(N_f^4a^5),\\\nonumber \gamma_\pm^{\left(s_\pm=1\right)}&=&-4 a - \frac{32}{9}N_f a^2 + \frac{112}{27} N_f^2 a^3+\frac{64}{81} \left(5- 3\,\zeta_3\right)N_f^3a^4+{\cal O}(N_f^4a^5), \end{eqnarray} with $\zeta_3=1.20206\cdots$. The first terms agree with the calculations of~\cite{Kraenkl:2011qb} and~\cite{Gracey:2012gx}.~\footnote{For the latter, this is true up to an overall factor of $-2$ arising from a different definition of $\gamma_\pm$.
Our conventions conform with those adopted in the one-loop analysis of~\cite{Peskin:1979mn} and the two-loop calculations of~\cite{Pivovarov:1991nk} ($N_f=3$) and~\cite{Aoki:2006ib} (general $N_f$), as well as~\cite{Kraenkl:2011qb}.} This provides a non-trivial check of our result. The full expression (\ref{result}) is shown in figure~\ref{plot}. For generic $s_\pm$ it has a simple pole at $\lambda=5$, which also sets the radius of convergence of the $\lambda$ series. \begin{figure}[t] \begin{center} \includegraphics[width=9.5cm]{GammaPlot.pdf} \caption{Anomalous dimension of baryons in the ${\overline{\rm MS}}$ scheme at leading order in $1/N_f$ and all orders in the coupling $\lambda=\alpha_r N_f/3\pi$, for different definitions of the evanescent operators: $s_\pm=1$ (purple)~\cite{Kraenkl:2011qb}, $s_\pm=d(d-1)/12$ (red)~\cite{Gracey:2012gx}, and $s_\pm$ such that $\gamma^\pm$ equals the 1-loop result (orange). }\label{plot} \end{center} \end{figure} \subsection{Independence of observables of $s_\pm$} \label{sec:residual} The scheme-dependent function $s_\pm$ appearing in the definition of the bare evanescent operators affects the matrix elements of the renormalized physical operator, see (\ref{Br}), as well as its running, see (\ref{result}). Specifically, from (\ref{Br}) and recalling that $s=1+\sum_{n=1}^\infty s_n\varepsilon^n$, \begin{eqnarray}\label{Bra} \frac{d}{ds_n}\ln\langle B^r\rangle &=&\frac{d}{ds_n}{\rm finite}(s{\cal{D}}_{01})+{\cal O}(1/N_f)\\\nonumber &=&-\sum_{k=0}^\infty[G_0^B]_k\frac{\lambda^{n+k}}{n+k}+{\cal O}(1/N_f), \end{eqnarray} with $G_0^B(\varepsilon)=\sum_{k=0} [G_0^B]_k\varepsilon^k$. Because only the {\emph{divergent}} part of ${\cal{D}}_{01}$ contributes to this expression, (\ref{div1}) can be employed to show that $\frac{d}{ds_n}\ln\langle B^r\rangle$ is momentum independent.
A more convenient way to write the above relation is obtained by observing that \begin{eqnarray}\label{thanks} \sum_{k=0}^\infty[G_0^B]_k\frac{\lambda^{n+k}}{n+k}=\int^{\lambda}_{0} d\lambda~G_0^B(\lambda)\lambda^{n-1}=\frac{d}{ds_n}\int^{\lambda}_{0} d\lambda\frac{\gamma}{\lambda^2}. \end{eqnarray} Of course physical quantities do not depend on our renormalization scheme, so the dependence of $\langle B^r\rangle$ on $s_\pm$ must cancel against that of the corresponding Wilson coefficients. To show how this cancellation works to all orders in $\alpha N_f$, consider an effective field theory defined by the following effective ($d=4-\varepsilon$ dimensional) Hamiltonian: \begin{eqnarray} {\cal H}=C^rB^rJ+\sum_{a=1} C^r_aE^r_aJ_a. \end{eqnarray} Here $J,J_a$ are fermionic currents that excite our baryonic operators. We are interested in processes with a single insertion of ${\cal H}$. Within the prescription of~\cite{Dugan:1990df}, where $f_a=0$, $C^r$ is the only physical Wilson coefficient and is derived by matching $\langle{\cal H}\rangle=C^r\langle B^rJ\rangle$ onto some physical process. The RG-improved coefficient reads \begin{eqnarray}\label{CRG} C^r(\mu, s)=e^{\int^{\lambda(\mu)}_{\lambda(\mu')} d\lambda\frac{\gamma}{\lambda^2}}C^r(\mu',s), \end{eqnarray} where we used $\mu d\lambda/d\mu=\lambda^2$. By the RG invariance of $\langle{\cal H}\rangle$ we know that $\frac{d}{ds_n}\ln C^r(\mu, s)$ depends on $\mu$ only via $\lambda(\mu)$. An inspection of~(\ref{CRG}) reveals that this latter constraint holds as long as \begin{eqnarray}\label{schemeDep} \frac{d}{ds_n}\ln C^r(\mu, s)=\frac{d}{ds_n}\int^{\lambda}_{0} d\lambda\frac{\gamma}{\lambda^2}. \end{eqnarray} Thanks to (\ref{Bra}) and (\ref{thanks}) this is equivalent to $\frac{d}{ds_n}\langle {\cal H}\rangle=0$, as expected. Note that the right-hand sides of (\ref{Bra}) and (\ref{schemeDep}) are ${\cal O}(\lambda)$ because the scheme-dependence of $\gamma$ starts at 2 loops.
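The weak-coupling expansions quoted earlier can be recovered directly from the closed form (\ref{result}) by expanding in $a=\alpha_r/4\pi$ with $\lambda=4N_f a/3$; in the scheme of~\cite{Gracey:2012gx} one has $s_\pm(\lambda)=(1-\lambda/3)(1-\lambda/4)$, i.e. $d(d-1)/12$ with $d=4-\lambda$. A symbolic cross-check of ours (not part of the original derivation):

```python
import sympy as sp

a, Nf = sp.symbols("a N_f", positive=True)
lam = sp.Rational(4, 3) * Nf * a  # lambda = alpha_r N_f/(3 pi), with a = alpha_r/(4 pi)

def gamma_pm(s):
    # Closed form of gamma_{+-} at leading order in 1/N_f, for a given s(lambda).
    pref = (-3 / Nf) * lam * (1 - lam) * (1 - lam / 3) ** 2 * sp.gamma(1 - lam) / (
        (1 - lam / 2) ** 2 * (1 - lam / 4)
        * sp.gamma(1 + lam / 2) * sp.gamma(1 - lam / 2) ** 3
    )
    return pref * (sp.Rational(3, 4) * lam + s / (1 - lam / 3))

z3 = sp.zeta(3)
# s = (1 - lam/3)(1 - lam/4), i.e. d(d-1)/12 with d = 4 - lam
g1 = sp.series(gamma_pm((1 - lam / 3) * (1 - lam / 4)), a, 0, 5).removeO()
target1 = -4*a - sp.Rational(4, 9)*Nf*a**2 + sp.Rational(260, 81)*Nf**2*a**3 \
          + sp.Rational(4, 81)*(51 - 48*z3)*Nf**3*a**4
assert sp.simplify(sp.expand(g1 - target1)) == 0

# s = 1
g2 = sp.series(gamma_pm(sp.Integer(1)), a, 0, 5).removeO()
target2 = -4*a - sp.Rational(32, 9)*Nf*a**2 + sp.Rational(112, 27)*Nf**2*a**3 \
          + sp.Rational(64, 81)*(5 - 3*z3)*Nf**3*a**4
assert sp.simplify(sp.expand(g2 - target2)) == 0
```

The Euler-Mascheroni and $\zeta_2$ terms generated by the Gamma functions cancel among themselves, leaving rational coefficients plus $\zeta_3$, as in the series displayed above.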
We end with a comment on the conformal window of many-flavor QCD. It is well known that the QCD beta function has zeros at $N_f^c\leq N_f\leq16$, for some unknown number $N_f^c$. At these IR fixed points, critical exponents like $\gamma^\pm$ become physical, scheme-independent quantities. However, the scheme-independence of (\ref{result}) might seem surprising given that in $\overline{\rm MS}$ the renormalized coupling does not carry any information about the evanescent operators. This puzzle is resolved by observing that the defining condition $\beta(\lambda_*)=0$ requires cancellations between terms of different order in the $1/N_f$ expansion; it then becomes possible for terms of different order in $1/N_f$ to conspire so as to remove any $s_n$-dependence from $\gamma^\pm(\lambda_*)$. From this observation we learn two things. First, the next-to-leading terms in the large $N_f$ expansion of $\gamma^\pm$ must also depend on the $s_n$'s of~(\ref{result}). This is necessary for the above cancellation to take place. Second, the variation in $\gamma^\pm(\lambda_*)$ as a function of $s_\pm$, see figure~\ref{plot}, should give us a rough estimate of the size of the next-to-leading corrections within the conformal window. \section{Discussion} The anomalous dimension of the QCD spin-1/2 baryons, at ${\cal O}(1/N_f)$ and all orders in $\lambda=\alpha_r N_f/3\pi<5$, can be written in the minimal subtraction scheme as: \begin{eqnarray} \gamma_\pm(\lambda)=\frac{1}{2}\gamma_m(\lambda)\left(\frac{3}{4}\lambda+\frac{s_\pm(\lambda)}{1-\frac{\lambda}{3}}\right)+{\cal O}(1/N_f^2), \end{eqnarray} where $\gamma_m$ is the anomalous dimension of the mass operator, first calculated in \cite{PalanquesMestre:1983zy}. (In our notation the scaling dimension of the quark bilinear is $3+\gamma_m$ while that of the baryons is $4.5+\gamma_\pm$.) Here $s_\pm(\lambda)$ are scheme-dependent functions of the renormalized coupling satisfying $s_\pm(0)=1$.
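As a numerical illustration (ours, not from the paper), the onset of the scheme dependence can be checked directly from this formula. The function $\gamma_m$ is replaced by the placeholder $\gamma_m(\lambda)=\lambda$ — an assumption that only fixes the overall normalization and drops out of the comparison — while the two scheme choices discussed in the text, $s_\pm=1$ and $s_\pm=(1-\lambda/3)(1-\lambda/4)$, are compared:

```python
from math import pi

def gamma_pm(lam, s):
    # gamma_pm = (1/2) * gamma_m * (3*lam/4 + s(lam) / (1 - lam/3));
    # gamma_m(lam) = lam is a placeholder normalization (assumption):
    # the overall factor cancels in the scheme comparison below.
    gamma_m = lam
    return 0.5 * gamma_m * (0.75 * lam + s(lam) / (1.0 - lam / 3.0))

s_one    = lambda lam: 1.0                                # s_pm = 1
s_gracey = lambda lam: (1 - lam / 3.0) * (1 - lam / 4.0)  # s_pm = (1-lam/3)(1-lam/4)

# The two choices agree at one loop; their difference is O(lam^2),
# i.e. the scheme dependence sets in only at two loops:
for lam in (1e-1, 1e-2, 1e-3):
    print(lam, gamma_pm(lam, s_one) - gamma_pm(lam, s_gracey))

# One-loop estimate at the N_f = 13 fixed point alpha_* = 0.406 quoted below:
print(-0.406 / pi)  # ~ -0.129, inside the band -(0.12..0.15)
```

With the placeholder normalization the leading difference between the two schemes is $(7/24)\lambda^2$, confirming that the residual ambiguity is a two-loop effect.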
The residual scheme-dependence entailed by $s_\pm$ stems from an ambiguity in the definition of the evanescent operators introduced in dimensional regularization. The two schemes adopted in~\cite{Kraenkl:2011qb,Gracey:2012gx} correspond to $s_\pm=1, (1-\lambda/3)(1-\lambda/4)$, respectively. The functions $s_\pm$ also affect the matrix elements of the renormalized physical operators $B_\pm^r$. However, in any observable such dependence is exactly compensated by that of $\gamma^\pm$. We have shown explicitly how this cancellation works to all orders in $\alpha_rN_f$. The anomalous dimensions $\gamma^\pm$ have a phenomenological application in scenarios beyond the Standard Model, for example in the calculation of the proton decay rate~\cite{Abbott:1980zj}. They are also relevant in scenarios with exotic QCD-like dynamics in the conformal window~\cite{Vecchi:2015fma}. Unfortunately, compared to a 1-loop estimate, Eq.~(\ref{result}) does not provide any quantitatively trustworthy information about their actual size because of the intrinsic non-perturbative nature of the conformal window. Indeed, for $13\leq N_f\leq16$ perturbation theory is reliable and even the scheme-independent one-loop result $\gamma^\pm=-\alpha/\pi$ is accurate. For example, at the zero of the 5-loop beta function with $N_f=13$, $\alpha_*=0.406$, the values of $\gamma^\pm(\lambda_*)$ lie within $\gamma_*=-(0.12\div0.15)$, consistent with \cite{Kraenkl:2011qb,Gracey:2012gx}. When $N_f<13$ ordinary perturbation theory becomes unreliable, as evidenced by the fact that the IR fixed point found at 2, 3, and 4 loops disappears at 5 loops. Similarly, the residual scheme-dependence in (\ref{result}), argued to be of order $3/N_f$ in the previous section, quickly becomes uncomfortably large. \section*{Acknowledgments} We thank J. Gracey for comments and suggestions, as well as G. Ferretti and A. G. Grozin for discussions.
We acknowledge the Mainz Institute for Theoretical Physics (MITP) for its kind hospitality and support during the final stages of this work. LV is supported by the ERC Advanced Grant no.267985 ({\emph{DaMeSyFla}}) and the MIUR-FIRB grant RBFR12H1MW.
\section{Introduction} Mutualistic relationships, interspecific interactions that benefit both species, have been empirically studied for many years \cite{boucher:book:1985,hinton:PTENHS:1951,wilson:AmNat:1983,bronstein:QRB:1994,pierce:ARE:2002,kiers:Nature:2003,bshary:book:2003} and a considerable body of theory has been put forth to explain the evolution and maintenance of such relationships \cite{poulin:JTB:1995,doebeli:PNAS:1998,noe:book:2001,johnstone:ECL:2002,bergstrom:PNAS:2003,hoeksema:AmNat:2003,akcay:PRSB:2007,bshary:Nature:2008}. Many of these studies utilize evolutionary game theory for developing the models. The interactions in these models are usually pairwise. A representative of each species is chosen and the outcome of the interactions between these representatives determines the evolutionary dynamics within each of the two species. However, in many cases interactions between species cannot be reduced to such pairwise encounters \cite{stadler:book:2008}. For example, in the interaction between ants and aphids or butterfly larvae \cite{kunkel:BZB:1973,pierce:BES:1987,hoelldobler:book:1990} many ants tend to these soft-bodied creatures, providing them with shelter and protection from predation and parasites in exchange for honeydew, a rich source of food for the ants \cite{hill:OEC:1989,stadler:book:2008}. This is not a one-to-one interaction between a larva and an ant, but rather a one-to-many interaction from the perspective of the larva. In this manuscript we focus on this kind of -- possibly many-to-many -- interaction between two mutualistic species. To analyse how the benefits are shared between the two mutualistic species, we make use of evolutionary game theory \cite{weibull:book:1995,hofbauer:JMB:1996,hofbauer:book:1998}. Following Bergstrom \& Lachmann \cite{bergstrom:PNAS:2003}, we analyse the interactions between two species with a twist to the standard formulation.
The two interacting species can have different evolutionary rates. In coevolutionary arms races, where species are locked in antagonistic relationships such as host-parasite interactions, we observe the Red Queen effect \cite{vanValen:EvoTheo:1973}. In these cases a higher rate of evolution is beneficial, as it would help a parasite infect a host or a host escape from a parasite. However, in mutualistic relationships a slower rate of evolution was predicted to be more favourable \cite{bergstrom:PNAS:2003}. Hence, it is possible that the type of relationship between two species affects the evolution of the rate of evolution. As mentioned in \cite{bergstrom:PNAS:2003} and \cite{dawkins:PRSB:1979}, the different evolutionary rates could be due to a multitude of factors, ranging from different population sizes to differing amounts of segregating genetic variance. The implications of a difference in evolutionary rates are not limited to mutualism and antagonistic relationships. Epidemiological modeling and data have shown correlations between the rate of epidemic spreading and the evolutionary rate of the spreading pathogen \cite{berry:JV:2007,zehender:VIR:2008}. We first recall the mutualistic relationship between two species in a two player game, as proposed by Bergstrom and Lachmann. Then, we increase the number of players. Note that we do not increase the number of interacting species \cite{mack:OIKOS:2008,damore:Evolution:2011}, but rather the number of interacting individuals between two species (also see \cite{wang:JRSI:2011}). We include asymmetry in evolutionary rates and discuss its effect both in two player and in multiplayer games. We find that evolving at a slower rate, while beneficial in a two player setting, can be detrimental in a multiplayer game. \section{Model and Results} To set the stage we first recapitulate the two player model presented in \cite{bergstrom:PNAS:2003}, albeit with different notation.
They considered two species each with two strategies, \textit{``Generous"} and \textit{``Selfish"}. Each species is better off being \textit{``Selfish"} as long as the other one is \textit{``Generous"}. If both are \textit{``Selfish"}, then no mutualistic benefit is generated and hence in that case it is better to be \textit{``Generous"}. Under these assumptions, the payoff matrices describing the interactions of each type with one member of the other species are \begin{equation}\label{} \begin{array}{cccc} & & \multicolumn{2}{c}{\text{Species 2}}\\ \hline\hline & & G_2 & S_2 \\ \hline \multirow{2}{*}{Species 1} & G_1 & a_{G_1,G_2} & a_{G_1,S_2} \\ & S_1 & a_{S_1,G_2} & a_{S_1,S_2} \\ \hline\hline \end{array} \hspace{1cm} \begin{array}{cccc} & & \multicolumn{2}{c}{\text{Species 1}}\\ \hline\hline & & G_1 & S_1 \\ \hline \multirow{2}{*}{Species 2} & G_2 & a_{G_2,G_1} & a_{G_2,S_1} \\ & S_2 & a_{S_2,G_1} & a_{S_2,S_1} \nonumber \\ \hline\hline \end{array} \end{equation} where for example a generous member of species 1 obtains $a_{G_1,S_2}$ from an interaction with a selfish member of species 2, whereas the latter obtains $a_{S_2,G_1}$. In our case, we have $a_{G_i,G_j} < a_{S_i,G_j}$ and $a_{S_i,S_j} < a_{G_i,S_j}$ for $i,j=1,2$. This ordering of payoffs corresponds to a snowdrift game \cite{doebeli:EL:2005} (see Appendix) where there exists a point where the two strategies can coexist. This is because if both the players are \textit{``Generous"} then one can get away with being \textit{``Selfish"}, but if both are \textit{``Selfish"} then it actually pays to be \textit{``Generous"} and chip in. For a snowdrift game in a single species the coexistence point is stable, i.e. deviations from this point bring the system back to the equilibrium, because the deviators would always be disfavoured by selection.
However, for a snowdrift game between two species, this coexistence point is unstable as each species would be better off being \textit{``Selfish"}, i.e.\ exploiting the deviation of the other species. The frequency of players playing strategy \textit{``Generous"} ($G_1$) in species $1$ is given by $x$ and in species $2$ ($G_2$) by $y$. The frequencies of players playing strategy \textit{``Selfish"} ($S_1$ and $S_2$) are given by $1-x$ and $1-y$ in species $1$ and $2$, respectively. The fitness of the generous strategy in species 1, $f_{G_1}$, depends on the frequency $y$ of generous players in species 2, $f_{G_1}(y)$. Equivalently, the fitness of the generous strategy in species 2, $f_{G_2}$, depends on the frequency $x$ of generous players in species 1, $f_{G_2}(x)$. The replicator dynamics assumes that the change in frequency of a strategy is proportional to the difference between the fitness of that strategy and the average fitness of the species $\bar{f}$ \cite{taylor:MB:1978,hofbauer:book:1998,hofbauer:BAMS:2003}. Thus, the time evolution of the frequencies of the \textit{``Generous"} players in the two species is given by \begin{eqnarray} \dot{x} &= r_x x \left(f_{G_1}(y) - \bar{f}_1(x,y) \right) \nonumber \\ \dot{y} &= r_y y \left(f_{G_2}(x) - \bar{f}_2(x,y) \right). \label{eq:orirepeqs} \end{eqnarray} The parameters $r_x$ and $r_y$ are the evolutionary rates of the two species. We first recover the scenarios described in \cite{bergstrom:PNAS:2003}. If the evolutionary rates are equal ($r_x=r_y$) and the evolutionary game is symmetric, then the basins of attraction of $(S_1, G_2)$ and $(G_1, S_2)$ are of equal size (Fig.\ \ref{fig:compare} Panel A). For unequal evolutionary rates, the species which is evolving slower (in our case species $1$ with the rate $r_x=r_y/8$) has a larger basin of attraction (Fig.\ \ref{fig:compare} Panel B).
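A minimal numerical sketch (ours, not part of the original analysis) makes this concrete. Assuming the snowdrift payoffs $b=2$, $c=1$ used later in the text, the fitness advantage of being generous is $f_{G_1}(y)-f_{S_1}(y)=1-\tfrac{3}{2}y$ (and likewise with $x\leftrightarrow y$), so Eqs.~(\ref{eq:orirepeqs}) can be integrated by a simple Euler scheme:

```python
# Illustrative sketch: Euler integration of the two-species replicator
# equations for the two player snowdrift game with b = 2, c = 1
# (payoffs b - c/2 = 1.5, b - c = 1, b = 2, 0), rates r_x, r_y.

def simulate(x, y, rx, ry, dt=0.01, steps=200_000):
    for _ in range(steps):
        # fitness advantage of "Generous" over "Selfish":
        # f_G(q) - f_S(q) = 1 - 1.5 q  for b = 2, c = 1
        dx = rx * x * (1 - x) * (1 - 1.5 * y)
        dy = ry * y * (1 - y) * (1 - 1.5 * x)
        x, y = x + dt * dx, y + dt * dy
    return x, y

# Slower species 1 (r_x = r_y / 8): from a symmetric start the trajectory
# reaches (x, y) = (0, 1), i.e. (S1, G2), the outcome favourable to the
# slower species -- the Red King effect along a single trajectory.
x, y = simulate(0.5, 0.5, rx=0.125, ry=1.0)
print(round(x, 3), round(y, 3))  # -> 0.0 1.0
```

In this run the slower species indeed secures the favourable corner $(S_1, G_2)$; with equal rates the same symmetric start flows to the interior point $x=y=2/3$ instead.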
This asymmetry, where most of the initial conditions lead to an outcome favouring the slower evolving species, has been termed the \textit{Red King effect} \cite{bergstrom:PNAS:2003}. \begin{figure}[h] \includegraphics[width=\columnwidth]{compare.pdf} \caption{ The composition of both species can range from all selfish $(S)$ to all generous $(G)$. If the other species is sufficiently generous, selfish behaviour is favoured in both species. However, if the other species is selfish, generous behaviour is advantageous. This is captured by the snowdrift game discussed in the text. For equal evolutionary rates, $r_x=r_y$, the basins of attraction for the two outcomes $(S_1, G_2)$ and $(G_1, S_2)$ are of equal size (Panels A and C). The colours illustrate the regions leading to the outcomes favourable to species $1$ (blue shaded area leading to $(S_1, G_2)$) and species $2$ (red shaded area leading to $(G_1, S_2)$). For a two player game, $d=2$, and $r_x=r_y/8$, the basin of attraction favourable to the slower evolving species $1$ grows substantially (Panel B) \cite{bergstrom:PNAS:2003}. For a twenty player game, $d=20$, the basins of attraction have identical size for equal evolutionary rates, but the position of the internal equilibrium is shifted (Panel C). When species $1$ evolves slower than species $2$ in this situation, most of the initial conditions lead to a solution which is unfavourable to species $1$ (Panel D). Thus for 20 players instead of two, the Red King effect is reversed ($b=2$ and $c=1$). \label{fig:compare} } \end{figure} We now extend the above approach to multiplayer games. Extending the number of players from 2 to $d$ adds a polynomial nonlinearity to the fitness functions of the strategies (see Appendix). For a multiplayer game we no longer have a $2 \times 2$ payoff matrix, but rather a payoff table.
For this, we use the proposal from \cite{souza:JTB:2009} for a $d$-player snowdrift game, where the costs of being generous are divided among the \textit{``Generous"} players. In addition, a benefit is produced only if there are at least $M$ \textit{``Generous"} players. That is, for $k<M$ \textit{``Generous"} players, each one of them incurs a loss of $c/M$ and the \textit{``Selfish"} players obtain nothing, whereas for $k \geq M$ the \textit{``Generous"} obtain $b - c/k$ and the \textit{``Selfish"} obtain $b$ at no cost. For species $2$ we can write down a different payoff setup which could have different values for $b$, $c$, $M$, $d$ etc., thus creating a ``bi-table" game. For the time being, we assume that the payoff setups are symmetric for the two species and hence we just elucidate the details for species $1$. The exact formulation of the payoffs and the calculations of fitness values are given in the Appendix. Note that for $d=2$, $M=1$, $b=2$, and $c=1$ we recover the matrix used in \cite{bergstrom:PNAS:2003}. Even for these new fitness functions for multiplayer games, the dynamics are still given by the replicator equations \cite{hauert:JTB:2006a,pacheco:PRSB:2009,gokhale:PNAS:2010}. Also note that for two player games with $M=1$, there are four fixed points in which each species is either selfish or generous. In addition, there is an internal fixed point given by $x = y = 2 (b-c)/(2 b - c) = \frac{2}{3}$. The position and the stability of the fixed points are independent of the evolutionary rates, but, as we will see, they depend on the number of players $d$. For a $20$ player game ($d=20$), the basins of attraction are still of the same size, but the dynamics leading to the stable points on the vertices are completely different (Fig.~\ref{fig:compare} Panel C, $r_x=r_y$). The internal equilibrium has now shifted to $x = y = 0.063$.
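These equilibria can be cross-checked numerically. The sketch below is ours, not from the paper; the convention it encodes (the focal individual's own contribution counts towards $k$, and the $d-1$ co-players are drawn from the partner species, with $M=1$) is inferred from the equilibria quoted in the text rather than taken from the Appendix, which is not reproduced here:

```python
# Internal equilibrium of the d-player snowdrift game with M = 1
# (assumed convention: focal counts towards k; d-1 co-players).
from math import comb

def fG(y, d, b, c):
    # Generous focal with j generous co-players: payoff b - c/(j+1)
    return sum(comb(d - 1, j) * y**j * (1 - y)**(d - 1 - j) * (b - c / (j + 1))
               for j in range(d))

def fS(y, d, b, c):
    # Selfish focal obtains b whenever at least one co-player is generous
    return b * (1 - (1 - y)**(d - 1))

def equilibrium(d, b=2.0, c=1.0, lo=1e-6, hi=1 - 1e-6):
    # bisection on fG - fS: positive near 0, negative near 1
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if fG(mid, d, b, c) > fS(mid, d, b, c):
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(equilibrium(2))   # 2(b-c)/(2b-c) = 2/3
print(equilibrium(20))  # close to the 0.063 quoted in the text
```

For $d=2$ the solver reproduces $2/3$, and for $d=20$ it lands near $0.063$, illustrating how the interior equilibrium slides towards the selfish corner as the group grows.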
As before, we introduce an asymmetry in the evolutionary rates. Interestingly, we find that for a $20$ player game (Fig.~\ref{fig:compare} Panel D, $r_x=r_y/8$), with the same asymmetric growth rates as in the two player case, most of the initial conditions lead to a stable point where species $2$ is selfish and species $1$ is generous ($G_1$, $S_2$). This reverses the result obtained for $d=2$. Everything else being the same, in the presence of multiple players, the Red King effect is not observed in this example. Next, we explore the process by which the Red King effect vanishes. The replicator solutions of the two species create quadrants in the state space ($0 \leq x,y \leq 1$). Of these quadrants, the top right and the bottom left are of special interest as they contain the curve that separates the two basins of attraction (the blue and red sections in Fig.~\ref{fig:compare}). Hence all points starting on one side of that curve lead to the same equilibrium. Consider the top right quadrant. Species $2$ is represented by the $y$-axis. A faster evolution by species $2$ results in most of the initial conditions leading to the outcome favourable for species $2$, i.e. $(G_1,S_2)$. An exactly opposite scenario takes place in the bottom left quadrant. Hence, in this quadrant, as species $1$ evolves at a slower pace than species $2$, most of the initial conditions lead to an outcome favouring species $1$, i.e. $(S_1, G_2)$. As long as the internal equilibrium is on the diagonal the Red King effect depends on the sizes of these quadrants. Changing the number of players alters the sizes of these two influential quadrants. For example, consider the case of the twenty player game (Fig.~\ref{fig:compare} Panel C-D). The size of the bottom left quadrant is reduced to such an extent that almost the whole state space leads to the outcome favourable for the faster evolving species.
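The reversal can be reproduced with the same kind of numerical sketch as before (ours; payoffs $b=2$, $c=1$, $M=1$, and the assumed convention that the focal player counts towards $k$):

```python
# Twenty-player snowdrift between two species, r_x = r_y / 8.
from math import comb

def adv(q, d=20, b=2.0, c=1.0):
    # f_G - f_S for a focal player facing d-1 co-players of the other
    # species, each generous with probability q (M = 1 throughout)
    f_g = sum(comb(d - 1, j) * q**j * (1 - q)**(d - 1 - j) * (b - c / (j + 1))
              for j in range(d))
    f_s = b * (1 - (1 - q)**(d - 1))
    return f_g - f_s

x, y = 0.5, 0.5                # symmetric starting frequencies
rx, ry, dt = 0.125, 1.0, 0.05  # species 1 evolves eight times slower
for _ in range(60_000):
    dx = rx * x * (1 - x) * adv(y)
    dy = ry * y * (1 - y) * adv(x)
    x, y = x + dt * dx, y + dt * dy

# The faster species 2 now ends up selfish while the slow species 1 is
# generous: (G1, S2), i.e. the Red King effect is reversed for d = 20.
print(round(x, 3), round(y, 3))  # -> 1.0 0.0
```

The same rate asymmetry that rewarded slowness for $d=2$ now punishes it: the fast species crashes below the interior equilibrium first, after which the slow species is driven to generosity.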
The bottom left and the top right quadrants have equal size when the internal equilibrium is at $x = y = 0.5$. For a fixed $b$ and $c$ we cannot select any arbitrary number of players $d$ to obtain this equilibrium, as $d$ is not a continuous variable. If the equilibrium is above $x = 0.5$, then a decrease in the evolutionary rate can be beneficial, as demonstrated by the Red King effect. Conversely, if the equilibrium is below $x=0.5$, then an increase in the evolutionary rate might be favourable. Hence, if the number of players positions the equilibrium at $x<0.5$, the faster evolving species is favoured. Asymmetries have been considered in mutualistic species at the level of species or other properties of the system such as interactions \cite{noe:Ethology:1991}, interaction lengths \cite{johnstone:ECL:2002} or growth rates \cite{bergstrom:PNAS:2003}. Asymmetry in the number of interacting partners has only recently been tackled \cite{wang:JRSI:2011}. Going back to the example of ants and larvae, a single larva is tended to by multiple ants. Thus, while from each ant's point of view this is a two player game, for the larva this would be a multiplayer game. Where such multiplayer games are feasible, it is also possible that a certain quorum needs to be fulfilled for the game to proceed. Client fish have been shown to choose cleaning stations with two cleaners over solitary cleaners \cite{bshary:AB:2002}. A certain number of ants are required to save a caterpillar from its predator. It has been shown that the amount of secretions of a lycaenid larva is correlated with the number of attending ants \cite{axen:BE:1998}. In the following two paragraphs we explore these two points: asymmetry in the number of players for the two species, and different thresholds in either species to start off the benefits of mutualistic relations. Instead of the single parameter $d$, we now have $d_1$ and $d_2$ as the number of players for the two species $1$ and $2$.
For symmetric evolutionary rates, if the two species play games with different numbers of players ($d_1 \neq d_2$), then the basins of attraction become asymmetric. Hence, if a species is currently at a disadvantage, a modification of the number of players or the evolutionary rate may put it on equal footing with the other species. Due to the asymmetric number of players, the sizes of the basins of attraction depend not just on the sizes of the quadrants but also on the shape of the curve separating the basins of attraction. Thus, it is possible to counter the Red King effect by changing the number of interacting agents (Fig.\ \ref{fig:counter}). \begin{figure} \includegraphics[width=\columnwidth]{asymplandrate.pdf} \caption{ The Red King effect can be neutralised and/or even reversed if the number of players increases. Here we show the scenario explored in \cite{bergstrom:PNAS:2003} on the left panel; a two player game ($d_1 = d_2 = 2$) with $r_x=r_y/8$. Most of the initial conditions lead towards the state favourable for species $1$, ($S_1,G_2$). This changes when the number of players in species $2$ increases from $d_2=2$ to $d_2=3$, i.e.\ now one individual of species $2$ interacts with two individuals of species $1$. The horizontal and vertical lines denote the positions where the change in the strategy frequency is zero for species $1$ and $2$, respectively. The solution for species $2$ (vertical line) moves towards smaller $x$, increasing the size of the top right quadrant. For $d_1=2$ and $d_2=3$, it is given by $x = \frac{3 (2 b-c) - \sqrt{3 (4 b c - c^2)} }{2 (3 b -c)}$, whereas the solution for species $1$ is still $ y = \frac{2(b-c)}{2 b - c}$ (see Appendix). Thus, the quadrant favouring the species with a faster evolutionary rate grows. Since the number of players affects the size of these quadrants, it can eliminate or magnify the Red King effect.
} \label{fig:counter} \end{figure} Until now, we have considered that a single \textit{``Generous"} individual can generate the benefit of mutualism ($M=1$). To begin with the simplest multiplayer case, we consider a symmetric three player game with different thresholds in either species to start off the benefits of mutualistic relations (say $M_1$ for species $1$ and $M_2$ for species $2$, where in general $M_i$ can range from $1$ to $d_i$). The payoff matrices become asymmetric due to the different thresholds for the two species. Here, it matters whether we study the usual replicator dynamics or the modified replicator equations (Fig.\ \ref{fig:thresholdsrep}; see Appendix), as they can result in different sizes of the basins of attraction. The choice of dynamics depends on the details of the model system under consideration. Ultimately, the macroscopic dynamics can be derived from the underlying microscopic process \cite{traulsen:PRL:2005,black:TREE:2012}. Manipulating the thresholds can also change the nature of the game from coexistence to coordination \cite{souza:JTB:2009}, which implies that different social dilemmas arise in multiplayer games. \section{Discussion} Interspecific relationships are exceedingly complex \cite{blaser:Nature:2007}. The development of a game theoretical description of such multiplayer mutualisms requires an approach beyond that typically arising in multiplayer social dilemmas \cite{bshary:ASB:2004}. In a mutualistic framework, it is best for the two species to cooperate with each other. We do not ask how these mutualisms arise. Rather, once they do, what is the best strategy to contribute towards the common benefit \cite{bshary:book:2003,bowles:bookchapter:2003}? It would be possible to include the interactions between the individuals of the same species, as has recently been explored experimentally \cite{wang:JRSI:2011}.
It has also been shown \cite{axen:BE:1998} that the amount of larval secretions is influenced by the quality of the other larvae in the group. But then, we would be shifting our focus from the problem of interspecific mutualism to intraspecific cooperation \cite{bshary:Nature:2008}. Here, we have focused on the interspecific interactions, where the interacting partners are always picked from the other species \cite{schuster:BC:1981}. Bergstrom and Lachmann have shown that in such a mutualistic scenario, the species which evolves slower can get away with being selfish and force the other species to make a generous contribution. They termed this the Red King effect. If we include multiple players, the Red King effect becomes much more complex. For simplicity, game theoretical arguments usually assume pairwise interactions. For modeling collective phenomena \cite{couzin:Nature:2005,sumpter:book:2010}, multiplayer games may be necessary. The exact number of players is a matter of choice, though. Group size distributions give us an idea about the mean group size of a species. Instead of using pairwise interactions or an arbitrary number of individuals to form a group, we could use the mean group sizes as the number of interacting individuals. Group size is known to be of importance in mutualisms \cite{wilson:AmNat:1983}. As we have seen here, it can be an influential factor in deciding how the benefits are shared. Countering the Red King or enhancing its effect is possible by altering the group size. Hence, can the group size itself be an evolving strategy? The study of group size distributions has been tackled theoretically \cite{krause:book:2002,hauert:Science:2002,niwa:JTB:2003,hauert:PRSB:2006,hauert:BT:2008,veelen:JTB:2010,sumpter:book:2010,braennstroem:JMB:2011,pena:Evolution:2011} and empirically in various species ranging from house sparrows to humans \cite{zipf:book:1949,krause:book:2002,sumpter:book:2010,griesser:PLosOne:2011}.
In our example of ants and butterfly larvae, it has been observed that a larva was most successful in getting more ant attendants in a group of four larvae \cite{pierce:BES:1987}. It would be interesting to see if the distributions in mutualistic species peak at the group size which is the best response to their symbiont partners' choices. This brings forth another of our assumptions, also implicit in \cite{bergstrom:PNAS:2003}. The rate at which strategies evolve is assumed to be much faster than the rate at which the evolutionary rates or, as just mentioned, the group sizes evolve. If these traits are genetically determined, then this assumption may no longer hold. The rate of evolution is typically assumed to be constant, but it could well be a variable, subject to evolution. We have seen that the number of players can affect whether evolving slower or faster is favourable. It would then be interesting to determine the interplay between the evolving group size and the evolving evolutionary rate and what effect it has on the dynamics of strategy evolution. Another method of introducing asymmetry is to have different payoff tables for the two species (i.e. different benefits and costs for the two species). Also, we have only considered two strategies per species. An asymmetric number of strategies can induce further asymmetries in the interaction \cite{schuster:BC:1981}. The intricacies of multiplayer games lend themselves to the study of such systems, but they also show that mutualistic interactions may be far more complex than often envisioned. Applying multiplayer game theory to mutualism unravels the dynamics between species and can be used to understand the complexity of these non-linear systems. \textbf{Acknowledgements}. We thank No\'emie Erin, Christian Hilbe, Aniek Ivens, Martin Kalbe, Michael Lachmann, Eric Miller and Istv\`an Scheuring for helpful discussions and suggestions. We also thank the referees and the editor for their detailed and constructive input.
Financial support from the Emmy-Noether program of the DFG and from the Max Planck Society is gratefully acknowledged. \begin{figure}[h] \includegraphics[width=\columnwidth]{standardreplicator} \caption{ A three player game with asymmetric thresholds, but $r_x=r_y$. Species $1$ and species $2$ are both playing a three player game. For species $1$, it is enough if one individual is \textit{``Generous"} to produce the benefit ($M_1 = 1$). For species $2$, however, the minimum number of \textit{``Generous"} players required to produce any benefit strongly affects the replicator dynamics. For $M_2=M_1=1$, we observe symmetric basins of attraction. The manifolds for the saddle point plotted forward in time (dashed green lines) and backward in time (solid green lines) can be used to define the basins of attraction. For $M_2=2$, we observe a region with closed orbits in the interior (white background), almost all initial conditions outside this region lead to ($G_1,S_2$). For $M_2=3$, we observe closed orbits in almost the whole state space. To avoid negative payoffs and to facilitate the comparison with the modified dynamics (see Appendix), we have added a background fitness of $1.0$ to all payoffs, but this does not alter the dynamics here. } \label{fig:thresholdsrep} \end{figure}
\section{Introduction} The primary purpose of this paper is to establish uniform Lipschitz estimates for a family of elliptic operators with rapidly oscillating, almost-periodic coefficients, arising in the theory of homogenization. More precisely, we consider the linear elliptic operator \begin{equation}\label{operator} \mathcal{L}_\varepsilon =-\text{\rm div} \big( A(x/\varepsilon)\nabla \big) =-\frac{\partial }{\partial x_i} \left\{ a_{ij}^{\alpha\beta} (x/\varepsilon) \frac{\partial}{\partial x_j} \right\}, \qquad \qquad \varepsilon>0 \end{equation} (the summation convention is used throughout). Let $A(y)=\big( a_{ij}^{\alpha\beta} (y)\big)$ be real and bounded in $\mathbb{R}^d$, where $1\le i,j\le d$ and $1\le \alpha, \beta\le m$. Throughout the paper we will assume that \begin{equation}\label{ellipticity} \mu |\xi|^2 \le a_{ij}^{\alpha\beta} (y)\xi_i^\alpha\xi_j^\beta \le \mu^{-1} |\xi |^2 \quad \text{ for any } y\in \mathbb{R}^d \text{ and } \xi=(\xi_i^\alpha) \in \mathbb{R}^{m\times d}, \end{equation} where $\mu>0$, and \begin{equation}\label{uniform-ap} \lim_{R\to \infty} \sup_{y\in \mathbb{R}^d} \inf_{\substack{ z\in \mathbb{R}^d\\ |z|\le R}} \| A(\cdot +y)-A(\cdot +z)\|_{L^\infty(\mathbb{R}^d)} = 0. \end{equation} Notice that if $A$ is bounded and continuous in $\mathbb{R}^d$, then $A$ satisfies (\ref{uniform-ap}) if and only if $A$ is \emph{uniformly almost-periodic} in $\mathbb{R}^d$, i.e., each entry of $A$ is the uniform limit of a sequence of trigonometric polynomials. We define the following modulus, which quantifies the almost periodic assumption: \begin{equation} \label{rho} \rho (R):=\sup_{y\in \mathbb{R}^d} \inf_{\substack{ z\in \mathbb{R}^d\\ |z|\le R}} \| A(\cdot +y)-A(\cdot +z)\|_{L^\infty(\mathbb{R}^d)}. 
\end{equation} Given a bounded $C^{1, \alpha}$ domain $\Omega\subset \mathbb{R}^d$, we are interested in estimating the quantity $\|\nabla u_\varepsilon\|_{L^\infty(\Omega)}$, uniformly in $\varepsilon>0$, for weak solutions~$u_\varepsilon$ of the Dirichlet problem \begin{equation}\label{DP-1} \mathcal{L}_\varepsilon (u_\varepsilon) =F \quad \text{ in } \Omega \quad \text{ and } \quad u_\varepsilon =f \quad \text{ on } \partial\Omega, \end{equation} as well as those of the Neumann problem \begin{equation}\label{NP-1} \mathcal{L}_\varepsilon (u_\varepsilon)=F \quad \text{ in } \Omega \quad \text{ and } \quad \frac{\partial u_\varepsilon}{\partial\nu_\varepsilon} =g \quad \text{ on } \partial\Omega. \end{equation} In (\ref{NP-1}) we have used $\partial u_\varepsilon/\partial \nu_\varepsilon$ to denote the conormal derivative $n(x) A(x/\varepsilon)\nabla u_\varepsilon (x)$ on $\partial\Omega$, where $n(x)$ is the outward unit normal to $\partial\Omega$. To ensure that we have Lipschitz estimates at small scales, we assume that $A$ is uniformly H\"older continuous, i.e., there exist $\tau>0$ and $\lambda\in (0,1]$ such that \begin{equation}\label{H-continuity} |A(x)-A(y)|\le \tau |x-y|^\lambda \quad \text{ for any } x,y \in \mathbb{R}^d. \end{equation} The following are the main results of the paper. \begin{theorem}\label{main-theorem-Lip} Suppose that $A(y)$ satisfies the uniform ellipticity condition (\ref{ellipticity}) and the H\"older continuity condition (\ref{H-continuity}). Suppose also that there exist $N>5/2$ and $C_0>0$ such that \begin{equation}\label{decay-condition} \rho (R) \le C_0 \big[ \log R\big]^{-N} \qquad \text{ for any } R\ge 2. \end{equation} Let $\Omega$ be a bounded $C^{1, \alpha}$ domain in $\mathbb{R}^d$ for some $\alpha>0$. Let $u_\varepsilon\in H^1(\Omega; \mathbb{R}^m)$ be a weak solution of the Dirichlet problem (\ref{DP-1}).
Then \begin{equation}\label{Lip-estimate-0} \| \nabla u_\varepsilon\|_{L^\infty(\Omega)} \le C \left\{ \| F\|_{L^p(\Omega)} +\| f\|_{C^{1, \beta} (\partial\Omega)} \right\}, \end{equation} where $p>d$, $\beta \in (0, \alpha)$, and $C$ depends only on $p$, $\beta$, $A$, and $\Omega$. \end{theorem} \begin{theorem}\label{main-theorem-Lip-N} Suppose that $A(y)$ satisfies (\ref{ellipticity}) and (\ref{H-continuity}). Also assume that the decay condition (\ref{decay-condition}) holds for some $N>3 $ and $C_0>0$. Let $\Omega$ be a bounded $C^{1, \alpha}$ domain in $\mathbb{R}^d$ for some $\alpha>0$. Let $u_\varepsilon\in H^1(\Omega; \mathbb{R}^m)$ be a weak solution of the Neumann problem (\ref{NP-1}). Then \begin{equation}\label{Lip-estimate-N-0} \| \nabla u_\varepsilon\|_{L^\infty(\Omega)} \le C \left\{ \| F\|_{L^p(\Omega)} +\| g\|_{C^{\beta} (\partial\Omega)} \right\}, \end{equation} where $p>d$, $\beta \in (0, \alpha)$, and $C$ depends only on $p$, $\beta$, $A$, and $\Omega$. \end{theorem} Note that if $A(y)$ is periodic, then $\rho (R)=0$ for $R$ sufficiently large and thus satisfies the assumption (\ref{decay-condition}) for any $N>1$. In this case the Lipschitz estimate (\ref{Lip-estimate-0}) for the Dirichlet problem (\ref{DP-1}) in $C^{1, \alpha}$ domains was established by Avellaneda and Lin \cite{AL-1987} under the conditions (\ref{ellipticity}) and (\ref{H-continuity}). This classical result was recently extended by Kenig, Lin, and Shen in \cite{KLS1}, where estimate (\ref{Lip-estimate-N-0}) was established for solutions of the Neumann problem (\ref{NP-1}) in the periodic setting, under an additional symmetry condition $A^*(y)=A(y)$, i.e., $a_{ij}^{\alpha\beta}(y)=a_{ji}^{\beta\alpha} (y)$ for any $1\le i, j\le d$ and $1\le \alpha, \beta\le m$. Our Theorems \ref{main-theorem-Lip} and \ref{main-theorem-Lip-N} further extend the main results in \cite{AL-1987} and \cite{KLS1} to the almost-periodic setting. 
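The periodic case may be verified directly from the definition (\ref{rho}); the following computation is a sketch, under the normalization that $A$ is $\mathbb{Z}^d$-periodic.

```latex
% Sketch: Z^d-periodic coefficients satisfy rho(R)=0 once R >= sqrt(d).
% Given y in R^d, write y = k + z with k in Z^d and z in [0,1)^d,
% so that |z| <= sqrt(d) <= R. By periodicity, A(. + y) = A(. + z), hence
\[
\inf_{\substack{z\in \mathbb{R}^d\\ |z|\le R}}
\| A(\cdot +y)-A(\cdot +z)\|_{L^\infty(\mathbb{R}^d)} = 0
\qquad \text{for every } y\in \mathbb{R}^d, \ R\ge \sqrt{d}.
\]
% Taking the supremum over y gives rho(R)=0 for R >= sqrt(d), so the
% decay condition (\ref{decay-condition}) holds for any N>1.
```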
We point out that Theorem \ref{main-theorem-Lip-N} is new even in the periodic setting, as the symmetry condition $A^*=A$ is not required. We also remark that the Lipschitz estimates in Theorems \ref{main-theorem-Lip} and \ref{main-theorem-Lip-N} are sharp in the sense that there is no uniform modulus of continuity for the gradient of solutions unless $\text{\rm div}(A)=0$. As for the $C^{1,\alpha}$ assumption on the domain $\Omega$, we note that Lipschitz estimates may fail on a $C^1$ domain even for harmonic functions. The proof of uniform estimates in both \cite{AL-1987} and \cite{KLS1} is based on a compactness argument that originated from the study of regularity theory in the calculus of variations and minimal surfaces. The argument, which was introduced to the study of homogenization in \cite{AL-1987}, extends readily to the almost-periodic setting in the case of uniform H\"older estimates. In fact, it was proved in \cite{Shen-2014} that if $u_\varepsilon$ is a weak solution of the Dirichlet problem: \begin{equation}\label{DP-2} \mathcal{L}_\varepsilon (u_\varepsilon) =F +\text{\rm div} (h) \quad \text{ in } \Omega \quad \text{ and } \quad u_\varepsilon =f \quad \text{ on } \partial\Omega, \end{equation} where $\Omega$ is a bounded $C^{1,\alpha}$ domain in $\mathbb{R}^d$, then \begin{equation}\label{H-estimate} \aligned \| u_\varepsilon\|_{C^\beta (\overline{\Omega})} \le C \bigg\{ \| f\|_{C^\beta (\partial\Omega)} &+\sup_{\substack{x\in \Omega\\ 0<r<r_0}} r^{2-\beta} -\!\!\!\!\!\!\int_{B(x,r)\cap \Omega} |F|\\ &+\sup_{\substack{x\in \Omega\\ 0<r<r_0}} r^{1-\beta} \left(-\!\!\!\!\!\!\int_{B(x,r)\cap \Omega} |h|^2\right)^{1/2}\bigg\} \endaligned \end{equation} for any $\beta \in (0,1)$, where $r_0=\text{\rm diam} (\Omega)$ and $C$ depends only on $\beta$, $A$, and $\Omega$. However, for Lipschitz estimates, the approach in \cite{AL-1987, KLS1} relies on the Lipschitz estimates for interior and boundary correctors in a crucial way.
It is not clear how to extend this to the almost-periodic setting, as any estimate of correctors in a non-periodic setting is far from trivial, even in the interior case. Our proofs of Theorems \ref{main-theorem-Lip} and \ref{main-theorem-Lip-N} will be based on a rather general scheme for proving Lipschitz estimates at large scale in homogenization. The scheme, which was motivated by the compactness argument in \cite{AL-1987}, was recently formulated and used by the first author and C. Smart in \cite{Armstrong-Smart-2014} for convex integral functionals with random coefficients. The idea, rather than arguing by contradiction (by compactness), is to apply a $C^{1,\alpha}$ Campanato iteration directly. For this we need to show that the ``flatness'' of a solution $u$ (how well it is approximated by an affine function) improves on smaller scales, e.g., for some $\theta\in(0,1/4)$, \begin{equation}\label{flatness} \aligned \frac{1}{\theta r } & \inf_{\substack{ M\in \mathbb{R}^{m\times d}\\ q\in \mathbb{R}^m}} \left(-\!\!\!\!\!\!\int_{B_{\theta r}} |u(x)-M x-q|^2\, dx \right)^{1/2}\\ &\qquad\qquad \le \frac{1}{2} \left( \frac{1}{r} \inf_{\substack{ M\in \mathbb{R}^{m\times d}\\ q\in \mathbb{R}^m}} \left(-\!\!\!\!\!\!\int_{B_{r}} |u(x)- M x-q|^2 \, dx \right)^{1/2} \right). \endaligned \end{equation} Since solutions of the \emph{homogenized} equation satisfy such an estimate (on all scales), we indeed have~(\ref{flatness}) up to the error arising in homogenization. For large balls, we may expect this error to be much smaller than the improvement in the flatness. Therefore, if we can control the error in homogenization effectively, we may hope to iterate the improvement of flatness down to microscopic scales.
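To indicate how an estimate like (\ref{flatness}) produces a Lipschitz bound, the following is a sketch of the underlying Campanato-type iteration, stated without the homogenization error terms that the actual argument must carry along.

```latex
% Denote the flatness (excess) of u at scale r by
\[
E(r) := \frac{1}{r}
\inf_{\substack{M\in \mathbb{R}^{m\times d}\\ q\in \mathbb{R}^m}}
\left( -\!\!\!\!\!\!\int_{B_r} |u(x)-Mx-q|^2\, dx \right)^{1/2}.
\]
% If E(theta r) <= (1/2) E(r) on every scale down to some r_0 > 0,
% then iterating k times yields dyadic decay:
\[
E(\theta^k r)\le 2^{-k} E(r),
\qquad \text{hence} \qquad
E(t)\le C \left(\frac{t}{r}\right)^{\alpha} E(r)
\quad \text{for } r_0\le t\le r,\
\alpha=\frac{\log 2}{\log (1/\theta)}.
\]
% By Campanato's characterization of Hoelder spaces, this decay of the
% excess controls the Lipschitz seminorm of u at all scales above r_0.
```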
Indeed, as we show in Theorem~\ref{g-theorem-1} (which is a slight modification of~\cite[Lemma 5.1]{Armstrong-Smart-2014}), this scheme yields a uniform Lipschitz estimate down to the microscopic scale, provided that the rate of homogenization is sufficiently fast: an algebraic (or even Dini-type) convergence rate suffices. Such error estimates were recently proved by the second author \cite{Shen-2014} for solutions $u_\varepsilon$ of Dirichlet problem (\ref{DP-1}). In particular, it was shown that \begin{equation}\label{error-estimate-0} \| u_\varepsilon -u_0\|_{L^2(\Omega)} \le C\, \omega (\varepsilon) \| u_0\|_{W^{2, p} (\Omega)}, \end{equation} where $p>d$ and $\omega (\varepsilon)$ is a modulus on $(0,1]$, with $\omega (0+)=0$, which can be given explicitly using the modulus $\rho (R)$ in (\ref{rho}) (see Section 2). The decay conditions on $\rho(R)$ in Theorems~\ref{main-theorem-Lip} and~\ref{main-theorem-Lip-N} are used precisely to ensure that we have Dini-type rates for homogenization. We mention that condition (\ref{decay-condition}) holds, for example, if $A(y)$ is quasi-periodic in $\mathbb{R}^d$ with frequencies satisfying the so-called Kozlov (C) condition \cite{Kozlov-1979}. In fact it was proved in \cite{Shen-2014} that the Kozlov (C) condition implies that $\rho(R)\le C_0 (R+1)^{-\lambda}$ for some $\lambda>0$. We do not know if the decay assumptions on $\rho(R)$ in Theorems~\ref{main-theorem-Lip} and~\ref{main-theorem-Lip-N} can be weakened substantially or whether uniform Lipschitz estimates hold for general uniformly almost periodic coefficients. However, we remark that the scheme for proving Lipschitz estimates formalized in Theorem~\ref{g-theorem-1} is a quite general tool that can be useful in other circumstances. It applies, for example, to the Poisson equation \begin{equation}\label{e.lapf} -\Delta u = \text{\rm div} f \end{equation} and yields a Lipschitz estimate on $u$ precisely in the case that $f$ is $C^\alpha$ (or Dini continuous). 
Likewise, a straightforward modification of Theorem~\ref{g-theorem-1} yields a statement that implies the classical Schauder estimates. Of course, if $f$ is merely continuous, then it is well-known that solutions of~\eqref{e.lapf} may fail to be Lipschitz continuous. This suggests that, in analogy, the decay conditions on $\rho(R)$ are natural and perhaps even necessary. The general scheme mentioned above makes minimal use of the structure of the equation and in particular does not involve correctors in a direct manner (though indirectly via approximation requirements). As a result, it can be adapted surprisingly well for proving Lipschitz estimates up to the boundary with either Dirichlet or Neumann boundary conditions. The key step then is to establish suitable error estimates of $\| u_\varepsilon -u_0\|_{L^2(\Omega)}$, not necessarily sharp, for local weak solutions with Dirichlet or Neumann condition. This will be achieved by considering the function $$ w_\varepsilon =u_\varepsilon (x)- v_0 (x) -\varepsilon \chi_T (x/\varepsilon) \nabla v_0 (x), $$ where $T=\varepsilon^{-1}$, $\mathcal{L}_0 (v_0) =0$, and $\chi_T(y)$ denotes the approximate correctors for $\mathcal{L}_\varepsilon$. The proof relies on the pointwise estimates of $\chi_T$ obtained in \cite{Shen-2014}. In the case of Neumann conditions our argument also requires uniform Lipschitz estimates of $\chi_T$, which follow from uniform interior Lipschitz estimates. However, as we indicated earlier, our approach does not use boundary correctors. Let $G_\varepsilon (x,y)$ denote the matrix of Green functions for $\mathcal{L}_\varepsilon$ in $\Omega$, with pole at $y$. 
It follows from the proof of Theorem \ref{main-theorem-Lip} that for any $x,y\in \Omega$ and $x\neq y$, \begin{equation}\label{G-estimate-1} |\nabla_x G_\varepsilon (x,y) |+|\nabla_y G_\varepsilon (x,y)|\le C\, |x-y|^{1-d} \end{equation} and \begin{equation}\label{G-estimate-2} |\nabla_x \nabla_y G_\varepsilon (x,y)|\le C \, |x-y|^{-d}, \end{equation} where $C$ depends only on $A$ and $\Omega$. This, in particular, implies that the Poisson kernel $P_\varepsilon (x,y)$ for $\mathcal{L}_\varepsilon$ in $\Omega$ satisfies \begin{equation}\label{P-estimate} |P_\varepsilon (x,y)|\le \frac{C\, \text{\rm dist} (x, \partial\Omega)}{|x-y|^d} \end{equation} for any $x\in \Omega$ and $y\in \partial\Omega$. As in the periodic setting \cite{AL-1987-ho, AL-1987}, estimate (\ref{P-estimate}) yields the following. \begin{theorem}\label{main-theorem-max} Suppose that $A$ and $\Omega$ satisfy the same conditions as in Theorem \ref{main-theorem-Lip}. Let $1<p<\infty$. Let $u_\varepsilon$ be the solution of the $L^p$ Dirichlet problem \begin{equation}\label{DP-3} \mathcal{L}_\varepsilon (u_\varepsilon)=0 \quad \text{ in } \Omega \quad \text{ and } \quad u_\varepsilon =f \quad \text{ on } \partial\Omega \end{equation} with $(u_\varepsilon)^*\in L^p(\partial\Omega)$, where $f\in L^p(\partial\Omega; \mathbb{R}^m)$ and $(u_\varepsilon)^*$ denotes the non-tangential maximal function of $u_\varepsilon$. Then \begin{equation}\label{max-estimate} \| (u_\varepsilon)^*\|_{L^p(\partial\Omega)} \le C_p \, \| f\|_{L^p(\partial\Omega)}, \end{equation} where $C_p$ depends only on $p$, $A$, and $\Omega$. Furthermore, if $f\in L^\infty(\partial\Omega)$, then \begin{equation}\label{m-p} \| u_\varepsilon \|_{L^\infty(\Omega)} \le C\, \| f\|_{L^\infty(\partial\Omega)}, \end{equation} where $C$ depends only on $A$ and $\Omega$. \end{theorem} In this paper we also study the uniform $W^{1,p}$ estimates for $\mathcal{L}_\varepsilon$. 
Related results in the periodic setting may be found in \cite{AL-1987, AL-1991, Shen-2008, song-2012, KLS1, Geng-Shen-2014}. We emphasize that the H\"older condition (\ref{H-continuity}) is not assumed in the following two theorems. \begin{theorem}\label{main-theorem-W-1-p} Suppose that $A(y)$ is uniformly almost-periodic in $\mathbb{R}^d$ and satisfies (\ref{ellipticity}). Also assume that $A(y)$ satisfies the condition (\ref{decay-condition}) for some $N>3/2$. Let $\Omega$ be a bounded $C^{1, \alpha}$ domain in $\mathbb{R}^d$ for some $\alpha>0$ and $1<p<\infty$. Let $u_\varepsilon \in W^{1,p}(\Omega; \mathbb{R}^m)$ be a weak solution of the Dirichlet problem (\ref{DP-2}), where $h=(h_i^\alpha)\in L^p(\Omega; \mathbb{R}^{m\times d})$, $F\in L^p(\Omega; \mathbb{R}^m)$ and $f\in B^{p, 1-\frac{1}{p}} (\partial\Omega; \mathbb{R}^m)$. Then \begin{equation}\label{W-1-p} \| u_\varepsilon\|_{W^{1, p}(\Omega)} \le C_p \left\{ \| h\|_{L^p(\Omega)} +\| F\|_{L^p(\Omega)} + \| f\|_{B^{p, 1-\frac{1}{p}} (\partial\Omega)}\right\}, \end{equation} where $C_p$ depends only on $p$, $A$, and $\Omega$. \end{theorem} \begin{theorem}\label{main-theorem-W-1-p-N} Suppose that $A$ and $\Omega$ satisfy the same conditions as in Theorem \ref{main-theorem-W-1-p}. Let $1<p<\infty$. Let $u_\varepsilon\in W^{1,p}(\Omega; \mathbb{R}^m)$ be a weak solution to \begin{equation}\label{NP-2} \mathcal{L}_\varepsilon (u_\varepsilon)= \text{\rm div} (h) +F \quad \text{ in } \Omega \quad \text{ and }\quad \frac{\partial u_\varepsilon}{\partial \nu_\varepsilon} =g - n\cdot h \quad \text{ on } \partial\Omega. \end{equation} Then \begin{equation}\label{W-1-p-N} \| \nabla u_\varepsilon\|_{L^p(\Omega)} \le C_p \left\{ \| h\|_{L^p(\Omega)} +\| F\|_{L^p(\Omega)} + \| g\|_{B^{p, -\frac{1}{p}}(\partial\Omega)} \right\}, \end{equation} where $C_p$ depends only on $p$, $A$, and $\Omega$. \end{theorem} We conclude this section with some notation and comments on bounding constants $C$.
We will use $-\!\!\!\!\!\!\int_E f=\frac{1}{|E|}\int_E f$ to denote the $L^1$ average of $f$ over a set $E$. For a ball $B=B(x, r)$ we use $\alpha B$ to denote $B(x, \alpha r)$. We will use $C$ to denote constants that may depend on $d$, $m$, $A(y)$, $\Omega$, and other relevant parameters, but never on $\varepsilon$. It is important to note that since our assumptions on $A$ are invariant under translation and rotation, the constants $C$ will be invariant under any translation and rotation of $\Omega$. This allows us to freely use translations and rotations to simplify the arguments. As for rescaling, we observe that if $\mathcal{L}_\varepsilon (u_\varepsilon)=F$ and $v(x)=u_\varepsilon (rx)$, then $\mathcal{L}_{\varepsilon/r} (v)=G$, where $G(x)=r^2 F(rx)$. \section{Homogenization and convergence rates} \setcounter{equation}{0} Let $\mathcal{L}_\varepsilon =-\text{div} \big(A(x/\varepsilon)\nabla \big)$. Throughout this section we assume that $A(y)= \big( a_{ij}^{\alpha\beta} (y) \big)$ is uniformly almost-periodic in $\mathbb{R}^d$ and satisfies the ellipticity condition (\ref{ellipticity}). The H\"older continuity (\ref{H-continuity}) and decay condition (\ref{decay-condition}) will not be used here. \subsection{The homogenized operator and qualitative homogenization} To define the homogenized operator $\mathcal{L}_0$, we first introduce the space $B^2(\mathbb{R}^d)$, the $L^2$ space of almost-periodic functions in the sense of Besicovitch. A function $f$ in $L^2_{\rm loc} (\mathbb{R}^d)$ is said to belong to $B^2(\mathbb{R}^d)$ if $f$ is a limit of a sequence of trigonometric polynomials in $\mathbb{R}^d$ with respect to the semi-norm $$ \| f\|_{B^2} =\limsup_{R\to \infty} \left\{ -\!\!\!\!\!\!\int_{B(0,R)} |f|^2 \right\}^{1/2}.
$$ For $f\in L^1_{\rm loc} (\mathbb{R}^d)$, a number $\langle f\rangle$ is called the mean value of $f$ if $$ \lim_{\varepsilon\to 0^+} \int_{\mathbb{R}^d} f(x/\varepsilon) \varphi (x)\, dx =\langle f \rangle \int_{\mathbb{R}^d} \varphi $$ for any $\varphi\in C_0^\infty(\mathbb{R}^d)$. It can be shown that if $f, g\in B^2(\mathbb{R}^d)$, then $fg$ has a mean value. Under the equivalence relation $f\sim g$ if $\| f-g\|_{B^2}=0$, the vector space $B^2(\mathbb{R}^d)/\sim$ becomes a Hilbert space with the inner product defined by $(f,g)=\langle fg\rangle$. Let $V^2_{\rm pot}$ (resp. $V^2_{\rm sol}$) denote the closure in $B^2(\mathbb{R}^d; \mathbb{R}^{m\times d})$ of potential (resp. solenoidal) trigonometric polynomials with mean value zero. Then $$ B^2(\mathbb{R}^d; \mathbb{R}^{m\times d}) =V^2_{\rm pot} \oplus V^2_{\rm sol} \oplus \mathbb{R}^{m\times d}. $$ By the Lax-Milgram Theorem and the ellipticity condition (\ref{ellipticity}), for any $1\le j\le d$ and $1\le \beta\le m$, there exists a unique $\psi_j^\beta=\big(\psi_{ij}^{\alpha\beta}\big) \in V^2_{\rm pot}$ such that \begin{equation}\label{homo-equation} \langle a_{ik}^{\alpha\gamma} \psi_{kj}^{\gamma\beta} \phi_{i}^\alpha\rangle =-\langle a_{ij}^{\alpha\beta} \phi_i^\alpha\rangle \quad \text{ for any } \phi=\left( \phi_i^\alpha\right)\in V^2_{\rm pot}. \end{equation} Let $\widehat{A}=\big ( \widehat{a}_{ij}^{\alpha\beta}\big)$, where \begin{equation}\label{homo-coeff} \widehat{a}_{ij}^{\alpha\beta} =\langle a_{ij}^{\alpha\beta}\rangle + \langle a_{ik}^{\alpha\gamma} \psi_{kj}^{\gamma\beta}\rangle. \end{equation} The homogenized operator for $\mathcal{L}_\varepsilon$ is given by $\mathcal{L}_0 =-\text{\rm div} \big(\widehat{A}\nabla\big)$. We refer the reader to \cite{Jikov-1994} for details (also see earlier work in \cite{Kozlov-1979, Kozlov-1980, Papanicolaou-1979}). The proof of the following theorem may be found in \cite{Jikov-1994}.
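To illustrate the notion of mean value, we record a standard computation (a sketch included for the reader's convenience, not taken from the cited references): for a trigonometric polynomial, the mean value is its zero-frequency coefficient.

```latex
% Let f(y) = sum_k c_k e^{i lambda_k . y} with distinct frequencies
% lambda_k in R^d. For lambda != 0, averaging over large balls gives,
% by oscillation,
\[
-\!\!\!\!\!\!\int_{B(0,R)} e^{i\lambda\cdot y}\, dy \to 0
\qquad \text{as } R\to \infty,
\]
% so f has the mean value
\[
\langle f \rangle = \sum_{\lambda_k=0} c_k,
\]
% the coefficient of the constant mode. In particular, distinct
% characters e^{i lambda . y} are orthonormal with respect to the
% B^2 inner product (with complex conjugation in the second slot).
```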
\begin{theorem}\label{homo-theorem} Let $\Omega$ be a bounded Lipschitz domain in $\mathbb{R}^d$. For $F\in H^{-1}(\Omega; \mathbb{R}^m)$ and $\varepsilon>0$, let $u_\varepsilon \in H^1(\Omega; \mathbb{R}^m)$ be a weak solution of $\mathcal{L}_\varepsilon (u_\varepsilon) =F$ in $\Omega$. Suppose that for some subsequence $\{ u_{\varepsilon^\prime} \}$, $u_{\varepsilon^\prime} \to u_0$ weakly in $H^1(\Omega; \mathbb{R}^m)$ and $A(x/\varepsilon^\prime)\nabla u_{\varepsilon^\prime} \to G$ weakly in $L^2(\Omega; \mathbb{R}^{m\times d})$. Then $G=\widehat{A} \nabla u_0$ in $\Omega$. \end{theorem} The homogenization of Dirichlet problem (\ref{DP-1}) and the Neumann problem (\ref{NP-1}) follows readily from Theorem \ref{homo-theorem}. For $\varepsilon\ge 0$, $F\in H^{-1}(\Omega; \mathbb{R}^m)$ and $f\in H^{1/2}(\partial\Omega; \mathbb{R}^m)$, let $u_\varepsilon\in H^1(\Omega; \mathbb{R}^m)$ be the unique weak solution of (\ref{DP-1}). Then $u_\varepsilon \to u_0$ weakly in $H^1(\Omega; \mathbb{R}^m)$ and strongly in $L^2(\Omega; \mathbb{R}^m)$, as $\varepsilon \to 0$. Similarly, if $\int_\Omega u_\varepsilon =\int_\Omega u_0=0$, the solution of the Neumann problem (\ref{NP-1}) with $F\in H^{-1}(\Omega; \mathbb{R}^m)$ and $g \in H^{-1/2}(\partial\Omega; \mathbb{R}^m)$ converges weakly in $H^1(\Omega; \mathbb{R}^m)$ to the solution of the Neumann problem: $\mathcal{L}_0 (u_0) =F$ in $\Omega$ and $\partial u_0/{\partial \nu_0}=g$ on $\partial\Omega$, where $\partial u_0/{\partial \nu_0} = n\widehat{A}\nabla u_0$. \subsection{Quantitative estimates for the approximate correctors} To study the convergence rates of $u_\varepsilon$ to $u_0$, we need to introduce the approximate correctors $\chi_T = \big(\chi_{T, j}^\beta\big)$. Let $P_j^\beta (y)=y_j e^\beta$, where $1\le j\le d$, $1\le \beta\le m$, and $e^\beta=(0, \dots, 1, \dots, 0)$ with $1$ in the $\beta^{th}$ position. 
For each $T>0$, the function $u=\chi_{T, j}^\beta$ is defined as the weak solution of \begin{equation}\label{ac} -\text{\rm div} \big( A(y)\nabla u \big) + T^{-2} u =\text{\rm div} \big( A(y)\nabla P_j^\beta\big) \qquad \text{ in } \mathbb{R}^d, \end{equation} with the property $$ \sup_{x\in \mathbb{R}^d} \| u\|_{H^1(B(x, 1))} <\infty. $$ It is not hard to show that \begin{equation} \sup_{x\in \mathbb{R}^d} -\!\!\!\!\!\!\int_{B(x,T)} \big( |\nabla \chi_T|^2 +T^{-2} |\chi_T|^2\big)\le C, \end{equation} where $C$ depends only on $d$, $m$, and $\mu$ (the almost-periodicity of $A$ is not needed). For $\sigma \in (0, 1]$ and $T\ge 1$, define \begin{equation}\label{Theta} \Theta_\sigma (T)=\inf_{0<R\le T} \left\{ \rho (R) +\left(\frac{R}{T}\right)^\sigma \right\}. \end{equation} The following theorem was proved in \cite{Shen-2014}. \begin{theorem}\label{ac-theorem} Suppose that $A$ is uniformly almost-periodic in $\mathbb{R}^d$ and satisfies (\ref{ellipticity}). Let $\sigma \in (0,1)$ and $T\ge 1$. Then, for any $x,y\in \mathbb{R}^d$, \begin{equation}\label{ac-Holder} |\chi_T (x)-\chi_T (y)|\le C_\sigma\, T^{1-\sigma}\, |x-y|^\sigma, \end{equation} and \begin{equation}\label{ac-bound} T^{-1} \| \chi_T \|_{L^\infty(\mathbb{R}^d)} \le C_\sigma \, \Theta_\sigma (T), \end{equation} where $C_\sigma$ depends only on $\sigma$ and $A$. \end{theorem} The rest of this section is devoted to the study of error estimates of $\| u_\varepsilon -u_0\|_{L^2(\Omega)}$. The material is divided into two subsections. The first subsection treats Dirichlet boundary condition, while the second handles the Neumann boundary condition. \subsection{Convergence rates: Dirichlet boundary condition} We begin by using Theorem \ref{ac-theorem} to extend a result in \cite{Shen-2014}. \begin{lemma}\label{lemma-H-1} Let $\Omega$ be a bounded Lipschitz domain in $\mathbb{R}^d$. Let $u_\varepsilon\in H^1(\Omega; \mathbb{R}^m)$ be the weak solution of (\ref{DP-1}). 
Let \begin{equation} w_\varepsilon=u_\varepsilon -v_0 -\varepsilon \chi_T (x/\varepsilon)\nabla v_0 -v_\varepsilon, \end{equation} where $T=\varepsilon^{-1}$, $v_0\in W^{2,2}(\Omega;\mathbb{R}^m)$, $\mathcal{L}_0 (v_0)=F$ in $\Omega$, and $v_\varepsilon\in H^1(\Omega; \mathbb{R}^m)$ is the weak solution of $$ \left\{ \aligned \mathcal{L}_\varepsilon (v_\varepsilon)& =0 & \quad &\text{ in } \Omega,\\ v_\varepsilon& =u_\varepsilon -v_0 -\varepsilon \chi_T(x/\varepsilon) \nabla v_0&\quad &\text{ on } \partial\Omega. \endaligned \right. $$ Then, for any $\sigma \in (0,1)$, \begin{equation}\label{H-1-1} \| w_\varepsilon \|_{H^1(\Omega)} \le C_\sigma \big\{ \Theta_\sigma (T) +\langle |\psi-\nabla \chi_T|\rangle \big\} \left\{ \|\nabla^2 v_0\|_{L^2(\Omega)} +\| \nabla v_0\|_{L^2(\Omega)} \right\}, \end{equation} where $\psi =\big(\psi_{ij}^{\alpha\beta}\big)$ is defined by (\ref{homo-equation}) and $C_\sigma$ depends only on $\sigma$, $A$, and $\Omega$. \end{lemma} \begin{proof} The proof is similar to that of Theorem 7.3 in \cite{Shen-2014}, where $v_0$ is taken to be $u_0$. A direct computation shows that $$ \mathcal{L}_\varepsilon (w_\varepsilon) =-\text{\rm div} \big(B_T (x/\varepsilon)\nabla v_0\big) + \varepsilon\, \text{\rm div} \big\{ A(x/\varepsilon)\chi_T (x/\varepsilon) \nabla^2 v_0\big\}, $$ where $B_T (y) =\big( b_{T, ij}^{\alpha\beta} \big)$ is given by \begin{equation}\label{B-T} b_{T, ij}^{\alpha\beta} (y) =\widehat{a}_{ij}^{\alpha\beta} -a_{ij}^{\alpha\beta} (y) -a_{ik}^{\alpha\gamma} (y) \frac{\partial }{\partial y_k} \left\{ \chi_{T, j}^{\gamma\beta} (y)\right\}. \end{equation} Since $w_\varepsilon =0$ on $\partial\Omega$, it follows that \begin{equation}\label{H-0-1} c\int_\Omega |\nabla w_\varepsilon|^2 \le \left|\int_\Omega \text{\rm div} \big( B_T (x/\varepsilon)\nabla v_0\big) \cdot w_\varepsilon\, dx\right| +\int_\Omega |\varepsilon\chi_T (x/\varepsilon) |\, |\nabla^2 v_0|\, |\nabla w_\varepsilon|\, dx. 
\end{equation} Thus, it suffices to show that the right hand side of (\ref{H-0-1}) is bounded by $$ C_\sigma \big\{ \Theta_\sigma (T) +\langle |\psi-\nabla \chi_T|\rangle \big\} \left\{ \|\nabla^2 v_0\|_{L^2(\Omega) }+\|\nabla v_0\|_{L^2(\Omega)} \right\} \| w_\varepsilon\|_{H^1(\Omega)} $$ for any $\sigma\in (0,1)$. By (\ref{ac-bound}) and Cauchy inequality, the second integral in the right hand side of (\ref{H-0-1}) is bounded by $$ C_\sigma\, \Theta_\sigma (T) \, \|\nabla^2 v_0\|_{L^2(\Omega)} \|\nabla w_\varepsilon\|_{L^2(\Omega)}. $$ The estimate of the first integral is much more delicate and is done by the same argument as in the proof of Theorem 7.3 in \cite{Shen-2014}. The key idea is to solve the equation \begin{equation}\label{H} -\Delta H +T^{-2} H=B_T -\langle B_T \rangle \quad \text{ in } \mathbb{R}^d, \end{equation} and show that there exists a solution $H=H_T=\left( h_{ij}^{\alpha\beta} \right)\in W^{2,2}_{loc} (\mathbb{R}^d)$ satisfying \begin{equation}\label{H-1} \left\{ \aligned T^{-2} \| H\|_\infty & \le C \, \Theta_1 (T), \\ T^{-1} \|\nabla H\|_\infty & \le C_\sigma\, \Theta_\sigma (T), \endaligned \right. \end{equation} and \begin{equation}\label{H-2} \bigg\|\nabla \frac{\partial h_{ij}^{\alpha\beta}}{\partial x_i} \bigg\|_\infty \le C_\sigma \, \Theta_\sigma (T) \end{equation} for any $\sigma \in (0,1)$ (the index $i$ in (\ref{H-2}) is summed from $1$ to $d$). We omit the details. A similar approach will be used in the proof of Lemma \ref{NP-rate-lemma-1}. \end{proof} \begin{lemma}\label{lemma-H-2} Let $\Omega$ be a bounded $C^{1, \alpha}$ domain in $\mathbb{R}^d$ for some $\alpha>0$. Let $u_\varepsilon\in H^1(\Omega; \mathbb{R}^m)$ $(\varepsilon\ge 0)$ be the weak solution of (\ref{DP-1}). 
Then, for any $\sigma, \delta\in (0,1)$, \begin{equation}\label{H-2-1} \aligned \| u_\varepsilon -u_0\|_{L^2(\Omega)} & \le C \big\{ \Theta_\sigma (T) +\langle |\psi-\nabla \chi_T|\rangle \big\} \big\{ \|\nabla^2 v_0\|_{L^2(\Omega)} +\| \nabla v_0\|_{L^2(\Omega)} \big\}\\ &\qquad \qquad +C \, \| f- v_0 \|_{C^\delta (\partial\Omega)} + C \big[\Theta_\sigma (T)\big]^{1-\delta} \| \nabla v_0 \|_{C^\delta (\partial\Omega)}, \endaligned \end{equation} where $v_0 \in W^{2,2}(\Omega; \mathbb{R}^m) \cap C^{1, \delta} (\overline{\Omega}; \mathbb{R}^m)$ and $\mathcal{L}_0 (v_0)=F$ in $\Omega$. The constant $C$ depends only on $\delta$, $\sigma$, $A$, and $\Omega$. \end{lemma} \begin{proof} Let $v_\varepsilon$ and $w_\varepsilon$ be defined as in Lemma \ref{lemma-H-1}. Then \begin{equation}\label{H-2-2} \| u_\varepsilon -u_0\|_{L^2(\Omega)} \le \| w_\varepsilon\|_{L^2(\Omega)} +\| v_0 -u_0\|_{L^2(\Omega)} +\varepsilon \|\chi_T\|_\infty \|\nabla v_0\|_{L^2(\Omega)} +\| v_\varepsilon\|_{L^2(\Omega)}. \end{equation} In view of (\ref{H-1-1}) we only need to handle the last three terms in the right hand side of (\ref{H-2-2}). First, since $\mathcal{L}_0 (v_0 -u_0)=0$ in $\Omega$ and $\Omega$ is $C^{1,\alpha}$, we obtain \begin{equation}\label{H-2-3} \| v_0 -u_0\|_{L^2(\Omega)} \le C\, \| v_0 -f\|_{L^2(\partial\Omega)}. \end{equation} Next, we note that $$ \varepsilon \, \|\chi_T\|_\infty \|\nabla v_0\|_{L^2(\Omega)} =T^{-1} \|\chi_T\|_\infty \|\nabla v_0\|_{L^2(\Omega)} \le C_\sigma \Theta_\sigma (T)\| \nabla v_0\|_{L^2(\Omega)}. $$ Finally, recall that $\mathcal{L}_\varepsilon (v_\varepsilon)=0$ in $\Omega$ and $v_\varepsilon =f-v_0 -\varepsilon \chi_T(x/\varepsilon) \nabla v_0$ on $\partial\Omega$. 
Since $\Omega$ is $C^{1, \alpha}$, we may use the H\"older estimates (\ref{H-estimate}) to obtain \begin{equation} \aligned \| v_\varepsilon \|_{L^2(\Omega)} &\le C\, \| v_\varepsilon \|_{C^{\delta_1} (\partial\Omega)}\\ &\le C\, \| f-v_0\|_{C^{\delta_1}(\partial\Omega)} +C \|\varepsilon \chi_T (x/\varepsilon)\|_{C^{\delta_1} (\partial\Omega)} \| \nabla v_0 \|_{C^{\delta_1} (\partial\Omega)} \endaligned \end{equation} for any $\delta_1\in (0,1)$. Note that $\|\varepsilon \chi_T (x/\varepsilon)\|_\infty \le C_\sigma \Theta_\sigma (T)$ and $\|\varepsilon \chi_T (x/\varepsilon)\|_{C^{0, \delta}} \le C_\delta$. By interpolation this implies that $$ \|\varepsilon \chi_T(x/\varepsilon)\|_{C^{\delta_1}} \le C\, \big[\Theta_\sigma (T)\big]^{1- \delta_2} $$ for any $\sigma \in (0,1)$ and $0<\delta_1<\delta_2<1$. As a result, we see that $$ \| v_\varepsilon\|_{L^2(\Omega)} \le C\, \| f-v_0\|_{C^\delta(\partial\Omega)} +C\, \big[ \Theta_\sigma (T)\big]^{1-\delta} \| \nabla v_0\|_{C^\delta(\partial\Omega)} $$ for any $\sigma, \delta \in (0,1)$. The proof is now complete. \end{proof} \begin{remark} {\rm If we let $v_0=u_0$ in Lemma \ref{lemma-H-2}, then \begin{equation}\label{remark-H-1} \aligned \| u_\varepsilon -u_0\|_{L^2(\Omega)} &\le C\, \omega (\varepsilon)\, \| u_0 \|_{W^{2, p} (\Omega)}, \endaligned \end{equation} where $p>d$ and \begin{equation}\label{omega} \omega (\varepsilon)=\omega_\sigma (\varepsilon) =\big[\Theta_1 (\varepsilon^{-1})\big]^{\sigma} +\sup_{T\ge \varepsilon^{-1}} \langle |\psi -\nabla \chi_T |\rangle. \end{equation} Here we have used the observation $\Theta_\sigma (T) \le C_\sigma \big[\Theta_1(T)\big]^\sigma$ as well as Sobolev imbedding $\|\nabla u_0\|_{C^\delta(\Omega)} \le C\, \| u_0\|_{W^{2, p}(\Omega)}$ for $p>d$ and $\delta=1-(d/p)$. Note that $\omega (\varepsilon)$ is a nondecreasing continuous function on $(0,1]$ and $\omega (0+)=0$. } \end{remark} Estimate (\ref{remark-H-1}) is one of the main results proved in \cite{Shen-2014}. 
In the periodic setting it gives a near optimal convergence rate of $O(\varepsilon^\gamma)$ for any $\gamma\in (0,1)$. However, since $\Omega$ is only assumed to be $C^{1,\alpha}$, the $W^{2,p}$ norm in (\ref{remark-H-1}) is not convenient in some applications. Our next theorem is an attempt to resolve this issue (see \cite{KLS2} for analogous results in the periodic setting). For simplicity we assume that $F=0$. \begin{theorem}\label{rate-theorem-1} Suppose that $A(y)$ is uniformly almost-periodic in $\mathbb{R}^d$ and satisfies (\ref{ellipticity}). Let $\Omega$ be a bounded $C^{1, \alpha}$ domain in $\mathbb{R}^d$ for some $\alpha>0$. Let $u_\varepsilon $ $ (\varepsilon\ge 0)$ be the weak solution of Dirichlet problem: $\mathcal{L}_\varepsilon (u_\varepsilon)=0$ in $\Omega$ and $u_\varepsilon=f$ on $\partial\Omega$. Then, for any $\delta \in (0, \alpha)$, \begin{equation}\label{rate-2} \| u_\varepsilon -u_0\|_{L^2(\Omega)} \le C \, \big[ \omega (\varepsilon)\big]^{2/3} \, \| f\|_{C^{1,\delta}(\partial\Omega)}, \end{equation} where $\omega (\varepsilon)=\omega_\sigma (\varepsilon)$ is defined by (\ref{omega}) and $C$ depends only on $\delta$, $\sigma$, $A$, and $\Omega$. \end{theorem} \begin{proof} We begin by constructing a family of bounded $C^{1, \alpha}$ domains $\{ \Omega_s: s\in (0,1/2)\}$ such that (1) $\Omega\subset \Omega_s$, (2) for each $s\in (0,1/2)$, there is a $C^{1, \alpha}$ diffeomorphism $\Lambda_s: \partial\Omega\to \partial \Omega_s$ with uniform bounds, and (3) $|x-\Lambda_s (x)|\approx \text{dist}(x, \partial\Omega_s) \approx s$ for every $x\in \partial\Omega$. The constants $C$ in the estimates below do not depend on $s$. Next, let $f_s (x) =f (\Lambda^{-1}_s (x))$ for $x\in \partial\Omega_s$ and $v=v_s$ be the solution of Dirichlet problem: $\mathcal{L}_0 (v)=0$ in $\Omega_s$ and $v=f_s$ on $\partial\Omega_s$. 
We will show that \begin{equation}\label{H-4-1} \| u_\varepsilon- u_0\|_{L^2(\Omega)} \le C \left\{ s + s^{-\frac{1}{2}} \omega (\varepsilon)\right\} \| f\|_{C^{1,\delta} (\partial\Omega)}. \end{equation} Estimate (\ref{rate-2}) follows from (\ref{H-4-1}) by choosing $s\in (0,1/2)$ so that $s^{3/2} =c\, \omega (\varepsilon)$. To see (\ref{H-4-1}), we use Lemma \ref{lemma-H-2} to obtain \begin{equation}\label{H-4-2} \| u_\varepsilon -u_0\|_{L^2(\Omega)} \le C\, \omega (\varepsilon) \left\{ \|\nabla^2 v\|_{L^2(\Omega)} +\|\nabla v\|_{C^{\delta} (\Omega)} \right\} +C\, \| f-v\|_{C^\delta(\partial\Omega)} \end{equation} for any $\delta\in (0,\alpha)$. Since $\mathcal{L}_0 (v)=0$ in $\Omega_s$ and $\Omega_s$ is $C^{1, \alpha}$, \begin{equation}\label{H-4-3} \| f-v\|_{C^\delta (\partial\Omega)} \le C\, s \, \|\nabla v \|_{C^{ \delta} (\Omega_s)} \le C\, s\, \| f_s\|_{C^{1,\delta}(\partial\Omega_s)} \le C\, s\, \| f\|_{C^{1, \delta}(\partial\Omega)}. \end{equation} By the interior estimates for $\mathcal{L}_0$ and the fact that $\Omega \subset \{ x\in \Omega_s: \text{dist}(x, \partial\Omega_s)\ge c\, s\}$, it is not hard to see that $$ \aligned \int_\Omega |\nabla^2 v|^2\, dx &\le C\, \|\nabla v\|_{L^\infty(\Omega_s)}^2 \int_\Omega \frac{dx}{\big[ \text{dist} (x, \partial\Omega_s)\big]^2}\\ & \le C\, s^{-1} \|\nabla v\|_{L^\infty(\Omega_s)}^2 \le C\, s^{-1}\| f\|^2_{C^{1, \delta}(\partial\Omega)}. \endaligned $$ This, together with (\ref{H-4-2})-(\ref{H-4-3}) and the estimate $\| \nabla v\|_{C^{ \delta}(\Omega)} \le C\, \| f\|_{C^{1,\delta}(\partial\Omega)}$, yields (\ref{H-4-1}). \end{proof} \subsection{Convergence rates: Neumann boundary conditions} In this subsection we establish estimates on convergence rates for the Neumann problem (\ref{NP-1}) under an additional assumption that \begin{equation}\label{ac-Lip} \sup_{T\ge 1} \| \nabla \chi_T \|_{L^\infty(\mathbb{R}^d)} \le C_0 <\infty. 
\end{equation} This condition follows from the uniform interior Lipschitz estimates (see Remark \ref{ac-remark}). In particular, it holds under the assumptions on $A$ in Theorem \ref{main-theorem-Lip}. \begin{lemma}\label{NP-rate-lemma-1} Suppose that $A$ is uniformly almost-periodic and satisfies (\ref{ellipticity}). Also assume that the condition (\ref{ac-Lip}) holds. Let $\Omega$ be a bounded Lipschitz domain in $\mathbb{R}^d$. Let \begin{equation} \left\{ \aligned \mathcal{L}_\varepsilon (u_\varepsilon) & =F &\quad &\text{ in } \Omega,\\ \frac{\partial u_\varepsilon}{\partial \nu_\varepsilon} &=g&\quad &\text{ on } \partial\Omega, \endaligned \right. \quad \text{ and } \quad \left\{ \aligned \mathcal{L}_0 (v_0) & =F &\quad &\text{ in } \Omega,\\ \frac{\partial v_0}{\partial \nu_0} &=g_0&\quad &\text{ on } \partial\Omega, \endaligned \right. \end{equation} where $F\in L^2(\Omega; \mathbb{R}^m)$ and $g, g_0\in L^2(\partial\Omega;\mathbb{R}^m)$. Suppose $\int_\Omega u_\varepsilon =\int_\Omega v_0 $. Then, for any $\sigma\in (0,1)$, \begin{equation}\label{NP-rate-1} \aligned \| u_\varepsilon -v_0\|_{L^2(\Omega)} & \le C\, \| g-g_0 \|_{H^{-1/2}(\partial\Omega)}+ C\, \big\{\Theta_\sigma (T) +\langle|\psi-\nabla \chi_T |\rangle\big\}\|\nabla^2 v_0\|_{L^2(\Omega)} \\ &\qquad\qquad\qquad +C \left\{ \big[ \Theta_\sigma (T)\big]^{\frac12} +\langle|\psi-\nabla \chi_T |\rangle\right\} \|(\nabla v_0)^*\|_{L^2(\partial \Omega)}, \endaligned \end{equation} where $T=\varepsilon^{-1}$ and $(\nabla v_0)^*$ denotes the non-tangential maximal function of $\nabla v_0$. The constant $C$ in (\ref{NP-rate-1}) depends only on $\sigma$, $A$, and $\Omega$. \end{lemma} \begin{proof} As in the case of Dirichlet boundary condition, we consider $$ w_\varepsilon =u_\varepsilon -v_0 -\varepsilon \chi_T (x/\varepsilon) \nabla v_0, $$ where $T=\varepsilon^{-1}$. 
We will show that \begin{equation}\label{NP-rate-2} \aligned \|\nabla w_\varepsilon\|_{L^2(\Omega)} & \le C\, \| g-g_0 \|_{H^{-1/2}(\partial\Omega)}+ C \big\{\Theta_\sigma (T) +\langle|\psi-\nabla \chi_T |\rangle\big\}\|\nabla^2 v_0\|_{L^2(\Omega)} \\ &\qquad\qquad\qquad +C \left\{ \big[ \Theta_\sigma (T)\big]^{\frac12} +\langle|\psi-\nabla \chi_T |\rangle\right\} \|(\nabla v_0)^*\|_{L^2(\partial \Omega)}. \endaligned \end{equation} Since $|\int_\Omega w_\varepsilon|\le \varepsilon \|\chi_T\|_\infty \|\nabla v_0\|_{L^1(\Omega)} \le C\, \Theta_\sigma (T) \|\nabla v_0\|_{L^1(\Omega)}$, the estimate (\ref{NP-rate-1}) follows from (\ref{NP-rate-2}) by the Poincar\'e inequality. To prove (\ref{NP-rate-2}), we observe that \begin{equation}\label{NP-rate-3} \aligned &\int_\Omega \nabla w_\varepsilon \cdot A(x/\varepsilon)\nabla w_\varepsilon\, dx\\ &=\langle w_\varepsilon, g-g_0\rangle -\int_\Omega \nabla w_\varepsilon \cdot B_T(x/\varepsilon) \nabla v_0 -\int_\Omega \nabla w_\varepsilon\cdot A(x/\varepsilon)\, \varepsilon \chi_T(x/\varepsilon)\nabla^2 v_0, \endaligned \end{equation} where $B_T(y)=\widehat{A} -A(y)-A(y)\nabla \chi_T (y)$, and we have used the fact $$ \int_\Omega \nabla w_\varepsilon\cdot \big( A(x/\varepsilon)\nabla u_\varepsilon -\widehat{A}\nabla v_0 \big) =\langle w_\varepsilon, g-g_0\rangle. $$ Since $\int_{\partial\Omega} (g-g_0)=0$, $$ \aligned |\langle w_\varepsilon, g-g_0\rangle| &\le \| g-g_0\|_{H^{-1/2}(\partial\Omega)} \| w_\varepsilon -E\|_{H^{1/2}(\partial\Omega)}\\ &\le C\, \| g-g_0\|_{H^{-1/2}(\partial\Omega)} \|\nabla w_\varepsilon\|_{L^2(\Omega)}, \endaligned $$ where $E=-\!\!\!\!\!\!\int_\Omega w_\varepsilon$. Also, the last term on the right-hand side of (\ref{NP-rate-3}) is bounded by $$ C_\sigma \Theta_\sigma (T)\|\nabla w_\varepsilon\|_{L^2(\Omega)} \|\nabla^2 v_0\|_{L^2(\Omega)} . 
$$ Furthermore, since $ |\langle B_T\rangle|\le C \, \langle |\psi-\nabla \chi_T| \rangle, $ in view of (\ref{NP-rate-3}), it suffices to show that \begin{equation}\label{NP-rate-4} \aligned & \left| \int_\Omega \nabla w_\varepsilon \cdot \left\{ B_T (x/\varepsilon)-\langle B_T \rangle \right\} \nabla v_0\right|\\ &\qquad \le C\, \| \nabla w_\varepsilon\|_{L^2(\Omega)} \left\{\Theta_\sigma (T) +\langle|\psi-\nabla \chi_T |\rangle\right\}\|\nabla^2 v_0\|_{L^2(\Omega)} \\ &\qquad\qquad\qquad\qquad +C \, \|\nabla w_\varepsilon\|_{L^2(\Omega)} \big[ \Theta_\sigma (T)\big]^{\frac12} \|(\nabla v_0)^*\|_{L^2(\partial \Omega)}. \endaligned \end{equation} This will be done by using a line of argument similar to that used in the proof of Theorem 7.3 in \cite{Shen-2014} as well as in the proof of Lemma \ref{lemma-H-1}. Let $H=H_T\in W^{2,2}_{loc} (\mathbb{R}^d)$ be a solution of (\ref{H}) that satisfies (\ref{H-1})-(\ref{H-2}). In view of the first estimate in (\ref{H-1}), it suffices to prove (\ref{NP-rate-4}) with $B_T(x/\varepsilon)-\langle B_T\rangle$ replaced by $\Delta H(x/\varepsilon)$. Let $\varphi=\varphi_\delta \in C_0^\infty(\mathbb{R}^d)$ be a cut-off function such that $0\le \varphi\le 1$, $\varphi (x)=1$ if dist$(x, \partial\Omega)\ge 2c\delta$, $\varphi (x)=0$ if dist$(x, \partial\Omega)\le c\delta$, and $|\nabla\varphi|\le C\delta^{-1}$, where $\delta\in (0,1)$ is to be determined. 
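The cut-off $\varphi_\delta$ confines the boundary-layer error to a collar of width comparable to $\delta$. We record here, as a brief side computation, the standard bound behind this localization: since $\nabla \varphi$ is supported in $\{ x\in \Omega: \text{dist}(x, \partial\Omega)\le 2c\delta\}$ with $|\nabla \varphi|\le C\delta^{-1}$, and since $\int_{\{x\in \Omega:\, \text{dist}(x, \partial\Omega)\le 2c\delta\}} |\nabla v_0|^2\, dx \le C\, \delta\, \|(\nabla v_0)^*\|^2_{L^2(\partial\Omega)}$ by the definition of the non-tangential maximal function, we have $$ \|(\nabla v_0)(\nabla \varphi)\|^2_{L^2(\Omega)} \le C\, \delta^{-2} \int_{\substack{x\in \Omega\\ \text{dist} (x, \partial\Omega)\le 2c\delta}} |\nabla v_0|^2\, dx \le C\, \delta^{-1}\, \|(\nabla v_0)^*\|^2_{L^2(\partial\Omega)}, $$ which accounts for the factors $\delta^{-1/2}$ and $\delta^{1/2}$ appearing in the estimates below.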
A direct computation shows that for each $1\le j\le d$ and $1\le \beta\le m$, $$ \aligned \frac{\partial w_\varepsilon^\alpha}{\partial x_i} \cdot \Delta h_{ij}^{\alpha\beta} (x/\varepsilon) &=\frac{\partial }{\partial x_k} \left\{ \frac{\partial w_\varepsilon^\alpha}{\partial x_i} \cdot \varepsilon \frac{\partial h_{ij}^{\alpha\beta}}{\partial x_k} (x/\varepsilon) \right\} -\frac{\partial }{\partial x_i} \left\{ \frac{\partial w_\varepsilon^\alpha}{\partial x_k} \cdot \varepsilon \frac{\partial h_{ij}^{\alpha\beta}}{\partial x_k} (x/\varepsilon) \right\}\\ &\qquad\qquad\qquad +\frac{\partial w_\varepsilon^\alpha}{\partial x_k} \cdot \frac{\partial^2 h_{ij}^{\alpha\beta}}{\partial x_i \partial x_k} (x/\varepsilon), \endaligned $$ where the summation convention is used. It follows that \begin{equation}\label{NP-rate-5} \aligned &\left |\int_\Omega \nabla w_\varepsilon \cdot \Delta H (x/\varepsilon) (\nabla v_0 )\varphi\right| \le C\, \varepsilon \int_\Omega |\nabla w_\varepsilon|\, |\nabla H(x/\varepsilon)|\, |\nabla \big( (\nabla v_0)\varphi\big)|\\ & \qquad\qquad \qquad\qquad\qquad\qquad\qquad +C \int_\Omega \left|\frac{\partial w^\alpha_\varepsilon}{\partial x_k}\right| \, \left| \frac{\partial^2 h_{ij}^{\alpha\beta}}{\partial x_i\partial x_k} (x/\varepsilon)\right|\, \left|\frac{\partial v_0^\beta}{\partial x_j}\right|\varphi\\ &\le C\, \Theta_\sigma (T) \, \| \nabla w_\varepsilon\|_{L^2(\Omega)} \left\{ \|\nabla^2 v_0\|_{L^2(\Omega)} +\delta^{-1/2} \|(\nabla v_0)^*\|_{L^2(\partial \Omega)} \right\}, \endaligned \end{equation} where we have used estimates (\ref{H-1}) and (\ref{H-2}) as well as the observation $$ \|(\nabla v_0)(\nabla \varphi)\|_{L^2(\Omega)} \le C\delta^{-1/2} \|(\nabla v_0)^*\|_{L^2(\partial\Omega)}. $$ Finally, using the condition (\ref{ac-Lip}), we see that $$ \|\Delta H\|_\infty \le T^{-2} \| H\|_\infty +2 \| B_T\|_\infty\le C+ C\, \|\nabla \chi_T\|_\infty \le C. 
$$ Hence, $$ \aligned \left |\int_\Omega \nabla w_\varepsilon \cdot \Delta H (x/\varepsilon) (\nabla v_0 )(1-\varphi) \right| &\le C\, \|\Delta H\|_\infty \|\nabla w_\varepsilon\|_{L^2(\Omega)} \left\{ \int_{\substack{x\in \Omega\\ \text{dist} (x, \partial\Omega)\le 2c\delta }} |\nabla v_0|^2\right\}^{1/2}\\ &\le C\, \delta^{1/2} \|\nabla w_\varepsilon \|_{L^2(\Omega)} \|(\nabla v_0)^*\|_{L^2(\partial \Omega)}. \endaligned $$ Choosing $\delta=c \, \Theta_\sigma (T)$ and combining the resulting estimate with (\ref{NP-rate-5}) yields (\ref{NP-rate-4}), which completes the proof. \end{proof} \begin{remark} {\rm Let $u_\varepsilon$ $(\varepsilon\ge 0)$ be the weak solution of (\ref{NP-1}) in a bounded Lipschitz domain $\Omega$. It follows from Lemma \ref{NP-rate-lemma-1} that \begin{equation}\label{NP-rate-7} \aligned \| u_\varepsilon -u_0\|_{L^2(\Omega)} &\le C\, \big\{\Theta_\sigma (T) +\langle|\psi-\nabla \chi_T |\rangle\big\}\|\nabla^2 u_0\|_{L^2(\Omega)} \\ &\qquad\qquad +C \left\{ \big[ \Theta_\sigma (T)\big]^{\frac12} +\langle|\psi-\nabla \chi_T |\rangle\right\} \|(\nabla u_0)^*\|_{L^2(\partial \Omega)}. \endaligned \end{equation} This estimate is not sharp in the periodic setting. It only gives $\| u_\varepsilon -u_0\|_{L^2(\Omega)} = O( \varepsilon^\gamma)$ for any $0<\gamma<(1/2)$. } \end{remark} \begin{theorem}\label{NP-rate-theorem-2} Suppose that $A$ satisfies the same condition as in Lemma \ref{NP-rate-lemma-1}. Let $\Omega$ be a bounded $C^{1, \alpha}$ domain in $\mathbb{R}^d$ with connected boundary for some $\alpha>0$. Let \begin{equation} \left\{ \aligned \mathcal{L}_\varepsilon (u_\varepsilon) & =0 &\quad &\text{ in } \Omega,\\ \frac{\partial u_\varepsilon}{\partial \nu_\varepsilon} &=g&\quad &\text{ on } \partial\Omega, \endaligned \right. \quad \text{ and } \quad \left\{ \aligned \mathcal{L}_0 (u_0) & =0 &\quad &\text{ in } \Omega,\\ \frac{\partial u_0}{\partial \nu_0} &=g&\quad &\text{ on } \partial\Omega, \endaligned \right. \end{equation} where $g\in L^2(\partial \Omega; \mathbb{R}^m)$. 
Assume $\int_\Omega u_\varepsilon =\int_\Omega u_0$. Then \begin{equation}\label{NP-rate-00} \| u_\varepsilon -u_0 \|_{L^2(\Omega)} \le C\, \big \{ \Theta_\sigma (T)+ \langle |\psi-\nabla \chi_T|\rangle\big\}^{1/2} \| g\|_{L^2(\partial\Omega)}, \end{equation} where $T=\varepsilon^{-1}$, $\sigma \in (0,1)$, and $C$ depends only on $\sigma$, $A$, and $\Omega$. \end{theorem} \begin{proof} We begin by constructing a family of $C^{1, \alpha}$ domains $\{ \Omega_t: \, t\in (0, 1)\}$ with the property that (1) $\Omega \subset \Omega_t$, (2) there exist $C^{1, \alpha}$ diffeomorphisms $\Lambda_t: \partial \Omega\to \partial\Omega_t$ with uniform bounds such that dist$(x, \Lambda_t (x)) \approx \text{dist} (x, \partial\Omega_t )\approx t$ for any $x\in \partial\Omega$. Let $v=v_t$ be the weak solution to the Dirichlet problem: $\mathcal{L}_0 (v)=0$ in $\Omega_t$ and $v= f_t$ on $\partial\Omega_t$, where $f_t(x)=u_0 (\Lambda^{-1}_t (x) )$ for $x\in \partial\Omega_t$. Next we use the non-tangential maximal function estimates $$ \left\{ \aligned & \| (w)^*\|_{L^2(\partial\Omega_t)} \le C\, \| w\|_{L^2(\partial\Omega_t)}, \quad \|(\nabla w)^* \|_{L^2 (\partial\Omega_t)} \le C\, \|\nabla_{tan} w \|_{L^2(\partial \Omega_t)},\\ & \|(\nabla w)^* \|_{L^2(\partial\Omega_t)} \le C\, \|\frac{\partial w}{\partial \nu_0} \|_{L^2(\partial\Omega_t)} \endaligned \right. $$ for the $L^2$ Dirichlet and Neumann problems for the system $\mathcal{L}_0 (w)=0$ in $C^{1, \alpha}$ domains to control $\| u_0 -v\|_{L^2(\Omega)}$. This gives \begin{equation}\label{NP-rate-10} \aligned \| u_0 -v\|_{L^2(\Omega)} &\le C\, \| u_0 -v\|_{L^2(\partial\Omega)} \le C\, t\, \| (\nabla v)^*\|_{L^2(\partial\Omega_t)}\\ &\le C\, t\, \|\nabla_{tan} v \|_{L^2(\partial\Omega_t)} \le C\, t\, \|\nabla u_0\|_{L^2(\partial\Omega)}\\ &\le C\, t\, \| g\|_{L^2(\partial\Omega)}. 
\endaligned \end{equation} To handle $\| u_\varepsilon -v\|_{L^2(\Omega)}$, we use Lemma \ref{NP-rate-lemma-1} to obtain \begin{equation}\label{NP-rate-20} \aligned \| u_\varepsilon -v\|_{L^2(\Omega)} &\le C\, \| g- g_0\|_{H^{-1/2} (\partial\Omega)} +C \left\{ \Theta_\sigma (T) +\langle |\psi-\nabla \chi_T |\rangle \right\} \| \nabla^2 v\|_{L^2(\Omega)}\\ &\qquad\qquad\qquad + C \left\{ \big[\Theta_\sigma (T) \big]^{1/2} +\langle |\psi-\nabla \chi_T |\rangle \right\}\|(\nabla v)^*\|_{L^2(\partial\Omega)}, \endaligned \end{equation} where $g_0 =\frac{\partial v}{\partial\nu_0}$. Since $\mathcal{L}_0 (u_0 -v)=0$ in $\Omega$, we see that \begin{equation}\label{NP-rate-30} \aligned \| g-g_0\|_{H^{-1/2}(\partial\Omega)} &\le C\, \| u_0 -v\|_{H^1(\Omega)} \le C\, \| u_0 -v\|_{H^{1/2}(\partial\Omega)}\\ &\le C\, \|u_0 -v\|_{L^2(\partial\Omega)}^{1/2} \| u_0 -v\|_{H^1(\partial\Omega)}^{1/2}\\ &\le C\, t^{1/2} \|(\nabla v)^*\|_{L^2(\partial\Omega)} \le C\, t^{1/2}\, \| g\|_{L^2(\partial\Omega)}. \endaligned \end{equation} Also, since $\mathcal{L}_0 (v)=0$ in $\Omega_t$, by the square function estimate \cite{Dahlberg-Kenig-Pipher-Verchota}, $$ \left\{ \int_{\Omega_t} |\nabla^2 v (x) |^2 \text{\rm dist} (x, \partial\Omega_t)\, dx \right\}^{1/2} \le C\, \| v\|_{H^1(\partial\Omega_t)}, $$ we obtain \begin{equation}\label{NP-rate-40} \| \nabla^2 v\|_{L^2(\Omega)} \le C\, t^{-1/2}\, \| g\|_{L^2(\partial\Omega)}. \end{equation} In view of (\ref{NP-rate-10})-(\ref{NP-rate-40}) we have proved that $$ \aligned \| u_\varepsilon -u_0\|_{L^2(\Omega)} &\le C\, t^{-1/2} \left\{ \Theta_\sigma (T) +\langle |\psi-\nabla \chi_T|\rangle\right\} \| g\|_{L^2(\partial\Omega)}\\ &\qquad + C\, \left\{ \big[\Theta_\sigma (T) \big]^{1/2} +\langle |\psi-\nabla \chi_T|\rangle\right\} \| g\|_{L^2(\partial\Omega)} + C\, t^{1/2} \| g\|_{L^2(\partial\Omega)}. 
\endaligned $$ Finally, the estimate (\ref{NP-rate-00}) follows by choosing $t=c\big\{ \Theta_\sigma (T) +\langle |\psi-\nabla \chi_T|\rangle\big\} $. \end{proof} \section{A general scheme for Lipschitz estimates at large scale} \setcounter{equation}{0} In this section we present a general scheme for proving Lipschitz estimates at large scale in homogenization. As we pointed out in the Introduction, the scheme, which was motivated by the compactness method in \cite{AL-1987}, was recently formulated by the first author and C. Smart in \cite{Armstrong-Smart-2014}. The $L^2$ version of the scheme in this section is a slight variation of the one given in \cite{Armstrong-Smart-2014}. \begin{lemma}\label{main-lemma-1} Let $\{F_0, F_1, \dots, F_\ell\}$ and $\{ p_0, p_1, \dots, p_\ell\}$ be two sequences of nonnegative numbers. Suppose that for $0\le j\le \ell-1$, \begin{equation}\label{M-L-1} p_{j+1} \le p_j + C_0 \max \big\{ F_j, F_{j+1} \big\}, \end{equation} and for $1\le j\le \ell-1$, \begin{equation}\label{M-L-2} F_{j+1} \le \frac12 F_j + \eta_j K + \eta_j \max \big\{ p_0, \dots, p_{j-1} \big\} +\eta_j \max \big\{ F_0, \dots, F_{j-1}\big \}, \end{equation} where $K\ge 0$, $0\le \eta_1 \le \eta_2 \le \cdots \le \eta_{\ell-1}=\eta_\ell$ and $\eta_1 +\eta_2 +\cdots +\eta_{\ell} \le C_1$. Then for $1\le j\le \ell$, \begin{equation}\label{M-L-3} p_j \le C \,( K+ p_0 +F_0 +F_1), \end{equation} \begin{equation}\label{M-L-4} F_j \le C\, (2^{-j} + \eta_j ) (K+ p_0 +F_0 +F_1), \end{equation} where $C$ depends only on $C_0$ and $C_1$. \end{lemma} \begin{proof} The proof of this lemma is essentially contained in the proof of \cite[Lemma 5.1]{Armstrong-Smart-2014}. We provide a proof here for the sake of completeness. By considering $\widetilde{p}_j =p_j +K$, we may assume that $K=0$. Let \begin{equation} T_j =F_j -2 \eta_j \max \big\{ p_0, \dots, p_{j-1}\big\} -2 \eta_j \max \big\{ F_0, \cdots, F_{j-1} \big\}. 
\end{equation} Note that $$ \aligned T_{j+1} &= F_{j+1} -2 \eta_{j+1} \max\big\{ p_0, \dots, p_j\big\} -2 \eta_{j+1} \max\big \{ F_0, \dots, F_j\big\}\\ &\le \frac12 F_j + \eta_j \max \big\{ p_0, \dots, p_{j-1} \big\} +\eta_j \max \big\{ F_0, \dots, F_{j-1} \big\}\\ &\qquad\qquad\qquad -2 \eta_{j+1} \max\big\{ p_0, \dots, p_j\big\} -2 \eta_{j+1} \max\big \{ F_0, \dots, F_j\big\}\\ &\le \frac12 F_j +(\eta_j -2\eta_{j+1})\max \big\{p_0, \dots, p_{j-1}\big\} +(\eta_j -2\eta_{j+1})\max \big\{ F_0, \cdots, F_{j-1}\big\}, \endaligned $$ where we have used (\ref{M-L-2}) for the first inequality. Since $\eta_j -2\eta_{j+1} \le -\eta_j$, we obtain $T_{j+1} \le (1/2) T_j $ for $1\le j\le\ell-1$. It follows that $T_j \le (1/2)^{j-1} T_1\le (1/2)^{j-1} F_1$. Hence, \begin{equation}\label{M-L-5} F_j \le (1/2)^{j-1} F_1 +2 \eta_j \max \big\{ p_0, \dots, p_{j-1}\big\} +2 \eta_j \max \big\{ F_0, \cdots, F_{j-1} \big\}. \end{equation} Next we will show that for $0\le j\le \ell$, \begin{equation}\label{M-L-7} F_j \le C_2 \bigg\{ (2^{-j} +\eta_j )(F_0+F_1) +\eta_j \max \big\{ p_0, \dots, p_{j-1} \big\}\bigg\}, \end{equation} where $C_2 $ depends only on $C_1$. To prove (\ref{M-L-7}), we claim that for $1\le j\le \ell$, \begin{equation}\label{M-L-9} F_j \le 2 (1+2\eta_1) \cdots (1+2\eta_{j} ) \bigg\{ (2^{-j} +\eta_j)(F_0+ F_1) + \eta_j \max \big\{ p_0, \dots, p_{j-1} \big\} \bigg\}. \end{equation} Since $\eta_1 +\eta_2 +\cdots +\eta_{\ell} \le C_1$, one may use the inequality $\ln (1+x)\le x$ for $x\ge 0$ to see that \begin{equation}\label{M-L-10} (1+C\eta_1) \cdots (1+C\eta_{\ell} )\le e^{CC_1}. \end{equation} As a result, estimate (\ref{M-L-7}) follows from (\ref{M-L-9}). Estimate (\ref{M-L-9}) is proved by induction, using (\ref{M-L-5}). Indeed, suppose (\ref{M-L-9}) holds for $1\le j\le i$. 
Then $$ \max \big\{F_0, \dots, F_i\big\} \le 2(1+2\eta_1)\cdots (1+2\eta_{i} ) \bigg\{ (\frac12 +\eta_i)(F_0+F_1) +\eta_i\max \big\{ p_0, \dots, p_{i-1}\big\} \bigg\}, $$ where we have used the monotonicity of $\eta_j$. This, together with (\ref{M-L-5}), gives $$ \aligned F_{i+1} & \le (1/2)^i F_1 +2\eta_{i+1} \max \big\{ p_0, \dots, p_i\big\} +2\eta_{i+1} \max \big\{ F_0, \dots, F_i\big \}\\ &\le (1/2)^i F_1 +2\eta_{i+1} \max \big\{ p_0, \dots, p_i\big\}\\ &\qquad+2\eta_{i+1} \cdot 2(1+2\eta_1)\cdots (1+2\eta_{i} ) (\frac12 +\eta_i)(F_0+ F_1)\\ & \qquad +2\eta_{i+1} \cdot 2(1+2\eta_1)\cdots (1+2\eta_{i} ) \cdot \eta_i\max \big\{ p_0, \dots, p_{i-1}\big\}\\ &\le 2(1+2\eta_1)\cdots (1+2\eta_{i+1}) \bigg\{ (2^{-i-1} +\eta_{i+1})(F_0 +F_1) +\eta_{i+1} \max\big \{ p_0, \dots, p_i\big\} \bigg\} . \endaligned $$ Finally, we give the proof for estimate (\ref{M-L-3}), which, together with (\ref{M-L-7}), yields (\ref{M-L-4}). To this end we use (\ref{M-L-1}) and (\ref{M-L-7}) to obtain $$ \aligned p_{j+1} &\le p_j + C_0 \max \{ F_j, F_{j+1} \}\\ & \le p_j +C \, (2^{-j-1} +\eta_{j+1} )(F_0 +F_1) +C \, \eta_{j+1} \max \big\{ p_0, \dots, p_{j} \big\}\\ &\le (1+C\eta_{j+1} )\max \big\{ p_0,\dots, p_j\big\} +C\, (2^{-j-1} +\eta_{j+1}) (F_0 +F_1), \endaligned $$ where $C$ depends only on $C_0$ and $C_1$. By a simple induction argument it follows \begin{equation} p_j \le (1+C\eta_1)\cdots (1+C\eta_j) \bigg\{ p_0 +F_0 +F_1 +C\, \sum_{k=1}^j (2^{-k} +\eta_k) (F_0 +F_1) \bigg\}, \end{equation} where $C$ depends only on $C_0$ and $C_1$. In view of (\ref{M-L-10}) this gives the desired estimate (\ref{M-L-3}). The proof is now complete. \end{proof} \begin{theorem}\label{g-theorem-1} Let $B_r=B(0,r)$ and $u\in L^2(B_1; \mathbb{R}^m)$. Let $0<\varepsilon<1/4$. 
Suppose that for each $r\in (\varepsilon, 1/4)$, there exists $w=w_r\in L^2(B_{r}; \mathbb{R}^m)$ such that \begin{equation}\label{g-t-1-1} \left\{ -\!\!\!\!\!\!\int_{B_r} | u-w|^2\right\}^{1/2} \le \eta (\varepsilon/r) \left\{ \inf_{q\in \mathbb{R}^m} \left(-\!\!\!\!\!\!\int_{B_{2r}} |u-q |^2\right)^{1/2} + r\, K \right\}, \end{equation} and \begin{equation}\label{g-t-1-2} \aligned \frac{1}{\theta } & \inf_{\substack{ M\in \mathbb{R}^{m\times d}\\ q\in \mathbb{R}^m}} \left(-\!\!\!\!\!\!\int_{B_{\theta r}} |w(x)-M x-q|^2\, dx \right)^{1/2}\\ &\qquad\qquad \le \frac{1}{2} \inf_{\substack{ M\in \mathbb{R}^{m\times d}\\ q\in \mathbb{R}^m}} \left(-\!\!\!\!\!\!\int_{B_{r}} |w(x)- M x-q|^2 \, dx \right)^{1/2}, \endaligned \end{equation} where $K\ge 0$, $\theta\in (0,1/4)$, and $\eta (t)$ is a nondecreasing function on $(0,1]$. Assume that \begin{equation}\label{g-t-1-3} I=\int_0^1 \frac{\eta (t)}{t}\, dt<\infty. \end{equation} Then, for $\varepsilon<t< (1/4)$, \begin{equation}\label{g-t-1-4} \frac{1}{t} \inf_{q\in \mathbb{R}^m} \left\{ -\!\!\!\!\!\!\int_{B_t} |u-q|^2\right\}^{1/2} \le C \, \left\{ K+\left(-\!\!\!\!\!\!\int_{B_1} |u|^2\right)^{1/2} \right\}, \end{equation} and \begin{equation}\label{g-t-1-5} \aligned \frac{1}{t} \inf_{\substack{ M\in \mathbb{R}^{m\times d}\\ q\in \mathbb{R}^m}} & \left\{ -\!\!\!\!\!\!\int_{B_t} |u(x)-M x-q|^2 \, dx\right\}^{1/2}\\ &\qquad \le C \, \big\{ t^\alpha +\eta (\varepsilon /t)\big\} \left\{ K+\left( -\!\!\!\!\!\!\int_{B_1} |u|^2\right)^{1/2}\right\}, \endaligned \end{equation} where $\alpha=\alpha(\theta)>0$ and $C$ depends only on $d$, $m$, $\theta$, and $I$. 
\end{theorem} \begin{proof} It follows from the assumptions (\ref{g-t-1-2}) and (\ref{g-t-1-1}) that for $r\in (\varepsilon, 1/4)$, \begin{equation}\label{g-t-1-6} \aligned & \frac{1}{\theta r} \inf_{\substack{ M\in \mathbb{R}^{m\times d}\\ q\in \mathbb{R}^m}} \left( -\!\!\!\!\!\!\int_{B_{\theta r}} | u-Mx-q|^2\, dx\right)^{1/2} \\ & \le \frac{C}{r} \left\{-\!\!\!\!\!\!\int_{B_r} |u-w_r|^2 \right\}^{1/2} +\frac{1}{2r} \inf_{\substack{ M\in \mathbb{R}^{m\times d}\\ q\in \mathbb{R}^m}} \left(-\!\!\!\!\!\!\int_{B_{r}} | u-M x-q|^2\, dx\right)^{1/2} \\ &\le C\, \eta(\varepsilon/r) \left\{ \frac{1}{2r} \inf_{q\in \mathbb{R}^m} \left(-\!\!\!\!\!\!\int_{B_{2r}} |u-q|^2 \right)^{1/2} +K \right\}\\ &\qquad\qquad\qquad +\frac{1}{2r} \inf_{\substack{ M\in \mathbb{R}^{m\times d}\\ q\in \mathbb{R}^m}} \left( -\!\!\!\!\!\!\int_{B_{ r}} | u-Mx-q|^2\, dx\right)^{1/2}. \endaligned \end{equation} Let $r_j =\theta^{j+1}$ for $0\le j\le \ell$, where $\ell$ is chosen so that $\theta^{\ell+2}<\varepsilon\le \theta^{\ell+1}$ (we may assume that $\varepsilon<\theta$). Let \begin{equation}\label{g-t-1-7} \aligned F_j &=\frac{1}{r_j} \inf_{\substack{ M\in \mathbb{R}^{m\times d}\\ q\in \mathbb{R}^m}} \left( -\!\!\!\!\!\!\int_{B_{r_j}} |u-Mx-q|^2\, dx \right)^{1/2} \\ &=\frac{1}{r_j} \inf_{q\in \mathbb{R}^m} \left(-\!\!\!\!\!\!\int_{B_{r_j}} |u-M_j x -q|^2\, dx \right)^{1/2} \endaligned \end{equation} and $p_j =|M_j|$. Note that by (\ref{g-t-1-6}), \begin{equation}\label{g-t-8} \aligned F_{j+1} &\le \frac12 F_j + C \, \eta(\varepsilon \theta^{-j-1}) \left\{ \frac{1}{2r_j} \inf_{q\in \mathbb{R}^m} \left( -\!\!\!\!\!\!\int_{B_{2r_j}} |u-q|^2\right)^{1/2} +K \right\}\\ & \le \frac12 F_j + C\, \eta(\varepsilon \theta^{-j-1}) \big\{ K+F_{j-1} +p_{j-1} \big\}. 
\endaligned \end{equation} Also observe that $$ \aligned & |M_{j+1} -M_j| \le \frac{C}{r_{j+1}} \inf_{q\in \mathbb{R}^m} \left\{ -\!\!\!\!\!\!\int_{B_{r_{j+1}}} |(M_{j+1} -M_j) x-q |^2\, dx \right\}^{1/2}\\ &\le \frac{C}{r_{j+1}} \inf_{q\in \mathbb{R}^m} \left\{ -\!\!\!\!\!\!\int_{B_{r_{j+1}}} |u- M_{j+1}x-q|^2 \right\}^{1/2} + \frac{C}{r_{j+1}}\inf_{q\in \mathbb{R}^m} \left\{ -\!\!\!\!\!\!\int_{B_{r_{j+1}}} |u- M_{j}x-q|^2 \right\}^{1/2}\\ & \le C \, ( F_j +F_{j+1} ). \endaligned $$ This gives \begin{equation} p_{j+1} =|M_{j+1}|\le |M_j| + C\, (F_j + F_{j+1}) = p_j +C\, (F_j + F_{j+1}). \end{equation} We further note that $$ \sum_{j=0}^{\ell-1} \eta(\varepsilon\theta^{-j-1}) \le \frac{1}{\ln (1/\theta)} \int_0^1 \frac{\eta (t)}{t}\, dt<\infty. $$ Thus the sequences $\{ F_0, F_1, \dots, F_\ell \}$ and $\{ p_0, p_1, \dots, p_\ell\}$ satisfy the conditions in Lemma \ref{main-lemma-1}. Consequently, we obtain \begin{equation} \aligned F_j &\le C \, (2^{-j} +\eta(\varepsilon \theta^{-j-1}) ) (K+ p_0 +F_0 +F_1)\\ &\le C \, (2^{-j} +\eta(\varepsilon \theta^{-j-1}) ) \left\{ K + \left(-\!\!\!\!\!\!\int_{B_1} |u|^2\right)^{1/2} \right\}, \endaligned \end{equation} and \begin{equation} \aligned p_j & \le C\, ( K + p_0 +F_0 +F_1)\\ &\le C \left\{ K + \left(-\!\!\!\!\!\!\int_{B_1} |u|^2\right)^{1/2} \right\}. \endaligned \end{equation} Finally, given any $t\in (\varepsilon, \theta)$ (the case $t\ge \theta$ is trivial), we choose $j\ge 0$ so that $\theta^{j+2} < t\le \theta^{j+1} $. 
Then $$ \aligned \frac{1}{t} \inf_{\substack{ M\in \mathbb{R}^{m\times d}\\ q\in \mathbb{R}^m}} & \left\{ -\!\!\!\!\!\!\int_{B_t} |u-M x-q|^2 \right\}^{1/2} \le C\, F_j \\ &\le C \, \big\{ 2^{-j} +\eta(\varepsilon \theta^{-j-1}) \big\} \left\{ K + \left(-\!\!\!\!\!\!\int_{B_1} |u|^2\right)^{1/2} \right\}\\ &\le C \big\{ t^\alpha +\eta(\varepsilon/t)\big\} \left\{ K + \left(-\!\!\!\!\!\!\int_{B_1} |u|^2\right)^{1/2} \right\}, \endaligned $$ where $\alpha =\alpha (\theta)>0$, and $$ \aligned \frac{1}{t} \inf_{q\in \mathbb{R}^m} \left\{ -\!\!\!\!\!\!\int_{B_t} |u-q|^2\right\}^{1/2} &\le \frac{C}{r_j} \inf_{q\in \mathbb{R}^m} \left\{ -\!\!\!\!\!\!\int_{B_{r_j}} |u-q|^2\right\}^{1/2}\\ & \le C\,\big\{ F_j +p_j\big\}\\ & \le C \left\{ K + \left(-\!\!\!\!\!\!\int_{B_1} |u|^2\right)^{1/2} \right\}. \endaligned $$ This completes the proof. \end{proof} \begin{remark} {\rm The $L^2$ norm plays no role in the proof above. Theorem \ref{g-theorem-1} continues to hold if one replaces the $L^2$ average over $B_r$ by the $L^p$ average over $B_r$ for any $1\le p< \infty$ or by the $L^\infty$ norm over $B_r$. } \end{remark} In the next section we will use Theorem \ref{g-theorem-1} to establish uniform interior Lipschitz estimates for $\mathcal{L}_\varepsilon$. The function $w=w_r(x)$ will be a suitably chosen solution of $\mathcal{L}_0 (w)=0$ in $B_r$. Since the homogenized operator $\mathcal{L}_0$ has constant coefficients, its solutions possess $C^{1, \alpha}$ estimates that make (\ref{g-t-1-2}) possible. As we shall see in Sections 7 and 8, with our results on convergence rates in Section 2, this approach for the interior Lipschitz estimates may be adapted for boundary Lipschitz estimates with either Dirichlet or Neumann conditions. \section{Interior Lipschitz estimates} \setcounter{equation}{0} In this section we establish the uniform Lipschitz estimates for $\mathcal{L}_\varepsilon=-\text{\rm div } \big( A(x/\varepsilon)\nabla \big)$. 
Our approach is based on Theorem \ref{g-theorem-1}. The key ingredients are provided by the next three lemmas. \begin{lemma}\label{i-L-lemma-1} Let $B_r=B(0, r)$. Suppose that $u_\varepsilon\in H^1 (B_{2r}; \mathbb{R}^m)$ and $\mathcal{L}_\varepsilon (u_\varepsilon) =0$ in $B_{2r}$ for some $0<\varepsilon<r<1$. Then there exists $w\in H^1(B_r; \mathbb{R}^m)$ such that $\mathcal{L}_0 (w)=0$ in $B_r$ and \begin{equation}\label{i-L-1-00} \left\{ -\!\!\!\!\!\!\int_{B_r} |u_\varepsilon -w|^2 \right\}^{1/2} \le C_\delta\, \big[ \omega(\varepsilon/r)\big]^{\frac23 -\delta} \inf_{q\in \mathbb{R}^m} \left\{ -\!\!\!\!\!\!\int_{B_{2r}} |u_\varepsilon -q|^2\right\}^{1/2} \end{equation} for any $\delta\in (0,1/4)$, where $\omega (t)=\omega_\sigma (t)$ is defined by (\ref{omega}). The constant $C_\delta$ depends only on $\delta$, $\sigma$, and $A$. \end{lemma} \begin{proof} By a simple rescaling we may assume that $r=1$. By subtracting a constant we may also assume $ \int_{B_2} u_\varepsilon=0$. Let $f_t =u_\varepsilon * \varphi_t$, where $\varphi_t (x)=t^{-d} \varphi (x/t)$, $\varphi\in C_0^\infty(B_1)$ and $\int_{\mathbb{R}^d} \varphi =1$. Since $\mathcal{L}_\varepsilon (u_\varepsilon)=0$ in $B_2$, using the interior H\"older estimate for $\mathcal{L}_\varepsilon$, $$ \| u_\varepsilon\|_{C^\beta (B_{7/4})}\le C_\beta\, \| u_\varepsilon \|_{L^2(B_2)} \qquad \text{ for any } 0<\beta<1 $$ (see Theorem 3.4 in \cite{Shen-2014}), it is easy to see that \begin{equation}\label{i-L-1-10} \| f_t- u_\varepsilon \|_{C^{ \alpha} (B_{3/2})}\le C_{\alpha, \beta} \, t^{\beta-\alpha} \,\| u_\varepsilon\|_{L^2(B_2)} \end{equation} and \begin{equation}\label{i-L-1-11} \| f_t \|_{C^{1, \alpha} (B_{3/2})} \le C_{\alpha, \beta} \, t^{\beta-\alpha-1} \, \| u_\varepsilon \|_{L^2(B_2)}, \end{equation} where $t\in (0,1/4)$ and $0<\alpha<\beta<1$. 
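The smoothing estimates (\ref{i-L-1-10}) and (\ref{i-L-1-11}) are standard properties of mollification; we sketch the computation behind the first. For $x\in B_{3/2}$ and $t\in (0,1/4)$, $$ | f_t (x) -u_\varepsilon (x)| \le \int_{B_1} \varphi (z)\, | u_\varepsilon (x-tz) -u_\varepsilon (x)|\, dz \le C\, t^\beta \, \| u_\varepsilon\|_{C^\beta (B_{7/4})}, $$ while the $C^\beta$ semi-norm of $f_t -u_\varepsilon$ on $B_{3/2}$ is bounded by $C\, \| u_\varepsilon\|_{C^\beta(B_{7/4})}$; interpolating between these two bounds yields (\ref{i-L-1-10}) with the factor $t^{\beta-\alpha}$. The estimate (\ref{i-L-1-11}) follows in the same manner, using that $\nabla \varphi_t$ has mean value zero and $\|\nabla \varphi_t\|_{L^1(\mathbb{R}^d)}\le C\, t^{-1}$.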
We now solve the Dirichlet problems \begin{equation} \left\{ \aligned \mathcal{L}_\varepsilon (v_\varepsilon) & =0 & \quad & \text{ in } B_{5/4},\\ v_\varepsilon &=f_t &\quad &\text{ on } \partial B_{5/4}, \endaligned \right. \quad \text{ and } \quad \left\{ \aligned \mathcal{L}_0 (w) & =0 & \quad & \text{ in } B_{5/4},\\ w &=f_t &\quad &\text{ on } \partial B_{5/4}, \endaligned \right. \end{equation} where $t\in (0,1/4)$ is to be determined. Since $\mathcal{L}_\varepsilon (u_\varepsilon -v_\varepsilon)=0$ in $B_{5/4}$, it follows from (\ref{H-estimate}) and (\ref{i-L-1-10}) that \begin{equation}\label{i-L-1-1} \| u_\varepsilon -v_\varepsilon\|_{L^\infty(B_{5/4})} \le C_\alpha\, \| u_\varepsilon -f_t\|_{C^\alpha(\partial B_{5/4})} \le C_{\alpha, \beta} \, t^{\beta-\alpha} \, \| u_\varepsilon\|_{L^2(B_2)}. \end{equation} Also, observe that by Theorem \ref{rate-theorem-1}, \begin{equation}\label{i-L-1-2} \aligned \| v_\varepsilon-w\|_{L^2(B_{5/4})} & \le C_\alpha \, \big[\omega (\varepsilon )\big]^{2/3} \, \| f_t \|_{C^{1, \alpha}(\partial B_{5/4})}\\ & \le C_{\alpha, \beta}\, t^{\beta-\alpha-1} \, \big[\omega(\varepsilon)\big]^{2/3}\, \| u_\varepsilon\|_{L^2(B_2)}. \endaligned \end{equation} In view of (\ref{i-L-1-1}) and (\ref{i-L-1-2}) we obtain \begin{equation} \aligned \| u_\varepsilon -w\|_{L^2(B_1)} &\le \| u_\varepsilon -v_\varepsilon\|_{L^2(B_1)} +\| v_\varepsilon-w\|_{L^2(B_1)}\\ &\le C_{\alpha, \beta}\, t^{\beta-\alpha} \big\{ 1+ t^{-1} \big[\omega (\varepsilon)\big]^{2/3} \big\} \| u_\varepsilon \|_{L^2(B_2)}. \endaligned \end{equation} We now choose $t= c\, \big[\omega (\varepsilon)\big]^{2/3} \in (0, 1/4)$, $\alpha=(3/4)\delta$, and $\beta=1-\alpha$, where $\delta \in (0,1/4)$. 
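We note, for the reader's convenience, how the exponent bookkeeping works out with these choices: since $t$ is comparable to $\big[\omega(\varepsilon)\big]^{2/3}$, we have $t^{-1}\big[\omega(\varepsilon)\big]^{2/3}\le C$, so the two terms in the bracket are balanced, and since $\beta-\alpha =1-2\alpha =1-\tfrac32 \delta$, $$ t^{\beta-\alpha} \le C\, \big[ \omega (\varepsilon)\big]^{\frac23 \left(1-\frac32 \delta\right)} = C\, \big[ \omega(\varepsilon)\big]^{\frac23 -\delta}, $$ which is the rate asserted in (\ref{i-L-1-00}).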
This gives \begin{equation} \aligned \| u_\varepsilon -w\|_{L^2(B_1)} &\le C_{\delta} \, \big[ \omega(\varepsilon) \big]^{\frac23-\delta} \| u_\varepsilon\|_{L^2(B_2)}\\ &\le C_{\delta} \, \big[ \omega(\varepsilon) \big]^{\frac23-\delta} \inf_{q\in \mathbb{R}^m} \left\{ -\!\!\!\!\!\!\int_{B_2} |u_\varepsilon -q|^2\right\}^{1/2}, \endaligned \end{equation} where we have used the fact $\int_{B_2} u_\varepsilon =0$ for the last inequality. \end{proof} \begin{lemma}\label{i-L-lemma-0} Suppose $w\in H^1(B_r; \mathbb{R}^m)$ and $\mathcal{L}_0 (w)=0$ in $B_r$, where $B_r=B(0,r)$. Then, for any $\theta\in (0,1/2)$, \begin{equation}\label{g-t-interior-1} \frac{1}{\theta } \inf_{\substack{ M\in \mathbb{R}^{m\times d} \\ q\in \mathbb{R}^m}} \left\{ -\!\!\!\!\!\!\int_{B_{\theta r}} |w-Mx -q|^2\right\}^{1/2} \le C\,\theta \inf_{\substack{ M\in \mathbb{R}^{m\times d} \\ q\in \mathbb{R}^m}} \left\{ -\!\!\!\!\!\!\int_{B_{ r}} |w-Mx -q|^2\right\}^{1/2}, \end{equation} where $C$ depends only on $d$, $m$, and $\mu$. As a result, by choosing $\theta$ so small that $C\theta<(1/2)$, solutions of $\mathcal{L}_0 (w)=0$ in $B_r$ satisfy the condition (\ref{g-t-1-2}) in Theorem \ref{g-theorem-1}. \end{lemma} \begin{proof} Estimate (\ref{g-t-interior-1}) follows readily from the interior $C^2$ estimates for $\mathcal{L}_0$. Indeed, by rescaling, we may assume that $r=1$. In this case the left-hand side of (\ref{g-t-interior-1}) is bounded by $C\, \theta \|\nabla^2 w\|_{L^\infty(B_\theta)}$. Since $\mathcal{L}_0 (w-Mx -q)=0$ in $B_1$ for any $M\in \mathbb{R}^{m\times d}$ and $q\in \mathbb{R}^m$, by the $C^2$ estimates for $\mathcal{L}_0$, \begin{equation} \theta \| \nabla^2 w\|_{L^\infty(B_\theta)} \le \theta \|\nabla^2 w\|_{L^\infty(B_{1/2})} \le C \theta \inf_{\substack{ M\in \mathbb{R}^{m\times d} \\ q\in \mathbb{R}^m}} \left\{ -\!\!\!\!\!\!\int_{B_1} |w-Mx -q|^2\right\}^{1/2}, \end{equation} where $C$ depends only on $d$, $m$, and $\mu$. The proof is complete. 
\end{proof} \begin{lemma}\label{Dini-lemma} Suppose that there exist $C_0>0$ and $N>(5/2)$ such that $$ \rho(R)\le C_0 \, \big[ \log R \big]^{-N} \qquad \text{ for any } R\ge 2, $$ where $\rho(R)$ is defined by (\ref{rho}). Then there exist $\sigma \in (0,1)$ and $\delta\in (0,1/4)$ such that $$ \int_0^1 \big[ \omega_\sigma (t) \big]^{\frac{2}{3} -\delta} \, \frac{dt}{t}<\infty, $$ where $\omega_\sigma (t)$ is defined by (\ref{omega}). \end{lemma} \begin{proof} It follows from the definition of $\Theta_\sigma (T)$ that $$ \Theta_\sigma (T) \le \rho(\sqrt{T}) +\left(\frac{1}{\sqrt{T}}\right)^\sigma \le C_\sigma \, \big[ \log T\big]^{-N} $$ for $T\ge 2$. Also, it was proved in \cite[Theorem 6.6]{Shen-2014} that $$ \langle |\psi-\nabla \chi_T|\rangle \le C_\sigma \int_{T/2}^\infty \frac{\Theta_\sigma (r)}{r}\, dr $$ for any $\sigma \in (0,1)$. As a result, if $\sigma=1-N^{-1}$, we obtain $$ \aligned \omega_\sigma (t) & =\big[ \Theta_1 (t^{-1}) \big]^{\sigma} +\sup_{T\ge t^{-1}} \langle |\psi -\nabla \chi_T |\rangle\\ & \le \big[ \Theta_1 (t^{-1}) \big]^{\sigma} +C_\sigma \int_{(2t)^{-1}}^\infty \frac{\Theta_\sigma (r)}{r}\, dr\\ &\le C_\sigma \big[ \log (1/t) \big]^{1-N} \endaligned $$ for $t\in (0, 1/2)$. Finally, since $N>(5/2)$, we may choose $\delta\in (0,1/4)$ so small that $((2/3)-\delta)(1-N)<-1$. This leads to $$ \int_0^1 \big[ \omega_\sigma (t) \big]^{\frac{2}{3} -\delta} \, \frac{dt}{t} \le C + C \int_0^{1/2} \big[ \log (1/t) \big]^{(1-N)(\frac23-\delta)} \, \frac{dt}{t} <\infty, $$ which completes the proof. \end{proof} We are now ready to prove the interior Lipschitz estimates for $\mathcal{L}_\varepsilon$. We first treat the case $\mathcal{L}_\varepsilon (u_\varepsilon)=0$. \begin{lemma}\label{i-L-lemma-3} Suppose that $A(y)$ satisfies the same conditions as in Theorem \ref{main-theorem-Lip}. 
Let $u_\varepsilon\in H^1(2B; \mathbb{R}^m)$ be a weak solution of $\mathcal{L}_\varepsilon (u_\varepsilon)=0$ in $2B$, where $B=B(x_0, r)$ for some $x_0\in \mathbb{R}^d$ and $r>0$. Then $|\nabla u_\varepsilon| \in L^\infty(B)$ and \begin{equation}\label{interior-Lip} \| \nabla u_\varepsilon\|_{L^\infty(B)} \le \frac{C}{r} \left\{ -\!\!\!\!\!\!\int_{2B} | u_\varepsilon|^2\right\}^{1/2}, \end{equation} where $C$ depends only on $A$. \end{lemma} \begin{proof} By translation and dilation it suffices to prove that \begin{equation}\label{i-L-1} |\nabla u_\varepsilon (0)|\le C \left\{ -\!\!\!\!\!\!\int_{B(0,1)} |u_\varepsilon|^2\right\}^{1/2}, \end{equation} if $\mathcal{L}_\varepsilon (u_\varepsilon)=0$ in $B(0,1)$. Note that we only need to treat the case $0<\varepsilon<(1/4)$, since the case $\varepsilon\ge (1/4)$ follows from the standard local regularity theory for second-order elliptic systems with H\"older continuous coefficients. Let $v_\varepsilon (x)=\varepsilon^{-1} u_\varepsilon (\varepsilon x)$. Then $\mathcal{L}_1 (v_\varepsilon)=0$ in $B(0, 2\varepsilon^{-1})$. By the standard regularity theory for $\mathcal{L}_1$, $$ \aligned |\nabla u_\varepsilon (0)| &=|\nabla v_\varepsilon (0)| \le C \inf_{q\in \mathbb{R}^m} \left\{ -\!\!\!\!\!\!\int_{B(0,2)} |v_\varepsilon -q|^2\right\}^{1/2}\\ &= C\inf_{q\in \mathbb{R}^m}\frac{1}{\varepsilon} \left\{ -\!\!\!\!\!\!\int_{B(0, 2\varepsilon)} |u_\varepsilon -q|^2 \right\}^{1/2}. \endaligned $$ To complete the proof we use Theorem \ref{g-theorem-1}, with $K=0$, to obtain \begin{equation}\label{i-L-3} \frac{1}{\varepsilon}\inf_{q\in \mathbb{R}^m} \left\{ -\!\!\!\!\!\!\int_{B(0, 2\varepsilon)} |u_\varepsilon -q|^2 \right\}^{1/2} \le C \left\{ -\!\!\!\!\!\!\int_{B(0,1)} |u_\varepsilon|^2\right\}^{1/2}. \end{equation} Note that the condition (\ref{g-t-1-1}) is given by Lemma \ref{i-L-lemma-1}, while the condition (\ref{g-t-1-2}) is given by Lemma \ref{i-L-lemma-0}. 
Also, the Dini condition (\ref{g-t-1-3}) is satisfied in view of Lemma \ref{Dini-lemma}. As a result, the estimate (\ref{i-L-3}) follows from (\ref{g-t-1-4}) with $t=2\varepsilon$. \end{proof} \begin{remark} \label{Liouville} {\rm In the argument for Lemma \ref{i-L-lemma-3}, we used only the first conclusion~\eqref{g-t-1-4} of Theorem~\ref{g-theorem-1}. The second conclusion (\ref{g-t-1-5}) is also useful, and yields the following Liouville result: if $u\in H^1(\mathbb{R}^d;\mathbb{R}^m)$ is any solution of $\mathcal L_1(u) = 0$ in $\mathbb{R}^d$ satisfying the linear growth condition \begin{equation*} \limsup_{r\to\infty} \frac 1r \left\{ -\!\!\!\!\!\!\int_{B(0,r)} |u|^2 \right\}^{1/2} <\infty, \end{equation*} then there exists $M\in \mathbb{R}^{m\times d}$ such that \begin{equation*} \limsup_{r\to \infty} \frac 1r \left\{ -\!\!\!\!\!\!\int_{B(0,r)} |u(x) - Mx |^2\,dx \right\}^{1/2} = 0. \end{equation*} In other words, if an entire solution grows at most linearly, then it is close to an affine function. To prove this, we follow the argument of Lemma \ref{i-L-lemma-3} with $\varepsilon>0$ fixed and $u_\varepsilon(x):= \varepsilon u(x/\varepsilon)$. We notice that in the application of Theorem~\ref{g-theorem-1} invoked to obtain~\eqref{i-L-3}, we also obtain from the second conclusion of the theorem that, for every $\varepsilon < t < 1/4$, \begin{equation*} \frac{1}{t}\inf_{\substack{ M\in \mathbb{R}^{m\times d}\\ q\in \mathbb{R}^m}} \left\{ -\!\!\!\!\!\!\int_{B(0, 2 t)} |u_\varepsilon(x)-Mx -q|^2\,dx \right\}^{1/2} \le C \left\{ t^\alpha + \eta(\varepsilon/t) \right\} \left\{ -\!\!\!\!\!\!\int_{B(0,1)} |u_\varepsilon|^2\right\}^{1/2}.
\end{equation*} By undoing the scaling and writing this in terms of $u$, we obtain, for every $1 < r < 1/(4\varepsilon)$, \begin{equation*} \frac{1}{r}\inf_{\substack{ M\in \mathbb{R}^{m\times d}\\ q\in \mathbb{R}^m}} \left\{ -\!\!\!\!\!\!\int_{B(0, 2 r)} |u(x)-Mx -q|^2\,dx \right\}^{1/2} \le C \left\{ (\varepsilon r)^\alpha + \eta(1/r) \right\} \varepsilon \left\{ -\!\!\!\!\!\!\int_{B(0,1/\varepsilon)} |u|^2\right\}^{1/2}. \end{equation*} Sending $\varepsilon\to 0$ and using the growth hypothesis, we get \begin{equation*} \frac{1}{r}\inf_{\substack{ M\in \mathbb{R}^{m\times d}\\ q\in \mathbb{R}^m}} \left\{ -\!\!\!\!\!\!\int_{B(0, 2 r)} |u(x)-Mx -q|^2\,dx \right\}^{1/2} \le C \eta(1/r). \end{equation*} We now obtain the Liouville property by applying the previous inequality on the dyadic scales $r_k:=2^k$, $k\in\mathbb{N}$, and using the Dini condition~\eqref{g-t-1-3} to verify that the sequence $\{M_k\}_{k\in\mathbb{N}}\subset \mathbb{R}^{m\times d}$ of corresponding affine approximations is a Cauchy sequence. } \end{remark} As we were finishing the writing of this paper, we became aware of some very recent results of Gloria, Neukamm, and Otto~\cite{GNO-2014}, who obtain a more general version of the Liouville result presented above in Remark~\ref{Liouville}. Their scheme is similar to the one from~\cite{Armstrong-Smart-2014}, which we use here. Both are based on a Campanato iteration to obtain an improvement of flatness for solutions, although ``flatness'' in~\cite{Armstrong-Smart-2014}, and in this paper, is defined with respect to \emph{affine functions}, while~\cite{GNO-2014}, following~\cite{AL-1987, AL-liouv}, defines it with respect to \emph{correctors}. The latter notion makes it possible to formulate some more precise results, although it does not seem to help in estimating the gradient of the correctors themselves (a task which is more or less equivalent to obtaining uniform Lipschitz estimates).
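For the reader's convenience, we sketch the Cauchy-sequence step at the end of Remark~\ref{Liouville}, without tracking constants. Let $M_k\in \mathbb{R}^{m\times d}$ achieve the infimum in the previous inequality at scale $r_k=2^k$. Since any matrix $M$ satisfies $|M|\le C\, r^{-1} \inf_{q\in \mathbb{R}^m} \big\{ -\!\!\!\!\!\!\int_{B(0,r)} |Mx-q|^2\,dx \big\}^{1/2}$, the triangle inequality, applied with the previous inequality at the consecutive scales $r_k$ and $r_{k+1}$, yields
$$
|M_{k+1}-M_k| \le \frac{C}{r_k} \inf_{q\in \mathbb{R}^m} \left\{ -\!\!\!\!\!\!\int_{B(0, 2r_k)} |(M_{k+1}-M_k)x-q|^2\, dx \right\}^{1/2} \le C \big\{ \eta(1/r_k) +\eta (1/r_{k+1}) \big\}.
$$
Hence $\sum_k |M_{k+1}-M_k|<\infty$ as soon as $\eta$ is summable over dyadic scales, which is what the Dini condition~\eqref{g-t-1-3} provides; consequently $\{M_k\}$ converges and the Liouville property follows.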
\smallskip \begin{theorem}\label{i-L-theorem} Suppose that $A(y)$ satisfies the same conditions as in Theorem \ref{main-theorem-Lip}. Let $u_\varepsilon$ be a weak solution of $\mathcal{L}_\varepsilon (u_\varepsilon)=F$ in $2B$, where $B=B(x_0, r)$. Then \begin{equation}\label{estimate-i-L} \|\nabla u_\varepsilon\|_{L^\infty(B)} \le \frac{C}{r} \left\{ -\!\!\!\!\!\!\int_{2B} |u_\varepsilon|^2 \right\}^{1/2} +C \, r^\beta\sup_{\substack{y\in 2B\\ 0<t<r}} t^{1-\beta} -\!\!\!\!\!\!\int_{B(y, t)\cap 2B} |F| \end{equation} for any $\beta\in (0,1)$, where $C$ depends only on $\beta$ and $A$. \end{theorem} \begin{proof} By translation and dilation we may assume that $x_0=0$ and $r=1$. We may also assume $d\ge 3$, as the 2-d case may be reduced to the 3-d case by adding a dummy variable. Consider $$ v_\varepsilon (x)=\int_{2B} \Gamma_\varepsilon (x,y) F(y)\, dy, $$ where $\Gamma_\varepsilon (x,y)$ denotes the matrix of fundamental solutions for $\mathcal{L}_\varepsilon$ in $\mathbb{R}^d$, with pole at $y$. Note that $\mathcal{L}_\varepsilon (v_\varepsilon)=F$ in $2B$. By the interior H\"older estimates in \cite{Shen-2014}, we have \begin{equation}\label{size-estimate} |\Gamma_\varepsilon (x,y)|\le C\, |x-y|^{2-d} \qquad \text{ for any } x, y\in \mathbb{R}^d, \end{equation} where $C$ depends only on $A$. Since $\mathcal{L}_\varepsilon \big (\Gamma_\varepsilon (\cdot, y) \big)=0$ in $\mathbb{R}^d\setminus \{ y\}$, we may use (\ref{size-estimate}) and (\ref{interior-Lip}) to obtain \begin{equation}\label{gradient-estimate} |\nabla_x \Gamma_\varepsilon (x,y)|\le C\, |x-y|^{1-d} \qquad \text{ for any } x,y\in \mathbb{R}^d. \end{equation} It is not hard to see that this gives \begin{equation}\label{estimate-v} \|\nabla v_\varepsilon\|_{L^\infty(2B)} +\| v_\varepsilon\|_{L^\infty(2B)} \le C_\beta\, \sup_{\substack{y\in 2B\\ 0<t<1}} t^{1-\beta} -\!\!\!\!\!\!\int_{ B(y,t)\cap 2B} |F|. \end{equation} for any $\beta\in (0,1)$. 
Finally, since $\mathcal{L}_\varepsilon (u_\varepsilon -v_\varepsilon)=0$ in $2B$, we may invoke Lemma \ref{i-L-lemma-3} to obtain $$ \aligned \|\nabla (u_\varepsilon -v_\varepsilon)\|_{L^\infty(B)} &\le C \left\{ -\!\!\!\!\!\!\int_{2B} |u_\varepsilon -v_\varepsilon|^2\right\}^{1/2}\\ &\le C \left\{ -\!\!\!\!\!\!\int_{2B}|u_\varepsilon|^2\right\}^{1/2} +C_\beta \sup_{\substack{y\in 2B\\ 0<t<1}} t^{1-\beta} -\!\!\!\!\!\!\int_{B(y,t)\cap 2B} |F|, \endaligned $$ where we have used (\ref{estimate-v}) for the last inequality. This, together with (\ref{estimate-v}), yields the estimate (\ref{estimate-i-L}). \end{proof} \begin{remark}\label{ac-remark} {\rm Fix $1\le j\le d$, $1\le \beta\le m$, and $y\in \mathbb{R}^d$. Let $$ u(x)= \chi_{T, j}^\beta (x) -\chi_{T, j}^\beta (y) + (x_j-y_j) e^\beta, $$ where $T\ge 1$. Then $\mathcal{L}_1 (u) = -T^{-2} \chi_{T, j}^\beta$ in $\mathbb{R}^d$. It follows from Theorem \ref{i-L-theorem} that $$ |\nabla u (y)|\le \frac{C}{T} \left(-\!\!\!\!\!\!\int_{B(y, T)} |u|^2 \right)^{1/2} + C\, T^{-1} \| \chi_T\|_\infty \le C. $$ As a result, if $A$ satisfies the same conditions in Theorem \ref{main-theorem-Lip}, then \begin{equation}\label{ac-Lip-bound} \|\nabla \chi_T \|_{L^\infty(\mathbb{R}^d)} \le C, \end{equation} where $C$ depends only on $A$. } \end{remark} \section{Interior $W^{1,p}$ estimates} \setcounter{equation}{0} The goal of this section is to prove the following theorem. \begin{theorem}\label{i-W-1-p-theorem} Suppose that $A(y)$ is uniformly almost-periodic in $\mathbb{R}^d$ and satisfies (\ref{ellipticity}). Also assume $A(y)$ satisfies the condition (\ref{decay-condition}) for some $N>5/2$. Let $u_\varepsilon \in H^1(2B;\mathbb{R}^m)$ be a weak solution of $\mathcal{L}_\varepsilon (u_\varepsilon) =\text{\rm div} (f)$ in $2B$ for some ball $B$ in $\mathbb{R}^d$. Suppose that $f=(f_i^\alpha)\in L^p (2B; \mathbb{R}^{dm})$ for some $2<p<\infty$. 
Then \begin{equation}\label{i-W-1-p} \left\{ -\!\!\!\!\!\!\int_B |\nabla u_\varepsilon|^p \right\}^{1/p} \le C_p \left\{ \left(-\!\!\!\!\!\!\int_{2B} |\nabla u_\varepsilon|^2 \right)^{1/2} +\left(-\!\!\!\!\!\!\int_{2B} |f|^p \right)^{1/p} \right\}, \end{equation} where $C_p$ depends only on $p$ and $A$. \end{theorem} We remark that in contrast to Theorem \ref{i-L-theorem}, the H\"older continuity condition (\ref{H-continuity}) is not required for $W^{1,p}$ estimates. We first treat the case where $\mathcal{L}_\varepsilon (u_\varepsilon) =0$. \begin{lemma}\label{interior-W-1-p-lemma} Assume $A$ satisfies the same assumptions as in Theorem \ref{i-W-1-p-theorem}. Let $u_\varepsilon\in H^1(2B; \mathbb{R}^m)$ be a weak solution of $\mathcal{L}_\varepsilon (u_\varepsilon)=0$ in $2B$, where $B=B(x_0, r)$ for some $x_0\in \mathbb{R}^d$ and $r>0$. Then $|\nabla u_\varepsilon |\in L^p(B)$ for any $2<p<\infty$, and \begin{equation}\label{interior-W-1-p} \left\{-\!\!\!\!\!\!\int_{B} |\nabla u_\varepsilon|^p \right\}^{1/p} \le C_p \left\{ -\!\!\!\!\!\!\int_{2B} |\nabla u_\varepsilon|^2\right\}^{1/2}, \end{equation} where $C_p$ depends only on $p$ and $A$. \end{lemma} \begin{proof} Fix $2<p<\infty$. By translation and dilation we may assume $x_0=0$ and $r=1$. By subtracting a constant we may also assume $\int_{2B} u_\varepsilon =0$. We may further assume that $0<\varepsilon<(1/4)$, as the case $\varepsilon\ge (1/4)$ follows from the standard local $W^{1,p}$ estimates for second-order elliptic systems with continuous coefficients. By rescaling the same theory also gives \begin{equation}\label{i-W-1-p-1} \left\{ -\!\!\!\!\!\!\int_{B(0, \varepsilon)} |\nabla u_\varepsilon|^p\right\}^{1/p} \le \frac{C_p}{\varepsilon} \inf_{q\in \mathbb{R}^m} \left\{ -\!\!\!\!\!\!\int_{B(0, 2\varepsilon)} |u_\varepsilon -q|^2 \right\}^{1/2}. 
\end{equation} An inspection of the proof of Lemma \ref{i-L-lemma-3} shows that estimate (\ref{i-L-3}) continues to hold under the assumption in Theorem \ref{i-W-1-p-theorem} (the H\"older continuity of $A$ is not required). Thus, $$ \left\{ -\!\!\!\!\!\!\int_{B(0, \varepsilon)} |\nabla u_\varepsilon|^p\right\}^{1/p} \le C_p \, \| u_\varepsilon\|_{L^2(B(0,1))}. $$ By translation this implies that for any $z\in B(0,1)$, $$ \int_{B(z,\varepsilon)} |\nabla u_\varepsilon|^p\, dx \le C_p \, \varepsilon^d\, \| u_\varepsilon\|^p_{L^2(B(0,2))}. $$ It follows by a simple covering argument that $$ \int_{B(0,1)} |\nabla u_\varepsilon|^p\le C_p\, \| u_\varepsilon\|^p_{L^2(B(0,2))} \le C_p\, \|\nabla u_\varepsilon\|_{L^2(B(0,2))}^p, $$ where we have used the Poincar\'e inequality for the last step. \end{proof} The reduction of Theorem \ref{i-W-1-p-theorem} to Lemma \ref{interior-W-1-p-lemma} is done through a refined version of the Calder\'on-Zygmund argument due to Caffarelli and Peral in \cite{CP-1998}. Motivated by \cite{CP-1998}, the following theorem was formulated and proved in \cite{Shen-2007} (also see \cite{Shen-2005}). \begin{theorem}\label{Shen-theorem} Let $F\in L^2(4B_0)$ and $f\in L^p(4B_0)$ for some $2<p<q<\infty$, where $B_0$ is a ball in $\mathbb{R}^d$. Suppose that for each ball $B\subset 2B_0$ with $|B|\le c_1 |B_0|$, there exist two measurable functions $F_B$ and $R_B$ on $2B$, such that $|F|\le |F_B| +|R_B|$ on $2B$, and \begin{equation}\label{Shen-1} \aligned \left\{ -\!\!\!\!\!\!\int_{2B} |R_B|^q \right\}^{1/q} &\le C_1 \left\{ \left(-\!\!\!\!\!\!\int_{c_2 B} |F|^2 \right)^{1/2} +\sup_{4B_0\supset B^\prime\supset B} \left(-\!\!\!\!\!\!\int_{B^\prime} |f|^2\right)^{1/2} \right\},\\ \left\{ -\!\!\!\!\!\!\int_{2B} |F_B|^2 \right\}^{1/2} & \le C_2 \sup_{4B_0\supset B^\prime\supset B} \left\{ -\!\!\!\!\!\!\int_{B^\prime} |f|^2 \right\}^{1/2}, \endaligned \end{equation} where $C_1, C_2>0$, $0<c_1<1$, and $c_2> 2$.
Then $F\in L^p (B_0)$ and \begin{equation} \left\{-\!\!\!\!\!\!\int_{B_0} |F|^p\right\}^{1/p} \le C \left\{ \left(-\!\!\!\!\!\!\int_{4B_0} |F|^2 \right)^{1/2} +\left(-\!\!\!\!\!\!\int_{4B_0} |f|^p \right)^{1/p} \right\}, \end{equation} where $C$ depends only on $d$, $C_1$, $C_2$, $c_1$, $c_2$, $p$, and $q$. \end{theorem} \begin{proof}[\bf Proof of Theorem \ref{i-W-1-p-theorem}] Suppose that $\mathcal{L}_\varepsilon (u_\varepsilon)=\text{\rm div} (f)$ in $2B_0$ and $f\in L^p(2B_0; \mathbb{R}^{dm})$ for some $2<p<\infty$. Let $q=p+1$. We will apply Theorem \ref{Shen-theorem} to $F=|\nabla u_\varepsilon|$. For each ball $B$ such that $4B\subset 2B_0$, we write $u_\varepsilon=v_\varepsilon +w_\varepsilon$ on $2B$, where $v_\varepsilon \in H^1_0(4B; \mathbb{R}^{m})$ is the solution to $\mathcal{L}_\varepsilon (v_\varepsilon)=\text{\rm div} (f)$ in $4B$. Let $$ F_B =|\nabla v_\varepsilon| \qquad \text{ and } \qquad R_B =|\nabla w_\varepsilon|. $$ Then $|F|\le F_B + R_B$ on $2B$. It is easy to see that the second inequality in (\ref{Shen-1}) follows from the energy estimate. Since $\mathcal{L}_\varepsilon (w_\varepsilon)=0$ in $4B$, it follows from Lemma \ref{interior-W-1-p-lemma} that $$ \aligned \left\{-\!\!\!\!\!\!\int_{2B} |R_B|^q\right\}^{1/q} &\le C \left\{ -\!\!\!\!\!\!\int_{4B} |R_B|^2\right\}^{1/2}\\ &\le C \left\{ -\!\!\!\!\!\!\int_{4B} |\nabla u_\varepsilon|^2\right\}^{1/2} +C\left\{ -\!\!\!\!\!\!\int_{4B} |\nabla v_\varepsilon|^2\right\}^{1/2}\\ &\le C\left\{ -\!\!\!\!\!\!\int_{4B} |F|^2\right\}^{1/2} +C \left\{ -\!\!\!\!\!\!\int_{4B} |f|^2\right\}^{1/2}, \endaligned $$ where we have used the energy estimate for the last inequality. This gives the first inequality in (\ref{Shen-1}).
It then follows by Theorem \ref{Shen-theorem} that $$ \left\{ -\!\!\!\!\!\!\int_{B} |\nabla u_\varepsilon|^p\right\}^{1/p} \le C \left\{ \left(-\!\!\!\!\!\!\int_{4B} |\nabla u_\varepsilon|^2\right)^{1/2} +\left(-\!\!\!\!\!\!\int_{4B} |f|^p\right)^{1/p} \right\} $$ for any ball $B$ such that $4B\subset 2B_0$. By a simple covering argument this gives (\ref{i-W-1-p}) for $B=B_0$. \end{proof} \section{Boundary $W^{1, p}$ estimates and proof of Theorems \ref{main-theorem-W-1-p} and \ref{main-theorem-W-1-p-N}} \setcounter{equation}{0} In this section we establish uniform boundary $W^{1,p}$ estimates for $\mathcal{L}_\varepsilon$ with Dirichlet or Neumann conditions. As we shall see, the boundary $W^{1,p}$ estimates follow from the interior $W^{1,p}$ estimates and boundary H\"older estimates. For $r>0$, let \begin{equation}\label{Delta} \aligned D_r & =\big\{ (x^\prime, x_d)\in \mathbb{R}^d: \, |x^\prime|<r \text{ and } \phi(x^\prime)<x_d<\phi(x^\prime) + 10(K_0+1)r \big\},\\ \Delta_r &= \big\{ (x^\prime, \phi(x^\prime))\in \mathbb{R}^d:\, |x^\prime|<r \big\}, \endaligned \end{equation} where $\phi : \mathbb{R}^{d-1} \to \mathbb{R}$ is a $C^{1,\alpha}$ function such that supp$(\phi)\subset \{ x^\prime\in \mathbb{R}^{d-1}: \ |x^\prime|\le 10 \}$, \begin{equation}\label{phi} \phi (0)=0, \quad \nabla \phi (0)=0, \quad \text{ and } \quad \|\nabla \phi\|_{C^{\alpha} (\mathbb{R}^{d-1})}\le K_0+1. \end{equation} The constant $K_0>0$ in (\ref{phi}) is fixed. The bounding constants $C$ in the next two lemmas will depend on $(\alpha, K_0)$, but otherwise not directly on $\phi$. \begin{lemma}\label{b-H-lemma} Suppose that $A$ is uniformly almost-periodic in $\mathbb{R}^d$ and satisfies the ellipticity condition (\ref{ellipticity}). Let $u_\varepsilon\in H^1(D_{2r}; \mathbb{R}^m)$ be a weak solution of $\mathcal{L}_\varepsilon (u_\varepsilon)=0$ in $D_{2r}$, with either $u_\varepsilon=0$ or $\frac{\partial u_\varepsilon}{\partial \nu_\varepsilon}=0$ on $\Delta_{2r}$, for some $0<r\le 1$.
Then, for any $0<\beta<1$, \begin{equation}\label{b-H-00} |u_\varepsilon (x) - u_\varepsilon (y)| \le C_\beta\, r\, \left(\frac{|x-y|}{r}\right)^\beta \left(-\!\!\!\!\!\!\int_{D_{2r}} |\nabla u_\varepsilon|^2\right)^{1/2}, \end{equation} where $C_\beta$ depends only on $\beta$, $A$, and $(\alpha, K_0)$ in (\ref{phi}). \end{lemma} \begin{proof} In the case of Dirichlet condition $u_\varepsilon =0$ on $\Delta_{2r}$, the estimate (\ref{b-H-00}) was proved in \cite{Shen-2014} by using a three-step compactness argument introduced in \cite{AL-1987}. The compactness argument in \cite{AL-1987} for H\"older estimates does not involve correctors and extends readily to the almost-periodic setting. This is also true in the case of Neumann boundary conditions. We omit the details and refer the reader to \cite{KLS1}, where uniform boundary H\"older estimates with Neumann conditions were established in the periodic setting. \end{proof} \begin{lemma}\label{b-W-lemma-1} Suppose that $A$ is uniformly almost-periodic in $\mathbb{R}^d$ and satisfies (\ref{ellipticity}). Also assume that the decay condition (\ref{decay-condition}) holds for some $C_0>0$ and $N>(3/2)$. Let $u_\varepsilon\in H^1(D_{2r}; \mathbb{R}^m)$ be a weak solution of $\mathcal{L}_\varepsilon (u_\varepsilon)=0$ in $D_{2r}$, with either $u_\varepsilon =0$ or $\frac{\partial u_\varepsilon}{\partial \nu_\varepsilon} =0$ on $\Delta_{2r}$, for some $0<r\le1$. Then, for any $2<p<\infty$, \begin{equation}\label{b-W-1-00} \left\{ -\!\!\!\!\!\!\int_{D_r} |\nabla u_\varepsilon|^p\right\}^{1/p} \le C_p \left\{ -\!\!\!\!\!\!\int_{D_{2r}} |\nabla u_\varepsilon|^2\right\}^{1/2}, \end{equation} where $C_p$ depends only on $p$, $A$, and $(\alpha, K_0)$ in (\ref{phi}). \end{lemma} \begin{proof} By rescaling we may assume $r=1$. Also assume that $\| \nabla u_\varepsilon\|_{L^2(D_2)} =1$. Let $\delta (x)=\text{\rm dist} (x, \partial D_2)$. 
It follows from the interior $W^{1,p}$ estimates in Theorem \ref{i-W-1-p-theorem} that \begin{equation}\label{b-W-001} \aligned \left(-\!\!\!\!\!\!\int_{B(y, \delta(y)/8)} |\nabla u_\varepsilon|^p \right)^{1/p} & \le C \left(-\!\!\!\!\!\!\int_{B(y, \delta(y)/4)} |\nabla u_\varepsilon|^2 \right)^{1/2}\\ &\le \frac{C}{\delta(y)} \left(-\!\!\!\!\!\!\int_{B(y, \delta(y)/2)} |u_\varepsilon (x)-u_\varepsilon (y)|^2\, dx\right)^{1/2}\\ &\le C_\beta \big[ \delta (y) \big]^{\beta -1} \endaligned \end{equation} for any $\beta \in (0,1)$, where we have used the Caccioppoli inequality for the second inequality and Lemma \ref{b-H-lemma} for the last. By choosing $\beta \in (1-\frac{1}{p}, 1)$, this implies that \begin{equation}\label{b-W-002} \int_{D_1} \left(-\!\!\!\!\!\!\int_{B(y, \delta(y)/8)} |\nabla u_\varepsilon (x)|^p \, dx \right)\, dy \le C. \end{equation} By Fubini's Theorem we then obtain \begin{equation}\label{b-W-003} \int_{D_1} |\nabla u_\varepsilon(x)|^p \left\{ \int_{\{ y\in D_1: \, |y-x|<\frac{\delta(y)}{8} \} } \frac{dy}{\big[\delta (y) \big]^d} \right\}dx \le C. \end{equation} Finally, we note that if $|y-x|<\frac{\delta(y)}{8}$, then $\delta (x)\approx \delta (y)$. Also, it is not hard to verify that for $x\in D_1$, $$ D_1 \cap B(x, \delta(x)/16)\subset \{ y\in D_1: \, |y-x|<\delta(y)/8 \}. $$ It follows that $$ \int_{\{ y\in D_1: \, |y-x|<\frac{\delta(y)}{8} \} } \frac{dy}{\big[\delta (y) \big]^d} \ge c>0. $$ This, together with (\ref{b-W-003}), gives $$ \int_{D_1} |\nabla u_\varepsilon (x)|^p\, dx\le C, $$ and completes the proof. \end{proof} \begin{theorem}\label{b-W-1-p-theorem} Suppose that $A$ is uniformly almost-periodic in $\mathbb{R}^d$ and satisfies (\ref{ellipticity}). Also assume that the decay condition (\ref{decay-condition}) holds for some $C_0>0$ and $N>(3/2)$. Let $\Omega$ be a bounded $C^{1, \alpha}$ domain in $\mathbb{R}^d$ for some $\alpha>0$.
i) Let $u_\varepsilon\in W_0^{1, p}(\Omega; \mathbb{R}^m)$ be a weak solution of $\mathcal{L}_\varepsilon (u_\varepsilon) =\text{\rm div} (h)$ in $\Omega$, where $1<p<\infty$ and $h\in L^p(\Omega; \mathbb{R}^{m\times d})$. Then \begin{equation}\label{b-W-007} \|\nabla u_\varepsilon \|_{L^p(\Omega)} \le C_p \, \| h\|_{L^p(\Omega)}, \end{equation} where $C_p$ depends only on $p$, $\Omega$, and $A$. ii) Let $u_\varepsilon\in W^{1, p}(\Omega; \mathbb{R}^m)$ be a weak solution to $\mathcal{L}_\varepsilon (u_\varepsilon) =\text{\rm div} (h)$ in $\Omega$ and $\frac{\partial u_\varepsilon}{\partial \nu_\varepsilon} =-n\cdot h$ on $\partial\Omega$, where $1<p<\infty$ and $h\in L^p(\Omega; \mathbb{R}^{m\times d})$. Then estimate (\ref{b-W-007}) holds with $C_p$ depending only on $p$, $\Omega$, and $A$. \end{theorem} \begin{proof} Since the adjoint operator $\mathcal{L}_\varepsilon^*$ satisfies the same conditions as $\mathcal{L}_\varepsilon$, by a duality argument, we may assume that $p>2$. By a real-variable argument (see \cite{Shen-2005,Geng-2012}), to prove (\ref{b-W-007}) for a fixed $p>2$, it suffices to establish two weak reverse H\"older estimates for some $q>p$: (i)\ \ if $\mathcal{L}_\varepsilon (u_\varepsilon)=0$ in $2B$ and $2B\subset \Omega$, then \begin{equation}\label{reverse-H-1} \left\{-\!\!\!\!\!\!\int_{B} |\nabla u_\varepsilon|^q \right\}^{1/q} \le C \left\{-\!\!\!\!\!\!\int_{2B} |\nabla u_\varepsilon|^2 \right\}^{1/2}, \end{equation} (ii)\ \ if $\mathcal{L}_\varepsilon (u_\varepsilon)=0$ on $2B\cap \Omega$ with either $u_\varepsilon =0$ or $\frac{\partial u_\varepsilon}{\partial \nu_\varepsilon}=0$ on $2B\cap \partial\Omega$, where $B=B(x_0, r)$, $x_0\in \partial\Omega$ and $0<r<r_0=c_0\, \text{\rm diam}(\Omega)$, then \begin{equation}\label{reverse-H-2} \left\{-\!\!\!\!\!\!\int_{B\cap \Omega } |\nabla u_\varepsilon|^q \right\}^{1/q} \le C \left\{-\!\!\!\!\!\!\int_{2B\cap\Omega} |\nabla u_\varepsilon|^2 \right\}^{1/2}. 
\end{equation} Note that estimate (\ref{reverse-H-1}) is the interior $W^{1,p}$ estimate given by Lemma \ref{interior-W-1-p-lemma}, while (\ref{reverse-H-2}) follows from the boundary $W^{1,p}$ estimates proved in Lemma \ref{b-W-lemma-1}. \end{proof} We are now in a position to give the proof of Theorems \ref{main-theorem-W-1-p} and \ref{main-theorem-W-1-p-N}. \begin{proof}[\bf Proof of Theorems \ref{main-theorem-W-1-p} and \ref{main-theorem-W-1-p-N}] For the Neumann condition the reduction of Theorem \ref{main-theorem-W-1-p-N} to Theorem \ref{b-W-1-p-theorem} may be found in \cite{Geng-2012, KLS1}. For the Dirichlet condition the reduction of Theorem \ref{main-theorem-W-1-p} to Theorem \ref{b-W-1-p-theorem} is also more or less well known. By considering $u_\varepsilon -w$, where $w\in W^{1, p}(\Omega; \mathbb{R}^m)$ is the solution of $-\Delta w=0$ in $\Omega$ and $w=f$ on $\partial\Omega$, it suffices to prove the theorem for the case $f=0$. Here we have used the fact that the theorem holds for $\mathcal{L}_\varepsilon =-\Delta$. Next, in view of Theorem \ref{b-W-1-p-theorem}, we may further assume that $h=0$. Finally, the case that $\mathcal{L}_\varepsilon (u_\varepsilon)=F$ in $\Omega$ and $u_\varepsilon =0$ on $\partial\Omega$ may be handled by a duality argument. Indeed, let $v_\varepsilon$ be a solution of $\mathcal{L}_\varepsilon^* (v_\varepsilon) =\text{\rm div} (h)$ in $\Omega$ and $v_\varepsilon=0$ on $\partial\Omega$, where $h=(h_i^\alpha) \in C_0^\infty (\Omega; \mathbb{R}^{m\times d})$.
Then $$ \aligned \left|\int_\Omega \frac{\partial u_\varepsilon^\alpha}{\partial x_i} \cdot h_i^\alpha \right| & =\left| \int_\Omega F^\alpha \cdot v_\varepsilon^\alpha\right| \le \| F\|_{L^p(\Omega)} \| v_\varepsilon\|_{L^{p^\prime} (\Omega)}\\ &\le C\, \| F\|_{L^p(\Omega)} \| \nabla v_\varepsilon\|_{L^{p^\prime} (\Omega)} \le C\, \| F\|_{L^p(\Omega)} \| h\|_{L^{p^\prime}(\Omega)}, \endaligned $$ where we have used the Poincar\'e inequality and the $W^{1, p^\prime}$ estimates for $\mathcal{L}_\varepsilon^*$. By duality this gives $\|\nabla u_\varepsilon\|_{L^p(\Omega)} \le C\, \| F\|_{L^p(\Omega)}$. \end{proof} \section{Boundary Lipschitz estimates with Dirichlet condition and Proof of Theorems \ref{main-theorem-Lip} and \ref{main-theorem-max}} \setcounter{equation}{0} In this section we establish the uniform boundary Lipschitz estimates for $\mathcal{L}_\varepsilon$ in bounded $C^{1, \alpha}$ domains and give the proof of Theorem \ref{main-theorem-Lip}. As in the case of interior Lipschitz estimates, our approach is based on the general scheme outlined in Section 3. However, modifications are needed to take into account the boundary contribution. \begin{lemma}\label{b-L-lemma-1} Suppose that $\mathcal{L}_0 (w)=0$ in $D_r$ and $w=f$ on $\Delta_r$ for some $0<r\le 1$. Let $$ \aligned G(t)= & \frac{1}{t} \inf_{\substack{M\in \mathbb{R}^{m\times d}\\ q\in \mathbb{R}^m}} \bigg\{ \left(-\!\!\!\!\!\!\int_{D_t} |w-Mx-q|^2\right)^{1/2} +\| f-Mx-q\|_{L^\infty(\Delta_t)}\\ &\qquad + t \|\nabla_{tan} \big (f-Mx-q\big)\|_{L^\infty(\Delta_t)} +t^{1+\beta} \|\nabla_{tan} \big (f-Mx-q\big)\|_{C^{0, \beta}(\Delta_t)} \bigg\} \endaligned $$ for $0<t\le r$, where $\beta =\alpha/2$. Then, there exists $\theta\in (0,1/4)$, depending only on $\mu$ and $(\alpha, K_0)$ in (\ref{phi}), such that \begin{equation} G(\theta r) \le (1/2) G(r). \end{equation} \end{lemma} \begin{proof} The lemma follows from the boundary $C^{1, \alpha}$ estimates for second-order elliptic systems with constant coefficients.
By rescaling we may assume $r=1$. By choosing $q=w(0)$ and $M=\nabla w(0)$, it is easy to see that for any $\theta\in (0,1/4)$, $$ G(\theta) \le C\, \theta^{\beta } \| w\|_{C^{1,\beta}(D_\theta)}. $$ By boundary $C^{1, \alpha}$ estimates for $\mathcal{L}_0$, we obtain $$ \| w\|_{C^{1,\beta}(D_\theta)} \le C \left\{ \left(-\!\!\!\!\!\!\int_{D_1} |w|^2\right)^{1/2} +\| f\|_{L^\infty(\Delta_1)} +\|\nabla_{tan} f \|_{L^\infty(\Delta_1)} +\| f\|_{C^{0, \beta} (\Delta_1)}\right\}, $$ where $C$ depends only on $\mu$ and $(\alpha, K_0)$. It follows that $$ G(\theta) \le C\, \theta^{\beta} \left\{ \left(-\!\!\!\!\!\!\int_{D_1} |w|^2\right)^{1/2} +\| f\|_{L^\infty(\Delta_1)} +\|\nabla_{tan} f \|_{L^\infty(\Delta_1)} +\| f\|_{C^{0, \beta} (\Delta_1)}\right\}. $$ Finally, since $\mathcal{L}_0 (Mx +q)=0$ for any $M\in \mathbb{R}^{m\times d}$ and $q\in \mathbb{R}^m$, the estimate above implies that $$ G(\theta)\le C\, \theta^\beta G(1). $$ The desired estimate follows by choosing $\theta\in (0,1/4)$ so small that $C\theta^\beta\le (1/2)$. \end{proof} \begin{lemma}\label{b-L-lemma-2} Let $\mathcal{L}_\varepsilon (u_\varepsilon)=0$ in $D_{2r}$ and $u_\varepsilon=f$ on $\Delta_{2r}$, where $0<\varepsilon<r\le 1$. Then there exists $w$ such that $\mathcal{L}_0 (w)=0$ in $D_{r}$, $w=f$ on $\Delta_r$, and \begin{equation}\label{b-L-2-0} \aligned \left\{ -\!\!\!\!\!\!\int_{D_r} |u_\varepsilon -w|^2 \right\}^{1/2} \le C \big[ \omega (\varepsilon/r)\big]^{\frac23-\delta} &\bigg\{ \inf_{q\in \mathbb{R}^m}\bigg[ \bigg( -\!\!\!\!\!\!\int_{D_{2r}} |u_\varepsilon-q |^2 \bigg)^{1/2} +\| f-q\|_{L^\infty(\Delta_{2r})} \bigg]\\ & +r\, \| \nabla_{tan} f\|_{L^\infty(\Delta_{2r})} +r^{1+\beta } \|\nabla_{tan} f\|_{C^{0, \beta } (\Delta_{2r})} \bigg\}, \endaligned \end{equation} where $\delta\in (0,1/4)$, $\beta=\alpha/2$, and $\omega (t)=\omega_\sigma (t)$ is defined by (\ref{omega}). The constant $C$ depends only on $\delta$, $\sigma$, $(\alpha, K_0)$ in (\ref{phi}), and $A$.
\end{lemma} \begin{proof} By rescaling we may assume $r=1$. For each $t\in [0,1/4)$, we construct a bounded $C^{1,\alpha}$ domain $\Omega_{1+t}$ in $\mathbb{R}^d$ such that (1) $D_1\subset\Omega_1 \subset \Omega_{1+t}\subset D_{3/2}$, (2) there exists a $C^{1, \alpha}$ diffeomorphism $\Lambda_t: \partial\Omega_1 \to \partial\Omega_{1+t}$ with uniform bounds and the property that $|\Lambda_t (x)-x|\le C\, t$ for any $x\in \partial\Omega_1$, and (3) for each $x\in \partial\Omega_1$, $B(x, ct)\cap D_2 \subset \Omega_{1+t}$. Let $w=w_t$ be the solution of the Dirichlet problem: $\mathcal{L}_0 (w)=0$ in $\Omega_{1+t}$ and $w=u_\varepsilon$ on $\partial\Omega_{1+t}$. Note that $\mathcal{L}_0 (w) =0$ in $D_1$ and $w=f$ on $\Delta_1$. We will show that $w$ satisfies the estimate (\ref{b-L-2-0}) for some suitable choice of $t$. Let $v_\varepsilon$ be the solution of $\mathcal{L}_\varepsilon (v_\varepsilon)=0$ in $\Omega_1$ and $v_\varepsilon=w$ on $\partial \Omega_1$. Since $\mathcal{L}_\varepsilon (u_\varepsilon -v_\varepsilon)=0$ in $\Omega_1$, by the H\"older estimate (\ref{H-estimate}) for $\mathcal{L}_\varepsilon$, \begin{equation}\label{b-L-2-1} \aligned \| u_\varepsilon -v_\varepsilon\|_{L^2(D_1)} & \le C\, \| u_\varepsilon -w\|_{C^\kappa (\partial \Omega_1)} \le C\, t^{\gamma-\kappa} \, \| u_\varepsilon\|_{C^\gamma (D_{3/2})}\\ &\le C\, t^{\gamma-\kappa} \left\{ \left(-\!\!\!\!\!\!\int_{D_{2}} |u_\varepsilon|^2 \right)^{1/2} +\| f\|_{L^\infty(\Delta_2)} +\| \nabla_{tan} f\|_{L^\infty(\Delta_{2})} \right\}, \endaligned \end{equation} where $0<\kappa<\gamma<1$. The fact that $|\Lambda_t (x)-x|\le C\, t$ for $x\in \partial\Omega_1$ is used for the second inequality in (\ref{b-L-2-1}).
Next, by Theorem \ref{rate-theorem-1}, we see that \begin{equation}\label{b-L-2-2} \aligned \| v_\varepsilon -w\|_{L^2(D_1)} & \le C\, \big[\omega(\varepsilon)\big]^{2/3} \, \| w\|_{C^{1, \kappa}({\partial\Omega_1})}\\ & \le C\, \big[\omega(\varepsilon)\big]^{2/3} t^{\gamma-1-\kappa} \left\{ \| w\|_{C^{\gamma} (\Omega_{1+t})} +\| f\|_{C^{1,\kappa}(\Delta_2)} \right\}\\ &\le C\, \big[\omega(\varepsilon)\big]^{2/3} t^{\gamma-1-\kappa} \left\{ \| u_\varepsilon \|_{C^{\gamma} (\Omega_{1+t})} +\| f\|_{C^{1,\kappa}(\Delta_2)} \right\}\\ &\le C\, \big[\omega(\varepsilon)\big]^{2/3} t^{\gamma-1-\kappa} \left\{ \left(-\!\!\!\!\!\!\int_{D_2} |u_\varepsilon|^2\right)^{1/2} + \| f\|_{C^{1, \kappa}(\Delta_2)} \right\}, \endaligned \end{equation} where we have used the boundary $C^{1,\kappa}$ estimates for $\mathcal{L}_0$ for the second inequality and H\"older estimates for the third. We point out that for the second inequality in (\ref{b-L-2-2}) we also have used the fact $B(x, ct)\cap D_2\subset \Omega_{1+t}$ for any $x\in \partial\Omega_1$. It follows from (\ref{b-L-2-1}) and (\ref{b-L-2-2}) that $$ \aligned \| u_\varepsilon -w\|_{L^2(D_1)} & \le \| u_\varepsilon -v_\varepsilon \|_{L^2(D_1)} +\| v_\varepsilon -w\|_{L^2(D_1)}\\ &\le C t^{\gamma-\kappa} \left\{ 1+ t^{-1} \big[ \omega (\varepsilon) \big]^{2/3}\right\} \left\{ \left(-\!\!\!\!\!\!\int_{D_2} |u_\varepsilon|^2\right)^{1/2} +\| f\|_{C^{1, \kappa}(\Delta_2)} \right\}\\ &\le C \, \big[\omega (\varepsilon)\big]^{\frac{2}{3}-\delta} \left\{ \left(-\!\!\!\!\!\!\int_{D_2} |u_\varepsilon|^2\right)^{1/2} +\| f\|_{C^{1, \kappa}(\Delta_2)} \right\}, \endaligned $$ where we have chosen $t=c\big[\omega (\varepsilon)\big]^{2/3}\in (0,1/4)$, $\kappa=(3/4)\delta$ and $\gamma=1-(3/4)\delta$. This yields the estimate (\ref{b-L-2-0}), as $\mathcal{L}_\varepsilon (u_\varepsilon -q)=0$ in $D_2$ for any $q\in \mathbb{R}^m$.
\begin{theorem}\label{b-L-theoem-3} Suppose that $A(y)$ satisfies the same conditions as in Theorem \ref{main-theorem-Lip}. Let $u_\varepsilon$ be a weak solution of $\mathcal{L}_\varepsilon (u_\varepsilon)=0$ in $D_{2r}$ and $u_\varepsilon =f$ on $\Delta_{2r}$ for some $0<r\le 1$. Then \begin{equation} \aligned \|\nabla u_\varepsilon\|_{L^\infty(D_r)} &\le C\bigg\{ \frac{1}{r} \left(-\!\!\!\!\!\!\int_{D_{2r}} |u_\varepsilon|^2 \right)^{1/2} +\frac{1}{r} \| f\|_{L^\infty(\Delta_{2r})}\\ & \qquad \qquad \qquad+\|\nabla_{tan} f\|_{L^\infty(\Delta_{2r})} +r^\beta \|\nabla_{tan} f\|_{C^{0,\beta} (\Delta_{2r})} \bigg\}, \endaligned \end{equation} where $\beta=\alpha/2$ and $C$ depends only on $(\alpha,K_0)$ and $A$. \end{theorem} \begin{proof} By rescaling we may assume that $r=1$. Let $$ \aligned H(t)= & t^{-1} \inf_{\substack{M\in \mathbb{R}^{m\times d}\\ q\in \mathbb{R}^m}} \bigg\{ \left(-\!\!\!\!\!\!\int_{D_t} |u_\varepsilon-Mx-q|^2\right)^{1/2} +\| f-Mx-q\|_{L^\infty(\Delta_t)}\\ &\qquad + t \|\nabla_{tan} \big (f-Mx-q\big)\|_{L^\infty(\Delta_t)} +t^{1+\beta} \|\nabla_{tan} \big (f-Mx-q\big)\|_{C^{0, \beta}(\Delta_t)} \bigg\}, \endaligned $$ where $0<t\le 1$. For each $\varepsilon<t<1$, let $w=w_t$ be a solution of $\mathcal{L}_0 (w)=0$ in $D_t$ with $w=f$ on $\Delta_t$, given by Lemma \ref{b-L-lemma-2}. For $0<s\le t$, let $G(s)$ be defined as $H(t)$, but with $u_\varepsilon$ replaced by $w$ and $t$ replaced by $s$. Observe that $$ \aligned H(\theta t) &\le G(\theta t) + \frac{1}{\theta t} \left\{ -\!\!\!\!\!\!\int_{D_{\theta t}} |u_\varepsilon -w|^2 \right\}^{1/2}\\ & \le \frac12 G(t) +\frac{1}{\theta t} \left\{ -\!\!\!\!\!\!\int_{D_{\theta t}} |u_\varepsilon -w|^2 \right\}^{1/2}\\ &\le \frac12 H(t) + \frac{C}{t} \left\{ -\!\!\!\!\!\!\int_{D_{t}} |u_\varepsilon -w|^2 \right\}^{1/2}, \endaligned $$ where we have used Lemma \ref{b-L-lemma-1} for the second inequality.
This, together with Lemma \ref{b-L-lemma-2}, gives \begin{equation}\label{b-L-3-10} \aligned H(\theta t) \le \frac12 H(t) + C \big[ \omega (\varepsilon/t)\big]^{\frac23-\delta} &\bigg\{ t^{-1} \inf_{q\in \mathbb{R}^m}\bigg[ \bigg( -\!\!\!\!\!\!\int_{D_{2t}} |u_\varepsilon-q |^2 \bigg)^{1/2} +\| f-q\|_{L^\infty(\Delta_{2t})} \bigg]\\ & + \| \nabla_{tan} f\|_{L^\infty(\Delta_{2t})} +t^{\beta } \|\nabla_{tan} f\|_{C^{0, \beta } (\Delta_{2t})} \bigg\} \endaligned \end{equation} for any $\varepsilon<t\le 1$. Let $r_j=\theta^{j+1}$ for $0\le j\le \ell$, where $\ell$ is chosen so that $\theta^{\ell+2}<\varepsilon\le \theta^{\ell +1}$. Let $$ F_j =H(r_j) \quad \text{ and } \quad p_j=|M_j|, $$ where $M_j \in \mathbb{R}^{m\times d}$ is a matrix such that $$ \aligned H(r_j) = & r_j^{-1} \inf_{ q\in \mathbb{R}^m} \bigg\{ \left(-\!\!\!\!\!\!\int_{D_{r_j}} |u_\varepsilon-M_jx-q|^2\right)^{1/2} +\| f-M_j x-q\|_{L^\infty(\Delta_{r_j})}\\ &\qquad + r_j \|\nabla_{tan} \big (f-M_jx-q\big)\|_{L^\infty(\Delta_{r_j})} +r_j^{1+\beta} \|\nabla_{tan} \big (f-M_jx-q\big)\|_{C^{0, \beta}(\Delta_{r_j})} \bigg\}. \endaligned $$ In view of (\ref{b-L-3-10}) we obtain \begin{equation}\label{b-L-3-20} F_{j+1} \le \frac12 F_j + C \big[ \omega (\varepsilon \theta^{-j-1})\big]^{\frac23-\delta} \big\{ F_{j-1} +p_{j-1} \big\}. \end{equation} Also observe that since $D_{r}$ satisfies the interior cone condition, $$ \aligned & |M_{j+1} -M_j| \le \frac{C}{r_{j+1}} \inf_{q\in \mathbb{R}^m} \left\{ -\!\!\!\!\!\!\int_{D_{r_{j+1}}} |(M_{j+1} -M_j) x-q |^2 \right\}^{1/2}\\ &\le \frac{C}{r_{j+1}} \inf_{q\in \mathbb{R}^m} \left\{ -\!\!\!\!\!\!\int_{D_{r_{j+1}}} |u_\varepsilon - M_{j+1}x-q|^2 \right\}^{1/2} + \frac{C}{r_{j+1}}\inf_{q\in \mathbb{R}^m} \left\{ -\!\!\!\!\!\!\int_{D_{r_{j+1}}} |u_\varepsilon - M_{j}x-q|^2 \right\}^{1/2}\\ & \le C \, ( F_j +F_{j+1} ). \endaligned $$ It follows that \begin{equation} p_{j+1} =|M_{j+1}|\le |M_j| + C\, (F_j + F_{j+1}) = p_j +C\, (F_j + F_{j+1}).
\end{equation} Recall that the condition (\ref{decay-condition}) implies that $$ \sum_{j=0}^\ell \big[ \omega (\varepsilon \theta^{-j-1})\big]^{\frac23 -\delta} \le C \int_0^1 \big[ \omega (t) \big]^{\frac23 -\delta} \, \frac{dt}{t} <\infty, $$ for some $\sigma, \delta \in (0,1)$. This allows us to apply Lemma \ref{main-lemma-1} to obtain $$ \aligned F_j +p_j & \le C\, \big\{ p_0 +F_0 +F_1 \big\}\\ &\le C \left\{ \left(-\!\!\!\!\!\!\int_{D_1} |u_\varepsilon|^2 \right)^{1/2} +\| f\|_{C^{1, \beta} (\Delta_1)} \right\} \endaligned $$ for any $0\le j\le \ell$. As a result, we see that for any $\varepsilon< t<1/4$, \begin{equation}\label{b-L-3-30} \aligned \left(-\!\!\!\!\!\!\int_{D_t} |\nabla u_\varepsilon|^2\right)^{1/2} &\le \frac{C}{t} \inf_{q\in \mathbb{R}^m} \left\{ \left(-\!\!\!\!\!\!\int_{D_{2t}} |u_\varepsilon -q|^2\right)^{1/2} +\| f-q \|_{L^\infty(\Delta_{2t})} \right\} + C \| \nabla_{tan} f \|_{L^\infty (\Delta_{2t})}\\ &\le C \left\{ \left(-\!\!\!\!\!\!\int_{D_1} |u_\varepsilon|^2\right)^{1/2} +\| f\|_{C^{1,\beta}(\Delta_1)} \right\}, \endaligned \end{equation} where we have used Caccioppoli's inequality for the first inequality. Finally, since $A(y)$ is H\"older continuous, we may apply the classical boundary Lipschitz estimates for $\mathcal{L}_1$ and a blow-up argument to obtain $$ \aligned \|\nabla u_\varepsilon\|_{L^\infty(D_\varepsilon)} & \le \frac{C}{\varepsilon} \inf_{q\in \mathbb{R}^m} \left\{ \left(-\!\!\!\!\!\!\int_{D_{2\varepsilon}} |u_\varepsilon -q|^2\right)^{1/2} +\| f-q \|_{L^\infty(\Delta_{2\varepsilon})} \right\} +C\, \| \nabla_{tan} f \|_{C^\sigma (\Delta_{2\varepsilon})}\\ &\le C \left\{ \left(-\!\!\!\!\!\!\int_{D_1} |u_\varepsilon|^2\right)^{1/2} +\| f\|_{C^{1,\beta}(\Delta_1)} \right\}, \endaligned $$ where we have used (\ref{b-L-3-30}) with $t=2\varepsilon$ for the second inequality. 
Consequently, we see that \begin{equation}\label{b-L-3-40} \left(-\!\!\!\!\!\!\int_{D_t} |\nabla u_\varepsilon|^2\right)^{1/2} \le C \left\{ \left(-\!\!\!\!\!\!\int_{D_1} |u_\varepsilon|^2\right)^{1/2} +\| f\|_{C^{1,\beta}(\Delta_1)} \right\} \end{equation} holds for any $0<t<1/4$. This, together with the interior Lipschitz estimates proved in Section 4, yields \begin{equation}\label{b-L-3-50} \| \nabla u_\varepsilon \|_{L^\infty(D_1)} \le C \left\{ \left(-\!\!\!\!\!\!\int_{D_2} |u_\varepsilon|^2\right)^{1/2} +\| f\|_{C^{1,\beta}(\Delta_2)} \right\}. \end{equation} The proof is complete. \end{proof} We now give the proof of Theorem \ref{main-theorem-Lip}. \begin{proof}[\bf Proof of Theorem \ref{main-theorem-Lip}] It suffices to show that if $\mathcal{L}_\varepsilon (u_\varepsilon)=F$ in $D_{2r}$ and $u_\varepsilon=f $ on $\Delta_{2r}$ for some $0<r<1$, then \begin{equation}\label{b-L-local} \aligned \| \nabla u_\varepsilon\|_{L^\infty(D_r)} \le C r^{-1} \| u_\varepsilon\|_{L^\infty(D_{2r})} & +C \|\nabla_{tan} f\|_{L^\infty(\Delta_{2r})} +C r^\beta \|\nabla_{tan} f\|_{C^{0, \beta}(\Delta_{2r})}\\ &+ Cr^\beta \sup_{\substack{x\in D_{2r}\\ 0<t<r}} t^{1-\beta} -\!\!\!\!\!\!\int_{B(x,t)\cap D_{2r}} |F|. \endaligned \end{equation} Estimate (\ref{Lip-estimate-0}) follows from (\ref{b-L-local}) and the interior Lipschitz estimate by a simple covering argument. To prove (\ref{b-L-local}), we may assume that $r=1$ and $d\ge 3$ (the case $d=2$ is reduced to the case $d=3$ by adding a dummy variable). The case $F=0$ is already proved in Theorem \ref{b-L-theoem-3}. The general case may be handled by the use of Green functions. Indeed, let $\Omega$ be a bounded $C^{1, \alpha}$ domain in $\mathbb{R}^d$ such that $D_{3/2}\subset \Omega\subset D_2$. Let $G_\varepsilon (x,y)$ denote the matrix of Green functions for $\mathcal{L}_\varepsilon $ in $\Omega$, with pole at $y$. By the boundary H\"older estimates in \cite{Shen-2014}, we know $|G_\varepsilon (x,y)|\le C\, |x-y|^{2-d}$ for any $x,y\in \Omega$. 
Since $\mathcal{L}_\varepsilon \big( G_\varepsilon (\cdot, y)\big)=0$ in $\Omega \setminus \{ y\}$ and $G_\varepsilon (x, y)=0$ for $x\in \partial\Omega$, we may use the boundary Lipschitz estimate in Theorem \ref{b-L-theoem-3} to show that $|\nabla_x G_\varepsilon (x,y)|\le C\, |x-y|^{1-d}$ for any $x,y\in \Omega$. One then considers $u_\varepsilon -v_\varepsilon$ in $D_2$, where $v_\varepsilon (x)=\int_\Omega G_\varepsilon (x,y) F(y)\, dy$. The rest of the argument is similar to that in the proof of Theorem \ref{i-L-theorem}. We omit the details. \end{proof} \begin{proof}[\bf Proof of Theorem \ref{main-theorem-max}] The non-tangential maximal function of $u_\varepsilon$ is defined by \begin{equation}\label{non-tan} (u_\varepsilon)^* (Q)=\sup \big\{ |u_\varepsilon (x)|: \ x\in \Omega \text{ and } |x-Q|<C_0\, \text{\rm dist} (x, \partial\Omega) \big\} \end{equation} for $Q\in \partial\Omega$, where $C_0=C_0(\Omega)$ is sufficiently large. Suppose that $\mathcal{L}_\varepsilon (u_\varepsilon) =0$ in $\Omega$ and $u_\varepsilon =f$ on $\partial\Omega$. It is well known that the estimate (\ref{P-estimate}) implies that $\| u_\varepsilon\|_{L^\infty (\Omega)} \le C\, \| f\|_{L^\infty(\partial\Omega)}$, and $$ (u_\varepsilon)^* (Q) \le C\, \mathcal{M}_{\partial\Omega} (f) (Q) \qquad \text{ for any } Q\in \partial\Omega, $$ where $\mathcal{M}_{\partial\Omega} (f)$ denotes the Hardy-Littlewood maximal function of $f$ on $\partial\Omega$. It follows that $$ \| (u_\varepsilon)^*\|_{L^p(\partial\Omega)} \le C_p \, \| f\|_{L^p(\partial\Omega)} $$ for any $1<p<\infty$. \end{proof} \section{Boundary Lipschitz estimates with Neumann conditions and proof of Theorem \ref{main-theorem-Lip-N}} \setcounter{equation}{0} In this section we establish the uniform Lipschitz estimates with Neumann boundary conditions and give the proof of Theorem \ref{main-theorem-Lip-N}. Throughout the section we will assume that $A$ satisfies the same conditions as in Theorem \ref{main-theorem-Lip-N}. 
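Both the Dirichlet argument of the previous section and the Neumann argument below rest on the same elementary iteration scheme, which it may help to keep in mind. We record here a paraphrase of it (the precise statement is Lemma \ref{main-lemma-1}, proved earlier in the paper; the constants and hypotheses below are indicative only):

```latex
% Paraphrase of the iteration scheme behind both Lipschitz proofs
% (indicative form of Lemma \ref{main-lemma-1}).
\[
\left.
\begin{aligned}
F_{j+1} &\le \tfrac12\, F_j + C\, \omega_j \big( F_{j-1} + p_{j-1} \big)\\
p_{j+1} &\le p_j + C\, \big( F_j + F_{j+1} \big)\\
\textstyle\sum_{j} \omega_j &< \infty
\end{aligned}
\right\}
\quad \Longrightarrow \quad
\max_{j} \big( F_j + p_j \big) \le C'\, \big( F_0 + F_1 + p_0 \big),
\]
```

where $\{F_j\}$ and $\{p_j\}$ are nonnegative sequences. In the Dirichlet case one takes $\omega_j=\big[\omega(\varepsilon\theta^{-j-1})\big]^{\frac23-\delta}$, while in the Neumann case $\omega_j=\eta(\varepsilon\theta^{-j-1})$; in both cases $F_j$ measures how well $u_\varepsilon$ is approximated by affine functions at scale $\theta^{j+1}$ and $p_j=|M_j|$ controls the size of the slopes.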
Let $D_r$ and $\Delta_r$ be defined as in (\ref{Delta}), and let $(\alpha, K_0)$ be given as in (\ref{phi}). \begin{lemma}\label{Lip-N-lemma-1} Suppose that $\mathcal{L}_0 (w)=0$ in $D_{2r}$ and $\frac{\partial w}{\partial \nu_0}=g$ on $\Delta_{2r}$. Let $$ \aligned \Psi(t) = &\frac{1}{t}\inf_{\substack{M\in \mathbb{R}^{m\times d}\\ q\in \mathbb{R}^m}} \bigg\{ \left(-\!\!\!\!\!\!\int_{D_t} |w-Mx -q|^2\right)^{1/2} +t \left\| \frac{\partial}{\partial\nu_0} \big( w-Mx\big) \right\|_{L^\infty(\Delta_t)}\\ &\qquad\qquad\qquad\qquad \qquad\qquad +t^{1+\beta} \left\| \frac{\partial}{\partial \nu_0} \big (w-Mx\big) \right\|_{C^{0,\beta} (\Delta_t)} \bigg\}, \endaligned $$ for $0<t\le r$, where $\beta=\alpha/2$. Then there exists $\theta\in (0,1/4)$, depending only on $\mu$, $\alpha$ and $K_0$, such that \begin{equation}\label{NP-Lip-8-1} \Psi(\theta r) \le (1/2) \Psi (r). \end{equation} \end{lemma} \begin{proof} The lemma follows from boundary $C^{1,\alpha}$ estimates with Neumann conditions for second-order elliptic systems with constant coefficients. The argument is similar to that in the case of Dirichlet condition. We leave the details to the reader. \end{proof} \begin{lemma}\label{decay-rate-lemma-8} Let $\sigma\in (0,1)$ and \begin{equation}\label{NP-eta} \eta (t) =\eta_\sigma (t)= \left\{ \Theta_\sigma (t^{-1}) + \sup_{T\ge t^{-1} }\langle |\psi-\nabla \chi_T|\rangle \right\}^{1/2}. \end{equation} Suppose that there exist $C_0>0$ and $N>3$ such that $\rho (R)\le C_0 \big[\log R\big]^{-N}$ for all $R\ge 2$. Then $\int_0^1 \eta (t)\, \frac{dt}{t} <\infty$. \end{lemma} \begin{proof} Recall that if there exist $C_0>0$ and $N>1$ such that $\rho(R)\le C_0 \big[ \log R \big]^{-N}$ for all $R\ge 2$, then $$ \Theta_\sigma (T)\le C_\sigma \big[\log T\big]^{-N}\quad \text{ and } \quad \langle |\psi-\nabla \chi_T| \rangle \le C\, \big[\log T\big]^{-N+1} $$ for all $T\ge 2$ (see the proof of Lemma \ref{Dini-lemma}). 
This gives $$ \eta (t) \le C \, \big[\log (1/t) \big]^{(1-N)/2} \qquad \text{ for } t\in (0,1/2), $$ from which the lemma follows readily. \end{proof} \begin{lemma}\label{Lip-N-lemma-2} Suppose that $A$ satisfies the same conditions as in Theorem \ref{main-theorem-Lip-N}. Let $u_\varepsilon$ be a weak solution of $\mathcal{L}_\varepsilon (u_\varepsilon)=0$ in $D_{2r}$ with $\frac{\partial u_\varepsilon}{\partial\nu_\varepsilon} =g$ on $\Delta_{2r}$, where $0<\varepsilon<r\le 1$ and $g\in C^{1,\beta} (\Delta_{2r})$. Then there exists $w\in H^1(D_r;\mathbb{R}^m)$ such that $\mathcal{L}_0 (w)=0$ in $D_r$, $\frac{\partial w}{\partial \nu_0} =g$ on $\Delta_{r}$, and \begin{equation}\label{NP-Lip-8-2} \left\{ -\!\!\!\!\!\!\int_{D_r} |u_\varepsilon -w|^2 \right\}^{1/2} \le C \, \eta \left( \frac{\varepsilon}{r}\right)\bigg\{ \inf_{q\in \mathbb{R}^m} \left(-\!\!\!\!\!\!\int_{D_{2r}} |u_\varepsilon -q|^2\right)^{1/2} +r\, \| g\|_{L^\infty(\Delta_{2r})}\bigg\} \end{equation} where $\beta=\alpha/2$ and $\eta(t)$ is given by (\ref{NP-eta}). The constant $C$ depends only on $\alpha$, $K_0$, and $A$. \end{lemma} \begin{proof} By rescaling we may assume $r=1$. By subtracting a constant we may assume that $\int_{D_2} u_\varepsilon =0$. Using Caccioppoli's inequality \begin{equation}\label{NP-C} \int_{D_{3/2}} |\nabla u_\varepsilon|^2 \le C \left\{ \int_{D_2} |u_\varepsilon|^2 +\int_{\Delta_2} |g|^2 \right\} \end{equation} and the co-area formula, it is not hard to see that there exists a $C^{1, \alpha}$ domain $\Omega$ such that $D_1\subset \Omega\subset D_{3/2}$ and \begin{equation}\label{NP-Lip-8-4} \int_{\partial\Omega} |\nabla u_\varepsilon|^2 \le C \left\{ \int_{D_2} |u_\varepsilon|^2 +\int_{\Delta_2} |g|^2 \right\}. \end{equation} Now let $w$ be the weak solution of $\mathcal{L}_0 (w)=0$ in $\Omega$ with $\frac{\partial w}{\partial \nu_0} =\frac{\partial u_\varepsilon}{\partial\nu_\varepsilon}$ on $\partial\Omega$ and $\int_\Omega w=\int_\Omega u_\varepsilon$. 
It follows from Theorem \ref{NP-rate-theorem-2} that \begin{equation}\label{NP-Lip-8-5} \| u_\varepsilon -w\|_{L^2(\Omega)} \le C\, \eta (\varepsilon) \| \nabla u_\varepsilon\|_{L^2(\partial\Omega)} \le C\, \eta (\varepsilon) \left\{ \| u_\varepsilon\|_{L^2(D_2)} +\| g\|_{L^2(\Delta_2)} \right\}, \end{equation} where we have used (\ref{NP-Lip-8-4}) for the last inequality. Since $\int_{D_2} u_\varepsilon =0$, this yields (\ref{NP-Lip-8-2}). \end{proof} \begin{theorem}\label{NP-Lip-theorem-8} Suppose that $A$ satisfies the same conditions as in Theorem \ref{main-theorem-Lip-N}. Let $u_\varepsilon\in H^1(D_{2r}; \mathbb{R}^m)$ be a weak solution of $\mathcal{L}_\varepsilon (u_\varepsilon)=0$ in $D_{2r}$ with $\frac{\partial u_\varepsilon}{\partial\nu_\varepsilon} =g$ on $\Delta_{2r}$. Then \begin{equation}\label{NP-Lip-8-10} \|\nabla u_\varepsilon\|_{L^\infty(D_r)} \le C\left\{ \left(-\!\!\!\!\!\!\int_{D_{2r}} |\nabla u_\varepsilon|^2 \right)^{1/2} +\| g\|_{L^\infty(\Delta_{2r})} +r^\beta \| g\|_{C^{0, \beta} (\Delta_{2r})} \right\}, \end{equation} where $\beta=\alpha/2$ and $C$ depends only on $(\alpha, K_0)$ and $A$. \end{theorem} \begin{proof} With Lemmas \ref{Lip-N-lemma-1}, \ref{decay-rate-lemma-8} and \ref{Lip-N-lemma-2} at our disposal, the theorem follows by the same line of argument as in the case of Dirichlet condition. By rescaling we may assume $r=1$. Let $$ \aligned \Phi (t)= & t^{-1}\inf_{\substack{ M\in \mathbb{R}^{m\times d}\\ q\in \mathbb{R}^m}} \bigg\{ \left(-\!\!\!\!\!\!\int_{D_t} |u_\varepsilon -Mx -q|^2 \right)^{1/2} +t \left\| g -\frac{\partial}{\partial \nu_0} \big( Mx \big)\right\|_{L^\infty(\Delta_t)} \\ &\qquad\qquad\qquad\qquad\qquad +t^{1+\beta} \left\| g -\frac{\partial}{\partial\nu_0} \big( Mx \big) \right\|_{C^{0,\beta} (\Delta_t)} \bigg\} \endaligned $$ for $0<t\le 1$. 
For each $\varepsilon<t\le 1$, let $w=w_t$ be the solution of $\mathcal{L}_0(w)=0$ in $D_t$ with $\frac{\partial w}{\partial \nu_0} =g$ on $\Delta_t$, given by Lemma \ref{Lip-N-lemma-2}. As in the case of Dirichlet condition, it follows from Lemma \ref{Lip-N-lemma-1} that $$ \Phi (\theta t) \le \frac12 \Phi (t) +\frac{C}{t} \left\{ -\!\!\!\!\!\!\int_{D_t} |u_\varepsilon -w|^2\right\}^{1/2}, $$ where $\theta \in (0,1/4)$ is given by Lemma \ref{Lip-N-lemma-1}. In view of Lemma \ref{Lip-N-lemma-2}, this leads to \begin{equation}\label{NP-Lip-8-20} \Phi (\theta t) \le \frac12 \Phi (t) +C \, \eta (\varepsilon/t) \left\{ \frac{1}{t} \inf_{q\in \mathbb{R}^m} \left(-\!\!\!\!\!\!\int_{D_{2t}} |u_\varepsilon -q|^2 \right)^{1/2} +\| g\|_{L^\infty(\Delta_{2t}) }\right\}. \end{equation} Now, let $r_j=\theta^{j+1}$ for $0\le j\le \ell$, where $\ell$ is chosen so that $\theta^{\ell+2}<\varepsilon\le \theta^{\ell +1}$. Let $$ F_j =\Phi (r_j) \quad \text{ and } \quad p_j =|M_j|, $$ where $M_j\in \mathbb{R}^{m\times d}$ is a matrix such that $$ \aligned \Phi (r_j)= & r^{-1}_j \bigg\{ \inf_{q\in \mathbb{R}^m} \left(-\!\!\!\!\!\!\int_{D_{r_j}} |u_\varepsilon -M_j x -q|^2 \right)^{1/2} +r_j \left\| g -\frac{\partial}{\partial\nu_0} \big( M_j x \big)\right\|_{L^\infty(\Delta_{r_j})} \\ &\qquad\qquad\qquad\qquad\qquad +r_j^{1+\beta} \left\| g-\frac{\partial}{\partial \nu_0} \big( M_j x \big) \right\|_{C^{0,\beta} (\Delta_{r_j})} \bigg\}. \endaligned $$ It follows from the estimate (\ref{NP-Lip-8-20}) that \begin{equation}\label{NP-Lip-8-30} F_{j+1} \le \frac12 F_j + C\, \eta (\varepsilon \theta^{-j-1}) \big\{ F_{j-1} + p_{j-1} \big\}. \end{equation} As in the proof of Theorem \ref{b-L-theoem-3}, we also have \begin{equation} p_{j+1} \le p_j +C \big\{ F_j +F_{j+1} \big\}. \end{equation} Furthermore, by Lemma \ref{decay-rate-lemma-8}, $$ \sum_{j=0}^\ell \eta (\varepsilon \theta^{-j-1}) \le C \int_0^1 \eta (t)\, \frac{dt}{t} <\infty. 
$$ Consequently, we may apply Lemma \ref{main-lemma-1} to obtain $$ \aligned F_j + p_j &\le C \big\{ p_0 +F_0 +F_1\big\}\\ &\le C \left\{ \left(-\!\!\!\!\!\!\int_{D_1} |u_\varepsilon|^2\right)^{1/2} + \| g\|_{C^\beta (\Delta_1)} \right\}. \endaligned $$ This, together with Caccioppoli's inequality, yields that for any $\varepsilon<t<(1/4)$, \begin{equation}\label{NP-Lip-8-40} \left\{ -\!\!\!\!\!\!\int_{D_t} |\nabla u_\varepsilon|^2 \right\}^{1/2} \le C \left\{ \left(-\!\!\!\!\!\!\int_{D_1} |u_\varepsilon|^2\right)^{1/2} + \| g\|_{C^\beta (\Delta_1)} \right\}. \end{equation} As in the case of Dirichlet condition, we may use a blow-up argument and (\ref{NP-Lip-8-40}) to show that the estimate above in fact holds for any $0<t<(1/4)$. Finally, we observe that the estimate (\ref{NP-Lip-8-10}) follows from (\ref{NP-Lip-8-40}) and the interior Lipschitz estimates. \end{proof} \begin{remark}\label{remark-NP} {\rm Let $\Omega$ be a bounded $C^{1,\alpha}$ domain in $\mathbb{R}^d$. Let $N_\varepsilon(x,y)$ denote the matrix of Neumann functions for $\mathcal{L}_\varepsilon$ in $\Omega$, with pole at $y$; i.e., $$ \left\{ \aligned \mathcal{L}_\varepsilon \big\{ N_\varepsilon (\cdot, y) \big\} & = I_{m\times m} \delta_y (x) &\quad &\text{ in } \Omega,\\ \frac{\partial}{\partial \nu_\varepsilon} \big\{ N_\varepsilon (\cdot, y) \big\} & =-|\partial\Omega|^{-1} I_{m\times m} &\quad & \text{ on } \partial\Omega, \endaligned \right. $$ where $I_{m\times m}$ denotes the $m\times m$ identity matrix. Suppose that $A$ satisfies the conditions in Theorem \ref{main-theorem-Lip-N}. Since $A^*$ also satisfies the same conditions, it follows from Theorem \ref{NP-Lip-theorem-8} that if $d\ge 3$, \begin{equation}\label{N-F-estimate} \left\{ \aligned |N_\varepsilon (x,y)| & \le C\, |x-y|^{2-d},\\ |\nabla_x N_\varepsilon (x,y)| +|\nabla_y N_\varepsilon (x,y)| & \le C\, |x-y|^{1-d},\\ |\nabla_x\nabla_y N_\varepsilon (x,y)| & \le C\, |x-y|^{-d} \endaligned \right. 
\end{equation} for any $x,y\in \Omega$, $x\neq y$, where $C$ depends only on $A$ and $\Omega$. We refer the reader to \cite{KLS1} for the proof in the periodic setting. } \end{remark} We now give the proof of Theorem \ref{main-theorem-Lip-N}. \begin{proof}[\bf Proof of Theorem \ref{main-theorem-Lip-N}] It suffices to show that if $\mathcal{L}_\varepsilon (u_\varepsilon) =F$ in $D_{2r}$ and $\frac{\partial u_\varepsilon}{\partial \nu_\varepsilon} =g$ on $\Delta_{2r}$ for some $0<r<1$, then \begin{equation}\label{NP-Lip-8-100} \aligned \|\nabla u_\varepsilon\|_{L^\infty(D_r)} &\le C\left(-\!\!\!\!\!\!\int_{D_{2r}} |\nabla u_\varepsilon|^2\right)^{1/2} + C \, \| g\|_{L^\infty(\Delta_{2r})} + C\, r^\beta \|g\|_{C^{0, \beta}(\Delta_{2r})}\\ &\qquad\qquad\qquad\qquad +C r^\beta \sup_{\substack{x\in D_{2r}\\ 0<t<r }} t^{1-\beta} -\!\!\!\!\!\!\int_{B(x,t)\cap D_{2r}} |F|. \endaligned \end{equation} By rescaling we may assume $r=1$. The case $F=0$ is given by Theorem \ref{NP-Lip-theorem-8}. To deal with the general case, we assume $d\ge 3$ (the case $d=2$ is reduced to the case $d=3$ by adding a dummy variable). Let $\Omega$ be a bounded $C^{1,\alpha}$ domain such that $D_{3/2}\subset \Omega\subset D_2$. Let $N_\varepsilon (x,y)$ denote the matrix of Neumann functions for $\mathcal{L}_\varepsilon$ in $\Omega$, with pole at $y$. Let $v_\varepsilon (x)=\int_\Omega N_\varepsilon (x,y) F(y)\, dy$. Note that by (\ref{N-F-estimate}), $$ |\nabla v_\varepsilon (x)|\le C\int_\Omega \frac{|F(y)|}{|x-y|^{d-1}}\, dy \le C \sup_{\substack{x\in D_{2}\\ 0<t<1 }} t^{1-\beta} -\!\!\!\!\!\!\int_{B(x,t)\cap D_{2}} |F|. $$ By considering $u_\varepsilon -v_\varepsilon$, we may reduce the general case to the case $F=0$. \end{proof}
\section{Introduction} In~\cite{strassen:1969}, V.\ Strassen presented a noncommutative algorithm for multiplying two~$\matrixsize{2}{2}$ matrices using only~$7$ multiplications. The current upper bound of~$23$ multiplications for~$\matrixsize{3}{3}$ matrix multiplication was reached by J.B.\ Laderman in~\cite{laderman:1976a}. This note presents a \emph{geometric} relationship between the Strassen and Laderman algorithms. By doing so, we retrieve a \emph{geometric} formulation of results very similar to those presented by O.\ S\'ykora in~\cite{sykora:1977a}. \subsection{Disclaimer: there is no improvement in this note}\label{sec:disclaimer} We do not improve any practical algorithm or prove any theoretical bound in this short note; we focus instead on the effective manipulation of the tensors associated to matrix multiplication algorithms. To do so, we present only the minimal number of needed definitions and thus leave many facts outside our scope. We refer to~\cite{landsberg:2010} for a complete description of the field and to~\cite{Ambainis:2014aa} for a state-of-the-art presentation of theoretical complexity issues. \subsection{So, why write (or read) it?} We follow the geometric spirit of~\cite{Grochow:2016aa,Chiantini:2016aa,Burgisser:2015aa,burichenko:2015,burichenko:2014} and related papers: symmetries could be used in the practical design of matrix multiplication algorithms. Hence, this note presents another example of this philosophy by giving a precise geometric meaning to the following statement: \begin{quote} The Laderman matrix multiplication algorithm is composed of four optimal~$\matrixsize{2}{2}$ matrix multiplication algorithms, half of the classical~$\matrixsize{2}{2}$ matrix multiplication algorithm and a correction term. \end{quote} \section{Framework} To do so, we have to present a small part of the classical framework (for a complete presentation see~{\cite{groot:1978a,groot:1978,landsberg:2010}}), mainly because we do not take it literally and only use a simplified version. 
Let us start with some basic definitions and notations, such as the following generic matrices: \small% \begin{equation} \label{eq:1} A=\!\left(% \begin{array}{ccc} {a_{11}}&{a_{12}}&{a_{13}}\\ {a_{21}}&{a_{22}}&{a_{23}}\\ {a_{31}}&{a_{32}}&{a_{33}} \end{array}\right) \!, \ B=\!\left(% \begin{array}{ccc} {b_{11}}&{b_{12}}&{b_{13}}\\ {b_{21}}&{b_{22}}&{b_{23}}\\ {b_{31}}&{b_{32}}&{b_{33}} \end{array}\right)\!,\ C=\!\left(% \begin{array}{ccc} {c_{11}}&{c_{12}}&{c_{13}}\\ {c_{21}}&{c_{22}}&{c_{23}}\\ {c_{31}}&{c_{32}}&{c_{33}} \end{array}\right)\!, \end{equation} \normalsize% that will be used in the sequel. Furthermore, as we also consider their~$\matrixsize{2}{2}$ submatrices, let us introduce some associated notations. \begin{notations}\label{def:MatrixProjection} Let~$n,i,j$ be positive integers such that~${i\leq n}$ and~${j\leq n}$. We denote by~$\matrixprojectionoperator{n}{j}$ the identity~$\matrixsize{n}{n}$ matrix whose~$j$th diagonal term is~$0$. Given a~$\matrixsize{n}{n}$ matrix~$A$, we denote by~$\MatrixZero{j}{k}{A}$ the matrix~${\matrixprojectionoperator{n}{j}\cdot A\cdot \matrixprojectionoperator{n}{k}}$. For example, the matrices~$\MatrixZero{3}{3}{A}$,~$\MatrixZero{3}{2}{B}$ and~$\MatrixZero{2}{3}{C}$ are: \begin{equation} \label{eq:3} \left(% \begin{array}{ccc} {a_{11}}&{a_{12}}&{0}\\ {a_{21}}&{a_{22}}&{0}\\ {0}&{0}&{0} \end{array}\right)\!, \quad \left(% \begin{array}{ccc} {b_{11}}&{0}&{b_{13}}\\ {b_{21}}&{0}&{b_{23}}\\ {0}&{0}&{0} \end{array}\right) \quad \textup{and}\quad \left(% \begin{array}{ccc} {c_{11}}&{c_{12}}&{0}\\ {0}&{0}&{0}\\ {c_{31}}&{c_{32}}&{0} \end{array}\right)\!. \end{equation} Given a~$\matrixsize{n}{n}$ matrix~$A$, we sometimes consider~$\MatrixZero{i}{j}{A}$ as the~$\matrixsize{(n-1)}{(n-1)}$ matrix~$\MatrixProjection{i}{j}{A}$ where the row and the column composed of~$0$ are removed. 
\par Conversely, given any~$\matrixsize{(n-1)}{(n-1)}$ matrix~$A$, we denote by~$\MatrixLift{i}{j}{A}$ the~$\matrixsize{n}{n}$ matrix where a row and a column of~$0$ are added to~$A$ in order to have~${\MatrixProjection{i}{j}{\MatrixLift{i}{j}{A}}=A}$. \end{notations} \subsection{Strassen multiplication algorithm} Considered as~$\matrixsize{2}{2}$ matrices, the matrix product~${\MatrixProjection{3}{3}{C}=\MatrixProjection{3}{3}{A}\cdot\MatrixProjection{3}{3}{B}}$ could be computed using the Strassen algorithm (see~\cite{strassen:1969}) by performing the following computations: \begin{equation} \label{eq:StrassenMultiplicationAlgorithm} \begin{aligned} \begin{aligned} t_{1} &= (a_{11} + a_{22}) (b_{11} + b_{22}), &t_{2} & = (a_{12} - a_{22})(b_{21} + b_{22}), \\ t_{3} &= (-a_{11} + a_{21}) (b_{11} + b_{12}), &t_{4} & =(a_{11}+a_{12})b_{22}, \end{aligned} \\ \begin{aligned} t_{5} = a_{11} (b_{12} - b_{22}),\ t_{6} = a_{22} (-b_{11} + b_{21}),\ t_{7} = (a_{21} + a_{22}) b_{11}, \end{aligned}\\ \begin{aligned} c_{11} &= t_{1} + t_{2} - t_{4} + t_{6}, & c_{12} &= t_{6} + t_{7}, \\ c_{21} &= t_{4} + t_{5}, &c_{22} &= t_{1} + t_{3} + t_{5} -t_{7}. \end{aligned} \end{aligned} \end{equation} In order to consider the above algorithm from a geometric standpoint, it is usually presented as a tensor. \subsection{Bilinear mappings seen as tensors and associated trilinear forms} \begin{definitions}\label{def:tensor} Given a tensor~$\tensor{T}$ decomposable as a sum of rank-one tensors: \begin{equation} \label{eq:5} \tensor{T}=\sum_{i=1}^{r} T_{i1}\otimes T_{i2}\otimes T_{i3}, \end{equation} where~$T_{ij}$ are~$\matrixsize{n}{n}$ matrices: \begin{itemize} \item the integer~$r$ is the \emph{tensor rank} of tensor~$\tensor{T}$; \item the unordered list~${[{(\matrixrank{T_{ij}})}_{j=1\ldots 3}]}_{i=1\ldots r}$ is called the \emph{type} of tensor~$\tensor{T}$ ($\matrixrank{A}$ being the classical rank of the matrix~$A$). 
\end{itemize} \end{definitions} \subsection{Tensors' contractions} To make explicit the relationship between what is done in the sequel and the bilinear mapping associated to matrix multiplication, let us consider the following tensor contractions: \begin{definitions}\label{def:contractions} Using the notations of definition~\ref{def:tensor}, given a tensor~$\tensor{T}$ and three~$\matrixsize{n}{n}$ matrices~$A,B$ and~$C$ with coefficients in the algebra~$\mathbb{K}$: \begin{itemize} \item the~$(1,2)$ contraction of~$\tensor{T}\otimes A\otimes B$ defined by: \begin{equation} \label{eq:6} \sum_{i=1}^{r} \textup{Trace} (\Transpose{T_{i1}} \cdot A)\, \textup{Trace} (\Transpose{T_{i2}} \cdot B) T_{i3} \end{equation} corresponds to a bilinear map~$\mathbb{K}^{\matrixsize{n}{n}}\times \mathbb{K}^{\matrixsize{n}{n}} \rightarrow \mathbb{K}^{\matrixsize{n}{n}}$ with indeterminates~$A$ and~$B$. \item the~$(1,2,3)$ (a.k.a.\ full) contraction of~$\tensor{T}\otimes A\otimes B\otimes C$ defined by: \begin{equation} \label{eq:7a} {\left\langle\tensor{T} | A \otimes B \otimes C \right\rangle} = \sum_{i=1}^{r} \textup{Trace} (\Transpose{T_{i1}} \cdot A)\, \textup{Trace} (\Transpose{T_{i2}} \cdot B)\, \textup{Trace}(\Transpose{T_{i3}}\cdot C) \end{equation} corresponds to a trilinear form~$\mathbb{K}^{\matrixsize{n}{n}}\times \mathbb{K}^{\matrixsize{n}{n}} \times \mathbb{K}^{\matrixsize{n}{n}} \rightarrow \mathbb{K}$ with indeterminates~$A,B$ and~$C$. \end{itemize} \end{definitions} \begin{remarks} As the studied object is the tensor, its expressions as full or incomplete contractions are equivalent. Thus, even if matrix multiplication is a bilinear map, we are going to work in the sequel with trilinear forms (see~\cite{Dumas:2016aa} for bibliographic references on this standpoint). 
\par The definitions in~\ref{def:contractions} are chosen to express the full contraction as a degenerate inner product between tensors; this is not the usual choice made in the literature and so, we have to explicitly recall some notions used in the sequel. \end{remarks} The Strassen multiplication algorithm~(\ref{eq:StrassenMultiplicationAlgorithm}) is equivalent to the tensor~$\tensor{S}$ defined by:\par \footnotesize% \begin{equation} \label{eq:4} \begin{aligned} & \left(\! \begin{array}{cc} 1&0\\ 0&1\\ \end{array} \!\right) \otimes{} \left(\! \begin{array}{cc} 1&0\\ 0&1\\ \end{array} \!\right) \otimes{} \left(\! \begin{array}{cc} 1&0\\ 0&1\\ \end{array} \!\right) &+ \left(\! \begin{array}{cc} 0&1\\ 0&-1\\ \end{array} \!\right) \otimes{} \left(\! \begin{array}{cc} 0&0\\ 1&1\\ \end{array} \!\right) \otimes{} \left(\! \begin{array}{cc} 1&0\\ 0&0\\ \end{array} \!\right) + \\[\smallskipamount] & \left(\! \begin{array}{cc} -1&0\\ 1&0\\ \end{array} \!\right) \otimes{} \left(\! \begin{array}{cc} 1&1\\ 0&0\\ \end{array} \!\right) \otimes{} \left(\! \begin{array}{cc} 0&0\\ 0&1\\ \end{array} \!\right) &+ \left(\! \begin{array}{cc} 1&1\\ 0&0\\ \end{array} \!\right) \otimes{} \left(\! \begin{array}{cc} 0&0\\ 0&1\\ \end{array} \!\right) \otimes{} \left(\! \begin{array}{cc} -1&0\\ 1&0\\ \end{array} \!\right) +\\[\smallskipamount] & \left(\! \begin{array}{cc} 1&0\\ 0&0\\ \end{array} \!\right) \otimes{} \left(\! \begin{array}{cc} 0&1\\ 0&-1\\ \end{array} \!\right) \otimes{} \left(\! \begin{array}{cc} 0&0\\ 1&1\\ \end{array} \!\right) &+ \left(\! \begin{array}{cc} 0&0\\ 0&1\\ \end{array} \!\right) \otimes{} \left(\! \begin{array}{cc} -1&0\\ 1&0\\ \end{array} \!\right) \otimes{} \left(\! \begin{array}{cc} 1&1\\ 0&0\\ \end{array} \!\right) + \\[\smallskipamount] & \left(\! \begin{array}{cc} 0&0\\ 1&1\\ \end{array} \!\right) \otimes{} \left(\! \begin{array}{cc} 1&0\\ 0&0\\ \end{array} \!\right) \otimes{} \left(\! \begin{array}{cc} 0&1\\ 0&-1\\ \end{array} \!\right)\!. 
\end{aligned} \end{equation} \normalsize% This tensor defines the matrix multiplication algorithm~(\ref{eq:StrassenMultiplicationAlgorithm}) and its tensor rank is~$7$. \subsection{$\matrixsize{2}{2}$ matrix multiplication tensors induced by a~$\matrixsize{3}{3}$ matrix multiplication tensor}\label{sec:Projection} Given any~$\matrixsize{3}{3}$ matrix multiplication tensor, one can define~$3^{3}$ induced~$\matrixsize{2}{2}$ matrix multiplication tensors as shown in this section. First, let us introduce the following operators that generalize the notations~\ref{def:MatrixProjection} to tensors: \begin{definitions}\label{def:Projection} Using the notations introduced in definition~\ref{def:MatrixProjection}, we define: \begin{subequations} \begin{align} \label{eq:14a} \TensorZero{i}{j}{k}{A\otimes{} B\otimes{} C}= \MatrixZero{i}{j}{A} \otimes{} \MatrixZero{j}{k}{B} \otimes{} \MatrixZero{k}{i}{C}, \\ \label{eq:14b} \TensorProjection{i}{j}{k}{A\otimes{} B\otimes{} C}= \MatrixProjection{i}{j}{A} \otimes{} \MatrixProjection{j}{k}{B} \otimes{} \MatrixProjection{k}{i}{C}, \\ \label{eq:14c} \TensorLift{i}{j}{k}{A\otimes{} B\otimes{} C}= \MatrixLift{i}{j}{A} \otimes{} \MatrixLift{j}{k}{B} \otimes{} \MatrixLift{k}{i}{C} \end{align} \end{subequations} and we extend these operators by additivity in order to apply them to any tensor~$\tensor{T}$ described in definition~\ref{def:tensor}. 
\end{definitions} There are~$n^{3}$ such projections and, given any matrix multiplication tensor~$\tensor{M}$, the full contraction satisfies the following trivial properties: \begin{equation} \label{eq:16} \left\langle \tensor{M} \,\big|\,\TensorZero{i}{j}{k}{A\otimes{} B\otimes{} C} \right\rangle = \left\langle \TensorZero{i}{j}{k}{\tensor{M}} \,\big|\,A\otimes{} B \otimes{} C \right\rangle = \left\langle \TensorProjection{i}{j}{k}{\tensor{M}}\, \big|\, \TensorProjection{i}{j}{k}{A\otimes{} B\otimes{} C} \right\rangle \end{equation} (where the projection operators apply to an~$\matrixsize{n}{n}$ matrix multiplication tensor); this defines explicitly a~$\matrixsize{(n-1)}{(n-1)}$ matrix multiplication tensor. \par The following property holds: \begin{lemma} \begin{equation} \label{eq:15} {(n-1)}^{3}\left\langle\tensor{M} | A \otimes{} B \otimes{} C \right\rangle = \sum_{1\leq i,j,k \leq n} \left\langle \tensor{M} \Big| \TensorZero{i}{j}{k}{A\otimes{} B\otimes{} C}\right\rangle \end{equation} and thus, we have: \begin{equation} \label{eq:16a} \left\langle \tensor{M} | A \otimes{} B \otimes{} C \right\rangle = \left\langle \frac{1}{{(n-1)}^{3}}\sum_{1\leq i,j,k \leq n} \TensorZero{i}{j}{k}{\tensor{M}}\Big| A \otimes{} B \otimes{} C \right\rangle\!. \end{equation} \end{lemma} The observations made in this section underline the relationships between any~$\matrixsize{n}{n}$ matrix multiplication tensor and the~$n^{3}$ induced~$\matrixsize{(n-1)}{(n-1)}$ algorithms. \par Considering the Laderman matrix multiplication tensor, we are going to explore this kind of relationship further. First, let us introduce this tensor. 
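Before doing so, the lemma above can be checked numerically. The sketch below (assuming {\tt numpy}; indices are $0$-based whereas the note is $1$-based, and all variable names are illustrative) implements the full contraction of definition~\ref{def:contractions} and writes the classical $\matrixsize{n}{n}$ multiplication tensor in the trace-form convention $\left\langle\tensor{M} | A \otimes{} B \otimes{} C \right\rangle=\textup{Trace}(A\cdot B\cdot C)$:

```python
import numpy as np

rng = np.random.default_rng(7)

def E(n, i, j):
    # elementary matrix with a single 1 in position (i, j)
    M = np.zeros((n, n))
    M[i, j] = 1.0
    return M

def P(n, j):
    # identity matrix with the j-th diagonal term set to 0
    M = np.eye(n)
    M[j, j] = 0.0
    return M

def contract(tensor, A, B, C):
    # full (1,2,3) contraction: sum_i Tr(T1^T A) Tr(T2^T B) Tr(T3^T C)
    return sum(np.trace(T1.T @ A) * np.trace(T2.T @ B) * np.trace(T3.T @ C)
               for T1, T2, T3 in tensor)

def matmul_tensor(n):
    # classical n x n multiplication tensor, trace-form convention:
    # <M | A (x) B (x) C> = Trace(A.B.C)
    return [(E(n, i, j), E(n, j, k), E(n, k, i))
            for i in range(n) for j in range(n) for k in range(n)]

n = 3
M = matmul_tensor(n)
A, B, C = (rng.standard_normal((n, n)) for _ in range(3))

# sanity check of the trace-form convention
assert np.isclose(contract(M, A, B, C), np.trace(A @ B @ C))

# the lemma: (n-1)^3 <M|A(x)B(x)C> equals the sum over the n^3 zeroed triples
lhs = (n - 1) ** 3 * contract(M, A, B, C)
rhs = sum(contract(M, P(n, i) @ A @ P(n, j),
                      P(n, j) @ B @ P(n, k),
                      P(n, k) @ C @ P(n, i))
          for i in range(n) for j in range(n) for k in range(n))
assert np.isclose(lhs, rhs)
```

For $n=3$ this checks the identity against $8\,\textup{Trace}(A\cdot B\cdot C)$, summing over the $27$ zeroing projections.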
\subsection{Laderman matrix multiplication tensor} The Laderman tensor~$\tensor{L}$ is described below by giving its full contraction: \begin{equation} \label{eq:17} \begin{array}{lc} \left( { a_{11}}-{ a_{21}}+{ a_{12}}-{ a_{22}}-{ a_{32}}+{ a_{13}}-{ a_{33}} \right) { b_{22}} \,{ c_{21}} &+ \\{ a_{22}}\, \left( -{ b_{11}}+{ b_{21}}-{ b_{31}}+{ b_{12}}-{ b_{22}}-{ b_{23}}+{ b_{33}} \right) { c_{12}} &+\\ { a_{13}}\,{ b_{31}}\, \left( { c_{11}}+{ c_{21}}+{ c_{31}}+{ c_{12} }+{ c_{32}}+{ c_{13}}+{ c_{23}} \right) &+ \\ \left( { a_{11}}-{ a_{31}}+{ a_{12}}-{ a_{22}}-{ a_{32}}+{ a_{13}}-{ a_{23}} \right) { b_{23}}\,{ c_{31}}&+\\{ a_{32}}\, \left( -{ b_{11}}+{ b_{21}}-{ b_{31}}-{ b_{22}}+{ b_{32}}+{ b_{13}}-{ b_{23}} \right) { c_{13}} & + \\ { a_{11}}\,{ b_{11}}\, \left( { c_{11}}+{ c_{21}}+{ c_{31}}+{ c_{12}}+{ c_{22}}+{ c_{13}}+{ c_{33}} \right) &+ \\ \left( -{ a_{11}}+{ a_{31}}+{ a_{32}} \right) \left( { b_{11}}-{ b_{13}}+{ b_{23}} \right) \left( { c_{31}}+{ c_{13}}+{ c_{33}} \right) &+ \\ \left( { a_{22}}-{ a_{13}}+{ a_{23}} \right) \left( { b_{31}}+{ b_{23}}-{ b_{33}} \right) \left( { c_{31}}+{ c_{12}}+{ c_{32}} \right) & + \\ \left( -{ a_{11}}+{ a_{21}}+{ a_{22}} \right) \left( { b_{11}}-{ b_{12}}+{ b_{22}} \right) \left( { c_{21}}+{ c_{12}}+{ c_{22}} \right) & + \\ \left( { a_{32}}-{ a_{13}}+{ a_{33}} \right) \left( { b_{31}}+{ b_{22}}-{ b_{32}} \right) \left( { c_{21}}+{ c_{13}}+{ c_{23}} \right) &+\\ \left( { a_{21}}+{ a_{22}} \right) \left( -{ b_{11}}+{ b_{12}} \right) \left( { c_{21}}+{ c_{22}} \right)& + \\ \left( { a_{31}}+{ a_{32}} \right) \left( -{ b_{11}}+{ b_{13}} \right) \left( { c_{31}}+{ c_{33}} \right) & + \\ \left( { a_{13}}-{ a_{33}} \right) \left( { b_{22}}-{ b_{32}} \right) \left( { c_{13}}+{ c_{23}} \right) &+\\ \left( { a_{11}}-{ a_{21}} \right) \left( -{ b_{12}}+{ b_{22}} \right) \left( { c_{12}}+{ c_{22}} \right) &+\\ \left( { a_{32}}+{ a_{33}} \right) \left( -{ b_{31}}+{ b_{32}} \right) \left( { c_{21}}+{ c_{23}} \right)&+\\ \left( -{ 
a_{11}}+{ a_{31}} \right) \left( { b_{13}}-{ b_{23}} \right) \left( { c_{13}}+{ c_{33}} \right) &+\\ \left( { a_{13}}-{ a_{23}} \right) \left( { b_{23}}-{ b_{33}} \right) \left( { c_{12}}+{ c_{32}} \right) &+\\ \left( { a_{22}}+{ a_{23}} \right) \left( -{ b_{31}}+{ b_{33}} \right) \left( { c_{31}}+{ c_{32}} \right) &+\\ { a_{12}}\,{ b_{21}}\,{ c_{11}}+{ a_{23}}\,{ b_{32}}\,{ c_{22}} + { a_{21}}\,{ b_{13}}\,{ c_{32}}+{ a_{31}}\,{ b_{12}}\,{ c_{23}}+{ a_{33}}\,{ b_{33}}\,{ c_{33}} \end{array} \end{equation} and was introduced in~\cite{laderman:1976a} (we do not study in this note any other \emph{inequivalent} algorithm of the same tensor rank, e.g.~\cite{johnson:1986a,courtois:2011,oh:2013a,smirnov:2013a}, etc.). Considering the projections introduced in definition~\ref{def:Projection}, we notice that: \begin{remark} The Laderman matrix multiplication tensor defines~$4$ optimal~$\matrixsize{2}{2}$ matrix multiplication tensors~$\TensorProjection{i}{j}{k}{\tensor{L}}$, with~${(i,j,k)}$ in~${\lbrace (2,1,3),(2,3,2), (3,1,2),(3,3,3)\rbrace}$, and~$23$ others with tensor rank~$8$. \end{remark} Further computations show that: \begin{remark} The type of the Laderman matrix multiplication tensor is \begin{equation} \label{eq:18} \big[ \repeated{(2,2,2)}{4}, \repeated{((1,3,1), (3,1,1), (1,1,3))}{2}, \repeated{(1,1,1)}{13} \big] \end{equation} where~$\repeated{m}{n}$ indicates that~$m$ is repeated~$n$ times. 
\end{remark} \subsection{Tensors' isotropies} We refer to~\cite{groot:1978a, groot:1978} for a complete presentation of the automorphism group operating on the varieties defined by algorithms for the computation of bilinear mappings and as a reference for the following theorem: \begin{theorem} The isotropy group of the~$\matrixsize{n}{n}$ matrix multiplication tensor is \begin{equation} \label{eq:2} {\mathsc{pgl}({\mathbb{C}}^{n})}^{\times 3} \rtimes \mathfrak{S}_{3}, \end{equation} where~$\mathsc{pgl}$ stands for the projective linear group and~$\mathfrak{S}_{3}$ for the symmetric group on~$3$ elements. \end{theorem} Even if we do not make the concrete action of this isotropy group on the matrix multiplication tensor completely explicit, let us fix some terminology: \begin{definitions} Given a tensor defining matrix multiplication computations, the orbit of this tensor is called the \emph{multiplication algorithm} and any of the points composing this orbit is a \emph{variant} of this algorithm. \end{definitions} \begin{remark} As shown in~\cite{Gesmundo:2016aa}, matrix multiplication is characterised by its isotropy group. \end{remark} \begin{remark} In this note, we only need the~${\mathsc{pgl}({\mathbb{C}}^{n})}^{\times 3}$ part of this group (a.k.a.\ sandwiching) and thus focus on it in the sequel. \end{remark} As our framework and notations differ slightly from the framework classically found in the literature, we have to explicitly define several well-known notions for the sake of clarity.
Hence, let us recall the \emph{sandwiching} action: \begin{definition} Given an element~${\Isotropy{g}={(G_{1}\times G_{2} \times G_{3})}}$ of~${\mathsc{pgl}({\mathbb{C}}^{n})}^{\times 3}$, its action on a tensor~$\tensor{T}$ is given by: \begin{equation} \label{eq:7} \begin{aligned} \IsotropyAction{\Isotropy{g}}{\tensor{T}} &= \sum_{i=1}^{r} \IsotropyAction{\Isotropy{g}}{(T_{i1}\otimes{} T_{i2}\otimes{} T_{i3})}, \\ \IsotropyAction{\Isotropy{g}}{(T_{i1}\otimes{} T_{i2}\otimes{} T_{i3})} &= \left( \Transpose{G_{1}^{-1}} T_{i1}\Transpose{G_{2}} \right) \otimes{} \left( \Transpose{G_{2}^{-1}} T_{i2}\Transpose{G_{3}} \right) \otimes{} \left( \Transpose{G_{3}^{-1}} T_{i3}\Transpose{G_{1}} \right)\!. \end{aligned} \end{equation} \end{definition} \begin{example} Let us consider the action of the following isotropy \begin{equation} \label{eq:8} \left(% \begin{array}{cc} 0 & 1/\lambda \\ -1 & 0 \end{array}\right) \times\left(% \begin{array}{cc} 1/\lambda & -1/\lambda \\ 0 & 1 \end{array}\right) \times\left(% \begin{array}{cc} -1/\lambda & 0 \\ 1 & -1 \end{array}\right) \end{equation} on the Strassen variant of the Strassen algorithm. The resulting tensor~$\tensor{W}$ is: \par \scriptsize% \begin{equation} \label{eq:10} \begin{aligned} \sum_{i=1}^{7} w_{i} &= \left(\!\! \begin{array}{cc} -1&\lambda\\ -\frac{1}{\lambda}&0\\ \end{array} \!\!\right) \!\otimes\! \left(\!\! \begin{array}{cc} 1&-\lambda\\ \frac{1}{\lambda}&0\\ \end{array} \!\!\right) \!\otimes\! \left(\!\! \begin{array}{cc} 1&-\lambda\\ \frac{1}{\lambda}&0\\ \end{array} \!\!\right) + \left(\!\! \begin{array}{cc} -1&\lambda\\ -\frac{1}{\lambda}&1\\ \end{array} \!\!\right) \!\otimes\! \left(\!\! \begin{array}{cc} 0&0\\ 1&0\\ \end{array} \!\!\right) \!\otimes\! \left(\!\! \begin{array}{cc} 0&1\\ 0&0\\ \end{array} \!\!\right) \\[\smallskipamount] & + \left(\!\! \begin{array}{cc} 1&0\\ \frac{1}{\lambda}&0\\ \end{array} \!\!\right) \!\otimes\! \left(\!\! \begin{array}{cc} 1&0\\ \frac{1}{\lambda}&0\\ \end{array} \!\!\right) \!\otimes\! \left(\!\! \begin{array}{cc} 1&0\\ \frac{1}{\lambda}&0\\ \end{array} \!\!\right) + \left(\!\! \begin{array}{cc} 0&0\\ 0&1\\ \end{array} \!\!\right) \!\otimes\! \left(\!\! \begin{array}{cc} 0&0\\ 0&1\\ \end{array} \!\!\right) \!\otimes\! \left(\!\! \begin{array}{cc} 0&0\\ 0&1\\ \end{array} \!\!\right) \\[\smallskipamount] & + \left(\!\! \begin{array}{cc} 0&0\\ 1&0\\ \end{array} \!\!\right) \!\otimes\! \left(\!\! \begin{array}{cc} 0&1\\ 0&0\\ \end{array} \!\!\right) \!\otimes\! \left(\!\! \begin{array}{cc} -1&\lambda\\ -\frac{1}{\lambda}&1\\ \end{array} \!\!\right) + \left(\!\! \begin{array}{cc} 1&-\lambda\\ 0&0\\ \end{array} \!\!\right) \!\otimes\! \left(\!\! \begin{array}{cc} 1&-\lambda\\ 0&0\\ \end{array} \!\!\right) \!\otimes\! \left(\!\! \begin{array}{cc} 1&-\lambda\\ 0&0\\ \end{array} \!\!\right) \\[\smallskipamount] &+ \left(\!\! \begin{array}{cc} 0&1\\ 0&0\\ \end{array} \!\!\right) \!\otimes\! \left(\!\! \begin{array}{cc} -1&\lambda\\ -\frac{1}{\lambda}&1\\ \end{array} \!\!\right) \!\otimes\! \left(\!\! \begin{array}{cc} 0&0\\ 1&0\\ \end{array} \!\!\right) \end{aligned} \end{equation} \normalsize% which is the well-known Winograd variant of the Strassen algorithm. \end{example} \begin{remarks} We keep the parameter~$\lambda$, although it plays no role in our presentation, as a tribute to the construction made in~\cite{chatelin:1986a} that gives an elegant and elementary (i.e.\ based on matrix eigenvalues) construction of the Winograd variant of the Strassen matrix multiplication algorithm. \par This variant is remarkable in its own right as shown in~\cite{bshouty:1995a} because it is optimal w.r.t.\ multiplicative \emph{and} additive complexity. \end{remarks} \begin{remark} A tensor's type is an invariant of the isotropy action. Hence, two tensors in the same orbit share the same type. Equivalently, two tensors with the same type are two variants that represent the same matrix multiplication algorithm.
\end{remark} This remark will allow us in Section~\ref{sec:ResultingTensor} to recognise the tensor constructed below as a variant of the Laderman matrix multiplication algorithm. \section{A tensor's construction}\label{sec:LadermanWinogradConstruction} Let us now present the construction of a variant of Laderman matrix multiplication algorithm based on Winograd variant of Strassen matrix multiplication algorithm. \par First, let us give the full contraction of the tensor~$\TensorLift{1}{1}{1}{\tensor{W}}\otimes{} A \otimes{} B \otimes{} C$: \begin{subequations} \begin{align} \label{eq:FULLWIN1} \left( -{ a_{22}}-{\frac {{ a_{32}}}{\lambda}}+\lambda{ a_{23}} \right) \left( { b_{22}}+{ \frac {{ b_{32}}}{\lambda}}-\lambda{ b_{23}} \right) \left( { c_{22}}+{\frac {{ c_{32}}}{\lambda}}-\lambda{ c_{23}} \right) &+ \\ \left( { a_{22}}-\lambda{ a_{23}} \right) \left( { b_{22}}-\lambda{ b_{23}} \right) \left( { c_{22}}-\lambda{ c_{23}} \right) &+ \\ \left( { a_{22}}+{\frac {{ a_{32}}}{\lambda}} \right) \left( { b_{22}}+{\frac {{ b_{32}}}{\lambda}} \right) \left( { c_{22}}+{\frac {{ c_{32}}}{\lambda}} \right) &+ \\ \label{eq:FW1E1}\mathcolor{blue}{{ a_{23}}\, \left( -{ b_{22}}-{\frac {{ b_{32}}}{\lambda}}+\lambda{ b_{23}}+{ b_{33}} \right) { c_{32}}} &+\\ \label{eq:FW1E2}\mathcolor{blue}{\left( -{ a_{22}}-{\frac {{ a_{32}}}{\lambda}}+\lambda{ a_{23}}+{ a_{33}} \right) { b_{32}}\,{ c_{23}}} &+\\ \label{eq:FW1E3}\mathcolor{blue}{{ a_{32}}\,{ b_{23}}\, \left( -{ c_{22}}-{\frac {{ c_{32}}}{\lambda}}+\lambda{ c_{23}}+{ c_{33}} \right)} &+\\ \label{eq:FP1}\mathcolor{cyan}{{ a_{33}}\,{ b_{33}}\,{c_{33}}} \end{align} \end{subequations} \subsection{A Klein four-group of isotropies}\label{sec:KleinFourGroup} Let us introduce now the following notations: \begin{equation} \label{eq:12} \IdMat{3}=\left(% \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array}\right)\! 
\quad \textup{and}\quad P_{(12)}=\left(% \begin{array}{ccc} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{array}\right) \end{equation} used to define the following group of isotropies: \begin{equation} \label{eq:KleinFourGroup} \Group{K} = \left\lbrace \begin{array}{cc} \Isotropy{g_{1}}={\IdMat{3}}^{\times 3}, & \Isotropy{g_{2}} =\left(\IdMat{3} \times P_{(12)} \times P_{(12)}\right)\!, \\ \Isotropy{g_{3}}=\left(P_{(12)} \times P_{(12)} \times \IdMat{3}\right)\!, & \Isotropy{g_{4}} =\left(P_{(12)} \times \IdMat{3} \times P_{(12)}\right) \end{array} \right\rbrace{} \end{equation} which is isomorphic to the Klein four-group. \subsection{Its action on the Winograd variant of Strassen algorithm} In the sequel, we are interested in the action of the Klein four-group~(\ref{eq:KleinFourGroup}) on our Winograd variant of the Strassen algorithm: \begin{equation} \label{eq:19} \IsotropyGroupAction{\Group{K}}{\TensorLift{1}{1}{1}{\tensor{W}}}=\sum_{\Isotropy{g}\in\Group{K}} \IsotropyAction{\Isotropy{g}}{\TensorLift{1}{1}{1}{\tensor{W}}} = \sum_{\Isotropy{g}\in\Group{K}} \sum_{i=1}^{7}\IsotropyAction{\Isotropy{g}}{\TensorLift{1}{1}{1}{w_{i}}} \end{equation} As we have for any isotropy~$\Isotropy{g}$: \begin{equation} \label{eq:21} \left\langle\IsotropyAction{\Isotropy{g}}{\TensorLift{1}{1}{1}{\tensor{W}}} | A\otimes{} B \otimes{} C\right\rangle = \left\langle \TensorLift{1}{1}{1}{\tensor{W}} | \IsotropyAction{\Isotropy{g}}{(A \otimes{} B\otimes{} C)}\right\rangle, \end{equation} the action of the isotropies~$\Isotropy{g_{i}}$ is just a permutation of our generic matrix coefficients.
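As a sanity check, the fact that each~$\Isotropy{g_{i}}$ is an isotropy can be verified numerically: sandwiching per~(\ref{eq:7}) must leave the full contraction against any~$A\otimes B\otimes C$ unchanged. The following is a minimal sketch (Python/NumPy, with $0$-indexed matrix units and the contraction of a rank-one term taken as the product of entrywise inner products), not part of the formal development:

```python
import numpy as np

def unit(i, j, n=3):
    """Matrix unit e^i_j (1 in row i, column j), 0-indexed."""
    m = np.zeros((n, n))
    m[i, j] = 1.0
    return m

# Classical 3x3 matrix multiplication tensor as a list of rank-one triples.
M = [(unit(i, j), unit(j, k), unit(k, i))
     for i in range(3) for j in range(3) for k in range(3)]

def sandwich(g, t):
    """Sandwiching action (7) of g = (G1, G2, G3) on a rank-one term."""
    G1, G2, G3 = g
    T1, T2, T3 = t
    it = lambda G: np.linalg.inv(G).T   # transposed inverse
    return (it(G1) @ T1 @ G2.T, it(G2) @ T2 @ G3.T, it(G3) @ T3 @ G1.T)

def contract(tensor, A, B, C):
    """Full contraction <tensor | A (x) B (x) C>."""
    return sum(np.sum(T1 * A) * np.sum(T2 * B) * np.sum(T3 * C)
               for (T1, T2, T3) in tensor)

I3 = np.eye(3)
P12 = np.array([[0., 1, 0], [1, 0, 0], [0, 0, 1]])
K = [(I3, I3, I3), (I3, P12, P12), (P12, P12, I3), (P12, I3, P12)]

rng = np.random.default_rng(1)
A, B, C = (rng.standard_normal((3, 3)) for _ in range(3))
for g in K:
    gM = [sandwich(g, t) for t in M]
    # Invariance: the contraction stays equal to trace(ABC) for every g in K.
    assert np.isclose(contract(gM, A, B, C), np.trace(A @ B @ C))
```

The same check passes for arbitrary invertible $G_{1},G_{2},G_{3}$, which is the sandwiching part of the isotropy theorem above.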
Hence, we have the full contraction of the tensor~$(\IsotropyAction{\Isotropy{g_{2}}}{\TensorLift{1}{1}{1}{\tensor{W}}})\otimes{} A \otimes{} B \otimes{} C$: \begin{subequations} \begin{align} \label{eq:FULLWIN2} \left( -{ a_{21}}-{\frac {{ a_{31}}}{\lambda}}+\lambda{ a_{23}} \right) \left( { b_{11}}+{ \frac {{ b_{31}}}{\lambda}}-\lambda{ b_{13}} \right) \left( { c_{12}}+{\frac {{ c_{32}}}{\lambda}}-\lambda{ c_{13}} \right) &+ \\ \left( { a_{21}}-\lambda{ a_{23}} \right) \left( { b_{11}}-\lambda{ b_{13}} \right) \left( { c_{12}}-\lambda{ c_{13}} \right) &+ \\ \left( { a_{21}}+{\frac {{ a_{31}}}{\lambda}} \right) \left( { b_{11}}+{\frac {{ b_{31}}}{\lambda}} \right) \left( { c_{12}}+{\frac {{ c_{32}}}{\lambda}} \right) &+ \\ \label{eq:FW2E1}\mathcolor{blue}{{ a_{23}}\, \left( -{ b_{11}}-{\frac {{ b_{31}}}{\lambda}}+\lambda{ b_{13}}+{ b_{33}} \right) { c_{32}}}&+\\ \label{eq:FW2E2}\mathcolor{blue}{\left( -{ a_{21}}-{\frac {{ a_{31}}}{\lambda}}+\lambda{ a_{23}}+{ a_{33}} \right) { b_{31}}\,{ c_{13}}} &+\\ \label{eq:FW2E3}\mathcolor{blue}{{ a_{31}}\,{ b_{13}}\, \left( -{ c_{12}}-{\frac {{ c_{32}}}{\lambda}}+\lambda{ c_{13}}+{ c_{33}} \right)} &+\\ \label{eq:FP2}\mathcolor{cyan}{{ a_{33}}\,{ b_{33}}\,{c_{33}}}, \end{align} \end{subequations} the full contraction of the tensor~$(\IsotropyAction{\Isotropy{g_{3}}}{\TensorLift{1}{1}{1}{\tensor{W}}})\otimes{} A \otimes{} B \otimes{} C$: \begin{subequations} \begin{align} \label{eq:FULLWIN3} \left( -{ a_{11}}-{\frac {{ a_{31}}}{\lambda}}+\lambda{ a_{13}} \right) \left( { b_{12}}+{ \frac {{ b_{32}}}{\lambda}}-\lambda{ b_{13}} \right) \left( { c_{21}}+{\frac {{ c_{31}}}{\lambda}}-\lambda{ c_{23}} \right) &+ \\ \left( { a_{11}}-\lambda{ a_{13}} \right) \left( { b_{12}}-\lambda{ b_{13}} \right) \left( { c_{21}}-\lambda{ c_{23}} \right) &+ \\ \left( { a_{11}}+{\frac {{ a_{31}}}{\lambda}} \right) \left( { b_{12}}+{\frac {{ b_{32}}}{\lambda}} \right) \left( { c_{21}}+{\frac {{ c_{31}}}{\lambda}} \right) &+ \\ 
\label{eq:FW3E1}\mathcolor{blue}{{ a_{13}}\, \left( -{ b_{12}}-{\frac {{ b_{32}}}{\lambda}}+\lambda{ b_{13}}+{ b_{33}} \right) { c_{31}}} &+\\ \label{eq:FW3E2}\mathcolor{blue}{\left( -{ a_{11}}-{\frac {{ a_{31}}}{\lambda}}+\lambda{ a_{13}}+{ a_{33}} \right) { b_{32}}\,{ c_{23}}} &+\\ \label{eq:FW3E3}\mathcolor{blue}{{ a_{31}}\,{ b_{13}}\, \left( -{ c_{21}}-{\frac {{ c_{31}}}{\lambda}}+\lambda{ c_{23}}+{ c_{33}} \right)} &+\\ \label{eq:FP3}\mathcolor{cyan}{{ a_{33}}\,{ b_{33}}\,{c_{33}}} \end{align} \end{subequations} and the full contraction of the tensor~$(\IsotropyAction{\Isotropy{g_{4}}}{\TensorLift{1}{1}{1}{\tensor{W}}})\otimes{} A \otimes{} B \otimes{} C$: \begin{subequations} \begin{align} \label{eq:FULLWIN4} \left( -{ a_{12}}-{\frac {{ a_{32}}}{\lambda}}+\lambda{ a_{13}} \right) \left( { b_{21}}+{ \frac {{ b_{31}}}{\lambda}}-\lambda{ b_{23}} \right) \left( { c_{11}}+{\frac {{ c_{31}}}{\lambda}}-\lambda{ c_{13}} \right) &+ \\ \left( { a_{12}}-\lambda{ a_{13}} \right) \left( { b_{21}}-\lambda{ b_{23}} \right) \left( { c_{11}}-\lambda{ c_{13}} \right) &+ \\ \left( { a_{12}}+{\frac {{ a_{32}}}{\lambda}} \right) \left( { b_{21}}+{\frac {{ b_{31}}}{\lambda}} \right) \left( { c_{11}}+{\frac {{ c_{31}}}{\lambda}} \right) &+ \\ \label{eq:FW4E1}\mathcolor{blue}{{ a_{13}}\, \left( -{ b_{21}}-{\frac {{ b_{31}}}{\lambda}}+\lambda{ b_{23}}+{ b_{33}} \right) { c_{31}}} &+\\ \label{eq:FW4E2}\mathcolor{blue}{\left( -{ a_{12}}-{\frac {{ a_{32}}}{\lambda}}+\lambda{ a_{13}}+{ a_{33}} \right) { b_{31}}\,{ c_{13}}} &+\\ \label{eq:FW4E3}\mathcolor{blue}{{ a_{32}}\,{ b_{23}}\, \left( -{ c_{11}}-{\frac {{ c_{31}}}{\lambda}}+\lambda{ c_{13}}+{ c_{33}} \right)} &+\\ \label{eq:FP4}\mathcolor{cyan}{{ a_{33}}\,{ b_{33}}\,{c_{33}}}. 
\end{align} \end{subequations} There are several noteworthy points in these expressions: \begin{remarks} \begin{itemize} \item the term~(\ref{eq:FP1}) is a fixed point of~$\Group{K}$'s action; \item the trilinear terms~(\ref{eq:FW1E1}) and~(\ref{eq:FW2E1}), (\ref{eq:FW1E2}) and~(\ref{eq:FW3E2}), (\ref{eq:FW1E3}) and~(\ref{eq:FW4E3}), (\ref{eq:FW2E2}) and~(\ref{eq:FW4E2}), (\ref{eq:FW2E3}) and~(\ref{eq:FW3E3}), (\ref{eq:FW3E1}) and~(\ref{eq:FW4E1}) could be \emph{added} in order to obtain new rank-one tensors without changing the tensor rank. For example~(\ref{eq:FW1E1})+(\ref{eq:FW2E1}) is equal to: \begin{equation} \label{eq:23} \mathcolor{blue}{{ a_{23}}\, \left( -{ b_{22}}-{\frac {{ b_{32}}}{\lambda}}+ \lambda{ b_{23}}+2 { b_{33}} -{ b_{11}}-{\frac {{ b_{31}}}{\lambda}}+ \lambda{ b_{13}}\right) { c_{32}}}. \end{equation} \end{itemize} \end{remarks} The tensor rank of the tensor~${\IsotropyGroupAction{\Group{K}}{\TensorLift{1}{1}{1}{\tensor{W}}}=\sum_{\Isotropy{g}\in\Group{K}} \IsotropyAction{\Isotropy{g}}{\TensorLift{1}{1}{1}{\tensor{W}}}}$ is~${1+3\cdot 4 + 6=19}$. Unfortunately, this tensor does not define a matrix multiplication algorithm (otherwise, according to the lower bound presented in~\cite{blaser:2003}, it would be optimal and this note would have another title and impact). \par In the next section, after studying the action of the isotropy group~$\Group{K}$ on the classical matrix multiplication algorithm, we are going to show how the tensor constructed above takes its place in the construction of a matrix multiplication tensor. \subsection{How far are we from a multiplication tensor?} Let us consider the classical~$\matrixsize{3}{3}$ matrix multiplication algorithm \begin{equation} \label{eq:9} \tensor{M} = \sum_{1\leq i,j,k \leq 3} e^{i}_{j} \otimes{} e^{j}_{k} \otimes{} e^{k}_{i} \end{equation} where~$e^{i}_{j}$ denotes the matrix with a single non-zero coefficient~$1$ at the intersection of row~$i$ and column~$j$.
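Numerically, the full contraction of~$\tensor{M}$ against~$A\otimes B\otimes C$ is the trilinear form~$\sum_{i,j,k} a_{ij}\,b_{jk}\,c_{ki}=\operatorname{trace}(ABC)$; a quick check (Python/NumPy sketch, $0$-indexed):

```python
import numpy as np

rng = np.random.default_rng(0)
A, B, C = (rng.integers(-5, 6, (3, 3)) for _ in range(3))

# <M | A (x) B (x) C> = sum_{i,j,k} a_ij b_jk c_ki  (0-indexed here).
full = sum(A[i, j] * B[j, k] * C[k, i]
           for i in range(3) for j in range(3) for k in range(3))
assert full == np.trace(A @ B @ C)
```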
By considering the trilinear monomial: \begin{equation} \label{eq:22} a_{ij}b_{jk}c_{ki} = \left\langle e^{i}_{j} \otimes{} e^{j}_{k} \otimes{} e^{k}_{i} \,\big|\, A \otimes{} B \otimes{} C \right\rangle, \end{equation} we describe below the action of an isotropy~$\Isotropy{g}$ on this tensor by the induced action: \begin{equation} \label{eq:11} \begin{aligned} \IsotropyAction{\Isotropy{g}}{a_{ij}b_{jk}c_{ki}}&= \left\langle \IsotropyAction{\Isotropy{g}}{(e^{i}_{j} \otimes{} e^{j}_{k} \otimes{} e^{k}_{i})}\,\big|\, A \otimes{} B \otimes{} C \right\rangle\!, \\ &= \left\langle {e^{i}_{j} \otimes{} e^{j}_{k} \otimes{} e^{k}_{i}}\,\big|\, \IsotropyAction{\Isotropy{g}}{(A \otimes{} B \otimes{} C)} \right\rangle\!. \end{aligned} \end{equation} \begin{remark} The isotropies in~$\Group{K}$ act as a permutation on the rank-one components of the tensor~$\tensor{M}$: we say that the group~$\Group{K}$ is a \emph{stabilizer} of~$\tensor{M}$. More precisely, we have the following~$9$ orbits represented by the trilinear monomial sums:\par \footnotesize% \begin{subequations} \begin{align} \label{eq:20:1} \sum_{i=1}^{4} \IsotropyAction{\Isotropy{g_{i}}}{{a_{11}}\,{b_{11}}\,{c_{11}}} & = {a_{11}}\,{b_{11}}\,{c_{11}} + {a_{12}}\,{b_{22}}\,{c_{21}} + {a_{22}}\,{b_{21}}\,{c_{12}} + {a_{21}}\,{b_{12}}\,{c_{22}}, \\ \label{eq:20:2} \sum_{i=1}^{4} \IsotropyAction{\Isotropy{g_{i}}}{\mathcolor{blue}{{a_{22}}\,{b_{22}}\,{c_{22}}}} &= \mathcolor{blue}{{a_{22}}\,{b_{22}}\,{c_{22}}} + {a_{21}}\,{b_{11}}\,{c_{12}} + {a_{11}}\,{b_{12}}\,{c_{21}} + {a_{12}}\,{b_{21}}\,{c_{11}}, \\ \label{eq:20:3} \sum_{i=1}^{4} \IsotropyAction{\Isotropy{g_{i}}}{\mathcolor{blue}{{a_{22}}\,{b_{23}}\,{c_{32}}}}&= \mathcolor{blue}{{a_{22}}\,{b_{23}}\,{c_{32}}} + {a_{21}}\,{b_{13}}\,{c_{32}} + {a_{11}}\,{b_{13}}\,{c_{31}} + {a_{12}}\,{b_{23}}\,{c_{31}}, \\ \label{eq:20:4} \sum_{i=1}^{4} \IsotropyAction{\Isotropy{g_{i}}}{\mathcolor{blue}{{a_{23}}\,{b_{32}}\,{c_{22}}}}&= \mathcolor{blue}{{{a_{23}}\,{b_{32}}\,{c_{22}}}} +
{a_{23}}\,{b_{31}}\,{c_{12}} + {a_{13}}\,{b_{32}}\,{c_{21}} + {a_{13}}\,{b_{31}}\,{c_{11}}, \\ \label{eq:20:5} \sum_{i=1}^{4} \IsotropyAction{\Isotropy{g_{i}}}{\mathcolor{blue}{{a_{32}}\,{b_{22}}\,{c_{23}}}}&= \mathcolor{blue}{{a_{32}}\,{b_{22}}\,{c_{23}}} + {a_{31}}\,{b_{11}}\,{c_{13}} + {a_{31}}\,{b_{12}}\,{c_{23}} + {a_{32}}\,{b_{21}}\,{c_{13}}, \\ \label{eq:20:6} \frac{1}{2}\sum_{i=1}^{4} \IsotropyAction{\Isotropy{g_{i}}}{\mathcolor{blue}{{a_{23}}\,{b_{33}}\,{c_{32}}}} &= \mathcolor{blue}{{a_{23}}\,{b_{33}}\,{c_{32}}} +{a_{13}}\,{b_{33}}\,{c_{31}}, \\ \label{eq:20:7} \frac{1}{2}\sum_{i=1}^{4} \IsotropyAction{\Isotropy{g_{i}}}{\mathcolor{blue}{{a_{32}}\,{b_{23}}\,{c_{33}}}} &= \mathcolor{blue}{{a_{32}}\,{b_{23}}\,{c_{33}}} +{a_{31}}\,{b_{13}}\,{c_{33}}, \\ \label{eq:20:8} \frac{1}{2}\sum_{i=1}^{4} \IsotropyAction{\Isotropy{g_{i}}}{\mathcolor{blue}{{a_{33}}\,{b_{32}}\,{c_{23}}}}&= \mathcolor{blue}{{a_{33}}\,{b_{32}}\,{c_{23}}} +{a_{33}}\,{b_{31}}\,{c_{13}}, \\ \label{eq:20:9} \frac{1}{4} \sum_{i=1}^{4} \IsotropyAction{\Isotropy{g_{i}}}{\mathcolor{blue}{{a_{33}}\,{b_{33}}\,{c_{33}}}} &= \mathcolor{blue}{{a_{33}}\,{b_{33}}\,{c_{33}}}. 
\end{align} \end{subequations} \normalsize \end{remark} Hence, the action of~$\Group{K}$ decomposes the classical matrix multiplication tensor~$\tensor{M}$ as a transversal action of~$\Group{K}$ on the implicit projection~$\TensorZero{1}{1}{1}{\tensor{M}}$, its action on the rank-one tensor~${ e^{1}_{1} \otimes{} e^{1}_{1} \otimes{} e^{1}_{1}}$ and a correction term also related to orbits under~$\Group{K}$: \begin{equation} \label{eq:25b} \begin{aligned} \tensor{M}& = \IsotropyGroupAction{\Group{K}}{\left( e^{1}_{1} \otimes{} e^{1}_{1} \otimes{} e^{1}_{1}\right)} + \IsotropyGroupAction{\Group{K}}{\TensorZero{1}{1}{1}{\tensor{M}}} - \tensor{R}, \\ \tensor{R} &=(1/2)\, \IsotropyGroupAction{\Group{K}}{\left( e^{2}_{3} \otimes{} e^{3}_{3}\otimes{} e^{3}_{2}\right)} +(1/2)\, \IsotropyGroupAction{\Group{K}}{\left( e^{3}_{3} \otimes{} e^{3}_{2}\otimes{} e^{2}_{3}\right)}\\ &+(1/2)\, \IsotropyGroupAction{\Group{K}}{\left( e^{3}_{2} \otimes{} e^{2}_{3}\otimes{} e^{3}_{3}\right)} + 3\,\IsotropyGroupAction{\Group{K}}{\left( e^{3}_{3} \otimes{} e^{3}_{3}\otimes{} e^{3}_{3}\right)}. \end{aligned} \end{equation} \subsection{Resulting matrix multiplication algorithm}\label{sec:ResultingTensor} The term~${\TensorZero{1}{1}{1}{\tensor{M}}}$ is a~$\matrixsize{2}{2}$ matrix multiplication algorithm that could be replaced by any other one. Choosing~$\TensorLift{1}{1}{1}{\tensor{W}}$, we have the following properties: \begin{itemize} \item the tensor rank of~$\IsotropyGroupAction{\Group{K}}{\TensorLift{1}{1}{1}{\tensor{W}}}$ is~$19$; \item its addition with the correction term~$\tensor{R}$ does not change its tensor rank. \end{itemize} Hence, we obtain a matrix multiplication tensor with rank~${23(={19+4})}$. Furthermore, the resulting tensor has the same type as the Laderman matrix multiplication tensor, and thus it is a variant of the same algorithm.
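The~$\matrixsize{2}{2}$ building block itself can be checked independently. The sketch below (Python) implements the textbook Winograd form of Strassen's algorithm, with~$7$ multiplications and~$15$ additions; it is a variant in the same orbit as~$\tensor{W}$, not necessarily coefficient-for-coefficient the tensor used above:

```python
import numpy as np

def winograd_2x2(A, B):
    """Winograd form of Strassen's 2x2 algorithm:
    7 multiplications, 15 additions/subtractions."""
    a11, a12, a21, a22 = A[0, 0], A[0, 1], A[1, 0], A[1, 1]
    b11, b12, b21, b22 = B[0, 0], B[0, 1], B[1, 0], B[1, 1]
    s1 = a21 + a22; s2 = s1 - a11; s3 = a11 - a21; s4 = a12 - s2
    t1 = b12 - b11; t2 = b22 - t1; t3 = b22 - b12; t4 = t2 - b21
    m1 = a11 * b11; m2 = a12 * b21; m3 = s4 * b22; m4 = a22 * t4
    m5 = s1 * t1; m6 = s2 * t2; m7 = s3 * t3
    u2 = m1 + m6; u3 = u2 + m7; u4 = u2 + m5
    return np.array([[m1 + m2, u4 + m3],
                     [u3 - m4, u3 + m5]])

rng = np.random.default_rng(7)
A, B = rng.standard_normal((2, 2)), rng.standard_normal((2, 2))
assert np.allclose(winograd_2x2(A, B), A @ B)
```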
\par We conclude that the Laderman matrix multiplication algorithm can be constructed using the orbit of an optimal~$\matrixsize{2}{2}$ matrix multiplication algorithm under the action of a given group that leaves the classical~$\matrixsize{3}{3}$ matrix multiplication variant/algorithm invariant and acts transversally on one of its projections. \section{Concluding remarks} All the observations presented in this short note came from an experimental mathematical approach using the computer algebra system Maple~\cite{monagan:2007a}. While implementing effectively (if not efficiently) several tools needed to manipulate matrix multiplication tensors---tensors, their isotropies and contractions, etc.---in order to understand the theory, the relationship between the Laderman matrix multiplication algorithm and the Strassen algorithm became clear through simple computations that would be tedious or impossible by hand. \par As already shown in~\cite{sykora:1977a}, this kind of geometric configuration could be found and used with other matrix sizes. \par The main opinion supported by this work is that symmetries play a central role in effective computation for matrix multiplication algorithms and that only a geometrical interpretation may bring further improvements. \paragraph{Acknowledgment.} The author would like to thank Alin Bostan for providing information on the work~\cite{sykora:1977a}. \bibliographystyle{acm}
\section{Introduction} Integrating renewable energy generation, distributed microgenerators, and storage systems into power grids is one of the key enablers of the future smart grid \cite{SG-techreport}. However, this integration gives rise to new challenges, such as the appearance of overvoltages at the distribution level. Accurate and reliable \emph{state estimation} must be developed to achieve the real-time monitoring and control of this hybrid distributed generation system and therefore assure the proper and reliable operation of the future grid \cite{Huang-12}. An increase in the penetration of distributed generators necessarily leads to a substantial increase in measurements \cite{Nagasawa-12}. Conventional state estimation techniques, such as the weighted least squares (WLS) algorithm, rely on measurements from the supervisory control and data acquisition (SCADA) systems \cite{abur2004powe}. A well-known fact is that the measurements provided by SCADA are intrinsically \emph{less} accurate \cite{Li-14-TPS,Gol-14}. Moreover, the conventional WLS technique applied to SCADA-based state estimation is \emph{not} robust because of its vulnerability to poor measurements \cite{Gol-14}. More recently, the deployment of high-precision phasor measurement units (PMUs) in electric power grids has been proven to improve the accuracy of state estimation algorithms \cite{Hurtgen-08,Phadke-09,PMU-09-book,Gol-14}. However, PMUs remain expensive at present, and \emph{limited} PMU measurements, along with conventional SCADA measurements, must be incorporated into the state estimator for the active control of the smart grid. Several state estimation methods using a mix of conventional SCADA and PMU measurements have already been proposed for electric power grids, as shown in Refs. \cite{Zhou-06,Gol-15}. However, the joint processing of measurements of \emph{different} qualities may result in an \emph{ill-conditioned} system.
Moreover, another critical and essential challenge in deploying the future grid at a large scale is the \emph{massive} amount of measurement data that must be transmitted to the data processing control center (DPCC). This poses a risk to the grid's operator: the DPCC is drowning in data overload, a phenomenon called ``data tsunami.'' The massive amount of measurement data also prolongs data collection, so the state estimation result is not prompt. To alleviate the impact of the data tsunami, Alam et al. \cite{Alam-14} took advantage of the compressibility of spatial power measurements to decrease the number of measurements based on the compressive sensing technique. Nevertheless, the performance of \cite{Alam-14} is relatively sensitive to the influence of the so-called compressive sensing matrix. We first propose a practical solution to address the abovementioned challenges. Inspired by \cite{Alam-14}, we compress the measurements not only through a compressive sensing matrix but also in the representation of the measurements themselves. Specifically, we use a very short word length to represent part of the measurements of the system.\footnote{The work in \cite{Alam-14} designed a compressed matrix to shorten the measurements, where the compressed measurements are \emph{still} represented with 12 or 16 bits for transmission. However, in the present study, partial measurements are represented with an extremely short word length for transmission. Therefore, the focus of our study is different from that of \cite{Alam-14}.} The use of a \emph{very short} word length (e.g., 1-6 bits)\footnote{In practical application, all of the measurements obtained by the meter devices must be quantized before being transmitted to the DPCC for processing.
Modern SCADA systems typically use a word length of 12 (or 16) bits to represent the measurements, yielding high-resolution quantized measurements.} to represent a partial measurement of the system state in the meter device reduces the amount of data that the DPCC needs to process. This data-reduction technique considerably enhances the efficiency of the grid's communication infrastructure and bandwidth because only a limited number of bits representing the measurements are sent to the DPCC. In addition, instead of substituting all sensors in the current power system with PMUs, we only have to add several wireless meters with low-bit analog-to-digital converters, which are cheaper than conventional meters. Hence, the cost of placing the meters can be reduced. Nevertheless, the traditional state estimation methods cannot be applied to a system whose partial measurements are represented with a very short word length. Thus, we develop a new scheme to obtain an optimal state estimate and minimize the performance loss due to quantization while incorporating measurements of different qualities. Before designing the state estimation algorithm, we first formalize the linear state estimation problem using data with different qualities as a probabilistic inference problem. Then, this problem can be tackled efficiently by describing an appropriate factor graph related to the power grid topology. Particularly, the factorization properties of the factor graphs improve the accuracy of mixing measurements of different qualities. Then, the concept of the estimation algorithm is motivated by using the maximum a posteriori (MAP) estimate to construct the system states.
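To make the word-length trade-off concrete, a generic uniform $b$-bit quantizer illustrates how the representation length controls resolution (a sketch in Python; the quantizer actually used by the proposed scheme may differ):

```python
import numpy as np

def quantize(z, b, lo=-1.0, hi=1.0):
    """Uniform b-bit quantizer on [lo, hi]: returns the transmitted index
    (b bits per sample) and the reconstruction (midpoint of the chosen cell)."""
    levels = 2 ** b
    step = (hi - lo) / levels
    idx = np.clip(np.floor((z - lo) / step), 0, levels - 1).astype(int)
    return idx, lo + (idx + 0.5) * step

rng = np.random.default_rng(0)
z = rng.uniform(-1.0, 1.0, 10000)   # synthetic measurements
errs = {}
for b in (1, 3, 6, 12):
    _, zq = quantize(z, b)
    errs[b] = np.sqrt(np.mean((z - zq) ** 2))
    print(f"{b:2d} bits: RMS quantization error {errs[b]:.2e}")
```

Each extra bit halves the step size and hence roughly halves the RMS error (about $\Delta/\sqrt{12}$ for a uniformly distributed input), which is why 12- or 16-bit SCADA words are high resolution while 1-6 bits are very coarse.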
The proposed MAP state estimation algorithm is derived from the generalized approximate message passing (GAMP)-based algorithms \cite{Rangan-11-ISIT,Krzakala-12,SwAMP-2015}, which exhibit excellent performance in terms of both precision and speed in dealing with high-dimensional inference problems while preserving low complexity. In contrast to the traditional \emph{linear} solver for state estimation, which does not use prior information on the system state, the proposed scheme can learn and therefore exploit prior information on the system state by using the expectation-maximization (EM) algorithm \cite{Vila-13TSP} based on the estimation result of each iteration. The proposed framework is tested on different test systems. The simulation results show that the proposed algorithm performs \emph{significantly} better than other linear estimates. In addition, by using the proposed algorithm, the obtained state estimates remain accurate, even when \emph{more than half} of the measurements are quantized to a very short word length. Thus, the proposed algorithm can integrate data with different qualities while reducing the amount of data. {\em Notations}---Throughout the paper, we use $\mathbb{R}$ and $\mathbb{C}$ to represent the sets of real numbers and complex numbers, respectively. The superscripts $(\cdot)^{\mathsf{H}}$ and $(\cdot)^{*}$ denote the Hermitian (conjugate) transposition and the complex conjugation, respectively. The identity matrix of size $N$ is denoted by $\mathbf{I}_{N}$ or simply $\mathbf{I}$. A complex Gaussian random variable $x$ with mean $\widehat{x}$ and variance $\sigma^2_{x}$ is denoted by $\mathscr{N}_{\mathbb{C}}(x;\widehat{x},\sigma^2_{x}) \triangleq (\pi\sigma^2_{x})^{-1}\exp(-|x-\widehat{x}|^2/\sigma^2_{x})$ or simply $\mathscr{N}_{\mathbb{C}}(\widehat{x},\sigma^2_{x})$. $\mathbb{E}[\cdot]$ and $\mathbb{VAR}[\cdot]$ represent the expectation and variance operators, respectively.
$\Re\{ \cdot \}$ and $\Im\{ \cdot \}$ return the real and imaginary parts of its input argument, respectively. $\arg(\cdot)$ returns the principal argument of its input complex number. Finally, $\mathrm{j} \triangleq \sqrt{-1}$. \section{System Model and Data Reduction}\label{sec:02} \subsection{System Model} Our interest is oriented toward applications in the distribution system. Following the canonical work on the formulation of the linear state estimation problem \cite{PMU-09-book} and power flow analysis \cite{Chen-91-TPD}, we use a $\pi$-model transmission line to indicate how voltage and current measurements are related to the considered linear state estimation problem. For easy understanding of this model, we start with a $\pi$-equivalent of a transmission line connecting two PMU-equipped buses $i$ and $j$ as shown in Fig.\ref{fig:line_ex}, where $Y_{ij}$ is the series admittance of the transmission line, $Y_{i0}$ and $Y_{j0}$ are the shunt admittances of the side of the transmission line in which the current measurements $I_{i0}$ and $I_{j0}$ are taken, respectively, and the parallel conductance is neglected. In this case, the system state variables are the voltage magnitude and angle at each end of the transmission line, that is, $V_{i}\in \mathbb{C}$ and $V_{j}\in \mathbb{C}$. \begin{figure} \begin{center} \resizebox{3.35in}{!}{% \includegraphics*{Access-2017-06130-fig1} }% \caption{Transmission $\pi$-line model for calculating line flows.}\label{fig:line_ex} \end{center} \end{figure} In Fig. \ref{fig:line_ex}, the line current $I_{ij}$, measured at bus $i$, is positive in the direction flowing from bus $i$ to bus $j$, which is given by \begin{equation}\label{eq:I_ij} I_{ij} = I_{l}+I_{i0} =Y_{ij}\left(V_{i}-V_{j}\right) +Y_{i0}V_{i}. 
\end{equation} Likewise, the line current $I_{ji}$, measured at bus $j$, is positive in the direction flowing from bus $j$ to bus $i$, which can be expressed as \begin{equation}\label{eq:I_ji} I_{ji} = -I_{l}+I_{j0} =Y_{ij}\left(V_{j}-V_{i}\right) +Y_{j0}V_{j}. \end{equation} Then, (\ref{eq:I_ij}) and (\ref{eq:I_ji}) can be written in matrix form as \begin{equation}\label{eq:transmission_line__matrix} \begin{bmatrix} I_{ij} \\ I_{ji} \end{bmatrix} = \begin{bmatrix} Y_{ij}+Y_{i0} & -Y_{ij}\\ -Y_{ij} & Y_{ij}+Y_{j0} \end{bmatrix} \begin{bmatrix} V_{i} \\ V_{j} \end{bmatrix}. \end{equation} Given that PMU devices are installed in both buses, the bus voltage and the current flows through the bus are available through PMU measurements. Based on these measured data, the \emph{complete} state equation can be expressed as \begin{equation}\label{eq:basic_linear_eq} \begin{bmatrix} V_{i}\\ V_{j}\\ I_{ij}\\ I_{ji} \end{bmatrix} = \underbrace{ \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ Y_{ij}+Y_{i0} & -Y_{ij} \\ -Y_{ij} & Y_{ij}+Y_{j0} \end{bmatrix}}_{\triangleq\, {\bf H}} \begin{bmatrix} V_{i} \\ V_{j} \end{bmatrix}. \end{equation} Here, ${\bf H}$ can be decomposed into four matrices related to power system topology \cite{PMU-09-book,Jones-11}. These matrices are termed the current measurement-bus incidence matrix, the voltage measurement-bus incidence matrix, the series admittance matrix, and the shunt admittance matrix, as explained later in this section. Thereafter, (\ref{eq:basic_linear_eq}) can be further extended to the general model in power systems. \begin{figure} \begin{center} \resizebox{3.5in}{!}{% \includegraphics*{Access-2017-06130-fig2} }% \caption{(a) A fictitious six-bus system, where current measurement is represented by an arrow above a current meter. (b) The corresponding $\mathbf{A}$, $\boldsymbol{\Pi}$, $\mathbf{Y}_{l}$, and $\mathbf{Y}_{s}$ for this six-bus system. 
(c) The full state equation for this six-bus system.}\label{fig:system_model_example} \end{center} \end{figure} Before explaining the rules for constructing these matrices used in the state equation, a \emph{fictitious} six-bus system is first presented, as shown in Fig. \ref{fig:system_model_example}(a). This simple system is used to demonstrate how each of these matrices is constructed for the sake of clarity. As Fig.~\ref{fig:system_model_example}(a) indicates, the line current flowing through each line is directly measurable with a current meter. However, bus voltages are measurable only at buses 1, 5, and 6 because PMUs are installed only at these three buses. Thus, in this example, the number of buses is $N=6$, the number of PMUs (or the number of buses that have a voltage measurement) is $L=3$, and the number of current measurements is $M = 5$. The explicit rules for constructing each of the four matrices are provided below. First, $\mathbf{A} \in \mathbb{R}^{M\times N}$ is the current measurement-bus incidence matrix that indicates the location of the current flow measurements in the network, where the rows and columns of $\mathbf{A}$ represent, respectively, the \emph{serial} number of the current measurement and the bus number. More specifically, the entries of $\mathbf{A}$ are defined as follows. If the $m$-th current measurement $I_{m}$ (corresponding to the $m$-th row) leaves from the $n$-th bus (corresponding to the $n$-th column) and heads toward the $n'$-th bus (corresponding to the $n'$-th column), the $(m,n)$-th element of $\mathbf{A}$ is $1$, the $(m,n')$-th element of $\mathbf{A}$ is $-1$, and all the remaining entries of $\mathbf{A}$ are identically zero.
Second, $\boldsymbol{\Pi} \in \mathbb{R}^{L\times N}$ is the voltage measurement-bus incidence matrix that indicates the relationship between a voltage measurement and its corresponding location in the network, where the rows and columns of $\boldsymbol{\Pi}$ represent the \emph{serial} number of the voltage measurement and the bus number, respectively. Hence, the entries of $\boldsymbol{\Pi}$ are defined as follows. If the $l$-th voltage measurement (corresponding to the $l$-th row) is located at the $n$-th bus (corresponding to the $n$-th column), then the $(l,n)$-th element of $\boldsymbol{\Pi}$ is $1$, and all the other elements of $\boldsymbol{\Pi}$ are zero. Third, $\mathbf{Y}_{l} \in \mathbb{C}^{M\times M}$, which denotes the series admittance matrix, is a \emph{diagonal} matrix whose diagonal terms are the series admittances of the measured transmission lines. Thus, $\mathbf{Y}_{l}$ is populated using the following single rule. For the $m$-th current measurement, the $(m,m)$-th element of $\mathbf{Y}_{l}$ is the series admittance of the line being measured. Fourth, $\mathbf{Y}_{s} \in \mathbb{C}^{M\times N}$ is the shunt admittance matrix whose elements are determined by the shunt admittances of the lines that have a current measurement. The following rule is used to populate the matrix. If the $m$-th current measurement \emph{leaves} the $n$-th bus, then the $(m,n)$-th element of $\mathbf{Y}_{s}$ is the shunt admittance of the line, and all the other elements of $\mathbf{Y}_{s}$ are zero. By following these rules, the constructions of $\mathbf{A}$, $\boldsymbol{\Pi}$, $\mathbf{Y}_{l}$, and $\mathbf{Y}_{s}$ for the six-bus system are illustrated in Fig. \ref{fig:system_model_example}(b).
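These construction rules can be made concrete with a short sketch. The line connections, PMU locations, and admittance values below are illustrative assumptions (the actual six-bus topology is the one drawn in Fig. \ref{fig:system_model_example}(a), which the text does not enumerate), and all variable names are ours; only the construction rules themselves follow the text.

```python
# Minimal sketch: building A, Pi, Y_l, Y_s, and H = [Pi; Y_l A + Y_s].
# The line list, PMU buses, and admittance values are assumptions for
# illustration only; the construction rules follow the text.

N = 6                                              # number of buses
lines = [(1, 2), (2, 3), (3, 4), (4, 5), (4, 6)]   # assumed (from, to) per current measurement
pmu_buses = [1, 5, 6]                              # buses with PMUs (L = 3)
y_series = [2.0 - 4.0j] * len(lines)               # assumed series admittances
y_shunt = [0.1j] * len(lines)                      # assumed shunt admittances
M = len(lines)

# A (M x N): +1 at the bus the measured current leaves, -1 at the bus it enters.
A = [[0.0] * N for _ in range(M)]
for m, (n_from, n_to) in enumerate(lines):
    A[m][n_from - 1] = 1.0
    A[m][n_to - 1] = -1.0

# Pi (L x N): a single 1 per row, at the bus where the PMU voltage is taken.
Pi = [[1.0 if n == b - 1 else 0.0 for n in range(N)] for b in pmu_buses]

# Y_l (M x M): diagonal matrix of series admittances of the measured lines.
Yl = [[y_series[m] if m == k else 0.0 for k in range(M)] for m in range(M)]

# Y_s (M x N): shunt admittance at the bus each measured current leaves.
Ys = [[0.0] * N for _ in range(M)]
for m, (n_from, _) in enumerate(lines):
    Ys[m][n_from - 1] = y_shunt[m]

# H = [Pi; Y_l A + Y_s], the (L + M) x N topology matrix.
YlA = [[sum(Yl[m][k] * A[k][n] for k in range(M)) for n in range(N)]
       for m in range(M)]
H = Pi + [[YlA[m][n] + Ys[m][n] for n in range(N)] for m in range(M)]
```

For the assumed topology, $\mathbf{H}$ has $L+M=8$ rows and $N=6$ columns; the row for the first current measurement carries the series-plus-shunt admittance at its from-bus and the negated series admittance at its to-bus, mirroring the pattern of (\ref{eq:basic_linear_eq}).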
Given the above definitions, the linear state equation in (\ref{eq:basic_linear_eq}) can be further extended to the \emph{general} linear state equation with $N$ buses, $L$ voltage measurements, denoted by $\mathbf{v}\in \mathbb{C}^{L}$, and $M$ current measurements, denoted by $\mathbf{i}\in \mathbb{C}^{M}$, as follows \cite{Jones-11} \begin{equation}\label{eq:basic_linear_matrix_eq_error_free} \underbrace{ \begin{bmatrix} \mathbf{v} \\ \mathbf{i} \end{bmatrix}}_{\triangleq\, \mathbf{z}}= \underbrace{\begin{bmatrix} \boldsymbol{\Pi} \\ \mathbf{Y}_{l} \mathbf{A} + \mathbf{Y}_{s} \end{bmatrix}}_{\triangleq\, \mathbf{H} } \mathbf{x}, \end{equation} where $\mathbf{z}\in \mathbb{C}^{L+M}$ denotes a vertical concatenation of the set of voltage and current phasor measurements, $\mathbf{x}\in \mathbb{C}^{N}$ is the \emph{complex} system state, and ${\bf H}\in \mathbb{C}^{(L+M)\times N}$ is a topology matrix (i.e., also referred to as the measurement matrix in a general linear system).\footnote{Using slight modifications, the system model in (\ref{eq:basic_linear_matrix_eq_error_free}) can easily be extended to \emph{three-phase} power systems \cite{Jones-11}. Each element of ${\bf H}$ is modified as follows. Elements ``1'' and ``0'' in $\boldsymbol{\Pi} $ and $\mathbf{A}$ are replaced with a $3 \times 3$ identity matrix and a $3 \times 3$ null matrix, respectively. Each diagonal element of $\mathbf{Y}_{l}$ is replaced with $3 \times 3$ admittance structures, whereas the off-diagonal elements become $3 \times 3$ zero matrices. Finally, each nonzero element of $\mathbf{Y}_{s}$ is replaced with $3 \times 3$ admittance structures and the remaining elements become $3 \times 3$ zero matrices.} Considering again the fictitious six-bus system presented earlier, the full state equation for this system is also provided in Fig. \ref{fig:system_model_example}(c) for ease of understanding.
Defining $P\triangleq L+M$ and accounting for the measurement error in the linear state equation, (\ref{eq:basic_linear_matrix_eq_error_free}) then becomes\footnote{As described in (\ref{eq:basic_linear_matrix_eq}), the considered system model is expressed as $\mathbf{y} = \mathbf{H} \mathbf{x} \,+\, \mathbf{e}$ because we aim to estimate $\mathbf{x}$. Thus, $\mathbf{H}$ should be at least a square matrix or an overdetermined system. In this case, $P = (L+M) \geq N$.} \begin{equation}\label{eq:basic_linear_matrix_eq} \mathbf{y} = \underbrace{\mathbf{H} \mathbf{x}}_{=\, \mathbf{z}} \,+\, \mathbf{e}, \end{equation} where $\mathbf{y}\in \mathbb{C}^{P}$ is the raw measurement vector of the voltage and current phasors, $\mathbf{z}\in \mathbb{C}^{P}$ is also referred to as the \emph{noiseless} measurement vector, and $\mathbf{e}\in \mathbb{C}^{P}$ is the measurement error, in which each entry is modeled as an independent and identically distributed (i.i.d.) complex Gaussian random variable with zero mean and variance $\sigma^{2}$. \subsection{Data Reduction and Motivation} In reality, all of the measurements must be quantized before being transmitted to the DPCC for processing. For example, modern SCADA systems are equipped with an analog-to-digital device that converts each measurement into binary values (i.e., the usual word lengths are 12 to 16 bits).
To achieve this, the measurements $\mathbf{y}= \left\{y_{\mu}\right\}_{\mu =1}^{P}$ are processed by a complex-valued quantizer in the following componentwise manner: \begin{equation} \label{eq:Q_y} \widetilde{\mathbf{y}} = \left\{\widetilde{y}_{\mu}\right\}_{\mu =1}^{P}= \mathcal{Q}_{\mathbb{C}}\left(\mathbf{y} \right) = \left\{\mathcal{Q}_{\mathbb{C}}\left(y_{\mu} \right) \right\}_{\mu =1}^{P}, \end{equation} where $\widetilde{\mathbf{y}}=\left\{\widetilde{y}_{\mu}\right\}_{\mu =1}^{P}$ is the \emph{quantized version} of $\mathbf{y}=\left\{y_{\mu}\right\}_{\mu =1}^{P}$, and each complex-valued quantizer $\mathcal{Q}_{\mathbb{C}}\left(\cdot\right)$ is defined as $\widetilde{y}_{\mu} = \mathcal{Q}_{\mathbb{C}}\left(y_{\mu} \right) \triangleq \mathcal{Q}\left(\Re\left(y_{\mu} \right)\right) + \mathrm{j}\mathcal{Q}\left(\Im\left(y_{\mu} \right)\right)$. This means that, for each complex-valued quantizer, two real-valued quantizers exist that \emph{separately} quantize the real and imaginary parts of the measurement data. Here, the real-valued quantizer $\mathcal{Q}\left(\cdot\right)$ is a $B$-bit midrise quantizer \cite{Proakis-book} that maps a real-valued input to one of $2^B$ \emph{disjoint} quantization regions, which are defined as $\mathscr{R}_{1}=(-\infty,r_{1}], \mathscr{R}_{2}=(r_{1},r_{2}], \ldots, \mathscr{R}_{b}=(r_{b-1},r_{b}], \ldots, \mathscr{R}_{2^{B}}=(r_{2^{B}-1},\infty)$, where $-\infty<r_{1} < r_{2}<\cdots<r_{2^{B}-1}<\infty$. All the regions, except for $\mathscr{R}_{1}$ and $\mathscr{R}_{2^{B}}$, exhibit equal spacing with increments of $\Delta$. In this case, the boundary point of $\mathscr{R}_{b}$ is given by $r_{b} = \left( -2^{B-1} + b \right)\Delta$, for $b=1,\ldots, 2^{B}-1$. Thus, if a real-valued input falls in the region $\mathscr{R}_{b}$, then the quantization output is represented by $r_{b}-\frac{\Delta}{2}$, that is, the \emph{midpoint} of the quantization region in which the input lies.
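As a concrete illustration of the quantizer just described, the following minimal sketch implements the $B$-bit midrise map and the componentwise complex quantizer $\mathcal{Q}_{\mathbb{C}}(\cdot)$; the function names are ours.

```python
import math

def midrise_quantize(x, B, delta):
    """B-bit midrise quantizer: maps x to the midpoint of the region
    R_b = (r_{b-1}, r_b] containing it, with r_b = (-2^(B-1) + b) * delta."""
    b = math.ceil(x / delta) + 2 ** (B - 1)
    b = min(max(b, 1), 2 ** B)              # clamp into the 2^B regions
    return (b - 2 ** (B - 1)) * delta - delta / 2.0

def quantize_complex(y, B, delta):
    """Complex-valued quantizer Q_C: quantizes the real and imaginary
    parts separately, as in Eq. (\\ref{eq:Q_y})."""
    return complex(midrise_quantize(y.real, B, delta),
                   midrise_quantize(y.imag, B, delta))
```

For $B=3$ and $\Delta = 0.25$, an input of $0.3$ falls in the region $(0.25, 0.5]$ and is mapped to its midpoint $0.375$, while inputs beyond the outermost boundaries saturate to the midpoint representation of the end regions.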
When the DPCC receives the quantized measurement vector $\widetilde{\mathbf{y}}$, it can perform state estimation using the linear minimum mean square error (LMMSE) method: \begin{equation} \label{eq:LMMSE_def} \widehat{\mathbf{x}}^{\mathsf{LMMSE}} = \left({\bf H}^{\mathsf{H}} {\bf H} + \sigma^{2} \mathbf{I}\right)^{-1} {\bf H}^{\mathsf{H}}\widetilde{\mathbf{y}}. \end{equation} As can be observed, the accuracy of the LMMSE state estimator depends highly on the quantized measurements $\widetilde{\mathbf{y}}$. A relatively high-resolution quantizer must be employed in the meter device to maintain high-precision measurement data and therefore prevent the LMMSE performance from being degraded by lower-resolution measurements. However, this is unfortunately accompanied by a significant increase in the data for transmission and processing. This growth in data volume motivates the need for a data-reduction solution. To reduce the amount of high-precision measurement data, we propose quantizing and representing a \emph{subset} of the measurements using a \emph{very short} word length (e.g., 1-6 bits), instead of adopting a higher number of quantization bits to represent \emph{all} the measurements. In this way, a more efficient use of the available bandwidth can be achieved. However, lower-resolution measurements tend to degrade the state estimation performance. Moreover, quantized measurements with different resolutions require a proper design of the data fusion process to improve the state estimation performance. Given the above problems, we develop in the next section a new framework based on Bayesian belief inference to incorporate the quantized measurements from meter devices employing different-resolution quantizers to obtain an optimal state estimate.
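A toy numerical sketch of the LMMSE estimate above, using an assumed real-valued $2$-state system so that the $2\times 2$ inverse can be written out explicitly (the paper's system is complex-valued, but the algebra is identical); all numbers are made up for illustration.

```python
# Toy illustration of the LMMSE estimate for a real-valued 2-state system.
# H, x_true, and sigma2 below are made-up values for the sketch.

sigma2 = 0.01
H = [[1.0, 0.0],
     [0.0, 1.0],
     [3.0, -3.0],
     [-3.0, 4.0]]
x_true = [1.02, 0.98]

# Noiseless measurements z = H x (quantization and noise omitted here).
y = [sum(H[p][i] * x_true[i] for i in range(2)) for p in range(4)]

# G = H^T H + sigma2 * I (2 x 2), inverted in closed form.
G = [[sum(H[p][i] * H[p][j] for p in range(4)) + (sigma2 if i == j else 0.0)
     for j in range(2)] for i in range(2)]
det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
Ginv = [[G[1][1] / det, -G[0][1] / det],
        [-G[1][0] / det, G[0][0] / det]]

# x_hat = (H^T H + sigma2 I)^{-1} H^T y
Hty = [sum(H[p][i] * y[p] for p in range(4)) for i in range(2)]
x_hat = [sum(Ginv[i][j] * Hty[j] for j in range(2)) for i in range(2)]
```

With noiseless inputs, the estimate differs from the true state only by the small bias introduced by the $\sigma^{2}\mathbf{I}$ regularization term.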
\section{State Estimation Algorithm}\label{sec:03} \subsection{Theoretical Foundation and Factor Graph Model} The objective of this work is to estimate the system state $\mathbf{x}=\{x_{i}\}_{i=1}^{N}$ from the quantized measurement vector $\widetilde{\mathbf{y}}$ and the knowledge of matrix ${\bf H}$ using the minimum mean square error (MMSE) estimator. A well-known fact is that the Bayesian MMSE inference of $x_i$ is equal to the posterior mean,\footnote{In what follows, we will derive the posterior mean and variance based on the MMSE estimation.} that is, \begin{equation}\label{eq:compuate_marginal} \widehat{x}_{i}^{\mathsf{MMSE}} = \int \, x_{i}\mathscr{P}(x_{i}|{\bf H} ,\widetilde{\mathbf{y}}) \mathrm{d} x_{i} ,\; \forall i, \end{equation} where $\mathscr{P}(x_{i}|\mathbf{H},\widetilde{\mathbf{y}})$ is the marginal posterior distribution of the joint posterior distribution $\mathscr{P}(\mathbf{x}|\mathbf{H},\widetilde{\mathbf{y}})$. According to Bayes' rule, the joint posterior distribution obeys \begin{equation}\label{eq:joint_def} \mathscr{P}(\mathbf{x}|\mathbf{H},\widetilde{\mathbf{y}}) \propto \mathscr{P}(\widetilde{\mathbf{y}}|\mathbf{H},\mathbf{x}) \mathscr{P}_{\mathsf{o}}(\mathbf{x}), \end{equation} where $\mathscr{P}(\widetilde{\mathbf{y}}|{\bf H},\mathbf{x})$ is the likelihood function, $\mathscr{P}_{\mathsf{o}}(\mathbf{x})$ is the prior distribution of the system state $\mathbf{x}$, and $\propto$ denotes that the distribution is to be normalized so that it has a unit integral.\footnote{On the basis of Bayes' theorem, (\ref{eq:joint_def}) is originally expressed as \begin{equation}\label{eq:joint_def11} \mathscr{P}(\mathbf{x}|\mathbf{H},\widetilde{\mathbf{y}}) = \frac{ \mathscr{P}(\widetilde{\mathbf{y}}|\mathbf{H},\mathbf{x}) \mathscr{P}_{\mathsf{o}}(\mathbf{x}) } {\mathscr{P}(\widetilde{\mathbf{y}}|\mathbf{H}) }, \end{equation} where the denominator \begin{equation}\label{eq:joint_def22} \mathscr{P}(\widetilde{\mathbf{y}}|\mathbf{H}) = \int 
\mathscr{P}(\widetilde{\mathbf{y}}|\mathbf{H},\mathbf{x}) \mathscr{P}_{\mathsf{o}}(\mathbf{x}) \mathrm{d}\mathbf{x} \end{equation} defines the ``prior predictive distribution'' of $\widetilde{\mathbf{y}}$ for a given topology matrix $\mathbf{H}$ and may be set to an unknown \emph{constant}. In calculating the density of $\mathbf{x}$, any function that does not depend on this parameter, such as $\mathscr{P}(\widetilde{\mathbf{y}}|\mathbf{H})$, can be discarded. Therefore, by removing $\mathscr{P}(\widetilde{\mathbf{y}}|\mathbf{H})$ from (\ref{eq:joint_def11}), the relationship changes from being ``equals'' to being ``proportional.'' That is, $\mathscr{P}(\mathbf{x}|\mathbf{H},\widetilde{\mathbf{y}})$ is proportional to the numerator of (\ref{eq:joint_def11}). However, in discarding $\mathscr{P}(\widetilde{\mathbf{y}}|\mathbf{H})$, the density $\mathscr{P}(\mathbf{x}|\mathbf{H},\widetilde{\mathbf{y}})$ has lost some properties, such as integration to one over the domain of $\mathbf{x}$. To ensure that $\mathscr{P}(\mathbf{x}|\mathbf{H},\widetilde{\mathbf{y}})$ is properly distributed, the symbol $\propto$ simply means that the distribution should be normalized to have a unit integral.} Given that the entries of the measurement noise vector ${\bf e}$ are i.i.d. random variables and under the assumption that the prior distribution of $\mathbf{x}$ has a \emph{separable} form, that is, $\mathscr{P}_{\mathsf{o}}(\mathbf{x}) = \prod_{i= 1}^{N} \mathscr{P}_{\mathsf{o}}(x_{i})$, (\ref{eq:joint_def}) can be further factored as \begin{equation}\label{eq:joint} \mathscr{P}(\mathbf{x}|\mathbf{H},\widetilde{\mathbf{y}}) \propto \prod_{\mu = 1}^{P} \mathscr{P}(\widetilde{y}_{\mu}|\mathbf{H},\mathbf{x})\prod_{i= 1}^{N} \mathscr{P}_{\mathsf{o}}(x_{i}), \end{equation} where $\mathscr{P}_{\mathsf{o}}(x_{i})$ is the prior distribution of the $i$-th element of $\mathbf{x}$ and $\mathscr{P}(\widetilde{y}_{\mu}|\mathbf{H},\mathbf{x})$ describes the $\mu$-th measurement with i.i.d. 
complex Gaussian noise \cite{Wen-15}, which can be explicitly represented as follows: \begin{equation} \mathscr{P}(\widetilde{y}_{\mu}|\mathbf{H},\mathbf{x}) = \int_{\widetilde{y}_{\mu}-\frac{\Delta}{2}}^{\widetilde{y}_{\mu}+\frac{\Delta}{2}} \mathscr{N}_{\mathbb{C}}\left( y_{\mu}; \sum_{i=1}^{N} H_{\mu i} x_{i}, \sigma^2\right) {\rm d} y_{\mu}, \end{equation} where $H_{\mu i}$ denotes the component of $\mathbf{H}$ in the $\mu$-th row and $i$-th column. For the considered problem, the entries of the system state $\mathbf{x}$ can be treated as i.i.d. complex Gaussian random variables with mean $\nu_{x}$ and variance $\sigma^2_{x}$ for each prior distribution $\mathscr{P}_{\mathsf{o}}(x_{i})$, that is, $\mathscr{P}_{\mathsf{o}}(x_{i})=\mathscr{N}_{\mathbb{C}}(\nu_{x},\sigma^2_{x})$ \cite{Hu-11-CIM}. For brevity, the prior distribution of $x_{i}$ is characterized by the prior parameter $\boldsymbol{\theta}_{\mathsf{o}}=\{\nu_{x},\sigma^2_{x}\}$. The decomposition of the joint distribution in (\ref{eq:joint}) can be well represented by a factor graph $\mathcal{G}=(\mathcal{V},\mathcal{F},\mathcal{E})$, where $\mathcal{V}=\{x_i\}_{i=1}^{N}$ is the set of \emph{unobserved} variable nodes, $\mathcal{F}=\{\mathscr{P}(\widetilde{y}_{\mu}|\mathbf{H},\mathbf{x})\}_{\mu=1}^{P}$ is the set of factor nodes, where each factor node ensures (in probability) the condition $\widetilde{y}_{\mu} = {\cal Q}_{\mathbb{C}}\left( \sum_{i} H_{\mu i} x_{i} + e_{\mu} \right)$, and $\mathcal{E}$ denotes the set of edges. Specifically, edges indicate the involvement between factor nodes and variable nodes; that is, an edge between variable node $x_i$ and factor node $\mathscr{P}(\widetilde{y}_{\mu}|\mathbf{H},\mathbf{x})$ indicates that the given factor function $\mathscr{P}(\widetilde{y}_{\mu}|\mathbf{H},\mathbf{x})$ is a function of $x_i$. Fig. \ref{fig:factorgraph_example} provides a factor graph representation for the fictitious six-bus system shown in Fig.
\ref{fig:system_model_example}. \begin{figure} \begin{center} \resizebox{3.5in}{!}{% \includegraphics*{Access-2017-06130-fig3} }% \caption{Factor graph representing the state estimation problem for Fig. \ref{fig:system_model_example}. The unobserved/observed random variables are depicted as solid/dashed lines with open circles, and the factor nodes are depicted as black squares. The factor nodes ensure (in probability) the condition $\widetilde{y}_{\mu} = {\cal Q}_{\mathbb{C}}\left( \sum_{i} H_{\mu i} x_{i} + e_{\mu} \right)$.}\label{fig:factorgraph_example} \end{center} \end{figure} Given the factor graph representation, message-passing-based algorithms, such as \emph{belief propagation} (BP) \cite{Pea-book-88,FG-01-IT}, can be applied to compute $\mathscr{P}(x_{i}|\mathbf{H},\widetilde{\mathbf{y}})$ approximately. Specifically, BP passes ``messages,'' which denote probability distribution functions, along the edges of the graph as follows \cite{MP-book}: \begin{align} \hspace{-0.35cm} \mathscr{M}_{i \rightarrow \mu}^{(t+1)}(x_{i}) & \propto \mathscr{P}_{\mathsf{o}}(x_{i}) \prod_{\begin{subarray}{c} \gamma \,\in\, \mathscr{L}(i)\\ \gamma \,\neq\, \mu \end{subarray}} \mathscr{M}_{\gamma \rightarrow i}^{(t)}(x_{i}), \label{m_f_c_v} \\ \hspace{-0.35cm} \mathscr{M}_{\mu \rightarrow i}^{(t+1)}(x_{i}) & \propto \!\!\int\!\!\!\!\! \prod_{\begin{subarray}{c} k \,\in\, \mathscr{L}(\mu)\\ k \,\neq\, i \end{subarray}}\!\!\!\!\!\! \mathrm{d}x_{k} \! \Bigg[\!\mathscr{P}(\widetilde{y}_{\mu}|\mathbf{H},\mathbf{x})\!\!\!\!\! \prod_{\begin{subarray}{c} k \,\in\, \mathscr{L}(\mu)\\ k \,\neq\, i \end{subarray}}\!\!\!\!\!\! \mathscr{M}_{k \rightarrow \mu}^{(t)}(x_{k})\Bigg],\!\!
\!\label{m_v_2_f} \end{align} where superscript $t$ indicates the iteration number, $\mathscr{M}_{i \rightarrow \mu}(x_{i}) $ is the message from the $i$-th variable node to the $\mu$-th factor node, $\mathscr{M}_{\mu \rightarrow i}(x_{i})$ is the message from the $\mu$-th factor node to the $i$-th variable node, $\mathscr{L}(i)$ is the set of factor nodes that are neighbors of the $i$-th variable node, and $\mathscr{L}(\mu)$ is the set of variable nodes that are neighbors of the $\mu$-th factor node. Then, the approximate marginal distribution is computed according to the following equation: \begin{equation}\label{eq:marginal_App} \mathscr{P}^{(t)}(x_{i}|\mathbf{H},\mathbf{y}) \propto \mathscr{P}_{\mathsf{o}}(x_{i}) \prod_{\mu \,\in\, \mathscr{L}(i)} \mathscr{M}_{\mu \rightarrow i}^{(t)}(x_{i}). \end{equation} \begin{algorithm}[tbp]\label{ago:ago1} \footnotesize \caption{{\tt EMSwGAMP} algorithm} \KwIn{$\widetilde{\mathbf{y}}, \mathbf{H}, \sigma$} \KwOut{$\{\widehat{x}_{i}^{(t)}\}_{i=1}^{N}$} \SetKwFunction{KwFn}{Fn} $\widehat{\mathbf{x}}^{\left(0\right)}=\{\widehat{x}_{i}^{\left(0\right)}\}_{i=1}^{N} \leftarrow \{ 1 \}$\\ $\boldsymbol{\tau}^{\left(0\right)}=\{\tau_{i}^{\left(0\right)}\}_{i=1}^{N} \leftarrow \{1\}$\\ $\boldsymbol{\varrho}^{\left(0\right)}=\{{\varrho}_{\mu}^{\left(0\right)}\}_{\mu = 1}^{P} \leftarrow \{1\}$\\ $\boldsymbol{\omega}^{\left(0\right)}=\{{\omega}_{\mu}^{\left(0\right)}\}_{\mu = 1}^{P} \leftarrow \{\widetilde{y}_{\mu}\}$\\ $t \leftarrow 1$ \\ \While{Stopping criteria are not met}{ \For{$\mu=1$ \emph{\KwTo} $P$}{ $\varrho^{(t)}_{\mu} \leftarrow \sum_{i=1}^{N} \left| H_{\mu i}\right|^{2} \tau_{i}^{(t-1)}$ \\ $\omega_{\mu}^{(t)} \leftarrow \sum_{i=1}^{N} H_{\mu i} \widehat{x}_{i}^{(t-1)} - \frac{\widetilde{y}_{\mu} - \omega_{\mu}^{(t-1)}}{\sigma+\varrho^{(t-1)}_{\mu}}\varrho^{(t)}_{\mu}$ \\ $\varsigma_{\mu}^{(t)} \leftarrow \mathbb{VAR} \left[z_{\mu}|\widetilde{y}_{\mu}, \omega_{\mu}^{(t)}, \varrho_{\mu}^{(t)}\right]$\\ 
$\widehat{z}_{\mu}^{(t)} \leftarrow \mathbb{E}\left[z_{\mu}|\widetilde{y}_{\mu}, \omega_{\mu}^{(t)}, \varrho_{\mu}^{(t)}\right]$\\ $\widehat{s}_{\mu}^{(t)} \leftarrow \frac{\widehat{z}_{\mu}^{(t)} - \omega_{\mu}^{(t)}}{\varrho_{\mu}^{(t)}} $\\ $\zeta_{\mu}^{(t)} \leftarrow \left(1- \frac{\varsigma_{\mu}^{(t)}}{\varrho_{\mu}^{(t)}}\right)\frac{1}{\varrho_{\mu}^{(t)}}$ \\ } $[\ell_1, \ell_2, \cdots, \ell_{N} ] \leftarrow {\tt permute}(\{1,2,\cdots,N\})$ \\ \For{$k=1$ \emph{\KwTo} $N$}{ $i\leftarrow \ell_k$\\ $(\Sigma_{i}^{(t)})^2 \leftarrow \left[\sum_{\mu =1}^{P} \left|H_{\mu i}\right|^{2} \zeta_{\mu}^{(t)} \right]^{-1}$\\ $R_{i}^{(t)} \leftarrow \widehat{x}_{i}^{(t-1)} + (\Sigma_{i}^{(t)})^2 \sum_{\mu =1}^{P} H_{\mu i}^{*}\, \widehat{s}_{\mu}^{(t)}$\\ $\widehat{x}_{i}^{(t)} \leftarrow \mathbb{E}{\left[ x_{i} | \boldsymbol{\theta}_{\mathsf{o}}, R_{i}^{(t)}, (\Sigma_{i}^{(t)})^2 \right]} $\\ $\tau_{i}^{(t)} \leftarrow \mathbb{VAR}{\left[ x_{i} | \boldsymbol{\theta}_{\mathsf{o}}, R_{i}^{(t)}, (\Sigma_{i}^{(t)})^2 \right]} $ \\ \For{$\mu \in \mathscr{L}(i)$ }{ $\breve{\varrho}_{\mu}^{(t)} \leftarrow \varrho_{\mu}^{(t)}$\\ $\varrho_{\mu}^{(t)} \leftarrow \breve{\varrho}_{\mu}^{(t)} + \left|H_{\mu i}\right|^{2} (\tau_{i}^{(t)} - \tau_{i}^{(t-1)} )$\\ $\omega_{\mu}^{(t)} \leftarrow \omega_{\mu}^{(t)} + H_{\mu i} (\widehat{x}_{i}^{(t)} - \widehat{x}_{i}^{(t-1)}) - \frac{\widetilde{y}_{\mu} - \omega_{\mu}^{(t-1)}}{\sigma+\varrho^{(t-1)}_{\mu}}(\varrho^{(t)}_{\mu}- \breve{\varrho}_{\mu}^{(t)})$ } } $\nu_{x} \leftarrow \frac{1}{N}\sum_{i=1}^{N} R_{i}^{(t)}$\\ $\sigma_{x}^{2} \leftarrow \frac{1}{N}\sum_{i=1}^{N} \left[(\nu_{x}- R_{i}^{(t)})^2+(\Sigma_{i}^{(t)})^2\right]$\\ $t \leftarrow t+1$ } \end{algorithm} \subsection{EM-assisted GAMP for the State Estimation Problems} However, according to \cite{Guo-06-ITW,Wang-14-WCNC,Rangan-11-ISIT}, BP remains \emph{computationally intractable} for large-scale problems because of the high-dimensional integrals involved and large number 
of messages required. Moreover, the prior parameters $\boldsymbol{\theta}_{\mathsf{o}}$ and $\sigma^2$ are usually \emph{unknown} in advance. Fortunately, BP can be simplified into the GAMP algorithm \cite{Rangan-11-ISIT} based on the central limit theorem and Taylor expansions to enhance computational tractability.\footnote{To state this more precisely, applying the central limit theorem to messages yields a Gaussian approximation that can be used to simplify BP from a recursion on functions to a recursion on numbers. On the basis of a series of Taylor expansions, the number of messages can be reduced significantly.} In addition, the EM algorithm can be applied to learn the prior parameters \cite{Vila-13TSP}. With the aid of these two algorithms, we develop an iterative method involving the following two phases per iteration for the state estimation problems: First, ``swept'' GAMP ({\tt SwGAMP}) \cite{SwAMP-2015}, a modified version of GAMP, is exploited to estimate $\mathbf{x}$. Second, the EM algorithm is applied to learn the prior parameters from the data at hand. The stepwise implementation procedure of the proposed state estimation scheme, referred to as {\tt EMSwGAMP}, is presented in Algorithm \ref{ago:ago1}. Owing to space limitations, a detailed derivation of (complex) GAMP and EM is omitted from this paper. For the details, refer to \cite{Rangan-11-ISIT,Krzakala-12,SwAMP-2015,Jay-15-TVT,Vila-13TSP}. Instead, we provide explanations for each line of Algorithm \ref{ago:ago1} to ensure better understanding of the proposed scheme. In Algorithm \ref{ago:ago1}, $\widehat{x}_{i}^{(t)}$ denotes the estimate of the $i$-th element of $\mathbf{x}$ in the $t$-th iteration and $\tau_{i}^{(t)}$ can be interpreted as an approximation of the posterior variance of $\widehat{x}_{i}^{(t)}$; these two quantities are initialized as $\widehat{x}_{i}^{(0)} = 1$ and $\tau_{i}^{(0)}=1$, respectively.
For each factor node, we introduce two auxiliary variables $\omega_{\mu}^{(t)}$ and $\varrho_{\mu}^{(t)}$, given in Lines 9 and 8 of Algorithm \ref{ago:ago1}, describing the current mean and variance estimates of the $\mu$-th element of $\widetilde{\mathbf{y}}$, respectively. The initial conditions of $\omega_{\mu}^{(0)}$ and $\varrho_{\mu}^{(0)}$ are specified in Lines 4 and 3 of Algorithm \ref{ago:ago1}, respectively. As stated previously, $z_{\mu}$ is the $\mu$-th element of the \emph{noiseless} measurement vector $\mathbf{z}$, that is, $z_{\mu} = \sum_{i} H_{\mu i}\,x_{i}$. Therefore, according to the derivations of \cite{Rangan-11-ISIT}, $z_{\mu}$ conditioned on $x_i$ can be further approximated as a Gaussian distribution with the mean and variance given in Lines 9 and 8 of Algorithm \ref{ago:ago1}, respectively, which are evaluated with respect to the following expression \begin{equation} \label{eq:post_z} \mathscr{P}(z_{\mu}|\widetilde{y}_{\mu}) \propto \int_{\widetilde{y}_{\mu}-\frac{\Delta}{2}}^{\widetilde{y}_{\mu}+\frac{\Delta}{2}} \mathscr{N}_{\mathbb{C}}(y_{\mu}; z_{\mu}, \sigma^2)\mathscr{N}_{\mathbb{C}}(z_{\mu}; \omega_{\mu}^{(t)}, \varrho_{\mu}^{(t)}) {\rm d} y_{\mu}. \end{equation} In (\ref{eq:post_z}), the message-based estimate of $z_{\mu}$ acts as a Gaussian prior whose mean is $\omega_{\mu}^{(t)}$ and variance is $\varrho_{\mu}^{(t)}$. Finally, the messages from factor nodes to variable nodes are reduced to a simple message, which is parameterized by $\widehat{s}_{\mu}$ and $\zeta_{\mu}$, given in Lines 12 and 13 of Algorithm \ref{ago:ago1}. Therefore, we refer to messages $\{\widehat{s}_{\mu},\zeta_{\mu}\}$ as measurement updates.
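Once the posterior moments of $z_{\mu}$ are available, the collapse of the factor-node message into the pair $\{\widehat{s}_{\mu},\zeta_{\mu}\}$ (Lines 12 and 13 of Algorithm \ref{ago:ago1}) is a two-line computation; a minimal sketch with our own function name:

```python
def measurement_update(z_hat, z_var, omega, rho):
    """Lines 12-13 of Algorithm 1: reduce the factor-node message to the
    pair (s_hat, zeta), given the posterior mean/variance of z_mu and the
    current message moments (omega, rho)."""
    s_hat = (z_hat - omega) / rho
    zeta = (1.0 - z_var / rho) / rho
    return s_hat, zeta
```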
Similarly, for the variable nodes, we also introduce two auxiliary variables $R_{i}^{(t)}$ and $(\Sigma_{i}^{(t)})^2$, given in Lines 18 and 17 of Algorithm \ref{ago:ago1}, describing the current mean and variance estimates of the $i$-th element of $\mathbf{x}$ \emph{without} considering the prior information of $x_{i}$, respectively. Then, adding the prior information of $x_{i}$, that is, $\mathscr{P}_{\mathsf{o}}(x_{i})=\mathscr{N}_{\mathbb{C}}(\nu_{x},\sigma^2_{x})$, to the message updates, the posterior mean and variance of $x_{i}$ are given in Lines 19 and 20 of Algorithm \ref{ago:ago1}, respectively, where the moments are computed with respect to the following expression: \begin{equation} \label{eq:post_x} \! \mathscr{P}(x_{i};\widehat{x}_{i}^{(t)},\tau_{i}^{(t)})\! \propto\! \mathscr{N}_{\mathbb{C}}(x_i ; \nu_{x}, \sigma^2_{x}) \mathscr{N}_{\mathbb{C}}(x_i; R_{i}^{(t)}, (\Sigma_{i}^{(t)})^2).\! \end{equation} Here, $\widehat{x}_{i}^{(t)}$ (i.e., the expectation in Line 19 of Algorithm 1) and $\tau_{i}^{(t)}$ (i.e., the variance in Line 20 of Algorithm 1) can be easily obtained using the standard formulas for Gaussian distributions as \cite{Wang-15-ICC,Barbier-15} \begin{align} \widehat{x}_{i}^{(t)} & = R_{i}^{(t)} + \frac{(\Sigma_{i}^{(t)})^2 }{(\Sigma_{i}^{(t)})^2 + \sigma^2_{x}} {\left(\nu_{x} - R_{i}^{(t)} \right)},\\ \tau_{i}^{(t)} & = \frac{(\Sigma_{i}^{(t)})^2 \sigma^2_{x}}{(\Sigma_{i}^{(t)})^2 + \sigma^2_{x}}. \label{eq:def_fc} \end{align} Manoel et al. \cite{SwAMP-2015} slightly modified the update scheme for GAMP, where partial quantities are updated \emph{sequentially} rather than \emph{in parallel}, to improve the stability of GAMP.
\footnote{Empirical studies demonstrate that GAMP with these slight modifications not only exhibits good convergence performance but is also more robust to difficult measurement matrices ${\bf H}$ compared with the original GAMP.} Specifically, $\sum_{i} H_{\mu i} \widehat{x}_{i}^{(t-1)}$ and $\varrho_{\mu}^{(t)}$ are recomputed as the sweep updates over $i$ for a single iteration step. Lines 22-24 of Algorithm \ref{ago:ago1} are the core steps to perform the so-called sweep (or reordering) updates. In brief, we refer to messages $\{\widehat{x}_{i},\tau_{i}\}$ as variable updates and to messages $\{\widehat{s}_{\mu},\zeta_{\mu}\}$ as measurement updates for the {\tt SwGAMP} algorithm. One iteration of the {\tt SwGAMP} algorithm involves the implementation of these updates together with the estimation of the system state $\mathbf{x}$. In the first phase of Algorithm \ref{ago:ago1}, the prior parameters $\boldsymbol{\theta}_{\mathsf{o}}= \{\nu_{x},\sigma^2_{x}\}$ are treated as known, although they may be unknown in practice. Thus, the second phase of the proposed algorithm is to adopt the EM algorithm to learn the prior parameters $\boldsymbol{\theta}_{\mathsf{o}}$ on the basis of the quantities acquired in the first phase of the algorithm. The EM algorithm is a general iterative method for likelihood optimization in probabilistic models with hidden variables. In our case, the EM updates are expressed in the following form \cite{Vila-13TSP} \begin{equation} \label{eq:EM_update} \boldsymbol{\theta}_{\mathsf{o}}^{\rm new} = \operatornamewithlimits{arg\, max}_{\boldsymbol{\theta}_{\mathsf{o}}} \mathbb{E}\left\{ \ln \mathscr{P}(\mathbf{x},\mathbf{y};\boldsymbol{\theta}_{\mathsf{o}}) \right\}, \end{equation} where the expectation is taken over the posterior probability of $\mathbf{x}$ conditioned on $\boldsymbol{\theta}_{\mathsf{o}}$ and $\mathbf{y}$.
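As a minimal sketch of the two phases, the functions below implement the Gaussian-product variable update for $\widehat{x}_{i}^{(t)}$ and $\tau_{i}^{(t)}$ and the EM refresh of $\nu_{x}$ and $\sigma^{2}_{x}$ (Lines 25 and 26 of Algorithm \ref{ago:ago1}) for real-valued toy inputs; the function names are ours, and in practice $R_{i}$ and $(\Sigma_{i})^{2}$ are produced by the {\tt SwGAMP} phase.

```python
def variable_update(R_i, Sigma2_i, nu_x, sigma2_x):
    """Posterior mean/variance of x_i from the product of the prior
    N(nu_x, sigma2_x) and the pseudo-measurement N(R_i, Sigma2_i)
    (Lines 19-20 of Algorithm 1)."""
    x_hat = R_i + Sigma2_i / (Sigma2_i + sigma2_x) * (nu_x - R_i)
    tau = Sigma2_i * sigma2_x / (Sigma2_i + sigma2_x)
    return x_hat, tau

def em_refresh(R, Sigma2):
    """EM update of the prior hyperparameters (Lines 25-26 of Algorithm 1):
    sample mean of R_i, and sample spread plus residual variance."""
    N = len(R)
    nu_x = sum(R) / N
    sigma2_x = sum((nu_x - R_i) ** 2 + S_i for R_i, S_i in zip(R, Sigma2)) / N
    return nu_x, sigma2_x
```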
Following similar steps in \cite{Vila-13TSP}, we can derive a set of EM-based update equations for the hyperparameters, that is, the \emph{prior} information of the system states (i.e., $\nu_{x}$ and $\sigma^2_{x}$) that should be inferred. The detailed EM updates for the hyperparameters are provided in Lines 25 and 26 of Algorithm \ref{ago:ago1}, respectively. Notably, the quantities $\{R_{i}^{(t)}\}_{i=1}^{N}$, $\{(\Sigma_{i}^{(t)})^2\}_{i=1}^{N}$, $\{\widehat{z}_{\mu}^{(t)} \}_{\mu=1}^{P}$, and $\{\varsigma_{\mu}^{(t)}\}_{\mu = 1}^{P}$ are readily available after running the {\tt SwGAMP} algorithm in the first phase. {\it Remark 3.1 (Calculating Lines 10 and 11 of Algorithm \ref{ago:ago1} with high-resolution representation of the measured data) :} In modern SCADA systems, each measurement is quantized and represented using a word length of 12 (or 16) bits. With such a high-precision representation of the measurement data, the error between the quantized value $\widetilde{y}_{\mu}$ and the actual value $y_{\mu}$ is negligible, that is, $\widetilde{y}_{\mu} \simeq{y}_{\mu}$. In this case, (\ref{eq:post_z}) can be rewritten as follows: \begin{equation} \label{eq:post_z_unq} \mathscr{P}(z_{\mu}|\widetilde{y}_{\mu})\propto \mathscr{N}_{\mathbb{C}}(y_{\mu}; z_{\mu}, \sigma^2) \mathscr{N}_{\mathbb{C}}(z_{\mu}; \omega_{\mu}^{(t)}, \varrho_{\mu}^{(t)}). \end{equation} Then, the moments, $\widehat{z}_{\mu}^{(t)}$ and $\varsigma_{\mu}^{(t)}$, can be easily obtained using standard formulas for Gaussian distributions, as follows \cite{Wang-15-ICC}: \begin{align} \widehat{z}_{\mu}^{(t)} & = \omega_{\mu}^{(t)} + \frac{ \varrho_{\mu}^{(t)}}{\varrho_{\mu}^{(t)} + \sigma^2} {\left(\widetilde{y}_{\mu} - \omega_{\mu}^{(t)}\right)}, \label{z_mean} \\ \varsigma_{\mu}^{(t)} & = \frac{ \varrho_{\mu}^{(t)} \sigma^2}{\varrho_{\mu}^{(t)} + \sigma^2}.
\label{z_var} \vspace{-.05in} \end{align} {\it Remark 3.2 (Calculating Lines 10 and 11 of Algorithm \ref{ago:ago1} under the ``quantized'' scenario) :} When quantization error is \emph{nonnegligible}, particularly at coarse quantization levels, (\ref{eq:post_z_unq}) is no longer valid because using $\widetilde{y}_{\mu}$ to approximate ${y}_{\mu}$ results in severe performance degradation. In this case, we have to adopt (\ref{eq:post_z}) to determine the conditional mean $\widehat{z}_{\mu}^{(t)}$ and conditional variance $\varsigma_{\mu}^{(t)}$, which can be obtained as follows: \begin{align} \widehat{z}_{\mu}^{(t)} & = \frac{ \int_{\widetilde{y}_{\mu}-\frac{\Delta}{2}}^{\widetilde{y}_{\mu}+\frac{\Delta}{2}} y_{\mu} \mathscr{N}_{\mathbb{C}}(y_{\mu};\omega_{\mu}^{(t)}, \sigma^2+\varrho_{\mu}^{(t)}) {\rm d} y_{\mu} } { \int_{\widetilde{y}_{\mu}-\frac{\Delta}{2}}^{\widetilde{y}_{\mu}+\frac{\Delta}{2}} \mathscr{N}_{\mathbb{C}}(y_{\mu};\omega_{\mu}^{(t)}, \sigma^2+\varrho_{\mu}^{(t)}) {\rm d} y_{\mu} }, \label{Q_z_mean} \\ \varsigma_{\mu}^{(t)} & = \frac{ \int_{\widetilde{y}_{\mu}-\frac{\Delta}{2}}^{\widetilde{y}_{\mu}+\frac{\Delta}{2}} | y_{\mu} - \widehat{z}_{\mu}^{(t)} |^2 \mathscr{N}_{\mathbb{C}}(y_{\mu};\omega_{\mu}^{(t)}, \sigma^2+\varrho_{\mu}^{(t)}) {\rm d} y_{\mu} } { \int_{\widetilde{y}_{\mu}-\frac{\Delta}{2}}^{\widetilde{y}_{\mu}+\frac{\Delta}{2}} \mathscr{N}_{\mathbb{C}}(y_{\mu};\omega_{\mu}^{(t)}, \sigma^2+\varrho_{\mu}^{(t)}) {\rm d} y_{\mu} }. \label{Q_z_var} \end{align} Explicit expressions of (\ref{Q_z_mean}) and (\ref{Q_z_var}) are provided in \cite{Wen-15}. {\it Remark 3.3 (Stopping criteria):} The algorithm can be terminated either when a predefined number of iterations is reached or when the change in the estimate of $\mathbf{x}$ between successive iterations, given by the quantity $\epsilon \triangleq \sum_{i=1}^{N} |\widehat{x}_{i}^{(t)} - \widehat{x}_{i}^{(t-1)}|^2$, becomes sufficiently small.
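Equations (\ref{Q_z_mean}) and (\ref{Q_z_var}) are moments of a Gaussian truncated to one quantization cell, so they admit closed forms via the standard normal pdf and cdf. The sketch below evaluates them for a single real component (the complex quantizer acts on the real and imaginary parts separately); the function names are ours, and in the wide-cell limit the result collapses to the unquantized moments of Remark 3.1.

```python
import math

def phi(t):
    """Standard normal pdf."""
    return math.exp(-0.5 * t * t) / math.sqrt(2.0 * math.pi)

def Phi(t):
    """Standard normal cdf."""
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def quantized_moments(y_tilde, omega, rho, sigma2, delta):
    """Truncated-Gaussian evaluation of the conditional moments for one
    real component: mean/variance of N(omega, sigma2 + rho) restricted to
    the quantization cell (y_tilde - delta/2, y_tilde + delta/2]."""
    v = sigma2 + rho
    s = math.sqrt(v)
    alpha = (y_tilde - delta / 2.0 - omega) / s
    beta = (y_tilde + delta / 2.0 - omega) / s
    Z = Phi(beta) - Phi(alpha)                     # probability of the cell
    z_hat = omega + s * (phi(alpha) - phi(beta)) / Z
    var = v * (1.0 + (alpha * phi(alpha) - beta * phi(beta)) / Z
               - ((phi(alpha) - phi(beta)) / Z) ** 2)
    return z_hat, var
```

For a cell centered on $\omega_{\mu}^{(t)}$ the mean is unchanged and the variance shrinks below $\sigma^{2}+\varrho_{\mu}^{(t)}$, whereas a very wide cell returns the untruncated moments.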
\section{Simulation Results and Discussion}\label{sec:04} In this section, we evaluate the performance of the proposed {\tt EMSwGAMP} algorithm for single-phase and three-phase state estimations. The optimal PMU placement issue is \emph{not} included in this study, and we assume that PMUs are placed in terminal buses. In the single-phase state estimation, the IEEE 69-bus radial distribution network \cite{69-bus-data} is used as the test system, where the subset of buses with PMU measurements is denoted by $\mathcal{P}_{69}=\{1, 27, 35, 46, 50, 52, 67, 69\}$. A \emph{modified} version of the IEEE $69$-bus radial distribution network, referred to as 69m in this study, is examined to verify the robustness of the proposed algorithm. The system settings of this modified test system are identical to those of the IEEE 69-bus radial distribution network, except that the bus voltages of this test system can vary over a wider range, corresponding to increased load levels. For these two test systems, we have $68$ current measurements and $8$ voltage measurements. The software toolbox MATPOWER \cite{MATPOWER} is utilized to run the proposed state estimation algorithm for various cases in the single-phase state estimation. The IEEE 37-bus three-phase system is used as the test system for the three-phase state estimation, where the subset of buses with PMU measurements is denoted by $\mathcal{P}_{37}=\{1, 10, 15, 20, 29, 35\}$. In contrast to the single-phase state estimation, the system state of the three-phase estimation is generated by the test system documents instead of MATPOWER. We have $105$ current measurements and $18$ voltage measurements in the 37-bus three-phase system. The prior distributions of the voltage at each bus for the different test systems are listed in Table \ref{tb:initial_priori}.
In each estimation, the overall mean squared error (MSE) of the complex bus voltages, the MSE of the bus voltage magnitude, and that of the bus voltage phase angle are used as comparison measures; they are given by ${\rm MSE} = \frac{1}{N}\sum_{i=1}^{N}|x_{i}-\widehat{x}_{i}|^2$, ${\rm MSE}_{\rm magn} = \frac{1}{N}\sum_{i=1}^{N}(|x_{i}|-|\widehat{x}_{i}|)^2$, and ${\rm MSE}_{\rm phase} = \frac{1}{N}\sum_{i=1}^{N}[\arg(x_{i})-\arg(\widehat{x}_{i})]^2$, respectively. The LMMSE estimator is tested for comparison. In our implementation, termination of Algorithm \ref{ago:ago1} is declared when the corresponding constraint violation is less than $\epsilon =10^{-8}$. A total of $1,000$ Monte Carlo simulations were conducted to obtain average results and to analyze the achieved measures. The simulations for computation time were conducted on an Intel i7-4790 computer with a 3.6 GHz CPU and 16 GB RAM. For clarity, $\mathcal{K}$ denotes the number of measurements quantized with $\mathcal{B}$ bits, where $\mathcal{B}$ is the number of bits used for quantization.
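For concreteness, the three error measures can be evaluated as follows (a minimal sketch; the function and variable names are ours):

```python
import numpy as np

def estimation_errors(x, x_hat):
    """Compute MSE, MSE_magn, and MSE_phase for complex bus-voltage
    vectors x (true state) and x_hat (estimate), as defined above."""
    x = np.asarray(x, dtype=complex)
    x_hat = np.asarray(x_hat, dtype=complex)
    mse = np.mean(np.abs(x - x_hat) ** 2)                      # overall
    mse_magn = np.mean((np.abs(x) - np.abs(x_hat)) ** 2)       # magnitude
    mse_phase = np.mean((np.angle(x) - np.angle(x_hat)) ** 2)  # phase angle
    return mse, mse_magn, mse_phase
```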
\begin{table} \begin{center} \caption{The prior distribution of bus voltage for different systems}\label{tb:initial_priori} \begin{tabular}{|C{1.35cm}|c|c|c|c|c|c|c|} \hline & \multirow{2}{*}{$N$} & Magnitude & Phase &\multirow{2}{*}{$\sigma^2_{x}$} \\ & & (mean) & (mean) & \\ \hline\hline \multirow{2}{*}{single-phase} &\multirow{1}{*}{69\phantom{m}} & $1.00$ & $5.60 \times 10^{-4}$ & $5.46 \times10^{-7}$ \\ \cline{2-5} &\multirow{1}{*}{69m} & $1.04$ & $1.71 \times 10^{-2}$ & $5.66 \times 10^{-4}$ \\ \hline three-phase &\multirow{1}{*}{37\phantom{m}} & $0.01$ & $1.12$ & $9.86 \times 10^{-1}$ \\ \hline \end{tabular} \end{center} \end{table} \begin{table} \begin{center} \caption{Average estimation results obtained by {\tt EMSwGAMP} and LMMSE with the unquantized measured data for the various systems}\label{tb:gaussian_model_test_result} \begin{tabular}{|c|c|c|c|c|c|} \hline \multirow{2}{*}{$N$} & \multirow{2}{*}{Algorithm} & \multirow{2}{*}{${\rm MSE}$} & \multirow{2}{*}{${\rm MSE}_{\rm magn}$} & \multirow{2}{*}{${\rm MSE}_{\rm phase}$} \\ &&& & \\ \hline\hline \multirow{2}{*}{69\phantom{m}} & {\tt EMSwGAMP} & $3.84 \times 10^{-4}$ & $2.06 \times 10^{-4}$ & $7.78 \times 10^{-4}$ \\ \cline{2-5} & LMMSE & $8.16 \times 10^{-4}$ & $4.65 \times 10^{-4}$ & $3.55 \times 10^{-4}$ \\ \hline \hline \multirow{2}{*}{69m} & {\tt EMSwGAMP} & $7.11 \times 10^{-4}$ & $3.00 \times 10^{-4}$ & $3.78 \times 10^{-4}$ \\ \cline{2-5} & LMMSE & $8.22 \times 10^{-4}$ & $4.75 \times 10^{-4}$ & $3.23 \times 10^{-4}$ \\ \hline \end{tabular} \end{center} \end{table} Table \ref{tb:gaussian_model_test_result} shows a summary of the average ${\rm MSE}$, ${\rm MSE}_{\rm magn}$, and ${\rm MSE}_{\rm phase}$ achieved by {\tt EMSwGAMP} and LMMSE for single-phase state estimation with various systems. 
The results show that even under the traditional \emph{unquantized} setting,\footnote{As mentioned in Remark 3.1, when the measured data are represented using a wordlength of 16 bits, the quantization error is negligible. Such high-precision measurement data are henceforth referred to as the \emph{unquantized} measured data. Therefore, all measurements are quantized with $16$ bits in Table \ref{tb:gaussian_model_test_result}.} {\tt EMSwGAMP} still outperforms LMMSE because {\tt EMSwGAMP} exploits the statistical knowledge of the estimated parameters $\boldsymbol{\theta}_{\mathsf{o}}$, which is learned from the data via the second phase of {\tt EMSwGAMP}, that is, the EM learning algorithm. Table \ref{tb:gaussian_model_parameter_learning} reveals that the estimated system states obtained by {\tt EMSwGAMP} are close to the true values, which validates the effectiveness of the proposed learning algorithm. A detailed inspection of Table \ref{tb:gaussian_model_parameter_learning} shows that the mean value of the voltage magnitude is estimated exactly by the EM learning algorithm. Therefore, the average ${\rm MSE}_{\rm magn}$ of {\tt EMSwGAMP} is better than that of LMMSE. However, the mean value of the voltage phase cannot be estimated accurately by the EM learning algorithm. As a result, the average ${\rm MSE}_{\rm phase}$ of {\tt EMSwGAMP} is inferior to that of LMMSE.
\begin{table} \begin{center} \caption{Parameter learning results using {\tt EMSwGAMP}}\label{tb:gaussian_model_parameter_learning} \begin{tabular}{|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{$N$} &\multirow{2}{*}{Algorithm} & Magnitude & Phase \\ & & (mean) & (mean) \\ \hline\hline \multirow{2}{*}{69\phantom{m}} & True value & $1.00$ & $\phantom{-}5.60\times 10^{-4}$ \\ \cline{2-4} & {\tt EMSwGAMP} & $1.00$ & $-2.53 \times 10^{-4}$ \\ \hline \hline \multirow{2}{*}{69m} & True value & $1.04$ & $\phantom{-}1.71 \times 10^{-2}$ \\ \cline{2-4} & {\tt EMSwGAMP} & $1.04$ & $\phantom{-}1.10 \times 10^{-2}$ \\ \hline \end{tabular} \end{center} \end{table} We consider an extreme scenario where several measurements are quantized to ``$1$'' bit, but others are not, to reduce the amount of transmitted data. The measurements selected for quantization are specified in Appendix A. Table \ref{tb:1bit_num_MSE_compare} shows the average ${\rm MSE}$, ${\rm MSE}_{\rm magn}$, and ${\rm MSE}_{\rm phase}$ against $\mathcal{K}$ obtained by {\tt EMSwGAMP}, where the performance of the LMMSE algorithm with $\mathcal{K}= 17$ is also included for comparison. The following observations are made on the basis of Table \ref{tb:1bit_num_MSE_compare}: First, when a measurement is quantized with 1 bit, we only know whether it is positive, so most of the information it carries about the system is lost. Hence, as expected, increasing $\mathcal{K}$ naturally degrades the average MSE performance because more information is lost. However, the achieved performance of the 69-bus test system is \emph{less} sensitive to $\mathcal{K}$ because the bus voltage variations in this system are small. Thus, the proposed algorithm can easily deal with \emph{incomplete} data. Second, even for the modified 69-bus test system with its large bus voltage fluctuations, the obtained ${\rm MSE}_{\rm phase}$ performance can still achieve $10^{-3}$ when $\mathcal{K}\leq 17$.
However, with $\mathcal{K}= 17$, the LMMSE algorithm exhibits \emph{poor} performance for both considered test systems and thus cannot be used in practice. \begin{table} \begin{center} \caption{Average estimation results obtained by {\tt EMSwGAMP} with different numbers of $1$-bit quantized measurements for the various systems}\label{tb:1bit_num_MSE_compare} \begin{tabular}{|C{0.35cm}|C{1.25cm}|c|C{1.45cm}|C{1.45cm}|C{1.45cm}|} \hline \multirow{2}{*}{$N$} & \multirow{2}{*}{Algorithm} & \multirow{2}{*}{$\mathcal{K}$} & \multirow{2}{*}{${\rm MSE}$} & \multirow{2}{*}{${\rm MSE}_{\rm magn}$} & \multirow{2}{*}{${\rm MSE}_{\rm phase}$} \\ & & && & \\ \hline\hline \multirow{10}{*}{69\phantom{m}} & \multirow{9}{*}{{\tt EMSwGAMP}}& $\phantom{0}0$ & $3.84 \times 10^{-4}$ & $2.07 \times 10^{-4}$ & $1.77 \times 10^{-4}$ \\ \cline{3-6} & & $\phantom{0}2$ & $1.00 \times 10^{-3}$ & $5.97 \times 10^{-4}$ & $4.44 \times 10^{-4}$ \\ \cline{3-6} & & $\phantom{0}4$ & $1.00 \times 10^{-3}$ & $6.18 \times 10^{-4}$ & $4.74 \times 10^{-4}$ \\ \cline{3-6} & & $17$ & $1.00 \times 10^{-3}$ & $5.76 \times 10^{-4}$ & $4.19 \times 10^{-4}$ \\ \cline{3-6} & & $19$ & $1.00 \times 10^{-3}$ & $5.72 \times 10^{-4}$ & $4.34 \times 10^{-4}$ \\ \cline{3-6} & & $23$ & $1.00 \times 10^{-3}$ & $5.89 \times 10^{-4}$ & $4.56 \times 10^{-4}$ \\ \cline{3-6} & & $27$ & $1.10 \times 10^{-3}$ & $6.04 \times 10^{-4}$ & $4.87 \times 10^{-4}$ \\ \cline{3-6} & & $34$ & $1.00 \times 10^{-3}$ & $5.70 \times 10^{-4}$ & $4.47 \times 10^{-4}$ \\ \cline{3-6} & & $42$ & $1.00 \times 10^{-3}$ & $5.59 \times 10^{-4}$ & $4.41 \times 10^{-4}$ \\ \cline{2-6} & LMMSE & $17$ & $2.39 \times 10^{-1}$ & $1.07 \times 10^{-1}$ & $1.59 \times 10^{-1}$\\ \hline \hline \multirow{10}{*}{69m}& \multirow{9}{*}{{\tt EMSwGAMP}}& $\phantom{0}0$ & $7.00 \times 10^{-4}$ & $3.90 \times 10^{-4}$ & $3.58 \times 10^{-4}$ \\ \cline{3-6} & & $\phantom{0}2$ & $1.00 \times 10^{-3}$ & $5.59 \times 10^{-4}$ & $3.87 \times 10^{-4}$ \\ \cline{3-6} & &
$\phantom{0}4$ & $1.00 \times 10^{-3}$ & $5.68 \times 10^{-4}$ & $3.99 \times 10^{-4}$ \\ \cline{3-6} & & $17$ & $1.00 \times 10^{-3}$ & $5.40 \times 10^{-4}$ & $3.74 \times 10^{-4}$ \\ \cline{3-6} & & $19$ & $1.20 \times 10^{-3}$ & $7.19 \times 10^{-4}$ & $4.37 \times 10^{-4}$ \\ \cline{3-6} & & $23$ & $1.20 \times 10^{-3}$ & $7.12 \times 10^{-4}$ & $4.64 \times 10^{-4}$ \\ \cline{3-6} & & $27$ & $1.30 \times 10^{-3}$ & $7.20 \times 10^{-3}$ & $5.17 \times 10^{-4}$ \\ \cline{3-6} & & $34$ & $1.30 \times 10^{-3}$ & $7.45 \times 10^{-3}$ & $5.44 \times 10^{-4}$ \\ \cline{3-6} & & $42$ & $1.50 \times 10^{-3}$ & $8.00 \times 10^{-3}$ & $6.04 \times 10^{-4}$ \\ \cline{2-6} & LMMSE & $17$ & $2.46 \times 10^{-1}$ & $1.01 \times 10^{-1}$ & $1.65 \times 10^{-1}$\\ \hline \end{tabular} \end{center} \end{table} Table \ref{tb:1bit_num_MSE_compare} also shows that the proposed algorithm maintains \emph{reasonable} performance only up to $\mathcal{K}= 17$. This finding naturally raises the question: How many quantization bits for these $17$ measurements are needed to achieve a performance \emph{close to} that of the unquantized measurements? Table \ref{tb:1bit_model_test_result} therefore shows the performance of the proposed algorithm using different numbers of quantization bits $\mathcal{B}$ for these $17$ measurements. For ease of reference, the performance of the proposed algorithm with the unquantized measurements is also provided. Furthermore, the average running time of the proposed algorithm for Table \ref{tb:1bit_model_test_result} is provided in Table \ref{tb:run_time_diff_bit_69bus}. Table \ref{tb:1bit_model_test_result} shows that increasing $\mathcal{B}$ improves the state estimation performance. However, as shown in Table \ref{tb:run_time_diff_bit_69bus}, the required running time also increases with the value of $\mathcal{B}$.
Fortunately, the required running time remains within 2 s even at $\mathcal{B}=6$. Moreover, if we further increase the quantization wordlength from $\mathcal{B} = 6$ to $\mathcal{B} = 7$, the performance remains the same, whereas the corresponding running time rapidly increases from 1.90 s to 2.45 s. These findings indicate that $(\mathcal{B},\mathcal{K}) = (6,17)$ is an appropriate parameter choice for the proposed framework. Therefore, in the following simulations, we consider the scenario where more than half of the measurements are quantized to $6$ bits and the others are unquantized, to further reduce the data transmitted from the measuring devices to the data-gathering center. \begin{table} \begin{center} \caption{The performance of {\tt EMSwGAMP} using different quantization bits for the $17$ measurements for the various systems}\label{tb:1bit_model_test_result} \begin{tabular}{|c|c|c|c|c|c|} \hline \multirow{2}{*}{$N$} & \multirow{2}{*}{$\mathcal{B}$-bit} & \multirow{2}{*}{${\rm MSE}$} & \multirow{2}{*}{${\rm MSE}_{\rm magn}$} & \multirow{2}{*}{${\rm MSE}_{\rm phase}$} \\ & && & \\ \hline\hline \multirow{7}{*}{69\phantom{m}} & 1-bit & $9.49 \times 10^{-4}$ & $5.40 \times 10^{-4}$ & $4.05 \times 10^{-4}$ \\ \cline{2-5} & 2-bit & $9.49 \times 10^{-4}$ & $5.40 \times 10^{-4}$ & $4.05 \times 10^{-4}$ \\ \cline{2-5} & 3-bit & $9.47 \times 10^{-4}$ & $5.39 \times 10^{-4}$ & $4.04 \times 10^{-4}$ \\ \cline{2-5} & 4-bit & $9.11 \times 10^{-4}$ & $5.22 \times 10^{-4}$ & $3.87 \times 10^{-4}$ \\ \cline{2-5} & 5-bit & $8.80 \times 10^{-4}$ & $5.06 \times 10^{-4}$ & $3.71 \times 10^{-4}$ \\ \cline{2-5} & 6-bit & $8.69 \times 10^{-4}$ & $5.00 \times 10^{-4}$ & $3.66 \times 10^{-4}$ \\ \cline{2-5} & unquantized & $3.78 \times 10^{-4}$ & $2.02 \times 10^{-4}$ & $1.75 \times 10^{-4}$ \\ \hline \hline \multirow{7}{*}{69m} & 1-bit & $9.79 \times 10^{-4}$ & $5.53 \times 10^{-4}$ & $3.91 \times 10^{-4}$ \\ \cline{2-5} & 2-bit & $9.79 \times 10^{-4}$ & $5.53 \times 10^{-4}$ & $3.91 \times
10^{-4}$ \\ \cline{2-5} & 3-bit & $9.75 \times 10^{-4}$ & $5.49 \times 10^{-4}$ & $3.89 \times 10^{-4}$ \\ \cline{2-5} & 4-bit & $9.36 \times 10^{-4}$ & $5.28 \times 10^{-4}$ & $3.73 \times 10^{-4}$ \\ \cline{2-5} & 5-bit & $9.04 \times 10^{-4}$ & $5.12 \times 10^{-4}$ & $3.58 \times 10^{-4}$ \\ \cline{2-5} & 6-bit & $8.93 \times 10^{-4}$ & $5.06 \times 10^{-4}$ & $3.54 \times 10^{-4}$ \\ \cline{2-5} & unquantized & $7.15 \times 10^{-4}$ & $3.09 \times 10^{-4}$ & $3.74\times 10^{-4}$ \\ \hline \end{tabular} \end{center} \end{table} \begin{table} \begin{center} \caption{The run time of {\tt EMSwGAMP} using different quantization bits for the $17$ measurements for the various systems}\label{tb:run_time_diff_bit_69bus} \begin{tabular}{|c|c|c|c|c|c|c|} \cline{1-3} \cline{5-7} \multirow{2}{*}{$N$} & \multirow{2}{*}{$\mathcal{B}$-bit} & \multirow{2}{*}{Time (s)} & & \multirow{2}{*}{$N$} & \multirow{2}{*}{$\mathcal{B}$-bit} & \multirow{2}{*}{Time (s)} \\ & & & & & & \\ \cline{1-3} \cline{5-7} \cline{1-3} \cline{5-7} \multirow{7}{*}{69\phantom{m}} & 1-bit & $1.22$ & &\multirow{7}{*}{69m} & 1-bit & $1.17$ \\ \cline{2-3} \cline{6-7} & 2-bit & $1.28$ & & & 2-bit & $1.23$ \\ \cline{2-3} \cline{6-7} & 3-bit & $1.31$ & & & 3-bit & $1.29$ \\ \cline{2-3} \cline{6-7} & 4-bit & $1.37$ & & & 4-bit & $1.36$ \\ \cline{2-3} \cline{6-7} & 5-bit & $1.56$ & & & 5-bit & $1.53$ \\ \cline{2-3} \cline{6-7} & 6-bit & $1.90$ & & & 6-bit & $1.87$ \\ \cline{2-3} \cline{6-7} & unquantized & $0.20$ & & & unquantized & $0.20$\\ \cline{1-3} \cline{5-7} \end{tabular} \end{center} \end{table} Table \ref{tb:34_42_q_six_bits} shows the average performance of two algorithms with $\mathcal{K}=34$ and $\mathcal{K}=42$ for the two test systems. Here, $\mathcal{K}$ denotes the number of 6-bit quantized measurements and the performance of {\tt EMSwGAMP} using only the unquantized measurements is also listed for convenient reference. 
Notably, the proposed {\tt EMSwGAMP} algorithm significantly outperforms LMMSE, whose performance again deteriorates to an unacceptable level. We also observe that increasing the number of $6$-bit quantized measurements from $\mathcal{K}=34$ to $\mathcal{K}=42$ results in only a slight performance degradation for {\tt EMSwGAMP}. Consequently, by using {\tt EMSwGAMP}, we can drastically reduce the amount of transmitted data \emph{without} compromising performance. The total number of measurements in the 69-bus test system is $76$, where $68$ current measurements and $8$ voltage measurements originate from the meters and PMUs, respectively. Therefore, if the measurements are quantized with $16$ bits for the conventional meters and PMUs, $16 \times 76 = 1{,}216$ bits must be transmitted. However, for the proposed algorithm with $\mathcal{K} = 34$ (i.e., 34 measurements quantized with 6 bits and 42 measurements quantized with 16 bits), only $34 \times 6 + 42 \times 16 = 876$ bits must be transmitted. In this case, the transmitted data can be reduced by $27.96\%$. Similarly, for the proposed algorithm with $\mathcal{K} = 42$ (i.e., 42 measurements quantized with 6 bits and 34 measurements quantized with 16 bits), the transmitted data can be reduced by $34.53\%$. In addition, we discuss the required transmission bandwidth of the proposed framework and the conventional system under the assumption that the meters update measurements every 1 s. As defined by IEEE 802.11n, when the data are modulated with quadrature phase-shift keying over a 20 MHz channel bandwidth, the data rate is $21.7$ Mbps. Therefore, we can approximate the spectral efficiency as $1.085$ bits/s/Hz. For the proposed algorithm with $\mathcal{K} = 34$ and $\mathcal{B} = 6$ (i.e., $876$ bits to be transmitted), the required transmission bandwidth is $808$ Hz.
However, the required transmission bandwidth for the conventional system (i.e., $1{,}216$ bits to be transmitted) is $1{,}121$ Hz. In this case, the proposed architecture can also reduce the transmission bandwidth by $27.83\%$. Notably, this study focuses only on reducing the amount of transmitted data. The few references that specifically address smart meter data transmission systems (e.g., \cite{DN-12-CM,2016-smartgrid-comm-meter}) are not considered in this study. Further studies can expand the scope of the present work to include such transmission mechanisms and thereby provide a more efficient transmission framework. \begin{table} \begin{center} \caption{Average estimation results obtained by {\tt EMSwGAMP} and LMMSE with different numbers of $6$-bit quantized measurements for the various systems}\label{tb:34_42_q_six_bits} \begin{tabular}{|C{0.35cm}|c|C{1.25cm}|C{1.45cm}|C{1.45cm}|C{1.45cm}|} \hline \multirow{2}{*}{$N$} &\multirow{2}{*}{$\mathcal{K}$} & \multirow{2}{*}{Algorithm} & \multirow{2}{*}{${\rm MSE}$} & \multirow{2}{*}{${\rm MSE}_{\rm magn}$} & \multirow{2}{*}{${\rm MSE}_{\rm phase}$} \\ & &&& & \\ \hline\hline \multirow{6}{*}{69\phantom{m}} & \multirow{3}{*}{34} & unquantized & $3.86 \times 10^{-4}$ & $2.11 \times 10^{-4}$ & $1.78 \times 10^{-4}$ \\ \cline{3-6} & & {\tt EMSwGAMP} & $1.00 \times 10^{-3}$ & $6.01 \times 10^{-4}$ & $4.34 \times 10^{-4}$ \\ \cline{3-6} & & LMMSE & $9.39 \times 10^{-1}$ & $9.34 \times 10^{-1}$ & $2.21 \times 10^{-1}$ \\ \cline{2-6} & \multirow{3}{*}{42} & unquantized & $3.77 \times 10^{-4}$ & $2.06 \times 10^{-4}$ & $1.72 \times 10^{-4}$ \\ \cline{3-6} & & {\tt EMSwGAMP} & $7.89 \times 10^{-4}$ & $4.52 \times 10^{-4}$ & $3.32 \times 10^{-4}$ \\ \cline{3-6} & & LMMSE & $9.39 \times 10^{-1}$ & $9.32 \times 10^{-1}$ & $2.88 \times 10^{-1}$ \\ \hline \multirow{6}{*}{69m} & \multirow{3}{*}{34} & unquantized & $7.78 \times 10^{-4}$ & $3.89 \times 10^{-4}$ & $3.56 \times 10^{-4}$ \\ \cline{3-6} & & {\tt EMSwGAMP} &
$1.10 \times 10^{-3}$ & $6.28 \times 10^{-4}$ & $4.31 \times 10^{-4}$ \\ \cline{3-6} & & LMMSE & $1.02$ & $1.02$ & $2.10 \times 10^{-1}$ \\ \cline{2-6} & \multirow{3}{*}{42} & unquantized & $6.77 \times 10^{-4}$ & $2.86 \times 10^{-4}$ & $3.59 \times 10^{-4}$ \\ \cline{3-6} & & {\tt EMSwGAMP} & $1.20 \times 10^{-3}$ & $6.56 \times 10^{-4}$ & $4.53 \times 10^{-4}$ \\ \cline{3-6} & & LMMSE & $1.02$ & $1.02$ & $2.69 \times 10^{-1}$ \\ \hline \end{tabular} \end{center} \vspace{-.15in} \end{table} Finally, we evaluate the performance of the proposed {\tt EMSwGAMP} algorithm for three-phase state estimation. The above simulation results show that almost half of the measurements can be represented with low precision. Hence, in this test system, $\mathcal{K}=51$ measurements are quantized with $6$ bits. Table \ref{tb:48_q_six_bits} shows the average performance of the two algorithms with $\mathcal{K}=51$ for the IEEE 37-bus three-phase system. The performance of {\tt EMSwGAMP} using only unquantized measurements is also included for ease of reference. Table \ref{tb:48_q_six_bits} shows that {\tt EMSwGAMP} still outperforms LMMSE, with only a slight degradation compared to the unquantized result. The proposed {\tt EMSwGAMP} algorithm reduces the transmitted data by $25.91\%$ compared to the high-precision measurement data. Hence, the proposed algorithm can be applied not only to single-phase but also to three-phase systems. Most importantly, it also decreases the amount of data that must be transmitted and processed.
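The bit accounting behind the data-reduction and bandwidth figures above can be sketched as follows (the helper names are ours; the spectral-efficiency value is the IEEE 802.11n QPSK figure quoted above):

```python
def bit_budget(k_coarse, b_coarse=6, n_total=76, b_full=16):
    """Bits per reporting interval when k_coarse of n_total measurements
    use b_coarse-bit quantization and the rest keep b_full bits."""
    bits = k_coarse * b_coarse + (n_total - k_coarse) * b_full
    reduction = 1.0 - bits / (n_total * b_full)
    return bits, reduction

# QPSK over a 20 MHz channel at 21.7 Mbps -> ~1.085 bit/s/Hz
SPECTRAL_EFFICIENCY = 21.7e6 / 20e6

bits_34, red_34 = bit_budget(34)               # 876 bits, ~28% reduction
bandwidth_34 = bits_34 / SPECTRAL_EFFICIENCY   # ~808 Hz at 1-s updates
```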
\begin{table} \begin{center} \caption{Average estimation results obtained through {\tt EMSwGAMP} and LMMSE with $6$-bit quantized measurements for the three-phase systems}\label{tb:48_q_six_bits} \begin{tabular}{|C{0.35cm}|c|C{1.25cm}|C{1.45cm}|C{1.45cm}|C{1.45cm}|} \hline \multirow{2}{*}{$N$} &\multirow{2}{*}{$\mathcal{K}$} & \multirow{2}{*}{Algorithm} & \multirow{2}{*}{${\rm MSE}$} & \multirow{2}{*}{${\rm MSE}_{\rm magn}$} & \multirow{2}{*}{${\rm MSE}_{\rm phase}$} \\ & &&& & \\ \hline\hline \multirow{3}{*}{37} & \multirow{3}{*}{51} & unquantized & $2.02 \times 10^{-4}$ & $1.17 \times 10^{-4}$ & $8.79 \times 10^{-5}$ \\ \cline{3-6} & & {\tt EMSwGAMP} & $6.64 \times 10^{-4}$ & $4.42 \times 10^{-4}$ & $2.30 \times 10^{-4}$ \\ \cline{3-6} & & LMMSE & $0.92$ & $0.92$ & $1.49 \times 10^{-4}$ \\ \cline{1-6} \end{tabular} \end{center} \end{table} \section{Conclusion}\label{sec:05} We first proposed a data reduction technique based on coarse quantization of a subset of the uncensored measurements, and then developed a new framework based on Bayesian belief inference that incorporates measurements of different quantization-induced qualities, thereby obtaining an optimal state estimate while reducing the amount of transmitted data. The simulation results indicate that the proposed algorithm performs significantly better than other linear estimators, even in the scenario where more than half of the measurements are quantized to $6$ bits. This finding verifies the effectiveness of the proposed scheme. \appendices \section{Selection of measurements for quantization} For ease of explanation, the 69-bus test system is shown in Fig.~\ref{fig:69bus}, where the subset of buses $\mathcal{M}=\{1,2,\ldots,27\}$, called the \emph{main} chain of the system, plays an important role in estimating the system states. Therefore, the measurements from the \emph{side} chains of the system are selected for quantization.
In addition, the numbers of quantized measurements considered in this study are $\mathcal{K} = 2$, $\mathcal{K}=4$, $\mathcal{K} = 17$, $\mathcal{K} = 19$, $\mathcal{K} = 23$, $\mathcal{K} = 27$, $\mathcal{K}=34$, and $\mathcal{K}=42$. More specifically, the current measurements selected for quantization are those from the following subsets of buses: \begin{itemize} \item $\mathcal{K} = 2$: $\{12,68,69\}$; \item $\mathcal{K} = 4$: $\{12,68,69\}$ and $\{11,66,67\}$; \item $\mathcal{K} = 17$: $\{12,68,69\}$, $\{11,66,67\}$ and $\{9,53,54,\ldots,65\}$; \item $\mathcal{K} = 19$: $\{12,68,69\}$, $\{11,66,67\}$, $\{8,51,52\}$ and $\{9,53,54,\ldots,65\}$; \item $\mathcal{K} = 23$: $\{12,68,69\}$, $\{11,66,67\}$, $\{8,51,52\}$, $\{4,47,48,49,50\}$ and $\{9,53,54,\ldots,65\}$; \item $\mathcal{K} = 27$: $\{12,68,69\}$, $\{11,66,67\}$, $\{8,51,52\}$, $\{3,28,29,\ldots,35\}$ and $\{9,53,54,\ldots,65\}$; \item $\mathcal{K} = 34$: $\{12,68,69\}$, $\{11,66,67\}$, $\{8,51,52\}$, $\{4,47,48,49,50\}$, $\{36,38,\ldots,46\}$ and $\{9,53,54,\ldots,65\}$; \item $\mathcal{K} = 42$: $\{12,68,69\}$, $\{11,66,67\}$, $\{8,51,52\}$, $\{4,47,48,49,50\}$, $\{3,28,29,\ldots,35\}$, $\{36,38,\ldots,46\}$ and $\{9,53,54,\ldots,65\}$. \end{itemize} \begin{figure} \begin{center} \resizebox{3.5in}{!}{% \includegraphics*{Access-2017-06130-fig4} }% \caption{A 69-bus test system.}\label{fig:69bus} \end{center} \end{figure} \bibliographystyle{IEEEtran}
\section{Introduction} A large fraction of early-type main sequence stars are associated with ultracompact H{\sc ii} regions (UCH{\sc ii}s, Wood \& Churchwell \cite{wood}). These are characterized by large electron densities $N_{\rm e} \ge 10^4$ $\mbox{cm}^{-3}$, sizes $< 0.1$~pc and temperatures $T_e \approx 10\,000$~K. The overpressure in these regions should lead to expansion and dissipation on time scales of a few thousand years. Considering the expected lifetimes of massive stars of several million years, however, the high abundance of observed UCH{\sc ii}s translates into UCH{\sc ii} mean lifetimes of several $10^5$ years. This contradiction can be resolved if: 1) the UCH{\sc ii}s are constrained by high pressure in their vicinity, 2) they are constrained by gravitationally infalling material (Reid et al. \cite{reid}), or 3) some process continuously ``feeds'' the UCH{\sc ii}s with matter. High pressures can certainly be expected in the highly turbulent molecular cloud cores, which are the birthplaces of young massive stars (De~Pree et al.~\cite{depree}, Garc\'{\i}a-Segura \&\ Franco~\cite{segura}, Xie et al. \cite{xie}). Still, it is not clear how turbulence in a cold clumpy medium can contain the warm ($T \sim 10^4$~K), high density ionized material for extended periods of time -- many of the technical details of this proposal need to be worked out. The photoevaporating disk model proposed by Hollenbach et al. (\cite{hollenbach93}) and Yorke \&\ Welz (\cite{yowe93}) offers an attractive alternative. A circumstellar disk around a luminous OB star is continuously photoionized by the central source. The existence of a powerful stellar wind can modify the quantitative details of this model, but the basic result remains the same: long-lived UCH{\sc ii}s are the necessary consequence of disks around hydrogen-ionizing sources. In a subsequent paper by Hollenbach et al.
(\cite{hollenbach94}) the quasi-steady state structure of disks a\-round ionizing sources with winds has been calculated \mbox{(semi-)} analytically, and in Yorke (\cite{yorke95}), Yorke \&\ Welz (\cite{yowe96}, hereafter Paper I), and Richling \&\ Yorke (\cite{riyo}, hereafter Paper II) the evolution of such circumstellar disks has been followed numerically under a variety of conditions. In Paper I it was stressed that the phenomenon of disks in the process of photoionization is not restricted to the (pre\-sum\-ably highly symmetrical) case of circumstellar disks a\-round OB stars. Disk formation is a common by-product of the star formation process. Because OB stars seldom form in isolation, disk-bearing close companions to a powerful source of ionizing {\sc uv} radiation and a stellar wind should be common. Strongly asymmetric UCH{\sc ii}s should result. Wood \&\ Churchwell~(\cite{wood}) observed 75 UCH{\sc ii}s at $\lambda=2$~cm and 6~cm with a spatial resolution of 0\mysec4 using the VLA telescope and classified them by their spatial morphological structure into several types: \begin{itemize} \item cometary shaped (20\%), \item core-halo (16\%), \item shell type (4\%), \item irregular or multiply peaked (17\%) and \item spherical or unresolved (43\%). \end{itemize} In order to interpret these observations in light of the photoionized disk models, further work must be done in refining the hydrodynamical models for the asymmetric morphological configurations expected when a disk is ionized by external sources. Diagnostic radiation transfer calculations of these numerical models are necessary for a quantitative comparison. The goal of the present investigation is to determine the spectral characteristics and to calculate the expected isophote maps of the {\em symmetrical} UCH{\sc ii}s which result from circumstellar disks around OB stars. We are restricted by the limited number of star/disk configurations which have been considered to date.
We discuss in detail the physical (Sect.~\ref{physmod}) and numerical (Sect.~\ref{nummod}) models of radiation transfer which we employed. The results for selected hydrodynamical models from Papers I and II are discussed in Sect.~\ref{results} and compared to observations of specific sources in Sect. \ref{comparison}. We summarize our main conclusions in Sect.~\ref{sect:conclusions}. \section{The physical model} \label{physmod} In Papers I and II of this series the time dependent photo\-e\-vap\-o\-ration of a 1.6~M$_{\odot}$ circumstellar disk around a 8.4~M$_{\odot}$ star was calculated under a variety of physical conditions. The ionizing flux of the central source and its ``hardness'' as well as the stellar wind parameters (mass loss rate and terminal velocity) were varied. States of these models at selected evolutionary times are the basis for our diagnostic radiation transfer calculations. \subsection{Continuum transport} To determine the continuum spectral energy distribution (SED) over a frequency range from the radio region up to the optical, we take into account three major radiation processes: thermal free-free radiation (i.e.\ bremsstrahlung of electrons moving in the potential of protons in the H\,{\sc ii}-region), thermal dust radiation and the radiation emitted from the photosphere of an embedded source. \subsubsection{Free-free radiation} For this process we adopt the approximation for the emission coefficient (Spitzer~\cite{spitzer}): \begin{equation} \label{eq:kapff} \epsilon_{\rm ff}=\frac{8}{3} \left( \frac{2 \pi}{3} \right)^{\frac{1}{2}} \frac{e^6}{m^2c^3} \left( \frac{m}{kT_{\rm e}} \right)^{\frac{1}{2}} g_{\rm ff} N_{\rm e} N_{\rm p} \exp \left(-\frac{h\nu}{kT_{\rm e}} \right) . \end{equation} Here, $N_{\rm e}$ and $N_{\rm p}$ are the particle densities of electrons and protons. All other symbols have their usual meanings. 
We approximate the Gaunt factor $g_{\rm ff}$ for a non-relativistic plasma by: \begin{equation} g_{\rm ff}=\mbox{max} \left\{ \frac{3^{1/2}}{\pi} \left( \ln \frac{\left(2kT_{\rm e}\right)^{3/2}}{\pi e^2 \nu m^{1/2}}-\frac{5 \gamma}{2}\right) ,1 \right\} , \end{equation} where $\gamma$ ($\approx 0.577$) is Euler's constant. Assuming the validity of Kirchhoff's Law $S_{\nu} = \epsilon_\nu / \kappa_\nu = B_{\nu}(T_{\rm e})$, the absorption coefficient for thermal free-free radiation can be written ($h\nu \ll kT_{\rm e}$): \begin{equation} \kappa_\nu^{\rm ff} = \frac{4 (2\pi)^{1/2}e^6N_{\rm e} N_{\rm p} g_{\rm ff}}{(3 m k)^{3/2} c T_{\rm e}^{3/2} \nu^2} . \end{equation} \subsubsection{Dust emission} We adopt the `dirty ice' dust model developed by Preibisch et al.~(\cite{preib}), which includes two refractory components: amorphous carbon grains (aC) and silicate grains as well as volatile ice coatings on the surface of the silicate grains at temperatures below 125 K (Core Mantle Particles, CMP's). The icy coatings contain 7\%\ of the available amorphous carbon and consist of water and ammonium with a volume ratio of 3:1. At temperatures above 125~K the silicate core and approximately 11 amorphous carbon particles are released into the dusty gas for each CMP. In Table~\ref{tab:dust} the sublimation temperature $T_{\rm sub}$, the mean radius $\bar{a}_{\rm d}$ and the number of grains per gram gas $n_{\rm d}$ are listed for the different species. 
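As a numerical sanity check, Eqs. (2) and (3) can be evaluated directly in cgs units; the following sketch (the constants, function names and test conditions are ours) reproduces the familiar $\nu^{-2}$ free-free opacity law:

```python
import numpy as np

# cgs constants
E_CHARGE = 4.80320425e-10   # electron charge [esu]
M_E = 9.1093837e-28         # electron mass [g]
K_B = 1.380649e-16          # Boltzmann constant [erg/K]
C_LIGHT = 2.99792458e10     # speed of light [cm/s]
EULER_GAMMA = 0.5772156649  # Euler's constant

def gaunt_ff(nu, t_e):
    """Non-relativistic free-free Gaunt factor (Eq. 2), floored at unity."""
    g = (np.sqrt(3.0) / np.pi) * (
        np.log((2.0 * K_B * t_e) ** 1.5
               / (np.pi * E_CHARGE**2 * nu * np.sqrt(M_E)))
        - 2.5 * EULER_GAMMA)
    return np.maximum(g, 1.0)

def kappa_ff(nu, t_e, n_e, n_p):
    """Free-free absorption coefficient (Eq. 3), valid for h nu << k T_e,
    in cm^-1."""
    return (4.0 * np.sqrt(2.0 * np.pi) * E_CHARGE**6 * n_e * n_p
            * gaunt_ff(nu, t_e)
            / ((3.0 * M_E * K_B) ** 1.5 * C_LIGHT * t_e**1.5 * nu**2))
```

For typical UCH{\sc ii} conditions ($T_{\rm e}=10^4$~K, $N_{\rm e}=N_{\rm p}=10^4~{\rm cm}^{-3}$) this yields $\kappa_\nu^{\rm ff}\approx 3.6\times10^{-19}~{\rm cm}^{-1}$ at $\lambda=6$~cm, i.e.\ an optical depth of order $0.1$ across $0.1$~pc.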
\begin{table}[tb] \caption[Staubparameter]{Parameters for the grain species used in the dust model of Preibisch et al.~(\cite{preib}).} \begin{flushleft} \begin{tabular}{llll} \hline\noalign{\smallskip} Grain Species & $T_{\rm sub}/[\mbox{K}]$ & $\log \bar{a}_{\rm d}/[\mbox{cm}]$ & $\log n_{\rm d}/[\mbox{g}^{-1}]$ \\ \noalign{\smallskip} \hline\noalign{\smallskip} aC & 2000 & $-$6.024 & 14.387 \\ Silicate & 1500 & $-$5.281 & $-$ \\ CMP & 125 & $-$5.222 & 12.218 \\ \noalign{\smallskip} \hline \end{tabular} \label{tab:dust} \end{flushleft} \end{table} The absorption coefficient [${\rm cm}^{-1}$] for the individual dust components is given by: \begin{equation} \kappa_\nu^{\rm d} = n_{\rm d} \rho \pi \bar{a}_{\rm d}^2 Q^{\rm abs}_{\rm d,\nu} \; , \end{equation} where the mean absorption efficiency $Q^{\rm abs}_{\rm d,\nu}$ for grain type ``d'' has been determined using Mie theory for spherical grains of a given size distribution. Figure~\ref{fig:dustparms} displays the absorption efficiencies for the different dust components as a function of frequency. Each dust component's contribution to the source function due to thermal emission $S_{\nu}^{\rm d}$ is also calculated under the assumption that $S_{\nu}^{\rm d} = B_{\nu}(T_{\rm d})$. \begin{figure} \begin{center} \epsfig{file=figure1.ps,height=5.02cm} \end{center} \caption[]{Mean absorption efficiencies for the different dust components. Solid line: amorphous carbon, dotted line: silicate, dashed line: CMP's.} \label{fig:dustparms} \end{figure} \subsubsection{Net continuum absorption and emission} Both emission processes mentioned above occur simultaneous\-ly within the same volume. Thus the net absorption coefficient and source function are: \begin{equation} \kappa_\nu = \sum_{\rm d} \kappa^{\rm d}_\nu + \kappa^{\rm ff}_\nu \end{equation} \begin{equation} S_\nu = \frac{1}{\kappa_\nu} \left( \sum_{\rm d} \kappa^{\rm d}_\nu B_\nu(T_{\rm d}) + \kappa^{\rm ff}_\nu B_\nu(T_{\rm e}) \right) . 
\end{equation} \subsection{Forbidden lines} In order to calculate profiles of the forbidden lines for the elements oxygen and nitrogen ([O\,{\sc ii}] 3726, [O\,{\sc iii}] 5007 and [N\,{\sc ii}] 6584), we adopt the following procedure. First, the equilibrium ionization structure of these elements is calculated over the volume under consideration. Next, the occupation densities of me\-ta\-stable levels $N_{\rm i}$ due to collisional excitation by electrons are determined. We take into account Doppler shifts due to bulk gas motions and thermal Doppler broadening to calculate the profile function $\phi$: \begin{equation} \phi(\nu) = \frac{1}{\sqrt{\pi} \Delta \nu_{\rm D}} \exp \left[ -\left( \frac{\nu - \tilde{\nu}}{\Delta \nu_{\rm D}} \right)^2 \right] , \end{equation} with the thermal Doppler width: \begin{equation} \label{eq:doppler} \Delta \nu_{\rm D} = \frac{\tilde{\nu}}{c}\sqrt{\frac{2RT_{\rm e}}{\mu}} . \end{equation} Here $R$ is the gas constant, $\mu$ the atomic weight of the relevant ion, and $\tilde{\nu} = \nu_{\rm 0}(1+v_{\rm R}/c)$ is the transition frequency $\nu_{\rm 0}$ Doppler-shifted by the radial velocity $v_{\rm R}$ of the gas relative to the observer. The emission coefficient of the transition $k \rightarrow j$, which enters into the equation of radiative transfer, is then given by: \begin{equation} \label{eq:lineemis} \epsilon_{\rm L}(\nu) = \frac{1}{4 \pi} N_{\rm k} A_{\rm kj} h \nu_{\rm 0} \phi(\nu) = \tilde{\epsilon}_{\rm L} \cdot \phi(\nu) , \end{equation} where $A_{\rm kj}$ is the Einstein coefficient for spontaneous emission. Note that we have neglected radiative excitation and stimulated emission in this approximation.
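The profile function and its Doppler width are easy to transcribe; the following sketch (cgs units, helper names are ours) also makes the normalization $\int \phi \,{\rm d}\nu = 1$ easy to verify numerically.

```python
import math

R_GAS = 8.314462618e7  # gas constant [erg K^-1 mol^-1] in cgs
C_L = 2.99792458e10    # speed of light [cm/s]

def doppler_width(nu_tilde, T_e, mu):
    """Thermal Doppler width, Eq. (doppler); mu is the atomic weight [g/mol]."""
    return nu_tilde / C_L * math.sqrt(2.0 * R_GAS * T_e / mu)

def phi(nu, nu_0, v_r, T_e, mu):
    """Gaussian profile function centered on the Doppler-shifted transition
    frequency nu_tilde = nu_0 * (1 + v_r / c)."""
    nu_t = nu_0 * (1.0 + v_r / C_L)
    d_nu = doppler_width(nu_t, T_e, mu)
    return math.exp(-((nu - nu_t) / d_nu) ** 2) / (math.sqrt(math.pi) * d_nu)
```

A bulk radial velocity shifts the line center without changing its shape, while the thermal width shrinks as $\mu^{-1/2}$ for heavier ions.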
\subsubsection{Ionization equilibrium} The equations for ionization equilibrium for two neighboring ionization stages, ${\rm r}$ and ${\rm r}+1$, are: \begin{eqnarray} \label{eq:ionglg} N^{\rm r} \left[ \int_{\nu_{\rm I}}^\infty f_\nu \sigma_\nu^{\rm r} \mbox{d} \nu +N_{\rm e} q^{\rm r}+N_{\rm p} \delta^{\rm r} \right] = \nonumber \\ N^{{\rm r}+1} \left[ N_{\rm e} \left( \alpha_{\rm R}^{\rm r}+\alpha_{\rm D}^{\rm r} \right) + N_{{\rm H}^{\rm 0}} \delta'^{\rm r} \right] . \end{eqnarray} We solve these equations simultaneously for the $N^{\rm r}$ up to the ionization stage ${\rm r}=3$ for both oxygen and nitrogen. \paragraph{Radiative ionization.} The rate of radiative ionization is calculated from the flux of incident photons $f_\nu$ and the absorption cross section $\sigma_\nu^{\rm r}$ integrated over all ionizing frequencies. We use the radiation field of the central source and neglect scattering to determine $f_\nu$: \begin{equation} f_\nu = \frac{1}{h\nu} \frac{B_\nu(T_*) R_*^2 \pi}{4 \pi R^2} \exp (-\tau) . \end{equation} An analytical expression for the absorption cross section $\sigma_\nu^{\rm r}$ is given in Henry~(\cite{henry}). \paragraph{Collisional ionization.} This ionization process is important in hot plasmas, where the mean kinetic energy of the electrons is comparable to the ionization potentials of the ions. N\,{\sc i}, for example, has an ionization potential of 14.5~eV; the corresponding Boltzmann temperature is $\sim$~170\,000~K. The coefficient for collisional ionization $q^{\rm r}$ is approximated by the analytical expression in Shull \&\ van Steenberg~(\cite{shull}). \paragraph{Radiative recombination.} This is the inverse process to radiative ionization. For the recombination coefficient $\alpha_{\rm R}$ we use the formula given in Aldrovandi \&\ Pequignot~(\cite{aldro1}, \cite{aldro2}).
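The attenuated photon flux above is straightforward to evaluate; the sketch below uses the full Planck function and is purely illustrative (names are ours, and the stellar radius and frequency used in the check are representative values, not the paper's parameters).

```python
import math

H_P = 6.62607015e-27  # Planck constant [erg s]
K_B = 1.380649e-16    # Boltzmann constant [erg/K]
C_L = 2.99792458e10   # speed of light [cm/s]

def planck_B(nu, T):
    """Planck function B_nu(T) [erg s^-1 cm^-2 Hz^-1 sr^-1]."""
    x = H_P * nu / (K_B * T)
    return 2.0 * H_P * nu ** 3 / C_L ** 2 / math.expm1(x)

def photon_flux(nu, T_star, R_star, R, tau):
    """Ionizing photon flux f_nu of the attenuated central source,
    neglecting scattering, per the expression in the text."""
    return (planck_B(nu, T_star) * math.pi * R_star ** 2
            / (H_P * nu * 4.0 * math.pi * R ** 2) * math.exp(-tau))
```

The flux dilutes geometrically as $R^{-2}$ and is attenuated by $\exp(-\tau)$ along the ray from the star.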
\paragraph{Dielectronic recombination.} The probability for recombination is enhanced when the electron being captured has a kinetic energy equal to the energy necessary to excite a second electron in the shell of the capturing ion. The density of excited levels in the term scheme of the ions grows with energy. Thus, this process becomes increasingly important with increasing temperature. We use two analytical expressions for $\alpha_{\rm D}$: one for temperatures between 2000~K and 60\,000~K (Nussbaumer \&\ Storey~\cite{nuss}) and one for higher temperatures (Shull \&\ van Steenberg~\cite{shull}). \paragraph{Charge exchange.} \label{sect:chargeex} The exchange of electrons during encounters of atoms and ions, e.g.\ $N^{++}+H^{\rm 0} \rightarrow N^++H^+$, is also important. Arnaud \&\ Rothenflug~(\cite{arnaud}) give an expression for the coefficients $\delta'^{\rm r}$. Special care is necessary in the case of the reaction $O^+ + H^{\rm 0} \rightleftarrows O^{\rm 0}+H^+$. Due to the similarity of the ionization energies of hydrogen and oxygen ($\Delta E=0.19$~eV), the backward reaction is also very effective. At sufficiently high electron temperatures this leads to the establishment of an ionization ratio $ N_{O^{\rm 0}}/N_{O^+} \approx (9/8) N_{H^{\rm 0}}/N_{H^+} $, even in the absence of ionizing radiation. We explicitly include both reactions in Eq.~(\ref{eq:ionglg}) via the term $\delta^{\rm r}$. An expression for this coefficient can also be found in Arnaud \&\ Rothenflug~(\cite{arnaud}).
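With all five rate processes in hand, Eq. (ionglg) gives the ratio of neighboring stages directly, and normalizing the chain of ratios yields the stage fractions. The sketch below is ours; the final check feeds in a charge-exchange coefficient ratio of $8/9$, which is our reading of the $9/8$ abundance ratio quoted above, as an illustrative input.

```python
def stage_ratio(Gamma_rad, N_e, q_coll, alpha_R, alpha_D,
                N_p=0.0, delta=0.0, N_H0=0.0, delta_prime=0.0):
    """N^{r+1}/N^r from Eq. (ionglg): the total ionization rate out of
    stage r over the total recombination rate out of stage r+1."""
    ionization = Gamma_rad + N_e * q_coll + N_p * delta
    recombination = N_e * (alpha_R + alpha_D) + N_H0 * delta_prime
    return ionization / recombination

def stage_fractions(ratios):
    """Normalized fractions of stages 0..n from successive ratios N^{r+1}/N^r."""
    pops = [1.0]
    for r in ratios:
        pops.append(pops[-1] * r)
    tot = sum(pops)
    return [p / tot for p in pops]
```

When photoionization, collisional ionization and the radiative/dielectronic recombination terms all vanish, only the two charge-exchange terms remain and the stage ratio reduces to the pure charge-exchange equilibrium.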
\subsubsection{Collisional excitation of metastable states} Neglecting the effects of radiative excitation and stimulated emission, we solve the equations of excitation equilibrium for the population densities $N_{\rm k}$ (sums over all values ``j'' for which the conditions under the summation signs are fulfilled): \begin{eqnarray} N_{\rm k} \left[ N_{\rm e} \sum_{E_{\rm j} \neq E_{\rm k}} q_{\rm kj}+ \sum_{E_{\rm j}<E_{\rm k}} A_{\rm kj} \right] & = & \nonumber \\ N_{\rm e} \sum_{E_{\rm j} \neq E_{\rm k}} N_{\rm j} q_{\rm jk} & + & \sum_{E_{\rm j}>E_{\rm k}} N_{\rm j} A_{\rm jk} , \end{eqnarray} together with the closure condition $ \sum N_{\rm j} = N_{\rm tot}$, where $N_{\rm tot}$ is the total number density of the ion under consideration. We use the formulae for the excitation and de-excitation coefficients given in e.g.\ Osterbrock~(\cite{oster}): \begin{equation} q_{12}=8.63 \cdot 10^{-6} \, \frac{\Omega_{12}}{\omega_1} \, T_{\rm e}^{-1/2} \exp \left(- \frac{\Delta E_{12}}{k T_{\rm e}} \right) \end{equation} and \begin{equation} q_{21}=8.63 \cdot 10^{-6} \, \frac{\Omega_{12}}{\omega_2} \, T_{\rm e}^{-1/2} , \end{equation} where $\Omega_{12}$ denotes the collision strength for the transition $1 \rightarrow 2$, $\omega_1$ and $\omega_2$ the statistical weights of both states involved and $\Delta E_{12}$ the energy difference between them. For the $\Omega_{12}$ we use the tables given in Osterbrock~(\cite{oster}). \subsection{Balmer lines} Our neglect of line absorption of Balmer photons by hydrogen is justified as long as the density of Ly$_\alpha$ photons is sufficiently low to ensure that the hydrogen 2p state is not significantly populated. This is equivalent to the assumption that Ly$_\alpha$ photons generated in the nebula by recombination either are quickly destroyed, e.g.\ by dust absorption or by hydrogen Ly$_\alpha$ absorption followed by 2-photon emission, or are able to escape sufficiently rapidly, e.g.\ by a random walk in frequency (Osterbrock~\cite{oster2}).
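Returning briefly to the excitation equilibrium of the previous subsection: for a two-level ion it collapses to a closed-form population ratio, which makes the roles of the high- and low-density limits explicit. The sketch below uses Osterbrock's coefficients; variable names and the Boltzmann-limit check are ours.

```python
import math

K_B = 1.380649e-16  # Boltzmann constant [erg/K]

def q_exc(T_e, Omega, w_lower, dE):
    """Collisional excitation coefficient q_12 (dE in erg)."""
    return 8.63e-6 * Omega / w_lower / math.sqrt(T_e) * math.exp(-dE / (K_B * T_e))

def q_deexc(T_e, Omega, w_upper):
    """Collisional de-excitation coefficient q_21."""
    return 8.63e-6 * Omega / w_upper / math.sqrt(T_e)

def two_level_ratio(N_e, T_e, Omega, w1, w2, dE, A21):
    """N_2/N_1: collisional excitation balanced against collisional
    de-excitation plus spontaneous decay A_21."""
    return (N_e * q_exc(T_e, Omega, w1, dE)
            / (N_e * q_deexc(T_e, Omega, w2) + A21))
```

As $N_{\rm e} \rightarrow \infty$ the ratio tends to the Boltzmann value $(\omega_2/\omega_1)\exp(-\Delta E/kT_{\rm e})$; as $N_{\rm e} \rightarrow 0$ every excitation is followed by a radiative decay, which is the regime in which the forbidden lines form.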
The emission coefficient of the Balmer lines is given by: \begin{equation} \tilde{\epsilon}_{\rm L}({\rm H}_{\rm i}) = \frac{1}{4 \pi} \alpha_{{\rm H}_{\rm i}}^{\rm eff} \cdot N_{\rm p} N_{\rm e} h \nu_{{\rm H}_{\rm i}} . \end{equation} The effective recombination coefficients $\alpha_{{\rm H}_{\rm i}}^{\rm eff}$ used in this work were adopted from Hummer \&\ Storey~(\cite{hummer}). \subsection{Radiation from the central star} As argued in Paper I, the resulting {\sc uv} spectrum of a star accreting material via an accretion disk is very uncertain. For simplicity we have assumed that the photospheric emission of the central source (star + transition zone) can be approximated by a black body of given temperature $T_*$ in the frequency range of interest ($\lambda < 100$~nm). $T_*$ determines the ``hardness'' of the ionizing photons, thus affecting both the nebular temperature and the ionization fractions of oxygen and nitrogen. We use the same values for $T_*$ as in Papers I and II for the hydrodynamic models. Nevertheless, the successful spectral classification of the ionizing star in the UCH{\sc ii} region G29.96-0.02 by Watson \&\ Hanson (\cite{watson}) gives rise to the hope that more information on the spectral properties of young, still accreting massive stars will be available in the future. \section{The Numerical Model} \label{nummod} \subsection{Structure of the underlying models} \label{intmod} \begin{figure} \epsfig{file=figure2.ps,width=8.6cm} \caption[]{Density, velocity and ionization structure of models A and C. Gray scale and black contour lines display the density structure. These contour lines vary from $\log\rho=-13.0$ to $\log\rho=-19.5$ in increments of $\Delta\log\rho=0.5$. The white contour lines mark the position of the ionization front and the arrows show the velocity field.
The normalization is given at the upper right corner.} \label{fig:acmodels} \end{figure} \begin{figure} \epsfig{file=figure3.ps,width=8.8cm,clip=1} \caption[]{Density, velocity and ionization structure of models G2, G3 and G4. Symbols and lines have the same meaning as in Fig.~\ref{fig:acmodels} except the black density contour lines, which are drawn down to $\log\rho=-21.5$.} \label{fig:gmodels} \end{figure} The underlying numerical models were calculated on five multiply nested grids, each with $62 \times 62$ grid cells (see Yorke \&\ Kaisig~\cite{yokai}, Paper I, and Paper II). The spatial resolution of the finest grid was $\Delta R = \Delta Z \approx 2 \times 10^{13}$~cm ($R$ is the distance to the symmetry axis, $Z$ to the equatorial plane). Axial symmetry and mirror symmetry with respect to the equatorial plane were assumed for the models. The simulations were performed within a volume $(R_{\rm max}^2 + Z_{\rm max}^2)^{1/2} \le 10^{16}$~cm until a quasi-steady state was reached. For the diagnostic radiation transfer calculations discussed here we use the final states of five simulations described in Paper II. Some of the relevant parameters of these simulations are given in Table~\ref{tab:models}. Figure~\ref{fig:acmodels} and Fig.~\ref{fig:gmodels} display the density and ionization structure as well as the velocity field of the selected models. Models A and C are the results of simulations with the same moderate stellar wind and the same radiation source. In the simulation leading to model A, however, the diffuse {\sc uv} radiation field originating from scattering on dust grains was completely neglected. For that reason model C attains a higher photoevaporation rate $\dot{M}_{\rm ph}$. In Fig.~\ref{fig:acmodels} this is recognizable by the greater overall density in the ionized regions and by the higher velocity in the ``shadow'' regions of the disk in the case of model C.
In order to investigate the variation of spectral characteristics with the stellar wind velocity, we chose the models with the greatest wind velocities, G2, G3 and G4. Figure~\ref{fig:gmodels} shows the increasing opening angle of the cone of freely expanding wind with increasing wind velocity. \subsection{Strategy of solution} We use the model data to calculate the ionization structure and the level populations. From the level populations we determine the emissivities of each line transition and the continuum emission at each point within the volume of the hydrodynamic mo\-del. For each viewing angle $\Theta$ considered, we solve the time-independent equation of radiation transfer in a non-relativistic moving medium along a grid of lines of sight (LOS) through the domain, neglecting the effects of scattering: \begin{equation} \label{eq:rte} \frac{{\rm d}I_\nu}{{\rm d}\tau_\nu}=-I_{\nu}+S_{\nu} , \end{equation} where the optical depth is defined as $\tau_\nu=\int \kappa_\nu {\rm d}s$. Integrations were performed for a given set of frequencies, whereby the effects of Doppler shifts for the line emissivities were taken into account. The resulting intensities are used to determine SEDs, intensity maps and line profiles. Spectra are obtained from the spatial intensity distributions by integration, taking into account that each LOS has an associated ``area''. Depending on $\Theta$ the symmetry of the configurations could be utilized to minimize the computational effort (see Fig.~\ref{fig:parcels}). For the pole-on view ($\Theta = 0^\circ$), for example, only a one-dimensional LOS array need be considered. For the edge-on view ($\Theta = 90^\circ$) lines of sight either through a single quadrant (continuum transfer) or through two quadrants (line transfer) are necessary. The resolution of the central regions is enhanced by overlaying a finer LOS grid in accordance with the multiple nested grid strategy used in the hydrodynamic calculations.
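The per-LOS marching described above can be sketched as a simple loop. The sketch below holds the source function constant within each cell, a cruder sub-grid model than the linear one actually used in the next subsection; names are ours.

```python
import math

def march_los(I_start, kappa_cells, S_cells, ds):
    """Integrate dI/dtau = -I + S along one line of sight, cell by cell,
    holding S and kappa constant within each cell of path length ds."""
    I = I_start
    for kap, S in zip(kappa_cells, S_cells):
        dtau = kap * ds
        att = math.exp(-dtau)
        # Formal solution over one cell: attenuate, then add local emission
        I = I * att + S * (1.0 - att)
    return I
```

The intensity saturates at the local source function once the path becomes optically thick, and reduces to the starting intensity for a transparent path.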
Each point in Fig.~\ref{fig:parcels} corresponds to an LOS trajecto\-ry through the model. Mapping such a trajectory onto the $(R,Z)$ model grid yields hyperbolic curves as displayed in Fig.~\ref{fig:path}. Beginning with a starting intensity ($I_\nu(-\infty)$), the solution of Eq.~(\ref{eq:rte}) is obtained by subdividing the LOS into finite intervals and analytically integrating over each interval assuming a sub-grid model (see below). \begin{figure*} \begin{center} \epsfig{file=figure4.ps,height=6.0cm} \end{center} \caption[]{Choice of lines of sight (LOS) and their associated areas for different viewing angles $\Theta$. Filled dots indicate the LOS used for the continuum calculations. Empty dots refer to the additional lines of sight necessary for the line profile calculations.} \label{fig:parcels} \end{figure*} \begin{figure} \begin{center} \epsfig{file=figure5.ps,height=7.2cm} \end{center} \caption{Projection of a typical LOS trajectory (curved dashed line) onto the model data grid (solid lines). Temperature, density, degree of ionization and velocity are defined at cell centers. The small circles divide the LOS into subintervals; the source function $S_\nu$ is evaluated at the location of the circles, chosen to lie on the intersections of the LOS with lines connecting the grid cell centers.} \label{fig:path} \end{figure} \subsection{Continuum radiation transfer} If no discontinuities are present within the subinterval under consideration, we assume $S_\nu$ varies linearly with $\tau$, i.e.\ \begin{equation} S_\nu(\tau)=S_\nu^i+(S_\nu^{i+1}-S_\nu^i) \frac{\tau}{\Delta \tau} , \end{equation} where $i$ and $i+1$ denote the starting and end points of the interval, respectively, and $\Delta \tau$ is a mean optical depth over the interval: \begin{equation} \Delta \tau = \frac{\kappa^i + \kappa^{i+1}}{2} \Delta s .
\end{equation} With this formulation the solution of Eq.~(\ref{eq:rte}) over the entire interval is given by (see Yorke~\cite{yorke1}): \begin{eqnarray} \label{eq:step} I_{i+1} = I_{i} \exp ( - \Delta \tau) & + & S_{i} \left[ \frac{ 1-\exp (-\Delta \tau)}{\Delta \tau} -\exp (-\Delta \tau) \right] \nonumber \\ & + & S_{i+1} \left[ 1-\frac{1-{\rm exp}(-\Delta \tau)}{\Delta \tau} \right] . \end{eqnarray} For the cases considered here, we choose $I_0 = 0$ as the starting LOS intensity. For ``proplyd''-type models (considered in a subsequent paper of this series) a non-negligible background intensity should be specified. \subsection{Radiation transfer in emission lines} For the transitions considered here the radiation field can be considered ``diffuse'' ($I_\nu \ll B_\nu$) and the contribution of spontaneous emission dominates over line absorption and stimulated emission processes. After separating the source function $S_\nu = S_{\rm C} + S_{\rm L}$ and the intensity $I_\nu = I_{\rm C} + I_{\rm L}$ of Eq.~(\ref{eq:rte}) into the contributions of the continuum and the line, we obtain \begin{equation} \label{eq:linint} I_{\rm L} = \frac{\tilde{S}_{\rm L} \exp (-\Delta \tau)}{ \sqrt{\pi} \Delta \nu_{\rm th}} \int_0^{\Delta \tau}\exp\left[\tau-\left( \frac{\nu - \tilde{\nu}(\tau)}{\Delta \nu_{\rm th}} \right)^2 \right] {\rm d} \tau , \end{equation} where ${\rm d}\tau = \kappa_{\rm C} \, {\rm d}s$ and $S_{\rm L}=\epsilon_{\rm L}/ \kappa_{\rm C}$. Here $\tilde{\nu}(\tau)$ is the Doppler-shifted frequency of the transition, $\Delta \nu_{\rm th}$ the Doppler width and $\tilde{S}_{\rm L}$ the net source function integrated over the line.
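Returning briefly to the continuum step: Eq. (step) contains a removable $0/0$ in $(1-\exp(-\Delta\tau))/\Delta\tau$ at small $\Delta\tau$, so a direct transcription needs a series fallback. The threshold below is our own choice, not the paper's.

```python
import math

def step_linear_s(I_i, S_i, S_ip1, dtau):
    """One subinterval of Eq. (step): source function linear in tau.
    For very small dtau the expression is replaced by its Taylor limit
    to avoid the 0/0 in (1 - exp(-dtau)) / dtau."""
    if dtau < 1e-8:
        return I_i * (1.0 - dtau) + 0.5 * (S_i + S_ip1) * dtau
    ex = math.exp(-dtau)
    w = (1.0 - ex) / dtau
    return I_i * ex + S_i * (w - ex) + S_ip1 * (1.0 - w)
```

For $\Delta\tau \rightarrow \infty$ the result tends to $S_{i+1}$, the source function at the far end of the subinterval, as it should for an optically thick cell.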
Assuming that $\tilde{\nu}$ is linear in $\tau$ over the whole interval yields the analytical solution to Eq.~(\ref{eq:linint}): \begin{equation} \label{eq:linsol} I_{\rm L} = \frac{\tilde{S}_{\rm L} \Delta \tau}{2 \Delta \tilde{\nu}} \exp \left( - \frac{( \tilde{\nu}_2 - \nu) \Delta \tau}{\Delta \tilde{\nu}} \right) \cdot \left[ \mbox{erf} (Y_2) - \mbox{erf} (Y_1) \right] , \end{equation} where ${\rm erf}(y)=2/\sqrt{\pi} \int_0^y \exp(-t^2)\, {\rm d}t$ is the error function and $Y_i=(\tilde{\nu}_i-\nu)/\Delta \nu_{\rm th}$ is a dimensionless frequency shift. The net source function $\tilde{S}_{\rm L}$ is calculated according to the algorithm suggested by Yorke (\cite{yorke1}): \begin{eqnarray} \tilde{S}_{\rm L} & = & \frac{ \mbox{erf}(Y_{\rm M}) - \mbox{erf}(Y_{\rm 1})}{\mbox{erf} (Y_2) - \mbox{erf} (Y_1)} \tilde{S}_1 + \frac{ \mbox{erf} (Y_2) - \mbox{erf}(Y_{\rm M})}{\mbox{erf}(Y_2) - \mbox{erf}(Y_1)} \tilde{S}_2, \end{eqnarray} with $Y_{\rm M} = (Y_1 + Y_2) / 2$. The line source functions $\tilde{S}_1$ and $\tilde{S}_2$ are calculated from the total line emission coefficient $\tilde{\epsilon}_{\rm L}$ as defined in Eq.~(\ref{eq:lineemis}) at the boundaries of the evaluated interval and from the continuum absorption coefficient: $\tilde{S}_{\rm i}= \tilde{\epsilon}_{\rm L,i}/{\kappa_{\rm C}}$. If $\Delta \tilde{\nu} \ll \Delta \nu_{\rm th}$, i.e.\ there is negligible Doppler shift within the subinterval, the solution of Eq.~(\ref{eq:linint}) with $\tilde{\nu}_1=\tilde{\nu}_2=\tilde{\nu}$ is used: \begin{equation} \label{eq:narrowl} I_{\rm L} = \frac{S_{\rm L}}{\sqrt{\pi}\Delta \nu_{\rm th}} \exp \left[ - \left( \frac{\nu-\tilde{\nu}}{\Delta \nu_{\rm th}} \right)^2 \right] \left(1-\exp(-\Delta \tau) \right) . \end{equation} \subsection{Treatment of ionization fronts} The numerical models considered contain unresolved ionization fronts due to the coarseness of the hydrodynamic grid.
At these positions jumps occur in the physical parameters and the solutions given by Eq.~(\ref{eq:step}) and Eq.~(\ref{eq:linsol}/\ref{eq:narrowl}) are poor approximations. The exact locations of the fronts within a grid cell are unknown; we assume they lie at the center of the corresponding interval. Our criterion for the presence of an ionization front is a change in the degree of ionization $\Delta x > 0.1$ between two evaluation points. For the continuum calculations Eq.~(\ref{eq:step}) is applied to each half interval with $\Delta \tau = \kappa_i \, \Delta s /2$. For the first half $S = S_i$ is held constant, and for the second half $S = S_{i+1}$. For the line calculations Eq.~(\ref{eq:narrowl}) is used with $\tilde{\nu}=\nu_1$ ($\nu_2$) and $S_{\rm L}=S_1$ ($S_2$) for the first (second) half interval. \subsection{Treatment of the central radiation source} The central source is modeled by a black body radiator of temperature $T_*$ and radius $R_*$. The integration along the line of sight through the center is started at the position of the source with the initial intensity \begin{equation} I_0 = B_\nu(T_*) \frac{R_*^2 \pi}{A} , \end{equation} where $A$ is the area associated with the central LOS. \section{Results} \label{results} With the code described above we determined SEDs, continuum isophotal maps and line profiles for the forbidden lines [N{\sc ii}] 6584, [O{\sc ii}]~3726 and [O{\sc iii}] 5007 as well as the H$\alpha$-line for the models introduced in Sect.~\ref{intmod}. Their relevant physical parameters are listed in Table~\ref{tab:models}. \subsection{Continuum emission} \subsubsection{Spectral Energy Distributions} Figure~\ref{fig:modg2cont} shows the SEDs of model G2 presented in Paper~II for different viewing angles $\Theta$.
The spectra can be divided into three regimes dominated by different physical processes: \begin{enumerate} \item In the frequency range from $10^8$ to $10^{11}$ Hz the SED is dominated by the thermal free-free radiation in the ionized region around the dust torus. \item The {\sc ir}-excess from $10^{11}$ to $10^{14}$ Hz is due to the optically thick dusty torus itself, which has a mean surface temperature of about 250~K. \item Beyond $10^{14}$ Hz the SED depends strongly on the viewing angle: if the star is obscured by the dusty torus, the free-free radiation of the H{\sc ii}-region again dominates the spectrum; otherwise the stellar atmosphere itself is seen. Due to the uncertainties in the stellar spectra and the neglect of scattering by dust in Eq.~(\ref{eq:rte}), which becomes increasingly important with increasing frequency, a discussion of the SED beyond $10^{14}$~Hz and comparison with observations are not meaningful. \end{enumerate} \begin{table*}[tb] \caption{Scattering coefficient $\kappa^{\rm scat}_{\rm dust} \rho^{-1}$ as well as parameters for the stellar wind (mass loss rate $\dot{M}_{\rm wind}$ and velocity $v_{\rm wind}$) and the ionizing source (stellar photon rate $S_{\rm star}$ and temperature $T_{\rm eff}$) used in the calculations.
The evaporation time scale $t_{\rm evap}$ is calculated from $t_{\rm evap}=M_{\rm disk}/\dot{M}_{\rm ph}$ with $M_{\rm disk} = 1.67\,M_\odot$.} \begin{center} \begin{tabular}{cccccccc} \hline\noalign{\smallskip} model & $\kappa^{\rm scat}_{\rm dust}/\rho$ & $\dot{M}_{\rm wind}$ & $v_{\rm wind}$ & $\log_{10} S_{\rm star}$ & $T_{\rm eff}$ & $\dot{M}_{\rm ph}$ & $t_{\rm evap}$ \\ & $\mbox{cm}^2\mbox{g}^{-1}$ & $10^{-8}M_\odot\mbox{yr}^{-1}$ & $\mbox{km\,s}^{-1}$ & $\mbox{s}^{-1}$ & $\mbox{K}$ & $10^{-6}M_\odot\mbox{yr}^{-1}$ & $10^6\mbox{yr}$ \\ \noalign{\smallskip} \hline\noalign{\smallskip} A & 0 & 2 & 50 & 46.89 & 30\,000 & 0.565 & 2.96 \\ C & 200 & 2 & 50 & 46.89 & 30\,000 & 1.35 & 1.24 \\ \hline\noalign{\smallskip} G2 & 200 & 2 & 400 & 46.89 & 30\,000 & 1.65 & 1.01 \\ G3 & 200 & 2 & 600 & 46.89 & 30\,000 & 1.67 & 1.00 \\ G4 & 200 & 2 & 1000 & 46.89 & 30\,000 & 1.64 & 1.02 \\ \noalign{\smallskip} \hline \end{tabular} \label{tab:models} \end{center} \end{table*} According to the analysis of Panagia \&\ Felli~(\cite{panagia}), who calculated analytically the free-free emission of an isothermal, spherical, ionized wind, the flux density should obey a $\nu^{0.6}$-law: \begin{eqnarray} F_{\nu}=6.46 \cdot 10^{-3}\; \mbox{Jy} \cdot \left[ \frac{\dot{M}}{10^{-5} \mbox{~M}_\odot / \mbox{yr}} \right] ^{4/3} \cdot \left[ \frac{\nu}{10\mbox{~GHz}} \right] ^{0.6} \cdot \nonumber \\ \cdot \left[ \frac{T}{10^4\mbox{~K}} \right] ^{0.1} \cdot \left[ \frac{d}{\mbox{kpc}} \right] ^{-2} \cdot \left[ \frac{v_{\rm wind}}{10^3\mbox{~km\,s$^{-1}$}} \right] ^{-4/3}. \label{eq:panagia} \end{eqnarray} Schmid-Burgk~(\cite{schmid}) showed that this holds, modified by a geo\-metry-dependent factor of order unity, even for non-symmetri\-cal point-source winds as long as $\rho$ drops as $R^{-2}$. Additionally, he postulated that the flux density should depend only weakly on the angle at which the object is viewed.
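Equation (panagia) is straightforward to evaluate. Plugging in the photoevaporation rate, temperature and outflow velocity used for the comparison in the text ($\dot{M} = 1.65\cdot10^{-6}$~M$_\odot$\,yr$^{-1}$, $T = 10^4$~K, $v = 20$~km\,s$^{-1}$) together with an assumed distance of 300~pc (the value adopted for the isophotal maps), the formula gives a flux density of order 1~Jy at 10~GHz. A sketch, with our own function and argument names:

```python
def panagia_felli_flux(mdot_msun_yr, nu_ghz, T_K, d_kpc, v_wind_kms):
    """Free-free flux density [Jy] of a spherical, isothermal ionized wind,
    Eq. (panagia); arguments in M_sun/yr, GHz, K, kpc and km/s."""
    return (6.46e-3
            * (mdot_msun_yr / 1e-5) ** (4.0 / 3.0)
            * (nu_ghz / 10.0) ** 0.6
            * (T_K / 1e4) ** 0.1
            * d_kpc ** -2.0
            * (v_wind_kms / 1e3) ** (-4.0 / 3.0))
```

The $\nu^{0.6}$ slope is recovered directly by evaluating the formula at two frequencies.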
In Fig.~\ref{fig:modg2cont} we include for comparison the flux distribution given by Eq.~(\ref{eq:panagia}) for the photoevaporation rate $\dot{M}_{\rm ph}=1.65\cdot 10^{-6}\;\mbox{M}_\odot\mbox{yr}^{-1}$ (see Paper II), $T=10\,000$~K and $v_{\rm wind}=20\;\mbox{km~s}^{-1}$ derived from the line profiles in Sect.~\ref{sect:lines}. The slope of the SED in regime 1 is slightly steeper in our results, because our volume of integration is finite; Panagia \&\ Felli~(\cite{panagia}) derived their analytical results by assuming an infinite integration volume. The flux is almost independent of $\Theta$, which is in good agreement with Schmid-Burgk~(\cite{schmid}). The deviations between $10^{10}$ and $10^{11}$ Hz are due to the break in the $R^{-2}$-law caused by the neutral dust torus. Figure~\ref{fig:modg2cont} also includes the SED of a blackbody at $T=240$~K. The flux $F_{\nu} \propto \nu^{2.2}$ in the far {\sc ir} between $5\cdot10^{11}$\,Hz and $3\cdot10^{12}$\,Hz is slightly steeper than the comparison blackbody spectrum, because the dust torus is not quite optically thick. With increasing $\Theta$ the maximum shifts towards lower frequencies, because the warm dust on the inside of the torus is concealed by the torus itself. We obtain qualitatively the same results for a number of models presented in Paper II. \begin{figure} \begin{center} \epsfig{file=figure6.ps,height=6.25cm} \end{center} \caption[]{Spectral Energy Distribution for model G2 and different $\Theta$.} \label{fig:modg2cont} \end{figure} \begin{figure*} \begin{center} \epsfig{file=figure7.ps,height=21cm} \end{center} \caption[]{Isophotal maps of model C for different viewing angles and wavelengths as indicated. Assumed distance 300 pc. The circle in the lower right corner marks the FWHM of the point spread function.
Contour values are given in the text.} \label{fig:contourc} \end{figure*} \begin{figure*} \begin{center} \epsfig{file=figure8.ps,height=21cm} \end{center} \caption[]{Isophotal maps of model G4 for different viewing angles and wavelengths as indicated. Assumed distance 300 pc. The circle in the lower right corner marks the FWHM of the point spread function. Contour values are given in the text.} \label{fig:contourg4} \end{figure*} \subsubsection{Isophotal maps} We also calculated isophotal maps over the projected $(S,T)$ grid of the sky for models C and G4 (Figs.~\ref{fig:contourc},\ref{fig:contourg4}). The maps were convolved with a Gaussian point spread function (FWHM 0\mysec3 for $\lambda=6$\,cm, 0\mysec1 for $\lambda=2$\,cm and $12\,\mu$m) in order to compare the numerical models with observations of limited resolution. The values (in percent) of the contour lines relative to the maximum flux per beam are 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 20, 30, 40, 50, 60, 70, 80, 90 for $\lambda=6\,$cm, and 2, 5, 15, 25, 35, 45, 55, 65, 75, 85, 95 for $\lambda=2\,$cm and $12\mu$m. The difference between these two models is revealed most strikingly in the maps for $\lambda=6\,$cm and $2\,$cm. The mass outflow of model C is more evenly distributed over its whole opening angle, whereas in model G4 most of the mass is transported outward in a cone between $\Theta=30^\circ$ and $70^\circ$. This leads to the X-shape of the corresponding radio maps for viewing angles $\Theta \ge 60^\circ$. The high density in the region between star and disk for this model results in an optically thick torus at $\lambda=2\,$cm. Thus the contours for $\Theta=30^\circ$ and $60^\circ$ are not symmetric about the equatorial plane at $T=0^{''}$. In the maps corresponding to $\Theta = 30^\circ$, $\lambda = 12\,\mu$m there appears a peculiar horseshoe-like feature. It is generated by the hottest region of the dust torus, which is the inner boundary closest to the star.
It can be seen as a ring in the maps for $\Theta = 0^\circ$. For $\Theta = 30^\circ$ the part of the ring next to the observer is obscured by the dust torus, whereas the other parts are still visible. The chosen beam width resolves the ring and thus leads to the horseshoe-like feature. For $\Theta = 60^\circ$ only the most distant part of this hot ring is visible, leading to a maximum in the flux with a smaller spatial extent than for $\Theta = 30^\circ$. \begin{figure*} \begin{center} \epsfig{file=figure9.ps,height=17cm,angle=270} \end{center} \caption[]{Lines [N{\sc ii}] 6584 and H$\alpha$ for models G2, G3 and G4, and different ``viewing angles'' $\Theta$. The models differ only in wind velocity $v_{\rm wind}$.} \label{fig:lines1} \end{figure*} \begin{figure*} \begin{center} \epsfig{file=figure10.ps,height=17cm,angle=270} \end{center} \caption[]{Lines [N{\sc ii}] 6584 and H$\alpha$ for models A and C, and different ``viewing angles'' $\Theta$. In model A scattering of {\sc uv} photons by dust was neglected, in model C included. Note the different scales on the abscissae.} \label{fig:lines3} \end{figure*} \begin{figure*} \begin{center} \epsfig{file=figure11.ps,height=17cm,angle=270} \end{center} \caption[]{Lines [O{\sc ii}] 3726 and [O{\sc iii}] 5007 for models G2, G3 and G4, and different ``viewing angles'' $\Theta$. The models differ only in wind velocity $v_{\rm wind}$.} \label{fig:lines2} \end{figure*} \begin{figure*} \begin{center} \epsfig{file=figure12.ps,height=17cm,angle=270} \end{center} \caption[]{Lines [O{\sc ii}] 3726 and [O{\sc iii}] 5007 for models A and C, and different ``viewing angles'' $\Theta$. In model A scattering of {\sc uv} photons by dust was neglected, in model C included. Note the different scales on the abscissae.} \label{fig:lines4} \end{figure*} \subsection{Line profiles} \label{sect:lines} Line profiles can provide useful information on the velocity structure in H{\sc ii} regions.
Because the thermal Doppler broadening decreases with increasing atomic weight as $A^{-1/2}$, ions such as N{\sc ii} and O{\sc iii} are better suited for velocity diagnostics than H$\alpha$. This can be seen in the line profiles we obtained (Figs.~\ref{fig:lines1}-\ref{fig:lines4}). They show the flux integrated over the whole area of the object including the disk, the evaporated flow and the cone of the stellar wind. In all cases the line broadening of several tens of km\,s$^{-1}$ is dominated by the velocity distribution of the escaping gas. For comparison, the rotational velocity of the dust torus is $v_{\rm rot} \simeq 13$~km\,s$^{-1}$ in its inner parts, and the thermal Doppler broadening from Eq.~(\ref{eq:doppler}) at $T = 10\,000$~K is $v_{\rm th} \simeq 9$~km\,s$^{-1}$ for H$\alpha$ and $\simeq 2$~km\,s$^{-1}$ for O{\sc iii}. Figures~\ref{fig:lines3} and \ref{fig:lines4} show the line profiles for the models A (no scattering of H-ionizing photons) and C (calculated assuming non-negligible {\sc uv} scattering during the hydrodynamical evolution) from Paper~II. {\sc uv} scattering leads to stronger illumination of the neutral torus by ionizing radiation and thus to a higher photoevaporation rate in model C (by a factor of $\sim 2.3$) compared to model A. Due to the higher density in the regions filled with photoevaporated gas, the lines for model C are generally more intense. For the line [O{\sc ii}] 3726 one notices that the difference between the fluxes for different angles is the smallest of all transitions. Not only is the density of the outflowing ionized gas higher for case~C, but the charge exchange reactions discussed in Sect.~\ref{sect:chargeex} lead to the establishment of a non-negligible O{\sc ii}-abundance even in the ``shadow'' regions not accessible to direct stellar illumination. These regions dominate the line spectra for all angles.
Figures~\ref{fig:lines1} and \ref{fig:lines2} show the calculated line profiles for the models G2, G3 and G4, which differ only in the assumed stellar wind velocity $v_{\rm wind}$, increasing from 400~km\,s$^{-1}$ (G2) to 1\,000~km\,s$^{-1}$ (G4). Comparing the results we find two features: \begin{enumerate} \item The intensity and overall structure of the line profiles considered are almost independent of the stellar wind velocity $v_{\rm wind}$. \item No high-velocity component appears in the lines due to the stellar wind. \end{enumerate} Both features can be explained by the fact that the density of material contained in the stellar wind is much lower than the density in the photoionized outflow. Remembering that $\rho = \dot M / (4\pi r^2 v)$ in a steady-state outflow and that the line emissivity $\epsilon_{\rm L} \propto \rho^2$, we can understand why the low expansion velocities of about $20$~km\,s$^{-1}$ (i.e.\ material close to the torus) dominate the spectra. The overall evaporation rates as well as the expansion velocities are almost equal for all three models, leading to very similar line profiles. It would be necessary to consider transitions which predominate in hot winds in order to detect this high-velocity component and to find significant differences between these models. In spite of our assumption of optically thin line transfer, the profiles for $\Theta=90^\circ$ are not symmetric. This is due to the dust extinction within the H{\sc ii}-region. The receding material is on average further away from the observer than the approaching material. The longer light paths result in stronger extinction of the redshifted components. We are aware that the neglect of scattering by dust in Eq.~(\ref{eq:rte}) may lead to serious errors in the calculated line profiles. We expect non-negligible contributions especially in the red-shifted parts of the lines due to light scattered by the dense, dusty surface of the neutral torus.
This light was originally emitted by gas receding from the torus. The redshift ``seen'' by the torus remains unchanged during the scattering process and will thus enhance the red-shifted wings of the lines. Nevertheless, we expect our qualitative results to hold, since the arguments above referring to the geometry of the underlying models remain applicable. \section{Comparison with observations} \label{comparison} Although the cases presented here describe the situation of an intermediate-mass ionizing source (8.4 M$_\odot$) with a circumstellar disk, many of the basic spectral features can be generalized. Thus, it is interesting to compare and contrast our results with observed UCH{\sc ii}s, even though many of the central sources are presumably much more massive. The collections of photometry data in the catalogues of Wood \&\ Churchwell~(\cite{wood}) and Kurtz et al.~(\cite{kurtz}) show that the SEDs of almost all UCH{\sc ii}-regions possess roughly the same structure as those of the models discussed here: a flat spectrum in the radio and mm regime following a $\nu ^{0.6}$-law due to free-free emission, and an {\sc ir} excess originating from heated dust that exceeds the free-free emission by $\sim 3-4$ orders of magnitude. A closer inspection shows that the dust temperatures in most of the observed sources are an order of magnitude lower than in our models. This may be an indication that the disks are being photoionized by a close companion in a multiple system rather than by the central source. Alternatively, the emitting dust could be distributed in a shell swept up by the expanding H{\sc ii}-region and thus be further away from the exciting star than in the cases discussed here. The large beam width of IRAS and the tendency of massive stars to form in clusters also make it likely that the {\sc ir} fluxes belong to dust emission caused by more than one heating star. 
Objects of this type would appear as ``unresolved'' in the maps presented by Wood \&\ Churchwell~(\cite{wood}) for distances larger than $\sim$~300~pc. Due to the cooler dust, the ``unresolved'' objects cannot be explained by the models of circumstellar disks around {\sc uv}-luminous sources specifically discussed in this paper. Certainly the cometary-shaped UCH{\sc ii}s can be explained by a disk being evaporated by the ionizing radiation of an external star and interaction with its stellar wind. Numerical models dealing with this scenario will be presented in the next paper of this series. \begin{figure*} \begin{center} \epsfig{file=figure13.ps,width=5.5cm,angle=270} \end{center} \caption{VLA-maps of \object{MWC~349}. Left: $\lambda=$~6~cm (Cohen et al. \cite{cohen}), FWHM~$=$~0\mysec3. Contour levels at 1, 2,..., 9, 10, 20,..., 80, 90\% of maximum flux 16.59~mJy\,beam$^{-1}$. Right: $\lambda=$~2~cm (White \&\ Becker~\cite{white}), FWHM~$=$~0\mysec1. Contour levels at $-2$, 2, 5, 15, 25,..., 95\% of maximum flux 156~mJy\,beam$^{-1}$.} \label{fig:mwcvla} \end{figure*} \subsection{\object{MWC~349}~A} A commonly accepted candidate for an evaporating disk around a young massive star is \object{MWC~349}~A. Its radio continuum flux obeys the $\nu^{0.6}$-law up to $\lambda=30$~cm (see the collection of photometry results in Thum \&\ Mart\'{\i}n-Pintado~\cite{thum}). Radio maps obtained by Cohen et al.~(\cite{cohen}) and White \&\ Becker~(\cite{white}) show an extended ionized bipolar outflow with a peculiar X-shape (Fig.~\ref{fig:mwcvla}). In the center, Leinert~(\cite{leinert}) finds an elongated, dense clump with $T \sim 900\,$K and optically thick {\sc ir} emission, which makes \object{MWC~349}~A one of the brightest IRAS sources. The elongated structure shows an almost Keplerian velocity profile along its major axis, perpendicular to the outflow axis (Thum \&\ Mart\'{\i}n-Pintado~\cite{thum}). 
This leads to the assumption of a neutral dust torus around the central star with a small outer radius $< 100\,$AU. Kelly et al.~(\cite{kelly}) estimated the extinction towards this early-type star to be $A_{\rm V} = 10.8$. The SED of \object{MWC~349}~A shows all the features which we find for our models. The extinction and the geometry of the outflow, as well as the lack of a high-velocity component in the line profiles (Hartmann et al. \cite{hartmann}, Hamann \&\ Simon~\cite{hamann}), could be explained by a model with fast stellar wind presented in this paper, assuming a viewing angle $\Theta \sim 90^\circ$. On the other hand, the high dust temperatures in the torus, $T_{\rm d} \sim 800$\,K, and the extremely high mass loss rate in the outflow, $\dot{M} = (1.16 \pm 0.05) \times 10^{-5}\,$M$_\odot$\,yr$^{-1}$ (Cohen et al. \cite{cohen}), remain puzzling and call for a numerical model following the method described in this series, but specifically ``tailored'' to \object{MWC~349}~A. \section{Conclusions} \label{sect:conclusions} In this paper we showed that the models of photoevaporating disks around intermediate-mass stars cannot explain the large number of ``unresolved'' UCH{\sc ii}s observed by Wood \&\ Churchwell~(\cite{wood}) and Kurtz et al.~(\cite{kurtz}), because the inferred dust temperatures of these objects are in most cases an order of magnitude lower than those obtained in the numerical models. The question remains, however, whether the disks of stars more massive than considered here could be responsible for the high abundance of the ``unresolved'' UCH{\sc ii}s. Disks around close companions of massive stars should be treated in greater detail. If we assume that circumstellar disks are the rule in the process of star formation, the simplicity and straightforwardness of this model make it preferable to alternative suggestions. 
The extremely high radiation pressure in the vicinity of massive stars could lead to a larger distance between star and disk and thus to smaller dust temperatures. Another important result of this work is that the profiles of forbidden lines in the optical for the models G2, G3 and G4, with wind velocities of $400-1000$ km\,s$^{-1}$, are almost independent of $v_{\rm wind}$. This is due to the fact that the mass loss rate and the velocity of the evaporated disk material are set not by the interaction with the stellar wind, but by the rate of ionizing photons and the peculiar shock structure, which is very similar in the numerical models (see Fig.~\ref{fig:gmodels}). Since the total mass loss rate is dominated by the evaporated component with low velocity, and the emission is proportional to $\rho^2$, the intensity in the lines does not depend on details of the stellar wind, and their profiles show the same low-velocity components. Treatment of optically thick line emission and scattering effects is not possible with the method presented above. In order to treat non-LTE effects such as masing lines, which are observed in various objects associated with the formation of massive stars, one has to resort to different methods, e.g. the Monte Carlo method presented by Juvela~(\cite{juvela}). This would greatly further our understanding of the process of formation and evolution of massive stars. \begin{acknowledgements} This research has been supported by the Deutsche Forschungsgemeinschaft (DFG) under grants number Yo 5/19-1 and Yo 5/19-2. Calculations were performed at the HLRZ in J\"ulich and the LRZ in Munich. We would also like to thank D.J. Hollenbach for useful discussions. \end{acknowledgements}
\section{Introduction} A fractional quantum Hall (QH) system of filling fraction $\nu$ has edge channels that support fractional charges obeying fractional braiding statistics~\cite{Wen_review}. At $\nu = 2/3$, the edge states are decomposed into a $\nu_c^{\textrm{edge}}=2/3$ charge mode and a counterpropagating neutral mode~\cite{Kane_PRL,Kane_PRB}. They originate from renormalization of two counterpropagating charge modes~\cite{Neutral.MacDonald, Neutral.Johnson}, $\nu_1^{\textrm{edge}}=1$ and $\nu_2^{\textrm{edge}}=-1/3$, and stabilize at low temperature under strong disorder. Neutral modes have attracted much attention, as they are charge neutral and carry energy. They have recently been detected through shot noise measurements~\cite{Neutral.Bid}, and their properties such as energy and decay length have been extensively studied~\cite{Overbosch,Neutral.Rosenow,Neutral.Takei,Neutral.Viola,Gross,Gurman,Venkatachalam,Neutral.Altimiras,Inoue2,Neutral.Wang,Meier,Kamenev, Bishara, Bonderson}. Electron interaction is a dominant source of dephasing at low temperature~\cite{Hansen}. It leads to electron fractionalization~\cite{Pham,Jagla} in quantum wires; an electron injected into a wire splits into constituents (spin-charge separation, charge fractionalization), leading to reduced interference visibility, i.e., dephasing~\cite{Neutral.LeHur}. Interestingly, when the wire is finite, the constituents recombine after bouncing at the wire ends, resulting in coherence revival~\cite{Neutral.Kim}. Fractionalization was detected~\cite{Neutral.Steinberg} in a non-chiral wire, and studied in the integer QH edge~\cite{Levkivskyi,Neutral.Berg,Neutral.Horsdal,Neutral.Neder,Neutral.Bocquillon,Neutral.Inoue1}. Coherent transport, as well as dephasing, can be tackled through the study of low-energy dynamics at the edge. This is particularly important in the context of the fractional QH regime. 
The present study implies that the presence of neutral modes could be a dominant source of dephasing. Note that neutral modes have been observed in almost all fractional QH systems~\cite{Inoue2}. At the same time, there is no uncontested observation of anyonic interference oscillations in the pure Aharonov-Bohm regime of a fractional QH interferometer~\cite{Ofek}. The present study of the $\nu = 2/3$ QH regime emphasizes two dephasing mechanisms arising from fractionalization of an electron into charge and neutral components, {\it plasmonic dephasing} and {\it topological dephasing}. In the plasmonic dephasing mechanism, the overlap between the plasmonic parts of the charge and neutral components decreases with time, as the two components propagate with different velocities in opposite directions. The resulting dephasing is similar to the plasmonic dephasing that takes place in a quantum wire or in integer QH edges. On the other hand, the topological dephasing is a mechanism that has gone unnoticed so far. It occurs because the zero-mode parts of the components, satisfying fractional statistics, may braid with thermally excited anyons. The thermal average of the resulting braiding phase leads to dephasing that occurs only in interfering processes characterized by particular values of topological winding numbers. \begin{figure}[b] \includegraphics[width=\columnwidth]{Setup1.pdf} \caption{(Color online) A large quantum dot (Fabry-Perot interferometer) in the fractional QH regime of $\nu = 2/3$, coupled to lead edge states of $\nu = 2/3$ (black solid lines) through quantum point contacts (QPCs) at $x_\mathcal{L,R}$. Electron (rather than fractional quasiparticle) tunneling occurs through the QPCs (dotted lines). Following the tunneling, each electron (and the hole left behind in the lead edge) fractionalizes into a charge component propagating at velocity $v_c$ (solid blue arrow) and a neutral component counterpropagating at velocity $v_n$ (dashed red). 
The magnetic flux in the dot area is $\Phi$. }\label{Setup1} \end{figure} Our analysis addresses the AB oscillation of differential conductance $\mathcal{G}$ through a quantum dot (QD) in the $\nu = 2/3$ QH regime. We focus on linear response of electron sequential tunneling into the QD. $\mathcal{G}$ is decomposed into the harmonics of the AB flux $\Phi$ in the QD, \begin{eqnarray} \mathcal{G} = \frac{e^2}{h} \sum_{\delta p=0,1,2,\cdots} \mathcal{G}_{\delta p} \cos ( 2 \pi \delta p \frac{\Phi}{\Phi_0}), \label{g_decomp} \end{eqnarray} where $\Phi_0 \equiv \hbar c / |e|$ is a flux quantum; see Fig.~\ref{Setup1}. Semiclassically, $\delta p$ represents the relative winding number of a fractionalized charge component, around the circumference $L$ of the QD, between two interfering paths: an electron, after tunneling into the QD, fractionalizes into charge and neutral components; see Fig.~\ref{Setup1}. The charge (neutral) component has propagation velocity $v_{c(n)}$, spatial width $L_{T,c (n)} \equiv \hbar v_{c (n)} / (2 \pi k_B T \delta_{c (n)})$ at temperature $T$, level spacing $E_{c (n)} \equiv 2\pi \hbar v_{c (n)} / L$, and scaling dimension $\delta_c = 3/4$ ($\delta_n = 1/4$) in the electron tunneling operator at low temperatures. $\mathcal{G}_{\delta p}$ is determined by the overlaps of the components of the same kind between two interfering paths of relative charge winding $\delta p$. We find two mechanisms suppressing $\mathcal{G}_{\delta p \ne 0}$, the plasmonic dephasing and the topological dephasing; the former (latter) involves plasmon (zero-mode) parts of the components. In the plasmonic dephasing, $\mathcal{G}_{\delta p}$ is contributed from the two interfering paths whose charge components overlap maximally between the paths. But, their neutral components overlap only partially between the interfering paths, reducing $\mathcal{G}_{\delta p}$; similar dephasing occurs in other fractionalizations~\cite{Neutral.LeHur,Neutral.Kim}. 
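Numerically, the decomposition in Eq.~\eqref{g_decomp} amounts to projecting a sampled $\mathcal{G}(\Phi)$ onto cosine harmonics of the flux. A minimal sketch with a synthetic (purely hypothetical) conductance trace, not data from the model:

```python
import numpy as np

def ab_harmonics(g, n_max=3):
    """Project G(Phi), sampled uniformly over one flux quantum,
    onto cos(2 pi * dp * Phi / Phi0) harmonics, dp = 0..n_max."""
    n = len(g)
    phi = np.arange(n) / n  # Phi / Phi0 in [0, 1)
    coeffs = []
    for dp in range(n_max + 1):
        basis = np.cos(2 * np.pi * dp * phi)
        norm = n if dp == 0 else n / 2  # discrete cosine orthogonality
        coeffs.append(np.dot(g, basis) / norm)
    return coeffs

# Synthetic trace: suppressed h/e harmonic, dominant h/(2e) harmonic.
phi = np.arange(256) / 256
g = 1.0 + 0.05 * np.cos(2 * np.pi * phi) + 0.4 * np.cos(4 * np.pi * phi)
g0, g1, g2, g3 = ab_harmonics(g)
print(g0, g1, g2)  # recovers 1.0, 0.05, 0.4
```

A trace dominated by the $\delta p = 2$ coefficient, as here, corresponds to the $h/(2e)$-periodic oscillations discussed below.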
The topological dephasing additionally occurs, but depending on $\delta p$, in contrast to the other known mechanisms. When $v_{c} \gg v_{n}$~\cite{Wan,Lee}, the first harmonic $\mathcal{G}_{\delta p=1}$ is suppressed at $k_B T > E_n / (4\pi^2 \delta_n)$ (namely, $L > L_{T,n}$). This is because the charge component gains a thermally fluctuating fractional braiding phase of $\pi N_c$ (leading to $e^{i \pi N_c} =\pm 1$), while it winds once ($\delta p = 1$) around $N_c$ electronic or anyonic thermal excitations on the QD edge or in the bulk. By contrast, the second harmonic $\mathcal{G}_{\delta p=2}$ is not affected by the topological dephasing (as the braiding phase $\pi N_c \delta p$ and $(\pm 1)^{\delta p}=1$ are trivial) and dominates $\mathcal{G}$, resulting in $h/(2e)$ AB oscillations. The above findings hold in both the strong-disorder and weak-disorder regimes of the QD edge. Note that the topological dephasing does not occur in the Coulomb-dominated regime~\cite{Ofek, Rosenow}, where the Coulomb interaction between the bulk and the edge of the QD is strong, as discussed later. \section{Setup and Hamiltonian} The $\nu=2/3$ QD is coupled to two lead edges via quantum point contacts (QPCs)~\cite{Neutral.Furusaki}; see Fig.~\ref{Setup1}. The Hamiltonian is $H=H_\textrm{D} + H_\mathcal{L} + H_\mathcal{R} + H_\textrm{T}$. $H_\textrm{D}$ describes the edge of the QD, while $H_\mathcal{L(R)}$ describes the $\nu = 2/3$ left (right) lead edge. Each edge consists of the bosonic mode $\phi_1$ ($\nu_1^{\textrm{edge}}=1$) and the counterpropagating $\phi_2$ ($\nu_2^{\textrm{edge}}=-1/3$). $\phi_{i=1,2}$ supports charge $e \nu_i^{\textrm{edge}}$ and satisfies $[\phi_i(x), \phi_{i'}(x')] = i\pi \nu_i^{\textrm{edge}} \text{sgn}(x-x')\delta_{ii'}$ at positions $x, x'$. 
Introducing the charge mode $\phi_c \equiv \sqrt{3/2}(\phi_1+\phi_2)$ (supporting charge $2e/3$) and the neutral mode $\phi_n \equiv (\phi_1 + 3\phi_2)/\sqrt{2}$, one writes~\cite{Kane_PRL,Kane_PRB} \begin{eqnarray} \label{Hamiltoniandot} H_\textrm{D} &=& \frac{\hbar}{4\pi} \int^L_0 dx [v_c (\partial_x \phi_c)^2 + v_n(\partial_x \phi_n)^2+v\partial_x\phi_c\partial_x \phi_n] \nonumber \\ &+&\int^L_0 dx [\xi (x) \exp(i\sqrt{2}\phi_n) + \text{H.c.}]. \end{eqnarray} The disorder-induced tunneling amplitude $\xi(x)$ between $\phi_1$ and $\phi_2$ is modeled by a Gaussian random variable with mean zero and variance $\overline{\xi^{*}(x)\xi(x')}=W\delta (x-x')$. For a finite range of bare parameters, $\phi_c$ and $\phi_n$ decouple~\cite{Kane_PRL} at low temperatures, rendering $v$ irrelevant. $H_\mathcal{L,R}$ is written similarly to $H_{\textrm{D}}$, except $\int^L_0 \to \int^\infty_{-\infty}$ in Eq.~\eqref{Hamiltoniandot}. Note that we ignore the Coulomb interaction between the bulk and edge of the QD, considering that the QD size is large enough~\cite{Rosenow}. The QPCs are almost closed, so that electron tunneling dominates. Renormalization group analysis~\cite{Kane_PRL,Kane_PRB} indicates four equally most relevant electron tunneling operators, built from the electron field operators $\Psi_{\pm} (x_\alpha) = e^{i\sqrt{3/2}\phi_c (x_\alpha)}e^{\pm i \phi_n (x_\alpha)/ \sqrt{2}} / \sqrt{2\pi a}$ at $x_{\alpha = \mathcal{L,R}}$ on the QD and $\Psi_{\alpha, \pm}(0)$ on lead edge $\alpha$; $a$ is an ultraviolet cutoff and $\Psi_{\alpha, \pm}$ has the same form as $\Psi_{\pm}$. So the tunneling Hamiltonian is $H_\textrm{T} = \sum_{\alpha = \mathcal{L},\mathcal{R}} \sum_{i,j=\pm} [t_{\alpha ij} \Psi_{\alpha,i}^{\dagger} (0) \Psi_{j} (x_\alpha) + \text{H.c.}]$, where $t_{\alpha ij}$ is the tunneling strength. \section{Topological dephasing} We show that at $\nu = 2/3$, fractionalization and fractional statistics cause the topological dephasing. 
We address the number operator $N_{c (n)}$ of the charge (neutral) mode at the QD edge, \begin{eqnarray} \label{OldnumberNewnumber} \frac{1}{3}N_c = N_1 - \frac{1}{3} N_2, \quad \quad N_n = N_2-N_1, \end{eqnarray} defined through the zero-mode parts of $\phi_{1,2}$ (see Appendix~\ref{AppenNumber}). The number operator $N_{1(2)}$ of $\phi_{1(2)}$ is an integer, since $e$ and $-e/3$ are the elementary charges of $\phi_{1,2}$; $N_c$ is an integer measuring charge excitations in units of $e/3$ ($N_c = 1$ for a quasiparticle of charge $e/3$; $N_c = 3$ for an electron). A quasiparticle of charge $e/3$ at position $x$ on the QD edge is written as $e^{i \phi_c(x) / \sqrt{6}}e^{\pm i \phi_n(x) / \sqrt{2}}$~\cite{Neutral.Viola}. Consider clockwise exchange of two such quasiparticles. Since $[\phi_{c}(x), \phi_{c}(x')] = i\pi \text{sgn}(x-x')$, the exchange of the two charge components results in the statistical phase $\pi/6$, \begin{equation} e^{\frac{i}{\sqrt{6}} \phi_c(x)} e^{\frac{i}{\sqrt{6}} \phi_c(x')} = e^{\pm i \frac{\pi}{6} \textrm{sgn}(x'-x)} e^{\frac{i }{\sqrt{6}}\phi_c(x')} e^{\frac{i }{\sqrt{6}}\phi_c(x)}. \label{Braid} \end{equation} So, after the charge component of the electron operator $\Psi_\pm$ winds once clockwise around $N_c$ charge-mode excitations on the edge, a phase $3\times N_c \times 2 \times \pi/6 = \pi N_c$ is gained~\cite{Chamon1}. Here, the factor $3$ is the number of charge components forming $\Psi_\pm (x)$, and the factor $2$ accounts for braiding (a double exchange). Similarly, the exchange of the neutral components of the two quasiparticles leads to the exchange phase $-\pi/2$. So the neutral component of $\Psi_\pm (x)$ gains $\pm 1 \times N_n \times 2\times (-\pi/2)= \mp \pi N_n$ after winding once around $N_n$ neutral-mode excitations; the number of neutral components of $\Psi_\pm$ is $\pm 1$. This has implications for the dynamics of an electron which enters the QD and then fractionalizes. 
When $v_c \gg v_n$, there is a process where the charge component of the electron winds once around the QD, while the neutral component moves very little. In terms of the winding numbers of the charge and neutral components, $p$ and $q$, this process is denoted by $(p,q)=(1,0)$. This process interferes with that of no winding $(p',q') = (0,0)$, contributing to the $h/e$ harmonics $\mathcal{G}_{\delta p =1}$; see Fig.~\ref{Interferencepaths2}. The relative winding numbers between the two interfering paths are $(\delta p = p-p'=1, \delta q = q - q'=0)$, and the net braiding phase gained from that winding around $N_c$ charge and $N_n$ neutral excitations on the edge is $\pi (N_c \delta p \mp N_n \delta q) = \pi N_c$. Since $N_c$ is an integer, thermal fluctuations of quasiparticle (or electron) excitations on the edge give rise to fluctuations of the braiding phase factor $e^{i \pi N_c} = \pm 1$ [$+$ ($-$) for even (odd) $N_c$], suppressing the $h/e$ harmonics. This topological braiding-induced dephasing also occurs due to thermal quasiparticle or electron fluctuations in the bulk; see Appendix~\ref{Appendix_Quasiflucbulk} for quasiparticle fluctuations in the bulk. Note that this topological dephasing is utterly different from a dephasing mechanism at zero temperature, arising when quasiparticles travelling along an edge change internal degrees of freedom within the bulk (e.g., Ref.~\cite{Stern}). By contrast, the main contribution to the $h/(2e)$ harmonics $\mathcal{G}_{\delta p=2}$ comes from $(\delta p, \delta q)$ = (2,0). In this case, the braiding phase factor $e^{\pi i (N_c \delta p \mp N_n \delta q)} = 1$, regardless of $N_c$ being even or odd. Hence, $\mathcal{G}_{\delta p=2}$ is immune to the topological dephasing. In general, such dephasing occurs only with odd $\delta p + \delta q$, since the fluctuating $N_c \pm N_n$ is always even; see Eq.~\eqref{OldnumberNewnumber}. 
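The thermal average underlying this parity argument can be illustrated with a minimal numerical sketch. Modeling the integer $N_c$ as Poisson-distributed is purely an illustrative stand-in for the thermal number fluctuations, not a result of the model:

```python
import math

def braiding_phase_average(dp, lam, n_max=60):
    """<exp(i pi N dp)> for an integer N drawn from a Poisson
    distribution of mean lam (illustrative stand-in for thermal
    quasiparticle-number fluctuations). The phase factor is real,
    since exp(i pi n dp) = (+-1)^(n dp) for integers n, dp."""
    avg = 0.0
    for n in range(n_max):
        p = math.exp(-lam) * lam**n / math.factorial(n)
        avg += p * math.cos(math.pi * n * dp)
    return avg

a1 = braiding_phase_average(dp=1, lam=3.0)  # -> exp(-2*lam): strongly suppressed
a2 = braiding_phase_average(dp=2, lam=3.0)  # -> exactly 1: immune
print(a1, a2)
```

The odd harmonic averages to $e^{-2\lambda} \approx 0.0025$ for $\lambda = 3$, while the even harmonic is untouched, mirroring the survival of $\mathcal{G}_{\delta p = 2}$.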
When $v_c \simeq v_n$, the topological dephasing does not occur, since $\delta p = -\delta q$ and $N_c \pm N_n$ is even. The above arguments hold for the pure AB regime (or for the intermediate regime between the pure AB and Coulomb-dominated regimes). An apt question is to what extent this analysis holds for the Coulomb-dominated regime. When an electron of a given energy enters the QD by the process of sequential tunneling, it occupies a certain orbital state of the QD edge, satisfying energy conservation. In the pure AB regime, the area enclosed by the orbit (hence, the AB phase assigned to the orbit) is not modified when the number of quasiparticles or electrons in the bulk of the QD fluctuates thermally: edge-bulk interactions are negligible. Hence, such thermal fluctuations affect only the braiding phase gained by the electron, leading to topological dephasing. By contrast, in the Coulomb-dominated regime, the fluctuations are fully screened by the edge, reflecting the effect of the edge-bulk interaction. This screening leads to a modification of the area of the orbit, hence it modifies the AB phase of the orbit. This change of the AB phase exactly cancels the change of the braiding phase caused by the thermal fluctuations. It follows that topological dephasing disappears in the Coulomb-dominated regime. \begin{figure} \includegraphics[width=.9\columnwidth]{Interferencepaths2.pdf} \caption{(Color online) Dynamical processes involving different winding numbers of the charge ($p$) and neutral ($q$) components. (a) $(p,q) = (0,0)$: an electron injected at $x_\mathcal{L}$ fractionalizes into charge (moving along blue solid arrows) and neutral (red dashed) components. (b) $(p,q) = (1,-1)$. For $v_c \gtrsim v_n$, the charge (neutral) component arrives at $x_\mathcal{L}$, after winding once around the QD, $p=1$ (almost once, $q=-1$). The interference of relative winding numbers $(\delta p,\delta q) = (1,-1)$ between (a) and (b) contributes to $\mathcal{G}_{\delta p =1}$. 
Reduced overlap between the neutral components of (a) and (b) leads to plasmonic dephasing. For $v_c \gg v_n$, the dynamics is depicted for (c) $(p,q)=(1,0)$ and (d) $(p,q)=(2,0)$. The charge component winds once in (c) and twice in (d), while the neutral component moves little, by $\Delta L$ in (c) and $2\Delta L$ in (d) (hence mainly $q = 0$). The interference between (a) and (c) suffers from topological dephasing with odd $\delta p + \delta q$. The interference between (a) and (d) is immune to topological dephasing. }\label{Interferencepaths2} \end{figure} \section{Sequential tunneling} We compute $\mathcal{G}$ in Eq.~\eqref{g_decomp} to lowest order in sequential tunneling, \begin{equation} \mathcal{G} \simeq \frac{e^2}{\hbar} c_g \tilde{\gamma} k_B T \sum_{j=\pm, \alpha} \int_{-\infty}^{0} dt F (t)\text{Im} \, G_j(x_\alpha, x_\alpha; t), \label{conductance11} \end{equation} where $\tilde{\gamma} = \gamma_{\mathcal{L}}\gamma_{\mathcal{R}}/( \gamma_{\mathcal{L}}+\gamma_{\mathcal{R}})$, $\gamma_\alpha \propto |t_{\alpha ij}|^2$ is the (renormalized) electron tunneling rate between the QD and lead edge $\alpha = \mathcal{L,R}$, $G_j(x_{\alpha}, x_{\alpha}; t) \equiv \big\langle \big [\Psi_{j}^{\dagger}(x_{\alpha},t), \Psi_{j}(x_{\alpha},0) \big ] \big\rangle$ is the Green function describing the time ($t$) evolution of the fractionalized components of an injected electron described by $\Psi_{j}(x_\alpha,t=0)$, and $c_g$ is a constant. The start and end positions of the Green function $G_j(x_{\alpha}, x_{\alpha}; t)$ coincide, as appropriate for sequential tunneling. The injection leaves a hole behind on the lead edge. $F(t) = (\pi k_B T t/ \hbar) \sinh^{-2 (\delta_c + \delta_n)} (\pi k_B T t / \hbar)$ accounts for the fractionalization of the hole. For the detailed derivation of Eq.~\eqref{conductance11}, see Appendix~\ref{secAppendix_seq}. 
$\mathcal{G}_{\delta p}$ comes from the interference between two processes of relative charge winding number $\delta p$. At $k_B T > E_{c}/(2\pi^2)$, the charge component has spatial width $L_{T,c}< L$. Then, $G_j(x_\alpha, x_\alpha; t)$ contributes to $\mathcal{G}_{\delta p}$ mainly around the times $\delta p L / v_{c}$, at which the charge component arrives at the initial injection point $x_\alpha$ after winding $\delta p$ times around $L$; the neutral-component winding times $ \delta q L / v_{n}$ are much less important, because of the scaling dimensions $\delta_c=3 \delta_n > \delta_n$. We focus on $\mathcal{G}_{\delta p = 1}$ and $\mathcal{G}_{\delta p = 2}$, as they involve the shorter times $\delta p L / v_c$ and are more robust against the dephasing discussed below, hence are much larger than $\mathcal{G}_{\delta p \ge 3}$ at $k_B T \gg E_{c}$. In this regime, we compute $\mathcal{G}_{\delta p = 1}$ and $\mathcal{G}_{\delta p = 2}$ analytically in the absence of disorder and interaction ($W=0$, $v=0$), and also in the strong-disorder regime, based on a finite-size bosonization~\cite{Haldane1,Loss,Geller,Eggert,Delft,Neutral.Kim} and a semiclassical approximation (see Appendix~\ref{appendix_semi}). \section{Clean regime} We first deal with the regime of $W=0$ and $v=0$ [see Eq.~\eqref{Hamiltoniandot}] and then discuss the regime of weak disorder and weak intermode interaction. We treat the various contributions to dephasing quantitatively for the two cases $v_c \gtrsim v_n$ and $v_c \gg v_n$. When $v_c \gtrsim v_n$ and $k_B T \gg E_c$, only the plasmonic dephasing is important. The dominant contribution to $\mathcal{G}$ comes from the $h/e$ harmonic. 
With the additional condition $k_B T \gg \hbar v_n / (L -\Delta L)$, we obtain \begin{eqnarray} \label{conductancevcsimvn} \mathcal{G}_{\delta p = 1} \propto \tilde{\gamma} L (k_BT)^3 \exp(-\frac{L}{L_{T,c}}-\frac{\Delta L}{L_{T,n}}-\frac{L - \Delta L}{L_{T,n}}), \end{eqnarray} where $\Delta L =L v_n/v_c$. We now describe the two processes whose interference dominates $\mathcal{G}_{\delta p = 1}$. In one process [Fig.~\ref{Interferencepaths2}(a)], an electron tunnels from lead edge $\mathcal{L}$ into the QD and fractionalizes at $x_{\mathcal{L}}$ at time $t_{1}'=0$; in the other [Fig.~\ref{Interferencepaths2}(b)], it does so at $t_2' = - L/v_c$. The charge components of the two processes interfere at $x_{\mathcal{L}}$ at $t = 0$, contributing to $\mathcal{G}_{\delta p = 1}$, after respective windings $p = 0$ and $1$. At that time, the distance between the neutral components is $L - \Delta L$, leading to partial overlap and hence to the third factor $\exp(-(L - \Delta L)/{L_{T,n}})$ of $\mathcal{G}_{\delta p = 1}$. The tunneling leaves a hole behind on $\mathcal{L}$, which also fractionalizes into charge and neutral components (not shown in Fig.~\ref{Interferencepaths2}). The partial overlap at $t = 0$ between the two charge components from the holes created at $t_1'$ and $t_{2}'$, and that between the two neutral components, lead to the first two exponential factors of $\mathcal{G}_{\delta p = 1}$ in Eq.~\eqref{conductancevcsimvn}, respectively. In the other limit, $v_c \gg v_n$, both plasmonic and topological dephasing are crucial. The two interfering processes for $\mathcal{G}_{\delta p = 1}$ are shown in Figs.~\ref{Interferencepaths2}(a) and \ref{Interferencepaths2}(c), and those for $\mathcal{G}_{\delta p = 2}$ in Figs.~\ref{Interferencepaths2}(a) and \ref{Interferencepaths2}(d). 
When $k_B T \gg E_c$, we obtain \begin{eqnarray} \mathcal{G}_{\delta p = 1} &\propto& \tilde{\gamma} L (k_BT)^3 \exp{(-\frac{L}{ L_{T,c}}-\frac{\Delta L}{L_{T,n}}-\frac{\Delta L}{L_{T,n}} +\frac{ (\Delta L)^2}{L L_{T,n}})} \nonumber\\ && \times \exp[-\frac{( L - \Delta L)^2}{ L L_{T,n}}] \label{conductancevcggvn1} \\ \mathcal{G}_{\delta p = 2} &\propto& \tilde{\gamma} L (k_BT)^3 \exp[-2(\frac{L}{L_{T,c}}+\frac{\Delta L}{L_{T,n}}+\frac{\Delta L}{L_{T,n}})]. \label{conductancevcggvn2} \end{eqnarray} The first three exponential factors of $\mathcal{G}_{\delta p = 1}$ and $\mathcal{G}_{\delta p = 2}$ result from plasmonic dephasing, like those of Eq.~\eqref{conductancevcsimvn}. The third factor has a form different from that of Eq.~\eqref{conductancevcsimvn} for $v_c \gtrsim v_n$, as the interfering neutral components in the QD are now $\Delta L$ apart in space. The factor $2$ in the arguments of $\mathcal{G}_{\delta p = 2}$ arises from the double winding. Another exponential factor $\exp[ ( \Delta L)^2/ (L L_{T,n})]$ of $\mathcal{G}_{\delta p = 1}$ comes from the plasmonic part of the neutral component; it is cancelled by zero-mode contributions in Eqs.~\eqref{conductancevcsimvn} and \eqref{conductancevcggvn2} and also in other cases~\cite{Neutral.Kim}. The last suppression factor in Eq.~\eqref{conductancevcggvn1}, $\exp[-(L- \Delta L)^2/ (L L_{T,n})]$, represents the topological dephasing, arising from the zero-mode parts of $G_{j}(x_{\alpha}, x_{\alpha};t)$. The process in Fig.~\ref{Interferencepaths2}(c) (where the center of the neutral component hardly moves, while the charge component winds once around $L$) interferes with that of Fig.~\ref{Interferencepaths2}(a), contributing to $(\delta p, \delta q) = (1,0)$. As discussed around Eq.~\eqref{Braid}, this interference with $\delta p + \delta q = 1$ is suppressed by the thermally fluctuating braiding phase factor of $e^{i \pi (\delta p N_c + \delta q N_n)} = \pm 1$. 
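The period halving implied by Eqs.~\eqref{conductancevcggvn1} and \eqref{conductancevcggvn2} can be checked with a small numerical sketch of the exponential factors alone (prefactors dropped; the parameter values are illustrative, with $L_{T,n} = 3 (v_n/v_c) L_{T,c}$ following from $\delta_c = 3\delta_n$):

```python
import math

def harmonics_factors(L, L_Tc, vn_over_vc):
    """Exponential suppression factors of G_{dp=1} and G_{dp=2}
    for v_c >> v_n, Eqs. (6)-(7); all prefactors dropped."""
    L_Tn = 3.0 * vn_over_vc * L_Tc  # from delta_c = 3 * delta_n
    dL = vn_over_vc * L             # Delta L = L v_n / v_c
    g1 = math.exp(-L / L_Tc - 2 * dL / L_Tn + dL**2 / (L * L_Tn)
                  - (L - dL)**2 / (L * L_Tn))  # last term: topological
    g2 = math.exp(-2 * (L / L_Tc + 2 * dL / L_Tn))
    return g1, g2

# Illustrative parameters: v_n = 0.1 v_c and L_{T,c} = L.
g1, g2 = harmonics_factors(L=1.0, L_Tc=1.0, vn_over_vc=0.1)
print(g1, g2)  # g2 > g1: the h/(2e) harmonic dominates
```

Despite its double winding, the $\delta p = 2$ factor exceeds the $\delta p = 1$ factor here, because only the latter carries the topological term $\exp[-(L-\Delta L)^2/(L L_{T,n})]$.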
Interestingly, the suppression factor is determined by the spatial tail (or finite $L_{T,n}$) of the zero-mode part of the neutral component. The tail indicates that the neutral component can quantum mechanically wind once more than the semiclassical winding number $q$ of its center; the quantum mechanical winding is well defined via the Poisson formula (see Appendix~\ref{appendix_top}). Hence, from the processes in Figs.~\ref{Interferencepaths2}(a) and \ref{Interferencepaths2}(c), interference with the total relative winding number $\delta p + (\delta q + 1)$ can occur. As $\delta p + \delta q + 1$ is even, this interference avoids the topological dephasing, dominantly contributing to $\mathcal{G}_{\delta p=1}$, but it is reduced by the separation $L - \Delta L$ of the neutral components of $\delta q + 1$ relative windings. This explains the factor $\exp[-(L- \Delta L)^2/ (L L_{T,n})]$; the exponent is quadratic in $L- \Delta L$, since it originates from the zero-mode part~\cite{Neutral.Kim}. We point out that the topological dephasing occurs when $L > L_{T,n} $ ($k_B T > E_n / (4 \pi ^2 \delta_n)$), as seen in the last exponential factor in Eq.~\eqref{conductancevcggvn1}. In contrast, the plasmonic dephasing occurs when $k_B T \gg E_c$. Note that we choose the condition $k_B T \gg E_c$ for the derivation of Eq.~\eqref{conductancevcggvn1}, in order to exhibit both the plasmonic and the topological dephasing simultaneously. Because of the topological dephasing, $\mathcal{G}_{\delta p = 2}$ is much larger than $\mathcal{G}_{\delta p = 1}$ when $v_c \gg v_n$; $\exp (- (L - \Delta L)^2 /L L_{T,n})$ is much smaller than the other factors. As a result, $\mathcal{G}$ shows $h/(2e)$ AB oscillations. In Fig.~\ref{conductance3}, we numerically compute $\mathcal{G}$ for both $v_c \gtrsim v_n$ and $v_c \gg v_n$ without employing the semiclassical approximation. 
The result for $v_c \gg v_n$ demonstrates the topological dephasing and the consequent period halving even at $k_B T < E_c$. \begin{figure}[tpb] \includegraphics[width=\columnwidth]{interferencecond.pdf} \caption{(Color Online) Topological dephasing and period halving. Shown are Aharonov-Bohm oscillations of $\mathcal{G}$ for (a) $v_{n} = 0.9v_{c}$ (period $\Phi_0$) and for (b) $v_{n} = 0.1 v_{c}$ (period $\Phi_0/2$) at $k_B T=E_c / 20$ (blue curve). $\mathcal{G}$ is measured in units of $e^2 \tilde{\gamma} a / (h^2 v_c^{3/4} v_n ^{1/4})$ and $L=200a$. }\label{conductance3} \end{figure} So far, we have discussed the regime of no intermode interaction ($v= 0$) and no disorder ($W=0$). The same arguments hold, with slight modifications, in the regime of weak intermode interaction and weak disorder. In this regime, the plasmonic part of the neutral component decays, together with the diffusive spreading of the plasmonic part of the charge component~\cite{Kane_PRB}. These effects slightly modify the plasmonic dephasing (the first three dephasing factors in Eqs.~\eqref{conductancevcsimvn}$-$\eqref{conductancevcggvn2}), but do not affect the topological dephasing. Note that the weak disorder regime is realized when the renormalization of $W$ is cut off by the temperature $T$ or the QD size $L$ before the strong disorder regime is reached, and that a weak intermode interaction occurs in a dot when the Coulomb interaction between the charge modes is larger than the confining potential (see Appendix~\ref{appen_Coulomb} and Refs.~\cite{Neutral.Wang, Chamon2}). In recent experiments~\cite{Gurman}, neutral modes were measured with QDs of area $4 \, \mu$m$^2$, implying that the intermode interaction is sufficiently weak in the QDs. \section{Strong disorder regime} We show that Eqs.~\eqref{conductancevcsimvn}$-$\eqref{conductancevcggvn2} hold in the strong disorder regime of a QD edge without any modification.
In this regime, the neutral component is totally decoupled from the charge component ($v=0$ in Eq.~\eqref{Hamiltoniandot})~\cite{Kane_PRL}. We start with the diagonalized form of $H_\textrm{D}$, $H_\textrm{D} = \int_{0}^{L} dx [v_c (\partial_x \phi_c)^2 / (4\pi) + v_n \tilde{\psi}^{\dagger}i\partial_x \tilde{\psi}]$, obtained from Eq.~\eqref{Hamiltoniandot} with the effect of disorder included. Here, $\tilde{\psi}(x) \equiv (e^{i(\tilde{\chi}+\tilde{\phi}_{n})/\sqrt{2}}, e^{i(\tilde{\chi}-\tilde{\phi}_{n})/\sqrt{2}})^T= U(x) \psi(x)$, the unitary matrix $U(x)=T_x \exp[-i\int_{0}^{x}dx' (\xi(x')\sigma^{+}+\xi^{*}(x')\sigma^{-})/v_n]$ represents random disorder scattering, $\psi\equiv (e^{i(\chi+\phi_{n})/\sqrt{2}}, e^{i(\chi-\phi_{n})/\sqrt{2}})^T$ is a two-component fermionic operator, $\chi$ is an auxiliary bosonic field, and $\sigma^{\pm}=\sigma_{x}\pm i\sigma_{y}$, where $\sigma_{x}$ and $\sigma_{y}$ are the Pauli matrices. The equal-position correlator $\langle [\Psi_{\pm}^{\dagger}(x_{\mathcal{L}},t), \Psi_{\pm}(x_{\mathcal{L}},0)] \rangle$ is replaced by $\langle [\tilde{\Psi}_{\pm}^{\dagger}(x_{\mathcal{L}},t), \tilde{\Psi}_{\pm}(x_{\mathcal{L}},0)] \rangle$ when we choose the global gauge transformation making $U(x_\mathcal{L}) = 1$. The latter is readily computed because the Hamiltonian is free in the basis of $\tilde{\Psi}_{\pm}$, and it coincides with $G_{j} (x_\alpha, x_\alpha; t)$ in Eq.~\eqref{conductance11}, obtained in the absence of interaction ($v=0$) between the charge and neutral components and of disorder ($W=0$). Hence, Eqs.~\eqref{conductancevcsimvn}$-$\eqref{conductancevcggvn2} can also be applied to the strong disorder regime. \section{Discussion and conclusion} We have studied electron dephasing at $\nu = 2/3$. Electron fractionalization into charge and neutral components leads to plasmonic dephasing.
When $v_c \gg v_n$ (which is likely \cite{Wan,Lee}) and at $k_B T > E_n/(4\pi^2 \delta_n)$, a new type of dephasing additionally arises. This dephasing is topological, resulting from the fractionalization and the fractional braiding statistics of the components, and it acts selectively on the topological sectors characterized by the winding numbers $(\delta p, \delta q)$ of the components; its dependence on the even-odd parity of $\delta p + \delta q$ is mathematically reminiscent of the parity (integer versus half-integer spin) dependent role of the topological $\theta$ term in antiferromagnetic spin chains~\cite{Haldane2}. It leads to period halving of the AB oscillations. We emphasize that the topological dephasing occurs in both the strong and the weak disorder regimes, when bulk-edge interactions are not strong. In the case of weak disorder, which may be realized at high temperatures, the weak intermode interaction causes the decay of the plasmonic part of the neutral component, accompanied by the diffusive spreading of the plasmonic part of the charge component~\cite{Kane_PRB}. These effects do not alter the topological dephasing, hence the emergence of the $h/(2e)$ oscillations. On the other hand, in the case of strong disorder at low temperatures, $v$ renormalizes towards zero~\cite{Kane_PRL}, which does not change the physics of the topological dephasing. Note that bulk-edge Coulomb interactions become weaker in QDs of larger area; the pure AB regime (or the intermediate regime between the pure AB and the Coulomb-dominated regimes) could be achieved when the edge-to-bulk capacitance is smaller than the other capacitances, even when strong backscattering occurs at the QPCs. We also note that the QH edges at $\nu = 2/3$ may undergo more complex edge reconstruction at about $T > 50$ mK~\cite{Neutral.Wang,Neutral.Bid2}. At lower temperatures our analysis is applicable, while at higher temperatures a different topological dephasing may occur.
Assuming $v_n \sim 5 \times 10^4$~m/s, $v_c \sim 5 \times 10^5$~m/s, and $L=10 \, \mu \textrm{m}$, we expect that the $h/(2e)$ oscillation will appear at temperatures $k_B T > \hbar v_n / (2\pi \delta_n L ) \sim 20$ mK. In this case, the oscillation will be suppressed at $k_B T > \hbar v_c / (2\pi \delta_c L ) \sim 60$ mK, due to the plasmonic dephasing. Detection of the period halving would support the topological dephasing and, thus, the fractional statistics of the charge component at $\nu = 2/3$. Both the plasmonic and the topological dephasing will occur, with modifications, in other anyon interferometers or at other $\nu$'s. It should be mentioned that the known mechanisms yielding $h/(2e)$ oscillations in other mesoscopic systems do not apply to our setup. The Altshuler-Aronov-Spivak mechanism~\cite{Altshuler, Murat} employs disorder averaging in multi-channel geometries, which is not present in our setup. Another mechanism for $h/(2e)$ oscillations~\cite{Kataoka, Sim} relies on integer QH edge modes in an antidot at temperatures much below the charging energy of the antidot. Moreover, our setup does not have superconducting fluctuations that could support such a periodicity. \begin{acknowledgements} We thank G. Rafael for a helpful discussion. HSS was supported by Korea NRF (Grant No. NRF-2010-00491 and Grant No. NRF-2013R1A2A2A01007327). YG was supported by DFG (Grant No. RO 2247/8-1). \end{acknowledgements}
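As a numerical footnote to the temperature estimates above, the sketch below evaluates the crossover scale $k_B T = \hbar v/(2\pi \delta L)$ for the quoted velocities and dot size. The scaling dimensions $\delta_n$ and $\delta_c$ are kept as free parameters (set to one in the printout); with $\delta = 1$ the charge-mode scale already reproduces $\sim 60$ mK, while the quoted $\sim 20$ mK for the neutral mode corresponds to the $1/\delta_n$ enhancement of the bare $\sim 6$ mK scale.

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J s
kB = 1.380649e-23        # Boltzmann constant, J/K

def crossover_temperature(v, L, delta=1.0):
    """Temperature at which k_B T = hbar v / (2 pi delta L), in kelvin."""
    return hbar * v / (2 * math.pi * delta * L * kB)

v_n, v_c = 5e4, 5e5      # neutral / charge mode velocities, m/s
L = 10e-6                # interference length, m

T_n = crossover_temperature(v_n, L)  # topological-dephasing scale (up to 1/delta_n)
T_c = crossover_temperature(v_c, L)  # plasmonic-dephasing scale (up to 1/delta_c)
print(f"neutral-mode scale: {T_n * 1e3:.1f} mK")
print(f"charge-mode scale:  {T_c * 1e3:.1f} mK")
```

The two scales differ by exactly the velocity ratio $v_c/v_n = 10$, which is what opens the temperature window in which the $h/(2e)$ oscillation is visible.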
\section{Introduction} The determination of the one-body distribution function, which gives the probability of finding a particle at some given position, with a given velocity, at a given time, is one of the central problems in nonequilibrium statistical mechanics. Its time-evolution is in many cases well described by approximate kinetic equations such as the Boltzmann equation\cite{McLennan} for low-density gases and the revised Enskog equation\cite{RET},\cite{KDEP} for denser hard-sphere gases and solids. Only rarely are exact solutions of these equations possible. Probably the most important technique for generating approximate solutions to one-body kinetic equations is the Chapman-Enskog method which, as originally formulated, consists of a gradient expansion about a local-equilibrium state\cite{ChapmanCowling},\cite{McLennan}. The goal in this approach is to construct a particular type of solution, called a ``normal solution'', in which all space and time dependence of the one-body distribution occurs implicitly via its dependence on the macroscopic hydrodynamic fields. The latter are, for a simple fluid, the density, velocity and temperature fields corresponding to the conserved variables of particle number, momentum and energy respectively. (In a multicomponent system, the partial densities are also included.) The Chapman-Enskog method proceeds to develop the solution perturbatively in the gradients of the hydrodynamic fields: the distribution is developed as a functional of the fields and their gradients and, at the same time, the equations of motion of the fields, the hydrodynamic equations, are also developed. The zeroth-order distribution is the local-equilibrium distribution; at first order, this is corrected by terms involving linear gradients of the hydrodynamic fields, which in turn are governed by the Euler equations (with an explicit prescription for the calculation of the pressure from the kinetic theory).
At second order, the hydrodynamic fields are governed by the Navier-Stokes equations, at third order, by the Burnett equations, etc. The calculations involved in extending the solution to each successive higher order are increasingly difficult and, since the Navier-Stokes equations are usually considered an adequate description of fluid dynamics, results above third order (Burnett order) for the Boltzmann equation and above second (Navier-Stokes) order for the Enskog equation are sparse. The extension of the Chapman-Enskog method beyond the Navier-Stokes level is, however, not physically irrelevant since only by doing so is it possible to understand non-Newtonian viscoelastic effects, such as shear thinning and normal stresses, which occur even in simple fluids under extreme conditions\cite{Lutsko_EnskogPRL},\cite{LutskoEnskog}. Recently, interest in non-Newtonian effects has increased because of their importance in fluidized granular materials. Granular systems are composed of particles - grains - which lose energy when they collide. As such, there is no equilibrium state - an undriven homogeneous collection of grains will cool continuously. This has many interesting consequences such as the spontaneous formation of clusters in the homogeneous gas and various segregation phenomena in mixtures\cite{GranularPhysicsToday},\cite{GranularRMP},\cite{GranularGases},\cite{GranularGasDynamics}. The collisional cooling also gives rise to a unique class of nonequilibrium steady states due to the fact that the cooling can be balanced by the viscous heating that occurs in inhomogeneous flows. One of the most widely studied examples of such a system is a granular fluid undergoing planar Couette flow, where the velocity field takes the form $\mathbf{v}\left( \mathbf{r}\right) =ay\widehat{\mathbf{x}}$, with $a$ the shear rate.
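The heating-cooling balance behind such steady states can be made concrete with a toy energy-balance model. The hard-sphere-like scalings $\eta(T)=\eta_0\sqrt{T}$ and $\zeta(T)=\zeta_0\sqrt{T}$ used below, with $nk_B$ set to unity, are illustrative assumptions rather than results of this paper; they give a steady granular temperature $T_{ss}=2\eta_0 a^2/(D\zeta_0)\propto a^2$, so the sheared state is not a small perturbation of the unsheared fluid.

```python
from math import sqrt

def steady_temperature(a, eta0=1.0, zeta0=0.5, D=3, T0=1.0,
                       dt=1e-3, steps=50_000):
    """Relax dT/dt = (2/D)*eta(T)*a**2 - zeta(T)*T to its fixed point,
    with the toy scalings eta(T) = eta0*sqrt(T), zeta(T) = zeta0*sqrt(T)."""
    T = T0
    for _ in range(steps):
        T += dt * ((2.0 / D) * eta0 * sqrt(T) * a ** 2 - zeta0 * sqrt(T) * T)
    return T

# Doubling the shear rate quadruples the steady temperature: T_ss ~ a^2.
T1 = steady_temperature(1.0)   # fixed point 2*eta0/(D*zeta0) = 4/3
T2 = steady_temperature(2.0)   # fixed point 16/3
```

The fixed point is reached whatever the initial temperature, illustrating that the steady state is set by the balance itself and not by initial conditions.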
The common presence of non-Newtonian effects, such as normal stresses, in these systems has long been recognized as signalling the need to go beyond the Navier-Stokes description\cite{SelGoldhirsch}. As emphasized by Santos et al.~\cite{SantosInherentRheology}, the balance between the velocity gradients, which determine the rate of viscous heating, and the cooling, arising from a material property, means that such fluids are inherently non-Newtonian in the sense that the sheared state cannot be viewed as a perturbation of the unsheared, homogeneous fluid; the usual Navier-Stokes equations therefore cannot be used to study either the rheology or the stability of the sheared granular fluid. One of the goals of the present work is to show that a more general hydrodynamic description can be derived for this, and other flow states, which is able to accurately describe such far-from-equilibrium states. The formalism developed here is general and not restricted to granular fluids, although they do provide the most obvious application. Indeed, an application of this form of hydrodynamics has recently been presented by Garz{\'{o}}\cite{garzo-2005-}, who studied the stability of a granular fluid under strong shear. The extension of the Chapman-Enskog method to derive the hydrodynamics for fluctuations about an arbitrary nonequilibrium state might at first appear trivial but in fact it involves a careful application of the ideas underlying the method. To illustrate, let $f\left( \mathbf{r},\mathbf{v},t\right) $ be the probability of finding a particle at position $\mathbf{r}$ with velocity $\mathbf{v}$ at time $t$.
For a $D$-dimensional system in equilibrium, this is just the (space- and time-independent) Gaussian distribution \begin{equation} f\left( \mathbf{r},\mathbf{v},t\right) =\phi _{0}\left( \mathbf{v};n,T,\mathbf{U}\right) =n\left( \frac{m}{2\pi k_{B}T}\right) ^{D/2}\exp \left( -m\left( \mathbf{v}-\mathbf{U}\right) ^{2}/2k_{B}T\right) \label{le} \end{equation} where $n$ is the number density, $k_{B}$ is Boltzmann's constant, $T$ is the temperature, $m$ is the mass of the particles and $\mathbf{U}$ is the center-of-mass velocity. The zeroth-order approximation in the Chapman-Enskog method is the localized distribution $f^{(0)}\left( \mathbf{r},\mathbf{v},t\right) =\phi _{0}\left( \mathbf{v};n\left( \mathbf{r},t\right) ,T\left( \mathbf{r},t\right) ,\mathbf{U}\left( \mathbf{r},t\right) \right) $ or, in other words, the local-equilibrium distribution. In contrast, a homogeneous non-equilibrium steady state might be characterized by some time-independent distribution \begin{equation} f\left( \mathbf{r},\mathbf{v},t\right) =\Phi _{ss}\left( \mathbf{v};n,T,\mathbf{U}\right) \end{equation} but the zeroth-order approximation in the Chapman-Enskog method will \emph{not} in general be the localized steady-state distribution, $f^{(0)}\left( \mathbf{r},\mathbf{v},t\right) \neq \Phi _{ss}\left( \mathbf{v};n\left( \mathbf{r},t\right) ,T\left( \mathbf{r},t\right) ,\mathbf{U}\left( \mathbf{r},t\right) \right) $. The reason is that a steady state is the result of a balance - in the example given above, it is a balance between viscous heating and collisional cooling. Thus, any change in density must be compensated by, say, a change in temperature, or the system is no longer in a steady state. This therefore gives a relation between density and temperature in the steady state, say $n=n_{ss}(T)$, so that one has $\Phi _{ss}\left( \mathbf{v};n,T,\mathbf{U}\right) =\Phi _{ss}\left( \mathbf{v};n_{ss}(T),T,\mathbf{U}\right) $.
Clearly, it makes no sense to simply ``localize'' the hydrodynamic variables as the starting point of the Chapman-Enskog method since, in a steady state, the hydrodynamic variables are not independent. Limited attempts have been made in the past to perform the type of generalization suggested here. In particular, Lee and Dufty considered this problem for the specific case of an ordinary fluid under shear with an artificial thermostat present so as to make possible a steady state\cite{MirimThesis},\cite{MirimThesisArticle}. However, the issues discussed in this paper were circumvented through the use of a very particular type of thermostat so that, while of theoretical interest, that calculation cannot serve as a template for the more general problem. In Section II, the abstract formulation of the Chapman-Enskog expansion for fluctuations about a non-equilibrium state is proposed. It requires not only care in understanding the zeroth-order approximation, but also a generalization of the concept of a normal solution. In Section III, the method is illustrated by application to a simple kinetic theory for a sheared granular gas. Explicit expressions are given for the full complement of transport coefficients. One unique feature of the hydrodynamics obtained in this case is that several transport coefficients depend linearly on fluctuations of the velocity in the $y$-direction (i.e., in the direction of the velocity gradient). The section concludes with a brief summary of the resulting hydrodynamics and of the linearized form of the hydrodynamic equations, which leads to considerable simplification. The paper ends in Section IV with a summary of the results, a comparison with the results of the standard Chapman-Enskog analysis and a discussion of further applications. \section{The Chapman-Enskog expansion about an arbitrary state} \subsection{Kinetic theory} Consider a single-component fluid composed of particles of mass $m$ in $D$ dimensions.
In general, the one-body distribution will obey a kinetic equation of the form \begin{equation} \left( \frac{\partial }{\partial t}+\mathbf{v}\cdot \nabla \right) f(\mathbf{r},\mathbf{v},t)=J[\mathbf{r},\mathbf{v},t|f] \label{x1} \end{equation} where the collision operator $J[\mathbf{r},\mathbf{v},t|f]$ is a function of position and velocity and a \emph{functional} of the distribution function. No particular details of the form of the collision operator will be important here, but all results are formulated with the examples of BGK-type relaxation models, the Boltzmann equation and the Enskog equation in mind. The first $D+2$ velocity moments of $f$ define the number density \begin{equation} n(\mathbf{r},t)=\int \;d\mathbf{v}f(\mathbf{r},\mathbf{v},t), \label{density} \end{equation} the flow velocity \begin{equation} \mathbf{u}(\mathbf{r},t)=\frac{1}{n(\mathbf{r},t)}\int \;d\mathbf{v}\mathbf{v}f(\mathbf{r},\mathbf{v},t), \label{velocity} \end{equation} and the kinetic temperature \begin{equation} T(\mathbf{r},t)=\frac{m}{Dn(\mathbf{r},t)k_{B}}\int \;d\mathbf{v}C^{2}(\mathbf{r},t)f(\mathbf{r},\mathbf{v},t), \label{temperature} \end{equation} where $\mathbf{C}(\mathbf{r},t)\equiv \mathbf{v}-\mathbf{u}(\mathbf{r},t)$ is the peculiar velocity. The macroscopic balance equations for density $n$, momentum $m\mathbf{u}$, and energy $\frac{D}{2}nk_{B}T$ follow directly from eq.\ (\ref{x1}) by multiplying with $1$, $m\mathbf{v}$, and $\frac{1}{2}mC^{2}$ and integrating over $\mathbf{v}$: \begin{eqnarray} D_{t}n+n\nabla \cdot \mathbf{u} &=&0\; \label{x2} \\ D_{t}u_{i}+(mn)^{-1}\nabla _{j}P_{ij} &=&0 \notag \\ D_{t}T+\frac{2}{Dnk_{B}}\left( \nabla \cdot \mathbf{q}+P_{ij}\nabla _{j}u_{i}\right) &=&-\zeta T, \notag \end{eqnarray} where $D_{t}=\partial _{t}+\mathbf{u}\cdot \nabla $ is the convective derivative. The microscopic expressions for the pressure tensor $\mathsf{P=P}\left[ f\right] $ and the heat flux $\mathbf{q=q}\left[ f\right] $ depend on the exact form of the collision operator (see refs. \cite{McLennan},\cite{LutskoJCP} for a general discussion) but, as indicated, they are in general functionals of the distribution, while the cooling rate $\zeta $ is given by \begin{equation} \zeta (\mathbf{r},t)=\frac{1}{Dn(\mathbf{r},t)k_{B}T(\mathbf{r},t)}\int \,d\mathbf{v}mC^{2}J[\mathbf{r},\mathbf{v},t|f]. \label{heating} \end{equation} \subsection{Formulation of the gradient expansion} The goal of the Chapman-Enskog method is to construct a so-called \emph{normal} solution to the kinetic equation, eq.(\ref{x1}). In the standard formulation of the method\cite{McLennan}, this is defined as a distribution $f(\mathbf{r},\mathbf{v},t)$ for which all of the space and time dependence occurs through the hydrodynamic variables, denoted collectively as $\psi \equiv \left\{ n,\mathbf{u},T\right\} $, and their derivatives so that \begin{equation} f(\mathbf{r},\mathbf{v},t)=f\left( \mathbf{v};\psi \left( \mathbf{r},t\right) ,\mathbf{\nabla }\psi \left( \mathbf{r},t\right) ,\mathbf{\nabla \nabla }\psi \left( \mathbf{r},t\right) ,...\right) . \label{KE} \end{equation} The distribution is therefore a \emph{functional} of the fields $\psi \left( \mathbf{r},t\right) $ or, equivalently in this case, a \emph{function} of the fields and their gradients to all orders. In the following, this particular type of functional dependence will be denoted more compactly with the notation $f\left( \mathbf{v};\left[ \mathbf{\nabla }^{(n)}\psi \left( \mathbf{r},t\right) \right] \right) $ where the index $n$ indicates the maximum derivative that is used. When all derivatives are possible, as in eq.(\ref{KE}), the notation $f(\mathbf{r},\mathbf{v},t)=f\left( \mathbf{v};\left[ \mathbf{\nabla }^{(\infty )}\psi \left( \mathbf{r},t\right) \right] \right) $ will be used.
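The moment definitions, eqs.(\ref{density})-(\ref{temperature}), are easy to verify numerically. The sketch below is an illustration only, with $m=k_{B}=1$ and $D=3$: it samples velocities from a Maxwellian and recovers the input flow velocity and temperature from the sampled distribution.

```python
import numpy as np

rng = np.random.default_rng(0)
m = kB = 1.0
D, T = 3, 1.7
u = np.array([0.3, -0.1, 0.0])

# Sample N velocities from the Maxwellian: each Cartesian component is
# Gaussian with mean u_i and variance k_B*T/m; n enters only as the
# overall normalization of f and drops out of these averages.
N = 400_000
v = u + rng.normal(scale=np.sqrt(kB * T / m), size=(N, D))

u_est = v.mean(axis=0)                               # eq. (velocity)
C = v - u_est                                        # peculiar velocity
T_est = m * (C ** 2).sum(axis=1).mean() / (D * kB)   # eq. (temperature)
```

Up to Monte Carlo noise, `u_est` and `T_est` reproduce the chosen `u` and `T`, i.e. the first $D+2$ moments of the distribution really are the hydrodynamic fields.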
The kinetic equation, eq.(\ref{x1}), the balance equations, eqs.(\ref{x2}), and the definitions of the various fluxes and sources then provide a closed set of equations from which to determine the distribution. Note that since the fluxes and sources are functionals of the distribution, their space and time dependence also occurs implicitly via their dependence on the hydrodynamic fields and their derivatives. Given that such a solution has been found for a particular set of boundary conditions, yielding the hydrodynamic state $\psi _{0}\left( \mathbf{r},t\right) $ with distribution $f_{0}\left( \mathbf{v};\left[ \mathbf{\nabla }^{(\infty )}\psi _{0}\left( \mathbf{r},t\right) \right] \right) $, the aim is to describe deviations about this state, denoted $\delta \psi $, so that the total hydrodynamic fields are $\psi =\psi _{0}+\delta \psi $. In the Chapman-Enskog method, it is assumed that the deviations are smooth in the sense that \begin{equation} \delta \psi \gg l\mathbf{\nabla }\delta \psi \gg l^{2}\mathbf{\nabla \nabla }\delta \psi ..., \end{equation} where $l$ is the mean free path, so that one can work perturbatively in terms of the gradients of the perturbations to the hydrodynamic fields. To develop this perturbation theory systematically, it is convenient to introduce a fictitious small parameter, $\epsilon $, and to write the gradient operator as $\mathbf{\nabla }=\mathbf{\nabla }_{0}+\epsilon \mathbf{\nabla }_{1}$ where the two operators on the right are defined by $\mathbf{\nabla }_{0}\psi =\mathbf{\nabla }\psi _{0}$ and $\mathbf{\nabla }_{1}\psi =\mathbf{\nabla }\delta \psi $.
This then generates an expansion of the distribution that looks like \begin{eqnarray} f\left( \mathbf{v};\left[ \mathbf{\nabla }^{(\infty )}\psi \left( \mathbf{r},t\right) \right] \right) &=&f^{(0)}\left( \mathbf{v};\mathbf{\nabla }_{0}^{\left( \infty \right) }\psi \left( \mathbf{r},t\right) \right) \label{dist-expansion} \\ &&+\epsilon f^{(1)}\left( \mathbf{v};\mathbf{\nabla }_{1}\mathbf{\delta }\psi ,\mathbf{\nabla }_{0}^{\left( \infty \right) }\psi \left( \mathbf{r},t\right) \right) \notag \\ &&+\epsilon ^{2}f^{(2)}\left( \mathbf{v};\mathbf{\nabla }_{1}\mathbf{\nabla }_{1}\mathbf{\delta }\psi ,\left( \mathbf{\nabla }_{1}\mathbf{\delta }\psi \right) ^{2},\mathbf{\nabla }_{0}^{(\infty )}\psi \left( \mathbf{r},t\right) \right) \notag \\ &&+... \notag \end{eqnarray} where $f^{(1)}$ will be linear in $\mathbf{\nabla }_{1}\mathbf{\delta }\psi $, $f^{(2)}$ will be linear in $\mathbf{\nabla }_{1}\mathbf{\nabla }_{1}\mathbf{\delta }\psi $ and $\left( \mathbf{\nabla }_{1}\mathbf{\delta }\psi \right) ^{2}$, etc. This notation is meant to be taken literally:\ the quantity $\mathbf{\nabla }_{0}^{(\infty )}\psi \left( \mathbf{r},t\right) =\left\{ \psi \left( \mathbf{r},t\right) ,\mathbf{\nabla }_{0}\psi \left( \mathbf{r},t\right) ,...\right\} =\left\{ \psi \left( \mathbf{r},t\right) ,\mathbf{\nabla }\psi _{0}\left( \mathbf{r},t\right) ,...\right\} $ so that, at each order in perturbation theory, the distribution is a function of the exact field $\psi \left( \mathbf{r},t\right) $ as well as all gradients of the reference field. This involves a departure from the usual formulation of the Chapman-Enskog definition of a normal state. In the standard form, the distribution is assumed to be a functional of the \emph{exact} fields $\psi \left( \mathbf{r},t\right) $ whereas here it is proposed that the distribution is a functional of the exact field $\psi \left( \mathbf{r},t\right) $ \emph{and} of the reference state $\psi _{0}\left( \mathbf{r},t\right) $.
Of course, it is obvious that in order to study deviations about a reference state within the Chapman-Enskog framework, the distribution will have to be a functional of that reference state. Nevertheless, this violates, or generalizes, the usual definition of a normal solution since there are now two sources of space and time dependence in the distribution:\ the exact hydrodynamic fields and the reference hydrodynamic state. For deviations from an equilibrium state, this point is moot since $\mathbf{\nabla }\psi _{0}\left( \mathbf{r},t\right) =0$, etc. The perturbative expansion of the distribution will generate a similar expansion of the fluxes and sources through their functional dependence on the distribution, see e.g. eq.(\ref{heating}), so that one writes \begin{equation} P_{ij}=P_{ij}^{(0)}+\epsilon P_{ij}^{(1)}+... \end{equation} and so forth. Since the balance equations link space and time derivatives, it is necessary to introduce a multiscale expansion of the time derivatives in both the kinetic equation and the balance equations as \begin{equation} \frac{\partial }{\partial t}f=\partial _{t}^{(0)}f+\epsilon \partial _{t}^{(1)}f+... \end{equation} The precise meaning of the symbols $\partial _{t}^{(0)}$, $\partial _{t}^{(1)}$ is that the balance equations define $\partial _{t}^{(i)}$ in terms of the spatial gradients of the hydrodynamic fields and these definitions, together with the normal form of the distribution, define the action of these symbols on the distribution. Finally, to maintain generality, note that sometimes (specifically in the Enskog theory) the collision operator itself is non-local and must also be expanded in gradients of $\delta \psi $, so that we write \begin{equation} J[\mathbf{r},\mathbf{v},t|f]=J_{0}[\mathbf{r},\mathbf{v},t|f]+\epsilon J_{1}[\mathbf{r},\mathbf{v},t|f]+...
\end{equation} and it is understood that $J_{0}[\mathbf{r},\mathbf{v},t|f]$ by definition involves no gradients with respect to the perturbations $\delta \psi \left( \mathbf{r},t\right) $ but will, in general, contain gradients of \emph{all} orders in the reference fields $\psi _{0}\left( \mathbf{r},t\right) $. (Note that the existence of a normal solution is plausible if the spatial and temporal dependence of the collision operator is also normal, which is, in fact, generally the case. However, for simplicity, no effort is made here to indicate this explicitly.) A final property of the perturbative expansion concerns the relation between the various distributions and the hydrodynamic variables. The zeroth-order distribution is required to reproduce the exact hydrodynamic variables via \begin{equation} \left( \begin{array}{c} n(\mathbf{r},t) \\ n(\mathbf{r},t)\mathbf{u}(\mathbf{r},t) \\ Dn(\mathbf{r},t)k_{B}T \end{array} \right) =\int \left( \begin{array}{c} 1 \\ \mathbf{v} \\ mC^{2} \end{array} \right) f^{(0)}\left( \mathbf{v};\mathbf{\nabla }_{0}^{\left( \infty \right) }\psi \left( \mathbf{r},t\right) \right) d\mathbf{v} \label{f0-hydro} \end{equation} while the higher order terms are orthogonal to the first three velocity moments \begin{equation} \int \left( \begin{array}{c} 1 \\ \mathbf{v} \\ mC^{2} \end{array} \right) f^{(n)}\left( \mathbf{v};\mathbf{\nabla }_{0}^{\left( \infty \right) }\psi \left( \mathbf{r},t\right) \right) d\mathbf{v}=0,\;n>0, \label{fn-hydro} \end{equation} so that the total distribution $f=f^{(0)}+f^{(1)}+...$ satisfies eqs.(\ref{density})-(\ref{temperature}).
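The orthogonality conditions, eq.(\ref{fn-hydro}), can be checked on a one-dimensional toy correction: a Sonine-type, heat-flux-like term $f^{(1)}(C)\propto C\,(C^{2}-3)\,e^{-C^{2}/2}$ carries no density, momentum or kinetic energy. This particular form is chosen purely for illustration; it is not the correction derived in the text.

```python
import numpy as np

C = np.linspace(-12.0, 12.0, 200_001)             # 1D velocity grid
dC = C[1] - C[0]
f1 = C * (C ** 2 - 3.0) * np.exp(-C ** 2 / 2.0)   # toy first-order correction

# Moments against 1, v and C^2, as in eq. (fn-hydro):
moments = [float((w * f1).sum() * dC)
           for w in (np.ones_like(C), C, C ** 2)]
# All three vanish: the moments against 1 and C^2 by oddness of the
# integrand, and the moment against C because <C^4> = 3<C^2> for the
# unit-variance Gaussian weight.
```

Adding any multiple of such a term to $f^{(0)}$ therefore changes the fluxes but leaves the hydrodynamic fields untouched, which is exactly the division of labor between eqs.(\ref{f0-hydro}) and (\ref{fn-hydro}).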
\subsection{The reference state} Recall that the goal is to describe deviations from the reference state $\psi _{0}\left( \mathbf{r},t\right) $, which corresponds to the distribution $f_{0}\left( \mathbf{r},\mathbf{v},t;\left[ \psi _{0}\right] \right) $; in fact, the distribution and fields are related by the definitions given in eqs.(\ref{density})-(\ref{temperature}). The reference distribution is itself assumed to be normal, so that the dependence on $\mathbf{r}$ and $t$ occurs implicitly through the fields. In terms of the notation used here, the reference distribution satisfies the kinetic equation, eq.(\ref{x1}), and the full, nonlinear balance equations, eqs.(\ref{x2}). Using the definitions given above, these translate into \begin{equation} \left( \partial _{t}^{\left( 0\right) }+\mathbf{v}\cdot \nabla ^{\left( 0\right) }\right) f_{0}\left( \mathbf{r},\mathbf{v},t;\left[ \psi _{0}\right] \right) =J_{0}[\mathbf{r},\mathbf{v},t|f_{0}] \label{ref-KE} \end{equation} and the fields are solutions to the full, nonlinear balance equations \begin{eqnarray} \partial _{t}^{\left( 0\right) }n_{0}+\mathbf{u}_{0}\cdot \mathbf{\nabla }^{\left( 0\right) }n_{0}+n_{0}\mathbf{\nabla }^{\left( 0\right) }\cdot \mathbf{u}_{0} &=&0\; \label{ref-balance} \\ \partial _{t}^{\left( 0\right) }u_{0,i}+\mathbf{u}_{0}\cdot \mathbf{\nabla }^{\left( 0\right) }u_{0,i}+(mn_{0})^{-1}\partial _{j}^{(0)}P_{ij}^{(00)} &=&0 \notag \\ \partial _{t}^{(0)}T_{0}+\mathbf{u}_{0}\cdot \mathbf{\nabla }^{\left( 0\right) }T_{0}+\frac{2}{Dn_{0}k_{B}}\left( \mathbf{\nabla }^{\left( 0\right) }\cdot \mathbf{q}^{(00)}+P_{ij}^{(00)}\partial _{j}^{(0)}u_{0,i}\right) &=&-\zeta ^{(00)}T_{0}\;, \notag \end{eqnarray} where, e.g., $P_{ij}^{(00)}$ is the pressure tensor evaluated in the reference state, and \begin{equation} \partial _{t}^{(n)}\psi _{0}=0,\;n>0.
\end{equation} Thus, in the ordering scheme developed here, the reference state is an exact solution to the zeroth order perturbative equations. For the standard case describing deviations from the equilibrium state, the hydrodynamic fields are constant in both space and time and $\zeta ^{(00)}=0$, so that the balance equations just reduce to $\partial _{t}^{(0)}\psi _{0}=0$. The left hand side of the kinetic equation therefore vanishes, leaving $0=J_{0}[\mathbf{r},\mathbf{v},t|f_{0}]$, which is indeed satisfied by the equilibrium distribution. For a granular fluid, $\zeta ^{(00)}\neq 0$ and the simplest solution that can be constructed consists of spatially homogeneous, but time-dependent, fields giving \begin{equation} \partial _{t}^{\left( 0\right) }f_{0}\left( \mathbf{r},\mathbf{v},t;\left[ \psi _{0}\right] \right) =J_{0}[\mathbf{r},\mathbf{v},t|f_{0}] \label{e1} \end{equation} and \begin{eqnarray} \partial _{t}^{\left( 0\right) }n_{0} &=&0\; \\ \partial _{t}^{\left( 0\right) }u_{0,i} &=&0 \notag \\ \partial _{t}^{(0)}T_{0} &=&-\zeta ^{(00)}T_{0} \notag \end{eqnarray} so that the distribution depends on time through its dependence on the temperature. The balance equations, together with the assumption of normality, serve to define the meaning of the left hand side of eq.(\ref{e1}), giving \begin{equation} -\zeta ^{(00)}T_{0}\frac{\partial }{\partial T}f_{0}\left( \mathbf{r},\mathbf{v},t;\left[ \psi _{0}\right] \right) =J_{0}[\mathbf{r},\mathbf{v},t|f_{0}]. \end{equation} Typically, this is solved by assuming a scaling solution of the form $f_{0}\left( \mathbf{r},\mathbf{v},t;\left[ \psi _{0}\right] \right) =\Phi \left( \mathbf{v}\sqrt{\frac{m\sigma ^{2}}{k_{B}T\left( t\right) }}\right) $. \subsection{The zeroth order Chapman-Enskog solution} As emphasized above, the Chapman-Enskog method is an expansion in gradients of the deviations of the hydrodynamic fields from the reference state.
Using the ordering developed above, the zeroth order kinetic equation is \begin{equation} \partial _{t}^{(0)}f^{(0)}\left( \mathbf{r},\mathbf{v};\delta \psi \left( \mathbf{r},t\right) ,\left[ \psi _{0}\right] \right) +\mathbf{v}\cdot \nabla ^{(0)}f^{(0)}\left( \mathbf{r},\mathbf{v};\delta \psi \left( \mathbf{r},t\right) ,\left[ \psi _{0}\right] \right) =J_{0}[\mathbf{r},\mathbf{v},t|f^{(0)}] \label{zero-KE} \end{equation} and the zeroth order balance equations are \begin{eqnarray} \partial _{t}^{(0)}n+\mathbf{u}\cdot \mathbf{\nabla }n_{0}+n\mathbf{\nabla }\cdot \mathbf{u}_{0} &=&0\; \label{zero-KE-2} \\ \partial _{t}^{(0)}u_{i}+\mathbf{u}\cdot \mathbf{\nabla }u_{0,i}+(mn)^{-1}\nabla _{j}^{\left( 0\right) }P_{ij}^{(0)} &=&0 \notag \\ \partial _{t}^{(0)}T+\mathbf{u}\cdot \nabla T_{0}+\frac{2}{Dnk_{B}}\left( \mathbf{\nabla }^{\left( 0\right) }\cdot \mathbf{q}^{(0)}+P_{ij}^{(0)}\partial _{j}u_{0,i}\right) &=&-\zeta ^{(0)}T. \notag \end{eqnarray} Making use of the balance equations satisfied by the reference fields, eqs.(\ref{ref-balance}), these can be written in terms of the deviations as \begin{eqnarray} \partial _{t}^{(0)}\delta n+\delta \mathbf{u}\cdot \nabla n_{0}+\delta n\nabla \cdot \mathbf{u}_{0} &=&0\; \label{zero-balance} \\ \partial _{t}^{(0)}\delta u_{i}+\delta \mathbf{u}\cdot \nabla u_{0,i}+(mn)^{-1}\nabla _{j}^{(0)}P_{ij}^{(0)}-(mn_{0})^{-1}\nabla _{j}P_{ij}^{(00)} &=&0 \notag \\ \partial _{t}^{(0)}\delta T+\delta \mathbf{u}\cdot \nabla T_{0}+\frac{2}{Dnk_{B}}\left( \nabla ^{(0)}\cdot \mathbf{q}^{(0)}+P_{ij}^{(0)}\nabla _{j}u_{0,i}\right) -\frac{2}{Dn_{0}k_{B}}\left( \nabla \cdot \mathbf{q}^{(00)}+P_{ij}^{(00)}\nabla _{j}u_{0,i}\right) &=&-\zeta ^{(0)}T+\zeta ^{(00)}T_{0}. \notag \end{eqnarray} Since the zeroth-order distribution is a \emph{function} of $\delta \psi $ but a \emph{functional} of the reference fields, the time derivative in eq.(\ref{zero-KE}) is evaluated using \begin{equation} \partial _{t}^{(0)}f^{(0)}=\sum_{\alpha }\left( \partial _{t}^{(0)}\delta \psi _{\alpha }\left( \mathbf{r},t\right) \right) \frac{\partial }{\partial \delta \psi _{\alpha }\left( \mathbf{r},t\right) }f^{(0)}+\sum_{\alpha }\int d\mathbf{r}^{\prime }\;\left( \partial _{t}^{(0)}\psi _{0,\alpha }\left( \mathbf{r}^{\prime },t\right) \right) \frac{\delta }{\delta \psi _{0,\alpha }\left( \mathbf{r}^{\prime },t\right) }f^{(0)}. \label{zero-t} \end{equation} These equations must be solved subject to the additional boundary condition \begin{equation} \lim_{\delta \psi \rightarrow 0}f^{(0)}\left( \mathbf{r},\mathbf{v},t;\delta \psi \left( \mathbf{r},t\right) ,\left[ \psi _{0}\right] \right) =f_{0}\left( \mathbf{r},\mathbf{v},t;\left[ \psi _{0}\right] \right) . \label{bc0} \end{equation} There are several important points to be made here. First, it must be emphasized that the reference fields $\psi _{0}\left( \mathbf{r},t\right) $ and the deviations $\delta \psi \left( \mathbf{r},t\right) $ play different roles in these equations. The former are fixed and assumed known, whereas the latter are independent variables. The result of a solution of these equations will be the zeroth order distribution as a function of the variables $\delta \psi $. For any given physical problem, the deviations will be determined by solving the balance equations, eqs.(\ref{zero-balance}), subject to appropriate boundary conditions, and only then is the distribution completely specified. Second, nothing is said here about the solution of eqs.(\ref{zero-KE})-(\ref{zero-t}), which, in general, constitute a complicated functional equation in terms of the reference state variables $\psi _{0,\alpha }\left( \mathbf{r},t\right) $.
The only obvious exceptions, and perhaps the only practical cases, are when the reference state is either time-independent, so that ${\partial _{t}^{(0)}}\psi _{0,\alpha }=0$, or spatially homogeneous so that $f^{(0)}$ is a function, and not a functional, of the reference fields. The equilibrium state is both; the homogeneous cooling state is spatially homogeneous; and time-independent flow states such as uniform shear flow or Poiseuille flow with thermalizing walls are important examples of time-independent, spatially inhomogeneous states. Third, since eqs.(\ref{zero-KE})-(\ref{zero-KE-2}) are the lowest order equations in a gradient expansion, they are to be solved for \emph{arbitrarily large} deviations of the fields, $\delta \psi $. There is no sense in which the deviations should be considered to be small. The fourth observation, and perhaps the most important, is that there is no conceptual connection between the zeroth order distribution $f^{(0)}\left( \mathbf{v}% ;\delta \psi \left( \mathbf{r},t\right) ,\mathbf{\nabla }_{0}^{(\infty )}\psi _{0}\left( \mathbf{r},t\right) \right) $ and the reference distribution $f_{0}\left( \mathbf{v};\mathbf{\nabla }_{0}^{(\infty )}\psi _{0}\left( \mathbf{r},t\right) \right) $ except for the limit given in eq.(% \ref{bc0}). In particular, it will almost always be the case that \begin{equation} f^{(0)}\left( \mathbf{v};\delta \psi \left( \mathbf{r},t\right) ,\mathbf{% \nabla }_{0}^{(\infty )}\psi _{0}\left( \mathbf{r},t\right) \right) \neq f_{0}\left( \mathbf{v};\mathbf{\nabla }_{0}^{(\infty )}\left( \psi _{0}\left( \mathbf{r},t\right) +\delta \psi \left( \mathbf{r},t\right) \right) \right) . \end{equation}% A rare exception, for which this inequality becomes an equality, is when the reference state is the equilibrium state.
In that case, the density, temperature and velocity fields are uniform and the reference distribution is just a Gaussian% \begin{equation} f_{0}\left( \mathbf{r},\mathbf{v};\mathbf{\nabla }_{0}^{(\infty )}\psi _{0}\right) =\phi _{0}\left( \mathbf{v};n_{0},T_{0},\mathbf{U}_{0}\right) \end{equation}% and the solution to the zeroth order equations is the local equilibrium distribution \begin{equation} f^{(0)}\left( \mathbf{v};\delta \psi \left( \mathbf{r},t\right) ,\mathbf{% \nabla }_{0}^{(\infty )}\psi _{0}\left( \mathbf{r},t\right) \right) =\phi _{0}\left( \mathbf{v};n_{0}+\delta n\left( \mathbf{r},t\right) ,T_{0}+\delta T\left( \mathbf{r},t\right) ,\mathbf{U}_{0}+\delta \mathbf{U}% \left( \mathbf{r},t\right) \right) =f_{0}\left( \mathbf{v};\mathbf{% \nabla }_{0}^{(\infty )}\left( \psi _{0}\left( \mathbf{r},t\right) +\delta \psi \left( \mathbf{r},t\right) \right) \right) . \label{localize} \end{equation}% For steady states, as will be illustrated in the next Section, it is not the case that $f^{(0)}$ is obtained from the steady-state distribution via a ``localization'' along the lines of that shown in eq.(\ref{localize}). On the other hand, eqs.(\ref{zero-KE})-(\ref{zero-KE-2}) are the same whether they are solved for the general field $\delta \psi \left( \mathbf{r}% ,t\right) $ or for the spatially homogeneous field $\delta \psi \left( t\right) $ with the subsequent localization $\delta \psi \left( t\right) \rightarrow \delta \psi \left( \mathbf{r},t\right) $. Furthermore, these equations are identical to those one would solve in order to obtain an exact normal solution to the full kinetic equation, eq.(\ref{ref-KE}), and balance equations, eq.(\ref{ref-balance}), for the fields $\psi _{0}\left( \mathbf{r}% ,t\right) +\delta \psi \left( t\right) $. In other words, the zeroth-order Chapman-Enskog distribution is the localization of the exact distribution for homogeneous deviations from the reference state.
Again, only in the case of the equilibrium reference state is it true that this corresponds to the localization of the reference state itself. \subsection{First order Chapman-Enskog} In the following, the equations for the first-order terms will also be needed. Collecting terms in eq.(\ref{ref-KE}), the first order distribution function is found to satisfy% \begin{eqnarray} &&\partial _{t}^{(0)}f^{(1)}(\mathbf{v};\delta \psi \left( \mathbf{r}% ,t\right) ,\left[ \psi _{0}\right] )+\mathbf{v}\cdot \mathbf{\nabla }% ^{(0)}f^{(1)}(\mathbf{v};\delta \psi \left( \mathbf{r},t\right) ,\left[ \psi _{0}\right] ) \label{f1} \\ &=&J_{0}[\mathbf{r},\mathbf{v},t|f_{1}]+J_{1}[\mathbf{r},\mathbf{v}% ,t|f_{0}]-\left( \partial _{t}^{(1)}f^{(0)}(\mathbf{v};\delta \psi \left( \mathbf{r},t\right) ,\left[ \psi _{0}\right] )+\mathbf{v}\cdot \nabla ^{(1)}f^{(0)}(\mathbf{v};\delta \psi \left( \mathbf{r},t\right) ,\left[ \psi _{0}\right] )\right) \notag \end{eqnarray}% and the first-order balance equations become{% \begin{eqnarray} {\partial _{t}^{(1)}\delta n+\mathbf{u}\cdot \nabla }\delta n+n\nabla \cdot \delta \mathbf{u} &=&0\; \label{P1} \\ {\partial _{t}^{(1)}\delta u_{i}+\mathbf{u}\cdot }\mathbf{\nabla }{\delta }% u_{i}+(mn)^{-1}\nabla _{j}^{\left( 1\right) }P_{ij}^{\left( 0\right) }+(mn)^{-1}\nabla _{j}^{\left( 0\right) }P_{ij}^{\left( 1\right) } &=&0 \notag \\ {\partial _{t}^{(1)}\delta T+\mathbf{u}\cdot \mathbf{\nabla }\delta }T+\frac{% 2}{Dnk_{B}}\left( \mathbf{\nabla }^{\left( 1\right) }\cdot \mathbf{q}% ^{(0)}+P_{ij}^{\left( 0\right) }\nabla _{j}\delta u_{i}\right) +\frac{2}{% Dnk_{B}}\left( \mathbf{\nabla }^{\left( 0\right) }\cdot \mathbf{q}% ^{(1)}+P_{ij}^{\left( 1\right) }\nabla _{j}u_{0,i}\right) &=&-\zeta ^{(1)}T. 
\notag \end{eqnarray}% } \section{Application to Uniform Shear Flow of Granular Fluids} Uniform shear flow (USF) is a macroscopic state that is characterized by a constant density, a uniform temperature and a simple shear with the local velocity field given by \begin{equation} u_{i}=a_{ij}r_{j},\quad a_{ij}=a\delta _{ix}\delta _{jy}, \label{profile} \end{equation}% where $a$ is the \emph{constant} shear rate. If one assumes that the pressure tensor, heat flux vector and cooling rate are also spatially uniform, the reference-state balance equations, eqs.(\ref{ref-balance}), become{% \begin{eqnarray} \partial _{t}^{\left( 0\right) }{n}_{0} &=&0\; \label{ssx} \\ \partial _{t}^{\left( 0\right) }{u}_{0,i}+au_{0,y}\delta _{ix} &=&0 \notag \\ {\partial _{t}^{(0)}T}_{0}+\frac{2}{Dn_{0}k_{B}}aP_{xy}^{(00)} &=&-\zeta ^{(00)}T_{0}\;. \notag \end{eqnarray}% } The question of whether or not these assumptions of spatial homogeneity are true depends on the detailed form of the collision operator: in ref.\cite{LutskoPolydisperse} it is shown that only for the linear velocity profile, eq.(\ref{profile}), is this assumption easily verified for the Enskog kinetic theory (and hence for simpler approximations to it such as the Boltzmann and BGK theories). This linear velocity profile is generated by Lees-Edwards boundary conditions \cite{LeesEdwards}, which are simply periodic boundary conditions in the local Lagrangian frame. For elastic gases, $\zeta ^{(00)}=0$ and the temperature grows in time due to viscous heating and so a steady state is not possible unless an external (artificial) thermostat is introduced\cite{MirimThesisArticle}. However, for inelastic gases, the temperature changes in time due to the competition between two (opposite) mechanisms: on the one hand, viscous (shear) heating and, on the other hand, energy dissipation in collisions.
A steady state occurs when both mechanisms cancel each other, at which point the balance equation for temperature becomes \begin{equation} \frac{2}{Dn_{0}k_{B}}aP_{xy}^{(00)}=-\zeta ^{(00)}T_{0}. \end{equation}% Note that both the pressure tensor and the cooling rate are in general functions of the two control parameters, the shear rate and the coefficient of restitution, and the hydrodynamic variables, the density and the temperature, so that this relation fixes any one of these in terms of the other three: for example, it could be viewed as giving the steady-state temperature as a function of the other variables. At a microscopic level, the one-body distribution for USF will clearly be inhomogeneous since eqs.(\ref{velocity}) and (\ref{profile}) imply that the steady-state distribution must give% \begin{equation} ay\widehat{\mathbf{x}}=\frac{1}{n_{0}}\int \;d\mathbf{v}\mathbf{v}f_{0}(% \mathbf{r},\mathbf{v}). \end{equation}% However, it can be shown, at least up to the Enskog theory\cite{LutskoPolydisperse}, that for the Lees-Edwards boundary conditions, the state of USF possesses a modified translational invariance whereby the steady state distribution, when expressed in terms of the local rest-frame velocities $V_{i}=v_{i}-a_{ij}r_{j}$, does not have any explicit dependence on position. In terms of these variables, and assuming a steady state, the kinetic equation becomes \begin{equation} -aV_{y}\frac{\partial }{\partial V_{x}}f(\mathbf{V})=J\left[ \mathbf{V}|f,f% \right] \;.
\label{2.15} \end{equation}% The solution of this equation has been considered in some detail for the BGK-type models\cite{MirimThesis,MirimThesisArticle,Brey_EarlyKineticModels,Brey_KineticModels}, the Boltzmann equation\cite{SelGoldhirsch}, and the Enskog equation\cite{ChouRichman1,ChouRichman2,LutskoPolydisperse}. \subsection{The model kinetic theory} Here, for simplicity, attention will be restricted to a particularly simple kinetic theory which nevertheless gives realistic results that can be compared to experiment. The kinetic theory used is the kinetic model of Brey, Dufty and Santos\cite{Brey_KineticModels}, which is a relaxation-type model where the operator $J[f,f]$ is approximated as \begin{equation} J[f,f]\rightarrow -\nu ^{\ast }\left( \alpha \right) \nu \left( \psi \right) (f-\phi _{0})+\frac{1}{2}\zeta ^{\ast }\left( \alpha \right) \nu \left( \psi \right) \frac{\partial }{\partial \mathbf{v}}\cdot \left( \mathbf{C}f\right) . \label{BGK} \end{equation}% The right hand side involves the peculiar velocity $\mathbf{C}=\mathbf{v}-% \mathbf{u}=\mathbf{V}-\delta \mathbf{u}$ and the local equilibrium distribution, eq.(\ref{le}). The parameters in this relaxation approximation are taken so as to give agreement with the results from the Boltzmann theory of the homogeneous cooling state as discussed in ref.\cite{Brey_KineticModels}.
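The structure of eq.(\ref{BGK}) can be checked directly: the relaxation term leaves $n$, $\mathbf{u}$ and $T$ unchanged (since $\phi _{0}$ carries the same local fields), while the drag term removes energy at the rate $\zeta ^{\ast }\nu T$. The latter follows from an integration by parts, and the following minimal quadrature sketch confirms it in a single velocity component (the parameter values and function names are illustrative, not from the text):

```python
# Quadrature check (one velocity component): the energy moment of the drag
# term d/dC (C*phi) equals -2 times the energy moment of phi, so the
# (1/2) zeta* nu prefactor in the model removes energy at rate zeta* nu T.
import math

def phi(C, T=1.3, m=1.0, kB=1.0):
    # one-dimensional Maxwellian with zero mean velocity
    return math.sqrt(m / (2.0 * math.pi * kB * T)) * math.exp(-m * C * C / (2.0 * kB * T))

def energy_moment_of_drag(T=1.3, npts=20001, width=10.0, h=1e-5):
    # int C^2 d/dC (C * phi) dC; derivative by central differences,
    # integral by a simple Riemann sum over [-width, width]
    step = 2.0 * width / (npts - 1)
    total = 0.0
    for i in range(npts):
        C = -width + i * step
        d = ((C + h) * phi(C + h, T) - (C - h) * phi(C - h, T)) / (2.0 * h)
        total += C * C * d * step
    return total

# Integration by parts gives exactly -2 <C^2> = -2 kB T / m.
print(energy_moment_of_drag(), -2.0 * 1.3)
```

The ratio $-2$ between the two energy moments is what makes the prefactor $\frac{1}{2}\zeta ^{\ast }\nu $ in eq.(\ref{BGK}) produce the homogeneous cooling rate $\zeta ^{\ast }\nu $.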
Defining the collision rate for elastic hard spheres in the Boltzmann approximation as \begin{equation} \nu \left( \psi \right) =\frac{8\pi ^{\left( D-2\right) /2}}{\left( D+2\right) \Gamma \left( D/2\right) }n\sigma ^{D}\sqrt{\frac{\pi k_{B}T}{% m\sigma ^{2}}}, \end{equation}% the correction for the effect of the inelasticity is chosen to reproduce the Navier-Stokes shear viscosity coefficient of an inelastic gas of hard spheres in the Boltzmann approximation\cite{BreyCubero},\cite{LutskoCE} giving \begin{equation} \nu ^{\ast }\left( \alpha \right) =\frac{1}{4D}\left( 1+\alpha \right) \left( \left( D-1\right) \alpha +D+1\right) . \end{equation}% The second term in eq.(\ref{BGK}) accounts for the collisional cooling and the coefficient is chosen so as to give the same cooling rate for the homogeneous cooling state as the Boltzmann kinetic theory\cite{BreyCubero},% \cite{LutskoCE}, \begin{equation} \zeta ^{\ast }\left( \alpha \right) =\frac{D+2}{4D}\left( 1-\alpha ^{2}\right) . \end{equation}% In this case, the expressions for the pressure tensor, heat-flux vector and cooling rate take particularly simple forms typical of the Boltzmann description\cite{ChapmanCowling}% \begin{eqnarray} P_{ij} &=&m\int d\mathbf{C}\;C_{i}C_{j}f\left( \mathbf{r,C,}t\right) , \label{fluxBGK} \\ q_{i} &=&\frac{1}{2}m\int d\mathbf{C}\;C_{i}C^{2}f\left( \mathbf{r,C,}% t\right) , \notag \end{eqnarray}% while the cooling rate can be calculated directly from eqs.(\ref{BGK}) and (% \ref{heating}) with the result $\zeta (\psi )=\nu \left( \psi \right) \zeta ^{\ast }\left( \alpha \right) $. 
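Both coefficients are simple polynomials in $\alpha $. A minimal sketch (function names ours, not from the text) tabulating them and confirming the elastic limit $\nu ^{\ast }(1)=1$, $\zeta ^{\ast }(1)=0$:

```python
def nu_star(alpha, D=3):
    # nu*(alpha): inelasticity correction to the relaxation rate
    return (1.0 + alpha) * ((D - 1) * alpha + D + 1) / (4.0 * D)

def zeta_star(alpha, D=3):
    # zeta*(alpha): dimensionless cooling rate
    return (D + 2) * (1.0 - alpha ** 2) / (4.0 * D)

# Elastic limit alpha -> 1: collisions conserve energy, so the cooling rate
# vanishes and the relaxation rate reduces to the elastic Boltzmann value.
print(nu_star(1.0), zeta_star(1.0))   # 1.0 0.0
print(nu_star(0.9), zeta_star(0.9))
```

For moderate inelasticity ($\alpha =0.9$, $D=3$) one finds $\nu ^{\ast }\approx 0.918$ and $\zeta ^{\ast }\approx 0.079$, so relaxation is much faster than cooling.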
\subsection{The steady-state} Before proceeding with the Chapman-Enskog solution of the kinetic equation, it is useful to describe the steady state for which the distribution satisfies eq.(\ref{2.15}), which now becomes% \begin{equation} -aV_{y}\frac{\partial }{\partial V_{x}}f(\mathbf{V})=-\nu ^{\ast }\left( \alpha \right) \nu \left( \psi _{0}\right) (f-\phi _{0})+\frac{1}{2}\zeta ^{\ast }\left( \alpha \right) \nu \left( \psi _{0}\right) \frac{\partial }{% \partial \mathbf{V}}\cdot \left( \mathbf{V}f\right) . \label{kss} \end{equation}% The balance equations reduce to \begin{equation} 2aP_{xy}^{ss}=-\zeta ^{\ast }\left( \alpha \right) \nu \left( \psi _{0}\right) Dn_{0}k_{B}T_{0}. \label{ss} \end{equation}% An equation for the pressure tensor is obtained by multiplying eq.(\ref{kss}% ) through by $mV_{i}V_{j}$ and integrating giving% \begin{equation*} aP_{iy}^{ss}\delta _{jx}+aP_{jy}^{ss}\delta _{ix}=-\nu ^{\ast }\left( \alpha \right) \nu \left( \psi _{0}\right) (P_{ij}^{ss}-n_{0}k_{B}T_{0}\delta _{ij})-\zeta ^{\ast }\left( \alpha \right) \nu \left( \psi _{0}\right) P_{ij}^{ss}. \end{equation*}% This set of algebraic equations is easily solved, giving the only non-zero components of the pressure tensor as \begin{eqnarray} P_{ii}^{ss} &=&\frac{\nu ^{\ast }\left( \alpha \right) +\delta _{ix}D\zeta ^{\ast }\left( \alpha \right) }{\nu ^{\ast }\left( \alpha \right) +\zeta ^{\ast }\left( \alpha \right) }n_{0}k_{B}T_{0} \\ P_{xy}^{ss} &=&-\frac{a_{ss}^{\ast }}{\nu ^{\ast }\left( \alpha \right) +\zeta ^{\ast }\left( \alpha \right) }P_{yy}^{ss}, \notag \end{eqnarray}% where $a_{ss}^{\ast }=a_{ss}/\nu \left( \psi _{0}\right) $ satisfies the steady-state condition, eq.(\ref{ss})% \begin{equation} \frac{a_{ss}^{\ast 2}\nu ^{\ast }\left( \alpha \right) }{\left( \nu ^{\ast }\left( \alpha \right) +\zeta ^{\ast }\left( \alpha \right) \right) ^{2}}=% \frac{D}{2}\zeta ^{\ast }\left( \alpha \right) .
\label{balance} \end{equation}% For fixed control parameters, $\alpha $ and $a$, this is a relation constraining the state variables $n_{0}$ and $T_{0}$. The steady-state distribution can be given explicitly, see e.g. \cite{SantosSolveBGK}. \subsection{Zeroth order Chapman-Enskog} Since the only spatially varying reference field is the velocity and since it is linear in the spatial coordinate, the zeroth-order kinetic equation, eq.(\ref{zero-KE}), becomes% \begin{equation} \partial _{t}^{(0)}f^{(0)}+\mathbf{v}\cdot \left( \mathbf{\nabla }% ^{(0)}u_{0i}\right) \frac{\partial }{\partial u_{0i}}f^{(0)}=-\nu ^{\ast }\left( \alpha \right) \nu \left( \psi \right) (f^{\left( 0\right) }-\phi _{0})+\frac{1}{2}\zeta ^{\ast }\left( \alpha \right) \nu \left( \psi \right) \frac{\partial }{\partial \mathbf{v}}\cdot \left( \mathbf{C}f^{(0)}\right) . \label{f00} \end{equation}% or, writing this in terms of the peculiar velocity, \begin{equation} \partial _{t}^{(0)}f^{(0)}+v_{y}\partial _{y}^{\left( 0\right) }f^{(0)}-av_{y}\frac{% \partial }{\partial C_{x}}f^{(0)}=-\nu ^{\ast }\left( \alpha \right) \nu \left( \psi \right) (f^{(0)}-\phi _{0})+% \frac{1}{2}\zeta ^{\ast }\left( \alpha \right) \nu \left( \psi \right) \frac{% \partial }{\partial \mathbf{v}}\cdot \left( \mathbf{C}f^{(0)}\right) . \end{equation}% Here, the second term on the left accounts for any explicit dependence of the distribution on the coordinate $y$, aside from the implicit dependence coming from $\mathbf{C}$. Since it is a zero-order derivative, it does not act on the deviations $\delta \psi $. In terms of the peculiar velocity, this becomes% \begin{equation} \partial _{t}^{(0)}f^{(0)}+\left( C_{y}+\delta u_{y}\right) \partial _{y}^{\left( 0\right) }f^{(0)}-aC_{y}\frac{% \partial }{\partial C_{x}}f^{(0)}-a\delta u_{y}% \frac{\partial }{\partial C_{x}}f^{(0)}=-\nu ^{\ast }\left( \alpha \right) \nu \left( \psi \right) (f^{(0)}-\phi _{0})+\frac{1}{2}\zeta ^{\ast }\left( \alpha \right) \nu \left( \psi \right) \frac{\partial }{\partial \mathbf{C}}\cdot \left( \mathbf{C}f^{(0)}\right) .
\label{f001} \end{equation}% The first term on the left is evaluated using eq.(\ref{zero-t}) and the zeroth order balance equations{% \begin{eqnarray} {\partial _{t}^{(0)}n} &=&0\; \label{T0} \\ {\partial _{t}^{(0)}u}_{i}+a\delta u_{y}\delta _{ix} &=&0 \notag \\ {\partial _{t}^{(0)}T}+\frac{2}{Dnk_{B}}aP_{xy}^{(0)} &=&-\zeta ^{\ast }\left( \alpha \right) \nu \left( \psi \right) T\;, \notag \end{eqnarray}% and the assumption of normality}% \begin{equation*} \partial _{t}^{(0)}f^{(0)}=\left( \partial _{t}^{(0)}\delta n\right) \left( \frac{\partial }{\partial \delta n}f^{(0)}\right) +\left( \partial _{t}^{(0)}\delta T\right) \left( \frac{\partial }{\partial \delta T}% f^{(0)}\right) +\left( \partial _{t}^{(0)}\delta u_{i}\right) \left( \frac{% \partial }{\partial \delta u_{i}}f^{(0)}\right) \end{equation*}% {to give}% \begin{eqnarray} &&\left( -\zeta ^{\ast }\left( \alpha \right) \nu \left( \psi \right) T-% \frac{2}{Dnk_{B}}aP_{xy}^{(0)}\right) \frac{\partial }{\partial T}% f^{(0)}-aC_{y}\frac{\partial }{\partial C_{x}}f^{(0)}-a\delta u_{y}\left( \frac{\partial }{\partial C_{x}}f^{(0)}+\frac{\partial }{\partial \delta u_{x}}f^{(0)}\right) \label{f0} \\ &=&-\nu ^{\ast }\left( \alpha \right) \nu \left( \psi \right) (f^{(0)}-\phi _{0})+\frac{1}{2}\zeta ^{\ast }\left( \alpha \right) \nu \left( \psi \right) \frac{\partial }{\partial \mathbf{C}}\cdot \left( \mathbf{C}f^{(0)}\right) , \notag \end{eqnarray}% where the temperature derivative is understood to be evaluated at constant density. Here, the second term on the left in eq.(\ref{f001}) has been dropped as neither eq.(\ref{f001}) nor the balance equations contain explicit reference to the velocity field $u_{0}$, and so no explicit dependence on the coordinate $y$ , thus justifying the assumption that such dependence does not occur in $f^{\left( 0\right) }$. 
One can also assume that $f^{\left( 0\right) }$ depends on $\delta u_{i}$ only through the peculiar velocity, since in that case the term proportional to $\delta u_{y}$% vanishes as well and there is no other explicit dependence on $\delta u_{y}$. Equation (\ref{f0}) is closed once the pressure tensor is specified. Since the primary goal here is to develop the transport equations for deviations from the reference state, attention will be focused on the determination of the pressure tensor and the heat flux vector. It is a feature of the simple kinetic model used here that these can be calculated without determining the explicit form of the distribution. \subsubsection{The zeroth-order pressure tensor} An equation for the pressure tensor can be obtained by multiplying this equation through by $mC_{i}C_{j}$ and integrating over velocities. Using the definition given in eq.(\ref{fluxBGK}), \begin{equation} \left( -\zeta ^{\ast }\left( \alpha \right) \nu \left( \psi \right) T-\frac{2% }{Dnk_{B}}aP_{xy}^{(0)}\right) \frac{\partial }{\partial T}% P_{ij}^{(0)}+a\delta _{ix}P_{jy}^{(0)}+a\delta _{jx}P_{iy}^{(0)}=-\nu ^{\ast }\left( \alpha \right) \nu \left( \psi \right) (P_{ij}^{(0)}-\delta _{ij}nk_{B}T)-\zeta ^{\ast }\left( \alpha \right) \nu \left( \psi \right) P_{ij}^{(0)}, \label{p0} \end{equation}% and of course there is the constraint that by definition $\mathrm{Tr}\left( \mathsf{P}% \right) =Dnk_{B}T$. It is interesting to observe that eqs.(\ref{T0})-(\ref% {p0}) are identical with their steady-state counterparts when the steady-state condition, $\zeta ^{(0)}T=-\frac{2}{Dnk_{B}}aP_{xy}^{(0)}$, is fulfilled. However, here the solution of these equations is needed for arbitrary values of $\delta T$, $\delta n$ and $\delta \mathbf{u}$. Another point of interest is that these equations are local in the deviations $% \delta \psi $ so that they are exactly the same equations as those describing spatially homogeneous deviations from the reference state.
As mentioned above, this is the meaning of the zeroth-order solution to the Chapman-Enskog expansion: it is the exact solution to the problem of uniform deviations from the reference state. It is this exact solution which is ``localized'' to give the zeroth-order Chapman-Enskog approximation and not the reference distribution, $f_{0}$, except in the rare cases, such as equilibrium, when they coincide. To complete the specification of the distribution, eqs. (\ref{f0}) and (\ref% {p0}) must be supplemented by boundary conditions. The relevant dimensionless quantity characterizing the strength of the nonequilibrium state is the dimensionless shear rate defined as \begin{equation} a^{\ast }\equiv a/\nu =a\frac{\left( D+2\right) \Gamma \left( D/2\right) }{% 8\pi ^{\left( D-1\right) /2}n\sigma ^{D}}\sqrt{\frac{m\sigma ^{2}}{k_{B}T}}. \end{equation}% It is clear that for a uniform system, the dimensionless shear rate becomes smaller as the temperature rises so that we expect that in the limit of infinite temperature, the system will behave as an inelastic gas without any shear, i.e., in the homogeneous cooling state, giving the boundary condition% \begin{equation} \lim_{T\rightarrow \infty }\frac{1}{nk_{B}T}P_{ij}=\delta _{ij}, \end{equation}% and in this limit, the distribution must go to the homogeneous cooling state distribution.
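As a consistency check, the closed form above is just $a/\nu $ with the collision frequency $\nu \left( \psi \right) $ defined earlier. A short numerical sketch (parameter values are illustrative, not from the text):

```python
# Check that a* = a/nu reproduces the closed-form expression for the
# dimensionless shear rate, using the Boltzmann collision frequency nu(psi).
import math

def nu(n, T, sigma=1.0, m=1.0, kB=1.0, D=3):
    # Boltzmann collision frequency for elastic hard spheres, as defined above
    return (8.0 * math.pi ** ((D - 2) / 2.0)
            / ((D + 2) * math.gamma(D / 2.0))
            * n * sigma ** D
            * math.sqrt(math.pi * kB * T / (m * sigma ** 2)))

def a_star_closed(a, n, T, sigma=1.0, m=1.0, kB=1.0, D=3):
    # closed-form expression for a* quoted in the text
    return (a * (D + 2) * math.gamma(D / 2.0)
            / (8.0 * math.pi ** ((D - 1) / 2.0) * n * sigma ** D)
            * math.sqrt(m * sigma ** 2 / (kB * T)))

a, n_dens, T = 0.5, 0.02, 2.0   # illustrative values
print(a / nu(n_dens, T), a_star_closed(a, n_dens, T))
```

The second assertion below also illustrates the remark in the text that $a^{\ast }$ decreases as the temperature rises.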
These boundary conditions can be implemented equivalently by rewriting eqs.(\ref{f0}) and (\ref{p0}) in terms of the inverse temperature, or more physically the variable $a^{\ast }$, and the dimensionless pressure tensor $% P_{ij}^{(\ast )}=\frac{1}{nk_{B}T}P_{ij}^{(0)}$ giving% \begin{equation} \left( \frac{1}{2}\zeta ^{\ast }\left( \alpha \right) +\frac{1}{D}a^{\ast }P_{xy}^{(\ast )}\right) a^{\ast }\frac{\partial }{\partial a^{\ast }}% P_{ij}^{(\ast )}=\frac{2}{D}a^{\ast }P_{xy}^{(\ast )}P_{ij}^{(\ast )}-a^{\ast }\delta _{ix}P_{jy}^{(\ast )}-a^{\ast }\delta _{jx}P_{iy}^{(\ast )}-\nu ^{\ast }\left( \alpha \right) (P_{ij}^{(\ast )}-\delta _{ij}) \label{P0-a} \end{equation}% and writing $f^{(0)}\left( \mathbf{C};\psi \right) =n\left( \frac{m}{2\pi k_{B}T}\right) ^{D/2}g\left( \sqrt{\frac{m}{k_{B}T}}\mathbf{C};a^{\ast }\right) $ \begin{eqnarray} &&\left( \zeta ^{\ast }\left( \alpha \right) +\frac{2}{D}a^{\ast }P_{xy}^{(\ast )}\right) a^{\ast }\frac{\partial }{\partial a^{\ast }}g+% \frac{1}{D}a^{\ast }P_{xy}^{(\ast )}C_{i}\frac{\partial }{\partial C_{i}}% g+a^{\ast }P_{xy}^{(\ast )}g-a^{\ast }C_{y}\frac{\partial }{\partial C_{x}}g \label{P00} \\ &=&-\nu ^{\ast }\left( \alpha \right) \left( g-\exp \left( -mC^{2}/2k_{B}T\right) \right) , \notag \end{eqnarray}% with boundary condition $\lim_{a^{\ast }\rightarrow 0}P_{ij}^{\left( \ast \right) }=\delta _{ij}$ and $\lim_{a^{\ast }\rightarrow 0}g=\exp \left( -mC^{2}/2k_{B}T\right) $.
For practical calculations, it is more convenient to introduce a fictitious time variable, $s$, and to express these equations as \begin{eqnarray} \frac{da^{\ast }}{ds} &=&\frac{1}{2}a^{\ast }\zeta ^{\ast }\left( \alpha \right) +\frac{1}{D}a^{\ast 2}P_{xy}^{(\ast )} \label{ss-hi} \\ \frac{\partial }{\partial s}P_{ij}^{(\ast )} &=&\frac{2}{D}a^{\ast }P_{xy}^{(\ast )}P_{ij}^{(\ast )}-a^{\ast }\delta _{ix}P_{jy}^{(\ast )}-a^{\ast }\delta _{jx}P_{iy}^{(\ast )}-\nu ^{\ast }\left( \alpha \right) (P_{ij}^{(\ast )}-\delta _{ij}) \notag \end{eqnarray}% where the boundary condition is then $P_{ij}^{\left( \ast \right) }\left( s=0\right) =\delta _{ij}$ and $a^{\ast }\left( s=0\right) =0$. The distribution then satisfies% \begin{equation} \frac{\partial }{\partial s}g=-\frac{1}{D}a^{\ast }P_{xy}^{(\ast )}C_{i}% \frac{\partial }{\partial C_{i}}g-a^{\ast }P_{xy}^{(\ast )}g+a^{\ast }C_{y}% \frac{\partial }{\partial C_{x}}g-\nu ^{\ast }\left( \alpha \right) \left( g-\exp \left( -mC^{2}/2k_{B}T\right) \right) \label{ss-hif} \end{equation}% with $\lim_{s\rightarrow 0}g=\exp \left( -mC^{2}/2k_{B}T\right) $. These are to be solved simultaneously to give $P_{ij}^{\left( \ast \right) }\left( s\right) ,a^{\ast }\left( s\right) $ and $f^{\left( 0\right) }\left( s\right) $ from which the desired curves $P_{ij}^{\left( \ast \right) }\left( a^{\ast }\right) $ and $f^{\left( 0\right) }\left( a^{\ast }\right) $ are obtained. Physically, if the gas starts at a very high temperature, it would be expected to cool until it reached the steady state. It is easy to see that the right hand sides of eqs.(\ref{ss-hi}) do in fact vanish in the steady state so that the steady state represents a critical point of this system of differential equations\cite{Nicolis}. In order to fully specify the curve $% P_{ij}\left( T\right) $ and the distribution $f^{(0)}$ it is necessary to integrate as well from a temperature below the steady state temperature.
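The fictitious-time system is straightforward to integrate numerically. The sketch below (RK4, $D=3$, $\alpha =0.9$) flows from near the homogeneous cooling state to the steady state and checks the fixed point against the closed-form steady-state solution quoted in the previous subsection. Since $a^{\ast }=0$ is itself a critical point, the integration is seeded with a small positive $a^{\ast }$; that seed, the step size and the parameter values are implementation choices, not from the text:

```python
# Integrate the fictitious-time equations for a* and the reduced pressure
# tensor (D = 3, alpha = 0.9) and compare the attracting fixed point with
# the closed-form steady-state pressure tensor.
import math

D_dim, alpha = 3, 0.9
nus = (1 + alpha) * ((D_dim - 1) * alpha + D_dim + 1) / (4 * D_dim)  # nu*(alpha)
zes = (D_dim + 2) * (1 - alpha ** 2) / (4 * D_dim)                   # zeta*(alpha)

def rhs(y):
    # state y = (a*, P*_xx, P*_yy, P*_xy); P*_zz = P*_yy by symmetry
    a, pxx, pyy, pxy = y
    da = 0.5 * a * zes + a * a * pxy / D_dim
    dxx = (2.0 / D_dim) * a * pxy * pxx - 2.0 * a * pxy - nus * (pxx - 1.0)
    dyy = (2.0 / D_dim) * a * pxy * pyy - nus * (pyy - 1.0)
    dxy = (2.0 / D_dim) * a * pxy * pxy - a * pyy - nus * pxy
    return (da, dxx, dyy, dxy)

def rk4(y, h, steps):
    # classical fourth-order Runge-Kutta
    for _ in range(steps):
        k1 = rhs(y)
        k2 = rhs(tuple(y[i] + 0.5 * h * k1[i] for i in range(4)))
        k3 = rhs(tuple(y[i] + 0.5 * h * k2[i] for i in range(4)))
        k4 = rhs(tuple(y[i] + h * k3[i] for i in range(4)))
        y = tuple(y[i] + h / 6.0 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
                  for i in range(4))
    return y

# Seed a* slightly above the unstable a* = 0 (HCS) critical point.
a_fin, pxx, pyy, pxy = rk4((1e-3, 1.0, 1.0, 0.0), 0.02, 60000)

# Closed-form steady state from the algebraic solution quoted earlier.
a_ss = (nus + zes) * math.sqrt(D_dim * zes / (2.0 * nus))
pyy_ss = nus / (nus + zes)
pxx_ss = (nus + D_dim * zes) / (nus + zes)
pxy_ss = -a_ss * pyy_ss / (nus + zes)
print(a_fin, a_ss)
print(pxx, pyy, pxy)
```

For these parameter values the trajectory converges from the high-temperature side onto the critical point, with normal stresses $P_{xx}^{\ast }>1>P_{yy}^{\ast }$ and a negative shear stress, and the trace condition $P_{xx}^{\ast }+2P_{yy}^{\ast }=3$ is preserved along the flow.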
Clearly, in the case of \emph{zero} temperature, one expects that the pressure tensor goes to zero since this corresponds to the physical situation in which the atoms stream at exactly the velocities predicted by their positions and the macroscopic flow field. (Note that if the atoms have finite size, this could still lead to collisions. However, the BGK kinetic theory used here is properly understood as an approximation to the Boltzmann theory appropriate for a low density gas in which the finite size of the grains is of no importance.) Thus, the expectation is that the zero-temperature limit will give% \begin{equation} \lim_{T\rightarrow 0}P_{ij}^{(0)}=0. \end{equation}% Then, in terms of a fictitious time parameter, one has \begin{eqnarray} \frac{dT}{ds} &=&-\zeta ^{\ast }\left( \alpha \right) \nu \left( \psi \right) T-\frac{2}{D}aTP_{xy}^{(\ast )} \label{s-low} \\ \frac{\partial }{\partial s}P_{ij}^{(\ast )} &=&a\frac{2}{D}P_{xy}^{(\ast )}P_{ij}^{(\ast )}-a\delta _{ix}P_{jy}^{(\ast )}-a\delta _{jx}P_{iy}^{(\ast )}-\nu ^{\ast }\left( \alpha \right) \nu \left( \psi \right) (P_{ij}^{(\ast )}-\delta _{ij}) \notag \end{eqnarray}% and for the distribution% \begin{equation} \frac{\partial }{\partial s}f^{(0)}=aC_{y}\frac{\partial }{\partial C_{x}}% f^{(0)}-\nu ^{\ast }\left( \alpha \right) \nu \left( \psi \right) (f^{(0)}-\phi _{0})+\frac{1}{2}\zeta ^{\ast }\left( \alpha \right) \nu \left( \psi \right) \frac{\partial }{\partial \mathbf{C}}\cdot \left( \mathbf{C}f^{(0)}\right) . \label{s-low1} \end{equation}% A final point is that the solution of these equations requires more than the boundary condition $P_{ij}^{(0)}\left( s=0\right) =0$ since evaluation of the right hand side of eq.(\ref{s-low}) requires a statement about $% P_{ij}^{(\ast )}\left( s=0\right) $ as well.
A straightforward series solution of eq.(\ref{p0}) in the vicinity of $T=0$ gives $P_{xy}^{\ast }\sim a^{\ast -1/3}$ and $P_{ii}^{\ast }\sim a^{\ast -2/3}$ so that the correct boundary condition here is $P_{ij}^{(\ast )}\left( s=0\right) =0$. The solution of these equations can then be performed as discussed in ref.% \cite{SantosInherentRheology} with the boundary conditions given here. It will also prove useful below to know the behavior of the pressure tensor near the steady-state. This is obtained by making a series solution to eq.(% \ref{P0-a}) in the variable $\left( a^{\ast }-a_{ss}^{\ast }\right) $ where $% a_{ss}^{\ast }$ is the reduced shear in the steady-state. Details are given in Appendix \ref{AppP} and the result is that% \begin{equation} P_{ij}^{\left( 0\right) }=P_{ij}^{ss}\left( 1+A_{ij}^{\ast }\left( \alpha \right) \left( \frac{a^{\ast }}{a_{ss}^{\ast }}-1\right) +...\right) , \label{Pss} \end{equation}% with the coefficients% \begin{eqnarray} A_{xy}^{\ast }\left( \alpha \right) &=&-2\frac{\Delta \left( \alpha \right) +\zeta ^{\ast }\left( \alpha \right) }{\zeta ^{\ast }\left( \alpha \right) } \label{Pss-A} \\ \left( 1-\delta _{ix}\right) A_{ii}^{\ast }\left( \alpha \right) &=&-2\left( \frac{\nu ^{\ast }\left( \alpha \right) +\zeta ^{\ast }\left( \alpha \right) }{\Delta \left( \alpha \right) +\nu ^{\ast }\left( \alpha \right) +\frac{1}{2}\zeta ^{\ast }\left( \alpha \right) }\right) \left( 1-\delta _{ix}\right) \notag \\ A_{xx}^{\ast }\left( \alpha \right) &=&-2D\frac{\left( \Delta \left( \alpha \right) +\frac{1}{D}\nu ^{\ast }\left( \alpha \right) +\frac{1}{2}\zeta ^{\ast }\left( \alpha \right) \right) \left( \nu ^{\ast }\left( \alpha \right) +\zeta ^{\ast }\left( \alpha \right) \right) }{\left( \Delta \left( \alpha \right) +\nu ^{\ast }\left( \alpha \right) +\frac{1}{2}\zeta ^{\ast }\left( \alpha \right) \right) \left( \nu ^{\ast }\left( \alpha \right) +D\zeta ^{\ast }\left( \alpha \right) \right) }, \notag \end{eqnarray}% where $\Delta \left(
\alpha \right) $ is the real root of \begin{equation} 4\Delta ^{3}+8\left( \nu ^{\ast }\left( \alpha \right) +\zeta ^{\ast }\left( \alpha \right) \right) \Delta ^{2}+\left( 4\nu ^{\ast 2}\left( \alpha \right) +14\nu ^{\ast }\left( \alpha \right) \zeta ^{\ast }\left( \alpha \right) +7\zeta ^{\ast 2}\left( \alpha \right) \right) \Delta +\zeta ^{\ast }\left( \alpha \right) \left( 2\nu ^{\ast 2}\left( \alpha \right) -\nu ^{\ast }\left( \alpha \right) \zeta ^{\ast }\left( \alpha \right) -2\zeta ^{\ast 2}\left( \alpha \right) \right) =0. \label{Pss-d1} \end{equation} \subsubsection{Higher order moments:\ the zeroth-order heat flux vector} Determination of the heat flux vector requires consideration of the full tensor of third order moments. Since fourth order moments will also be needed later, it is easiest to consider the equations for the general $N$-th order moment, defined as \begin{equation} M_{i_{1}...i_{N}}^{(0)}\left( \mathbf{r,}t\right) =m\int d\mathbf{v}% \;C_{i_{1}}...C_{i_{N}}f^{\left( 0\right) }\left( \mathbf{r,C,}t\right) . \end{equation}% To simplify the equations, a more compact notation will be used for the indices whereby a collection of numbered indices, such as $i_{1}...i_{N}$, will be written more compactly as $I_{N}$ so that capital letters denote collections of indices and the subscript on the capital indicates the number of indices in the collection. Some examples of this are% \begin{eqnarray} M_{I_{N}}^{(0)} &=&M_{i_{1}...i_{N}}^{(0)} \\ M_{I_{2}}^{(0)} &=&M_{i_{1}i_{2}}^{(0)} \notag \\ M_{I_{2}y}^{(0)} &=&M_{i_{1}i_{2}y}^{(0)}. \notag \end{eqnarray} In terms of the general moments, the heat flux vector is \begin{equation} q_{i}^{\left( 0\right) }\left( \mathbf{r,}t\right) =\frac{1}{2}% \sum_{j}M_{ijj}^{(0)}\left( \mathbf{r,}t\right) =\frac{1}{2}% M_{ijj}^{(0)}\left( \mathbf{r,}t\right) , \end{equation}% where the second equality introduces the Einstein summation convention whereby repeated indices are summed.
The pressure tensor is just the second moment $P_{ij}^{\left( 0\right) }=M_{ij}^{\left( 0\right) }$. The local equilibrium moments are easily shown to be zero for odd $N$ while the result for even $N$ is% \begin{equation} M_{I_{N}}^{\left( le\right) }=mn\left( \frac{2k_{B}T}{m}\right) ^{\frac{N}{2}% }2^{\frac{N}{2}}\frac{\Gamma \left( \frac{N+1}{2}\right) \Gamma \left( \frac{% N+2}{2}\right) }{\sqrt{\pi }\Gamma \left( N+1\right) }\mathcal{P}% _{I_{N}}\delta _{i_{1}i_{2}}\delta _{i_{3}i_{4}}...\delta _{i_{N-1}i_{N}} \end{equation}% where the operator $\mathcal{P}_{ijk...}$ indicates the sum over distinct permutations of the indices $ijk...$ and has no effect on any other indices. (e.g., $\mathcal{P}_{I_{4}}\delta _{i_{1}i_{2}}\delta _{i_{3}i_{4}}=\delta _{i_{1}i_{2}}\delta _{i_{3}i_{4}}+\delta _{i_{1}i_{3}}\delta _{i_{2}i_{4}}+\delta _{i_{1}i_{4}}\delta _{i_{2}i_{3}}$). An equation for the general $N$-th order moment can be obtained from eq.(\ref{f00}) with the result% \begin{equation} \left( -\zeta ^{\ast }\left( \alpha \right) -\frac{2}{D}a^{\ast }P_{xy}^{(\ast )}\right) T\frac{\partial }{\partial T}M_{I_{N}}^{(0)}+\left( \nu ^{\ast }\left( \alpha \right) +\frac{N}{2}\zeta ^{\ast }\left( \alpha \right) \right) M_{I_{N}}^{(0)}+a^{\ast }\mathcal{P}_{I_{N}}\delta _{xi_{N}}M_{I_{N-1}y}^{(0)}=\nu ^{\ast }\left( \alpha \right) M_{I_{N}}^{(le)}. \end{equation}% Writing $M_{I_{N}}^{(0)}=mn\left( \frac{2k_{B}T}{m}\right) ^{\frac{N}{2}% }M_{I_{N}}^{\ast }$ gives% \begin{equation} -\left( \zeta ^{\ast }\left( \alpha \right) +\frac{2}{D}a^{\ast }P_{xy}^{(\ast )}\right) T\frac{\partial }{\partial T}M_{I_{N}}^{\ast }+\left( \nu ^{\ast }\left( \alpha \right) -\frac{N}{D}a^{\ast }P_{xy}^{(\ast )}\right) M_{I_{N}}^{\ast }+a^{\ast }\mathcal{P}% _{I_{N}}\delta _{xi_{N}}M_{I_{N-1}y}^{\ast }=\nu ^{\ast }\left( \alpha \right) M_{I_{N}}^{(le\ast )}. \label{Moments1} \end{equation}% Notice that the moments are completely decoupled order by order in $N$.
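For reference, the even local-equilibrium moments are pure Gaussian (Wick) moments: per velocity component, $\left\langle C^{2m}\right\rangle _{le}=(2m-1)!!\left( k_{B}T/m\right) ^{m}$, and all odd moments vanish by symmetry. A quadrature sketch of this standard fact (the variance $1.7$ stands in for $k_{B}T/m$ and is an arbitrary illustrative value):

```python
# Verify the Gaussian velocity moments <C^(2m)> = (2m-1)!! * var^m, and the
# vanishing of the odd moments, by trapezoidal quadrature.
import math

def gauss_moment(p, var=1.7, half_width=12.0, npts=120001):
    # trapezoidal quadrature of int x^p N(0, var; x) dx over +/- 12 sigma
    s = math.sqrt(var)
    h = 2.0 * half_width * s / (npts - 1)
    norm = 1.0 / math.sqrt(2.0 * math.pi * var)
    total = 0.0
    for i in range(npts):
        x = -half_width * s + i * h
        w = 0.5 if i in (0, npts - 1) else 1.0
        total += w * (x ** p) * math.exp(-x * x / (2.0 * var))
    return total * h * norm

def double_factorial(n):
    out = 1
    while n > 1:
        out *= n
        n -= 2
    return out

for m_idx in (1, 2, 3):
    exact = double_factorial(2 * m_idx - 1) * 1.7 ** m_idx
    print(2 * m_idx, gauss_moment(2 * m_idx), exact)
```

The full tensor moments follow from products of such pairings, which is exactly the content of the permutation operator $\mathcal{P}_{I_{N}}$ acting on the string of Kronecker deltas.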
Since the source on the right vanishes for odd $N$ it is natural to assume that $M_{I_{N}}^{\ast }=0$ for odd $N$. This is certainly true for temperatures above the steady state temperature since the appropriate boundary condition in this case, based on the discussion above, is that $% \lim_{T\rightarrow \infty }M_{I_{N}}^{\ast }=M_{I_{N}}^{(le\ast )}=0$. In the opposite limit, $T\rightarrow 0$, as mentioned above, one has that $% P_{xy}^{\ast }\sim a^{\ast -1/3}\sim T^{1/6}$ and there are two cases to consider depending on whether or not the third term on the left contributes. If it does, i.e. if one or more indices is equal to $x$, then a series solution near $T=0$ gives $M_{I_{N}}^{\ast }\sim a^{\ast -1}\sim T^{1/2}$ while if no index is equal to $x$ then $M_{I_{N}}^{\ast }\sim a^{\ast -2/3}\sim T^{1/3}$ giving in both cases the boundary condition $% \lim_{T\rightarrow 0}M_{I_{N}}^{\ast }=0$. In particular, this shows that the odd moments vanish for all temperatures. From this, it immediately follows that \begin{equation} q_{i}^{\left( 0\right) }\left( \mathbf{r,}t\right) =0. 
\end{equation} \subsection{First-order Chapman-Enskog: General formalism} The equation for the first-order distribution, eq.(\ref{f1}), becomes% \begin{equation} \partial _{t}^{(0)}f^{(1)}+av_{y}\frac{\partial }{\partial u_{0x}}% f^{(1)}=-\nu \left( \psi \right) f^{\left( 1\right) }+\frac{1}{2}\zeta ^{\ast }\left( \alpha \right) \nu \left( \psi \right) \frac{\partial }{% \partial \mathbf{v}}\cdot \left( \mathbf{C}f^{(1)}\right) -\left( \partial _{t}^{(1)}f^{(0)}+\mathbf{v}\cdot \mathbf{\nabla }_{1}f^{(0)}\right) , \end{equation}% and the operator $\partial _{t}^{(1)}$ is defined via the corresponding balance equations which are now {% \begin{eqnarray} {\partial _{t}^{(1)}\delta n+\mathbf{u}\cdot \mathbf{\nabla }}\delta n+n{% \mathbf{\nabla }}\cdot \delta \mathbf{u} &=&0\; \\ {\partial _{t}^{(1)}\delta u_{i}+\mathbf{u}\cdot \mathbf{\nabla }\delta }% u_{i}+(mn)^{-1}\partial _{j}^{\left( 1\right) }P_{ij}^{\left( 0\right) }+(mn)^{-1}\partial _{y}^{\left( 0\right) }P_{iy}^{\left( 1\right) } &=&0 \notag \\ {\partial _{t}^{(1)}\delta T+\mathbf{u}\cdot \mathbf{\nabla }\delta }T+\frac{% 2}{Dnk_{B}}\left( P_{ij}^{\left( 0\right) }\nabla _{j}\delta u_{i}+{\mathbf{% \nabla }}^{\left( 0\right) }\cdot \mathbf{q}^{(1)}+aP_{xy}^{\left( 1\right) }\right) &=&0. 
\notag \end{eqnarray}% }\newline Writing the kinetic equation in the form% \begin{eqnarray} &&\partial _{t}^{(0)}f^{(1)}+a\frac{\partial }{\partial u_{0x}}% v_{y}f^{(1)}+\nu ^{\ast }\left( \alpha \right) \nu \left( \psi \right) f^{\left( 1\right) }-\frac{1}{2}\zeta ^{\ast }\left( \alpha \right) \nu \left( \psi \right) \frac{\partial }{\partial \mathbf{v}}\cdot \left( \mathbf{C}f^{(1)}\right) \\ &=&-\left( \partial _{t}^{(1)}n+u_{l}\partial _{l}^{1}n\right) \frac{% \partial }{\partial n}f^{(0)}-\left( \partial _{t}^{(1)}T+u_{l}\partial _{l}^{1}T\right) \frac{\partial }{\partial T}f^{(0)}-\left( \partial _{t}^{(1)}\delta u_{j}+u_{l}\partial _{l}^{1}\delta u_{j}\right) \frac{% \partial }{\partial \delta u_{j}}f^{(0)} \notag \\ &&-\left( \partial _{l}^{1}u_{l}\right) f^{(0)}-\partial _{l}^{1}C_{l}f^{(0)} \notag \end{eqnarray}% equations for the $N$-th moment can be obtained by multiplying through by $% C_{i_{1}}...C_{i_{N}}$ and integrating over velocity. The first two terms on the left contribute \begin{eqnarray} \int C_{i_{1}}...C_{i_{N}}\left( \partial _{t}^{(0)}f^{(1)}+a\frac{\partial }{\partial u_{0x}}v_{y}f^{(1)}\right) d\mathbf{v} &=&\partial _{t}^{(0)}M_{I_{N}}^{\left( 1\right) }\mathbf{+}\mathcal{P}_{I_{N}}\left( \partial _{t}^{\left( 0\right) }\delta u_{i_{N}}\right) M_{I_{N-1}}^{\left( 1\right) } \\ &&+a\frac{\partial }{\partial u_{0x}}\left( M_{I_{N}y}^{\left( 1\right) }+\delta u_{y}M_{I_{N}}^{\left( 1\right) }\right) \mathbf{+}a\mathcal{P}% _{I_{N}}\delta _{xi_{N}}\left( M_{I_{N-1}y}^{\left( 1\right) }+\delta u_{y}M_{I_{N-1}}^{\left( 1\right) }\right) \notag \\ &=&\partial _{t}^{(0)}M_{I_{N}}^{\left( 1\right) }+a\frac{\partial }{% \partial u_{0x}}\left( M_{I_{N}y}^{\left( 1\right) }+\delta u_{y}M_{I_{N}}^{\left( 1\right) }\right) \mathbf{+}a\mathcal{P}% _{I_{N}}\delta _{xi_{N}}M_{I_{N-1}y}^{\left( 1\right) } \notag \end{eqnarray}% where the last line follows from using the zeroth order balance equation $% \partial _{t}^{\left( 0\right) }\delta 
u_{i_{N}}=-a\delta _{i_{N}x}\delta u_{y}$. The evaluation of the right hand side is straightforward with the only difficult term being% \begin{equation} \int C_{i_{1}}...C_{i_{N}}\left( \frac{\partial }{\partial \delta u_{j}}% f^{(0)}\right) d\mathbf{v=}\frac{\partial }{\partial \delta u_{j}}% M_{I_{N}}^{\left( 0\right) }\mathbf{+}\mathcal{P}_{I_{N}}\delta _{i_{N}j}M_{I_{N-1}}^{\left( 0\right) }, \end{equation}% and from eq.(\ref{Moments1}) it is clear that $M_{I_{N}}^{\left( 0\right) }$ is independent of $\delta u_{j}$ so that the first term on the right vanishes. Thus% \begin{eqnarray} &&\partial _{t}^{(0)}M_{I_{N}}^{\left( 1\right) }+a\frac{\partial }{\partial u_{0x}}\left( M_{I_{N}y}^{\left( 1\right) }+\delta u_{y}M_{I_{N}}^{\left( 1\right) }\right) \mathbf{+}a\mathcal{P}_{I_{N}}\delta _{xi_{N}}M_{I_{N-1}y}^{\left( 1\right) }+\left( \nu ^{\ast }\left( \alpha \right) +\frac{N}{2}\zeta ^{\ast }\left( \alpha \right) \right) \nu \left( \psi \right) M_{I_{N}}^{\left( 1\right) } \label{moments} \\ &=&-\left( \partial _{t}^{(1)}n+u_{l}\partial _{l}^{1}n\right) \frac{% \partial }{\partial n}M_{I_{N}}^{\left( 0\right) }-\left( \partial _{t}^{(1)}T+u_{l}\partial _{l}^{1}T\right) \frac{\partial }{\partial T}% M_{I_{N}}^{\left( 0\right) }-\left( \partial _{t}^{(1)}\delta u_{j}+u_{l}\partial _{l}^{1}\delta u_{j}\right) \mathcal{P}_{I_{N}}\delta _{i_{N}j}M_{I_{N-1}}^{\left( 0\right) } \notag \\ &&-\left( \partial _{l}^{1}u_{l}\right) M_{I_{N}}^{\left( 0\right) }-% \mathcal{P}_{I_{N}}\left( \partial _{l}^{1}u_{i_{N}}\right) M_{I_{N-1}l}^{\left( 0\right) }-\partial _{l}^{1}M_{I_{N}l}^{\left( 0\right) } \notag \end{eqnarray}% Superficially, it appears that the right hand side depends explicitly on the reference field, since $u_{l}=u_{0l}+\delta u_{l}$, which would in turn generate an explicit dependence of the moments on the $y$-coordinate.
However, when the balance equations are used to eliminate $\partial _{t}^{(1)}$ this becomes% \begin{eqnarray} &&\partial _{t}^{(0)}M_{I_{N}}^{\left( 1\right) }+a\frac{\partial }{\partial u_{0x}}\left( M_{I_{N}y}^{\left( 1\right) }+\delta u_{y}M_{I_{N}}^{\left( 1\right) }\right) +a\mathcal{P}_{I_{N}}\delta _{xi_{N}}M_{I_{N-1}y}^{\left( 1\right) }+\left( \nu ^{\ast }\left( \alpha \right) +\frac{N}{2}\zeta ^{\ast }\left( \alpha \right) \right) \nu \left( \psi \right) M_{I_{N}}^{\left( 1\right) } \\ &=&\left( \partial _{l}^{\left( 1\right) }\delta u_{l}\right) n\frac{% \partial }{\partial n}M_{I_{N}}^{\left( 0\right) }+\frac{2}{Dnk_{B}}\left( M_{lk}^{\left( 0\right) }\partial _{l}^{\left( 1\right) }\delta u_{k}+aM_{xy}^{\left( 1\right) }\right) \frac{\partial }{\partial T}% M_{I_{N}}^{\left( 0\right) } \notag \\ &&+\frac{1}{mn}\mathcal{P}_{I_{N}}\left( \partial _{l}^{\left( 1\right) }P_{li_{N}}^{\left( 0\right) }+\partial _{y}^{\left( 0\right) }P_{yi_{N}}^{\left( 1\right) }\right) M_{I_{N-1}}^{\left( 0\right) } \notag \\ &&-\left( \partial _{l}^{1}\delta u_{l}\right) M_{I_{N}}^{\left( 0\right) }-% \mathcal{P}_{I_{N}}\left( \partial _{l}^{1}\delta u_{i_{N}}\right) M_{I_{N-1}l}^{\left( 0\right) }-\partial _{l}^{1}M_{I_{N}l}^{\left( 0\right) } \notag \end{eqnarray}% Then, assuming that the first-order moments are independent of the reference field, $\mathbf{u}_{0}$, gives \begin{eqnarray} &&\partial _{t}^{(0)}M_{I_{N}}^{\left( 1\right) }+a\mathcal{P}_{I_{N}}\delta _{xi_{N}}M_{I_{N-1}y}^{\left( 1\right) }+\left( \nu ^{\ast }\left( \alpha \right) +\frac{N}{2}\zeta ^{\ast }\left( \alpha \right) \right) \nu \left( \psi \right) M_{I_{N}}^{\left( 1\right) }-\left( \frac{2a}{Dnk_{B}}\frac{% \partial }{\partial T}M_{I_{N}}^{\left( 0\right) }\right) M_{xy}^{\left( 1\right) } \label{moments2} \\ &=&\left[ \delta _{ab}\left( n\frac{\partial }{\partial n}M_{I_{N}}^{\left( 0\right) }-M_{I_{N}}^{\left( 0\right) }\right) +\frac{2}{Dnk_{B}}% P_{ab}^{\left( 0\right) }\frac{\partial 
}{\partial T}M_{I_{N}}^{\left( 0\right) }-\mathcal{P}_{I_{N}}\delta _{bi_{N}}M_{I_{N-1}a}^{\left( 0\right) }% \right] \left( \partial _{a}^{\left( 1\right) }\delta u_{b}\right) \notag \\ &&+\left[ \frac{1}{mn}\mathcal{P}_{I_{N}}\left( \frac{\partial }{\partial \delta n}P_{li_{N}}^{\left( 0\right) }\right) M_{I_{N-1}}^{\left( 0\right) }-% \frac{\partial }{\partial \delta n}M_{I_{N}l}^{\left( 0\right) }\right] \left( \partial _{l}^{\left( 1\right) }\delta n\right) \notag \\ &&+\left[ \frac{1}{mn}\mathcal{P}_{I_{N}}\left( \frac{\partial }{\partial \delta T}P_{li_{N}}^{\left( 0\right) }\right) M_{I_{N-1}}^{\left( 0\right) }-% \frac{\partial }{\partial \delta T}M_{I_{N}l}^{\left( 0\right) }\right] \left( \partial _{l}^{\left( 1\right) }\delta T\right) \notag \end{eqnarray}% which is consistent since no factors of $\mathbf{u}_{0}$ appear and since the zeroth order moments are known to be independent of the reference velocity field. The moment equations are linear in gradients in the deviation fields, so generalized transport coefficients can be defined via% \begin{equation} M_{I_{N}}^{\left( 1\right) }=-\lambda _{I_{N}ab}\frac{\partial \delta \psi _{b}}{\partial r_{a}}=-\mu _{I_{N}a}\frac{\partial \delta n}{\partial r_{a}}% -\kappa _{I_{N}a}\frac{\partial \delta T}{\partial r_{a}}-\eta _{I_{N}ab}% \frac{\partial \delta u_{a}}{\partial r_{b}} \end{equation}% where the transport coefficients for different values of $N$ have the same name but can always be distinguished by the number of indices they carry.
The zeroth-order time derivative is evaluated using \begin{eqnarray} \partial _{t}^{(0)}\lambda _{I_{N}ab}\frac{\partial \delta \psi _{b}}{% \partial r_{a}} &=&\left( \partial _{t}^{(0)}\lambda _{I_{N}ab}\right) \frac{% \partial \delta \psi _{b}}{\partial r_{a}}+\lambda _{I_{N}ab}\partial _{t}^{(0)}\frac{\partial \delta \psi _{b}}{\partial r_{a}} \\ &=&\left( \left( \partial _{t}^{(0)}T\right) \frac{\partial \lambda _{I_{N}ab}}{\partial T}+\left( \partial _{t}^{(0)}\delta u_{j}\right) \frac{% \partial \lambda _{I_{N}ab}}{\partial \delta u_{j}}\right) \frac{\partial \delta \psi _{b}}{\partial r_{a}}+\lambda _{I_{N}ab}\frac{\partial }{% \partial r_{a}}\left( \partial _{t}^{(0)}\delta \psi _{b}\right) \notag \\ &=&\left( \partial _{t}^{(0)}T\right) \frac{\partial \lambda _{I_{N}ab}}{% \partial T}\frac{\partial \delta \psi _{b}}{\partial r_{a}}+\lambda _{I_{N}ab}\frac{\partial \delta \psi _{c}}{\partial r_{a}}\frac{\partial \left( \partial _{t}^{(0)}\delta \psi _{b}\right) }{\partial \delta \psi _{c}% } \end{eqnarray}% where the third line follows from (a) the fact that the transport coefficients will have no explicit dependence on the velocity field, as may be verified from the structure of eq.(\ref{moments2}) and (b) the fact that the gradient here is a first order gradient $\nabla _{1}$ so that it only contributes via gradients of the deviations of the fields thus giving the last term on the right. Since the fields are independent variables, the coefficients of the various terms $\frac{\partial \delta \psi _{b}}{\partial r_{a}}$ must vanish independently.
For the coefficients of the velocity gradients, this gives% \begin{eqnarray} &&\left( \partial _{t}^{(0)}T\right) \frac{\partial }{\partial T}\eta _{I_{N}ab}+\eta _{I_{N}ac}\frac{\partial \left( \partial _{t}^{(0)}\delta u_{c}\right) }{\partial \delta u_{b}}+a\mathcal{P}_{I_{N}}\delta _{xi_{N}}\eta _{I_{N-1}yab}+\left( \nu ^{\ast }\left( \alpha \right) +\frac{N% }{2}\zeta ^{\ast }\left( \alpha \right) \right) \nu \left( \psi \right) \eta _{I_{N}ab}-\left( \frac{2a}{Dnk_{B}}\frac{\partial }{\partial T}% M_{I_{N}}^{\left( 0\right) }\right) \eta _{xyab} \\ &=&-\delta _{ab}\left( n\frac{\partial }{\partial n}M_{I_{N}}^{\left( 0\right) }-M_{I_{N}}^{\left( 0\right) }\right) -\frac{2}{Dnk_{B}}% M_{ab}^{\left( 0\right) }\frac{\partial }{\partial T}M_{I_{N}}^{\left( 0\right) }+\mathcal{P}_{I_{N}}\delta _{bi_{N}}M_{I_{N-1}a}^{\left( 0\right) }. \notag \end{eqnarray}% The vanishing of the coefficients of the density gradients gives% \begin{eqnarray} &&\left( \partial _{t}^{(0)}T\right) \frac{\partial }{\partial T}\mu _{I_{N}a}+\kappa _{I_{N}a}\frac{\partial \left( \partial _{t}^{(0)}T\right) }{\partial n}+a\mathcal{P}_{I_{N}}\delta _{xi_{N}}\mu _{I_{N-1}ya}^{N}+\left( \nu ^{\ast }\left( \alpha \right) +\frac{N}{2}\zeta ^{\ast }\left( \alpha \right) \right) \nu \left( \psi \right) \mu _{I_{N}a}-\left( \frac{2a}{Dnk_{B}}\frac{\partial }{\partial T}% M_{I_{N}}^{\left( 0\right) }\right) \mu _{xya} \\ &&=-\frac{1}{mn}\mathcal{P}_{I_{N}}\left( \frac{\partial }{\partial \delta n}% P_{ai_{N}}^{\left( 0\right) }\right) M_{I_{N-1}}^{\left( 0\right) }+\frac{% \partial }{\partial \delta n}M_{I_{N}a}^{\left( 0\right) }, \notag \end{eqnarray}% while the vanishing of the coefficient of the temperature gradient gives \begin{eqnarray} &&\left( \partial _{t}^{(0)}T\right) \frac{\partial }{\partial T}\kappa _{I_{N}a}+\frac{\partial \left( \partial _{t}^{(0)}T\right) }{\partial T}% \kappa _{I_{N}a}+a\mathcal{P}_{I_{N}}\delta _{xi_{N}}\kappa _{I_{N-1}ya}^{N}+\left( \nu ^{\ast }\left( \alpha 
\right) +\frac{N}{2}\zeta ^{\ast }\left( \alpha \right) \right) \nu \left( \psi \right) \kappa _{I_{N}a}-\left( \frac{2a}{Dnk_{B}}\frac{\partial }{\partial T}% M_{I_{N}}^{\left( 0\right) }\right) \kappa _{xya} \\ &&=-\frac{1}{mn}\mathcal{P}_{I_{N}}\left( \frac{\partial }{\partial \delta T}% P_{ai_{N}}^{\left( 0\right) }\right) M_{I_{N-1}}^{\left( 0\right) }+\frac{% \partial }{\partial \delta T}M_{I_{N}a}^{\left( 0\right) }. \notag \end{eqnarray} Notice that for even moments, the source terms for the density and temperature transport coefficients all vanish (as they involve odd zeroth-order moments) and it is easy to verify that the boundary conditions are consistent with $\mu _{I_{N}a}=\kappa _{I_{N}a}=0$ and only the velocity gradients contribute. For odd values of $N$, the opposite is true and $\eta _{I_{N}ab}=0$ while the others are in general nonzero. \subsection{Navier-Stokes transport} \subsubsection{The first order pressure tensor} Specializing to the case $N=2$ gives the transport coefficients appearing in the pressure tensor% \begin{equation} P_{I_{N}}^{\left( 1\right) }=-\eta _{I_{N}ab}\frac{\partial \delta u_{a}}{% \partial r_{b}} \end{equation}% where the generalized viscosity satisfies% \begin{eqnarray} &&\left( \partial _{t}^{(0)}T\right) \frac{\partial }{\partial T}\eta _{ijab}-a\eta _{ijax}\delta _{by}+a\delta _{xi}\eta _{jyab}+a\delta _{xj}\eta _{iyab}+\left( \nu ^{\ast }\left( \alpha \right) +\zeta ^{\ast }\left( \alpha \right) \right) \nu \left( \psi \right) \eta _{ijab}-\left( \frac{2a}{Dnk_{B}}\frac{\partial }{\partial T}P_{ij}^{\left( 0\right) }\right) \eta _{xyab} \\ &=&-\delta _{ab}\left( n\frac{\partial }{\partial n}P_{ij}^{\left( 0\right) }-P_{ij}^{\left( 0\right) }\right) -\frac{2}{Dnk_{B}}P_{ab}^{\left( 0\right) }\frac{\partial }{\partial T}P_{ij}^{\left( 0\right) }+\delta _{bi}P_{ja}^{\left( 0\right) }+\delta _{bj}P_{ia}^{\left( 0\right) }. 
\notag \end{eqnarray} \subsubsection{First order third moments and the heat flux vector} For the third moments, the contribution of density gradients to the heat flux is well-known in the theory of granular fluids and the transport coefficient is here the solution of \begin{eqnarray} &&\left( \partial _{t}^{(0)}T\right) \frac{\partial }{\partial T}\mu _{ijka}+% \frac{\partial \left( \partial _{t}^{(0)}T\right) }{\partial n}\kappa _{ijka}+\left( \nu ^{\ast }\left( \alpha \right) +\frac{3}{2}\zeta ^{\ast }\left( \alpha \right) \right) \nu \left( \psi \right) \mu _{ijka} \\ &&+a\delta _{xk}\mu _{ijya}+a\delta _{xi}\mu _{kjya}+a\delta _{xj}\mu _{ikya} \notag \\ &=&-\frac{1}{mn}\left( \frac{\partial }{\partial n}P_{ak}^{\left( 0\right) }\right) P_{ij}^{\left( 0\right) }-\frac{1}{mn}\left( \frac{\partial }{% \partial n}P_{ai}^{\left( 0\right) }\right) P_{kj}^{\left( 0\right) }-\frac{1% }{mn}\left( \frac{\partial }{\partial n}P_{aj}^{\left( 0\right) }\right) P_{ik}^{\left( 0\right) }+\frac{\partial }{\partial n}M_{ijka}^{\left( 0\right) }, \notag \end{eqnarray}% and the generalized thermal conductivity is determined from \begin{eqnarray} &&\left( \partial _{t}^{(0)}T\right) \frac{\partial }{\partial T}\kappa _{ijka}+\frac{\partial \left( \partial _{t}^{(0)}T\right) }{\partial T}% \kappa _{ijka}+\left( \nu ^{\ast }\left( \alpha \right) +\frac{3}{2}\zeta ^{\ast }\left( \alpha \right) \right) \nu \left( \psi \right) \kappa _{ijka} \\ &&+a\delta _{xk}\kappa _{ijya}+a\delta _{xi}\kappa _{kjya}+a\delta _{xj}\kappa _{ikya} \notag \\ &=&-\frac{1}{mn}\left( \frac{\partial }{\partial T}P_{ak}^{\left( 0\right) }\right) P_{ij}^{\left( 0\right) }-\frac{1}{mn}\left( \frac{\partial }{% \partial T}P_{ai}^{\left( 0\right) }\right) P_{kj}^{\left( 0\right) }-\frac{1% }{mn}\left( \frac{\partial }{\partial T}P_{aj}^{\left( 0\right) }\right) P_{ik}^{\left( 0\right) }+\frac{\partial }{\partial T}M_{ijka}^{\left( 0\right) }. 
\notag \end{eqnarray}% Note that both of these require knowledge of the zeroth-order fourth velocity moment $M_{ijka}^{\left( 0\right) }$. The heat flux vector is \begin{equation} q_{i}^{\left( 1\right) }=-\overline{\mu }_{ia}\frac{\partial \delta n}{% \partial r_{a}}-\overline{\kappa }_{ia}\frac{\partial \delta T}{\partial r_{a}} \end{equation}% where% \begin{eqnarray} \overline{\mu }_{ia} &=&\mu _{ijja} \\ \overline{\kappa }_{ia} &=&\kappa _{ijja}. \notag \end{eqnarray} \subsection{The second-order transport equations} In this Section, the results obtained so far are put together so as to give the Navier-Stokes equations for deviations from the steady state. The Navier-Stokes equations result from the sum of the balance equations. To first order, this takes the form{% \begin{eqnarray} {\partial _{t}n+\mathbf{u}\cdot \mathbf{\nabla }}\delta n+n{\mathbf{\nabla }}% \cdot \delta \mathbf{u} &=&0\; \label{first} \\ {\partial _{t}u_{i}+\mathbf{u}\cdot \mathbf{\nabla }\delta }u_{i}+a\delta _{ix}\delta u_{y}+(mn)^{-1}\partial _{j}^{\left( 1\right) }P_{ij}^{\left( 0\right) } &=&0 \notag \\ {\partial _{t}T+\mathbf{u}\cdot \mathbf{\nabla }\delta }T+\frac{2}{Dnk_{B}}% \left( P_{ij}^{\left( 0\right) }\nabla _{j}\delta u_{i}+aP_{xy}^{(0)}+aP_{xy}^{\left( 1\right) }\right) &=&-\zeta ^{\ast }\left( \alpha \right) \nu \left( \psi \right) T. \notag \end{eqnarray}% where }${\partial _{t}=\partial _{t}^{\left( 0\right) }+\partial _{t}^{\left( 1\right) }}$. By analogy with the analysis of an equilibrium system, these will be termed the Euler approximation. 
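The contractions $\overline{\mu }_{ia}=\mu _{ijja}$ and $\overline{\kappa }_{ia}=\kappa _{ijja}$ defined above are ordinary index contractions over the repeated middle indices. As a small numerical sketch (the rank-4 tensor below is a randomly generated placeholder, not the actual transport tensor):

```python
import numpy as np

# a random rank-4 tensor standing in for mu_{ijka} in D = 3 dimensions;
# the values are placeholders used only to illustrate the contraction
rng = np.random.default_rng(0)
mu = rng.normal(size=(3, 3, 3, 3))

# contract the repeated middle indices: mu_bar_{ia} = mu_{ijja}
mu_bar = np.einsum('ijja->ia', mu)

# the same contraction written as an explicit sum over j
explicit = np.zeros((3, 3))
for i in range(3):
    for a in range(3):
        for j in range(3):
            explicit[i, a] += mu[i, j, j, a]
```

The `einsum` subscript string `'ijja->ia'` implements exactly the Einstein summation convention used throughout the text.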
Summing to second order to get the Navier-Stokes approximation gives{% \begin{gather} {\partial _{t}n+\mathbf{u}\cdot \mathbf{\nabla }}\delta n+n{\mathbf{\nabla }}% \cdot \delta \mathbf{u}=0\; \label{second} \\ {\partial _{t}u_{i}+\mathbf{u}\cdot \mathbf{\nabla }\delta }u_{i}+a\delta _{ix}\delta u_{y}+(mn)^{-1}\partial _{j}^{\left( 1\right) }P_{ij}^{\left( 0\right) }+(mn)^{-1}\partial _{y}^{\left( 1\right) }P_{iy}^{\left( 1\right) }+(mn)^{-1}\partial _{j}^{\left( 0\right) }P_{ij}^{\left( 2\right) }=0 \notag \\ {\partial _{t}T+\mathbf{u}\cdot \nabla \delta }T+\frac{2}{Dnk_{B}}\left( {% \mathbf{\nabla }}^{\left( 1\right) }\cdot \mathbf{q}^{(1)}+{\mathbf{\nabla }}% ^{\left( 0\right) }\cdot \mathbf{q}^{(2)}+P_{ij}^{\left( 0\right) }\nabla _{j}\delta u_{i}+P_{ij}^{\left( 1\right) }\nabla _{j}\delta u_{i}+aP_{xy}^{(0)}+aP_{xy}^{\left( 1\right) }+aP_{xy}^{\left( 2\right) }\right) =-\zeta ^{\ast }\left( \alpha \right) \nu \left( \psi \right) T. \notag \end{gather}% where now }${\partial _{t}=\partial _{t}^{\left( 0\right) }+\partial _{t}^{\left( 1\right) }+\partial _{t}^{\left( 2\right) }}$ but this expression is problematic. Based on the results so far, it seems reasonable to expect that $\partial _{j}^{\left( 0\right) }P_{ij}^{\left( 2\right) }={% \mathbf{\nabla }}^{\left( 0\right) }\cdot \mathbf{q}^{(2)}=0$. However, to consistently write the equations to third order requires knowledge of $% P_{xy}^{\left( 2\right) }$ which is not available without extending the solution of the kinetic equation to third order. The reason this problem arises here, and not in the analysis about equilibrium, is that the shear rate, $a$, arises from a gradient of the reference field. In the usual analysis, such a term would be first order and $aP_{xy}^{\left( 2\right) }=\left( \partial _{i}u_{0j}\right) P_{ij}^{\left( 2\right) }$would be of third order and therefore neglected here. 
This is unfortunate and shows that this method of analysis does not completely supplant the need to go beyond the second-order solution in order to study shear flow. However, this problem is not unique. In fact, in calculations of the transport coefficients for the homogeneous cooling state of a granular gas, a similar problem occurs in the calculation of the cooling rate: the true Navier-Stokes expression requires going to third order in the solution of the kinetic equation \cite{DuftyGranularTransport,LutskoCE}. (This is because the source does not appear under a gradient, as can be seen in the equations above.) Thus, it is suggested that the same type of approximation be accepted here, namely that the term $aP_{xy}^{\left( 2\right) }$ is neglected, so that the total pressure tensor and heat flux vector are% \begin{eqnarray} P_{ij} &=&P_{ij}^{\left( 0\right) }+P_{ij}^{\left( 1\right) } \\ q_{i} &=&q_{i}^{\left( 0\right) }+q_{i}^{\left( 1\right) } \notag \end{eqnarray}% and the transport equations can be written as{% \begin{eqnarray} {\partial _{t}n+\mathbf{\nabla }}\cdot \left( n\mathbf{u}\right) &=&0\; \label{hydro-final} \\ {\partial _{t}u_{i}+\mathbf{u}\cdot \mathbf{\nabla }}u_{i}+(mn)^{-1}\partial _{j}P_{ij} &=&0 \notag \\ {\partial _{t}T+\mathbf{u}\cdot \mathbf{\nabla }}T+\frac{2}{Dnk_{B}}\left( {% \mathbf{\nabla }}\cdot \mathbf{q}+P_{ij}\nabla _{j}u_{i}\right) &=&-\zeta ^{\ast }\left( \alpha \right) \nu \left( \psi \right) T. \notag \end{eqnarray}% which is the expected form of the balance equations. The total fluxes are given in terms of the generalized transport coefficients}% \begin{eqnarray} P_{ij} &=&P_{ij}^{\left( 0\right) }-\eta _{ijab}\frac{\partial \delta u_{a}}{% \partial r_{b}} \\ q_{i} &=&-\mu _{ijja}\frac{\partial \delta n}{\partial r_{a}}-\kappa _{ijja}% \frac{\partial \delta T}{\partial r_{a}}.
\notag \end{eqnarray} \subsection{Linearized second-order transport} Some simplification occurs if attention is restricted to the linearized form of these equations. This is because, as noted in the previous Section, several transport coefficients are proportional to $\delta u_{y}$ and consequently do not contribute when the transport coefficients are linearized. Taking this into account, the total fluxes are% \begin{eqnarray} P_{ij} &=&P_{ij}^{\left( ss\right) }+\left( \frac{\partial P_{ij}^{\left( 0\right) }}{\partial \delta n}\right) _{ss}\delta n+\left( \frac{\partial P_{ij}^{\left( 0\right) }}{\partial \delta T}\right) _{ss}\delta T-\eta _{ijab}^{ss}\frac{\partial \delta u_{a}}{\partial r_{b}} \\ q_{i} &=&-\overline{\mu }_{ia}^{ss}\frac{\partial \delta n}{\partial r_{a}}-% \overline{\kappa }_{ia}^{ss}\frac{\partial \delta T}{\partial r_{a}}, \notag \end{eqnarray}% where the superscript on the transport coefficients, and subscript on the derivatives, indicates that they are evaluated to zeroth order in the deviations,% \begin{equation*} \left( \frac{\partial P_{ij}^{\left( 0\right) }}{\partial \delta n}\right) _{ss}\equiv \lim_{\delta \psi \rightarrow 0}\frac{\partial P_{ij}^{\left( 0\right) }}{\partial \delta n}, \end{equation*}% i.e. in the steady state. The defining expressions for the transport coefficients simplify since the factor $\partial _{t}^{(0)}T$ is at least of first order in the deviations from the steady state (since it vanishes in the steady state) so that the temperature derivative can be neglected thus transforming the differential equations into coupled algebraic equations. Also, all remaining quantities are evaluated for $\delta \psi =0$, i.e. in the steady state. 
Thus the viscosity becomes% \begin{eqnarray} &&-a_{ss}^{\ast }\eta _{ijax}^{ss}\delta _{by}+a_{ss}^{\ast }\delta _{xi}\eta _{jyab}^{ss}+a_{ss}^{\ast }\delta _{xj}\eta _{iyab}^{ss}+\left( \nu ^{\ast }\left( \alpha \right) +\zeta ^{\ast }\left( \alpha \right) \right) \eta _{ijab}^{ss}-\frac{2a_{ss}^{\ast }}{Dn_{0}k_{B}}\left( \frac{% \partial }{\partial T}P_{ij}^{\left( 0\right) }\right) _{ss}\eta _{xyab}^{ss} \label{p-lin} \\ &=&-\nu ^{-1}\left( \psi _{0}\right) \delta _{ab}\left( n_{0}\left( \frac{% \partial }{\partial n}P_{ij}^{\left( 0\right) }\right) _{ss}-P_{ij}^{\left( ss\right) }\right) -\frac{2\nu ^{-1}\left( \psi _{0}\right) }{Dn_{0}k_{B}}% P_{ab}^{\left( ss\right) }\left( \frac{\partial }{\partial T}P_{ij}^{\left( 0\right) }\right) _{ss}+\nu ^{-1}\left( \psi _{0}\right) \left( \delta _{bi}P_{ja}^{\left( ss\right) }+\delta _{bj}P_{ia}^{\left( ss\right) }\right) \notag \end{eqnarray}% where $a_{ss}^{\ast }$ was defined in eq.(\ref{balance}). The generalized heat conductivities will be given by the simplified equations \begin{eqnarray} &&\nu ^{-1}\left( \psi _{0}\right) \left( \frac{\partial \left( \partial _{t}^{(0)}T\right) }{\partial n}\right) _{ss}\kappa _{ijka}^{ss}+\left( \nu ^{\ast }\left( \alpha \right) +\frac{3}{2}\zeta ^{\ast }\left( \alpha \right) \right) \mu _{ijka}^{ss}+a_{ss}^{\ast }\mathcal{P}_{ijk}\delta _{xk}\mu _{ijya}^{ss} \\ &=&-\frac{\nu ^{-1}\left( \psi _{0}\right) }{mn_{0}}\mathcal{P}_{ijk}\left( \frac{\partial }{\partial n}P_{ak}^{\left( 0\right) }\right) _{ss}P_{ij}^{\left( ss\right) }+\nu ^{-1}\left( \psi _{0}\right) \left( \frac{\partial }{\partial n}M_{ijka}^{\left( 0\right) }\right) _{ss} \notag \end{eqnarray}% and \begin{eqnarray} &&\nu ^{-1}\left( \psi _{0}\right) \left( \frac{\partial \left( \partial _{t}^{(0)}T\right) }{\partial T}\right) _{ss}\kappa _{ijka}^{ss}+\left( \nu ^{\ast }\left( \alpha \right) +\frac{3}{2}\zeta ^{\ast }\left( \alpha \right) \right) \kappa _{ijka}^{ss}+a_{ss}^{\ast }\mathcal{P}_{ijk}\delta _{xk}\kappa 
_{ijya}^{ss} \\ &=&-\frac{\nu ^{-1}\left( \psi _{0}\right) }{mn_{0}}\mathcal{P}_{ijk}\left( \frac{\partial }{\partial T}P_{ak}^{\left( 0\right) }\right) _{ss}P_{ij}^{\left( ss\right) }+\nu ^{-1}\left( \psi _{0}\right) \left( \frac{\partial }{\partial T}M_{ijka}^{\left( 0\right) }\right) _{ss}. \notag \end{eqnarray}% In these equations, the hydrodynamic variables $\psi _{0}$ must satisfy the steady state balance condition, eq.(\ref{balance}). The various quantities in these equations are known from the analysis of the zeroth order moments. For example, from eq.(\ref{Pss}), one has that% \begin{eqnarray} \left( \frac{\partial P_{ij}^{\left( 0\right) }}{\partial T}\right) _{ss} &=&-\frac{1}{2}T_{0}^{-1}P_{ij}^{ss}A_{ij}^{\ast }\left( \alpha \right) \\ \left( \frac{\partial P_{ij}^{\left( 0\right) }}{\partial n}\right) _{ss} &=&n_{0}^{-1}P_{ij}^{ss}\left( 1-A_{ij}^{\ast }\left( \alpha \right) \right) \notag \\ \nu ^{-1}\left( \psi _{0}\right) \left( \frac{\partial \left( \partial _{t}^{(0)}T\right) }{\partial T}\right) _{ss} &=&-\frac{1}{2}\zeta ^{\ast }\left( \alpha \right) \left( 1+A_{xy}^{\ast }\left( \alpha \right) \right) \notag \end{eqnarray}% where $A_{ij}^{\ast }\left( \alpha \right) $ was given in eq.(\ref{Pss-A}) and here, there is no summation over repeated indices. 
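Once the temperature derivatives drop out, each set of linearized transport coefficients is obtained from a coupled linear algebraic system of the kind shown above. The following sketch illustrates only the structure of such a solve; the coupling matrix and source vector are placeholders with illustrative values, not the actual coefficients of eq.(\ref{p-lin}):

```python
import numpy as np

# illustrative (non-physical) values standing in for nu*(alpha),
# zeta*(alpha) and the reduced steady-state shear rate a*_ss
nu_star, zeta_star, a_ss = 1.0, 0.2, 0.5

# a 2x2 caricature of the coupled system for a pair of transport
# coefficient components: diagonal collision terms plus an
# off-diagonal shear coupling; the source vector is a placeholder
A = np.array([[nu_star + zeta_star, a_ss],
              [a_ss, nu_star + zeta_star]])
b = np.array([1.0, 0.0])

coeffs = np.linalg.solve(A, b)
```

The point is only that, unlike the zeroth-order problem, no differential equation in the temperature needs to be integrated: the steady-state coefficients follow from a single linear solve.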
The derivatives of higher order moments in the steady state can easily be given using the results in Appendix \ref{AppP}. The linearized transport equations are{% \begin{eqnarray} {\partial _{t}\delta n+ay}\frac{\partial }{\partial x}\delta n+n_{0}\mathbf{% \nabla }\cdot \delta \mathbf{u} &=&0\; \\ {\partial _{t}\delta u_{i}+{ay\frac{\partial }{\partial x}}\delta }% u_{i}+a\delta u_{y}\delta _{ix}+(mn_{0})^{-1}\left( \left( \frac{\partial P_{ij}^{\left( 0\right) }}{\partial n}\right) _{ss}\frac{\partial \delta n}{% \partial r_{j}}+\left( \frac{\partial P_{ij}^{\left( 0\right) }}{\partial T}% \right) _{ss}\frac{\partial \delta T}{\partial r_{j}}+\eta _{ijab}^{ss}\frac{% \partial ^{2}\delta u_{a}}{\partial r_{j}\partial r_{b}}\right) &=&0 \notag \end{eqnarray}% }% \begin{eqnarray*} &&{\partial _{t}\delta T+ay}\frac{\partial }{\partial x}{\delta T}+\frac{2}{% Dn_{0}k_{B}}\left( \overline{\mu }_{ia}^{ss}\frac{\partial ^{2}\delta n}{% \partial r_{i}\partial r_{a}}+\overline{\kappa }_{ia}^{ss}\frac{\partial ^{2}\delta T}{\partial r_{i}\partial r_{a}}+P_{ij}^{\left( ss\right) }\frac{% \partial \delta u_{i}}{\partial r_{j}}+a\eta _{xyab}^{ss}\frac{\partial \delta u_{a}}{\partial r_{b}}\right) \\ &&+\frac{2a}{Dn_{0}^{2}k_{B}}\left( n_{0}\left( \frac{\partial P_{xy}^{\left( 0\right) }}{\partial \delta n}\right) _{ss}-P_{xy}^{\left( ss\right) }\right) \delta n+\frac{2a}{Dn_{0}k_{B}}\left( \frac{\partial P_{xy}^{\left( 0\right) }}{\partial \delta T}\right) _{ss}\delta T \\ &=&-\frac{3}{2}\zeta ^{\ast }\left( \alpha \right) \nu \left( \psi _{0}\right) \delta T-\zeta ^{\ast }\left( \alpha \right) \nu \left( \psi _{0}\right) T_{0}\frac{\delta n}{n_{0}}. \end{eqnarray*}% where the fact that $\nu \left( \psi \right) \sim nT^{1/2}$ has been used. These equations have recently been used by Garz\'{o} to study the stability of the granular fluid under uniform shear flow \cite{garzo-2005-}.
\section{Conclusions} In this paper, the extension of the Chapman-Enskog method to arbitrary reference states has been presented. One of the key ideas is the separation of the gradient operator into ``zeroth'' and ``first'' order operators that help to organize the expansion. It is also important that the zeroth order distribution be recognized as corresponding to the exact distribution for \emph{arbitrary} \emph{uniform} deviations of \emph{all} hydrodynamic fields from the reference state. This distribution does not in general have anything to do with the distribution in the reference state, except in the very special case that the reference state itself is spatially uniform. The method was illustrated by application to the paradigmatic non-uniform system of a fluid undergoing uniform shear flow. In particular, the fluid was chosen to be a granular fluid which therefore admits of a steady state. The analysis was based on a particularly simple kinetic theory in order to allow for illustration of the general concepts without the technical complications involved in, e.g., using the Boltzmann equation. Nevertheless, it should be emphasized that the difference between the present calculation and that using the Boltzmann equation would be no greater than in the case of an equilibrium fluid. The main difference is that with the simplified kinetic theory, it is possible to obtain closed equations for the velocity moments without having to explicitly solve for the distribution. When solving the Boltzmann equation, the moment equations are not closed and it is necessary to resort to expansions in orthogonal polynomials. In that case, the calculation is usually organized somewhat differently: attention is focused on solving directly for the distribution, but this is only a technical point. (In fact, Chapman originally developed his version of the Chapman-Enskog method using Maxwell's moment equations while Enskog based his on the Boltzmann equation \cite{ChapmanCowling}.
The methods are of course equivalent.) It is interesting to compare the hydrodynamic equations derived here to the ``standard'' equations for fluctuations about a uniform granular fluid. As might be expected, the hydrodynamic equations describing fluctuations about the state of uniform shear flow are more complex in some ways than are the usual Navier-Stokes equations for a granular fluid, but the similarities with the simpler case are perhaps more surprising. The complexity arises from the fact that the transport coefficients do not have the simple spatial symmetries present in the homogeneous fluid where, e.g., there is a single thermal conductivity rather than the vector quantity that occurs here. However, just as in the homogeneous system, the heat flux vector still only couples to density and temperature gradients and the pressure tensor to velocity gradients so that the hydrodynamic equations, eq.(\ref{hydro-final}% ), have the same structure as the Navier-Stokes equations for the homogeneous system. An additional complication in the general analysis presented here is that the zeroth-order pressure tensor and the transport coefficients are obtained as the solution to partial differential equations in the temperature rather than as simple algebraic functions. This requires that appropriate boundary conditions be supplied which will, in general, depend on the particular problem being solved. Here, in the high-temperature limit, the non-equilibrium effects are of no importance and the appropriate boundary condition on all quantities is that they approach their equilibrium values. Boundary conditions must also be given at low temperature as the two domains are separated by the steady state, which represents a critical point. At low temperatures, there are no collisions and no deviations from the macroscopic state so that all velocity moments go to zero thus giving the necessary boundary conditions.
A particularly simple case occurs when the hydrodynamic equations are linearized about the reference state, as would be appropriate for a linear stability analysis. Then, the transport properties are obtained as the solution to simple algebraic equations. A particular simplifying feature of uniform shear flow is that the flow field has a constant first gradient and, as a result, the moments do not explicitly depend on the flow field. This will not be true for more complex, nonlinear flow fields. However, the application of the methods discussed in Section II should make possible an analysis similar to that given here for the simple case of uniform shear flow. \bigskip \begin{acknowledgements} I am grateful to Vicente Garz\'{o} and Jim Dufty for several useful discussions. This work was supported in part by the European Space Agency under contract number C90105. \end{acknowledgements}
\section{Introduction} At sufficiently low temperatures a condensate of weakly interacting bosons can be represented by a single wave function whose dynamics obeys the Gross-Pitaevskii equation \cite{gross61,pitaevskii61}. The Gross-Pitaevskii equation can be thought of as the Hartree equation for the ground state of $N$ interacting identical bosons, all occupying the same single-particle orbital $\psi$. Because of its nonlinearity the equation exhibits features not familiar from ordinary Schr\"odinger equations of quantum mechanics. For example, Huepe et al. \cite{hue99,hue03} demonstrated that for Bose-Einstein condensates with attractive contact interaction, described by a negative $s$-wave scattering length, {\em bifurcations} of the stationary solutions of the Gross-Pitaevskii equation appear, and determined both the stable (elliptic) and the unstable (hyperbolic) branches of the solutions. The bifurcation points correspond to critical particle numbers, above which, for a given strength of the attractive interaction, collapse of the condensate sets in. In Bose-Einstein condensates of $^7$Li \cite{sacket99,gerton00} and $^{85}$Rb atoms \cite{donley01,roberts01} these collapses were experimentally observed. In those condensates the short-range contact interaction is the only interaction to be considered. In Bose-Einstein condensates of {\em dipolar} gases \cite{santos00,baranov02,goral02a,goral02b,giovanazzi03} also a long-range dipole-dipole interaction is present. Alternatively, following a proposal by O'Dell et al. \cite{dell00,giova01}, by using a combination of six triads of appropriately tuned laser beams, condensates can be produced in which an attractive long-range gravity-like $1/r$ interaction is present. These types of condensates offer the opportunity to study degenerate quantum gases with adjustable long-range {\em and} short-range interactions.
While the experimental realization of condensates with gravity-like interaction still lies in the future, the achievement of Bose-Einstein condensation in a gas of chromium atoms \cite{griesmaier05}, with a large dipole moment, has opened the way to promising experiments on dipolar gases \cite{stuhler05}, which could show a wealth of novel phenomena \cite{giovanazzi02,santos03,li04,dell04}. In particular, the experimental observation of the collapse of dipolar quantum gases has been reported \cite{koch08}, which occurs when the contact interaction is reduced, for a given particle number, below some critical value using a Feshbach resonance. In this experimental situation it is most timely and appropriate to extend the investigations of the nonlinearity effects of the Gross-Pitaevskii equation to quantum gases in which both the contact interaction and a long-range interaction are active, and this is the topic of the present paper. \section{Scaling Properties of the Gross-Pitaevskii Equations with Long-Range Interactions} \subsection{Gravity-like interaction, isotropic trap}\label{scaling1/r} For an additional gravity-like long-range interaction $ V_u(\vec r, \vec r^\prime \,) = - {u}/{|\vec r - \vec r^\prime |}$ the time-dependent Gross-Pitaevskii equation for the orbital $\psi$ reads \begin{equation}\label{HFat} \Big[ \hskip -1mm - \Delta + \gamma^2 r^2 + N 8 \pi \frac{a}{a_u} |\psi(\vec r, t)|^2 - 2 N \hskip -1mm \int \frac{|\psi({\vec r\,}^\prime,t)|^2}{|{\vec r}- {\vec r\,}^\prime |} d^3 {\vec r\,}^\prime \Big] \; \psi(\vec r, t) = i \frac{\partial}{\partial t} \, \psi(\vec r, t) \, , \end{equation} where we have used \cite{Pap07} the ``Bohr radius'' $a_u = {\hbar^2}/{(m u)}$, the ``Rydberg energy'' $E_u = {\hbar^2}/{(2m a_u^2)}$, and the Rydberg time $\hbar/E_u$ as natural units of length, energy, and time, respectively, to bring the equation into dimensionless form.
Furthermore, in (\ref{HFat}) $a$ is the $s$-wave scattering length, which characterizes the strength of the contact interaction $V_s = \delta({\vec r}- {\vec r\,}^\prime)\, 4 \pi a \hbar^2/m $, $N$ is the particle number, and $\gamma = \hbar \omega_0/(2E_u)$ is the dimensionless trap frequency. It can be shown \cite{Pap07} that the solutions of (\ref{HFat}) do not depend on all these three physical quantities but only on the two relevant parameters $\gamma/N^2$ and $N^2 a/a_u$. Thus one has, e.g., for the mean-field energy $E (N, N^2 a^\ast/a_u, \gamma/N^2)/N^3 = \,E(N=1, a/a_u, \gamma)$, with $ N^2 a^\ast/a_u = a/a_u$. \subsection{Dipolar interaction, axisymmetric trap}\label{scaling} In Bose condensates of $^{52}$Cr atoms \cite{griesmaier05}, which possess a large magnetic moment of $\mu = 6 \mu_{\rm B}$, the long-range dipole-dipole interaction \begin{equation} V_{\rm dd}({\vec r}, {\vec r\,'}) =\frac{\mu_0 \mu^2}{4 \pi} \,\frac{1-3\cos^2 \theta'}{|{\vec r}-{\vec r}\,'|^3} \nonumber \end{equation} must also be considered. Defining the dipole length by $a_{\rm d} = {\mu_0 \mu^2 m}/{(2\pi \hbar^2)}$, and using as unit of energy $E_{\rm d} = \hbar^2/(2m a_{\rm d}^2)$, of frequency $\omega_{\rm d} = 2 E_{\rm d}/\hbar$ and of time $\hbar/E_{\rm d}$, one obtains the Gross-Pitaevskii equation for dipolar gases in axisymmetric traps in dimensionless form \begin{eqnarray}\label{gpedd}{\Big[} \hskip -1mm - \Delta + \gamma_{\rho}^2 \rho^2 + \gamma_z^2 z^2 + N 8 \pi \frac{a}{a_{\rm d}} |\psi({\vec r},t)|^2 \hspace*{-2mm} &+& \hspace*{-2mm} N \hskip -1mm \int |\psi({{\vec r}\,}^\prime,t)|^2 \frac{(1 - 3 \cos^2 \vartheta^\prime)}{|{{\vec r}}- {{\vec r}\,}^\prime |^3} d^3 {{\vec r}\,}^\prime \Big] \; \psi({\vec r},t) \nonumber \\ &=& \hspace*{-2mm}i \frac{\partial}{\partial t} \, \psi({\vec r},t) \, . 
\end{eqnarray} The physical parameters characterizing the condensates are the particle number $N$, the scattering length $a/a_{\rm d}$, and the trap frequencies $\gamma_\rho$ and $\gamma_z$ perpendicular to and along the direction of alignment of the dipoles (alternatively, the geometric mean ${\bar{\gamma}} = \gamma_\rho^{2/3} \gamma_z^{1/3}$ and the aspect ratio $\lambda = \gamma_z/\gamma_\rho$ are used). However, a closer inspection of the scaling properties of (\ref{gpedd}) reveals \cite{Koeb08} that the solutions depend only on three parameters, viz. $N^2 {\bar{\gamma}}, ~\lambda, ~a/a_{\rm d}$. For the mean-field energy, e.g., the scaling law reads $E (N, a/a_{\rm d}, N^2{\bar{\gamma}}^\ast, \lambda) \,=\,E(N=1, a/a_{\rm d}, {\bar{\gamma}}, \lambda)\, /N$, with $ N^2{\bar{\gamma}}^\ast = {\bar{\gamma}}$. \section{Quantum Results: Solutions of the Stationary Gross-Pitaevskii Equations} For the $1/r$ interaction (monopolar quantum gases) we have solved \cite{Pap07} the stationary Gross-Pitaevskii equation both variationally, using an isotropic Gaussian-type orbital $\psi = A \exp(-k^2 r^2/2)$, and numerically accurately, by outward integration of the nonlinear Schr\"odinger equation. For the dipole-dipole interaction (dipolar quantum gases) we have performed a variational calculation \cite{Koeb08} using an axisymmetric Gaussian-type orbital $\psi = A \exp(-k_{\rho}^2 {\rho}^2/2-k_z^2 z^2/2)$. Fig.~\ref{bifurcationmonopolar1} shows the results for the chemical potential (the eigenvalue of the stationary Gross-Pitaevskii equation) for the two interactions, plotted as a function of the scattering length. As can be seen, below a critical scattering length no stationary solutions exist, while two stationary solutions are born at the critical scattering length in a tangent bifurcation. At the bifurcation point the chemical potential, the mean-field energy, and the wave functions of the two branches of solutions are identical.
Such behavior is obviously a consequence of the nonlinearity of the underlying Schr\"odinger equation, and is reminiscent of exceptional points \cite{Hei99,Kato66} discussed so far only in the context of open quantum systems with non-Hermitian Hamiltonians (see Ref. \cite{Car08a} for references). In fact, a closer inspection shows \cite{Car08a} that the bifurcation points can be identified as exceptional points: traversing circles around them in the complex-extended parameter plane, the eigenvalues are permuted, which is a clear signature of exceptional points. \begin{figure}\label{bifurcationmonopolar1} \caption{Bifurcations of the particle-number scaled chemical potential. Left: $1/r$ interaction, for different trap frequencies $\gamma$ (in units of $N^2$); solid curves: accurate numerical calculation, dashed curves: variational calculation. Right: dipole-dipole interaction, variational results for the geometric mean trap frequency $N^2 \bar \gamma = 3.4 \times 10^4$ used in the experiments of Koch et al. \cite{koch08} and different values of the trap aspect ratio $\lambda$.} \includegraphics[width=\textwidth]{fig1} \end{figure} \section{Nonlinear Dynamics of Bose-Einstein Condensates with Atomic Long-Range Interactions} The starting point for accurate numerical calculations is given by the time-dependent Gross-Pitaevskii equations (\ref{HFat}) and (\ref{gpedd}). For variational calculations one makes use of the fact that these equations follow from the variational principle $||i \phi(t) -H \psi(t)||^2 ={\rm min}$, where the variation is performed with respect to $\phi$, and finally $\phi$ is set equal to $\dot \psi$. Using a complex parametrization of the trial wave function $\psi(t) = \chi({\vec \lambda}(t))$, the variation leads to equations of motion for the parameters $ {\vec \lambda}$ (cf.
\cite{Fab07}) \begin{equation} \label{vareq}\textstyle{ \left\langle \frac{\partial \psi}{\partial {\vec \lambda}}\Big|i \dot \psi - H \psi \right\rangle = 0 \leftrightarrow K \dot {\vec \lambda} = -i {\vec h} \hbox{~with~} K = \left\langle \frac{\partial \psi}{ \partial{{\vec \lambda}}} \Big|\frac{\partial \psi}{\partial{{\vec \lambda}}} \right\rangle, {\vec h} = \left\langle \frac{\partial \psi} { \partial{{\vec \lambda}}} \Big|H \Big|\psi \right\rangle}\,. \end{equation} \subsection{Time evolution of condensates with $1/r$ interaction, variational and exact}\label{1/r} For simplicity we consider the case of self-trapping, with no external trap. As can readily be verified, the results can easily be generalized to the case where an external radially symmetric trap potential is present. We choose a Gaussian trial wave function $\psi(r,t) = \exp\{ {i} [A(t) r^2 + \gamma(t)]\}$, where $A$ and $\gamma$ are complex functions of time, whose equations of motion follow from (\ref{vareq}). Decomposing $A$ into real and imaginary parts, $A = A_r + i A_i $, and replacing them by two other dynamical quantities \cite{Bro89,Car08b}, $q = \sqrt{3/(4 A_i)} \equiv \sqrt{\langle r^2 \rangle}$, $ p = A_r\sqrt{3/A_i}$, converts those equations into the canonical equations of motion for $p$ and $q$ that follow from the Hamiltonian \begin{equation}\label{varham} E = H(q,p) = T+V = p^2+\frac{9}{4q^2} +\frac{3 \sqrt{3}a}{2\sqrt{\pi}q^3} -\frac{\sqrt{3}}{\sqrt{\pi}q}. \nonumber \end{equation} In this way the Gross-Pitaevskii equation is mapped onto the Hamiltonian of a one-dimensional classical autonomous system with a nonlinear potential $V(q)$. The potential has no extremum for $a < a_{\rm cr} = -3\pi/8 \approx -1.18$, possesses a saddle point for $a = a_{\rm cr}$, and a maximum and a minimum for $a > a_{\rm cr}$. The critical scattering length corresponds to the bifurcation point in the variational calculation.
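The critical value $a_{\rm cr}=-3\pi/8$ can be checked in closed form: setting $dV/dq=0$ for the potential in (\ref{varham}) and multiplying by $q^4$ leaves a quadratic in $q$, whose discriminant changes sign exactly at $a=-3\pi/8$. A minimal Python sketch of this check (the function name and the sample values of $a$ are our own):

```python
import math

def extrema(a):
    """Positive roots of dV/dq = 0 for the variational potential
    V(q) = 9/(4 q^2) + 3*sqrt(3)*a/(2*sqrt(pi)*q^3) - sqrt(3)/(sqrt(pi)*q).
    Multiplying dV/dq = 0 by q^4 gives the quadratic
    sqrt(3/pi)*q^2 - (9/2)*q - (9/2)*sqrt(3/pi)*a = 0."""
    A = math.sqrt(3.0 / math.pi)
    B = -4.5
    C = -4.5 * math.sqrt(3.0 / math.pi) * a
    disc = B * B - 4.0 * A * C      # = 81/4 + 54 a / pi, zero at a = -3 pi / 8
    if disc < 0.0:
        return []                   # no extrema: the condensate always collapses
    return [q for q in ((-B + s * math.sqrt(disc)) / (2.0 * A) for s in (1.0, -1.0))
            if q > 0.0]

a_cr = -3.0 * math.pi / 8.0         # = -1.178..., the critical scattering length
print(len(extrema(a_cr + 0.1)))     # 2: a minimum (stable) and a maximum (unstable)
print(len(extrema(a_cr - 0.1)))     # 0: no stationary solutions
```

For $a$ slightly above $a_{\rm cr}$ the two positive roots are the elliptic (minimum) and hyperbolic (maximum) fixed points discussed below; they merge as $a\to a_{\rm cr}$.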
For different values of $a$ (in units of $a_u$) phase portraits of trajectories moving according to the Hamiltonian (\ref{varham}) are shown in Fig.~\ref{phaseportraits}. \begin{figure}\label{phaseportraits} \caption{Phase portraits of the dynamics of the complex width function $A(t)$ associated with the Hamiltonian (\ref{varham}) for attractive $1/r$ interaction for different values of the scattering length $a$, measured in units of $a_u$. Left: $a = -1 > a_{\rm cr}$: two stationary states appear as fixed points; middle: $a = -1.18 = a_{\rm cr}$: coalescence of the fixed points; right: $ a = -1.3 < a_{\rm cr}$: no stationary solutions exist.} \includegraphics[width=\textwidth]{fig2} \end{figure} The linear stability analysis of both the variational and the exact quantum stationary solutions proves \cite{Car08b} that the state corresponding to the elliptic fixed point is indeed dynamically stable (small perturbations of the state show oscillating behavior), while the stationary state corresponding to the hyperbolic fixed point is dynamically unstable (exponential growth of small perturbations). This behavior is recovered in exact numerical solutions of the time-dependent Gross-Pitaevskii equation with $1/r$ interaction, but new features also emerge. The solution is carried out \cite{Car08b} using the split-operator technique and fast Fourier transforms. To investigate the behavior of condensate wave functions in the vicinity of the exact numerical stable and unstable stationary states, we consider condensates which are obtained by deforming the stationary states by $\psi(r) = f\cdot \psi_{\pm}(r\cdot f^{2/3})$, where $f$ is a numerical stretching factor (this choice of the perturbation does not affect the norm of the state). In Fig.~\ref{wavefunctions} we show examples of the exact BEC dynamics in the vicinity of the unstable and stable stationary states.
In Fig.~\ref{wavefunctions} a) we start the time evolution with the numerical solution for the unstable stationary state (in the classical picture this corresponds to the trajectory starting at the hyperbolic fixed point, see left part of Fig.~\ref{phaseportraits}). Because of unavoidable numerical deviations from the theoretically exact unstable state, the wave function determined numerically is stationary only for some time but then begins to oscillate. Obviously we have started with a state which in the variational picture would be located in the elliptic domain close to the hyperbolic fixed point. Note, however, that the oscillation is not strictly periodic. By contrast, in Fig.~\ref{wavefunctions} b), where the time evolution starts with the unstable stationary state stretched by a factor of $f = 1.001$, as time proceeds the wave function contracts towards the origin, and the condensate collapses. In the variational picture this corresponds to a trajectory initially close to the hyperbolic fixed point but located on the hyperbolic side. Note that in a realistic experimental situation further mechanisms, such as inelastic collisions, have to be taken into account during the collapse. The inclusion of such mechanisms, however, clearly goes beyond the scope of the present paper. Figure~\ref{wavefunctions} c) displays a behavior not present in the variational analysis. We start again in the vicinity of the unstable stationary state ($f = 0.99$) and find that the width of the condensate keeps growing. An inspection of the wave function on a logarithmic scale shows \cite{Car08b} that the wave-function amplitude builds up at large distances from the origin, giving rise to this behavior. Finally, Fig.~\ref{wavefunctions} d), e) show examples of the quantum mechanical time evolution of condensates in the vicinity of the stable ground state.
For a large stretching factor (panel d)) the condensate is found to oscillate and to expand, while for a small stretching factor (panel e)) we find the quasiperiodic oscillations that we would expect from the variational analysis. This demonstrates that the variational nonlinear dynamics approach is capable of predicting essential features of the exact quantum mechanical time behavior of the condensates, but that the quantum mechanical behavior is even richer. \begin{figure}\label{wavefunctions} \caption{Time evolution, for attractive $1/r$ interaction, of the root-mean-square widths of the condensate wave functions in the vicinity of the unstable (panels a), b), c)) and the stable stationary (panels d), e)) state. a): Scaled scattering length (in units of $a_u$) a = -1.0, stretching factor f = 1.00; b): a = -0.85, f = 1.001; c): a = -0.85, f = 0.99; d): a = -0.85, f = 1.25, and e): a = -0.85, f = 1.01. } \includegraphics[width=1 \textwidth]{fig3} \end{figure} \subsection{Dynamics of condensates with dipolar interaction, variational} We choose a Gaussian trial wave function adapted to the axisymmetric trap geometry, $\psi(\rho,z,t) = e^{i(A_\rho \rho^2+ A_z z^2+\gamma)}$, where the complex width parameters $A_\rho$, $A_z$, and the complex phase are functions of time. Their dynamical equations follow from the time-dependent variational principle (\ref{vareq}).
Introducing new variables $q_\rho, q_z, p_\rho, p_z$ via \begin{equation} {\rm Re}\; A_\rho = {p_\rho}/{(4 q_\rho)},~~ {\rm Im}\; A_\rho = {1}/{(4 q_\rho^2)},~~ {\rm Re}\; A_z = {p_z}/{(4 q_z)},~~ {\rm Im}\; A_z = {1}/{(8 q_z^2)} \end{equation} one finds that their dynamical equations are equivalent to the canonical equations of motion belonging to the Hamiltonian \begin{eqnarray}\label{hamiltoniandd} H &=& T+V =\frac{p_\rho^2}{2}+\frac{p_z^2}{2} \; +\frac{1}{2 q_\rho^2}+ 2 \gamma_\rho^2 q_\rho^2+\frac{a/a_{\rm d}} {2 \sqrt{2\pi} q_\rho^2q_z} + \frac{1}{8 q_z^2}+2 \gamma_z^2 q_z^2\nonumber \\ &+&\frac{1+{q_\rho^2}/{q_z^2}-{3 q_\rho^2 \arctan \sqrt{{q_\rho^2}/{(2 q_z^2)}-1}}{\Big/}{\left(q_z^2 \sqrt{{2 q_\rho^2}/{q_z^2}-4}\right)}}{6 \sqrt{2 \pi} q_\rho^4 q_z\left({1}/{q_z^2}-{2}/{q_\rho^2}\right)}\,. \end{eqnarray} Thus the variational ansatz has turned the problem into one corresponding to a two-dimensional nonintegrable Hamiltonian system, which will exhibit all the features familiar from nonlinear dynamics studies of such systems. From the shape of the potential, which is shown in Fig.~\ref{vnumplot2} as a function of the ``position'' variables $q_\rho, q_z$, these features can already be read off qualitatively. At the potential minimum sits the stable stationary ground state (elliptic fixed point), while at the saddle point one finds an unstable excited stationary state (hyperbolic fixed point). \begin{figure}\label{vnumplot2} \caption{Potential $V(q_\rho, q_z)$ in the Hamiltonian (\ref{hamiltoniandd}) for dipole-dipole long-range interaction.} \includegraphics[width=0.55\textwidth]{fig4} \end{figure} To quantitatively characterize the dynamics of the variational condensate wave functions we follow the trajectories in the four-dimensional configuration space spanned by the coordinates of the real and imaginary parts of $A_\rho$ and $A_z$.
Since the total mean-field energy is a constant of motion the trajectories are confined to three-dimensional hyperplanes, and their behavior can most conveniently be visualized by two-dimensional Poincar{\' e} surfaces of section defined by requiring one of the coordinates to assume a fixed value. We consider Poincar{\' e} surfaces of section defined by the condition that the imaginary part of $A_z(t)$ is zero. Each time the trajectory crosses the plane ${\rm Im}\;{A_z} = 0$, the real and imaginary parts of $A_\rho(t) = A_\rho^r(t) + i A_\rho^i(t)$ are recorded. In Fig.~\ref{poincare} surfaces of section are plotted for five different, increasing, values of the mean-field energy. The physical parameters of the experiment of Koch et al. \cite{koch08} are adopted, and the scattering length is fixed to $a/a_{\rm d}=0.1$, away from its critical value. At these parameters, the variational mean-field energy of the ground state is $N E_{\rm gs} =4.24 \times 10^5$ (in units of $E_{\rm d}$) and represents the local minimum on the two-dimensional mean-field energy landscape, plotted as a function of the width parameters. The variational energy of the second, unstable, stationary state at these experimental parameters is $N E_{\rm es}=6.24 \times 10^5$; it corresponds to the saddle point on the mean-field energy surface. Between these two energy values the motion on the trajectories is bound, while for energies above the saddle-point energy the motion on the trajectories can become unbound: once the saddle point is traversed by a trajectory $A_\rho(t)$, $A_z(t)$, the parameters run to infinity, meaning a shrinking of the quantum state to vanishing width, i.e., a collapse of the condensate takes place.
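The surface-of-section construction described here is easy to sketch numerically. Since the full Hamiltonian (\ref{hamiltoniandd}) is lengthy, the sketch below substitutes a simple two-degree-of-freedom toy Hamiltonian (a H\'enon-Heiles-type potential; this stand-in, the step sizes, and the initial condition are all our own assumptions) and keeps only the bookkeeping that carries over: fixed-step integration of Hamilton's equations, detection of a sign change of the section coordinate, and linear interpolation to locate the crossing.

```python
import math

def henon_heiles_rhs(state):
    # Toy stand-in Hamiltonian (NOT eq. (hamiltoniandd)):
    # H = (p1^2 + p2^2)/2 + (q1^2 + q2^2)/2 + q1^2 q2 - q2^3/3
    q1, q2, p1, p2 = state
    return (p1, p2,
            -(q1 + 2.0 * q1 * q2),
            -(q2 + q1 * q1 - q2 * q2))

def rk4_step(state, dt, rhs):
    # One fixed-step fourth-order Runge-Kutta step.
    k1 = rhs(state)
    k2 = rhs(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
    k3 = rhs(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
    k4 = rhs(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(s + dt * (a + 2 * b + 2 * c + d) / 6.0
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

def surface_of_section(state, dt=0.005, t_max=200.0, rhs=henon_heiles_rhs):
    """Record (q1, p1) each time the trajectory crosses q2 = 0 with p2 > 0,
    locating the crossing by linear interpolation between steps."""
    points = []
    t = 0.0
    while t < t_max:
        new = rk4_step(state, dt, rhs)
        if state[1] < 0.0 <= new[1] and new[3] > 0.0:
            f = -state[1] / (new[1] - state[1])   # interpolation fraction in [0, 1)
            points.append((state[0] + f * (new[0] - state[0]),
                           state[2] + f * (new[2] - state[2])))
        state = new
        t += dt
    return points

# A bound, low-energy orbit: the recorded points trace a closed curve
# (a quasi-periodic torus) in the section plane.
pts = surface_of_section((0.0, -0.1, 0.3, 0.3))
print(len(pts))   # a few dozen crossings for this bound orbit
```

For the condensate problem the section coordinate would be ${\rm Im}\,A_z$ instead of $q_2$, and the recorded pair would be $(A_\rho^r, A_\rho^i)$; the crossing logic is unchanged.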
\begin{figure} \centering\includegraphics[width=1\textwidth]{fig5} \caption{Poincar{\' e} surfaces of section of the condensate wave functions for dipolar interaction represented by their width parameters at the scaled trap frequency $N^2 \bar{\gamma} = 3.4\times 10^{4}$, aspect ratio $\lambda =6$, and the scattering length $a/a_{\rm d}=0.1$. The surfaces of section correspond to increasing values of the mean-field energy (in units of $E_{\rm d}$): a) $NE = 4.5 \times 10^5$, b) $NE = 6.00 \times 10^5$, c) and d) $NE = 6.24 \times 10^5$, e) $NE = 9 \times 10^5$, f) $NE = 6.00 \times 10^6$. } \label{poincare} \end{figure} The energy in Fig.~\ref{poincare} a) lies slightly above the energy of the stationary ground state. The initially stationary state has evolved into a periodic orbit (fixed point in the surface of section), corresponding to a state of the condensate whose motion is periodic. The oscillations of the width parameters $A_\rho(t)$ and $A_z(t)$ represent oscillatory stretchings of the condensate along the $\rho$ and $z$ directions. The stable {\em periodic} orbit in the surface of section is surrounded by elliptical, quasi-periodic orbits, representing quasi-periodic oscillations of the condensate. As the energy is increased further, in Fig.~\ref{poincare} b), new periodic and quasi-periodic orbits are born, and the motion is still regular. In Fig.~\ref{poincare} c) we have reached the saddle-point energy. Now chaotic orbits have appeared in the vicinity of the unstable excited stationary state (hyperbolic fixed point). Figure~\ref{poincare} d) shows an enlargement of this region in phase space. In contrast to the (quasi-) periodic stretching oscillations of the condensate within the elliptical islands, the chaotic motion of the parameters describes a condensate which does not yet collapse but whose widths fluctuate irregularly. 
In the surfaces of section shown in Fig.~\ref{poincare} e) and f), with mean-field energies well above the saddle-point energy, regular islands are still clearly visible. These stable islands are surrounded by chaotic trajectories. Since ergodic motion along these trajectories comes close to every point in the configuration space, the chaotic motion sooner or later leads to a crossing of the saddle point and then to the collapse of the condensate wave functions. It can be seen that with growing energy above the saddle point the sizes of the stable regions shrink. The kinematically allowed regions surrounding the stable islands are hardly recognizable any more since high above the saddle-point energy the chaotic motion becomes more and more unbound, and thus trajectories cross the Poincar{\' e} surfaces of section only a few times, if ever, before they escape to infinity and collapse takes place. It must be stressed, however, that stable islands persist even far above the saddle-point energy, implying the existence of quasi-periodically oscillating nondecaying modes of dipolar condensate wave functions. \section{Summary and Conclusions} We have demonstrated that variational forms of the Bose-Einstein condensate wave functions convert the condensates via the Gross-Pitaevskii equation into Hamiltonian systems that can be studied using the methods of nonlinear dynamics. We have also shown that these results serve as a useful guide in the search for nonlinear dynamics effects in the numerically accurate quantum calculations of Bose-Einstein condensates with long-range interactions. The existence of stable islands as well as chaotic regions for excited states of dipolar Bose-Einstein condensates is a result that could be checked experimentally. One conceivable way of creating such collectively excited states is to prepare the condensate in the ground state and then to non-adiabatically reduce the trap frequencies.
One might question whether the Gross-Pitaevskii equation is adequate to describe the types of complex dynamics discussed in this paper in ``real'' condensates. For example, in the chaotic regime local density maxima might occur for which losses by two-body or three-body collisions would have to be taken into account. However, by virtue of the scaling laws discussed in Sections \ref{scaling1/r} and \ref{scaling} parameter ranges can always be found where the particle densities remain small even in these regimes, and the Gross-Pitaevskii equation is applicable. The advantage of the simple variational ansatz adopted in this paper is that the analysis of the nonlinear dynamical properties of Bose-Einstein condensates becomes particularly transparent. Numerical quantum calculations to confirm the variational findings for dipolar gases and the extensions to structured condensate states \cite{goral00,dutta07} are under way. We have already seen in Sec.~\ref{1/r}, by comparing variational and accurate numerical quantum results for Bose-Einstein condensates with attractive $1/r$ interaction, that the nonlinear dynamical properties predicted by the variational calculation were confirmed by the full quantum calculations. We therefore have good reason to believe that this will also be true once the full quantum calculations of the dynamics of excited condensate wave functions of dipolar gases have become available. \begin{theacknowledgments} Part of this work has been supported by Deutsche Forschungsgemeinschaft. H.C. is grateful for support from the Landesgraduiertenf\"orderung of the Land Baden-W\"urttemberg. \end{theacknowledgments} \bibliographystyle{aipproc}
\section{INTRODUCTION} The Near-Infrared Extragalactic Background Light (NIREBL) is the integrated light of the entire cosmic history in the near infrared. Thus, the origin of the NIREBL is essential to probe the formation and evolution of galaxies from birth to the present Universe. Since current technology limits us from resolving the diffuse, faint, and distant objects that contribute to the NIREBL brightness, we should rely on measurements of spatial fluctuations and absolute brightness to understand the nature of the NIREBL. The absolute brightness measures the background intensity, and the spatial fluctuation measures the clustering properties of the emitting sources. The first reliable measurement of the absolute brightness was conducted at 1.25, 2.2, 3.5, and 4.9 $\micron$ with the Diffuse InfraRed Background Experiment (DIRBE) on the Cosmic Background Explorer (COBE), although it experienced difficulties in subtracting the contribution from Galactic stars due to the large confusion limit \citep{gorjian00,wright00,cambresy01,levenson07,sano15,sano16}. They found a brightness 2 to 8 times larger than the Integrated Light of Galaxies (ILG). Thanks to its smaller beam size and low-resolution spectrograph, the Infrared Telescope in Space (IRTS) confirmed the NIREBL excess by observing the isotropic background spectrum at short wavelengths (1.4 - 4 $\micron$) with better precision \citep{matsumoto05,matsumoto15}. With better point source subtraction, AKARI also succeeded in confirming the excess NIREBL spectrum at 2 - 5 $\micron$ \citep{tsumura13b}. The spectra obtained by COBE, IRTS, and AKARI are consistent within the common wavelength region ($\lambda$ $>$ 2 $\micron$). Several Extragalactic Background Light (EBL) observations were also carried out at around the optical wavelength range \citep{bernstein07,matsuoka11,mattila17a,mattila17b,kawara17,matsuura17,zemcov17}.
However, their brightness levels are not in good agreement at $<$ 0.7 $\micron$, and this discrepancy may have been caused by uncertainties in the foreground subtraction. The excess brightness was initially explained by the first generation of stars that formed at the reionization era \citep{santos02,salvaterra03}. However, theoretical models based on the recent observations of high redshift galaxies indicate that the first stars contribute less than 1\% of the total absolute flux of the observed EBL \citep{cooray12b,yue13}. On the other hand, several studies argue that the excess brightness is not a real background but a measurement error. For example, \citet{dwek05} and \citet{kawara17} tried to explain the excess brightness with a subtraction error of the Zodiacal Light (ZL), which is the brightest diffuse foreground component. Unlike the absolute brightness measurement, the spatial fluctuation can be measured to mitigate the problem of foreground subtraction, since the fluctuation is less sensitive to the foreground components. For example, although the ZL is the brightest foreground component, it is expected to be very smooth over large angular scales \citep{abraham97,pyo12}. Therefore, the EBL fluctuation can be more clearly distinguished from the ZL. An excess EBL fluctuation was detected by \citet{kashlinsky05} with Spitzer at angular scales up to 5$\arcmin$ at wavelengths between 3.6 and 8 $\micron$, after subtracting the contribution from galaxies brighter than mag$_{AB}$ = 25. Subsequently, an excess fluctuation over the ILG was detected at angular scales up to 1$^{\circ}$, confirming the previous measurements \citep{kashlinsky12,cooray12a,mitchell15} using deeper and wider data from Spitzer. Using HST data, \citet{thompson07} and \citet{donnerstein15} measured fluctuations at 1.1 and 1.6 $\micron$ at the sub-arcminute scale.
\citet{matsumoto11} detected excess fluctuation above 100$\arcsec$ in AKARI (2.4, 3.2, and 4.1 $\micron$) data and found that the fluctuation follows a Rayleigh-Jeans-like spectrum (i.e. $\lambda$I$_\lambda$ $\sim$ $\lambda^{-3}$). Using a wider-field AKARI image, \citet{seo15} found the existence of excess power up to $\sim$ 0.3$^{\circ}$. At 1.1 and 1.6 $\micron$, the Cosmic Infrared Background ExpeRiment (CIBER) measured large angular scale ($<$ 1$^{\circ}$) fluctuations \citep{zemcov14}. They reported a clear excess fluctuation at angular scales between 0.1$^{\circ}$ and 0.36$^{\circ}$. They used the Intra Halo Light (IHL) at z $<$ 3 to explain the excess fluctuation \citep{cooray12a,zemcov14}. The IHL source consists of stars tidally stripped during galaxy mergers and interactions. However, the IHL is not observationally confirmed and cannot explain all of the observed excess fluctuation. Consequently, there is no clear consensus regarding the origin of the NIREBL. To understand the origin of the excess fluctuations, we examine the fluctuation spectrum using IRTS data at scales up to several degrees. Such large-scale fluctuations have never been explored before. Our approach can also constrain the physical properties of the excess origins. The outline of this paper is as follows. In section \ref{S:instrument}, we introduce the instrument. We describe the observation and data reduction in section \ref{S:irtsobs}. The data analysis is described in section \ref{S:datareduction}. The power spectrum estimation and the result are shown in sections \ref{S:irtsps} and \ref{S:result}, respectively. Discussions are given in section \ref{S:discussion}. Finally, we summarize our results in section \ref{S:summary}. \section{INSTRUMENT} \label{S:instrument} IRTS, the first Japanese orbiting IR telescope, onboard the Space Flyer Unit (SFU), was launched on March 18 UT in 1995. It surveyed 7\% of the sky until its liquid Helium was exhausted on April 25.
The IRTS is a 15 cm Ritchey-Chr\'etien type telescope with a focal length of 60 cm. The whole system, together with four focal-plane instruments, was cooled down to 2 K using liquid helium \citep{murakami96}. Among those instruments, the Near-Infrared Spectrometer (NIRS) is optimized to study the diffuse background with deep and wide sky coverage. The NIRS covers the wavelength range between 1.4 and 4.0 $\micron$ with a 0.13 $\micron$ spectral resolution. The incident beam passes through a 1.4 mm $\times$ 1.4 mm slit, which corresponds to an 8$\arcmin$ $\times$ 8$\arcmin$ area in the sky, and is diffracted by the grating. The dispersed beam is then focused on a linear array consisting of 24 InSb detector elements \citep{noda94}. To reduce the background errors arising from Galactic stars, the NIRS has a higher spatial resolution than DIRBE, and a cold shutter is installed to measure the dark current. The stability of the detector is monitored with a calibration lamp during the observation. The NIRS uses J-FET charge-integrating amplifiers operating at 60 K to detect a low background brightness by reducing the noise and achieving a high sensitivity. The InSb detector reads out data at a 4 Hz sampling rate. The charges were integrated for 65.54 seconds before a reset, of which 8.192 seconds were used for the dark-current measurement with the shutter closed. Details of the NIRS performance are found in \citet{noda96}. \section{OBSERVATION AND DATA REDUCTION}\label{S:irtsobs} Of the entire IRTS coverage, we used data initially reduced by \citet{matsumoto05}. They used data obtained at Galactic latitudes above 40$^{\circ}$ to avoid the strong foreground emission from stars and dust in the Galaxy. Data obtained while passing through the South Atlantic Anomaly region, where high-energy charged particles increase the noise level, were rejected. 
Of the 65.54-second charge integration between resets, the first 4 seconds of data were not used because of anomalous residual charges after the reset. The flux (e$^{-}$ s$^{-1}$) of each IRTS field was then obtained from a linear fit to the charges over 5 seconds along the scan direction. In the linear-fit process, data contaminated by cosmic rays, instrumental noise, and stars were excluded. The dark current was subtracted after the linear-fit process. Details of this process are described in \citet{matsumoto05}. Astrometry was achieved within 2.2$\arcmin$ using an attitude control sensor that was accurate enough to identify the bright Galactic stars \citep{murakami96}. The absolute calibration was achieved to within a few percent using the standard stars observed by the IRTS \citep{noda96}. The calibration factor measured in the laboratory and that derived from the observed stellar fluxes were in good agreement. The final data at 24 discrete bands cover 1\% of the whole sky. From these IRTS spectra, we made synthesized 1.6 and 2.2 $\micron$ band fluxes. Specifically, fluxes at 1.53, 1.63, and 1.73 $\micron$ were averaged to obtain the 1.6 $\micron$ flux, and those at 2.03, 2.14, 2.24, and 2.34 $\micron$ were averaged to obtain the 2.2 $\micron$ flux (hereafter, IRTS SKY). \section{DATA ANALYSIS} \label{S:datareduction} To measure the spatial fluctuation of the NIREBL, we need a brightness map of the background. The background brightness can be derived by subtracting the brightness of all astrophysical foreground components from the observed sky brightness. In this section, we describe how we estimate the brightness of the observed sky and of foregrounds such as the Diffuse Galactic Light (DGL), Integrated Star Light (ISL), and ZL. We also pixelized the estimated brightness of each component. Since the IRTS scanned the sky unevenly, the data were pixelized into pixels covering a nearly equal area for the power spectrum analysis. 
To do this, we used the well-developed HEALPix tool, which stands for Hierarchical Equal Area isoLatitude Pixelization \citep{gorski05}. HEALPix divides the surface of the sphere into pixels of roughly equal shape and identical size. The resolution of the pixelization is defined by N$_{side}$ = 2$^{k}$, where $k$ can be any positive integer, and the number of pixels in the whole sky is 12 N$_{side}^2$. Considering the IRTS FoV (i.e. 20$\arcmin$ $\times$ 8$\arcmin$), we used N$_{side}$ $=$ 64, which corresponds to a 55$\arcmin$ $\times$ 55$\arcmin$ pixel size and divides the whole sky into 49152 pixels. \subsection{IRTS data analysis} In this section, we describe the additional clipping process that was performed before the pixelization of the IRTS SKY. The clipping process is as follows. Using the IRTS SKY described in section \ref{S:irtsobs}, we performed a correlation analysis against the ZL (see figure \ref{F0}); the estimation of the ZL is described in section \ref{SSS:irtszl}. Since the ZL brightness accounts for more than 90\% of the IRTS SKY, figure \ref{F0} should show a strong correlation. However, some of the IRTS data located in the low-ZL-brightness region do not follow the correlation. These outliers are mostly located at lower Galactic latitudes, as shown in figure \ref{F15}, which indicates that they are due to residual bright stars that were not rejected in the data reduction process. To reject these data, we linearly fitted the correlation diagram in figure \ref{F0} and excluded IRTS data deviating by more than 2$\sigma$, which is around two times larger than the brightness range of the ISL. The fitting process was done as follows. On the correlation diagram in figure \ref{F0}, we divided the y-axis (i.e. ZL) into constant brightness intervals. At each interval, we determined the densest data region along the x-axis (i.e. IRTS SKY). Finally, the linear fit was done along those densest data regions. 
Around 18\% of the IRTS data was rejected through this process. The remaining IRTS data at 1.6 and 2.2 $\micron$ were then assigned to the HEALPix pixels according to their positions. Owing to the large HEALPix pixel size, a few to tens of IRTS fields belong to each HEALPix pixel; the number of IRTS fields per HEALPix pixel is shown in figure \ref{F1}. The intensity of each HEALPix pixel was assigned the mean brightness of the IRTS SKY within it. If the number of IRTS fields in a HEALPix pixel was less than 5, we masked the pixel; such pixels amount to 14\% of the HEALPix pixels covered by the IRTS fields. The pixelized 1.6 and 2.2 $\micron$ maps were stored in the FITS format \citep{calabretta02} for the next step, evaluating the power spectrum of the IRTS SKY. \subsection{Foregrounds data analysis} In this section, we describe how we estimated the foreground brightness of the DGL, ISL, and ZL. Each foreground brightness was then pixelized into the HEALPix scheme. \subsubsection{Diffuse Galactic Light}\label{SSS:irtsdgl} The DGL consists of starlight scattered by dust grains distributed in interstellar space. Since the DGL is diffuse and faint, it is difficult to observe directly. Nevertheless, far-IR thermal emission (e.g. 100 $\micron$) and the HI or CO column density \citep{brandt12} correlate closely with the DGL brightness. In the near-IR wavelength region, the relation between the 100 $\micron$ thermal emission and the DGL brightness has been studied based on near-IR observations \citep{arai15, sano15}. They derived a scale factor between the 100 $\micron$ thermal emission and the near-IR DGL brightness and fitted various model spectra to it. The best-fitted model was the \citet{brandt12} model, as shown in figure \ref{F2}. The fitted scale factor enables us to derive the near-IR DGL brightness from the 100 $\micron$ intensity. 
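The scale-factor relation just described reduces to a one-line conversion. A minimal sketch, in which the scale-factor and intensity values are placeholders rather than the fitted numbers:

```python
import numpy as np

def dgl_brightness(i100_mjy_sr, scale_factor, cib_offset=0.8):
    """Sketch of the DGL estimate: subtract the cosmic-infrared-background
    offset (0.8 MJy/sr) from the 100-micron map intensity, then multiply by
    the band's fitted scale factor to get the near-IR DGL brightness.
    The scale_factor values used here would be placeholders."""
    return scale_factor * (np.asarray(i100_mjy_sr, dtype=float) - cib_offset)
```

The same helper applies per band before the band-averaged (synthesized) DGL brightness is formed.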
To estimate the DGL in the IRTS fields using the scale factor shown in figure \ref{F2}, we used the 100 $\micron$ thermal emission from the SFD dust map \citep{schlegel98}, which was also used for the scale-factor derivation \citep{arai15, sano15}. Nevertheless, since the SFD map was not corrected for the cosmic infrared background brightness, we subtracted 0.8 MJy sr$^{-1}$ \citep{puget96,fix98,lagache00,matsuoka11} so that the map contains only the dust emission component. From the brightness-corrected SFD map, we obtained the 100 $\micron$ intensities belonging to each IRTS FoV and averaged them. The averaged intensity was then multiplied by the scale factor at the IRTS bands to derive the DGL brightness. The DGL brightness at the IRTS bands (i.e. 1.53, 1.63, 1.73, 2.03, 2.14, 2.24, and 2.34 $\micron$) was then averaged to make the synthesized DGL brightness at the 1.6 and 2.2 $\micron$ bands (hereafter, IRTS DGL). This process was done for all IRTS fields, and the result was pixelized into the HEALPix scheme. \subsubsection{Integrated Star Light}\label{SSS:irtsisl} The ISL denotes the Galactic starlight that contributes to the brightness of the observed IRTS SKY. We could remove the contributions of the bright stars from the observed sky brightness; however, the contributions of faint stars could not be removed. These faint stars are defined by the limiting magnitude of the IRTS, as shown in figure \ref{F3}. In this section, we describe how we estimated their contributions. To estimate the ISL from stars fainter than the IRTS limiting magnitude, we used the 2MASS point/extended source catalog \citep{cohen03}, a well-known Galactic star catalog in the near-IR. Since the limiting magnitude of the 2MASS catalog is much fainter than that of the IRTS, we could estimate the ISL caused by stars lying between the IRTS and the 2MASS limiting magnitudes. 
To account for stars fainter than even the 2MASS limiting magnitudes, we used Galactic model stars. The 2MASS limiting magnitudes for the H and K bands are 15.1 and 14.3, respectively. First, we describe how we estimated the ISL for the IRTS fields based on the 2MASS stars; a flow diagram of the process is shown in figure \ref{Fislmap}. We reconstructed a high-resolution map (pixel size 10$\arcsec$) whose size is more than twice the IRTS FoV. Then, we distributed the 2MASS stars on the high-resolution map, with the map center set by the nominal IRTS field position from the IRTS attitude information. Here, we used 2MASS stars lying between the IRTS and the 2MASS limiting magnitudes. Then, we convolved the high-resolution map with the IRTS/NIRS beam pattern, whose Full Width at Half Maximum (FWHM) is 2.4$\arcmin$. From the beam-convolved map, we clipped out regions covering the IRTS FoV centered on the nominal IRTS position. However, scanning during the IRTS observation means that the effective beam pattern is not constant along the scan direction; it is trapezoidal, as shown in figure 2 of \citet{matsumoto05}. To apply this to the clipped map, we multiplied by a trapezoidal weighting factor that is unity over the central 12$\arcmin$ along the scan direction and decreases linearly to zero at both ends. This process was done for all IRTS fields to construct the ISL based on the 2MASS stars. Next, to estimate the ISL from stars fainter than the 2MASS limiting magnitude, we used the TRILEGAL Galactic star model \citep{girardi05}. The model provides stars from the bright end down to a limiting magnitude of 30, modeling the number of stars and their brightness at various wavelengths toward any line of sight over a specified FoV. 
However, since the model stars have no astrometric information, we randomly assigned positions to them and distributed them on the high-resolution map. Here, we used model stars fainter than the 2MASS limiting magnitude. Then, the model-based ISL was estimated in the same manner as the ISL derivation based on the 2MASS stars. However, the brightness of the model stars is known to have a systematic offset compared with that of the 2MASS stars (see figure 1 in \cite{matsumoto15}). To correct this, we multiplied the brightness of the model-based ISL by a factor of 1.23. The ISL brightness at 1.6 and 2.2 $\micron$ based on the model stars is 15\% and 24\% of the ISL brightness based on the 2MASS stars, respectively. Finally, the total ISL (hereafter, IRTS ISL) for the IRTS fields was made by adding the ISL based on the 2MASS stars and that based on the model. The IRTS ISL was then pixelized into the HEALPix scheme. Here, we used the nominal IRTS position to estimate the ISL. However, the IRTS attitude information has a 1$\sigma$ error of 2.2$\arcmin$, which generates a brightness error on the IRTS ISL. To estimate the ISL error, we performed a simulation: we reconstructed ISL maps in the same manner as above but using IRTS field positions shifted within the IRTS astrometry error range. The shift amount was randomly chosen from a normal distribution with 1$\sigma$ equal to the astrometry error and added to each nominal IRTS position. We repeated the simulation 100 times and made 100 ISL maps. The number of simulations (i.e. 100) was determined as follows. For each ISL map, we took the median brightness, and we repeated the simulation until the distribution of the median values showed a Gaussian distribution, as shown in figure \ref{F13}. Here, we used the median because each ISL map has a brightness distribution skewed toward the bright end by bright Galactic stars. 
Among the 100 simulated ISL maps, we selected two and took the pixel-by-pixel brightness difference between them. Then, we calculated the 1$\sigma$ of the distribution of the brightness difference. This was done 4950 times, covering all possible combinations of two maps among the 100 ISL maps. The 1$\sigma$ values from the 4950 combinations are fairly consistent (i.e. the variability relative to the average is around 4\%). We defined the maximum value among the 4950 combinations as the ISL error, which is 1.31 nW m$^{-2}$ sr$^{-1}$ and 0.71 nW m$^{-2}$ sr$^{-1}$ at 1.6 and 2.2 $\micron$, respectively. \subsubsection{Zodiacal Light}\label{SSS:irtszl} The ZL at near-IR wavelengths consists of sunlight scattered by Interplanetary Dust (IPD) particles in the Solar system; therefore, the ZL spectrum resembles that of the Sun. However, the ZL brightness varies with time owing to the orbital motion of the Earth around the Sun (i.e. seasonal variation). \citet{kelsall98} constructed a model to estimate the ZL accounting for the seasonal variation, and using this model we reconstructed the ZL map for the IRTS fields. Nevertheless, the ZL model brightness requires correction for two reasons. First, the Kelsall model does not provide the 1.6 $\micron$ ZL brightness; therefore, we initially obtained the 1.25 $\micron$ ZL brightness and derived the 1.6 $\micron$ ZL brightness assuming that the spectral shape does not depend on the sky position \citep{tsumura10}. Second, there exists a systematic uncertainty between the ZL model and the observed ZL. The correction was made in the same manner as in \citet{matsumoto15}; the detailed descriptions are as follows. We obtained the 1.25 and 2.2 $\micron$ ZL model brightness (hereafter, IRTS ZL) at the IRTS positions using their observation date information. Then, we pixelized the IRTS ZL into the HEALPix scheme. 
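The pair-difference error estimate above (all $\binom{100}{2} = 4950$ pairs of simulated maps) can be sketched compactly; the function name and map shapes are illustrative:

```python
import numpy as np
from itertools import combinations

def pairwise_sigma_error(maps):
    """Sketch of the ISL error estimate: for every pair of simulated ISL
    maps, take the pixel-wise brightness difference and its 1-sigma, then
    adopt the maximum over all pairs (illustrative reconstruction)."""
    sigmas = [np.std(maps[i] - maps[j])
              for i, j in combinations(range(len(maps)), 2)]
    return max(sigmas), len(sigmas)
```

Taking the maximum over all pairs is a conservative choice: any single pair already gives an unbiased spread, so the maximum bounds the error from above.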
Next, we performed brightness correlation studies between the pixelized IRTS ZL model and the IRTS data after subtracting the IRTS DGL and the IRTS ISL from the IRTS SKY. Since we subtracted the DGL and the ISL, the brightness of the IRTS data contains only the actual ZL and the NIREBL. Under the assumption that the NIREBL is homogeneous and isotropic, we expect a strong correlation with the IRTS ZL model. As shown in figure \ref{zlcorr}, they show an excellent correlation in both bands, which implies three things. First, the IRTS data after subtracting the IRTS DGL and the IRTS ISL from the IRTS SKY are well represented by the ZL. Second, the ZL spectral shape is uniform over the IRTS fields. Third, the NIREBL is a homogeneous and isotropic emission. According to the third implication, the slope of the correlation should be unity. However, because of the band difference and the systematic uncertainty between the ZL model and the actual ZL, the slopes at 1.6 and 2.2 $\micron$ are 0.811 and 1.146, respectively. The band difference and systematic uncertainty were corrected by multiplying the ZL model brightness by these slopes. Then, the corrected IRTS ZL was pixelized into the HEALPix scheme. \section{POWER SPECTRUM ANALYSIS}\label{S:irtsps} In this section, we describe the power spectrum analysis of the NIREBL. The power spectrum of the NIREBL was estimated from the pixelized NIREBL brightness map, which was derived by subtracting the IRTS DGL, IRTS ISL, and IRTS ZL from the IRTS SKY in the HEALPix format. The detailed procedure for the power spectrum analysis is as follows. The power spectrum expresses the relative brightness distribution as a function of the angular scale in degrees ($\theta$) or the multipole moment ($\textit{l}$), related by $\theta = 180 / \textit{l}$. 
The relative brightness of each map is \begin{equation} \delta I(\Theta,\Phi)=I(\Theta,\Phi)-<I(\Theta,\Phi)>, \label{irtseqa1} \end{equation} \noindent where ($\Theta$, $\Phi$) are angular coordinates in the sky, and $<I(\Theta,\Phi)>$ denotes the averaged brightness of the observed region. Equation (\ref{irtseqa1}) can be decomposed into spherical harmonics as \begin{equation} \delta I(\Theta,\Phi)=\sum _{\textit{l}=0}^{\infty}\sum _{m=-\textit{l}}^{\textit{l}}a_{lm}Y_{lm}(\Theta,\Phi), \label{irtseqa2} \end{equation} \noindent where \begin{equation} a_{lm} \sim \Omega_{pixel} \sum _{\textit{i}=0}^{Npix} \delta I(\Theta_{i},\Phi_{i})Y^{*}_{lm}(\Theta_{i},\Phi_{i}). \label{irtseqa3} \end{equation} \noindent Here, $Y_{lm}(\Theta,\Phi)$ are Laplace's spherical harmonics, $a_{lm}$ are the multipole coefficients of the expansion, and $\Omega_{pixel}$ is the solid angle of the HEALPix pixel. $\textit{l}$ $=$ 0 is the monopole and $\textit{l}$ $=$ 1 is the dipole term. The power spectrum is then expressed by the variance of $a_{lm}$ as \begin{equation} C_{\textit{l}}=\frac{1}{2{\textit{l}}+1} \sum _{m=-\textit{l}}^{\textit{l}} \left | a_{lm} \right |^{2}. \label{irtseqa4} \end{equation} \noindent However, the above procedure is only valid for full-sky data with no mask; applied to partial sky coverage, it produces a biased power spectrum. Therefore, since the IRTS observed only 1\% of the whole sky, we need another approach to correct for the biases. There are two popular methods to measure the power spectrum for an incomplete sky coverage: maximum likelihood estimation \citep{bond98,tegmark97} and pseudo power spectrum estimation \citep{hivon02}. For the IRTS power spectrum analysis, we used the publicly available PolSpice software\footnote{http://www2.iap.fr/users/hivon/software/PolSpice/} based on the pseudo power spectrum analysis \citep{chon04}. 
Here, \textit{pseudo} means that the power spectrum is computed on an incomplete sky, breaking the isotropy assumption, and PolSpice corrects it for the partial sky coverage. To measure the true power spectrum, PolSpice calculates $a_{lm}$ from the $\delta I(\Theta,\Phi)$ map. Then, the pseudo power spectrum for the incomplete sky coverage is calculated from equation (\ref{irtseqa4}). Using Legendre polynomials $P_{\textit{l}} (\cos\eta)$, the pseudo power spectrum $C_{\textit{l}}$ is then converted to the correlation function \begin{equation} \xi(\eta)=\frac{1}{4\pi} \sum _{\textit{l}=0}^{\infty} (2\textit{l}+1) C_{\textit{l}} P_{\textit{l}} (\cos\eta), \label{irtseqa5} \end{equation} \noindent where $\xi(\eta)$ is the two-point correlation function defined by \begin{equation} \xi(\eta)= <\delta I(\Theta,\Phi) \delta I(\Theta',\Phi')>. \label{irtseqa6} \end{equation} \noindent Here, the angle bracket denotes the ensemble average and $\eta$ is the angle between $(\Theta,\Phi)$ and $(\Theta^\prime,\Phi^\prime)$. To correct for the incomplete sky coverage, the correlation function is divided by the mask correlation function, which is estimated from the mask map whose pixel value is 0 for uncovered sky and 1 for covered sky. The corrected correlation function $\xi(\eta)$ is then inserted into the following equation to derive the true power spectrum: \begin{equation} C_{\textit{l}}=2\pi \int_{-1}^{1} \xi(\eta) P_{\textit{l}}(\cos\eta) d(\cos\eta). \label{irtseqa7} \end{equation} \noindent As described above, the tool needs two maps in the HEALPix format: a brightness map and a mask map having 0 for uncovered sky and 1 for covered sky. Using the NIREBL brightness map in figure \ref{F4}, we calculated the power spectrum. However, the NIREBL power spectrum still contains photon and readout noise. Since the noise level of each IRTS SKY measurement is unknown, we performed a Monte Carlo simulation to estimate the noise power spectrum. 
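The $C_{\textit{l}} \leftrightarrow \xi(\eta)$ transform pair above can be written compactly with NumPy's Legendre utilities. This is a round-trip sketch of the two transforms only, not the PolSpice mask correction; the function names are illustrative:

```python
import numpy as np
from numpy.polynomial.legendre import legval, leggauss

def cl_to_xi(cl, eta):
    """Correlation function from a power spectrum:
    xi(eta) = (1/4pi) * sum_l (2l+1) C_l P_l(cos eta)."""
    l = np.arange(len(cl))
    coeffs = (2 * l + 1) * np.asarray(cl) / (4.0 * np.pi)
    return legval(np.cos(eta), coeffs)

def xi_to_cl(xi_func, lmax, n_quad=128):
    """Recover C_l by integrating xi(eta) against Legendre polynomials
    with Gauss-Legendre quadrature in cos(eta)."""
    x, w = leggauss(n_quad)                  # nodes/weights on [-1, 1]
    xi = xi_func(np.arccos(x))
    cl = []
    for l in range(lmax + 1):
        pl = legval(x, np.eye(lmax + 1)[l])  # P_l at the nodes
        cl.append(2.0 * np.pi * np.sum(w * xi * pl))
    return np.array(cl)
```

Because $\int_{-1}^{1} P_l P_{l'} \, dx = 2\delta_{ll'}/(2l+1)$, the round trip `xi_to_cl(lambda eta: cl_to_xi(cl, eta), lmax)` returns the input spectrum.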
First, we made a Gaussian distribution with a 1$\sigma$ equal to the readout and photon noise \citep{matsumoto05}. Under the Gaussian assumption, we randomly picked values from the distribution and assigned them to each IRTS field. The noise-assigned map was then pixelized into the HEALPix scheme, and the power spectrum was estimated from the pixelized noise map. This procedure was repeated 100 times and averaged to derive the final noise power spectrum, which was then subtracted from the NIREBL power spectra at 1.6 and 2.2 $\micron$. Nevertheless, the finite resolution and pixelization can suppress the power spectrum at small angular scales. The suppression can be corrected using a beam transfer function that depends on the shape and size of the PSF. However, we concluded that the beam transfer function of the IRTS/NIRS beam does not affect the fluctuation spectrum above 2$^{\circ}$, which is about twice the HEALPix pixel resolution. Since the power spectrum is only valid at angular scales above 2$^{\circ}$ according to Nyquist sampling, we did not apply the correction for the beam transfer function. \section{ERROR ESTIMATION}\label{S:error} Errors can be categorized into random and systematic components. Random errors include the sample variance of the power spectrum (\textit{$\delta I_{variance}$}), the attitude error of the IRTS (\textit{$\delta I_{attitude}$}), the model error of the DGL (\textit{$\delta I_{DGL}$}), and the binning error of the HEALPix pixels (\textit{$\delta I_{binning}$}). \textit{$\delta I_{variance}$} is the error induced by the power spectrum estimation: at a given angular scale, the number of possible modes is limited by the finite sky coverage, and smaller angular scales have smaller sample variance owing to the larger number of possible modes. 
However, we cannot estimate this error directly from the power spectrum analysis, since the small coverage of the IRTS fields makes the covariance matrix of the sample variance very noisy. Alternatively, we used the empirically determined Knox formula, which represents the $\chi^2$ distribution of $C_{\textit{l}}$ with its mean \citep{knox95}; the formula is given in Appendix C of \citet{thacker15}. \textit{$\delta I_{attitude}$} is the error induced by the inaccurate IRTS attitude; the detailed procedure of its calculation is described in section \ref{SSS:irtsisl}. \textit{$\delta I_{DGL}$} is transferred from the scale factor needed to convert the 100 $\micron$ (i.e. far-IR) intensity to the near-IR DGL brightness, as described in section \ref{SSS:irtsdgl}. \citet{arai15} compared the DGL model to the observed data and found that the scale factor has a 20\% uncertainty. To account for this error, we multiplied the scale factor by 1.2 and derived the DGL brightness ${I\textprime}_{DGL}$. Then, we subtracted the nominal DGL brightness $I_{DGL}$ from ${I\textprime}_{DGL}$. Finally, \textit{$\delta I_{DGL}$} was computed by taking the 1$\sigma$ of the brightness difference (i.e. ${I\textprime}_{DGL} - {I_{DGL}}$). \textit{$\delta I_{binning}$} is generated when computing the mean of the IRTS brightness values belonging to each HEALPix pixel. Our procedure to calculate \textit{$\delta I_{binning}$} is as follows. First, we made a binning-error map by taking the 1$\sigma$ of the IRTS brightness values belonging to each HEALPix pixel; we then computed the 1$\sigma$ of the binning-error map. This procedure was done for each astrophysical component as well as the sky. Finally, \textit{$\delta I_{binning}$} was calculated by combining all error components in quadrature: 
\begin{equation} \delta I_{binning} = \sqrt{\delta I_{SKY,binning}^{2}+\delta I_{DGL,binning}^{2}+\delta I_{ISL,binning}^{2}+\delta I_{ZL,binning}^{2}} \label{irtseqa8} \end{equation} \noindent Then, we combined all random errors using the following equation, excepting \textit{$\delta I_{variance}$}, whose power spectrum was calculated directly with the Knox formula \citep{knox95}: \begin{equation} \delta {I\textprime}_{random} = \sqrt{\delta I_{attitude}^{2}+\delta I_{DGL}^{2}+\delta I_{binning}^{2}} \label{irtseqa9} \end{equation} \noindent The power spectrum of \textit{$\delta {I\textprime}_{random}$} was calculated with a Monte Carlo simulation in the same manner as the power spectrum calculation for the readout and photon noise described in section \ref{S:irtsps}. Then, the power spectrum of the total random error, \textit{$\delta {C}_{{l, random}}$}, was calculated using the following equation: \begin{equation} \delta {C}_{{l, random}} = \sqrt{{\delta C\textprime}_{{l, random}}^{2}+\delta {C}_{{l, variance}}^{2}} \label{irtseqa10} \end{equation} \noindent where ${\delta C\textprime}_{l, random}$ is the power spectrum of $\delta {I\textprime}_{random}$ and $\delta {C}_{{l, variance}}$ is the power spectrum of \textit{$\delta I_{variance}$}. The systematic errors fall into two categories. One is the calibration error of the IRTS (\textit{$\delta I_{cal}$}), which is 3\% of the NIREBL brightness \citep{matsumoto05,matsumoto15}. The other is the limiting-magnitude error of the IRTS (\textit{$\delta I_{lim}$}), since the IRTS has a $\pm$ 0.5 limiting-magnitude uncertainty (see figure \ref{F3}). To compute \textit{$\delta I_{cal}$}, we multiplied the NIREBL brightness map by 0.03 and took its 1$\sigma$. \textit{$\delta I_{lim}$} causes a systematic brightness difference in the ISL. To compute this, we derived the ISL brightness error (i.e. 
\textit{$\delta {I\textprime}_{lim}$}) after adding $\pm$ 0.5 to the limiting magnitude of the IRTS. The \textit{$\delta {I\textprime}_{lim}$} derivation was the same as that of the nominal ISL brightness (i.e. \textit{$\delta {I}_{nominal}$}) described in section \ref{SSS:irtsisl}. Then, we took the brightness difference between \textit{$\delta {I\textprime}_{lim}$} and \textit{$\delta {I}_{nominal}$}, and \textit{$\delta {I}_{lim}$} was calculated by taking the 1$\sigma$ of this difference. Finally, the total systematic error was derived as \begin{equation} \delta {I}_{systematic} = \sqrt{\delta I_{cal}^{2}+\delta I_{lim}^{2}} \label{irtseqa11} \end{equation} \noindent The power spectrum of \textit{$\delta {I}_{systematic}$} was then calculated with a Monte Carlo simulation in the same manner as $\delta {C}_{{l, random}}$. Finally, the power spectrum of the total error was calculated using the following equation: \begin{equation} \delta {C}_{{l, total}} = \sqrt{{\delta C}_{{l, random}}^{2}+\delta {C}_{{l, systematic}}^{2}} \label{irtseqa12} \end{equation} \noindent where $\delta {C}_{{l, systematic}}$ is the power spectrum of \textit{$\delta {I}_{systematic}$}. The brightnesses of the errors listed above, excepting \textit{$\delta I_{variance}$}, are shown in Table \ref{T1}. \section{RESULT}\label{S:result} The fluctuation spectra of the NIREBL at 1.6 and 2.2 $\micron$ are shown in figures \ref{F8} and \ref{F9}, respectively. In figure \ref{F8}, the IRTS fluctuation shows a power law with angular scale; the power index close to $-1$ indicates that the fluctuation is random and structureless. This homogeneous and isotropic feature is also seen in the NIREBL map in figure \ref{F4}. For comparison, the fluctuation spectrum of the CIBER project \citep{zemcov14} at smaller angular scales ($<$ 1$^{\circ}$) is also shown; the IRTS and CIBER spectra appear to connect smoothly. 
Although there are no data between them, we can infer that the NIREBL fluctuation has a peak at around 1$^{\circ}$. In figure \ref{F9}, we also examine the fluctuation at 2.2 $\micron$. Since CIBER does not have 2.2 $\micron$ data, we needed to apply a scale factor to the CIBER 1.6 $\micron$ power spectrum. The scale factor was derived from the IRTS 1.6/2.2 $\micron$ color ratio, assuming that the color of the NIREBL does not depend on the sky position. The ratio was derived from the IRTS 1.6 and 2.2 $\micron$ correlation study, as shown in figure \ref{F10}, and was then multiplied with the CIBER 1.6 $\micron$ power spectrum to derive the CIBER 2.2 $\micron$ one. In addition to CIBER, we also compared the fluctuation spectra of AKARI at 2.4 $\micron$ \citep{matsumoto11,seo15} and Spitzer at 3.6 $\micron$ \citep{kashlinsky12} by scaling their fluctuation amplitudes to 2.2 $\micron$ under the Rayleigh-Jeans assumption \citep{matsumoto11}. As a result, the fluctuation spectra from AKARI and Spitzer are marginally consistent with each other. In particular, the Spitzer data extend toward larger angular scales, where the fluctuation amplitude is 10 times larger than that of the ILG. The discrepancy between CIBER and the other measurements (i.e. Spitzer and AKARI) may indicate that the contributing components of the NIREBL differ with wavelength. Alternatively, since the scaling is only valid if the fluctuation follows a Rayleigh-Jeans-like spectrum, the discrepancy may be caused by the scale factors. Nevertheless, their similar spectral shapes suggest a common origin. Furthermore, the fluctuation spectrum of the IRTS at 2$^{\circ}$ lies between the CIBER and Spitzer fluctuations, so either case implies a peak at around the 1$^{\circ}$ angular scale. Evidently, the 1$^{\circ}$ peak seems to be a common feature over a broad near-IR wavelength range. 
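The Rayleigh-Jeans scaling used above to bring amplitudes measured at different wavelengths to 2.2 $\micron$ is a single power-law factor. A minimal sketch, with illustrative rather than published amplitudes:

```python
def rj_scale(amplitude, lam_from_um, lam_to_um):
    """Scale a fluctuation amplitude (in lambda*I_lambda units) between
    wavelengths under the Rayleigh-Jeans-like assumption
    lambda*I_lambda ~ lambda^-3."""
    return amplitude * (lam_to_um / lam_from_um) ** -3

# e.g. an amplitude measured at 2.4 um (AKARI band) scaled to 2.2 um
# grows by a factor (2.4/2.2)^3 ~ 1.30
```

The same factor applied in the opposite direction shrinks an amplitude moved toward longer wavelengths, which is why the 3.6 $\micron$ Spitzer amplitudes increase when referred to 2.2 $\micron$.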
We also examined the IRTS 1.6 and 2.2 $\micron$ brightness correlation. As shown in figure \ref{F10}, the 1.6 and 2.2 $\micron$ data show an excellent correlation, and the 1.6/2.2 $\micron$ ratio is consistent with that from the NIREBL brightness spectrum obtained by \citet{matsumoto15}. In addition, we derived the absolute brightness of the NIREBL using the result shown in figure \ref{zlcorr}: the y-intercept, where the brightness of the ZL becomes zero, represents the absolute brightness of the NIREBL. The values are 56.032 and 28.228 nW m$^{-2}$ sr$^{-1}$ at 1.6 and 2.2 $\micron$, respectively. They are fairly consistent with those of \citet{matsumoto15}, which implies that our NIREBL brightness derivation is reasonable. This confirms the consistency of the data analysis, and the excess fluctuation is strongly associated with the NIREBL spectrum. \section{DISCUSSION}\label{S:discussion} To find the possible origin of the excess power at around 1$^{\circ}$, we examined several candidate sources. High-redshift objects (e.g. first stars) were excluded at the outset, since they show a turnover at around 0.3$^{\circ}$, which contradicts the peak fluctuation at around 1$^{\circ}$ (see figure A-2 in \cite{kashlinsky12}). The first candidate is the foregrounds, such as the ISL, DGL, and ZL. To check this, we analyzed the cross-correlation between each foreground component and the NIREBL using the PolSpice analysis tool. However, none of the foreground components shows a correlation with the NIREBL. As a reference, the ISL and DGL fluctuations are shown in figure \ref{F26}. Since the ZL is based only on the model, we do not evaluate the ZL fluctuation in this work; according to \citet{zemcov14}, however, the ZL fluctuation is too small to contribute detectable fluctuation power. The second candidate is the IHL, whose fluctuation can be composed of one-halo and two-halo terms \citep{cooray12a}. 
The one-halo term describes the clustering of baryonic matter inside a halo, and the two-halo term describes the correlations between individual halos. The two-halo term shows a larger fluctuation power than the one-halo term. In figure \ref{F8}, we drew the contribution of the IHL from \citet{zemcov14} and compared it to the NIREBL fluctuations. Although the IHL spectrum was only estimated at sub-degree scales, its amplitude is too low to explain the excess at 1$^{\circ}$. The next candidate is the DGL, which accounts for a large portion of the fluctuation in small angular scale measurements \citep{zemcov14}. \citet{gautier92} empirically found that the DGL power spectrum (\textit{$C_l$}) follows an \textit{$l^{-3}$} power law. If we extend the DGL spectrum of \citet{zemcov14} toward larger angular scales with this constant power law, the excess emission of the IRTS can be explained. However, their DGL estimation was obtained from a low angular resolution map \citep{schlegel98}, and we may expect a slower increase than $\theta^3$ toward larger angular scales. To measure the DGL directly, without a power-law extrapolation from sub-degree scales, we used a high resolution (pixel scale $\sim$ 0.16$\arcmin$), deep pointing AKARI 90 $\micron$ image of the North Ecliptic Pole (NEP) region \citep{seo15}. The intensity of the map was scaled to the near-IR using the empirical scaling relation in the same manner as described in section \ref{SSS:irtsdgl}. We then measured the power spectrum using POKER\footnote{http://www.ipag.osug.fr/$\sim$ponthien/Poker/Poker.html}, a publicly available tool. POKER estimates the power spectrum using a Fourier transform under the flat-sky approximation. Since projecting the observed sky onto a plane distorts the data, it is only applicable to images of less than a few degrees \citep{pont11}. The flat-sky approximation is valid for the AKARI image with a FoV of 1.2$^{\circ}$ $\times$ 1.2$^{\circ}$. 
The estimated DGL fluctuation is consistent with the CIBER at smaller angular scales but decreases toward larger angular scales, as shown in figures \ref{F8} and \ref{F9}. Although the DGL intensity varies from field to field, the overall shape of the DGL fluctuation at degree scales also decreases, as shown in figure \ref{F26}. Furthermore, the cross correlation between the DGL and the NIREBL indicates that the DGL does not contribute to the NIREBL. We examined the possibility of contamination of the residual background by imperfectly subtracted stars. If the NIREBL has a residual stellar contribution, the 1.6/2.2 $\micron$ ratio of the NIREBL and that of the stars should be similar. To check this, we derived the 1.6/2.2 $\micron$ ratio of 2MASS stars in the IRTS fields. The derived ratio is 0.57, which is only 15\% steeper than the 1.6/2.2 $\micron$ ratio of the NIREBL in figure \ref{F10}. Since the difference is not significant, we additionally checked whether a Galactic latitude dependence exists in the NIREBL map, as shown in figure \ref{isldep}. Here, we averaged the brightness of the NIREBL map in constant intervals (i.e. 3$^{\circ}$) of Galactic latitude. The map shows no dependence on Galactic latitude, which indicates that Galactic stars are not plausible candidates. We compared the IRTS with the DIRBE (see figure \ref{F4} for the DIRBE map). Since the DIRBE has a much brighter detection limit (i.e. $\sim$3 mag), the stellar contribution is mainly due to nearby bright stars, and thus no Galactic latitude dependence is shown. However, they carefully subtracted the Galactic stars to achieve a homogeneous background map. Using the much more sensitive IRTS image, but with poorer attitude information, the 1$\sigma$ width of the NIREBL brightness distribution at 2.2 $\micron$ is 2.16 nW m$^{-2}$ sr$^{-1}$, which is fairly consistent with the DIRBE 2.2 $\micron$ study \citep{levenson07}. A fraction of the NIREBL brightness is also contributed by normal galaxies. 
To estimate the contribution of normal galaxies at degree scales, we performed Monte Carlo simulations using the galaxy count model presented by \citet{keenan10}. We made a brightness map by randomly distributing galaxies in the sky based on the model counts, repeated this procedure to make 10 maps, calculated the power spectrum of each map, and took the average of those power spectra. Nevertheless, galaxies contribute less than 1\% of the NIREBL fluctuation level at 1.6 and 2.2 $\micron$ at large angular scales (see figures \ref{F8} and \ref{F9}). We also examined the near-IR and far-IR cross correlation using the PolSpice analysis tool. For the near-IR map, we used the NIREBL map reduced in this work. For the far-IR map, we used the Cosmic Infrared Background (CIB) map reduced from the Planck 857 GHz data. The Planck team provides a Galactic thermal dust and Cosmic Microwave Background (CMB) removed Planck map\footnote{http://pla.esac.esa.int/pla} where the residual brightness contains only the extragalactic CIB component. According to \citet{planck11}, the Planck map is composed of dusty, star-forming galaxies mostly at low redshift (z $<$ 0.8). The auto correlation spectrum for Planck is shown in figure \ref{F11}. The fluctuation appears to connect smoothly with the Herschel 350 $\micron$ one \citep{thacker15} toward smaller angular scales. Interestingly, the IRTS and Planck show a good correlation, although only the upper bound is shown due to a large error (see figure \ref{F11}). If the nominal cross spectrum is near the upper bound, the fluctuation spectrum (i.e. IRTS K cross Planck) connects smoothly with the Spitzer (3.6 $\micron$) cross Herschel (350 $\micron$) spectrum \citep{thacker15}, which has a peak at around the 1$^{\circ}$ angular scale. Note that the Spitzer (3.6 $\micron$) spectrum is scaled to the IRTS 2.2 $\micron$ under the Rayleigh-Jeans assumption for the comparison. 
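The Monte Carlo procedure described above can be sketched as follows. This is a simplified stand-in: a toy power-law flux distribution replaces the \citet{keenan10} count model, and the power spectrum is an azimuthally averaged 2D FFT under the flat-sky approximation; the map size, galaxy number, and flux law are illustrative assumptions, not the values used in the analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 128        # toy map size in pixels
N_MAPS = 10    # number of Monte Carlo realizations, as in the text
N_GAL = 5000   # galaxies per map (stand-in for the model counts)

def power_spectrum_2d(img):
    """Azimuthally averaged 2D power spectrum under the flat-sky approximation."""
    f = np.fft.fftshift(np.fft.fft2(img - img.mean()))
    p2d = np.abs(f) ** 2 / img.size
    y, x = np.indices(img.shape)
    r = np.hypot(x - N // 2, y - N // 2).astype(int)
    sums = np.bincount(r.ravel(), weights=p2d.ravel())
    counts = np.bincount(r.ravel())
    return sums / np.maximum(counts, 1)   # mean power in annular bins

spectra = []
for _ in range(N_MAPS):
    img = np.zeros((N, N))
    xs = rng.integers(0, N, N_GAL)
    ys = rng.integers(0, N, N_GAL)
    flux = rng.pareto(2.0, N_GAL)         # toy power-law flux distribution
    np.add.at(img, (ys, xs), flux)        # drop galaxies onto the brightness map
    spectra.append(power_spectrum_2d(img))

mean_spectrum = np.mean(spectra, axis=0)  # average over the 10 realizations
```

Averaging the spectra of independent realizations, rather than averaging the maps, is what suppresses the realization-to-realization scatter in the estimated galaxy contribution.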
According to the measurements at sub-degree scales by \citet{thacker15}, about half of the near-IR background can be explained by dusty, star-forming galaxies, and the residual can be explained by the IHL. However, it is difficult to explain the whole excess at degree scales since only upper limits were obtained. Since the sources contributing to the fluctuations of the near-IR background at such large angular scales have not been clearly identified, we examined possible candidates. However, none of them seems to show a significant contribution. Thus, future studies are necessary to understand the anisotropies from sub-degree to degree scales. \section{SUMMARY}\label{S:summary} We measure the NIREBL fluctuation spectra at angular scales between 2$^{\circ}$ and 20$^{\circ}$ at 1.6 and 2.2 $\micron$ for the first time. The NIREBL power spectrum is calculated from the NIREBL brightness map after subtracting the foreground components, such as the DGL, the ISL, and the ZL, from the observed sky brightness. The readout and photon noises of the IRTS are subtracted from the power spectrum. Within the range of angular scales studied here, the NIREBL fluctuation declines monotonically, following a constant power law $F \equiv \sqrt{l(l+1)C_l/2\pi} \propto \theta^{-1}$, indicating that the fluctuations at angular scales greater than 2$^{\circ}$ are random and structureless. The bumpy structures in \citet{matsumoto05} are also reduced in this work by correcting the effect of the masked pixels on the power spectrum. Furthermore, compared with \citet{matsumoto05}, our study achieves larger sky coverage and thus larger 2-dimensional sampling, which enables us to compare fluctuations with other studies directly. Our result is also consistent with the \citet{matsumoto15} absolute brightness measurements at 1.6 and 2.2 $\micron$. This implies that the sky fluctuation is strongly related to the NIREBL spectrum. 
Comparing the results with previous studies at sub-degree scales, both the 1.6 and 2.2 $\micron$ spectra appear to have broad bumps centered at around 1$^{\circ}$. We examine several proposed origins of the sub-degree scale fluctuations, but these are unlikely to contribute to the fluctuations at degree scales. Interestingly, the fluctuations at 857 GHz from Planck, after subtraction of the foreground and CMB, suggest a good correlation with those of the IRTS bands, although we can only set upper limits due to large uncertainties. If they have a significant correlation, this indicates that some portion of the anisotropies at degree scales can be explained by dusty, star-forming galaxies at $z < 0.8$. Recently, the Korean space mission MIRIS performed deep observations toward a large area near the NEP region (10$^{\circ}$ $\times$ 10$^{\circ}$). The data are being processed and are expected to probe the fluctuation spectrum at around 1$^{\circ}$ to several degree scales. This work provides motivation to study the various kinds of background that can contribute to the degree scale fluctuations. \newpage \begin{figure} \includegraphics[width=160mm]{F14.pdf} \caption{% The brightness correlation between the IRTS SKY and IRTS ZL. Left and right panels are 1.6 and 2.2 $\micron$, respectively. Red symbols indicate the raw data before the clipping process. Blue symbols indicate the data remaining after the clipping process. The black solid line is a linear fit along the densest data regions in the raw data.}% \label{F0} \end{figure} \clearpage \begin{figure} \includegraphics[width=160mm]{F15.pdf} \caption{% Same correlation diagram as shown in figure \ref{F0} at 1.6 $\micron$. Each data point is colored by its Galactic latitude as indicated by the color bar. 
To highlight the data in the low Galactic latitude region (\textit{b} $<$ 45$^{\circ}$), we draw them with larger symbols than the others.}% \label{F15} \end{figure} \clearpage \begin{figure} \begin{center} \includegraphics[width=160mm]{F1.pdf} \end{center} \caption{% Number of IRTS data points belonging to each HEALPix pixel. The brightness of a HEALPix pixel is the mean of the IRTS data belonging to it. Here, the bin size of the histogram is 5.}% \label{F1} \end{figure} \clearpage \begin{figure} \includegraphics[width=160mm]{F2.pdf} \caption{% DGL spectrum normalized by the far-IR emission at 100 $\micron$. The CIBER/LRS is from \citet{arai15}, and the DGL model fit to the CIBER/LRS data is drawn with a solid line (ZDA04; \cite{brandt12}). Blue diamond symbols are the points for the IRTS bands. The DIRBE is from \citet{sano15}, AKARI is from \citet{tsumura13a}, and MIRIS is from \citet{onishi18}. Here, the AKARI data is mainly contributed by PAH emission. Since the IRTS sky coverage is far from the Galactic plane (b $>$ 40$^{\circ}$), where contributions of PAH emission are negligible, we did not consider the AKARI data.}% \label{F2} \end{figure} \clearpage \begin{figure} \includegraphics[width=160mm]{F3.pdf} \caption{% Limiting magnitudes for the 24 IRTS bands (diamond symbols). Red and blue solid lines are the 1.6 and 2.2 $\micron$ limiting magnitudes used to estimate the brightness due to unresolved Galactic stars. Also drawn in dashed lines are the $\pm$ 0.5 limiting magnitude errors. Shaded colors represent the bandwidths of the 1.6 and 2.2 $\micron$ bands.}% \label{F3} \end{figure} \clearpage \begin{figure} \includegraphics[width=160mm]{F4.pdf} \caption{% Flow chart of the process to estimate the ISL brightness of an IRTS field based on the 2MASS stars.}% \label{Fislmap} \end{figure} \clearpage \begin{figure} \includegraphics[width=160mm]{F13.pdf} \caption{% Histogram of ISL medians for 100 simulated maps. Left and right panels are 1.6 and 2.2 $\micron$, respectively. 
Here the bin size is 0.01 nW m$^{-2}$ sr$^{-1}$. Each histogram is fitted with a Gaussian function, shown as a red solid line.}% \label{F13} \end{figure} \clearpage \begin{figure} \includegraphics[width=160mm]{F5.pdf} \caption{% Upper data points show the correlation between the ZL model brightness and the observed surface brightness after subtracting the ISL and DGL from the observed sky brightness. Lower data points are the NIREBL brightness after subtracting the corrected ZL from the y-axis. Black solid lines are the best fit lines. Left and right panels are 1.6 and 2.2 $\micron$, respectively.}% \label{zlcorr} \end{figure} \clearpage \begin{figure} \includegraphics[width=160mm]{F6.pdf} \caption{% Brightness maps for 1.6 and 2.2 $\micron$ in Galactic coordinates. Left and right maps are for 1.6 and 2.2 $\micron$, respectively. Brightness maps of the IRTS raw data without mask, the IRTS raw data with mask, the DIRBE 2.2 $\micron$ sky map at the IRTS field, the ZL, DGL, ISL, and the NIREBL with mask are shown from top to bottom. Units in the color bars are nW m$^{-2}$ sr$^{-1}$. The DIRBE 2.2 $\micron$ sky map is shown for comparison with the IRTS SKY.}% \label{F4} \end{figure} \clearpage \begin{figure} \includegraphics[width=160mm]{F8.pdf} \caption{% The measured 1.6 $\micron$ fluctuations for the IRTS. The IRTS (this work) and CIBER \citep{zemcov14} auto spectra are shown as filled and unfilled red circles, respectively. The shaded color of the IRTS shows the error including the systematic error, and the random error is drawn with error bars. The shaded color for the CIBER denotes estimated errors. The black dot-dashed line is a power law with index -1. The red dot-dashed line is the spectrum due to unresolved galaxies. The dashed line is the DGL spectrum measured using the AKARI/FIS deep pointing data toward the NEP region \citep{seo15}. Dotted and solid lines are unmasked sources (i.e. 
stars and galaxies with $m_{H}$ $>$ 17) and the IHL spectrum from \citet{zemcov14}, respectively.}% \label{F8} \end{figure} \clearpage \begin{figure} \includegraphics[width=160mm]{F9.pdf} \caption{% The measured 2.2 $\micron$ fluctuations for the IRTS. The IRTS (this work) and CIBER \citep{zemcov14} auto spectra are shown as filled and unfilled blue circles, respectively. The shaded color of the IRTS shows the error including the systematic error, and the random error is drawn with error bars. The shaded color for the CIBER denotes estimated errors. The black dot-dashed line is a power law with index -1. The blue dot-dashed line is the spectrum due to unresolved galaxies. The CIBER 1.6 $\micron$ is scaled to 2.2 $\micron$ using the IRTS 1.6/2.2 $\micron$ color ratio. Plus signs with errors are the fluctuation spectra from the AKARI 2.4 $\micron$ \citep{matsumoto11,seo15} and Spitzer 3.6 $\micron$ \citep{kashlinsky12}. Their spectra are scaled to 2.2 $\micron$ under the Rayleigh-Jeans assumption. The dashed line is the DGL spectrum measured using the AKARI/FIS deep pointing data toward the NEP region \citep{seo15}.}% \label{F9} \end{figure} \clearpage \begin{figure} \includegraphics[width=160mm]{F10.pdf} \caption{% The correlation between the 1.6 and 2.2 $\micron$ NIREBL brightness after subtraction of the astrophysical foreground components from the IRTS data. Each data point has a symbol size inversely weighted by its error; that is, a larger symbol represents a smaller error. The best linear fit is shown as a red solid line. The fitting parameters are shown in the bottom-right corner of the figure.}% \label{F10} \end{figure} \clearpage \begin{figure} \includegraphics[width=160mm]{F16.pdf} \caption{% The foreground fluctuations of the IRTS fields. 
The ZL fluctuation is not shown since the ZL is based only on a model and is expected to have a very small fluctuation \citep{zemcov14}.}% \label{F26} \end{figure} \clearpage \begin{figure} \includegraphics[width=160mm]{F11.pdf} \caption{% The NIREBL brightness dependence on Galactic latitude bin. Here the bin size is 3$^{\circ}$. The 1 $\sigma$ error of each bin is drawn. The upper and lower curves are for 1.6 and 2.2 $\micron$, respectively.}% \label{isldep} \end{figure} \clearpage \begin{figure} \includegraphics[width=160mm]{F12.pdf} \caption{% Auto correlation fluctuation spectra of Herschel 350 $\micron$ \citep{thacker15} and Planck 350 $\micron$ (this work), together with cross correlations of Spitzer 3.6 $\micron$ and Herschel 350 $\micron$ \citep{thacker15}, IRTS 1.6 $\micron$ and Planck 350 $\micron$ (this work), and IRTS 2.2 $\micron$ and Planck 350 $\micron$ (this work). Auto correlation fluctuation spectra of the IRTS 1.6 $\micron$ and 2.2 $\micron$ (this work) are also drawn. Only upper limits (arrows with solid lines) of the cross correlation between the IRTS bands and Planck are shown because of large errors. The Spitzer spectrum is scaled to 2.2 $\micron$ under the Rayleigh-Jeans assumption. The dashed line is the DGL spectrum measured using the AKARI/FIS deep pointing data toward the NEP region \citep{seo15}.}% \label{F11} \end{figure} \clearpage \begin{table} \tbl{Error budget for the NIREBL fluctuation}{% \begin{tabular}{|c|ccc|cc|} \hline \multicolumn{1}{|l|}{} & \multicolumn{3}{c|}{Statistical error} & \multicolumn{2}{c|}{Systematic error} \\ \hline Band & ISL attitude error & DGL scale factor error & Healpix binning error & IRTS Calibration error & ISL limiting mag error \\ 1.6 $\micron$ & 1.31 & 0.24 & 0.08 & 0.13 & 1.86 \\ 2.2 $\micron$ & 0.71 & 0.06 & 0.04 & 0.06 & 0.98 \\ \hline \end{tabular}}\label{T1} \begin{tabnote} \footnotemark[$*$] The sample variance is not shown since it is estimated by the empirically determined Knox formula. 
\\ \footnotemark[$**$] Units are nW m$^{-2}$ sr$^{-1}$. \\ \end{tabnote} \end{table} \clearpage \begin{ack} {\normalsize This work is based on observations with the IRTS. M.G.K. acknowledges support from the Global PhD Fellowship Program through the NRF, funded by the Ministry of Education (2011-0007760). H.M.L. was supported by NRF grant 2012R1A4A1028713. K.T. was supported by JSPS KAKENHI (17K18789 and 18KK0089). W.-S.J. acknowledges support from the National Research Foundation of Korea (NRF) grant funded by the Ministry of Science and ICT (MSIT) of Korea (NRF-2018M1A3A3A02065645). The authors thank Dr. Hivon for providing the PolSpice and Dr. Ponthieu for providing the POKER power spectrum analysis tools. We also thank Dr. Girardi, who enabled us to model the Galactic stars using the TRILEGAL code. This publication makes use of data products from 2MASS, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by NASA and the NSF. Based on observations obtained with Planck (http://www.esa.int/Planck), an ESA science mission with instruments and contributions directly funded by ESA Member States, NASA, and Canada.} \end{ack}
\section{Introduction} Video retrieval is one of the most prominent and challenging problems in the digital world today. It is the task of ranking videos in a database based on their relevance to user input queries. While most practical applications use video meta-data to convert the problem into a straightforward page ranking problem, there are vast databases of videos with no labeled meta-data. User queries can take multiple forms, the most popular of which is textual input. One of the most popular publicly available applications of text-to-video retrieval is Google's video search. Its performance, however, is driven by the metadata of the videos in its search space. Some research has been published in recent years on video retrieval through text queries, and performance on this task has improved over the years. In this paper, we focus our efforts on video retrieval through image queries, where even less research has been done. There are also drawbacks in most methods proposed so far, and there are many challenges that any proposed method needs to overcome. The biggest of those challenges is to find an efficient method of storing the videos in the search database. The next big challenge that has not been fully solved is to tap into the temporal information contained in video data. While image classification and object detection in images have been shown to work well in the last few years, simply searching for objects in the frames of a video fails to make use of the temporal information contained in videos. In this paper, we propose a novel approach to solve this problem. We first pre-process a video database to extract key features from video frames. These features are clustered, such that similar frames end up in the same cluster. We generate the embeddings for these clusters by aggregating the embeddings of their constituent frames. 
To account for temporal information in videos, we model these clusters as nodes in a graph. The cluster embeddings are then augmented by including neighboring cluster information. These augmented cluster embeddings are stored and used in our video ranking process. The proposed method was tested on the MSR-VTT dataset. \section{Related Work} Most of the related work focuses on video retrieval through text queries; video retrieval through image queries has been relatively unexplored. However, our approach borrows concepts and evaluation strategies from the methods discussed below to develop a new approach for video retrieval through image queries. The paper \cite{araujo} introduces a new retrieval architecture, in which the image query can be compared directly with database videos, significantly improving retrieval scalability compared with a baseline system that searches the database at the video frame level. Matching an image to a video is an inherently asymmetric problem. The authors propose an asymmetric comparison technique for Fisher vectors and systematically explore query and database items with varying amounts of clutter, showing the benefits of the technique. Novel video descriptors based on Fisher vectors, which can be compared directly with image descriptors, are also proposed in this work. Large-scale experiments using three datasets show that this technique enables faster and more memory-efficient retrieval, compared with a frame-based method, with similar accuracy. The proposed techniques are further compared against pre-trained convolutional neural network features, outperforming them on three datasets by a substantial margin. However, this paper only uses query images that are frames of the videos in the dataset and does not work with out-of-dataset images. The paper \cite{li} introduces a method of integrating spatio-temporal neighbourhood information using an attention mechanism that focuses on useful features in each frame. 
Their Neighborhood Preserving Hashing method creates a learned hashing function that can easily map similar videos. Another approach to representation learning for videos is to create hierarchical graph clusters built upon video-to-video similarities. This is explored in the paper \cite{lee} using two different methods: the first creates smart triplets and the second creates pseudo labels. This results in a highly scalable method for creating embeddings for video understanding tasks. Representing videos as individual frames makes it difficult to model long-range semantic dependencies. The paper \cite{shao} addresses this issue by incorporating long-range temporal features at the frame level using self-attention. For training on video retrieval datasets, they propose a supervised contrastive learning method that performs automatic hard negative mining and utilizes the memory bank mechanism to increase the capacity of negative samples. The paper \cite{dong} tackles the challenging problem of zero-example video retrieval. In this retrieval paradigm, an end user searches for unlabeled videos via ad-hoc queries described in natural language text, with no visual example provided. Given videos as sequences of frames and queries as sequences of words, an effective sequence-to-sequence cross-modal matching is required. The majority of existing methods are concept based, extracting relevant concepts from queries and videos and accordingly establishing associations between the two modalities. In contrast, this paper takes a concept-free approach, proposing a dual deep encoding network that encodes videos and queries into powerful dense representations of their own. Dual encoding is conceptually simple, practically effective, and end-to-end. As experiments on three benchmarks, i.e. MSR-VTT, TRECVID 2016 and 2017 Ad-hoc Video Search, show, the proposed solution establishes a new state-of-the-art for zero-example video retrieval. 
While most publications in the domain focus on learning joint text-video embeddings, there are difficulties with the approach: there are not many caption-labeled video datasets to work with, so these methods are not widespread. To work around this problem, \cite{miech} use heterogeneous data sources to learn text-video embeddings. They propose a new model called Mixture-of-Embedding-Experts (MEE), which is claimed to work even with incomplete training data. They extend their embedding technique to work with face descriptors and evaluate their performance on the MPII Movie Description dataset and the MSR-VTT dataset. The proposed model shows considerable improvements and beats previous text-to-video and video-to-video retrieval methods. The paper \cite{hu} introduces a semantic-based video retrieval framework for surveillance videos. They detect motion trajectories using clustering-based methods. These clusters are structured hierarchically to obtain activity models. They propose a hierarchical structure of semantic indexing and object retrieval, where each individual activity inherits all the semantic descriptions of the activity model from its parent activity. They use this technique to access individual objects semantically. They also allow for different kinds of input queries, such as keyword search, queries by sketch, and object queries. The paper \cite{zhang} proposes an efficient method for video retrieval using image queries. The focus of this approach is to make the entire process more efficient: rather than improving the accuracy of state-of-the-art models, they focus on building a model that works even for very large datasets. They use a Convolutional Neural Network to extract features from frames of the videos. Then, they use a Bag of Visual Words model to aggregate these features. 
They use the K-means clustering algorithm to cluster the features in order to reduce the space required to store the embeddings. Then, they propose their Visual Weighted Inverted Index algorithm to improve the accuracy and efficiency of retrieval. They evaluate their approach on the YouTube-8M and Sports-1M datasets, using these large datasets to show that their model works well even with a large database of videos. They also compare the performance of CNN-VWII and SIFT-VWII. K-NN \cite{knn} is an unsupervised machine learning algorithm to cluster data points. Euclidean distance is used as the comparison metric: the Euclidean distances of the incoming point to the centres of all the clusters are calculated, the shortest distance is found, and the incoming data point is assigned to that particular cluster. If a data point has many features, dimensionality reduction can be applied before clustering. GraphSAGE \cite{hamilton} is an algorithm for inductive learning and representation on large graphs. It generates low dimensional vectors for the nodes of the graphs. Existing models before this had to be re-trained whenever a new node was added to the graph. GraphSAGE uses node information and neighbour information and aggregates them to generalize the features of an unseen node. Aggregators take the neighbourhood as input and combine the embeddings with certain weights to create embeddings for the neighbourhood. The initial embedding of each node is set to its node features. Up to a neighbourhood depth of 'K', the neighbourhood embedding is created using the aggregator function for each node and concatenated with the node features. This is then passed through a neural network to update the weights and features. Residual networks are a class of deep neural networks proposed in \cite{szegedy}. If 'x' is the input to the initial layer, H(x) is the mapping of 'x' learned by the first few layers of any deep network. 
In the case of the residual network, the stacked layers instead fit the mapping H(x) $-$ x, which we denote F(x); therefore, H(x) = F(x) + x. This formulation is used to address the degradation problem: in theory, deeper networks should achieve at most the same training error as shallower ones, but in practice they do not, due to the difficulty of approximating identity mappings with many non-linear layers. With residual learning, however, the solvers can drive the weights of the non-linear layers toward zero to approach identity mappings. This learning is applied every few layers in the network. The input and output of a given layer are x and y, and the function F(x, {Wi}) determines the residual mapping to be learned. This function can also represent convolutional layers, in which case element-wise addition is performed on the feature maps. \section{Proposed Method} The database of videos is first pre-processed. To create a memory-efficient and useful means of representing the videos, smart video embeddings are created. \subsubsection{Representing Video Frames} In our proposed pipeline, the only input to our model is a database of videos. Although videos are rich in information when observed by a human being, computers require alternate methods to process them. The first step is to analyze videos at the level of their component frames. Once we have extracted frames from a video, we can use image embedding generation techniques to represent them. The input videos are sampled at 2 frames per second. Each frame is passed through an image embedding generation model. In our method, we have chosen to use pre-trained Residual Networks, trained on the ImageNet dataset, to generate frame embeddings. These networks produce embeddings of length 2048. Their residual connections allow features at lower layers to be preserved in deeper layers. We experimented with two variants of the residual network, ResNet50 and ResNet152, which are 50 and 152 layers deep, respectively. 
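The residual formulation H(x) = F(x) + x can be illustrated with a minimal sketch. This is a toy fully connected block with made-up sizes, not the actual convolutional ResNet blocks used in our pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

d = 8  # toy feature dimension (real ResNet blocks operate on conv feature maps)
W1 = rng.normal(scale=0.1, size=(d, d))
W2 = rng.normal(scale=0.1, size=(d, d))

def residual_block(x):
    """H(x) = F(x) + x: the layers only need to learn the residual F(x)."""
    fx = W2 @ relu(W1 @ x)   # F(x): two toy non-linear layers
    return relu(fx + x)      # identity shortcut added before the final ReLU

x = rng.normal(size=d)
y = residual_block(x)

# Driving the layer weights to zero makes F(x) = 0, so the block collapses
# to (the ReLU of) the identity mapping, which is how residual learning
# lets solvers approach identity mappings:
W1[:], W2[:] = 0.0, 0.0
assert np.allclose(residual_block(x), relu(x))
```

The shortcut is what preserves lower-layer features in deeper layers: the block output always contains x itself, plus whatever correction F(x) the weights have learned.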
\begin{figure}[] \centerline{\includegraphics[scale=0.6]{images/video_embeddings_new.png}} \caption{Basic Video Embedding Generation} \label{fig} \end{figure} \subsubsection{Representing Videos} Videos are generally represented as a concatenation of their component frame embeddings. In our approach, we have chosen to represent the entire dataset of videos together, instead of generating individual video representations. This allows us to improve the retrieval speed of the model. Once we have generated embeddings for all sampled frames in the dataset, we cluster them. We use the K-Nearest Neighbors algorithm as a light-weight clustering method that can be used for large-scale clustering applications. Frames across videos in the dataset are assigned to 175 clusters. These clusters are represented by the mean of their component frame embeddings. In this clustering process, we lose the important temporal information that is inherently present in videos. To preserve this information, we use a graph-based aggregation technique. An undirected graph is created from these clusters, where each cluster is treated as a node. If frame Y, which belongs to cluster 1, follows frame X, which belongs to cluster 2, in a video, then there is an edge connecting clusters 1 and 2. The edge weights in the graph are directly proportional to the number of such frame-frame (and cluster-cluster) transitions. To add temporal information to the embeddings, the cluster embeddings are aggregated with their first-order neighbor cluster embeddings. This intermediate representation retains the temporal information in the videos and is used in retrieval. 
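The graph construction and neighbor aggregation described above can be sketched as follows. The cluster count, embedding dimension, and label sequences are toy values (the actual pipeline uses 175 clusters of 2048-dimensional ResNet embeddings), and the weighted-mean aggregation here is a simple stand-in for the aggregation step:

```python
import numpy as np

rng = np.random.default_rng(0)

n_clusters, dim = 6, 4                          # toy sizes
centroids = rng.normal(size=(n_clusters, dim))  # mean of each cluster's frame embeddings

# Per-video sequences of cluster labels, one label per sampled frame (toy data).
videos = [[0, 1, 1, 2, 0], [3, 4, 5, 4]]

# Edge weights proportional to the number of frame-to-frame transitions.
W = np.zeros((n_clusters, n_clusters))
for seq in videos:
    for a, b in zip(seq, seq[1:]):
        if a != b:            # consecutive frames in the same cluster add no edge
            W[a, b] += 1.0
            W[b, a] += 1.0    # undirected graph

# Augment each centroid with the weighted mean of its first-order neighbors.
deg = W.sum(axis=1, keepdims=True)
neighbor_mean = np.divide(W @ centroids, deg,
                          out=np.zeros_like(centroids), where=deg > 0)
augmented = (centroids + neighbor_mean) / 2.0   # simple stand-in aggregation
```

Weighting the neighbor mean by transition counts means that clusters which frequently follow one another in videos pull each other's representations together, which is how temporal ordering survives the clustering step.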
\begin{figure}[] \centerline{\includegraphics[scale=0.6]{images/graphsage.png}} \caption{Neighbor Aggregation in Graph Convolutional Networks} \label{fig} \end{figure} \begin{figure}[] \centerline{\includegraphics[scale=0.4]{images/cluster_embeddings_new.png}} \caption{Augmented Embedding Generation} \label{fig} \end{figure} \subsubsection{Video Retrieval} Any query image is first processed by the image embedding generation model. We use the augmented cluster embeddings to reduce the search space for every query. The query image embedding is first compared with the cluster embeddings, using cosine similarity as the similarity metric. The clusters are ranked based on their cosine similarities, and the top 'c' clusters are chosen for further comparisons. All frame embeddings present in these top clusters are compared with the query image and ranked based on their similarities. The 'k' videos corresponding to the top matching frames are retrieved for each query image, and Precision@k is calculated as: \begin{equation} P@k=\frac{R \cap k }{k} \end{equation} where 'R' is the number of retrieved videos of the same category as the query image and 'k' is the total number of videos retrieved. Then, mAP@k is calculated for each category as the mean of the P@k values over the N query images of that category: \begin{equation} mAP@k=\frac{ \sum_{n=1}^{N} P@k_{n} }{N} \end{equation} \section{Dataset} For the evaluation of this technique, experiments were performed on the MSR-VTT dataset. The dataset contains 2990 videos, each around 20-60 seconds long, belonging to 20 different categories. This dataset is used extensively for video retrieval through text. However, rather than using the sentences associated with the videos, this technique uses only the video information, making it possible to retrieve previously unseen videos. The categories of videos in the dataset are 1. Music 2. People 3. 
Gaming 4. Sports, Actions 5. News, Events, Politics 6. Education 7. TV Shows 8. Movie, Comedy 9. Animation 10. Vehicles, Autos 11. How-to 12. Travel 13. Science, Technology 14. Animals, Pets 15. Kids, Family 16. Documentary 17. Food, Drink 18. Cooking 19. Beauty, Fashion 20. Advertisement. For our testing, we merged some similar categories, such as Food and Cooking, and removed others, such as Movies, Documentary, and Advertisement, due to the arbitrary nature of these classes. For example, it is difficult to tell the difference between a movie clip and a clip from a TV show or documentary without any context. Another reason for excluding classes like Science, Technology or Education is that there is often no clear, visually discernible factor that places a video in such a category. For example, a video of a teacher explaining a concept might be classified as Education, but there is no visual cue indicating that the person in the video is a teacher or that something is being taught. \begin{figure}[] \centerline{\includegraphics[scale=0.30]{images/msr_vtt_dist_original_2.png}} \caption{MSR VTT number of videos per class} \label{fig:msr_vtt_original} \end{figure} In the end, after filtering, we were left with 11 relevant categories: 1. Music 2. Gaming 3. Sports, Actions 4. News, Events, Politics 5. Vehicles, Autos 6. How-to 7. Travel 8. Animals, Pets 9. Kids, Family 10. Food, Drink, Cooking 11.
Beauty, Fashion \begin{figure}[] \centerline{\includegraphics[scale=0.30]{images/msr_vtt_dist_modified_2.png}} \caption{MSR VTT number of videos per class after modification} \label{fig:msr_vtt_modified} \end{figure} \begin{figure}[] \centerline{\includegraphics[scale=0.65]{images/plot2.png}} \caption{mAP@10 with varying number of clusters} \label{fig:map10_clusters} \end{figure} \begin{table}[htbp] \caption{Model Using ResNet152} \begin{center} \begin{tabular}{|c|c|c|c|} \hline \textbf{Category} & \textbf{mAP@5} & \textbf{mAP@10} & \textbf{mAP@20} \\ \hline Music & 60\% & 42.5\% & 37.5\% \\ \hline Gaming & 50\% & 40\% & 37.5\% \\ \hline Sports, Actions & 100\% & 97.5\% & 90\% \\ \hline News, Events, Politics & 45\% & 40\% & 36.25\% \\ \hline Vehicles, Autos & 85\% & 65\% & 56.25\% \\ \hline How-to & 30\% & 40\% & 28.75\% \\ \hline Travel & 55\% & 42.5\% & 32.5\% \\ \hline Animals, Pets & 85\% & 72.5\% & 61.25\% \\ \hline Kids, Family & 40\% & 40\% & 32.5\% \\ \hline Food, Drink, Cooking & 40\% & 40\% & 50\% \\ \hline Beauty, Fashion & 60\% & 52.5\% & 38.75\% \\ \hline \end{tabular} \label{tab:resnet152} \end{center} \end{table} \begin{table}[htbp] \caption{Model Using ResNet50} \begin{center} \begin{tabular}{|c|c|c|c|} \hline \textbf{Category} & \textbf{mAP@5} & \textbf{mAP@10} & \textbf{mAP@20} \\ \hline Music & 30\% & 35\% & 31.25\% \\ \hline Gaming & 55\% & 47.5\% & 36.25\% \\ \hline Sports, Actions & 100\% & 95\% & 88.75\% \\ \hline News, Events, Politics & 50\% & 42.5\% & 36.25\% \\ \hline Vehicles, Autos & 60\% & 57.5\% & 51.25\% \\ \hline How-to & 40\% & 40\% & 31.25\% \\ \hline Travel & 50\% & 47.5\% & 35\% \\ \hline Animals, Pets & 90\% & 70\% & 50\% \\ \hline Kids, Family & 45\% & 40\% & 33.75\% \\ \hline Food, Drink, Cooking & 35\% & 37.5\% & 45\% \\ \hline Beauty, Fashion & 45\% & 45\% & 33.75\% \\ \hline \end{tabular} \label{tab:resnet50} \end{center} \end{table} \begin{table}[htbp] \caption{Model Using ResNet152 without creating graph} \begin{center} \begin{tabular}{|c|c|c|c|} \hline
\textbf{Category} & \textbf{mAP@5} & \textbf{mAP@10} & \textbf{mAP@20} \\ \hline Music & 45\% & 37.5\% & 33.75\% \\ \hline Gaming & 40\% & 35\% & 38.75\% \\ \hline Sports, Actions & 100\% & 97.5\% & 91.25\% \\ \hline News, Events, Politics & 45\% & 47.5\% & 37.5\% \\ \hline Vehicles, Autos & 80\% & 57.5\% & 50.25\% \\ \hline How-to & 35\% & 40\% & 31.25\% \\ \hline Travel & 55\% & 37.5\% & 28.75\% \\ \hline Animals, Pets & 90\% & 80\% & 61.25\% \\ \hline Kids, Family & 35\% & 32.5\% & 30\% \\ \hline Food, Drink, Cooking & 40\% & 42.5\% & 45\% \\ \hline Beauty, Fashion & 50\% & 42.5\% & 37.5\% \\ \hline \end{tabular} \label{tab:resnet152_nograph} \end{center} \end{table} \begin{table}[htbp] \caption{Model Using ResNet50 without creating graph} \begin{center} \begin{tabular}{|c|c|c|c|} \hline \textbf{Category} & \textbf{mAP@5} & \textbf{mAP@10} & \textbf{mAP@20} \\ \hline Music & 35\% & 32.5\% & 27.5\% \\ \hline Gaming & 65\% & 55\% & 46.25\% \\ \hline Sports, Actions & 100\% & 92.5\% & 83.75\% \\ \hline News, Events, Politics & 50\% & 45\% & 32.5\% \\ \hline Vehicles, Autos & 68\% & 58\% & 60\% \\ \hline How-to & 40\% & 41.5\% & 31.25\% \\ \hline Travel & 40\% & 37.5\% & 30\% \\ \hline Animals, Pets & 90\% & 72.5\% & 62.5\% \\ \hline Kids, Family & 35\% & 30\% & 28.75\% \\ \hline Food, Drink, Cooking & 35\% & 37.5\% & 43.5\% \\ \hline Beauty, Fashion & 75\% & 52.5\% & 43.75\% \\ \hline \end{tabular} \label{tab:resnet50_nograph} \end{center} \end{table} \begin{table}[htbp] \caption{Speed Comparison of Models} \begin{center} \begin{tabular}{|c|c|} \hline \textbf{Model} & \textbf{Effective Search Speed (Video Frames / Second)} \\ \hline ResNet152 & 15000\\ \hline ResNet50 & 18000\\ \hline \end{tabular} \label{tab:speed} \end{center} \end{table} \section{Results and Conclusions} The experiments for this study were run on a system with a 2.2 GHz Intel Core i7 processor and 16 GB of RAM. The system also had Intel Iris Pro integrated graphics with 1.5 GB of memory.
In the results, P@k denotes the precision of the results over the top 'k' ranked videos. The query images selected to evaluate this model were not taken from the dataset: for each category, four images were selected at random and the model was run on them. For frames that do come from the dataset, the model always retrieves the exact video they belong to; all the results depicted here are for images not from the dataset. As seen from the above tables, ResNet-152 outperforms ResNet-50. In a task as complex as this, we expect the larger model to work better than the smaller one: the depth of ResNet-152 is more than three times that of ResNet-50, which means there are many more parameters to learn. However, ResNet-50 searches through the videos at approximately 18000 frames per second, in comparison to ResNet-152's 15000 frames per second. Although this is a considerable difference in speed, the accuracy of search with ResNet-152 is significantly higher. Hence, we have chosen to use the ResNet-152 model in the VIRALIQ desktop application. Furthermore, when the results of the proposed technique (tables I and II) are compared to those from just clustering and retrieving (tables III and IV), we see that the proposed method performs better: the results in table I show improved mAP rates compared with table III, and similarly the results in table II show improved mAP rates compared with table IV. This is because the graph lets us keep track of the temporally relevant clusters and retrieve from those clusters as well, whereas with clustering alone the temporal information is lost and the precision of the retrieval falls. In this model, the number of temporal clusters to retrieve from can be specified and the best videos from those can be chosen.
It can be seen that the results in tables I and II outperform the results in tables III and IV, respectively, in almost all categories. To improve on this further, a ResNet model can be pre-trained on a different task, such as scene detection. This would improve the results for categories where the composition of the video frames is more important for classification than the individual objects in them. The model can also be trained on a particular category, such as sports or vehicles, if it is known beforehand what domain the model has to work in. Even without this, it is evident that the proposed model outperforms the traditional model. \section{Future Work} The videos in the MSR-VTT dataset are of very short duration. If the videos are longer, another technique can be leveraged to retrieve them. As in the proposed technique, the videos are sampled at 2 frames per second and the frames are passed through the chosen residual network to obtain the embedding of each frame, a vector of dimension 2048. The embeddings of the frames are clustered using K-NN, and the embedding for each cluster is calculated as the average of all vectors in that cluster. To preserve the temporal information, an undirected graph over these clusters is created as explained previously, but here a graph is created for each video. To add temporal information to the embeddings, each cluster embedding is again aggregated with its first-order neighbor cluster embeddings. An incoming image is then compared not with individual frames of a video but with these temporal vectors, using cosine similarity. The advantage of this technique is that, when new videos are introduced into the dataset, the embedding and clustering have to be done only for those videos individually.
However, in the proposed method the graph might change, as the cluster centers will be forced to shift due to the addition of new frames. Additionally, this per-video method produces an embedding for each video, and these can be useful in other tasks as well. It also means that the individual frame embeddings do not need to be stored, since the method works with the video embeddings alone; hence, the overall memory used will be lower even though the number of memory accesses will be the same. \begin{table}[htbp] \caption{Results for creating graph for individual videos using ResNet152} \begin{center} \begin{tabular}{|c|c|c|c|} \hline \textbf{Category} & \textbf{mAP@5} & \textbf{mAP@10} & \textbf{mAP@20} \\ \hline Music & 50\% & 37.5\% & 33.5\% \\ \hline Gaming & 45\% & 35\% & 37.5\% \\ \hline Sports, Actions & 100\% & 95\% & 92.5\% \\ \hline News, Events, Politics & 50\% & 40\% & 40\% \\ \hline Vehicles, Autos & 75\% & 57.5\% & 53.75\% \\ \hline How-to & 30\% & 35\% & 33.75\% \\ \hline Travel & 55\% & 40\% & 28.75\% \\ \hline Animals, Pets & 85\% & 77.5\% & 62.5\% \\ \hline Kids, Family & 35\% & 35\% & 30\% \\ \hline Food, Drink, Cooking & 40\% & 42.5\% & 46.25\% \\ \hline Beauty, Fashion & 55\% & 50\% & 40\% \\ \hline \end{tabular} \label{tab:video_graph} \end{center} \end{table} \begin{table}[htbp] \caption{Results for retrieval without creating graph using ResNet152} \begin{center} \begin{tabular}{|c|c|c|c|} \hline \textbf{Category} & \textbf{mAP@5} & \textbf{mAP@10} & \textbf{mAP@20} \\ \hline Music & 45\% & 37.5\% & 33.5\% \\ \hline Gaming & 40\% & 35\% & 38.75\% \\ \hline Sports, Actions & 100\% & 97.5\% & 91.25\% \\ \hline News, Events, Politics & 45\% & 47.5\% & 37.5\% \\ \hline Vehicles, Autos & 80\% & 57.5\% & 50\% \\ \hline How-to & 35\% & 40\% & 31.25\% \\ \hline Travel & 55\% & 37.5\% & 28.75\% \\ \hline Animals, Pets & 90\% & 80\% & 61.25\% \\ \hline Kids, Family & 35\% & 32.5\% & 30\% \\ \hline Food, Drink, Cooking & 40\% & 42.5\% & 45\% \\ \hline Beauty, Fashion & 50\% & 42.5\% & 37.5\% \\ \hline
\end{tabular} \label{tab:retrieval_nograph} \end{center} \end{table} As seen in tables VI and VII, there is not a big difference between creating a per-video graph and retrieving directly after clustering. This may be due to the short duration of the videos in this dataset. Alternatively, different graph-construction algorithms can be experimented with to obtain better results from this technique. The experimentation has been limited to the MSR-VTT dataset, as it was the only open-source dataset available in video format.
\section{Introduction} \label{sec:introduction} The interactions between two baryons have been studied by two methods in lattice QCD. The first one is the direct method~\cite{Yamazaki:2015asa,Wagman:2017tmp,Berkowitz:2015eaa}, which extracts the eigenenergies of the ground and/or the excited states from the temporal correlations of two-baryon systems. The binding energies and scattering phase shifts are calculated from the eigenenergies using L\"uscher's finite volume formula~\cite{Luscher:1985dn,Luscher:1990ux}. The second one is the HAL QCD method~\cite{Ishii:2006ec,Aoki:2009ji,HALQCD:2012aa,Aoki:2012tk}, which derives the energy-independent non-local kernel (called the ``potential'' in the literature) from the spatiotemporal correlations of two baryons. The binding energies and phase shifts in the infinite volume are then calculated by using the Schr\"{o}dinger-type equation with the kernel as the potential, which has a field theoretical derivation on the basis of the reduction formula for composite operators. Both methods rely on the asymptotic behavior of the Nambu-Bethe-Salpeter (NBS) wave function, and should in principle give the same results for observables~\blue{\cite{Aoki:2009ji,Aoki:2012tk,Aoki:2010ry}}. In practice, however, the current numerical results for two-nucleon ($NN$) systems seem to be inconsistent with each other: for heavy pion masses ($m_\pi > 0.3$~GeV), both the dineutron ($^1$S$_0$) and the deuteron ($^3$S$_1$) are claimed to be bound in the direct method, while both are unbound in the HAL QCD method.
Also, the discrepancy is ubiquitous in two-baryon systems: although both methods indicate a bound H-dibaryon in the SU(3) flavor limit at $m_\pi = m_K \simeq 0.8$~GeV, the binding energy is 74.6(4.7)~MeV in the direct method~\cite{Beane:2012vq}, while it is 37.8(5.2)~MeV in the HAL QCD method~\cite{Inoue:2011ai}.\footnote{Recently, another study~\cite{Francis:2018qch} using the direct method has appeared, which indicates that the dineutron is unbound while the H-dibaryon is bound by 19(10)~MeV at $m_\pi = m_K = 0.96$~GeV. } In a series of recent papers~\cite{Iritani:2016xmx,Iritani:2016jie, Iritani:2017rlk,Iritani:2017wvu,Aoki:2017byw,Iritani:2018zbt}, we have carefully examined the systematic uncertainties in both methods. The difficulty of two-baryon systems compared to a single baryon originates from the existence of elastic scattering states. Their typical excitation energies $\delta E$ are one to two orders of magnitude smaller than ${\cal O}(\Lambda_{\rm QCD})$, so that one needs to probe large Euclidean time $t \gtrsim (\delta E)^{-1}$ to extract the genuine signal of the ground state in the direct method. However, the statistical fluctuation increases exponentially in $t$ as well as in the baryon number $A$ for multi-baryon systems, as proved in~\cite{Parisi:1983ae, Lepage:1989hd}. This practically prevents one from identifying the signal of the ground state in the naive analysis of the temporal correlation of two baryons. Moreover, our extensive studies~\cite{Iritani:2016jie, Iritani:2017rlk} showed that a commonly employed procedure in the direct method, identifying plateaux at early time slices, $t \ll (\delta E)^{-1}$, suffers from uncontrolled systematic errors due to excited state contaminations, since pseudo-plateaux \footnote{ In Refs.~\cite{Iritani:2016jie,Iritani:2017rlk}, they are called ``fake plateaux'' or ``mirages'' of the plateau of the ground state. } easily appear at early time slices.
The typical symptoms of such systematics in the previous studies were explicitly exposed by the normality check% \footnote{ In Ref.~\cite{Iritani:2017rlk}, it is called the ``sanity check'', a common terminology in computer science for a simple/quick test~\cite{wiki:sanity}. } based on L\"uscher's finite volume formula~\cite{Luscher:1990ux} and the analytic properties of the $S$-matrix~\cite{Iritani:2017rlk}. As far as the HAL QCD method is concerned, the time-dependent formalism~\cite{HALQCD:2012aa} is free from the problem of the ground state saturation, since the energy-independent potential is extracted from the spatial and temporal correlations with the information of both the ground and excited states associated with the elastic scattering. While a systematic uncertainty associated with the truncation of the derivative expansion for the non-locality of the potential appears in practical calculations, the derivative expansion is found to be well converged at low energies~\cite{Iritani:2018zbt,Murano:2011nz}. Other systematic uncertainties, such as the contaminations from the inelastic states and the finite volume effect on the potential, are also shown to be well under control~\cite{Iritani:2018zbt}. In this paper, we reveal the origin of the inconsistency between the direct method and the HAL QCD method by explicitly evaluating the magnitude of the excited state contributions in the temporal correlation functions. We focus on the $\Xi\Xi$ system in the $^1$S$_0$ channel, which is the most convenient channel for obtaining insights into the $NN$ systems, since it belongs to the same SU(3) flavor multiplet as $NN$ ($^1$S$_0$) but has much better statistical signals.
Detailed studies in this channel were already performed with the direct method~\cite{Iritani:2016jie} as well as the HAL QCD method~\cite{Iritani:2018zbt} in (2+1)-flavor lattice QCD at $m_\pi = 0.51$~GeV and $m_K = 0.62$~GeV, so the main purpose of this paper is to present an in-depth analysis combining both results: in particular, the excited state contaminations in the temporal correlation functions are quantitatively evaluated by decomposing them in terms of the finite-volume eigenmodes of the Hamiltonian with the HAL QCD potential (the HAL QCD Hamiltonian). We show how the pseudo-plateau actually appears at early time slices and also predict the time slice at which the ground state saturation is achieved. Moreover, we establish the consistency between the direct method and the HAL QCD method by demonstrating that temporal correlation functions constructed from two-baryon operators optimized with the eigenmodes of the HAL QCD Hamiltonian exhibit plateaux, already at early time slices, with values consistent with the eigenenergies. This paper is organized as follows. In Sec.~\ref{sec:formalism}, we introduce the theoretical framework of the direct method and the HAL QCD method, and present the numerical setup of the lattice QCD calculation. In Sec.~\ref{sec:previous}, we recapitulate the previous analyses of the direct method~\cite{Iritani:2016jie} and the HAL QCD method~\cite{Iritani:2018zbt}. In Sec.~\ref{sec:anatomy}, we decompose the correlation functions into the eigenmodes of the HAL QCD Hamiltonian and present the anatomy of the excited state contaminations in the direct method. We also demonstrate that the eigenfunctions can be used to optimize two-baryon operators, and establish the consistency between the temporal correlations with the optimized operators and the HAL QCD method. Sec.~\ref{sec:summary} is devoted to the conclusion.
In Appendix~\ref{app:n2lo}, we check how the next-to-next-to-leading order (N$^2$LO) analysis for the HAL QCD potential affects the finite volume spectra. In Appendix~\ref{app:eigen_func}, we collect eigenfunctions of the HAL QCD Hamiltonian on various volumes. In Appendix~\ref{app:inelastic}, we study the reconstruction of the $R$-correlator from the elastic states. In Appendix~\ref{app:delta_E_r}, we collect the results for the reconstruction of the effective energy shifts. In Appendix~\ref{app:eigen-proj}, we show the effective energy shifts from the optimized operators on various volumes. We note that a preliminary account of this study was reported in Refs.~\cite{Iritani:2016xmx, Iritani:2017wvu}. \section{Methods and Lattice setup} \label{sec:formalism} In this section, we briefly summarize the direct method and the HAL QCD method for two-baryon systems, together with the lattice setup used in this paper. \subsection{Direct method} \label{subsec:formalism:direct} In the direct method for two-baryon systems, the energy eigenvalues (on a finite volume) are measured from the temporal correlation of the two-baryon operator, $\mathcal{J}^{\rm sink, src}_{BB}(t)$, as \begin{equation} C_\mathrm{BB}(t) \equiv \langle 0 | \mathcal{J}^{\rm sink}_{BB}(t) \overline{\mathcal{J}}^{\rm src}_{BB}(0)| 0 \rangle = \sum_n Z_n e^{-W_n t} + \cdots, \label{eq:4pt_direct} \end{equation} where $W_n$ is the energy of the $n$-th two-baryon elastic state and the ellipsis denotes the inelastic contributions.
In order to obtain the energy shifts $\Delta E_n \equiv W_n - 2m_B$, with $m_B$ being the single baryon mass, one often uses the ratio of the temporal correlation function of the two- (one-) baryon system $C_\mathrm{BB}(t)$ ($C_\mathrm{B}(t)$), \begin{equation} R(t) \equiv \frac{C_\mathrm{BB}(t)}{\{C_\mathrm{B}(t)\}^2}, \qquad C_B(t) = Z_B e^{-m_B t} + \cdots , \end{equation} to reduce the statistical uncertainties as well as some systematics, thanks to the correlations between $C_\mathrm{BB}(t)$ and $C_\mathrm{B}(t)$. The energy shift of the ground state can be obtained from the plateau value of the effective energy shift defined by \begin{equation} \Delta E_\mathrm{eff}(t) \equiv \frac{1}{a} \log \frac{R(t)}{R(t+a)} \label{eq:Eeff} \end{equation} with $a$ being the lattice spacing. Here $t$ needs to be sufficiently large compared to the inverse of the excitation energy. Once the energy shift of the ground (or excited) state on a finite volume is obtained, one can calculate the scattering phase shift in the infinite volume at that energy, $\delta_0 (k)$, via L\"uscher's finite volume formula~\cite{Luscher:1990ux}, \begin{equation} k \cot \delta_0 (k) = \frac{1}{\pi (La)} \sum_{\vec n\in \mathbf{Z}^3}\frac{1}{\vec n^2 -q^2}, \qquad q=\frac{k (La)}{2\pi}, \label{eq:kcot_delta} \end{equation} where we consider the S-wave scattering for simplicity, $k$ is defined through $W_n = 2\sqrt{m_B^2 + k^2}$, and $L$ is the number of lattice sites in each spatial direction of the lattice box. If the energy shift $\Delta E_n$ is negative, the analytic continuation of the above formula to $k^2 <0$ is understood. A state with a negative energy shift in the infinite volume limit corresponds to a bound state. As noted before, the origin of the difficulty of two-baryon systems is the existence of elastic scattering states.
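In lattice units ($a=1$), the ratio $R(t)$ and the effective energy shift of Eq.~(\ref{eq:Eeff}) amount to the following minimal sketch; the synthetic amplitudes and masses below are illustrative, and real analyses quote jack-knife errors.

```python
import numpy as np

def effective_energy_shift(C_BB, C_B, a=1.0):
    """Delta E_eff(t) = (1/a) log[R(t)/R(t+a)] with R(t) = C_BB(t)/C_B(t)^2.

    C_BB, C_B: correlators sampled at t = 0, a, 2a, ...
    Returns one fewer entry than the input arrays."""
    R = C_BB / C_B**2
    return np.log(R[:-1] / R[1:]) / a

# synthetic check: a single elastic state with shift dE gives a flat plateau
t = np.arange(10.0)
m_B, dE = 1.0, -0.01
C_B  = 0.7 * np.exp(-m_B * t)
C_BB = 1.3 * np.exp(-(2 * m_B + dE) * t)
```

For a single elastic state the effective energy shift is exactly flat at $\Delta E$; the difficulty is that elastic excited states spoil this flatness only very slowly in $t$.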
Since the typical excitation energy of such states is $(2\pi)^2/((La)^2m_B)$, the ground state saturation requires extremely large $t$, e.g., $t \gtrsim {\cal O}(4)$~fm at $La=4$ fm and $m_B=2$~GeV, where the bad signal-to-noise ratio makes it practically impossible to obtain signals. In the literature of the direct method~\cite{Yamazaki:2015asa, Wagman:2017tmp, Berkowitz:2015eaa}, however, the energy shift of the ground state was instead extracted from the plateau-like behavior of the effective energy shift at early time slices, $t \sim {\cal O}(1)$~fm, assuming that the ground state saturation is achieved. \subsection{HAL QCD method} \label{subsec:formalism:hal} In the HAL QCD method, the energy-independent non-local potential $U(\vec{r}, \vec{r'})$ is defined from \begin{equation} (E_k - H_0) \psi^W(\vec{r}) =\int d\vec{r'} U(\vec{r}, \vec{r'}) \psi^W(\vec{r'}) , \label{eq:SCH} \end{equation} with the Nambu-Bethe-Salpeter (NBS) wave function~\cite{Ishii:2006ec,Aoki:2009ji}, \begin{eqnarray} \psi^W(\vec{r}) &=& \langle 0 \vert T\{ \sum_{\vec{x}} B(\vec{x}+\vec{r},0)B(\vec{x},0) \} \vert 2B, W \rangle. \end{eqnarray} Here $\vert 2B, W\rangle$ is the QCD eigenstate for two baryons with the eigenenergy $W = 2\sqrt{m_B^2 + k^2}$ in the center of mass system, $B(\vec{x},t)$ is a single baryon operator, $E_k = k^2/(2\mu)$, and $H_0 = -\nabla^2/(2\mu)$ with $\mu = m_B/2$ being the reduced mass. Eq.~(\ref{eq:SCH}) has a field theoretical derivation on the basis of the Nishijima-Zimmermann-Haag reduction formula for composite operators~\cite{Haag:1958vt}. Below the inelastic threshold $W_\mathrm{th}$, the potential $U(\vec{r}, \vec{r'})$ is shown to be faithful to the phase shifts, which are encoded in the behaviors of the NBS wave functions at large~$r$.
The four-point correlation function of the two-baryon system $F(\vec{r}, t)$ is given by \begin{eqnarray} F(\vec{r}, t) &\equiv& \langle 0 | T \{ \sum_{\vec{x}} B(\vec{x}+\vec{r},t)B(\vec{x},t) \overline{\mathcal{J}}^{\rm src}_{BB}(0)\}| 0\rangle \\ &=& \langle 0 | T\{ \sum_{\vec{x}} B(\vec{x}+\vec{r},t) B(\vec{x},t)\} \sum_{n} | 2B, W_n \rangle \langle 2B, W_n | \overline{\mathcal{J}}^{\rm src}_{BB}(0) | 0 \rangle + \cdots \nonumber \\ &=& \sum_{n} A_{n} \psi^{W_n}(\vec{r})e^{-W_n t} + \cdots, \end{eqnarray} where $A_{n} \equiv \langle 2B, W_n|\overline{\mathcal{J}}^{\rm src}_{BB}(0)|0\rangle$ is the overlap factor and the ellipsis represents the inelastic contributions. In the time-dependent HAL QCD method~\cite{HALQCD:2012aa,Aoki:2012tk}, the potential is extracted directly from the so-called $R$-correlator as \begin{equation} \left[ -H_0 - \frac{\partial}{\partial t} + \frac{1}{4m_B} \frac{\partial^2}{\partial t^2} \right]R(\vec{r},t) = \int d\vec{r'} U(\vec{r}, \vec{r'}) R(\vec{r'},t), \label{eq:pot-def} \end{equation} where \begin{equation} R(\vec{r}, t) \equiv \frac{F(\vec{r}, t)}{\{C_B(t) \}^2} = \sum_{n} \frac{A_{n}}{Z_B^2} \psi^{W_n}(\vec{r})e^{-(W_n-2m_B) t} + \cdots , \end{equation} with the ellipsis being the inelastic contributions. Eq.~(\ref{eq:pot-def}) requires neither the ground state saturation nor the determination of individual eigenenergy $W_n$ and eigenfunction $\psi^{W_n}(\vec{r})$, as all elastic states can be used to extract the energy-independent potential. Therefore, compared with the direct method, the condition required for the reliable calculation is much more relaxed in the time-dependent HAL QCD method as $t$ $\gtrsim {\cal O}(\Lambda_{\rm QCD}^{-1}) \sim \mathcal{O}(1)$~fm, where $R(\vec{r},t)$ is saturated by the contributions from elastic states (``the elastic state saturation''). 
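If the non-locality of $U$ is neglected (i.e., the potential is treated as local), Eq.~(\ref{eq:pot-def}) can be solved pointwise with finite differences. The following is a minimal sketch, not the production analysis: the function name, periodic boundary conditions, central time differences, and lattice units are our assumptions.

```python
import numpy as np

def lo_potential(R, t_idx, m_B, a=1.0):
    """Local (leading-order) potential V_0(r) from the R-correlator.

    R: array of shape (T, L, L, L), the R-correlator on an L^3 spatial
       lattice at times t = 0, a, ...; needs 1 <= t_idx <= T-2 for the
       central time differences."""
    mu = m_B / 2.0                                   # reduced mass
    Rt = R[t_idx]
    # discrete Laplacian with periodic boundary conditions
    lap = (sum(np.roll(Rt, s, axis=ax) for ax in range(3) for s in (1, -1))
           - 6.0 * Rt) / a**2
    H0R = -lap / (2.0 * mu)                          # H_0 applied to R
    dR  = (R[t_idx + 1] - R[t_idx - 1]) / (2.0 * a)  # dR/dt
    d2R = (R[t_idx + 1] - 2.0 * Rt + R[t_idx - 1]) / a**2
    return -H0R / Rt - dR / Rt + d2R / (4.0 * m_B * Rt)
```

For an $r$-independent $R$-correlator the Laplacian term vanishes and only the time derivatives contribute, so a constant $R$ gives a vanishing potential, as expected for a free system.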
In practice, we expand the non-local potential in terms of derivatives as $U(\vec{r}, \vec{r'}) = \displaystyle \sum_n V_n (\vec{r}) \nabla^n \delta(\vec{r} - \vec{r'})$. The leading order (LO) approximation gives $ U(\vec{r},\vec{r'}) \simeq V_0^\mathrm{LO}(r) \delta(\vec{r} - \vec{r'}), $ which can be determined as \begin{equation} V_0^\mathrm{LO}(\vec{r}) = - \frac{H_0 R(\vec{r},t)}{R(\vec{r},t)} - \frac{(\partial/\partial t) R(\vec{r},t)}{R(\vec{r},t)} + \frac{1}{4m_B} \frac{(\partial/\partial t)^2 R(\vec{r},t)}{R(\vec{r},t)} . \label{eq:veff} \end{equation} We can also examine the effect of higher order contributions to observables such as the scattering phase shifts: in this paper, we present the study of the correction to the LO potential for the spin-singlet channel at the next-to-next-to-leading order (N$^2$LO) as \begin{eqnarray} U(\vec{r}, \vec{r'}) &\simeq& \{V_0^\mathrm{N^2LO}(\vec{r}) + V_2^\mathrm{N^2LO}(\vec{r})\nabla^2\}\delta(\vec{r}- \vec{r'}) . \label{eq:pot:N2LO} \end{eqnarray} \subsection{Lattice setup} \label{sec:setup} Numerical data in the previous literature~\cite{Iritani:2016jie,Iritani:2018zbt} and in this paper are obtained from the (2+1)-flavor lattice QCD ensembles generated in Ref.~\cite{Yamazaki:2012hi} with the Iwasaki gauge action and a nonperturbatively $\mathcal{O}(a)$-improved Wilson quark action at the lattice spacing $a=0.08995(40)$~fm ($a^{-1} = 2.194(10)$~GeV). In the present paper, we make use of gauge ensembles on three lattice volumes, $L^3 \times T =$ $40^3 \times 48$, $48^3 \times 48$, and $64^3 \times 64$, with heavy up and down quark masses and the physical strange quark mass, corresponding to $m_\pi = 0.51$~GeV, $m_K = 0.62$~GeV, $m_N = 1.32$~GeV and $m_\Xi = 1.46$~GeV.
We employ two different quark sources with the Coulomb gauge fixing: the wall source, $q^\mathrm{wall}(t) = \sum_{\vec{y}}q(\vec{y},t)$, mainly used in the HAL QCD method, and the smeared source, $q^\mathrm{smear}(\vec{x},t) = \sum_{\vec{y}} f(|\vec{x}-\vec{y}|)q(\vec{y},t)$, often used in the direct method. For the smearing function, we take $f(r) \equiv \{Ae^{-Br}, 1, 0\}$ for $\{0 < r < (L-1)/2$, $r=0$, $(L-1)/2\leq r\}$, respectively, as in Ref.~\cite{Yamazaki:2012hi}, and the center of the smeared source is the same for all six quarks (i.e., zero displacement between two baryons), as has been employed in all previous studies in the direct method claiming the existence of the $NN$ bound states for heavy quark masses~\cite{Yamazaki:2015asa,Wagman:2017tmp,Berkowitz:2015eaa}.% \footnote{ Ref.~\cite{Berkowitz:2015eaa} uses a non-zero displaced operator as well. } For both sources, the point-sink operator for each baryon is employed in this study. The numbers of configurations and other parameters are summarized in Table~\ref{tab:lattice_setup}. The correlation functions are calculated by the unified contraction algorithm (UCA)~\cite{Doi:2012xd} and the statistical errors are evaluated by the jack-knife method. For more details on the simulation setup, see Ref.~\cite{Iritani:2016jie}. In this paper, we focus on the $\Xi\Xi$ ($^1$S$_0$) system, which belongs to the same $\mathbf{27}$ representation as $NN$ ($^1$S$_0$) under flavor SU(3) transformations, but has a much better signal-to-noise ratio than $NN$, as the system contains four strange quarks. We use the relativistic interpolating operator~\cite{Iritani:2016jie} for $\Xi$, given by \begin{equation} \Xi_\alpha^0 = \varepsilon_{abc}(s^{aT}C\gamma_5 u^b)s_\alpha^c, \quad \Xi_\alpha^- = \varepsilon_{abc}(s^{aT}C\gamma_5 d^b)s_\alpha^c, \label{eq:Xi_op} \end{equation} where $C = \gamma_4\gamma_2$ is the charge conjugation matrix, and $\alpha$ and $a$, $b$, $c$ are the spinor and color indices, respectively.
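The piecewise smearing profile $f(r)$ defined above can be written compactly as follows; the vectorization over $r$ and the function name are our own choices.

```python
import numpy as np

def smearing_profile(r, A, B, L):
    """f(r) = 1 at r = 0, A*exp(-B*r) for 0 < r < (L-1)/2, 0 otherwise."""
    r = np.asarray(r, dtype=float)
    f = np.where(r == 0.0, 1.0, A * np.exp(-B * r))
    return np.where(r >= (L - 1) / 2.0, 0.0, f)
```

The cutoff at $(L-1)/2$ keeps the smearing kernel from wrapping around the periodic spatial boundary.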
\begin{table} \centering \begin{tabular}{c|c|c|cc|c} \hline \hline volume & $La$ & \# of conf. & \# of smeared sources & $(A,B)$ & \# of wall sources \\ \hline $40^3 \times 48$ & 3.6 fm & 207 & 512 & (0.8, 0.22) & 48 \\ $48^3 \times 48$ & 4.3 fm & 200 & $4 \times 384$ & (0.8, 0.23) & $4 \times 48$ \\ $64^3 \times 64$ & 5.8 fm & 327 & $1 \times 256$ & (0.8, 0.23) & $4 \times 64$ \\ \hline \hline \end{tabular} \caption{Simulation parameters. The rotational symmetry of the isotropic lattice is used to increase statistics.} \label{tab:lattice_setup} \end{table} \section{Summary of previous studies} \label{sec:previous} \subsection{Operator dependence of the plateaux in the direct method} \label{subsec:direct} \begin{figure}[hbt] \centering \includegraphics[width=0.47\textwidth,clip]{figs/dEeffs/deltaE_XiXi_1S0_wall_v_exp_48x48.pdf} \includegraphics[width=0.47\textwidth,clip]{figs/dEeffs/sink_exp_projected.pdf} \caption{ \label{fig:deltaEeff:src-dep} (Left) The source operator dependence of the effective energy shift $\Delta E_\mathrm{eff}(t)$ for $\Xi\Xi$ ($^1$S$_0$) using the wall source (red circles) and the smeared source (blue squares) for $L=48$~\cite{Iritani:2016jie}. (Right) The sink operator dependence of the same quantity with the smeared source~\cite{Iritani:2016jie}. } \end{figure} In Ref.~\cite{Iritani:2016jie}, we pointed out that the plateau-like behaviors at $t\simeq 1$ fm in the direct method depend on the source and sink operators. For example, Fig.~\ref{fig:deltaEeff:src-dep} (Left) shows the source operator dependence of the effective energy shift $\Delta E_\mathrm{eff}(t)$ in Eq.~(\ref{eq:Eeff}) for $\Xi\Xi$ ($^1$S$_0$) on $L=48$, where $\mathcal{J}^{\rm sink}_{BB}(t)$ in Eq.~(\ref{eq:4pt_direct}) is given by \begin{eqnarray} \mathcal{J}^{\rm sink}_{BB}(t) &=& \sum_{\vec{r}} \sum_{\vec{x}} B(\vec{x}+\vec{r},t)B(\vec{x},t) , \label{eq:point_sink} \end{eqnarray} where the baryon operator $B$ is given in Eq.~(\ref{eq:Xi_op}).
While plateau-like structures appear around $t/a \sim 15$ for both wall and smeared sources, the values disagree with each other. Similar inconsistencies are found on other volumes and the infinite volume extrapolation implies that the system is bound (unbound) for the smeared (wall) source. These discrepancies indicate some uncontrolled systematic errors. Indeed, such an early-time pseudo-plateau can be shown to appear even with 10\% contamination of the excited states as demonstrated by the mockup data~\cite{Iritani:2016jie}. Fig.~\ref{fig:deltaEeff:src-dep} (Right) shows the sink operator dependence of $\Delta E_\mathrm{eff}(t)$ for $\Xi\Xi$ ($^1$S$_0$) with the smeared source fixed, where the sink operator is generalized as \begin{eqnarray} \mathcal{J}^{\rm sink}_{BB}(t) &=& \sum_{\vec{r}} g(\vec{r}) \sum_{\vec{x}} B(\vec{x}+\vec{r},t)B(\vec{x},t) , \label{eq:gen_op} \\ g(\vec{r}) &=& 1 + \tilde{A} \exp(- \tilde{B} r) , \end{eqnarray} with four different parameter sets, $(\tilde{A},\tilde{B}) = (0.3,0.18), (-0.5,0.20), (-0.9,0.22)$ and $(0,0)$. The last one corresponds to the simple sink operator ($g(r)=1$) in Eq.~(\ref{eq:point_sink}). Although a plateau-like structure is observed for each sink operator, the values disagree with each other. This implies that the plateau-like behaviors at $t\simeq 1$ fm with the smeared source are not the plateau of the ground state but are pseudo-plateaux caused by contaminations of elastic scattering states other than the ground state. We note that such sink-operator dependence is not observed in the case of the wall source~\cite{Iritani:2016jie}. 
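The pseudo-plateau mechanism can be reproduced directly with mockup data: with a small elastic excitation energy, even a 10\% contamination produces an almost flat effective energy shift at early times whose value differs substantially from the true shift. All numbers below are illustrative assumptions in lattice units, not our measured values.

```python
import numpy as np

# illustrative two-state mockup: true shift dE0, elastic excitation dE1,
# and a 10% excited-state contamination c
dE0, dE1, c = -0.005, 0.05, 0.1
t = np.arange(101.0)
R = np.exp(-dE0 * t) * (1.0 + c * np.exp(-dE1 * t))   # mockup ratio R(t)
dE_eff = np.log(R[:-1] / R[1:])                       # effective energy shift
```

Because the contamination decays with the small rate $\delta E$, the effective energy shift varies only slowly around $t \sim 10\textrm{--}15$, mimicking a plateau at the wrong value, while the true plateau is reached only at much larger $t$.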
\subsection{Normality check} \label{subsec:sanity} \begin{figure}[thb] \centering \includegraphics[width=0.47\textwidth,clip]{figs/npl2015scat_1s0_kcot.pdf} \includegraphics[width=0.47\textwidth,clip]{figs/yku2012_1s0_kcot_both.pdf} \caption{ \label{fig:sanity} $k\cot\delta_0(k)/m_\pi$ as a function of $(k/m_\pi)^2$ for $NN$($^1$S$_0$) on each volume and the infinite volume in the direct method from Ref.~\cite{Orginos:2015aya} (Left) and Ref.~\cite{Yamazaki:2012hi} (Right). Black dashed lines correspond to L\"uscher's formula for each volume, while the black solid line represents the bound-state condition, $-\sqrt{-(k/m_\pi)^2}$. The red line (with an error band) corresponds to the ERE obtained from the data at $k^2 <0$ on finite volumes. In the left figure, the ERE fit to the data at $k^2 >0$ on finite volumes together with only the infinite volume limit at $k^2<0$ is also shown by the blue line. Both figures from Ref.~\cite{Iritani:2017rlk}. } \end{figure} Since the information on operator dependence as shown in the previous subsection is not always available, we have introduced the ``normality check'' in Ref.~\cite{Iritani:2017rlk}, based on L\"uscher's finite volume formula together with the analytic properties of the $S$-matrix. Some examples of the normality check are given in Fig.~\ref{fig:sanity}, where $k\cot\delta_0(k)$ is plotted as a function of $k^2$ for $NN$($^1$S$_0$). Red and blue lines in Fig.~\ref{fig:sanity} represent fits to data by the effective range expansion (ERE) at the next-to-leading order (NLO) as \begin{eqnarray} k\cot\delta_0(k) &=& \frac{1}{a_0} + \frac{r_0}{2} k^2, \end{eqnarray} where $a_0$ and $r_0$ are the scattering length and the effective range, respectively. 
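The bound-state condition used in the normality check can be stated concretely for the NLO ERE. At a bound-state pole, $k^2=-\kappa^2$ with $\kappa>0$, one needs $k\cot\delta_0=-\kappa$, and the pole is physical (correct residue sign) only if the ERE line crosses $-\sqrt{-(k/m_\pi)^2}$ less steeply at the crossing, i.e.\ $r_0\kappa<1$. A minimal Python sketch with this sign convention; the values of $(a_0, r_0)$, in units of $1/m_\pi$, are purely illustrative:

```python
import numpy as np

# NLO ERE:  k cot(delta0) = 1/a0 + (r0/2) k^2   (convention as in the text).
# Bound state: k^2 = -kappa^2 with  1/a0 - (r0/2) kappa^2 = -kappa, kappa > 0,
# plus the residue (slope) condition r0 * kappa < 1 for a physical pole.
def bound_state_kappa(a0, r0):
    """Return the physical bound-state kappa, or None (assumes r0 > 0)."""
    disc = 1.0 + 2.0 * r0 / a0   # discriminant of (r0/2) kappa^2 - kappa - 1/a0 = 0
    if disc < 0.0:
        return None
    for kappa in sorted([(1.0 - np.sqrt(disc)) / r0, (1.0 + np.sqrt(disc)) / r0]):
        if kappa > 0.0 and r0 * kappa < 1.0:
            return kappa
    return None

kappa_bound = bound_state_kappa(-5.0, 1.0)   # illustrative: a physical bound state
kappa_none  = bound_state_kappa(+5.0, 1.0)   # illustrative: no physical pole
```

A normality check then amounts to verifying that the ERE parameters fitted on finite volumes admit such a physical pole consistently with the infinite-volume limit.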
In Fig.~\ref{fig:sanity} (Left), an inconsistency in the ERE parameters is observed: the NLO ERE fit obtained from the data at $k^2 <0$ on finite volumes (red line) disagrees with the fit to the data at $k^2 >0$ on finite volumes together with the infinite volume limit at $k^2<0$ (blue line). For the latter fit (the blue line), the physical condition of the bound state pole is also violated. In Fig.~\ref{fig:sanity} (Right), the NLO ERE fit exhibits a singular behavior, namely a divergent effective range. See Ref.~\cite{Iritani:2017rlk} for more detailed discussions. As in the case of the operator dependence, the normality check in Ref.~\cite{Iritani:2017rlk} indicates that the plateau fitting at $t\simeq 1$ fm suffers from large uncontrolled systematic errors, probably due to contaminations from the excited states. \subsection{Source-operator dependence and derivative expansion of the potentials} \label{subsec:hal} In Ref.~\cite{Iritani:2018zbt}, we investigated the source-operator dependence of the potential as a tool to estimate the systematics associated with the derivative expansion of the HAL QCD potential, using two types of sources, the wall and smeared sources. \begin{figure}[t] \centering \includegraphics[width=0.47\textwidth,clip]{figs/xixi_L64_wall_breakup.png} \includegraphics[width=0.47\textwidth,clip]{figs/xixi_L64_exp_breakup.png} \caption{\label{fig:pot_breakup} The potential at the leading order (LO) analysis $V_0^{\rm LO}(r)$ (red circles) from the wall source (Left) and the smeared source (Right) for $\Xi\Xi$($^1$S$_0$) at $t/a=13$ for $L=64$~\cite{Iritani:2018zbt}. The blue squares, green triangles and black diamonds denote the 1st, 2nd and 3rd terms in Eq.~(\ref{eq:veff}), respectively. } \end{figure} Fig.~\ref{fig:pot_breakup} shows the LO $\Xi\Xi$($^1$S$_0$) potential and its breakup into the 1st, 2nd and 3rd terms in Eq.~(\ref{eq:veff}) of the time-dependent HAL QCD method for the wall source (Left) and the smeared source (Right).
For the wall source, the 1st term dominates, with moderate (negligible) contributions from the 2nd (3rd) term. As the 2nd term is not constant as a function of $r$, there exist small but non-negligible contributions from the excited states. For the smeared source, on the other hand, all terms are important. The substantial $r$ dependence of the 2nd term (green triangles), which indicates large contributions from the excited states in the smeared source, is canceled by the 1st term (blue squares) and further corrected by the 3rd term (black diamonds). The total potentials (red circles) from the two sources, however, show qualitatively similar behaviors, which illustrates that the time-dependent HAL QCD method works well for extracting the potential irrespective of the source type. \begin{figure}[hbt] \centering \includegraphics[width=0.47\textwidth,clip]{figs/comp_wall_smear_t013.png} \caption{\label{fig:pot_comp} A comparison of the LO $\Xi\Xi$($^1$S$_0$) potential $V_0^{\rm LO}(r)$ between the wall source (red) and the smeared source (blue) at $t/a = 13$ \cite{Iritani:2018zbt}. } \end{figure} Fig.~\ref{fig:pot_comp} shows a comparison of the total potentials from the two sources, $V_0^{\rm LO(wall/smear)}(r)$, at $t/a = 13$. The potential approaches zero within errors at large $r$ for both sources, indicating that contributions from inelastic states are suppressed. While the potentials from the two sources are very similar, there exists a non-zero difference between them. We find that this difference becomes smaller as $t$ increases, with the potential from the wall (smeared) source being independent of (dependent on) $t$~\cite{Iritani:2018zbt}. This indicates that effects of higher order terms in the derivative expansion exist in the data from the smeared source. Using this difference, $V_0^{\rm N^2LO}(r)$ and $V_2^{\rm N^2LO}(r)$ in Eq.~(\ref{eq:pot:N2LO}) can be determined.
In Fig.~\ref{fig:vlo_vnlo} (Left), $V_0^{\rm N^2LO}(r)$ is plotted together with $V_0^{\rm LO(wall)}(r)$ on $L=64$ at $t/a=13$. As $V_0^{\rm N^2LO}(r)$ agrees well with $V_0^{\rm LO(wall)}(r)$ except at short distances, we expect that $V_0^{\rm LO(wall)}(r)$ works well to reproduce physical observables at low energies. Indeed, as shown in Fig.~\ref{fig:vlo_vnlo} (Right), the N$^2$LO correction to the S-wave scattering phase shift $\delta_0$ is small at low energies, showing not only that the derivative expansion converges well but also that the LO analysis for the wall source is sufficiently accurate at low energies. \begin{figure}[hbt] \centering \includegraphics[width=0.49\textwidth,clip]{figs/comp_v0_t13_zoom.png} \includegraphics[width=0.46\textwidth,clip]{figs/delta_comp_L64_t013.pdf} \caption{ \label{fig:vlo_vnlo} (Left) The $\Xi\Xi$($^1$S$_0$) potential at the N$^2$LO analysis, $V_0^\mathrm{N^2LO}(r)$ (red circles), together with the potential at the LO analysis for the wall source, $V_0^\mathrm{LO(wall)}(r)$ (blue diamonds) at $t/a=13$ on $L=64$. (Right) The scattering phase shifts $\delta_0(k)$ from $V_0^{\rm LO(wall)}$ (black diamonds), $V_0^{\rm N^2LO}(r)$ (blue squares) and $V_0^{\rm N^2LO}(r) + V_2^{\rm N^2LO}(r)\nabla^2$ (red circles) at $t/a=13$. Both figures from Ref.~\cite{Iritani:2018zbt}. } \end{figure} \section{Anatomy: Excited state contaminations in the effective energy shifts} \label{sec:anatomy} We now present the main results of this paper: we analyze the behaviors of the $\Xi\Xi$($^1$S$_0$) temporal correlation functions for both wall and smeared sources using the HAL QCD potential, and demonstrate that contaminations of elastic excited states cause pseudo-plateau behaviors at early time slices. The strategy of our analysis is as follows.
Since the leading-order HAL QCD potential from the wall source is found to be a reasonable approximation of the exact potential, we evaluate the eigenfunctions of the Hamiltonian with this potential in the finite box, restricting ourselves to eigenvalues below the inelastic threshold. We then calculate overlap factors between these eigenmodes and the $\Xi\Xi$ ($^1$S$_0$) correlation functions, in terms of which we reconstruct the pseudo-plateau behaviors of the energy shifts. We also show that the plateaux of the temporal correlation functions projected onto the lowest or the 2nd lowest eigenfunction agree with the corresponding eigenvalues. This demonstrates that the HAL QCD potential method and the accurate extraction of energy shifts from temporal correlation functions give consistent results. \subsection{Eigenvalues and eigenfunctions in the finite box} \label{subsec:eigens} We first evaluate eigenvalues and eigenfunctions of the leading-order HAL QCD Hamiltonian in a finite box given by \begin{eqnarray} H^{\rm LO} = H_0 + U, \qquad U\equiv V_0^{\rm LO(wall)} , \end{eqnarray} where we take $V_0^{\rm LO(wall)}$ on each volume for $U$, since $V_0^{\rm LO(wall)}$ is a reasonable approximation of the exact potential $U$.\footnote{In Appendix~\ref{app:n2lo}, we employ the potential at the N$^2$LO analysis instead, and find that the results are consistent with the LO results. } Note that the volume dependence of $V_0^{\rm LO(wall)}$ is negligible, thanks to the short-ranged nature of the interaction. In a finite lattice box, eigenvalues and eigenfunctions of the Hermitian matrix $H^{\rm LO}$ can be easily obtained.
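The finite-box diagonalization itself is elementary. As a self-contained illustration (not the actual lattice setup: the box size, reduced mass and Gaussian well below are invented toy values), one can build $H=H_0+U$ with a periodic discrete Laplacian and diagonalize it:

```python
import numpy as np

# Toy finite-volume Hamiltonian H = -Laplacian/(2 mu) + U on a periodic L^3 box.
L = 6                 # tiny box, for illustration only
mu = 1.0              # reduced mass in lattice units (assumed)
V0, b = -0.9, 1.0     # depth/range of a short-ranged attractive well (assumed)

# distance to the origin with periodic wrapping
n = np.arange(L)
d = np.minimum(n, L - n)
r = np.sqrt(d[:, None, None]**2 + d[None, :, None]**2 + d[None, None, :]**2)
U = (V0 * np.exp(-(r / b)**2)).ravel()

# dense 7-point periodic Laplacian: diagonal 3/mu, each of 6 neighbors -1/(2 mu)
N = L**3
H = np.diag(U + 3.0 / mu)
idx = np.arange(N).reshape(L, L, L)
for axis in range(3):
    for shift in (+1, -1):
        j = np.roll(idx, shift, axis=axis).ravel()
        H[np.arange(N), j] += -0.5 / mu

w, v = np.linalg.eigh(H)   # ascending eigenvalues; w[0] is the ground state
```

For the actual analysis, the measured $V_0^{\rm LO(wall)}(r)$ on each volume replaces the toy well, and only the $A_1^+$ eigenmodes below the inelastic threshold are kept.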
The $n$-th eigenvalue of $H^{\rm LO}$ in the $A_1^{+}$ representation below the inelastic threshold\footnote{ For the $\Xi\Xi$ system in the $^1S_0$ channel at $m_\pi = 0.51$ GeV, the lowest inelastic threshold is either $\Xi^\ast\Xi$ or $\Sigma\Omega$ in $^5$D$_0$ channel, which corresponds to $W_\mathrm{th} - 2m_\Xi \simeq 0.25 - 0.30$~GeV on $L=64-40$, using $m_\Sigma = 1.40$~GeV, $m_{\Xi^\ast} = 1.68$~GeV and $m_\Omega = 1.74$~GeV.} is tabulated in Table~\ref{tab:deltaEn_summary}, where we show the energy shift compared to the threshold, $\Delta E_n \equiv W_n - 2m_\Xi$. The number of excited states below the inelastic threshold is 3, 4 and 6 on $L = 40$, 48, and 64, respectively. For larger volume, the energy gap becomes smaller and the number of elastic excited states increases. \begin{table} \centering \begin{tabular}{c|ccc} \hline \hline $\Delta E_n$ [MeV] & $L= 40$ & $L = 48$ & $L = 64$ \\ \hline $n = 0$ & $-5.5(1.0)\left(^{+1.8}_{-0.4}\right)$ & $-2.8(0.4)\left(^{+1.1}_{-0.1}\right)$ & $-1.5(0.3)\left(^{+0.4}_{-0.1}\right)$ \\ $n = 1$ & $77.2(0.8)\left(^{+0.8}_{-4.7}\right)$ & $52.0(0.4)\left(^{+3.6}_{0.0}\right)$ & $28.4(0.3)\left(^{+0.4}_{-0.1}\right)$ \\ $n = 2$ & $161.5(1.0)\left(^{+0.2}_{-6.8}\right)$ & $110.0(0.5)\left(^{+3.2}_{0.0}\right)$ & $60.4(0.4)\left(^{+0.3}_{-0.1}\right)$ \\ $n = 3$ & $236.5(1.1)\left(^{+1.0}_{-0.7}\right)$ & $164.9(0.6)\left(^{+2.1}_{-0.5}\right)$ & $93.2(0.4)\left(^{+0.7}_{0.0}\right)$ \\ $n = 4$ & --- & $216.3(0.4)\left(^{+1.0}_{-0.4}\right)$ & $124.1(0.3)\left(^{+0.2}_{-0.1}\right)$ \\ $n = 5$ & --- & --- & $155.8(0.3)\left(^{+0.6}_{-0.0}\right)$ \\ $n = 6$ & --- & --- & $186.5(0.3)\left(^{+0.8}_{0.0}\right)$ \\ \hline \hline \end{tabular} \caption{ Eigenvalues of the $n$-th eigenfunction below the inelastic threshold in the $A_1^+$ representation of the HAL QCD Hamiltonian $H^{\rm LO}$ in each volume. Eigenvalues are given in terms of the energy shifts from the threshold, $\Delta E_n \equiv W_n - 2m_\Xi$. 
Central values and statistical errors are evaluated at $t/a = 13$, while the systematic errors are estimated by using the results at $t/a = 14, 15, 16$. \label{tab:deltaEn_summary}} \end{table} Fig.~\ref{fig:eigen_functions:48} shows the eigenfunctions $\Psi_n(\vec{r})$ on $L=48$, normalized as $\sum_{\vec{r}} |\Psi_n(\vec{r})|^2 = 1$ with $\Psi_n(\vec{0}) > 0$.% \footnote{ $\Psi_n(\vec{r})$ is a multi-valued function of $r$ due to the effect of the (periodic) finite box.} Up to a normalization, $\Psi_n(\vec{r})$ corresponds to the NBS wave function $\psi^{W=W_n}(\vec{r})$ in a finite volume. The lowest state $\Psi_0(\vec{r})$ has a shape similar to that of the $R$-correlator for the wall source: it shows a weak peak structure around $r \lesssim 1$~fm and becomes flat at large distances without any nodes. The eigenfunctions for the excited states, on the other hand, have nodes, whose number increases as the eigenvalue becomes larger. The short-distance structures of $\Psi_{n>0}(\vec{r})$, which have steeper peaks around $r < 1$~fm than that of $\Psi_0(\vec{r})$, resemble the $R$-correlator for the smeared source. Eigenfunctions on the other volumes are collected in Appendix~\ref{app:eigen_func}. \begin{figure}[hbt] \centering \includegraphics[width=0.47\textwidth,clip]{figs/eigenfuncs/psi_L48_t013_n0_2.png} \includegraphics[width=0.47\textwidth,clip]{figs/eigenfuncs/psi_L48_t013_n3_4.png} \caption{\label{fig:eigen_functions:48} Eigenfunctions of elastic states in the $A_1^+$ representation of the HAL QCD Hamiltonian $H^{\rm LO}$ below the inelastic threshold, $\Psi_n(\vec{r})$, on $L=48$ at $t/a = 13$ for $n=0, 1, 2$ (Left) and $n=3, 4$ (Right), where red up-pointing triangles, blue squares, green circles, orange diamonds and black down-pointing triangles correspond to $n=0, 1, 2, 3$ and 4, respectively. The eigenfunction is normalized as $\sum_{\vec{r}} |\Psi_n(\vec{r})|^2 = 1$ and $\Psi_n(\vec{0}) > 0$. Errors are statistical only.
} \end{figure} \subsection{Decomposition of the $R$-correlator via eigenfunctions} \label{subsec:decomposition} Since the $R$-correlator is dominated by elastic states at moderately large $t$, we can expand it in terms of eigenfunctions of $H^\mathrm{LO}$ as \begin{equation} R^\mathrm{wall/smear}(\vec{r}, t) = \sum_n a_n^\mathrm{wall/smear} \Psi_n(\vec{r}) e^{-\Delta E_n t}, \label{eq:Rcorr_decomp} \end{equation} where the overlap coefficient $a_n$ characterizes the magnitude of the contribution from the corresponding eigenfunction. Using the orthogonality of $\Psi_n(\vec{r})$, $a_n$ can be determined by \begin{eqnarray} a_n^\mathrm{wall/smear} = \sum_{\vec{r}} \Psi_n^\dag(\vec{r}) R^\mathrm{wall/smear}(\vec{r}, t) e^{\Delta E_n t}. \label{eq:Rcorr_decomp:a_n} \end{eqnarray} The magnitude of the corresponding excited state contamination in the $R$-correlator is represented by the ratio $a_n/a_0$. In Fig.~\ref{fig:an_a0}, we plot $a_n/a_0$ obtained at $t/a = 13$ as a function of $\Delta E_n$ for the wall source (Left) and the smeared source (Right). Calculations at $t/a=14, 15, 16$ confirm that the results are almost independent of $t$ within statistical errors, indicating that the decomposition is reliable.\footnote{ In Appendix~\ref{app:inelastic}, we check how well the decomposition in Eq.~(\ref{eq:Rcorr_decomp}) approximates the original $R$-correlator. It is found that the magnitude of the residual relative to the original $R$-correlator at $t/a = 13$ is as small as ${\cal O}(10^{-5}-10^{-6})$ for the wall source and $0.4$\%, $2$\%, $5$\% for the smeared source on $L=40$, $48$, $64$, respectively. } In the case of the wall source, $R^\mathrm{wall}(r,t)$ has a large overlap with the ground state, and $|a_{n>0}/a_0|$ is smaller than 0.1. In the case of the smeared source, on the other hand, $|a_n/a_0| \sim \mathcal{O}(1)$ and thus all elastic excited states significantly contribute to the $R$-correlator. 
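The projection in Eq.~(\ref{eq:Rcorr_decomp:a_n}) is a plain orthogonality exercise, which the following toy sketch makes explicit (a random orthonormal basis stands in for $\Psi_n$, and the energy shifts and overlaps are invented):

```python
import numpy as np

# Build a toy R(r, t) from known overlaps, then recover them via
# a_n = sum_r Psi_n^dag(r) R(r, t) e^{+Delta E_n t}.
rng = np.random.default_rng(0)
N, nmodes = 64, 4
Psi, _ = np.linalg.qr(rng.normal(size=(N, nmodes)))  # orthonormal columns ~ Psi_n
dE = np.array([-0.003, 0.02, 0.05, 0.08])            # toy Delta E_n
a_true = np.array([1.0, -0.4, 0.2, -0.1])            # toy source overlaps a_n

t = 13.0
R_t = (Psi * (a_true * np.exp(-dE * t))).sum(axis=1) # R(r, t) on the grid

a_rec = (Psi.T @ R_t) * np.exp(dE * t)               # recovered overlaps
```

In the lattice analysis, the stability of the $a_n$ so obtained under $t \to t + a$ provides the consistency check quoted in the text.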
\begin{figure}[hbt] \centering \includegraphics[width=0.47\textwidth,clip]{figs/factors/an_a0_wall_t013.pdf} \includegraphics[width=0.47\textwidth,clip]{figs/factors/an_a0_exp_t013.pdf} \caption{ \label{fig:an_a0} The ratio of the overlap coefficients $a_n/a_0$ in the $R$-correlator obtained at $t/a = 13$ for the wall source (Left) and the smeared source (Right) on the three volumes. } \end{figure} This difference in the magnitude of $a_n/a_0$ between the two sources can be understood from the overlap between the $R$-correlator and the eigenfunctions. Fig.~\ref{fig:Roverlap} shows the spatial profile of the overlap, $\Psi_n^\dag(\vec{r}) R(\vec{r},t) e^{\Delta E_n t}$, for the wall source (Left) and the smeared source (Right) on $L = 48$, whose spatial summation corresponds to $a_n$. The contribution from the first excited state (blue squares) is highly suppressed for the wall source, thanks to the cancellation between positive values at short distances and negative values at long distances of $\Psi_{n=1}^\dag(\vec{r})R^\mathrm{wall}(\vec{r},t) e^{\Delta E_{n=1} t}$. For the smeared source, on the contrary, the contamination from the 1st excited state remains non-negligible, due to the absence of the negative part in $\Psi_{n=1}^\dag(\vec{r})R^\mathrm{smear}(\vec{r},t) e^{\Delta E_{n=1} t}$. \begin{figure}[hbt] \centering \includegraphics[width=0.47\textwidth,clip]{figs/factors/overlap_Rcorr_wall_L48_t013.png} \includegraphics[width=0.47\textwidth,clip]{figs/factors/overlap_Rcorr_exp_L48_t013.png} \caption{ \label{fig:Roverlap} The overlap between the $R$-correlator and the eigenfunction, $\Psi_n^\dag(\vec{r}) R(\vec{r},t) e^{\Delta E_n t}$, as a function of $r$ at $t/a=13$ on $L=48$ for the ground state ($n=0$, red circles) and the first excited state ($n=1$, blue squares) in the case of the wall source (Left) and the smeared source (Right).
} \end{figure} In the literature, the smeared source is often employed in the direct method, in order to suppress contributions from (inelastic) excited states in a ``single-baryon'' correlation function. The same smeared source, however, does not necessarily reduce the contaminations from the elastic excited states in the two-baryon correlation function, as is explicitly shown in Fig.~\ref{fig:an_a0}. Indeed, one of the most relevant parameters which control the magnitudes of the elastic state contributions is the relative distance $\vec r$ between the two baryons at the source, which appears as \begin{eqnarray} \frac{1}{L^3}\sum_{\vec x} B(\vec x) B(\vec x + \vec r) = \sum_{\vec p} \tilde B(\vec p) \tilde B(-\vec p) e^{i \vec p \cdot \vec r}, \qquad \tilde B(\vec p) \equiv \frac{1}{L^3} \sum_{\vec x} B(\vec x) e^{-i\vec p\cdot \vec x} \end{eqnarray} in the center of mass system. Almost all studies with the direct method have employed smeared sources essentially corresponding to $|\vec{r}|=0$, which implies that elastic states with all $\vec p$ are equally generated at the source. Thus the choice $|\vec{r}|=0$ (or $\vert\vec{r}\vert\ll 1$) is one of the possible reasons for the large contaminations from elastic excited states in the case of the smeared source. If $\vert\vec r\vert$ is instead non-zero and large, modes with non-zero $\vec{p}$ may be suppressed due to the oscillating factor $e^{i \vec p \cdot \vec r}$. \footnote{ Ref.~\cite{Berkowitz:2015eaa} reported a discrepancy in the effective energies between the zero-displaced ($|\vec r|=0$) and the non-zero-displaced ($|\vec r|\not=0$) source operators, which can be naturally understood from this viewpoint, rather than from the existence of two bound states claimed in Ref.~\cite{Berkowitz:2015eaa}.} A study in which the momentum projection is performed for each baryon in the source operator was recently reported in Ref.~\cite{Francis:2018qch}.
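The role of the relative distance at the source is transparent in momentum space: the weight of relative momentum $\vec p$ is the Fourier transform of the source profile $g(\vec r)$. A one-dimensional toy check (the box size is arbitrary):

```python
import numpy as np

# Relative-momentum weights populated by a source profile g(r): |FT g|(p).
L = 8
r = np.arange(L)
g_point = (r == 0).astype(float)   # both baryons at the same point, |r| = 0
g_wall = np.ones(L) / L            # flat (wall-like) profile in r

w_point = np.abs(np.fft.fft(g_point))  # every p weighted equally
w_wall = np.abs(np.fft.fft(g_wall))    # only p = 0 survives
```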
The temporal correlation function $R(t)$ is reconstructed in terms of the eigenfunctions as \begin{eqnarray} R^\mathrm{wall/smear}(t) &=& \sum_{\vec{r}} R^\mathrm{wall/smear}(\vec{r}, t) = \sum_{\vec{r}} \sum_n a_n^\mathrm{wall/smear}\Psi_n(\vec{r}) e^{-\Delta E_n t} = \sum_n b_n^\mathrm{wall/smear} e^{-\Delta E_n t} , ~~~~~~~ \label{eq:bn_factor} \end{eqnarray} where $b_n \equiv a_n \sum_{\vec{r}}\Psi_n(\vec{r})$, and the ratio $b_n/b_0$ gives the magnitude of the contamination of $R(t)$ from the $n$-th elastic excited state. Fig.~\ref{fig:bn_b0} shows $\vert b_n/b_0\vert$ obtained at $t/a=13$ as a function of $\Delta E_n$ on the three volumes for the wall source (Left) and the smeared source (Right). Solid (open) symbols correspond to positive (negative) values of $b_n/b_0$. For the wall source, the contamination from the first excited state is found to be smaller than 1\%, and $|b_n/b_0|$ is further suppressed exponentially for higher excited states. In the case of the smeared source, the contamination from the first excited state is as large as $\sim 10$\% with a negative sign, and the contamination remains $\sim 1$\% even for higher excited states with $\Delta E_n \sim 100$~MeV. \begin{figure}[hbt] \centering \includegraphics[width=0.47\textwidth,clip]{figs/factors/bn_b0_wall_t013.pdf} \includegraphics[width=0.47\textwidth,clip]{figs/factors/bn_b0_exp_t013.pdf} \caption{ \label{fig:bn_b0} The ratio of the overlap coefficients in the temporal correlation function $|b_n/b_0|$ obtained at $t/a = 13$ for the wall source (Left) and the smeared source (Right) on the three volumes. Solid (open) symbols correspond to positive (negative) values of $b_n/b_0$.
} \end{figure} \subsection{Reconstruction of the effective energy shift} \label{subsec:reconstruction} \begin{figure}[hbt] \centering \includegraphics[width=0.47\textwidth,clip]{figs/dEeffs/REdEeff_wall_48.pdf} \includegraphics[width=0.47\textwidth,clip]{figs/dEeffs/REdEeff_exp_48.pdf} \caption{ \label{fig:ReDEeffComp:48} The reconstructed effective energy shifts $\overline{\Delta E}_\mathrm{eff}(t,t_0=13a)$ with statistical errors are plotted as a function of $t$ (colored bands), while the direct measurement of the effective energy shifts from $R$-correlators are plotted by red circles or blue squares. The black dashed lines correspond to the energy shift $\Delta E_0 (t_0=13a)$ for the ground state of the HAL QCD Hamiltonian $H^{\rm LO}$ in the finite volume. The results on $L=48$ for the wall source (Left) and the smeared source (Right). } \end{figure} \begin{figure}[hbt] \centering \includegraphics[width=0.7\textwidth,clip]{figs/dEeffs/REdEeff_wall_48_large_t.pdf} \includegraphics[width=0.7\textwidth,clip]{figs/dEeffs/REdEeff_exp_48_large_t.pdf} \caption{ \label{fig:ReDEeffComp:48:large_t} The same as Fig.~\ref{fig:ReDEeffComp:48} but for the wider range of the Euclidean time $t$. } \end{figure} Let us now examine the energy shifts obtained from the reconstructed $R$-correlators; \begin{eqnarray} \overline{\Delta E}_\mathrm{eff}^{\mathrm{wall/smear}}(t, t_0) = \frac{1}{a} \log \left[ \frac{\displaystyle\sum_{n=0}^{n_\mathrm{max}} b_n^\mathrm{wall/smear} (t_0)e^{-\Delta E_n(t_0)t}} {\displaystyle\sum_{n=0}^{n_\mathrm{max}} b_n^\mathrm{wall/smear}(t_0) e^{-\Delta E_n(t_0)(t+a)}} \right] , \label{eq:ReDEeff} \end{eqnarray} where we take $n_\mathrm{max} = 3$, 4, 6 for $L = 40$, 48, 64, respectively, corresponding to the number of elastic excited states below the inelastic threshold, and $b_n(t_0)$ and $\Delta E_n(t_0)$ are extracted at fixed $t_0$. 
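Eq.~(\ref{eq:ReDEeff}) is straightforward to evaluate once $b_n$ and $\Delta E_n$ are known. The following sketch uses invented numbers (in lattice units) that merely mimic the wall-like and smeared-like overlap patterns of Fig.~\ref{fig:bn_b0}:

```python
import numpy as np

# Reconstructed effective energy shift from overlaps b_n and shifts Delta E_n.
a_lat = 1.0                                  # lattice spacing (set to 1 here)
dE = np.array([-0.003, 0.025, 0.055])        # toy Delta E_n(t0)
b_wall = np.array([1.0, 0.005, 0.001])       # wall-like: tiny contaminations
b_smear = np.array([1.0, -0.12, 0.04])       # smeared-like: O(10%) contamination

def dE_eff_bar(t, b):
    num = np.sum(b * np.exp(-dE * t))
    den = np.sum(b * np.exp(-dE * (t + a_lat)))
    return np.log(num / den) / a_lat

wall_15 = dE_eff_bar(15.0, b_wall)     # already close to dE[0]
smear_15 = dE_eff_bar(15.0, b_smear)   # flat-ish but biased (pseudo-plateau)
smear_200 = dE_eff_bar(200.0, b_smear) # eventually relaxes to dE[0]
```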
In Fig.~\ref{fig:ReDEeffComp:48}, we show the reconstructed effective energy shifts $\overline{\Delta E}_\mathrm{eff}(t, t_0=13a)$, together with the numerical data of the effective energy shifts $\Delta E_\mathrm{eff}(t)$ from the $R$-correlators, for the wall source (Left) and the smeared source (Right) on $L=48$. The bands correspond to $\overline{\Delta E}_\mathrm{eff}(t, t_0=13a)$ with statistical errors coming from those of $b_n$ and $\Delta E_n$ at $t_0/a = 13$, while red circles or blue squares correspond to $\Delta E_\mathrm{eff}(t)$ obtained directly from the $R$-correlator in Sec.~\ref{subsec:direct}. Here we do not consider $\overline{\Delta E}_\mathrm{eff}(t,t_0=13a)$ for $t/a < 13$, where inelastic contributions are expected to be larger. The black dashed line represents the energy shift $\Delta E_0 (t_0=13a)$ for the ground state of the HAL QCD Hamiltonian $H^{\rm LO}$ on $L=48$. We find that the results of the direct method, most notably the plateau-like structures around $t/a = 15$, are well reproduced by $\overline{\Delta E}_\mathrm{eff}(t,t_0)$ for both wall and smeared sources, indicating that the behaviors of $\Delta E_\mathrm{eff}(t)$ in this time interval in the direct method are explained by the contributions from the several low-lying states. These plateau-like structures, however, do not necessarily correspond to the plateau of the ground state. Indeed, in the case of the smeared source, there is a clear discrepancy between the value of the plateau-like structure and the eigenvalue $\Delta E_0$ of the ground state. This is a consequence of the large excited state contaminations in the correlation function for the smeared source. In the case of the wall source, on the other hand, since the overlap with the ground state is large, the value of the plateau-like structure is consistent with the value $\Delta E_0$.
The fate of the plateau-like structures is more clearly seen in Fig.~\ref{fig:ReDEeffComp:48:large_t}, where we plot the behaviors at asymptotically large $t$ of the reconstructed effective energy shifts $\overline{\Delta E}_\mathrm{eff}(t, t_0=13a)$ for the wall source (red band) and the smeared source (blue band). While the plateau-like structure at $t/a \sim 15$ for the wall source is almost unchanged at larger $t$, the value of $\overline{\Delta E}_\mathrm{eff}(t, t_0=13a)$ in the case of the smeared source gradually changes as $t$ increases until it reaches the value of the ground state, $\Delta E_0$, at $t/a \sim 100$. In Appendix~\ref{subapp:delta_E_r:vol}, we perform the same analysis on the other volumes and observe essentially the same behaviors as in the case of $L=48$: For the wall source, the value of the plateau-like structure at $t/a \sim 15$ remains almost unchanged at larger $t$ and is consistent with $\Delta E_0$. For the smeared source, the value of the plateau-like structure at $t/a \sim 15$ is inconsistent with $\Delta E_0$. The deviation is found to be larger on a larger volume, due to the more severe contaminations from the excited states on larger volumes (see Sec.~\ref{subsec:decomposition}).\footnote{ Values of pseudo-plateaux do not strongly depend on volumes, while the correct values $\Delta E_0$ do. This is a counterexample against the argument in Refs.~\cite{Wagman:2017tmp,Beane:2017edf}, in which it is claimed that the volume-independence of the plateaux guarantees their correctness.} The value of $\overline{\Delta E}_\mathrm{eff}(t, t_0=13a)$ for the smeared source gradually changes as $t$ increases, and the ground state saturation is realized at $t/a \gtrsim 50, 100$ and $150$, or $t \gtrsim 5, 10$ and $15$ fm, on $L = 40, 48$ and $64$, respectively.
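These saturation times can be anticipated by a back-of-the-envelope estimate: a contamination of relative size $|b_1/b_0| \sim 0.1$ decays like $e^{-\delta E\, t}$ with the lowest gap $\delta E = \Delta E_1 - \Delta E_0$, and drops below the percent level only after $t \sim \ln(10)/\delta E$. A sketch in Python (the 10\% starting ratio and 1\% target are assumptions for illustration):

```python
import numpy as np

hbarc = 197.327                          # MeV fm
gaps = {40: 84.0, 48: 55.0, 64: 30.0}    # lowest gaps delta E in MeV (from text)

def t_for_suppression(dE_MeV, ratio=0.1, target=0.01):
    """Euclidean time (fm) at which ratio*exp(-dE t) first drops below target."""
    return hbarc / dE_MeV * np.log(ratio / target)

t_sat = {L: t_for_suppression(g) for L, g in gaps.items()}
# roughly 5, 8 and 15 fm, in line with the saturation times quoted above
```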
These time scales for the ground state saturation are actually not surprising but rather natural, considering the fact that the lowest excitation energy is as small as $\delta E \equiv \Delta E_1 - \Delta E_0 \simeq 84$, 55 and 30~MeV on $L = 40$, 48 and 64, respectively. These results clearly reveal that the plateau-like structures at $t/a \sim 15$ for the smeared source are pseudo-plateaux caused by contaminations of the elastic excited states.\footnote{ In Appendix~\ref{subapp:cut_off}, we show that the dominant contamination comes from the first excited state.} While the effective energy shifts from the wall source happen to be saturated by the ground state even at $t/a \sim 15$, it is generally difficult to confirm that a plateau-like structure corresponds to the correct energy shift of the ground state without the help of other inputs, such as the HAL QCD potential analysis in the present case. Since the calculation of the energy shift from the $R$-correlator at $t \sim (\delta E)^{-1}$ is impractical due to the exponentially growing noise, one cannot obtain the correct spectra from the plateau identification in the direct method unless sophisticated variational techniques~\cite{Luscher:1990ck} are employed. \footnote{ Application of the variational method to two-baryon systems has started recently, but mostly with respect to the flavor space~\cite{Francis:2018qch}. It will be interesting to perform the variational method with respect to the relative coordinate space for two baryons in addition. In the lattice study of meson-meson scatterings~\cite{Briceno:2017max}, the danger of the excited state contaminations in the plateau fitting has already been recognized, and the use of the variational method is known to be mandatory.
} \subsection{Projection with improved sink operator} \label{subsec:eigen-proj} Once the finite-volume eigenmodes of $H^{\rm LO}$ with the HAL QCD potential are known, an improved two-baryon sink operator for a designated eigenstate can be constructed as \begin{eqnarray} \mathcal{J}_{BB}^{\rm sink}(t) &=& \sum_{\vec{r}} \Psi_n^\dag(\vec{r}) \sum_{\vec{x}} B(\vec{x}+\vec{r},t)B(\vec{x},t) , \label{eq:EeffProj} \end{eqnarray} which is expected to have a large overlap with the $n$-th elastic state.\footnote{Use of such an improved operator as a ``source'' requires additional calculations and thus is left for future study.} This is equivalent to considering the generalized temporal correlation function with the choice $g(\vec{r}) = \Psi_n^\dag(\vec{r})$ in Eq.~(\ref{eq:gen_op}), \begin{eqnarray} R^{(n)} (t) &\equiv& \sum_{\vec{r}} \Psi_n^\dag(\vec{r}) R(\vec{r}, t) , \end{eqnarray} from which we define the effective energy shift for the $n$-th eigenfunction as \begin{equation} \Delta E_\mathrm{eff}^{(n)}(t) = \frac{1}{a} \log \frac{R^{(n)}(t)}{R^{(n)}(t+a)} . \end{equation} Fig.~\ref{fig:Eeff_proj:48} shows the effective energy shift $\Delta E_\mathrm{eff}^{(n)}(t)$ using the wall source (black up-triangles) and the smeared source (purple down-triangles) on $L=48$ for the ground state (Left) and the first excited state (Right). Shown together are the energy shifts ($\Delta E_0$ or $\Delta E_1$) with statistical errors (red bands), obtained from $H^{\rm LO}$ with the HAL QCD potential $V_0^\mathrm{LO(wall)}(r)$ at $t/a=13$, as well as those for a non-interacting system (black lines). In the case of the ground state (Fig.~\ref{fig:Eeff_proj:48} (Left)), the effective energy shifts from the direct method for the wall source (red circles) and the smeared source (blue squares) are also plotted for comparison.
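The effect of the projection can be checked in the same toy setting as before (random orthonormal modes in place of $\Psi_n$, invented shifts and overlaps): even with a badly contaminated, smeared-like source, projecting at the sink isolates a single exponential, so $\Delta E_\mathrm{eff}^{(n)}$ sits on $\Delta E_n$ immediately:

```python
import numpy as np

# Sink projection: R^{(n)}(t) = sum_r Psi_n^dag(r) R(r, t) picks out mode n.
rng = np.random.default_rng(1)
N = 64
Psi, _ = np.linalg.qr(rng.normal(size=(N, 3)))  # toy orthonormal eigenmodes
dE = np.array([-0.003, 0.025, 0.055])           # toy Delta E_n
a = np.array([1.0, -0.5, 0.3])                  # smeared-like source overlaps

def R_of(t):
    return (Psi * (a * np.exp(-dE * t))).sum(axis=1)

def dE_eff_proj(n, t):
    return np.log((Psi[:, n] @ R_of(t)) / (Psi[:, n] @ R_of(t + 1.0)))

e0 = dE_eff_proj(0, 13.0)   # equals dE[0] up to rounding
e1 = dE_eff_proj(1, 13.0)   # equals dE[1] up to rounding
```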
\begin{figure}[hbt] \centering \includegraphics[width=0.47\textwidth,clip]{figs/dEeffs/gs_projected_L48.pdf} \includegraphics[width=0.47\textwidth,clip]{figs/dEeffs/1st_projected_L48.pdf} \caption{ \label{fig:Eeff_proj:48} The effective energy shift $\Delta E_\mathrm{eff}^{(n)}(t)$ on $L=48$ from the wall source (black up triangles) and the smeared source (purple down triangles) for the ground state (Left) and the first excited state (Right). Red bands represent the energy shifts with statistical errors obtained from the HAL QCD Hamiltonian $H^{\rm LO}$, while black lines represent those for a non-interacting system. In the left figure, the effective energy shifts in the direct method for the wall source (red circles) and the smeared source (blue squares) are also shown. } \end{figure} First of all, after the sink projections, the results with the wall source and those with the smeared source agree well around $t/a \sim 13$, not only for the ground state but also for the first excited state. This is in sharp contrast with the fact that the results in the direct method without projections disagree between the two sources for the ground state. Although the small overlap with the first excited state causes relatively large statistical errors in the case of the wall source, the agreement between the two sources for the first excited state is rather striking, and serves as a non-trivial check of the reliability of the effective energy shifts with the sink projection. Moreover, the results after the sink projections also agree with those from the HAL QCD Hamiltonian. Although the sink projection utilizes the information of the HAL QCD potential through the eigenfunctions, the agreement in the effective energy shifts within statistical errors for both the ground and first excited states provides a non-trivial consistency check between the HAL QCD method and the direct method with proper projection.
In other words, the results in Fig.~\ref{fig:Eeff_proj:48} establish that (i) the HAL QCD potential correctly describes the energy shifts of two baryons in the finite volume for both ground and excited states, and that (ii) these energy shifts can also be extracted in the direct method if and only if the interpolating operators are highly improved. Since the origins of systematic uncertainties are generally quite different between the two methods, such a ``projection check'' would be useful in future lattice QCD studies of two-baryon systems. In recent years, it has been argued that the seemingly inconsistent results for the $NN$ systems at heavy pion masses between L\"uscher's finite volume method and the HAL QCD method may indicate some theoretical deficit in one of the two methods. It is now clear from our analysis that L\"uscher's method and the HAL QCD method agree quantitatively with each other, as they should theoretically. \section{Summary} \label{sec:summary} In our previous works~\cite{Iritani:2016jie, Iritani:2017rlk}, it has been shown that the plateau fitting of the eigenenergies at early Euclidean times $t$, employed in the direct method, is generally unreliable for multi-baryon systems, due to the appearance of pseudo-plateaux caused by contaminations from excited states with small energy gaps, corresponding to the elastic scattering states in the finite volume. In this paper, we quantified the degree of contaminations from such excited states by decomposing the two-baryon correlation functions in terms of the finite-volume eigenmodes of the HAL QCD Hamiltonian. Taking the $\Xi\Xi$ ($^1$S$_0$) system at $m_\pi = 0.51$~GeV in (2+1)-flavor lattice QCD with the wall and smeared quark sources for $La = 3.6, 4.3, 5.8$~fm, we showed that the excited state contaminations are suppressed for the wall source, while those for the smeared source are substantial and become more severe on a larger spatial extent.
For the smeared source, the plateau-like structures at $t = 1 \sim 2$~fm are shown to be pseudo-plateaux, and the plateau with ground-state saturation is realized only at $t > 5 \sim 15$~fm, corresponding to the inverse of the lowest excitation energy. We also demonstrated that one can optimize the two-baryon operator utilizing the finite-volume eigenmodes of the HAL QCD Hamiltonian. The effective energies from the temporal correlation functions with the optimized operators are found to be consistent with the finite-volume spectra obtained from the HAL QCD Hamiltonian. This result establishes not only that the correct finite-volume spectra can be accessed by employing highly optimized operators even in the direct method, but also that the HAL QCD method and the direct method agree on the finite-volume spectra for two-baryon systems. Thus the long-standing issue of the consistency between L\"uscher's finite volume method and the HAL QCD method is positively resolved, at least for the particular system considered here. The next step is to carry out comprehensive studies of baryon-baryon interactions around the physical quark masses in the HAL QCD method, which are partly underway (see, e.g.~\cite{Gongyo:2017fjb,Iritani:2018sra,Inoue:2018axd}). Those studies will reveal not only the nature of exotic dibaryons but also the equation of state of dense baryonic matter. \acknowledgments We thank the authors of Ref.~\cite{Yamazaki:2012hi} and ILDG/JLDG~\cite{conf:ildg/jldg, Amagasa:2015zwb} for providing the gauge configurations. Lattice QCD codes of CPS~\cite{CPS}, Bridge++~\cite{bridge++} and the modified version thereof by Dr. H.~Matsufuru, cuLGT~\cite{Schrock:2012fj} and the domain-decomposed quark solver~\cite{Boku:2012zi,Teraki:2013} are used in this study. The numerical calculations have been performed on BlueGene/Q and SR16000 at KEK, HA-PACS at the University of Tsukuba, FX10 at the University of Tokyo and the K computer at RIKEN R-CCS (hp150085, hp160093).
This work is supported in part by the Japanese Grant-in-Aid for Scientific Research (No. JP24740146, JP25287046, JP15K17667, JP16K05340, JP16H03978, JP18H05236, JP18H05407), by MEXT Strategic Program for Innovative Research (SPIRE) Field 5, by a priority issue (Elucidation of the fundamental laws and evolution of the universe) to be tackled by using Post K Computer, and by Joint Institute for Computational Fundamental Science (JICFuS). \clearpage
\section{Introduction and statement of results} Let $G$ be a totally disconnected, locally compact topological group, with neutral element~$e$, and $\alpha\colon G\to G$ be a continuous endomorphism. Then various $\alpha$-invariant subgroups of~$G$ can be associated with $\alpha$, which are important for the structure theory of totally disconnected, locally compact groups and its applications (see \cite{Wi3}, \cite{GBV}, \cite{BGT}; for automorphisms, see \cite{BaW}, \cite{GaW}, \cite{Sie}; cf.\ \cite{Wi1} and \cite{Wi2} for the general structure theory). According to~\cite{Wi3}, the \emph{contraction subgroup} of~$\alpha$ is the set $\con(\alpha)$ of all $x\in G$ such that $\alpha^n(x)\to e$ as $n\to\infty$. The \emph{anti-contraction subgroup} of~$\alpha$ is the set $\con^-(\alpha)$ of all $x\in G$ admitting an $\alpha$-regressive trajectory $(x_{-n})_{n\in{\mathbb N}_0}$ (i.e., a sequence with $x_0=x$ and $\alpha(x_{-n-1})=x_{-n}$ for all $n\in{\mathbb N}_0$) such that $\lim_{n\to\infty}x_{-n}=e$. Neither of the subgroups $\con(\alpha)$ and $\con^-(\alpha)$ needs to be closed in~$G$; if $\alpha$ is a (bicontinuous) automorphism of $G$, then $\con^-(\alpha)=\con(\alpha^{-1})$. The \emph{parabolic subgroup} of $\alpha$ is the set $\parb(\alpha)$ of all $x\in G$ such that $\{\alpha^n(x)\colon n\in{\mathbb N}_0\}$ is relatively compact in~$G$; the \emph{anti-parabolic subgroup} of~$\alpha$ is the set $\parb^-(\alpha)$ of all $x\in G$ admitting an $\alpha$-regressive trajectory $(x_{-n})_{n\in{\mathbb N}_0}$ such that $\{x_{-n}\colon n\in{\mathbb N}_0\}$ is relatively compact in~$G$. Then $\parb(\alpha)$ and $\parb^-(\alpha)$ are closed subgroups of~$G$, whence also the \emph{Levi subgroup} $\lev(\alpha):=\parb(\alpha)\cap\parb^-(\alpha)$ is closed; moreover, \[ \alpha(\con(\alpha))\subseteq\con(\alpha),\quad \alpha(\con^-(\alpha))=\con^-(\alpha),\quad \alpha(\parb(\alpha))\subseteq\parb(\alpha), \] \[ \alpha(\parb^-(\alpha))=\parb^-(\alpha),\quad \mbox{and}\quad \alpha(\lev(\alpha))=\lev(\alpha); \] see \cite{Wi3} (notably Proposition~19).
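To put these notions in the simplest possible setting, consider the following elementary linear example (included here for orientation only; the computation is immediate from the definitions). Let $G:=({\mathbb Q}_p^2,+)$ and let $\alpha(x,y):=(px,p^{-1}y)$, a bicontinuous automorphism. Since $\alpha^n(x,y)=(p^nx,p^{-n}y)$ with $|p^nx|_p=p^{-n}|x|_p\to 0$ and $|p^{-n}y|_p=p^n|y|_p$, one finds
\[
\con(\alpha)=\parb(\alpha)={\mathbb Q}_p\times\{0\},\quad
\con^-(\alpha)=\parb^-(\alpha)=\{0\}\times{\mathbb Q}_p,\quad
\lev(\alpha)=\{(0,0)\},
\]
and all of these subgroups are closed in this case.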
We are interested in the subset \[ \Omega:=\con(\alpha)\lev(\alpha)\con^-(\alpha) \] of~$G$, the so-called \emph{big cell}. If $\con(\alpha)$ is closed in~$G$, then also $\con^-(\alpha)$ is closed and the product map \[ \pi\colon \con(\alpha)\times\lev(\alpha)\times\con^-(\alpha)\to\Omega \] is a homeomorphism, see \cite[Theorems D and F]{BGT} (cf.\ already \cite{Wan} for the case of automorphisms of $p$-adic Lie groups). If $\alpha$ is a ${\mathbb K}$-analytic endomorphism of a Lie group~$G$ over a totally disconnected local field~${\mathbb K}$ and $\con(\alpha)$ is closed, then $\con(\alpha)$, $\lev(\alpha)$, and $\con^-(\alpha)$ are Lie subgroups of~$G$ and $\pi$ is a ${\mathbb K}$-analytic diffeomorphism, see \cite[Theorem~8.15]{END}. If ${\mathbb K}$ has characteristic~$0$, then $\con(\alpha)$ is always closed (see \cite[Corollary~6.7]{END}). Our goal is twofold: First, we strive for additional information concerning the subgroups just discussed, e.g., group-theoretic properties like nilpotency. Second, we wish to explore what can be said when $\con(\alpha)$ fails to be closed. While our focus is on Lie groups, we start with a general observation: \begin{prop}\label{fi-res} Let $G$ be a totally disconnected, locally compact group and $\alpha\colon G\to G$ be a continuous endomorphism. Then $\Omega:=\con(\alpha)\lev(\alpha)\con^-(\alpha)$ is an open $e$-neighbourhood in~$G$ such that $\alpha(\Omega)\subseteq\Omega$ and the map \[ \parb(\alpha)\times\parb^-(\alpha)\to\Omega,\quad (x,y)\mapsto xy \] is continuous, surjective, and open. \end{prop} Let ${\mathbb K}$ be a totally disconnected local field. For the study of a ${\mathbb K}$-analytic endomorphism $\alpha$ of a ${\mathbb K}$-analytic Lie group~$G$, it is useful to consider $(G,\alpha)$ as a ${\mathbb K}$-analytic dynamical system with fixed point~$e$. 
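The name ``big cell'' is borrowed from the classical Bruhat decomposition, as the following well-known example illustrates (it is included here for orientation only and is a special case of the results just cited). Let $G:=\SL_2({\mathbb Q}_p)$ and let $\alpha\colon G\to G$, $g\mapsto tgt^{-1}$ be conjugation by $t:=\left( \begin{array}{cc} p & 0\\ 0 & p^{-1} \end{array} \right)$. Then $\con(\alpha)$ and $\con^-(\alpha)$ are the groups of upper and lower triangular unipotent matrices, respectively, $\lev(\alpha)$ is the group of diagonal matrices in~$G$, and the unique factorization
\[
\left( \begin{array}{cc} a & b\\ c & d \end{array} \right)
=\left( \begin{array}{cc} 1 & b/d\\ 0 & 1 \end{array} \right)
\left( \begin{array}{cc} d^{-1} & 0\\ 0 & d \end{array} \right)
\left( \begin{array}{cc} 1 & 0\\ c/d & 1 \end{array} \right)
\quad\mbox{for $d\neq 0$}
\]
shows that $\Omega$ is the set of all matrices in~$G$ whose lower right entry is non-zero: a proper, open and dense subset of~$G$, on which the product map $\pi$ is a homeomorphism, in accordance with the results cited above.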
We shall use \emph{local invariant manifolds} for analytic dynamical systems as introduced in \cite{INV} and \cite{FIN}, stimulated by classical constructions in the theory of smooth dynamical systems over~${\mathbb R}$ (cf.\ \cite{Irw} and \cite{Wel}). See \cite{AUT} and \cite{END} for previous applications of such tools to invariant subgroups for analytic automorphisms and endomorphisms. We recall the concepts of local stable manifolds, local unstable manifolds and related notions in Section~\ref{secprel}. \begin{thm}\label{thmA} Let $\alpha\colon G\to G$ be a ${\mathbb K}$-analytic endomorphism of a Lie group~$G$ over a totally disconnected local field~${\mathbb K}$. Then the following holds: \begin{itemize} \item[\rm(a)] There exists a unique ${\mathbb K}$-analytic manifold structure on $\con(\alpha)$ making it a ${\mathbb K}$-analytic Lie group $\con_*(\alpha)$, such that $\con_*(\alpha)$ has an open subset which is a local stable manifold for~$\alpha$ around~$e$. \item[\rm(b)] There exists a unique ${\mathbb K}$-analytic manifold structure on $\con^-(\alpha)$ making it a ${\mathbb K}$-analytic Lie group $\con^-_*(\alpha)$, such that $\con^-_*(\alpha)$ has an open subset which is a local unstable manifold for~$\alpha$ around~$e$. \item[\rm(c)] There exists a unique ${\mathbb K}$-analytic manifold structure on $\lev(\alpha)$ making it a ${\mathbb K}$-analytic Lie group $\lev_*(\alpha)$, such that $\lev_*(\alpha)$ has an open subset which is a centre manifold for~$\alpha$ around~$e$. \item[\rm(d)] There exists a unique ${\mathbb K}$-analytic manifold structure on $\parb(\alpha)$ making it a ${\mathbb K}$-analytic Lie group $\parb_*(\alpha)$, such that $\parb_*(\alpha)$ has an open subset which is a centre-stable manifold for~$\alpha$ around~$e$. 
\item[\rm(e)] There exists a unique ${\mathbb K}$-analytic manifold structure on $\parb^-(\alpha)$ making it a ${\mathbb K}$-analytic Lie group $\parb^-_*(\alpha)$, such that $\parb_*^-(\alpha)$ has an open subset which is a local centre-unstable manifold for~$\alpha$ around~$e$. \end{itemize} \end{thm} In the following three theorems, we retain the situation of Theorem~\ref{thmA}. \begin{thm}\label{thmB} The manifolds just constructed have the following properties. \begin{itemize} \item[\rm(a)] Each of $\con_*(\alpha)$, $\con^-_*(\alpha)$, $\lev_*(\alpha)$, $\parb_*(\alpha)$, and $\parb^-_*(\alpha)$ is an immersed Lie subgroup of~$G$ and $\alpha$ restricts to a ${\mathbb K}$-analytic endomorphism $\alpha_s$, $\alpha_u$, $\alpha_c$, $\alpha_{cs}$, and $\alpha_{cu}$ thereon, respectively. \item[\rm(b)] We have \begin{equation}\label{again-contr} \con(\alpha_s)=\con_*(\alpha)\quad\mbox{and}\quad \con^-(\alpha_u)=\con^-_*(\alpha). \end{equation} Moreover, $\lev(\alpha_c)$, $\parb(\alpha_{cs})$, and $\parb^-(\alpha_{cu})$ are open subgroups of $\lev_*(\alpha)$, $\parb_*(\alpha)$, and $\parb^-_*(\alpha)$, respectively. \end{itemize} \end{thm} It is well known that $\parb(\alpha)$ normalizes $\con(\alpha)$ and $\parb^-(\alpha)$ normalizes $\con^-(\alpha)$ (see \cite[Lemma~13.1\,(a)]{BGT}; cf.\ \cite[Proposition~3.4]{BaW} for automorphisms). \begin{thm}\label{thmC} \begin{itemize} \item[\rm(a)] Let $\Omega\subseteq G$ be the big cell for~$\alpha$. The product map \[ \pi\colon \con_*(\alpha)\times\lev_*(\alpha)\times \con^-_*(\alpha)\to\Omega, \quad (x,y,z)\mapsto xyz \] is an \'{e}tale ${\mathbb K}$-analytic map and surjective. \item[\rm(b)] The action $\lev_*(\alpha)\times \con_*(\alpha)\to\con_*(\alpha)$, $(x,y)\mapsto xyx^{-1}$ is ${\mathbb K}$-analytic, whence the product manifold structure turns $\con_*(\alpha)\rtimes \lev_*(\alpha)$ into a ${\mathbb K}$-analytic Lie group. 
The product map \[ \con_*(\alpha)\rtimes \lev_*(\alpha)\to\parb_*(\alpha),\quad (x,y)\mapsto xy \] is a surjective group homomorphism and an \'{e}tale ${\mathbb K}$-analytic map. \item[\rm(c)] The action $\lev_*(\alpha)\times \con_*^-(\alpha)\to\con_*^-(\alpha)$, $(x,y)\mapsto xyx^{-1}$ is ${\mathbb K}$-analytic, whence the product manifold structure turns $\con_*^-(\alpha)\rtimes \lev_*(\alpha)$ into a ${\mathbb K}$-analytic Lie group. The product map \[ \con_*^-(\alpha)\rtimes \lev_*(\alpha)\to\parb_*^-(\alpha),\quad (x,y)\mapsto xy \] is a surjective group homomorphism and an \'{e}tale ${\mathbb K}$-analytic map. \end{itemize} \end{thm} If $H$ is a group and $\beta\colon H\to H$ an endomorphism, we write \[ \ik(\beta):=\bigcup_{n\in{\mathbb N}_0}\ker(\beta^n) \] for the \emph{iterated kernel}. If $G$ is a totally disconnected, locally compact group and $\beta$ a continuous endomorphism, then $\ik(\beta)\subseteq \con(\beta)$. Let us call a continuous endomorphism $\alpha$ of a topological group~$G$ \emph{contractive} if $\alpha^n(g)\to e$ as $n\to\infty$, for each $g\in G$. If $\alpha$ is a ${\mathbb K}$-analytic endomorphism of a ${\mathbb K}$-analytic Lie group~$G$ and $\alpha$ is contractive, then $G=\con(\alpha)=\con_*(\alpha)$ (cf.\ \cite[Proposition~7.10\,(a)]{END}). \begin{thm}\label{thmD} \begin{itemize} \item[\rm(a)] If $\alpha$ is \'{e}tale, then $\ik(\alpha)$ is discrete in $\con_*(\alpha)$. If ${\mathbb K}$ has characteristic~$0$, then both properties are equivalent. \item[\rm(b)] If $\alpha$ is \'{e}tale, then $\con_*(\alpha)/\ik(\alpha)$ is an open $\beta$-invariant subgroup of some ${\mathbb K}$-analytic Lie group $H$, for some contractive ${\mathbb K}$-analytic automorphism $\beta\colon H\to H$ which extends the ${\mathbb K}$-analytic endomorphism $\overline{\alpha}_s$ induced by $\alpha_s$ on $\con_*(\alpha)/\ik(\alpha_s)$. In particular, $\con(\alpha)/\ik(\alpha)$ is nilpotent.
Moreover, $\con_*(\alpha)$ has an open, $\alpha$-invariant subgroup~$U$ which is nilpotent. \item[\rm(c)] If $\car({\mathbb K})=0$, then $\ik(\alpha_s)$ is a Lie subgroup of $\con_*(\alpha)$. Thus $Q:=\con_*(\alpha)/\ik(\alpha)$ has a unique ${\mathbb K}$-analytic manifold structure making the canonical map $\con_*(\alpha)\to Q$ a submersion, and the latter turns~$Q$ into a ${\mathbb K}$-analytic Lie group. There exists a ${\mathbb K}$-analytic Lie group $H$ containing $Q$ as an open submanifold and subgroup, and a contractive ${\mathbb K}$-analytic automorphism $\beta\colon H\to H$ which extends the ${\mathbb K}$-analytic endomorphism $\overline{\alpha}_s$ induced by $\alpha_s$ on $Q$. Notably, $\con(\alpha)/\ik(\alpha)$ is nilpotent. \item[\rm(d)] The ${\mathbb K}$-analytic surjective endomorphism $\alpha_u$ of $\con^-_*(\alpha)$ is \'{e}tale. Moreover, $\con^-_*(\alpha)$ has an open subgroup~$S$ with $S\subseteq\alpha_u(S)$ such that $\alpha_u|_S\colon S\to\alpha_u(S)$ is a ${\mathbb K}$-analytic diffeomorphism and $\alpha_u(S)$ can be regarded as an open submanifold and subgroup of a ${\mathbb K}$-analytic Lie group~$H$ which admits a ${\mathbb K}$-analytic contractive automorphism $\beta\colon H\to H$ such that $\alpha_u|_S=\beta^{-1}|_S$. In particular, $S$ and $\alpha_u(S)$ are nilpotent. \end{itemize} \end{thm} \begin{rem} (a) If $\alpha\colon G\to G$ is a ${\mathbb K}$-analytic automorphism, it was known previously that $\con(\alpha)$ can be given an immersed Lie subgroup structure modelled on $\con(L(\alpha))$ such that $\alpha|_{\con(\alpha)}$ becomes a ${\mathbb K}$-analytic contractive automorphism; likewise for $\con^-(\alpha)=\con(\alpha^{-1})$ (see \cite[Proposition~6.3\,(b)]{FIN}). 
Hence $\con(\alpha)$ and $\con^-(\alpha)$ are nilpotent in this case (see \cite{CON}).\\[2mm] (b) If $\alpha\colon G\to G$ is a ${\mathbb K}$-analytic endomorphism and $\con(\alpha)$ is closed in~$G$, then $\alpha|_{\con^-(\alpha)}$ is a ${\mathbb K}$-analytic automorphism of the Lie subgroup $\con^-(\alpha)$ of~$G$ such that $(\alpha|_{\con^-(\alpha)})^{-1}$ is contractive (cf.\ \cite[Theorem~8.15]{END}), whence $\con^-(\alpha)$ is nilpotent. Moreover, $\alpha|_{\lev(\alpha)}$ is a ${\mathbb K}$-analytic automorphism of the Lie subgroup $\lev(\alpha)$ of~$G$ and $\alpha|_{\lev(\alpha)}$ is distal (see \cite[Theorem~8.15 and Proposition~7.10\,(c)]{END}). \end{rem} We also observe: \begin{prop}\label{thmE} In the setting of Theorem~\emph{\ref{thmA}}, the following holds: \begin{itemize} \item[\rm(a)] $\ik(\alpha)\cap \lev_*(\alpha)$ is discrete. \item[\rm(b)] $\ker(\alpha)\cap \parb^-_*(\alpha)$ is discrete. \item[\rm(c)] $\ker(\alpha)\cap \con^-_*(\alpha)$ is discrete. \end{itemize} \end{prop} We mention that refined topologies on contraction subgroups of well-behaved automorphisms (e.g., expansive automorphisms) were also studied in \cite{Si2} and~\cite{GaR}.\\[2.3mm] The following examples illustrate the results. \begin{example}\label{any-gp} Let $H$ be any group and ${\bf 1}\colon H\to H$, $x\mapsto e$ be the constant endomorphism. If we endow $H$ with the discrete topology, then $H$ becomes a ${\mathbb K}$-analytic Lie group modeled on ${\mathbb K}^0$, and ${\bf 1}$ a ${\mathbb K}$-analytic endomorphism such that $\con({\bf 1})=H$. As a consequence, contraction groups of endomorphisms do not have any special group-theoretic properties: any group can occur. \end{example} \begin{example}\label{Zp} For a prime number~$p$, consider the group $({\mathbb Z}_p,+)$ of $p$-adic integers, which is an open subgroup of the local field ${\mathbb Q}_p$ of $p$-adic numbers and thus a $1$-dimensional analytic Lie group over~${\mathbb Q}_p$. 
The map \[ \beta\colon {\mathbb Z}_p\to{\mathbb Z}_p, \quad z\mapsto pz \] is a contractive ${\mathbb Q}_p$-analytic endomorphism. Since $\beta$ is injective, $\ik(\beta)=\{0\}$ is trivial and hence discrete. We can extend $\beta$ to the analytic contractive automorphism $\gamma\colon {\mathbb Q}_p\to{\mathbb Q}_p$, $z\mapsto pz$. \end{example} \begin{example} If $H$ in Example~\ref{any-gp} is non-trivial, then \[ \alpha:={\bf 1}\times \beta \colon H\times {\mathbb Z}_p\to H\times {\mathbb Z}_p \] is a contractive ${\mathbb Q}_p$-analytic endomorphism which cannot extend to a contractive automorphism of a Lie group $G\supseteq H\times{\mathbb Z}_p$ as $\alpha$ is not injective. However, $\{e\}\times {\mathbb Z}_p\cong {\mathbb Z}_p$ is an open $\alpha$-invariant subgroup and embeds in $({\mathbb Q}_p,\gamma)$. Likewise, $(H\times{\mathbb Z}_p)/\ik(\alpha)=(H\times{\mathbb Z}_p)/(H\times\{0\})\cong {\mathbb Z}_p$ embeds in $({\mathbb Q}_p,\gamma)$. \end{example} \begin{example}\label{two-sided} Let ${\mathbb F}$ be a finite field, ${\mathbb K}:={\mathbb F}(\!(X)\!)$ be the local field of formal Laurent series over~${\mathbb F}$ and ${\mathbb F}[\![X]\!]$ be its compact open subring of formal power series. Then $G:={\mathbb F}^{\mathbb Z}$, with the compact product topology, can be made a $2$-dimensional ${\mathbb K}$-analytic Lie group in such a way that the bijection \[ \phi \colon G\to X {\mathbb F}[\![X]\!]\times {\mathbb F}[\![X]\!],\quad (a_k)_{k\in{\mathbb Z}}\mapsto \left( \sum_{k=1}^\infty a_{-k} X^k, \sum_{k=0}^\infty a_k X^k \right) \] becomes a ${\mathbb K}$-analytic diffeomorphism. The right-shift $\alpha\colon G\to G$, $(a_k)_{k\in{\mathbb Z}}\mapsto (a_{k-1})_{k\in{\mathbb Z}}$ is an endomorphism and ${\mathbb K}$-analytic, as $\phi\circ \alpha\circ\phi^{-1}$ is the restriction to an open set of the ${\mathbb K}$-linear map \[ {\mathbb K}\times {\mathbb K}\to{\mathbb K}\times {\mathbb K},\quad (z,w)\mapsto (X^{-1}z,Xw).
\] Here $\con(\alpha)$ is the dense proper subgroup of all $(a_k)_{k\in{\mathbb Z}}\in G$ with support bounded to the left, i.e., there is $k_0\in{\mathbb Z}$ such that $a_k=0$ for all $k\leq k_0$. Moreover, \[ \con_*(\alpha)\cong{\mathbb F}(\!(X)\!), \] with $\alpha$ corresponding to multiplication by~$X$. Likewise, $\con^-(\alpha)$ is the set of all sequences with support bounded to the right and $\con^-_*(\alpha)\cong{\mathbb F}(\!(X)\!)$ with the automorphism $z\mapsto X^{-1}z$. Note that $\con(\alpha)\cap\con^-(\alpha)$ is the subgroup of all finitely-supported sequences, which is dense in~$G$ and dense in both $\con_*(\alpha)$ and $\con_*^-(\alpha)$. As $G$ is compact, we have $\lev(\alpha)=G$. Moreover, $\lev_*(\alpha)=G$, endowed with the discrete topology. \end{example} \begin{example} For ${\mathbb K}$ as in Example~\ref{two-sided}, $G:={\mathbb F}[\![X]\!]$ is an open subgroup of $({\mathbb K},+)$. The left-shift \[ \alpha\colon G\to G, \quad \sum_{k=0}^\infty a_kX^k\mapsto \sum_{k=0}^\infty a_{k+1}X^k \] is an endomorphism of~$G$ and ${\mathbb K}$-analytic as it coincides with the ${\mathbb K}$-linear map ${\mathbb K}\to{\mathbb K}$, $z\mapsto X^{-1}z$ on the open subgroup $X{\mathbb F}[\![X]\!]$. We have $\con^-(\alpha)={\mathbb F}[\![X]\!]$, since $(X^nz)_{n\in{\mathbb N}_0}$ is an $\alpha$-regressive trajectory for $z\in G$ with $X^nz\to 0$ as $n\to\infty$. Moreover, $\con(\alpha)\subseteq G$ is the dense subgroup of finitely supported sequences, $\ker(\alpha)={\mathbb F} X^0$, $\ik(\alpha)=\con(\alpha)$, and $\lev(\alpha)=G$. Also note that $\con^-(\alpha)=\con^-_*(\alpha)$, while $\con_*(\alpha)$ and $\lev_*(\alpha)$ are discrete. \end{example} \begin{example} If ${\mathbb K}$ is a totally disconnected local field and $\alpha$ a ${\mathbb K}$-linear endomorphism of a finite-dimensional ${\mathbb K}$-vector space~$E$, then $E$ admits the Fitting decomposition \[ E=\ik(\alpha)\oplus F \] with $F:=\bigcap_{k\in{\mathbb N}_0}\alpha^k(E)$.
\end{example} For a non-discrete ${\mathbb K}$-analytic Lie group~$G$ and a ${\mathbb K}$-analytic endomorphism $\alpha\colon G\to G$ it can happen that both $\bigcap_{k\in{\mathbb N}_0}\alpha^k(G)=\{e\}$ and $\ik(\alpha)=\{e\}$, even in the case ${\mathbb K}={\mathbb Q}_p$ (see Example~\ref{Zp}).\footnote{We mention that the corresponding Lie algebra endomorphism $L(\alpha)\colon L(G)\to L(G)$, $z\mapsto pz$ is an isomorphism; thus $\ik(L(\alpha))=\{0\}$ and $\bigcap_{k\in{\mathbb N}_0}L(\alpha)^k(L(G))=L(G)$.} In the following example, $\ik(\alpha)=\{e\}$ holds, $\bigcap_{k\in{\mathbb N}_0}\alpha^k(G)=\{e\}$ and $L(\alpha)=0$, whence $\ik(L(\alpha))=L(G)$ and $\bigcap_{k\in{\mathbb N}_0}L(\alpha)^k(L(G))=\{0\}$. \begin{example} If ${\mathbb F}$ is a finite field of characteristic~$p$ and ${\mathbb K}:={\mathbb F}(\!(X)\!)$, then the Frobenius homomorphism ${\mathbb K}\to{\mathbb K}$, $z\mapsto z^p$ is a ${\mathbb K}$-analytic endomorphism of $({\mathbb K},+)$ and restricts to an injective contractive endomorphism~$\alpha$ of the compact open subgroup $G:=X{\mathbb F}[\![X]\!]$ of~${\mathbb K}$. Since $\frac{d}{dz}\big|_{z=0}(z^p)=pz^{p-1}|_{z=0}=0$, we have $L(\alpha)=0$. \end{example} As shown in \cite{CON}, a ${\mathbb K}$-analytic Lie group~$G$ is nilpotent if it admits a contractive ${\mathbb K}$-analytic automorphism~$\alpha$. This becomes false for endomorphisms; in this case, not even nilpotent open subgroups need to exist. \begin{example} Let ${\mathbb F}_2$ be the field with $2$ elements and ${\mathbb K}:={\mathbb F}_2(\!(X)\!)$. Then $S:=\SL_2({\mathbb K})$ is a totally disconnected, locally compact group which is compactly generated and simple as an abstract group. Hence none of the open subgroups of~$S$ is soluble (see \cite{WiS}), whence none of them is nilpotent. Let $G\subseteq S$ be the congruence subgroup consisting of all $(a_{ij})_{i,j=1}^2\in S$ such that $a_{ij}-\delta_{ij}\in X{\mathbb F}_2[\![X]\!]$ for all $i,j\in\{1,2\}$.
Then $G$ is an open subgroup of~$S$ and the map \[ \alpha\colon G\to G,\;\; \left( \begin{array}{cc} a & b\\ c & d \end{array} \right)\mapsto \left( \begin{array}{cc} a^2 & b^2\\ c^2 & d^2 \end{array} \right), \] which applies the Frobenius to each matrix element, is a contractive ${\mathbb K}$-analytic endomorphism and injective. No open subgroup of~$G$ is nilpotent. \end{example} The pathology also occurs if $\car({\mathbb K})=0$. \begin{example} Let $G:=\PSL_2({\mathbb Q}_p)$ and $\alpha\colon G\to G$ be the trivial endomorphism $g\mapsto e$. Then $G=\con(\alpha)=\con_*(\alpha)$. As $G$ is a totally disconnected group which is compactly generated and simple, no open subgroup of~$G$ is soluble (nor nilpotent, as a special case). \end{example} \section{Preliminaries and notation}\label{secprel} We write ${\mathbb N}:=\{1,2,\ldots\}$ and ${\mathbb N}_0:={\mathbb N}\cup\{0\}$. All topological groups considered in this work are assumed Hausdorff. If $X$ is a set, $Y\subseteq X$ a subset and $f\colon Y\to X$ a map, we call a subset $M\subseteq Y$ \emph{invariant} under $f$ if $f(M)\subseteq M$; we say that $M$ is \emph{$f$-stable} if $f(M)=M$. In the following, $1/0:=\infty$. Recall that a topological field~${\mathbb K}$ is called a \emph{local field} if it is non-discrete and locally compact~\cite{Wei}. If~${\mathbb K}$ is a totally disconnected local field, we fix an absolute value $|\cdot|$ on~${\mathbb K}$ which defines its topology and extend it to an absolute value, also denoted $|\cdot|$, on an algebraic closure~$\overline{{\mathbb K}}$ of~${\mathbb K}$ (see \cite[\S14]{Sch}). If $E$ is a vector space over~${\mathbb K}$, endowed with an ultrametric norm $\|\cdot\|$, we write \[ B_r^E(x):=\{y\in E\colon \|y-x\|<r\} \] for the ball of radius $r>0$ around $x\in E$. 
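As a concrete instance of this notation (with the standard normalization of the $p$-adic absolute value, recorded here only for illustration): for ${\mathbb K}={\mathbb Q}_p$ with $|p|=p^{-1}$, we have
\[
B^{{\mathbb Q}_p}_1(0)=\{x\in{\mathbb Q}_p\colon |x|<1\}=p{\mathbb Z}_p
\quad\mbox{and}\quad
\{x\in{\mathbb Q}_p\colon |x|\leq 1\}={\mathbb Z}_p;
\]
since $|\cdot|$ takes its values in the discrete set $p^{\mathbb Z}\cup\{0\}$, every such ball is both open and closed in~${\mathbb Q}_p$.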
If $\alpha\colon E\to F$ is a continuous linear map between finite-dimensional ${\mathbb K}$-vector spaces~$E$ and $F$, endowed with ultrametric norms $\|\cdot\|_E$ and $\|\cdot\|_F$, respectively, we let \[ \|\alpha\|_{\op}:=\sup\left\{\frac{\|\alpha(x)\|_F}{\|x\|_E}\colon x\in E\setminus \{0\}\right\}\in [0,\infty[ \] be its operator norm. For the basic theory of ${\mathbb K}$-analytic mappings between open subsets of finite-dimensional ${\mathbb K}$-vector spaces and the corresponding ${\mathbb K}$-analytic manifolds and Lie groups modeled on ${\mathbb K}^m$, we refer to \cite{Ser} (cf.\ also \cite{FAS} and \cite{Bou}). We shall write $T_pM$ for the tangent space of a ${\mathbb K}$-analytic manifold~$M$ at $p\in M$. Given a ${\mathbb K}$-analytic map $f\colon M\to N$ between ${\mathbb K}$-analytic manifolds, we write $T_pf\colon T_pM\to T_{f(p)}N$ for the tangent map of $f$ at $p\in M$. Submanifolds are as in \cite[Part~II, Chapter~III, \S11]{Ser}. If $N$ is a submanifold of a ${\mathbb K}$-analytic manifold~$M$ and $p\in N$, then the inclusion map $\iota\colon N \to M$ is an immersion and we identify $T_pN$ with the vector subspace $T_p\iota(T_p(N))$ of~$T_pM$. If $F\subseteq T_pM$ is a vector subspace and $T_pN=F$, we say that \emph{$N$ is tangent to~$F$} at $p$. If $M$ is a set, $p\in M$ and $N_1$, $N_2$ are ${\mathbb K}$-analytic manifolds such that $N_1,N_2\subseteq M$ as sets and $p\in N_1\cap N_2$, we write $N_1\sim_p N_2$ if there exists a subset $U\subseteq N_1\cap N_2$ which is an open $p$-neighbourhood in both $N_1$ and $N_2$, and such that $N_1$ and $N_2$ induce the same ${\mathbb K}$-analytic manifold structure on~$U$. Then $\sim_p$ is an equivalence relation; the equivalence class of $N_1$ with respect to $\sim_p$ is called the \emph{germ of $N_1$ at~$p$}.
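As a simple illustration of the operator norm introduced above (using the scalar multiplication map from Example~\ref{Zp}, here viewed as a ${\mathbb Q}_p$-linear map on all of ${\mathbb Q}_p$ and renamed $\lambda$ to avoid a clash of notation): for $\lambda\colon{\mathbb Q}_p\to{\mathbb Q}_p$, $z\mapsto pz$, we have
\[
\|\lambda\|_{\op}=\sup_{z\in{\mathbb Q}_p\setminus\{0\}}\frac{|pz|}{|z|}=|p|=p^{-1}<1,
\]
in accordance with the contractivity of this map.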
If $G$ is a ${\mathbb K}$-analytic Lie group, we let $L(G):=T_eG$, endowed with its natural Lie algebra structure; if $f\colon G\to H$ is a ${\mathbb K}$-analytic homomorphism between Lie groups, we abbreviate $L(f):=T_e(f)\colon L(G)\to L(H)$. If $G$ is a ${\mathbb K}$-analytic Lie group and a subgroup $H\subseteq G$ is endowed with a ${\mathbb K}$-analytic Lie group structure turning the inclusion map $H\to G$ into an immersion, we call $H$ an \emph{immersed Lie subgroup} of~$G$. This holds if and only if $H$ has an open subgroup which is a submanifold of~$G$. If we call a map $f\colon M\to N$ between ${\mathbb K}$-analytic manifolds a ${\mathbb K}$-analytic diffeomorphism, then $f^{-1}$ is assumed ${\mathbb K}$-analytic as well. Likewise, ${\mathbb K}$-analytic isomorphisms between ${\mathbb K}$-analytic Lie groups (notably, ${\mathbb K}$-analytic automorphisms) presume a ${\mathbb K}$-analytic inverse function. If $U$ is an open subset of a finite-dimensional ${\mathbb K}$-vector space~$E$, we identify the tangent bundle $TU$ with $U\times E$, as usual. If $f\colon M\to U$ is a ${\mathbb K}$-analytic map, we write $df\colon TM\to E$ for the second component of the tangent map $Tf\colon TM\to TU=U\times E$. If $X$ and $Y$ are topological spaces and $x\in X$, we say that a map $f\colon X\to Y$ is \emph{open at $x$} if $f(U)$ is an $f(x)$-neighbourhood in~$Y$ for each $x$-neighbourhood $U$ in~$X$. If $G$ is a group with group multiplication $(x,y)\mapsto xy$, we write $G^{\op}$ for $G$ endowed with the opposite group structure, with multiplication $(x,y)\mapsto yx$.\\[1mm] We shall use several basic facts (with proofs recalled in Appendix~\ref{appA}). The terminology in~(d) is as in \cite[Part~II, Chapter~III, \S9]{Ser}. \begin{numba}\label{fact-2} Let $\sigma\colon G\times X\to X$, $(g,x)\mapsto g.x$ be a continuous left action of a topological group~$G$ on a topological space~$X$, and $x\in X$.
Then we have: \begin{itemize} \item[(a)] If the orbit $G.x$ has an interior point, then $G.x$ is open in~$X$. \item[(b)] If $\sigma^x\colon G\to G.x$, $g\mapsto g.x$ is open at~$e$, then $\sigma^x$ is an open map. \item[(c)] If $G$ is compact and~$X$ is Hausdorff, then $\sigma^x\colon G\to G.x$ is an open map. \item[(d)] If $G$ is a ${\mathbb K}$-analytic Lie group, $X$ a ${\mathbb K}$-analytic manifold, $\sigma$ is ${\mathbb K}$-analytic, $G.x\subseteq X$ is open and $\sigma^x$ is \'{e}tale at $e$, then $\sigma^x$ is \'{e}tale. \end{itemize} \end{numba} We shall also use the following fact concerning automorphic actions. \begin{numba}\label{act-ana} Let $G$ and $H$ be ${\mathbb K}$-analytic Lie groups and $\sigma\colon G\times H\to H$ be a left $G$-action on~$H$ with the following properties: \begin{itemize} \item[(a)] $\sigma_g:=\sigma(g,\cdot)\colon H\to H$ is an automorphism of the group~$H$, for each $g\in G$. \item[(b)] For each $g\in G$, there exists an open $e$-neighbourhood $Q\subseteq H$ such that $\sigma_g|_Q$ is ${\mathbb K}$-analytic. \item[(c)] For each $x\in H$, there exists an open $e$-neighbourhood $P\subseteq G$ such that $\sigma^x:=\sigma(\cdot,x)\colon G\to H$ is ${\mathbb K}$-analytic on~$P$. \item[(d)] There exists an open $e$-neighbourhood $U\subseteq G$ and an open $e$-neighbourhood $V\subseteq H$ such that $\sigma|_{U\times V}$ is ${\mathbb K}$-analytic. \end{itemize} Then $\sigma$ is ${\mathbb K}$-analytic. \end{numba} Again, a proof can be found in Appendix~\ref{appA}. Likewise for the following fact. \begin{numba}\label{comp-to-id} Let $G$ be a topological group, $(g_n)_{n\in{\mathbb N}}$ be a sequence in~$G$ such that $\{g_n\colon n\in {\mathbb N}\}$ is relatively compact and $(x_n)_{n\in{\mathbb N}}$ be a sequence in~$G$ such that $x_n\to e$ as $n\to\infty$. Then $g_nx_ng_n^{-1}\to e$. 
\end{numba} \begin{numba}\label{thesubsp} Henceforth, let ${\mathbb K}$ be a totally disconnected local field with algebraic closure~$\overline{{\mathbb K}}$ and absolute value~$|\cdot|$. If~$E$ is a finite-dimensional ${\mathbb K}$-vector space and $\alpha\colon E\to E$ a ${\mathbb K}$-linear endomorphism, call $\rho\in [0,\infty[$ a \emph{characteristic value} of~$\alpha$ if $\rho=|\lambda|$ for some eigenvalue $\lambda\in\overline{{\mathbb K}}$ of the endomorphism $\alpha_{\overline{{\mathbb K}}}:=\alpha\otimes_{\mathbb K} \id_{\overline{{\mathbb K}}}$ of $E\otimes_{\mathbb K}\overline{{\mathbb K}}$. If $R(\alpha)\subseteq[0,\infty[$ is the set of all characteristic values of~$\alpha$, then \[ E=\bigoplus_{\rho \in R(\alpha)}E_\rho \] for unique $\alpha$-invariant vector subspaces $E_\rho\subseteq E$ such that $E_\rho\otimes_{\mathbb K}\overline{{\mathbb K}}$ equals the sum of all generalized eigenspaces of $\alpha_{\overline{{\mathbb K}}}$ for eigenvalues $\lambda\in\overline{{\mathbb K}}$ with $|\lambda|=\rho$ (compare \cite[Chapter~II, (1.0)]{Mar}). If $a\in\,]0,\infty[$ is such that $a\not\in R(\alpha)$, we say that $\alpha$ is \emph{$a$-hyperbolic}. For $\rho\in [0,\infty[\setminus R(\alpha)$, let $E_\rho:=\{0\}$. For $\rho\in [0,\infty[$, we call $E_\rho$ the \emph{characteristic subspace} of~$E$ for $\rho$. Then $E_0=\ik(\alpha)$. Moreover, $\alpha(E_\rho)=E_\rho$ for each $\rho>0$ and $\alpha|_{E_\rho}\colon E_\rho\to E_\rho$ is an isomorphism. For each $a\in \,]0,\infty[$, we consider the following $\alpha$-invariant vector subspaces of~$E$: \[ E_{<a}:=\bigoplus_{\rho<a}E_\rho,\quad E_{\leq a}:=\bigoplus_{\rho\leq a}E_\rho,\quad E_{>a}:=\bigoplus_{\rho>a}E_\rho\quad\mbox{and}\quad E_{\geq a}:=\bigoplus_{\rho\geq a}E_\rho.
\] \end{numba} \begin{numba}\label{def-adap} By \cite[Proposition~2.4]{FIN}, $E$ admits an ultrametric norm $\|\cdot\|$ which is \emph{adapted to $\alpha$} in the following sense: \begin{itemize} \item[(a)] $\|x\|=\max\{\|x_\rho\|\colon \rho\in R(\alpha)\}$ if we write $x\in E$ as $x=\sum_{\rho\in R(\alpha)}x_\rho$ with $x_\rho\in E_\rho$; \item[(b)] $\|\alpha|_{E_0}\|_{\op}<1$; \item[(c)] For all $\rho\in R(\alpha)$ such that $\rho>0$, we have $\|\alpha(x)\|=\rho\|x\|$ for all $x\in E_\rho$. \end{itemize} If $\varepsilon\in\,]0,1]$ is given, then an adapted norm can be found such that, moreover, $\|\alpha|_{E_0}\|_{\op}<\varepsilon$. \end{numba} \begin{rem}\label{forifthm} By~(a) in~\ref{def-adap}, we have \[ B^E_r(0)=\prod_{\rho\in R(\alpha)} B^{E_\rho}_r(0) \] for each $r>0$, identifying $E$ with $\prod_{\rho\in R(\alpha)}E_\rho$. For each $\rho\in R(\alpha)\setminus \{0\}$, (c) implies that \[ \alpha\big(B^{E_\rho}_r(0)\big)=B_{\rho r}^{E_\rho}(0)\quad\mbox{for all $r>0$.} \] \end{rem} \begin{numba}\label{thesetinvma} Let $M$ be a ${\mathbb K}$-analytic manifold and $p\in M$. Let $M_0\subseteq M$ be an open $p$-neighbourhood and $f\colon M_0\to M$ be a ${\mathbb K}$-analytic map such that $f(p)=p$. Let $(T_pM)_\rho$ for $\rho\in [0,\infty[$ be the characteristic subspaces of $T_pM$ with respect to the endomorphism $T_pf$ of $T_pM$. \end{numba} \begin{numba} Let $N\subseteq M_0$ be a submanifold such that $p\in N$. \begin{itemize} \item[(a)] If $a\in\,]0,1]$ and $T_pf$ is $a$-hyperbolic, we say that $N$ is a \emph{local $a$-stable manifold} for $f$ around~$p$ if $T_pN=(T_pM)_{<a}$ and $f(N)\subseteq N$. If, moreover, $a>\rho$ for each $\rho\in R(T_pf)$ such that $\rho<1$, we call $N$ a \emph{local stable manifold} for $f$ around~$p$. \item[(b)] We say that $N$ is a \emph{centre manifold} for $f$ around~$p$ if $T_pN=(T_pM)_1$ and $f(N)=N$.
\item[(c)] If $b\geq 1$ and $T_pf$ is $b$-hyperbolic, we say that $N$ is a \emph{local $b$-unstable manifold} for~$f$ around~$p$ if $T_pN=(T_pM)_{>b}$ and there exists an open $p$-neighbourhood $P\subseteq N$ such that $f(P)\subseteq N$. If, moreover, $b<\rho$ for each $\rho\in R(T_pf)$ such that $\rho>1$, we call $N$ a \emph{local unstable manifold} for $f$ around~$p$. \item[(d)] We call $N$ a \emph{centre-stable manifold} for $f$ around~$p$ if $f(N)\subseteq N$ and $T_pN=(T_pM)_{\leq 1}$. \item[(e)] We call $N$ a \emph{local centre-unstable manifold} for $f$ around~$p$ if $T_pN=(T_pM)_{\geq 1}$ and there exists an open $p$-neighbourhood $P\subseteq N$ such that $f(P)\subseteq N$. \end{itemize} \end{numba} \begin{numba}\label{uni-germ} We mention that a centre manifold for $f$ around~$p$ always exists in the situation of~\ref{thesetinvma}, by \cite[Theorem~1.10]{INV}, whose germ at~$p$ is uniquely determined (noting that the alternative argument in the proof does not require that $T_pf$ be an automorphism). For each $a\in\,]0,1]$ such that $T_pf$ is $a$-hyperbolic, a local $a$-stable manifold around $p$ exists, whose germ around~$p$ is uniquely determined (by the Local Invariant Manifold Theorem in~\cite{FIN}). For each $b\in [1,\infty[$ such that $T_pf$ is $b$-hyperbolic, a local $b$-unstable manifold around $p$ exists, whose germ around~$p$ is uniquely determined (by the Local Invariant Manifold Theorem just cited). Moreover, a centre-stable manifold for $f$ around~$p$ always exists, whose germ at~$p$ is uniquely determined (again by the Local Invariant Manifold Theorem). \end{numba} \begin{numba}\label{also-nonhypo} The germ at $p$ of a local stable manifold for $f$ around~$p$ is uniquely determined. \,In fact, \[ (T_pM)_{<a}=(T_pM)_{<b} \] for all $a,b\in\,]0,1]\setminus R(T_p(f))$ with $a>\rho$ and $b>\rho$ for all $\rho\in R(T_pf)\cap \,[0,1[$. 
As a consequence, a submanifold $N\subseteq M$ is a local $a$-stable manifold if and only if it is a local $b$-stable manifold for~$f$ around~$p$. Thus \ref{uni-germ} applies. \end{numba} Likewise, $(T_pM)_{>a}=(T_pM)_{>b}$ for all $a,b\in [1,\infty[\,\setminus R(T_pf)$ such that $a<\rho$ and $b<\rho$ for all $\rho\in R(T_pf)\cap \,]1,\infty[$. Hence, a submanifold $N\subseteq M$ is a local $a$-unstable manifold for $f$ around~$p$ if and only if it is a local $b$-unstable manifold. As a consequence: \begin{numba}\label{nonhypo2} The germ at~$p$ of a local unstable manifold for $f$ around~$p$ is uniquely determined. \end{numba} Let $M$ be a ${\mathbb K}$-analytic manifold, $f\colon M\to M$ be ${\mathbb K}$-analytic and $p\in M$ such that $f(p)=p$. Let $a\in\,]0,1]\setminus R(T_pf)$, $\,b\in [1,\infty[\setminus R(T_pf)$ and $\|\cdot\|$ be an ultrametric norm on $E:=T_pM$ adapted to $T_pf$ such that $\|T_pf|_{E_0}\|_{\op}<a$. Endow vector subspaces $F\subseteq E$ with the norm induced by $\|\cdot\|$ and abbreviate $B^F_t:=B^F_t(0)$ for $t>0$. We shall use the following fact, which is \cite[Proposition~7.3]{END}: \begin{numba}\label{prop-7-3} There exists $R>0$ with the following properties: \begin{itemize} \item[(a)] There exists a local $a$-stable manifold $W^s_a$ for~$f$ around~$p$ and a ${\mathbb K}$-analytic diffeomorphism \[ \phi_s\colon W_a^s\to B_R^{E_{<a}} \] such that $\phi_s(p)=0$, $W^s_a(t):=\phi_s^{-1}(B_t^{E_{<a}})$ is a local $a$-stable manifold for~$f$ around $p$ for all $t\in \,]0,R]$, and $d\phi_s|_{T_p(W^s_a)}=\id_{E_{<a}}$. \item[(b)] There exists a centre manifold $W^c$ for~$f$ around~$p$ and a ${\mathbb K}$-analytic diffeomorphism \[ \phi_c\colon W^c\to B_R^{E_1} \] such that $\phi_c(p)=0$, $W^c(t):=\phi_c^{-1}(B_t^{E_1})$ is a centre manifold for~$f$ around $p$ for all $t\in \,]0,R]$, and $d\phi_c|_{T_p(W^c)}=\id_{E_1}$. 
\item[(c)] There exists a local $b$-unstable manifold $W^u_b$ for~$f$ around~$p$ and a ${\mathbb K}$-analytic diffeomorphism \[ \phi_u\colon W_b^u\to B_R^{E_{>b}} \] such that $\phi_u(p)=0$, $W^u_b(t):=\phi_u^{-1}(B_t^{E_{>b}})$ is a local $b$-unstable manifold for~$f$ around $p$ for all $t\in \,]0,R]$, and $d\phi_u|_{T_p(W^u_b)}=\id_{E_{>b}}$. \end{itemize} \end{numba} See Appendix~\ref{appA} for a proof of the following auxiliary result. \begin{la}\label{enough-uni} Let $M$ be a ${\mathbb K}$-analytic manifold, $M_0\subseteq M$ an open subset, $p\in M_0$ and $f\colon M_0\to M$ be a ${\mathbb K}$-analytic mapping such that $f(p)=p$. If $N\subseteq M$ is a local centre-unstable manifold for~$f$ around~$p$, then for every $p$-neighbourhood $W\subseteq N$, there exists an open $p$-neighbourhood $O\subseteq N$ such that $f(O)$ is open in~$N$ and $O\subseteq f(O)\subseteq W$. In particular, $f(O)$ is a local centre-unstable manifold for $f$ around~$p$. \end{la} \section{Proof of Proposition~\ref{fi-res}}\label{fipro} Let $U\subseteq G$ be a compact open subgroup which minimizes the index \[ [\alpha(U):\alpha(U)\cap U]. \] Then $U$ is \emph{tidy} for $\alpha$ in the sense of \cite[Definition~2]{Wi3} (by the main theorem of~\cite{Wi3}, on p.\,405). Thus \[ U=U_+U_-=U_-U_+, \] where $U_+$ is the subgroup of all $x\in U$ having an $\alpha$-regressive trajectory in~$U$ and $U_-:=\{x\in U\colon (\forall n\in{\mathbb N})\; \alpha^n(x)\in U\}$. Moreover, $U_+$ and $U_-$ are compact subgroups of~$U$ (see \cite[Definition 4 and Proposition~1]{Wi3}). Now \[ \parb(\alpha)\cap U=U_-\quad\mbox{and}\quad \parb^-(\alpha)\cap U=U_+, \] by \cite[Proposition~11]{Wi3}, whence $U_+$ and $U_-$ are compact open subgroups of $\parb^-(\alpha)$ and $\parb(\alpha)$, respectively.
By \cite[Lemma~13.1]{BGT}, we have \[ \parb(\alpha)=\con(\alpha)\lev(\alpha)\quad\mbox{and}\quad \parb^-(\alpha)=\con^-(\alpha)\lev(\alpha)=\lev(\alpha)\con^-(\alpha), \] entailing that \[ \Omega:=\con(\alpha)\lev(\alpha)\con^-(\alpha)=\parb(\alpha)\parb^-(\alpha). \] The product map \[ p\colon \parb(\alpha)\times \parb^-(\alpha)\to G,\quad (x,y)\mapsto xy \] is continuous, with image~$\Omega$. We get a continuous left action $\sigma\colon H\times G\to G$ of the direct product \[ H:=\parb(\alpha)\times(\parb^-(\alpha)^{\op}) \] on~$G$ via $(x,y).z:=xzy$. Then $\Omega=H.e$ equals the $e$-orbit. As $\Omega\supseteq U_-U_+=U$ is a neighbourhood of~$e$ in~$G$, \ref{fact-2}\,(a) shows that $\Omega$ is open in~$G$. Note that the orbit map \[ \sigma^e\colon H\to G,\quad (x,y)\mapsto xey=xy \] equals~$p$. We now restrict $\sigma$ to a continuous left action $\tau\colon K\times G\to G$ of the compact group \[ K:= U_-\times (U_+)^{\op} \] on $G$. Then $p(K)=U=K.e$ and \[ p|_K\colon K\to K.e \] is the orbit map, which is an open map by \ref{fact-2}\,(c). Since $p(K)=U$ is open in~$G$, we deduce that also the map $p\colon H\to H.e=\Omega$ is open at~$e$ and hence an open map, by \ref{fact-2}\,(b). $\,\square$ \section{Local structure of {\boldmath$(G,\alpha)$} around {\boldmath$e$}}\label{locstru} We recall the construction of well-behaved $e$-neighbourhoods from~\cite[\S8]{END}. These facts are essential for all of our main results. Let $G$ be a Lie group over a totally disconnected local field~${\mathbb K}$ and $\alpha\colon G\to G$ be a ${\mathbb K}$-analytic endomorphism. Pick an ultrametric norm $\|\cdot\|$ on ${\mathfrak g}:=L(G)$ which is adapted to $L(\alpha)$. \begin{numba} Pick $a\in\,]0,1]$ such that $L(\alpha)$ is $a$-hyperbolic and $a>\rho$ for each characteristic value~$\rho$ of $L(\alpha)$ such that $\rho<1$. Pick $b\in [1,\infty[$ such that $L(\alpha)$ is $b$-hyperbolic and $b<\rho$ for each characteristic value $\rho$ of $L(\alpha)$ such that $\rho>1$.
With respect to the endomorphism $L(\alpha)$ of~${\mathfrak g}$, we then have \[ {\mathfrak g}_{<1}={\mathfrak g}_{<a}\quad\mbox{and}\quad {\mathfrak g}_{>1}={\mathfrak g}_{>b}, \] entailing that \[ {\mathfrak g}={\mathfrak g}_{<a}\oplus {\mathfrak g}_1\oplus {\mathfrak g}_{>b}. \] We find it useful to identify ${\mathfrak g}$ with the direct product ${\mathfrak g}_{<a}\times{\mathfrak g}_1\times{\mathfrak g}_{>b}$; an element $(x,y,z)$ of the latter is identified with $x+y+z\in{\mathfrak g}$. Let $R>0$, $W^s_a$, $W^c$, $W^u_b$ and the ${\mathbb K}$-analytic diffeomorphisms \[ \phi_s\colon W^s_a\to B^{{\mathfrak g}_{<a}}_R(0),\quad \phi_c\colon W^c\to B^{{\mathfrak g}_1}_R(0),\quad\mbox{and}\quad \phi_u\colon W^u_b\to B^{{\mathfrak g}_{>b}}_R(0) \] be as in \ref{prop-7-3}, applied with $G$ in place of~$M$, $\alpha$ in place of~$f$ and $e$ in place of~$p$. Abbreviate $B^F_t:=B^F_t(0)$ if $t>0$ and $F\subseteq{\mathfrak g}$ is a vector subspace. Using the inverse maps $\psi_s:=\phi_s^{-1}$, $\psi_c:=\phi_c^{-1}$, and $\psi_u:=\phi_u^{-1}$, we define the ${\mathbb K}$-analytic map \[ \psi\colon B_R^{\mathfrak g}=B_R^{{\mathfrak g}_{<a}}\times B^{{\mathfrak g}_1}_R\times B^{{\mathfrak g}_{>b}}_R\to G,\quad (x,y,z)\mapsto \psi_s(x)\psi_c(y)\psi_u(z). \] Then $T_0\psi=\id_{\mathfrak g}$ if we identify $T_0{\mathfrak g}=\{0\}\times{\mathfrak g}$ with ${\mathfrak g}$ by forgetting the first component. By the inverse function theorem, after shrinking~$R$ if necessary, we may assume that the image $W^s_aW^cW^u_b$ of~$\psi$ is an open identity neighbourhood in~$G$, and that \[ \psi\colon B^{\mathfrak g}_R\to W^s_aW^cW^u_b \] is a ${\mathbb K}$-analytic diffeomorphism. In particular, the product map \begin{equation}\label{localprod} W^s_a\times W^c\times W^u_b\to W^s_aW^cW^u_b,\quad (x,y,z)\mapsto xyz \end{equation} is a ${\mathbb K}$-analytic diffeomorphism. We define $\phi:=\psi^{-1}$, with domain $U:=W^s_aW^cW^u_b$ and image $V:=B^{\mathfrak g}_R$. 
After shrinking $R$ further if necessary, we may assume that \[ B^\phi_t:=\phi^{-1}(B^{\mathfrak g}_t) \] is a compact open subgroup of~$G$ for each $t\in\,]0,R]$ and a normal subgroup of~$B^\phi_R$ (see \cite[5.1 and Lemma~5.2]{END}). Then \[ B^\phi_t=\phi^{-1}(B^{\mathfrak g}_t)=\psi(B^{\mathfrak g}_t) =\psi_s(B^{{\mathfrak g}_{<a}}_t)\psi_c(B^{{\mathfrak g}_1}_t)\psi_u(B^{{\mathfrak g}_{>b}}_t) =W^s_a(t)W^c(t)W^u_b(t) \] with notation as in \ref{prop-7-3}. \end{numba} \begin{numba}\label{nunum} After shrinking~$R$ if necessary, we may assume that $W^u_b(t)$ is a subgroup of~$G$ for all $t\in\, ]0,R]$ and the set $W^c(t):=\phi_c^{-1}(B^{{\mathfrak g}_1}_t)$ normalizes $W^u_b(t)$ (see \cite[Lemma~8.7]{END}). Since $W^s_a(t)$ is a local $a$-stable manifold for~$\alpha$, we have \[ \alpha(W^s_a(t))\subseteq W^s_a(t)\quad\mbox{for all $t\in\,]0,R]$.} \] After shrinking~$R$, moreover \begin{equation}\label{subcontra} \bigcap_{n\in{\mathbb N}_0}\alpha^n(W^s_a)=\{e\}\quad\mbox{and}\quad \lim_{n\to\infty}\alpha^n(x)=e\;\,\mbox{for all $x\in W^s_a$} \end{equation} (see \cite[8.8]{END}). In addition, one may assume that \[ \alpha|_{W^c(t)}\colon W^c(t)\to W^c(t) \] is a ${\mathbb K}$-analytic diffeomorphism for each $t\in\,]0,R]$ (see \cite[8.9]{END}). \end{numba} \begin{numba}\label{subu} As shown in \cite[8.10]{END}, after shrinking~$R$ one may assume that, for each $t\in\,]0,R]$ and $x\in W^u_b(t)$, there exists an $\alpha$-regressive trajectory $(x_{-n})_{n\in{\mathbb N}_0}$ in~$W^u_b(t)$ such that $x_0=x$ and \[ \lim_{n\to\infty} x_{-n}=e; \] moreover, one may assume that $W^u_b(t)\subseteq\alpha(W^u_b(t))$ for all $t\in\,]0,R]$. Since $L(\alpha|_{W^u_b})=L(\alpha)|_{{\mathfrak g}_{>b}}$ is injective, we may assume that $\alpha|_{W^u_b}$ is an injective immersion, after possibly shrinking~$R$. \end{numba} \begin{numba} After shrinking~$R$ if necessary, we may assume that $W^s_a(t)$ is a subgroup of~$G$ for each $t\in\,]0,R]$ and that $W^c(t)$ normalizes $W^s_a(t)$. 
Using a dynamical description of the local $a$-stable manifolds as in \cite[Theorem~6.6(c)(i)]{INV}, this can be proved like \cite[Lemma~8.7]{END}. \end{numba} \begin{numba} By \cite[8.1]{END}, there exists $r\in\,]0,R]$ such that $\alpha(W^u_b(r))\subseteq W^u_b$ and, for each $x\in W^u_b(r)\setminus\{e\}$, there exists $n\in {\mathbb N}$ such that $\alpha^n(x)\not\in W^u_b(r)$. \end{numba} \begin{numba} For each $t\in \,]0,R]$, \[ (B_t^\phi)_-:=\{x\in B^\phi_t\colon (\forall n\in{\mathbb N})\; \alpha^n(x)\in B^\phi_t\} \] is a compact subgroup of $B^\phi_t$. Let $(B^\phi_t)_+$ be the set of all $x\in B^\phi_t$ for which there exists an $\alpha$-regressive trajectory $(x_{-n})_{n\in{\mathbb N}_0}$ such that $x_{-n}\in B^\phi_t$ for all $n\in{\mathbb N}_0$ and $x_0=x$. As recalled in Section~\ref{fipro}, also $(B^\phi_t)_+$ is a compact subgroup of~$B^\phi_t$. For each $t\in \,]0,R]$, we have \[ (B^\phi_t)_+=W^c(t)W^u_b(t); \] moreover, \[ (B^\phi_t)_-=W^s_a(t)W^c(t) \] for all $t\in\,]0,r]$, by Equations~(68) and (73), respectively, in \cite[proof of Theorem~8.13]{END}. Moreover, \[ W^c(t)=(B^\phi_t)_-\cap (B^\phi_t)_+ \] is a compact \emph{subgroup} of $B^\phi_t$ for each $t\in \,]0,r]$, see \cite[Remark~8.14]{END}. \end{numba} We shall use the following result concerning local centre-unstable manifolds. \begin{la}\label{germ-cu} There exists a local centre-unstable manifold $N$ for $\alpha$ around~$e$. Its germ at~$e$ is uniquely determined. \end{la} \begin{proof} The submanifold $N:=W^cW^u_b$ of $U=W^s_aW^cW^u_b\cong W^s_a\times W^c\times W^u_b$ is a local centre-unstable manifold for $\alpha$ around~$e$, as $T_eN=T_e(W^c)\oplus T_e(W^u_b)=(T_eG)_{\geq 1}$ and $P:=W^cW^u_b(r)$ is an open $e$-neighbourhood in $N$ such that $\alpha(P)\subseteq N$. If also $N'$ is a local centre-unstable manifold for~$\alpha$ around~$e$, then there exists an open $e$-neighbourhood $Q\subseteq N'$ such that $\alpha(Q)\subseteq N'$.
By Lemma~\ref{enough-uni}, there exists an open $e$-neighbourhood $O\subseteq N'$ such that $\alpha(O)$ is open in $N'$ and \[ O\subseteq\alpha(O)\subseteq U\cap N'. \] Hence, for each $x\in O$ we find an $\alpha$-regressive trajectory $(x_{-n})_{n\in{\mathbb N}_0}$ in~$O$ such that $x_0=x$. Since $x_{-n}\in O\subseteq U$ for each $n\in{\mathbb N}_0$, we have $x\in (B^\phi_R)_+=W^cW^u_b=N$. Thus $O\subseteq N$. As the inclusion map $\iota\colon O\to N$ is ${\mathbb K}$-analytic and $T_e\iota$ is the identity map of $(T_eG)_{\geq 1}$, the inverse function theorem shows that $O$ contains an open $e$-neighbourhood~$W$ such that $W=\iota(W)$ is open in~$N$ and $\id=\iota|_W\colon W\to W$ is a ${\mathbb K}$-analytic diffeomorphism. Thus $N'$ and $N$ induce the same ${\mathbb K}$-analytic manifold structure on their joint open subset~$W$, whence the germs of $N$ and $N'$ at~$e$ coincide. \end{proof} \begin{la}\label{con-loc} We have $\con(\alpha)\cap W^s_a(r)W^c(r)=W^s_a(r)$. Moreover, $W^u_b(r)$ is the set of all $x\in W^c(r)W^u_b(r)$ admitting an $\alpha$-regressive trajectory $(x_{-n})_{n\in{\mathbb N}_0}$ in $W^c(r)W^u_b(r)$ such that $x_0=x$ and $x_{-n}\to e$ for $n\to\infty$. \end{la} \begin{proof} If $x\in W^s_a(r)$, then $\alpha^n(x)\to e$ as $n\to\infty$ and thus $x\in\con(\alpha)$, by~(\ref{subcontra}). Now assume that $x\in\con(\alpha)\cap W^s_a(r)W^c(r)$. Then $x=yz$ for unique $y\in W^s_a(r)$ and $z\in W^c(r)$. If we had $z\not=e$, then we could find $t\in \,]0,r[$ such that $z\not\in W^c(t)$. Then $\alpha^n(y)\in W^s_a(r)$ for each $n\in {\mathbb N}_0$. Since $\alpha|_{W^c(r)}\colon W^c(r)\to W^c(r)$ is a bijection which takes $W^c(t)$ onto itself, we deduce that $\alpha^n(z)\in W^c(r)\setminus W^c(t)$ for all $n\in {\mathbb N}_0$, entailing that the group element $\alpha^n(x)=\alpha^n(y)\alpha^n(z)$ is in $B^\phi_r\setminus B^\phi_t$ for each $n\in{\mathbb N}_0$. Hence $\alpha^n(x)\not\to e$, contradiction.
Thus $z=e$ and thus $x\in W^s_a(r)$.\\[2mm] By \ref{subu}, each $x\in W^u_b(r)$ has an $\alpha$-regressive trajectory of the asserted form. Now let $x\in W^c(r)W^u_b(r)$ and assume that there exists an $\alpha$-regressive trajectory $(x_{-n})_{n\in{\mathbb N}_0}$ in $W^c(r)W^u_b(r)$ such that $x_0=x$ and $x_{-n}\to e$ for $n\to\infty$. Write $x_{-n}=y_{-n}z_{-n}$ with $y_{-n}\in W^c(r)$ and $z_{-n}\in W^u_b(r)$. Then $y_{-n}\to e$ and $z_{-n}\to e$ as $n\to\infty$. For each $n\in{\mathbb N}$, we have \[ y_{-n+1}z_{-n+1}=x_{-n+1}=\alpha(x_{-n})=\alpha(y_{-n})\alpha(z_{-n}) \] with $\alpha(y_{-n})\in W^c(r)$ and $\alpha(z_{-n})\in \alpha(W^u_b(r))\subseteq W^u_b$. As the product map $W^c\times W^u_b\to W^cW^u_b$ is a bijection, we deduce that $y_{-n+1}=\alpha(y_{-n})$ and $z_{-n+1}=\alpha(z_{-n})$. If we had $y_0\not=e$, we could find $t\in\,]0,r[$ such that $y_0\not\in W^c(t)$. There would be some $N\in{\mathbb N}$ such that $y_{-n}\in W^c(t)$ for all $n\geq N$. Since $\alpha(W^c(t))=W^c(t)$, this would imply $y_0=\alpha^N(y_{-N})\in W^c(t)$, a contradiction. Thus $y_0=e$ and thus $x=z_0\in W^u_b(r)$. \end{proof} \begin{rem}\label{union-con} Since $W^u_b(r)\subseteq \alpha(W^u_b(r))$ by \ref{subu}, $(\alpha^n(W^u_b(r)))_{n\in{\mathbb N}}$ is an ascending sequence of compact subgroups of $\con^-(\alpha)$. Moreover, \[ \con^-(\alpha)=\bigcup_{n\in{\mathbb N}}\alpha^n(W^u_b(r)). \] In fact, for $x\in\con^-(\alpha)$ there exists an $\alpha$-regressive trajectory $(x_{-n})_{n\in{\mathbb N}}$ such that $x_0=x$ and $x_{-n}\to e$ as $n\to\infty$. There exists $N\in{\mathbb N}$ such that $x_{-n}\in W^s_a(r)W^c(r)W^u_b(r)=B^\phi_r$ for all $n\geq N$. For each $n\geq N$, the element $x_{-n}$ of $B^\phi_r$ has the $\alpha$-regressive trajectory $(x_{-n-m})_{m\in{\mathbb N}_0}$ in $B^\phi_r$, whence $x_{-n}\in (B^\phi_r)_+=W^c(r)W^u_b(r)$. As $x_{-n-m}\in W^c(r)W^u_b(r)$ and $x_{-n-m}\to e$ as $m\to\infty$, Lemma~\ref{con-loc} shows that $x_{-n}\in W^u_b(r)$ for each $n\geq N$.
In particular, $x_{-N}\in W^u_b(r)$ and $x=x_0=\alpha^N(x_{-N})\in\alpha^N(W^u_b(r))$. \end{rem} \section{Proof of Theorems~\ref{thmA} and \ref{thmB}} We retain the notation of Section~\ref{locstru}. As is well known, a homomorphism $f\colon H\to K$ between ${\mathbb K}$-analytic Lie groups is ${\mathbb K}$-analytic whenever it is ${\mathbb K}$-analytic on an open $e$-neighbourhood in~$H$. Applying this to $f=\id_H$, we see that two Lie group structures on~$H$ coincide if their germs at~$e$ coincide. The uniqueness of the Lie group structures in parts~(a), (b), (c), and (d) of Theorem~\ref{thmA} therefore follows from the uniqueness statements concerning manifold germs in \ref{uni-germ}, \ref{also-nonhypo}, and \ref{nonhypo2}; the uniqueness statement in~(e) follows from Lemma~\ref{germ-cu}. It remains to prove the existence of the asserted Lie group structures, and that they have the properties described in Theorem~\ref{thmB}.\\[1mm] \emph{Contraction groups.} Since $W_a^s$ is a subgroup of~$G$ and a submanifold, it is a Lie subgroup. By Lemma~\ref{con-loc}, we have $W^s_a\subseteq\con(\alpha)$. If $g\in\con(\alpha)$, then $\{\alpha^n(g)\colon n\in{\mathbb N}_0\}$ is relatively compact, whence there exists $t\in \,]0,r]$ such that \[ \alpha^n(g)B^\phi_t\alpha^n(g)^{-1}\subseteq B^\phi_r \] for all $n\in{\mathbb N}_0$. For each $x\in W^s_a(t)$ and each $n\in{\mathbb N}_0$, we then have \[ \alpha^n(gxg^{-1})=\alpha^n(g)\alpha^n(x)\alpha^n(g)^{-1}\in B^\phi_r, \] as $\alpha^n(x)\in W^s_a(t)$. As a consequence, $gxg^{-1}\in (B^\phi_r)_-=W^s_a(r)W^c(r)$. Moreover, $gxg^{-1}\in \con(\alpha)$ as $g\in\con(\alpha)$ and $x\in W^s_a(t)\subseteq \con(\alpha)$. Hence $gxg^{-1}\in W^s_a(r)$, by Lemma~\ref{con-loc}. Being a restriction of the ${\mathbb K}$-analytic conjugation map $G\to G$, $h\mapsto ghg^{-1}$, the map \[ W^s_a(t)\to W^s_a,\quad x\mapsto gxg^{-1} \] is ${\mathbb K}$-analytic.
By the Local Description of Lie Group Structures (see Proposition 18 in \cite[Chapter~III,\S1, no.\,9]{Bou}), we get a unique ${\mathbb K}$-analytic manifold structure on $\con(\alpha)$ making it a Lie group $\con_*(\alpha)$, such that $W^s_a$ is an open submanifold. As $W^s_a$ is a submanifold of~$G$, $\con_*(\alpha)$ is an immersed Lie subgroup of~$G$. Now $\alpha(W^s_a)\subseteq W^s_a$ and $\alpha|_{W^s_a}\colon W^s_a\to W^s_a$ is ${\mathbb K}$-analytic since $\alpha$ is ${\mathbb K}$-analytic and $W^s_a$ is a submanifold of~$G$. The restriction $\alpha_s$ of $\alpha$ to an endomorphism of the subgroup $\con_*(\alpha)$ coincides with $\alpha|_{W^s_a}$ on the open subset $W^s_a$ of~$\con_*(\alpha)$, whence $\alpha_s$ is ${\mathbb K}$-analytic. If $g\in \con(\alpha)$, there exists $N\in{\mathbb N}$ such that $\alpha^n(g)\in B^\phi_r$ for all $n\geq N$, whence $\alpha^n(g)\in (B^\phi_r)_-=W^s_a(r)W^c(r)$. Since also $\alpha^n(g)\in \con(\alpha)$, Lemma~\ref{con-loc} shows that $\alpha^n(g)\in W^s_a(r)$. As $\con_*(\alpha)$ has $W^s_a$ as an open submanifold and $\alpha^n(g)\to e$ in $W^s_a$ as $N\leq n\to\infty$, we see that $(\alpha_s)^n(g)=\alpha^n(g)\to e$ also in $\con_*(\alpha)$. Thus $\con(\alpha_s)=\con_*(\alpha)$.\\[2mm] \emph{Anti-contraction groups.} Being a subgroup of~$G$ and a submanifold, $W^u_b$ is a Lie subgroup. By Lemma~\ref{con-loc}, we have $W^u_b\subseteq\con^-(\alpha)$. If $g\in\con^-(\alpha)$, then there exists an $\alpha$-regressive trajectory $(g_{-n})_{n\in{\mathbb N}_0}$ such that $g_0=g$ and $g_{-n}\to e$ as $n\to\infty$. Notably, $\{g_{-n}\colon n\in{\mathbb N}_0\}$ is relatively compact, whence there exists $t\in \,]0,r]$ such that \[ g_{-n}B^\phi_t(g_{-n})^{-1}\subseteq B^\phi_r \] for all $n\in{\mathbb N}_0$. For each $x\in W^u_b(t)$, there exists an $\alpha$-regressive trajectory $(x_{-n})_{n\in{\mathbb N}_0}$ in $W^u_b(t)$ such that $x_0=x$ and $x_{-n}\to e$ as $n\to\infty$ (see \ref{subu}).
Then $(g_{-n}x_{-n}(g_{-n})^{-1})_{n\in{\mathbb N}_0}$ is an $\alpha$-regressive trajectory for $gxg^{-1}$ such that $g_{-n}x_{-n}(g_{-n})^{-1}\to e$ as $n\to\infty$ and \[ g_{-n}x_{-n}(g_{-n})^{-1}\in g_{-n}B^\phi_t (g_{-n})^{-1}\subseteq B^\phi_r \] for all $n\in{\mathbb N}_0$, whence $g_{-n}x_{-n}(g_{-n})^{-1}\in (B^\phi_r)_+= W^c(r)W^u_b(r)$ for all $n\in{\mathbb N}_0$ and thus $gxg^{-1}\in W^u_b(r)$, by Lemma~\ref{con-loc}. Being a restriction of the ${\mathbb K}$-analytic conjugation map $G\to G$, $h\mapsto ghg^{-1}$, the map \[ W^u_b(t)\to W^u_b,\quad x\mapsto gxg^{-1} \] is ${\mathbb K}$-analytic. Using the Local Description of Lie Group Structures, we get a unique ${\mathbb K}$-analytic manifold structure on $\con^-(\alpha)$ making it a Lie group $\con^-_*(\alpha)$, such that $W^u_b$ is an open submanifold. As $W^u_b$ is a submanifold of~$G$, $\con^-_*(\alpha)$ is an immersed Lie subgroup of~$G$. Now $\alpha(W^u_b(r))\subseteq W^u_b$ and $\alpha|_{W^u_b(r)}\colon W^u_b(r)\to W^u_b$ is ${\mathbb K}$-analytic since $\alpha$ is ${\mathbb K}$-analytic and $W^u_b$ is a submanifold of~$G$. The restriction $\alpha_u$ of $\alpha$ to an endomorphism of $\con_*^-(\alpha)$ coincides with $\alpha|_{W^u_b(r)}$ on the open subset $W^u_b(r)$ of~$\con^-_*(\alpha)$, whence $\alpha_u$ is ${\mathbb K}$-analytic. If $g\in \con^-(\alpha)$, then there exists an $\alpha$-regressive trajectory $(g_{-n})_{n\in{\mathbb N}_0}$ such that $g_0=g$ and $g_{-n}\to e$ in~$G$ as $n\to\infty$. Then $g_{-n}\in \con^-(\alpha)$ for all $n\in{\mathbb N}_0$ and we have seen in Remark~\ref{union-con} that there exists an $N\in{\mathbb N}$ such that $g_{-n}\in W^u_b(r)$ for all $n\geq N$. As $\con^-_*(\alpha)$ and $G$ induce the same topology on $W^u_b(r)$, we deduce that $g_{-n}\to e$ also in $\con^-_*(\alpha)$. Thus $g\in \con^-(\alpha_u)$, and hence $\con^-(\alpha_u)=\con^-_*(\alpha)$.\\[2mm] \emph{Parabolic subgroups.} $(B^\phi_r)_-=W_a^s(r)W^c(r)$ is a Lie subgroup of~$G$ and a subgroup of $\parb(\alpha)$.
If $g\in\parb(\alpha)$, then $\{\alpha^n(g)\colon n\in{\mathbb N}_0\}$ is relatively compact, whence there exists $t\in \,]0,r]$ such that \[ \alpha^n(g)B^\phi_t\alpha^n(g)^{-1}\subseteq B^\phi_r \] for all $n\in{\mathbb N}_0$. For each $x\in W^s_a(t)W^c(t)$, we have $\alpha^n(x)\in W^s_a(t)W^c(t)$ for all $n\in{\mathbb N}_0$ and thus \[ \alpha^n(gxg^{-1})=\alpha^n(g)\alpha^n(x)\alpha^n(g)^{-1} \in \alpha^n(g)B^\phi_t\alpha^n(g)^{-1}\subseteq B^\phi_r, \] whence $gxg^{-1}\in (B^\phi_r)_-=W^s_a(r)W^c(r)$. Being a restriction of the ${\mathbb K}$-analytic conjugation map $G\to G$, $h\mapsto ghg^{-1}$, the map \[ W^s_a(t)W^c(t)\to W^s_a(r)W^c(r),\quad x\mapsto gxg^{-1} \] is ${\mathbb K}$-analytic. By the Local Description of Lie Group Structures, we get a unique ${\mathbb K}$-analytic manifold structure on $\parb(\alpha)$ making it a Lie group $\parb_*(\alpha)$, such that $W^s_a(r)W^c(r)$ is an open submanifold. As $W^s_a(r)W^c(r)$ is a submanifold of~$G$, $\parb_*(\alpha)$ is an immersed Lie subgroup of~$G$. Now $\alpha(W^s_a(r)W^c(r))\subseteq W^s_a(r)W^c(r)$ and $\alpha|_{W^s_a(r)W^c(r)}\colon W^s_a(r)W^c(r)\to W^s_a(r)W^c(r)$ is ${\mathbb K}$-analytic. As a consequence, the restriction $\alpha_{cs}$ of $\alpha$ to an endomorphism of the subgroup $\parb_*(\alpha)$ is ${\mathbb K}$-analytic. If $g\in W^s_a(r)W^c(r)$, then $\{\alpha^n(g)\colon n\in{\mathbb N}_0\}$ is contained in the compact open subgroup $W^s_a(r)W^c(r)$ of $\parb_*(\alpha)$, whence $g\in \parb(\alpha_{cs})$. Thus $W^s_a(r)W^c(r)\subseteq\parb(\alpha_{cs})$, entailing that the subgroup $\parb(\alpha_{cs})$ is an $e$-neighbourhood and hence open in $\parb_*(\alpha)$.\\[2mm] \emph{Antiparabolic subgroups.} $(B^\phi_r)_+=W^c(r)W^u_b(r)$ is a Lie subgroup of~$G$ and a subgroup of $\parb^-(\alpha)$. If $g\in\parb^-(\alpha)$, then there exists an $\alpha$-regressive trajectory $(g_{-n})_{n\in{\mathbb N}_0}$ such that $g_0=g$ and $\{g_{-n}\colon n\in{\mathbb N}_0\}$ is relatively compact.
Thus, there exists $t\in \,]0,r]$ such that \[ g_{-n}B^\phi_t(g_{-n})^{-1}\subseteq B^\phi_r \] for all $n\in{\mathbb N}_0$. For each $x\in W^c(t)W^u_b(t)=(B^\phi_t)_+$, there exists an $\alpha$-regressive trajectory $(x_{-n})_{n\in{\mathbb N}_0}$ in $W^c(t)W^u_b(t)$ such that $x_0=x$. Then $(g_{-n}x_{-n}(g_{-n})^{-1})_{n\in{\mathbb N}_0}$ is an $\alpha$-regressive trajectory for $gxg^{-1}$ such that \[ g_{-n}x_{-n}(g_{-n})^{-1}\in g_{-n}B^\phi_t (g_{-n})^{-1}\subseteq B^\phi_r \] for all $n\in{\mathbb N}_0$, whence $gxg^{-1}\in (B^\phi_r)_+=W^c(r)W^u_b(r)$. Being a restriction of the ${\mathbb K}$-analytic conjugation map $G\to G$, $h\mapsto ghg^{-1}$, the map \[ W^c(t)W^u_b(t)\to W^c(r)W^u_b(r),\quad x\mapsto gxg^{-1} \] is ${\mathbb K}$-analytic. Using the Local Description of Lie Group Structures, we get a unique ${\mathbb K}$-analytic manifold structure on $\parb^-(\alpha)$ making it a Lie group $\parb^-_*(\alpha)$, such that $W^c(r)W^u_b(r)$ is an open submanifold. As $W^c(r)W^u_b(r)$ is a submanifold of~$G$, $\parb^-_*(\alpha)$ is an immersed Lie subgroup of~$G$. Since $\alpha|_{W^u_b(r)}\colon W^u_b(r)\to W^u_b$ is continuous and $W^u_b(r)$ is open in $W^u_b$, there exists $t\in\,]0,r]$ such that $\alpha(W^u_b(t))\subseteq W^u_b(r)$. Now $\alpha(W^c(t)W^u_b(t))\subseteq W^c(r)W^u_b(r)$, entailing that the restriction $\alpha_{cu}$ of $\alpha$ to an endomorphism of $\parb_*^-(\alpha)$ is ${\mathbb K}$-analytic. If $g\in W^c(r)W^u_b(r)=(B^\phi_r)_+$, then there exists an $\alpha$-regressive trajectory $(g_{-n})_{n\in{\mathbb N}_0}$ in $B^\phi_r$ such that $g_0=g$. For each $n\in{\mathbb N}_0$, the sequence $(g_{-n-m})_{m\in{\mathbb N}_0}$ is an $\alpha$-regressive trajectory in $B^\phi_r$ for $g_{-n}$, whence $g_{-n}\in (B^\phi_r)_+=W^c(r)W^u_b(r)$. As $\parb^-_*(\alpha)$ and~$G$ induce the same topology on $W^c(r)W^u_b(r)$, we deduce that $g\in \parb^-(\alpha_{cu})$.
Thus $W^c(r)W^u_b(r)\subseteq\parb^-(\alpha_{cu})$, showing that the latter is an open subgroup of $\parb^-_*(\alpha)$.\\[2mm] \emph{Levi subgroups.} $W^c(r)$ is a Lie subgroup of~$G$ and a subgroup of $\lev(\alpha)$. If $g\in\lev(\alpha)=\parb(\alpha)\cap\parb^-(\alpha)$, our discussion of $\parb_*(\alpha)$ and $\parb^-_*(\alpha)$ yields a $t\in\,]0,r]$ such that $g W^s_a(t)W^c(t)g^{-1}\subseteq W^s_a(r)W^c(r)$ and $gW^c(t)W^u_b(t)g^{-1}\subseteq W^c(r)W^u_b(r)$, whence \[ gW^c(t)g^{-1}\subseteq W^s_a(r)W^c(r)\cap W^c(r)W^u_b(r)=W^c(r). \] Being a restriction of the ${\mathbb K}$-analytic conjugation map $G\to G$, $h\mapsto ghg^{-1}$, the map \[ W^c(t)\to W^c(r),\quad x\mapsto gxg^{-1} \] is ${\mathbb K}$-analytic. Using the Local Description of Lie Group Structures, we get a unique ${\mathbb K}$-analytic manifold structure on $\lev(\alpha)$ making it a Lie group $\lev_*(\alpha)$, such that $W^c(r)$ is an open submanifold. As $W^c(r)$ is a submanifold of~$G$, $\lev_*(\alpha)$ is an immersed Lie subgroup of~$G$. Now $\alpha(W^c(r))=W^c(r)$, entailing that the restriction $\alpha_c$ of $\alpha$ to an endomorphism of $\lev_*(\alpha)$ is ${\mathbb K}$-analytic. If $g\in W^c(r)$, then there exists an $\alpha$-regressive trajectory $(g_{-n})_{n\in{\mathbb N}_0}$ in $W^c(r)$ such that $g_0=g$. Moreover, $\alpha^n(g)\in W^c(r)$ for all $n\in{\mathbb N}_0$. As $\lev_*(\alpha)$ and~$G$ induce the same topology on $W^c(r)$, we deduce that $g\in \parb^-(\alpha_c)$ and $g\in\parb(\alpha_c)$. Thus $g\in\lev(\alpha_c)$, and thus $W^c(r)\subseteq\lev(\alpha_c)$, showing that the latter is an open subgroup of $\lev_*(\alpha)$. $\,\square$ \section{Proof of Theorem~\ref{thmC}} We start with a lemma.
\begin{la}\label{inclu} If $G$ is a ${\mathbb K}$-analytic Lie group over a totally disconnected local field~${\mathbb K}$ and $\alpha\colon G\to G$ a ${\mathbb K}$-analytic endomorphism, then the inclusion maps \[ \lev_*(\alpha)\to\parb_*(\alpha)\quad \mbox{and}\quad \lev_*(\alpha)\to\parb^-_*(\alpha) \] are ${\mathbb K}$-analytic group homomorphisms and immersions. The actions \[ \parb_*(\alpha)\times\con_*(\alpha)\to\con_*(\alpha)\quad\mbox{and}\quad \parb^-_*(\alpha)\times \con^-_*(\alpha)\to\con^-_*(\alpha) \] given by $(g,x)\mapsto gxg^{-1}$ are ${\mathbb K}$-analytic. \end{la} \begin{proof} The first assertion follows from the fact that $W^c(r)$, $W^s_a(r)W^c(r)$, and $W^c(r)W^u_b(r)$ are open $e$-neighbourhoods in $\lev_*(\alpha)$, $\parb_*(\alpha)$ and $\parb^-_*(\alpha)$, respectively, and the product maps $W^s_a(r)\times W^c(r)\to W^s_a(r)W^c(r)$ and $W^c(r)\times W^u_b(r)\to W^c(r)W^u_b(r)$ are ${\mathbb K}$-analytic diffeomorphisms.\\[2mm] To see that the action of $\parb_*(\alpha)$ on $\con_*(\alpha)$ is ${\mathbb K}$-analytic, we verify the hypotheses of Lemma~\ref{act-ana}. For each $g\in\parb_*(\alpha)$, the set $\{\alpha^n(g)\colon n\in{\mathbb N}_0\}$ is relatively compact in~$G$, whence we find $t\in\,]0,r]$ such that $\alpha^n(g)B^\phi_t\alpha^n(g)^{-1}\subseteq B^\phi_r$ for all $n\in{\mathbb N}_0$. For $x\in W^s_a(t)$, we have $\alpha^n(x)\in W^s_a(t)\subseteq B^\phi_t$ for each $n\in{\mathbb N}_0$, whence $\alpha^n(gxg^{-1})\in B^\phi_r$ and thus $gxg^{-1}\in (B^\phi_r)_-=W^s_a(r)W^c(r)$. As \begin{equation}\label{again-2-id} \alpha^n(gxg^{-1})=\alpha^n(g)\alpha^n(x)\alpha^n(g)^{-1}\to e \end{equation} by \ref{comp-to-id}, Lemma~\ref{con-loc} shows that $gxg^{-1}\in W^s_a(r)$. As the map $W^s_a(t)\to W^s_a(r)$, $x\mapsto gxg^{-1}$ is ${\mathbb K}$-analytic, so is the map $W^s_a(t)\to\con_*(\alpha)$, $x\mapsto gxg^{-1}$.
Next, we note that if $g\in W^s_a(r)W^c(r)$ and $x\in W^s_a(r)$, then $gxg^{-1}\in B^\phi_r$ and \[ \alpha^n(gxg^{-1})=\alpha^n(g)\alpha^n(x)\alpha^n(g)^{-1}\in B^\phi_r \] for all $n\in{\mathbb N}_0$ as $W^s_a(r)W^c(r)$ and $W^s_a(r)$ are $\alpha$-invariant. Hence $gxg^{-1}\in (B^\phi_r)_-=W^s_a(r)W^c(r)$. Moreover, (\ref{again-2-id}) holds by \ref{comp-to-id}, whence $gxg^{-1}\in W^s_a(r)$ by Lemma~\ref{con-loc}. Thus \[ W^s_a(r)W^c(r)\times W^s_a(r)\to W^s_a(r)\subseteq\con_*(\alpha),\quad (g,x)\mapsto gxg^{-1} \] is ${\mathbb K}$-analytic. Finally, let $x\in \con_*(\alpha)$ be arbitrary. There exists $N\in{\mathbb N}$ such that $\alpha^n(x)\in B^\phi_r$ for all $n\geq N$. There exists $t\in\,]0,r]$ such that \[ \alpha^n(gxg^{-1}x^{-1})\in B^\phi_r\quad \mbox{for all $n\in \{0,\ldots, N-1\}$ and $g\in W^s_a(t)W^c(t)$.} \] Let $g\in W^s_a(t)W^c(t)$. For all $n\geq N$ we have $\alpha^n(g)\in W^s_a(t)W^c(t)\subseteq B^\phi_t\subseteq B^\phi_r$ and $\alpha^n(x)\in B^\phi_r$, whence \[ \alpha^n(gxg^{-1}x^{-1})=\alpha^n(g)\alpha^n(x) \alpha^n(g)^{-1}\alpha^n(x)^{-1}\in B^\phi_r. \] Hence $gxg^{-1}x^{-1}\in (B^\phi_r)_-$. Since $\alpha^n(x)\to e$ and (\ref{again-2-id}) holds, we deduce that \[ \alpha^n(gxg^{-1}x^{-1})=\alpha^n(gxg^{-1})\alpha^n(x)^{-1}\to e \] as $n\to \infty$. Thus $gxg^{-1}x^{-1}\in W^s_a(r)$, by Lemma~\ref{con-loc}, entailing that the map \[ W^s_a(t)W^c(t)\to W^s_a(r)\subseteq\con_*(\alpha),\quad g\mapsto gxg^{-1}x^{-1} \] is ${\mathbb K}$-analytic and hence also the map $W^s_a(t)W^c(t)\to\con_*(\alpha)$, $g\mapsto gxg^{-1}$; all hypotheses of Lemma~\ref{act-ana} are verified.\\[2mm] To see that the conjugation action of $\parb^-_*(\alpha)$ on $\con^-_*(\alpha)$ is ${\mathbb K}$-analytic, again we verify the hypotheses of Lemma~\ref{act-ana}. For each $g\in\parb^-_*(\alpha)$, there exists an $\alpha$-regressive trajectory $(g_{-n})_{n\in{\mathbb N}_0}$ with $g_0=g$ such that $\{g_{-n}\colon n\in{\mathbb N}_0\}$ is relatively compact in~$G$.
There exists $t\in\,]0,r]$ such that $g_{-n}B^\phi_t (g_{-n})^{-1} \subseteq B^\phi_r$ for all $n\in{\mathbb N}_0$. For each $x\in W^u_b(t)$, there exists an $\alpha$-regressive trajectory $(x_{-n})_{n\in{\mathbb N}_0}$ in $W^u_b(t)$ such that $x_0=x$. Then $g_{-n}x_{-n}(g_{-n})^{-1} \in B^\phi_r$ for each $n\in{\mathbb N}_0$. As $(g_{-n-m}x_{-n-m}(g_{-n-m})^{-1})_{m\in{\mathbb N}_0}$ is an $\alpha$-regressive trajectory for $g_{-n}x_{-n}(g_{-n})^{-1}$, we conclude that $g_{-n}x_{-n}(g_{-n})^{-1}\in (B^\phi_r)_+=W^c(r)W^u_b(r)$ for each $n\in{\mathbb N}_0$. By \ref{comp-to-id}, we have \begin{equation}\label{again-3-id} g_{-n}x_{-n}(g_{-n})^{-1}\to e\;\,\mbox{as $\,n\to\infty$.} \end{equation} Thus Lemma~\ref{con-loc} shows that $gxg^{-1}\in W^u_b(r)$. As a consequence, the map $W^u_b(t)\to W^u_b(r)\subseteq\con^-_*(\alpha)$, $x\mapsto gxg^{-1}$ is ${\mathbb K}$-analytic. Next, for $g\in W^c(r)W^u_b(r)=(B^\phi_r)_+$ and $x\in W^u_b(r)$, there exists an $\alpha$-regressive trajectory $(g_{-n})_{n\in{\mathbb N}_0}$ in $B^\phi_r$ with $g_0=g$. Moreover, there is an $\alpha$-regressive trajectory $(x_{-n})_{n\in{\mathbb N}_0}$ in $W^u_b(r)$ such that $x_0=x$ and $x_{-n}\to e$ as $n\to\infty$ (see Lemma~\ref{con-loc}). Then \[ g_{-n}x_{-n}(g_{-n})^{-1}\in B^\phi_r \] for all $n\in{\mathbb N}_0$. Hence $gxg^{-1}\in (B^\phi_r)_+=W^c(r)W^u_b(r)$. Moreover, (\ref{again-3-id}) holds by \ref{comp-to-id}, whence $gxg^{-1}\in W^u_b(r)$ by Lemma~\ref{con-loc}. Thus \[ W^c(r)W^u_b(r)\times W^u_b(r)\to W^u_b(r)\subseteq\con_*^-(\alpha),\quad (g,x)\mapsto gxg^{-1} \] is ${\mathbb K}$-analytic. Finally, let $x\in \con_*^-(\alpha)$ be arbitrary and $(x_{-n})_{n\in{\mathbb N}_0}$ be an $\alpha$-regressive trajectory such that $x_0=x$ and $x_{-n}\to e$ as $n\to\infty$. There exists $N\in{\mathbb N}$ such that $x_{-n}\in B^\phi_r$ for all $n\geq N$.
There exists $t\in\,]0,r]$ such that \[ gx_{-n}g^{-1}(x_{-n})^{-1}\in B^\phi_r\quad \mbox{for all $n\in \{0,\ldots, N-1\}$ and $g\in B^\phi_t$.} \] Let $g\in W^c(t)W^u_b(t)=(B^\phi_t)_+$ and $(g_{-n})_{n\in{\mathbb N}_0}$ be an $\alpha$-regressive trajectory in $B^\phi_t$ such that $g_0=g$. For all $n\geq N$ we have $g_{-n},x_{-n}\in B^\phi_r$ and thus \[ g_{-n}x_{-n}(g_{-n})^{-1}(x_{-n})^{-1}\in B^\phi_r. \] Define $y_{-n}:= g_{-n}x_{-n}(g_{-n})^{-1}(x_{-n})^{-1}$ for $n\in{\mathbb N}_0$. For each $n\in{\mathbb N}_0$, the sequence $(y_{-n-m})_{m\in{\mathbb N}_0}$ is an $\alpha$-regressive trajectory for $y_{-n}$ in $B^\phi_r$ and thus $y_{-n}\in (B^\phi_r)_+$. Notably, $(y_{-n})_{n\in{\mathbb N}_0}$ is an $\alpha$-regressive trajectory in $(B^\phi_r)_+=W^c(r)W^u_b(r)$. Since $x_{-n}\to e$, using (\ref{again-3-id}), we deduce that \[ y_{-n}=(g_{-n}x_{-n}(g_{-n})^{-1})(x_{-n})^{-1}\to e \] as $n\to \infty$. Hence $gxg^{-1}x^{-1}=y_0\in W^u_b(r)$, by Lemma~\ref{con-loc}. As a consequence, the map \[ W^c(t)W^u_b(t)\to W^u_b(r)\subseteq\con_*^-(\alpha),\quad g\mapsto gxg^{-1}x^{-1} \] is ${\mathbb K}$-analytic and hence also the map $W^c(t)W^u_b(t)\to\con_*^-(\alpha)$, $g\mapsto gxg^{-1}$; all hypotheses of Lemma~\ref{act-ana} are verified. \end{proof} {\bf Proof of Theorem~\ref{thmC}.} (a) In the Lie group \[ H:=\con_*(\alpha)\times \lev_*(\alpha)\times (\con^-_*(\alpha)^{\op}), \] the subset $W^s_a(r)\times W^c(r)\times W^u_b(r)$ is an open identity neighbourhood and the restriction of $\pi$ to this set is a ${\mathbb K}$-analytic diffeomorphism onto the open identity neighbourhood $B^\phi_r=W^s_a(r)W^c(r)W^u_b(r)$ of~$G$, see~(\ref{localprod}). Note that \[ H\times G\to G,\quad ((x,y,z),g)\mapsto (x,y,z).g:=xygz \] is a ${\mathbb K}$-analytic left action of~$H$ on~$G$. Moreover, $H.e=\Omega$ is open in~$G$ and $\pi\colon H\to \Omega$ is the orbit map~$\sigma^e$.
Since $\pi$ is \'{e}tale at $e$ by the preceding, $\pi$ is \'{e}tale by Lemma~\ref{fact-2}\,(d).\\[2mm] (b) By \cite[Lemma~13.1\,(d)]{BGT}, we have $\parb(\alpha)=\con(\alpha)\lev(\alpha)$. Lemma~\ref{inclu} entails that the conjugation action of $\lev_*(\alpha)$ on $\con_*(\alpha)$ is ${\mathbb K}$-analytic. As the conjugation action is used to define the semi-direct product $\con_*(\alpha)\rtimes \lev_*(\alpha)$, the latter is a ${\mathbb K}$-analytic Lie group and the product map \[ p\colon \con_*(\alpha)\rtimes \lev_*(\alpha)\to \con_*(\alpha)\lev_*(\alpha)=\parb_*(\alpha),\;\; (x,y)\mapsto xy \] is a group homomorphism. Being the pointwise product of the projections onto $x$ and $y$, the map $p$ is ${\mathbb K}$-analytic. The restriction of~$p$ to a map \[ W^s_a(r)\times W^c(r)\to W^s_a(r)W^c(r) \] is a ${\mathbb K}$-analytic diffeomorphism onto the open subset $W^s_a(r)W^c(r)$ of $\parb_*(\alpha)$ (cf.\ (\ref{localprod})). As a consequence, the group homomorphism $p$ is \'{e}tale.\\[1mm] (c) By \cite[Lemma~13.1\,(e)]{BGT}, we have $\parb^-(\alpha)=\con^-(\alpha)\lev(\alpha)$. Lemma~\ref{inclu} entails that the conjugation action of $\lev_*(\alpha)$ on $\con_*^-(\alpha)$ is ${\mathbb K}$-analytic. As the conjugation action is used to define the semi-direct product $\con^-_*(\alpha)\rtimes \lev_*(\alpha)$, the latter is a ${\mathbb K}$-analytic Lie group and the product map \[ p\colon \con^-_*(\alpha)\rtimes \lev_*(\alpha)\to \con^-_*(\alpha)\lev_*(\alpha)=\parb^-_*(\alpha),\;\; (x,y)\mapsto xy \] is a group homomorphism. Being the pointwise product of the projections onto $x$ and $y$, the map $p$ is ${\mathbb K}$-analytic. We know that the map \[ q\colon W^c(r)\times W^u_b(r)\to W^c(r)W^u_b(r)=(B^\phi_r)_+,\quad (a,b)\mapsto ab \] is a ${\mathbb K}$-analytic diffeomorphism (cf.\ (\ref{localprod})). Since $W^c(r)$, $W^u_b(r)$, and $(B^\phi_r)_+$ are subgroups, we have \[ (B^\phi_r)_+=((B^\phi_r)_+)^{-1}=W^u_b(r)^{-1}W^c(r)^{-1}=W^u_b(r)W^c(r).
\] The restriction of $p$ to the open set $W^u_b(r)\times W^c(r)$ has open image \[ W^u_b(r)W^c(r)=(B^\phi_r)_+ \] and is given by $p(x,y)=q(y^{-1},x^{-1})^{-1}$, whence it is a ${\mathbb K}$-analytic diffeomorphism onto its open image. As a consequence, the group homomorphism $p$ is \'{e}tale. $\,\square$ \section{Proof of Theorem~\ref{thmD}} (a) If $\alpha$ is \'{e}tale, then $L(\alpha_s)=L(\alpha)|_{{\mathfrak g}_{<a}}$ is an automorphism of $L(\con_*(\alpha))={\mathfrak g}_{<a}$, whence $\alpha_s$ is injective on some $e$-neighbourhood, by the inverse function theorem. Hence $\alpha|_{W^s_a(t)}$ is injective for some $t\in\,]0,r]$. Since $\alpha(W^s_a(t))\subseteq W^s_a(t)$, we deduce that $(\alpha^n)|_{W^s_a(t)}=(\alpha|_{W^s_a(t)})^n$ is injective, whence $\ik(\alpha)\cap W^s_a(t)=\{e\}$. Since $W^s_a(t)$ is an open $e$-neighbourhood in $\con_*(\alpha)$, we deduce that the subgroup $\ik(\alpha)\subseteq \con_*(\alpha)$ is discrete. If $\car({\mathbb K})=0$ and $\alpha$ is not \'{e}tale, then ${\mathfrak g}_{<a}\supseteq \ker(L(\alpha))\not=\{0\}$, whence $L(\alpha_s)=L(\alpha)|_{{\mathfrak g}_{<a}}$ is not injective. Since $\car({\mathbb K})=0$, we have $L(\ker(\alpha_s))=\ker L(\alpha_s)$ (compare 4) in \S2 of \cite[Part~II, Chapter~V]{Ser} and Corollary~1 in \cite[Part~II, Chapter~III, \S10]{Ser}). As a consequence, $\ker(\alpha_s)$ is not discrete, whence also the subgroup $\ik(\alpha)$ of $\con_*(\alpha)$ which contains $\ker(\alpha_s)$ is not discrete.\\[2mm] (b) We give $Q:=\con_*(\alpha)/\ik(\alpha)$ the unique Lie group structure turning the canonical quotient map $q\colon \con_*(\alpha)\to Q$ into an \'{e}tale ${\mathbb K}$-analytic map. Then $\overline{\alpha_s}\colon Q\to Q$, $g\ik(\alpha)\mapsto\alpha(g)\ik(\alpha)$ is a well-defined, ${\mathbb K}$-analytic endomorphism which is contractive as so is $\alpha_s$.
Moreover, $\overline{\alpha_s}$ is injective and \'{e}tale, whence $\overline{\alpha_s}(Q)$ is an open subgroup of~$Q$ and $\overline{\alpha_s}\colon Q\to\overline{\alpha_s}(Q)$ a ${\mathbb K}$-analytic isomorphism. We define $H_n:=Q$ for $n\in{\mathbb N}$ and $\phi_{n,m}\colon H_m\to H_n$ via $\phi_{n,m}:=(\overline{\alpha_s})^{n-m}$ for all $n\geq m$ in~${\mathbb N}$. Then $((H_n)_{n\in{\mathbb N}},(\phi_{n,m})_{n\geq m})$ is a directed system of ${\mathbb K}$-analytic Lie groups and ${\mathbb K}$-analytic group homomorphisms which are \'{e}tale embeddings, whence the direct limit group \[ H:={\displaystyle \lim_{\longrightarrow}}\,H_n \] can be given a ${\mathbb K}$-analytic manifold structure making it a ${\mathbb K}$-analytic Lie group and each limit map $\phi_n\colon H_n\to H$ an \'{e}tale embedding. Then also $H={\displaystyle \lim_{\longrightarrow}}\, H_{n+1}$; if we let $\beta_n\colon H_{n+1}\to H_n$ be the map $\beta_n:=\id_Q$, we obtain a group homomorphism \[ \beta:={\displaystyle \lim_{\longrightarrow}}\, \beta_n\colon {\displaystyle \lim_{\longrightarrow}}\, H_{n+1}\to {\displaystyle \lim_{\longrightarrow}}\, H_n \] determined by $\beta\circ \phi_{n+1}=\phi_n$. As the images of the $\phi_{n+1}$ form an open cover of~$H$ and $\beta\circ\phi_{n+1}$ is ${\mathbb K}$-analytic and \'{e}tale, also $\beta$ is ${\mathbb K}$-analytic and \'{e}tale. Moreover, $\beta$ is surjective and injective and hence a ${\mathbb K}$-analytic automorphism of~$H$. If we identify $Q$ with an open subgroup of~$H$ by means of $\phi_1$, then $\beta(\phi_1(x))=\beta(\phi_2(\phi_{2,1}(x)))= \phi_1(\phi_{2,1}(x))=\phi_1(\overline{\alpha_s}(x))$ for $x\in Q$ shows that $\beta$ restricts to $\overline{\alpha_s}$ on~$Q$. Since $\beta$ is a contractive ${\mathbb K}$-analytic automorphism, $H$ is nilpotent (see \cite{CON}). Hence also~$Q$ is nilpotent. Since $\ik(\alpha)$ is discrete, there exists a compact open subgroup $U\subseteq \con_*(\alpha)$ such that $U\cap \ik(\alpha)=\{e\}$.
Then $q|_U$ is injective, whence $U$ is nilpotent.\\[2mm] (c) Since $\ker(\alpha_s)\subseteq \ker((\alpha_s)^2)\subseteq\cdots$ and $\car({\mathbb K})=0$, the union $\ik(\alpha)=\bigcup_{n\in{\mathbb N}}\ker((\alpha_s)^n)$ is closed in~$G$ (see \cite[Proposition~4.20]{AUT}). By a Baire argument (or \cite[Proposition~1.19]{AUT}), there exists $n\in{\mathbb N}$ such that $\ker((\alpha_s)^n)$ is open in $\ik(\alpha)$. Since $\ker((\alpha_s)^n)$ is a Lie subgroup of $\con_*(\alpha)$ (as a special case of Theorems 2 and 3 in \cite[Part~II, Chapter~IV, \S5]{Ser}), we deduce that also $\ik(\alpha)$ is a Lie subgroup. Using Theorem~1 from loc.\,cit., we get a unique ${\mathbb K}$-analytic manifold structure on $Q:=\con_*(\alpha)/\ik(\alpha)$ turning the canonical quotient map $q\colon \con_*(\alpha)\to Q$ into a submersion; by Remark~2) following the cited theorem, the latter manifold structure makes $Q$ a ${\mathbb K}$-analytic Lie group. Then $\overline{\alpha_s}\colon Q\to Q$, $g\ik(\alpha)\mapsto\alpha(g)\ik(\alpha)$ is a well-defined endomorphism which is ${\mathbb K}$-analytic as $q$ is a submersion and $\overline{\alpha_s}\circ q=q\circ \alpha_s$ is ${\mathbb K}$-analytic. Moreover, $\overline{\alpha_s}$ is contractive as so is $\alpha_s$. In addition, $\overline{\alpha_s}$ is injective and hence \'{e}tale, as $\car({\mathbb K})=0$ (so that we can use the naturality of the exponential function to see that $L(\overline{\alpha_s})$ is injective and hence an automorphism). Hence $\overline{\alpha_s}(Q)$ is an open subgroup of~$Q$ and $\overline{\alpha_s}\colon Q\to\overline{\alpha_s}(Q)$ a ${\mathbb K}$-analytic isomorphism. We define $H_n:=Q$ for $n\in{\mathbb N}$ and $\phi_{n,m}\colon H_m\to H_n$ via $\phi_{n,m}:=(\overline{\alpha_s})^{n-m}$ for all $n\geq m$ in~${\mathbb N}$.
As in~(b), we obtain a ${\mathbb K}$-analytic Lie group structure on $H:={\displaystyle \lim_{\longrightarrow}}\,H_n$ and a contractive ${\mathbb K}$-analytic automorphism of~$H$ which extends $\overline{\alpha_s}$. As $H$ admits a contractive ${\mathbb K}$-analytic automorphism, it is nilpotent.\\[2mm] (d) Recall that $W^u_b$ is an open submanifold of $\con^-_*(\alpha)$ and $\alpha(W^u_b(r))\subseteq W^u_b$. For each $t\in \,]0,r]$, the group homomorphism $\alpha_u|_{W^u_b(t)}$ is an injective immersion, whence $\alpha_u(W^u_b(t))$ is an open subgroup of $W^u_b$ (and thus of $\con^-_*(\alpha)$) and $\alpha_u|_{W^u_b(t)}$ is a ${\mathbb K}$-analytic diffeomorphism onto this open subgroup of $W^u_b$ and hence of $\con^-_*(\alpha)$. As a consequence, $\alpha_u$ is \'{e}tale. Since $L(\alpha_u)=L(\alpha)|_{{\mathfrak g}_{>b}}$, the Ultrametric Inverse Function Theorem (see \cite[Lemma~6.1\,(b)]{IMP}) provides $\theta\in \,]0,r]$ with $b\theta\leq R$ such that \[ \alpha_u(W^u_b(t))\supseteq W^u_b(bt)\quad \mbox{for all $t\in \,]0,\theta]$,} \] exploiting Remark~\ref{forifthm}. Thus \begin{equation}\label{forinve} \alpha_u(W^u_b(t/b))\supseteq W^u_b(t)\quad\mbox{for all $t\leq b\theta$.} \end{equation} Let $S:=(\alpha_u|_{W^u_b(\theta)})^{-1}(W^u_b(b \theta))$. Then $\alpha_u(S)=W^u_b(b\theta)$, which is an open subgroup of $\con^-_*(\alpha)$, and $\alpha_u|_S\colon S\to \alpha_u(S)$ is a ${\mathbb K}$-analytic isomorphism. Moreover, \[ (\alpha_u|_S)^{-1}\colon \alpha_u(S)\to S\subseteq \alpha_u(S) \] maps $W^u_b(t)$ into $W^u_b(t/b)$ for each $t\in\,]0,b\theta]$, by (\ref{forinve}), whence \[ ((\alpha_u|_S)^{-1})^n (W^u_b(b\theta))\subseteq W^u_b(\theta/b^{n-1}) \] for each $n\in{\mathbb N}$. As a consequence, $(\alpha_u|_S)^{-1}$ is a contractive endomorphism of $\alpha_u(S)$. We now define $H_n:=\alpha_u(S)$ for each $n\in{\mathbb N}$.
Using the bonding maps $\phi_{n,m}:=((\alpha_u|_S)^{-1})^{n-m}\colon H_m\to H_n$, we can form the direct limit group $H:={\displaystyle \lim_{\longrightarrow}}\,H_n$ and give it a ${\mathbb K}$-analytic manifold structure making it a Lie group and turning each limit map $\phi_n\colon H_n\to H$ into an injective, \'{e}tale, ${\mathbb K}$-analytic group homomorphism. As in the proofs of (b) and (c), we obtain a ${\mathbb K}$-analytic automorphism $\beta$ of~$H$ which extends $(\alpha_u|_S)^{-1}\colon \alpha_u(S)\to S\subseteq \alpha_u(S)$ and is contractive. Notably, $H$ is nilpotent. $\,\square$ \section{Proof of Proposition~\ref{thmE}} (a) Since $\alpha(W^c(r))=W^c(r)$ and $\alpha|_{W^c(r)}$ is injective (see \ref{nunum}), we see that $(\alpha^n)|_{W^c(r)} =(\alpha|_{W^c(r)})^n$ is injective for each $n\in{\mathbb N}$, entailing that $\ik(\alpha)\cap W^c(r)=\{e\}$. The assertion follows as $W^c(r)$ is an open $e$-neighbourhood in $\lev_*(\alpha)$.\\[2mm] (b) As the product map $W^c\times W^u_b\to W^cW^u_b$, $(x,y)\mapsto xy$ is a bijection, $\alpha(xy)=\alpha(x)\alpha(y)$ holds for $(x,y)\in W^c(r)\times W^u_b(r)$ and the restrictions of $\alpha$ to mappings $W^c(r)\to W^c(r)\subseteq W^c$ and $W^u_b(r)\to W^u_b$ are injective (see \ref{subu}), we deduce that $\ker(\alpha)\cap W^c(r)W^u_b(r)=\{e\}$. It remains to recall that $W^c(r)W^u_b(r)$ is an open $e$-neighbourhood in $\parb^-_*(\alpha)$.\\[2mm] (c) We know from Theorem~\ref{thmD}\,(d) that $\alpha_u$ is \'{e}tale. Hence $\ker(\alpha_u)=\ker(\alpha)\cap\con^-_*(\alpha)$ is discrete in $\con^-_*(\alpha)$. $\,\square$
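The interplay between $\con_*(\alpha)$, $\lev_*(\alpha)$ and $\con^-_*(\alpha)$ in the results above can be illustrated in the simplest linear situation. The following Python sketch is an editorial illustration only (it is not part of the proofs, and all names in it are ours): it takes $\alpha=\mathrm{diag}(p,u,p^{-1})$ on ${\mathbb Q}_p^3$ with $|u|_p=1$ and classifies the coordinate directions by the $p$-adic size of the eigenvalues, mirroring the decomposition into contraction, level and anti-contraction directions.

```python
# Toy linear example (our own illustration): alpha = diag(p, u, 1/p) acting
# on (Q_p^3, +).  A direction lies in con(alpha) if |eigenvalue|_p < 1,
# in lev(alpha) if |eigenvalue|_p = 1, and in con^-(alpha) if it is > 1.
from fractions import Fraction

def padic_abs(x: Fraction, p: int) -> float:
    """|x|_p = p^{-v_p(x)} for a rational x (with |0|_p = 0)."""
    if x == 0:
        return 0.0
    v, num, den = 0, x.numerator, x.denominator
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return float(p) ** (-v)

p = 3
eigenvalues = [Fraction(p), Fraction(2), Fraction(1, p)]  # |.|_3 = 1/3, 1, 3

def classify(lam: Fraction) -> str:
    a = padic_abs(lam, p)
    return "con" if a < 1 else ("lev" if a == 1 else "con^-")

print([classify(lam) for lam in eigenvalues])  # ['con', 'lev', 'con^-']

# alpha^n contracts the 'con' direction: |alpha^n(x)|_p -> 0 as n grows.
x = Fraction(7)
orbit = [padic_abs(eigenvalues[0] ** n * x, p) for n in range(5)]
assert all(orbit[i + 1] < orbit[i] for i in range(4))
```

In this linear toy, $\parb_*(\alpha)$ corresponds to the span of the first two directions, matching the decomposition $\parb_*(\alpha)=\con_*(\alpha)\lev_*(\alpha)$ used in the proof of Theorem~\ref{thmC}\,(b).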
\section{\@startsection{section}{1}{\z@}{3.5ex plus 1ex minus .2ex}{2.3ex plus .2ex}{\large\bf}} \def\arabic{section}.{\arabic{section}.} \def\Roman{section}-\Alph{subsection}.{\Roman{section}-\Alph{subsection}.} \def#1}{} \def\FERMILABPub#1{\def#1}{#1}} \def\ps@headings{\def\@oddfoot{}\def\@evenfoot{} \def\@oddhead{\hbox{}\hfill \makebox[.5\textwidth]{\raggedright\ignorespaces --\thepage{}-- \hfill }} \def\@evenhead{\@oddhead} \def\subsectionmark##1{\markboth{##1}{}} } \ps@headings \catcode`\@=12 \relax \def\r#1{\ignorespaces $^{#1}$} \def\figcap{\section*{Figure Captions\markboth {FIGURECAPTIONS}{FIGURECAPTIONS}}\list {Fig. \arabic{enumi}:\hfill}{\settowidth\labelwidth{Fig. 999:} \leftmargin\labelwidth \advance\leftmargin\labelsep\usecounter{enumi}}} \let\endfigcap\endlist \relax \def\tablecap{\section*{Table Captions\markboth {TABLECAPTIONS}{TABLECAPTIONS}}\list {Table \arabic{enumi}:\hfill}{\settowidth\labelwidth{Table 999:} \leftmargin\labelwidth \advance\leftmargin\labelsep\usecounter{enumi}}} \let\endtablecap\endlist \relax \def\reflist{\section*{References\markboth {REFLIST}{REFLIST}}\list {[\arabic{enumi}]\hfill}{\settowidth\labelwidth{[999]} \leftmargin\labelwidth \advance\leftmargin\labelsep\usecounter{enumi}}} \let\endreflist\endlist \relax \catcode`\@=11 \def\marginnote#1{} \newcount\hour \newcount\minute \newtoks\amorpm \hour=\time\divide\hour by60 \minute=\time{\multiply\hour by60 \global\advance\minute by- \hour} \edef\standardtime{{\ifnum\hour<12 \global\amorpm={am}% \else\global\amorpm={pm}\advance\hour by-12 \fi \ifnum\hour=0 \hour=12 \fi \number\hour:\ifnum\minute<100\fi\number\minute\the\amorpm}} \edef\militarytime{\number\hour:\ifnum\minute<100\fi\number\minute} \def\draftlabel#1{{\@bsphack\if@filesw {\let\thepage\relax \xdef\@gtempa{\write\@auxout{\string \newlabel{#1}{{\@currentlabel}{\thepage}}}}}\@gtempa \if@nobreak \ifvmode\nobreak\fi\fi\fi\@esphack} \gdef\@eqnlabel{#1}} \def\@eqnlabel{} \def\@vacuum{} 
\def\draftmarginnote#1{\marginpar{\raggedright\scriptsize\tt#1}} \def\draft{\oddsidemargin -.5truein \def\@oddfoot{\sl preliminary draft \hfil \rm\thepage\hfil\sl\today\quad\militarytime} \let\@evenfoot\@oddfoot \overfullrule 3pt \let\label=\draftlabel \let\marginnote=\draftmarginnote \def\@eqnnum{(\arabic{section}.\arabic{equation})\rlap{\kern\marginparsep\tt\@eqnlabel}% \global\let\@eqnlabel\@vacuum} } \def\preprint{\twocolumn\sloppy\flushbottom\parindent 1em \leftmargini 2em\leftmarginv .5em\leftmarginvi .5em \oddsidemargin -.5in \evensidemargin -.5in \columnsep 15mm \footheight 0pt \textwidth 250mmin \topmargin -.4in \headheight 12pt \topskip .4in \textheight 175mm \footskip 0pt \def\@oddhead{\thepage\hfil\addtocounter{page}{1}\thepage} \let\@evenhead\@oddhead \def\@oddfoot{} \def\@evenfoot{} } \def\titlepage{\@restonecolfalse\if@twocolumn\@restonecoltrue\onecolumn \else \newpage \fi \thispagestyle{empty}\c@page\z@ \def\arabic{footnote}{\fnsymbol{footnote}} } \def\endtitlepage{\if@restonecol\twocolumn \else \fi \def\arabic{footnote}{\arabic{footnote}} \setcounter{footnote}{0}} \catcode`@=12 \relax \def#1}{} \def\FERMILABPub#1{\def#1}{#1}} \def\ps@headings{\def\@oddfoot{}\def\@evenfoot{} \def\@oddhead{\hbox{}\hfill \makebox[.5\textwidth]{\raggedright\ignorespaces --\thepage{}-- \hfill }} \def\@evenhead{\@oddhead} \def\subsectionmark##1{\markboth{##1}{}} } \ps@headings \relax \def\firstpage#1#2#3#4#5#6{ \begin{document} \def\beq{\begin{equation}} \def\eeq{\end{equation}} \def\bea{\begin{eqnarray}} \def\eea{\end{eqnarray}} \def\bq{\begin{quote}} \def\end{quote}{\end{quote}} \def\ra{\rightarrow} \def\lra{\leftrightarrow} \def\upsilon{\upsilon} \def\bq{\begin{quote}} \def\end{quote}{\end{quote}} \def\ra{\rightarrow} \def\underline{\underline} \def\overline{\overline} \newcommand{Commun.\ Math.\ Phys.~}{Commun.\ Math.\ Phys.~} \newcommand{Phys.\ Rev.\ Lett.~}{Phys.\ Rev.\ Lett.~} \newcommand{Phys.\ Rev.\ D~}{Phys.\ Rev.\ D~} \newcommand{Phys.\ Lett.\ B~}{Phys.\ 
Lett.\ B~} \newcommand{\bar{\imath}}{\bar{\imath}} \newcommand{\bar{\jmath}}{\bar{\jmath}} \newcommand{Nucl.\ Phys.\ B~}{Nucl.\ Phys.\ B~} \newcommand{{\cal F}}{{\cal F}} \renewcommand{\L}{{\cal L}} \newcommand{{\cal A}}{{\cal A}} \def\frac{15}{4}{\frac{15}{4}} \def\frac{15}{3}{\frac{15}{3}} \def\frac{3}{2}{\frac{3}{2}} \def\frac{25}{4}{\frac{25}{4}} \begin{titlepage} \nopagebreak \title{\begin{flushright} \vspace*{-1.8in} {\normalsize CERN-TH/97-311}\\[-9mm] {\normalsize IOA--TH.97-15}\\[-9mm] {\normalsize UUTP-22/97}\\[-9mm] {\normalsize hep-th/9711044}\\[4mm] \end{flushright} \vfill {#3}} \author{\large #4 \\[1.0cm] #5} \maketitle \vskip -7mm \nopagebreak \begin{abstract} {\noindent #6} \end{abstract} \vfill \begin{flushleft} \rule{16.1cm}{0.2mm}\\[-3mm] $^{\dagger}${\small Supported by the European Community under Human Capital and Mobility Grant No ERBCHBICT960773}\\ CERN-TH/97-331\\ October 1997 \end{flushleft} \thispagestyle{empty} \end{titlepage}} \def\stackrel{<}{{}_\sim}{\stackrel{<}{{}_\sim}} \def\stackrel{>}{{}_\sim}{\stackrel{>}{{}_\sim}} \date{} \firstpage{3118}{IC/95/34} {\large\bf On The Instanton Solutions Of The Self-Dual Membrane\\ In Various Dimensions} {E.G. Floratos$^{\,a,b}$, G.K. Leontaris$^{\,c,d}$, A.P. Polychronakos$^{\,c,e}$ and R. Tzani$^{\,c\dagger}$ {\normalsize\sl $^a$ NRCS Demokritos, Athens, Greece\\[-3mm] \normalsize\sl $^b$ Physics Department, University of Iraklion, Crete, Greece.\\[-3mm] \normalsize\sl $^c$Theoretical Physics Division, Ioannina University, GR-45110 Ioannina, Greece\\[-3mm] \normalsize\sl $^d$CERN, Theory Division, 1211 Geneva 23, Switzerland\\[-3mm] \normalsize\sl $^e$Theoretical Physics Department, Uppsala University, S-751 08 Uppsala, Sweden.} {We present some methods of determining explicit solutions for self-dual supermembranes in $4+1$ and $8+1$ dimensions with spherical or toroidal topology. 
For configurations of axial symmetry, the continuous $SU(\infty)$ Toda equation turns out to play a central role, and a specific method of determining all the periodic solutions is suggested. A number of examples are studied in detail. } \newpage Nowadays, a revived interest in membrane theory~\cite{1} has been spurred by the fact that M-theory, which is considered as the leading candidate theory for explaining the non-perturbative net of string dualities, contains membranes and their dual five-branes in eleven dimensions~\cite{2}. The main activity in recent literature has been the classification of the BPS spectra of various string compactifications, which M-theory is presumed to organize in a compact and intuitive way. Among the BPS states, there is an important class made up of the Euclidean solitons (instantons). This sector plays a role in the understanding of the non-perturbative vacuum structure of string compactifications. Some years ago, we introduced, at the level of the bosonic membranes, a specific self-duality, which in modern language is nothing but S-duality for Euclidean instantons~\cite{fl1}. The self-dual membranes solve $SU(N)$ Nahm's equations for a specific $N\ra \infty $ limit where $SU(N)$ becomes the area-preserving diffeomorphism group on the surface of the membrane, a symmetry that exists in the light-cone quantization of the membranes. Recently, extensions of the self-duality of membranes in 7,8,9 dimensions have been introduced~\cite{cfz,fl}. In the present work, we develop new methods for solving the self-duality equations in three and seven dimensions~\cite{fair}. In the case of toroidal compactifications, the role of string excitations of self-dual membranes becomes visible and we exhibit explicit examples where analytic solutions are found.
In three, and also in seven dimensions, and for the case of cylindrical symmetry, the self-duality equations reduce to continuous Toda equations which have been studied in order to determine self-dual Euclidean solutions of Einstein equations~\cite{Toda}. In the present work, we provide a first-order non-linear system, the axially-symmetric three-dimensional self-duality equations, which at the same time provides a Lax pair of the axially symmetric Toda equation. Inverting this non-linear system, we find a completely integrable linear system, which we explicitly solve and, thus, we present a method to determine all the solutions of the axially symmetric Toda equations~\cite{RW,BS}. We start our analysis by reviewing the salient features of the theory. In ref.~\cite{FIT} it was pointed out that in the large-$N$ limit, $SU(N)$ YM theories have, at the classical level, a simple geometrical structure with the $SU(N)$ matrix potentials $A_{\mu}(X)$ replaced by c-number functions of two additional coordinates $\theta, \phi$ of an internal sphere $S^2$ at every space-time point, while the $SU(N)$ symmetry is replaced by the infinite-dimensional algebra of area-preserving diffeomorphisms of the sphere $S^2$, called $SDiff (S^2)$. The $SU(N)$ fields are Hermitian $N\times N$ matrices which in the large-$N$ limit are written in terms of the spherical harmonics on $S^2$, while commutators are replaced by the Poisson brackets on $S^2$: \begin{eqnarray} [A_{\mu},A_{\nu}]&\ra&\{A_{\mu},A_{\nu}\}\nonumber\\ &=&\frac{\partial A_{\mu}}{\partial\sigma_1} \frac{\partial A_{\nu}}{\partial\sigma_2} -\frac{\partial A_{\nu}}{\partial\sigma_1} \frac{\partial A_{\mu}}{\partial\sigma_2}. \label{6} \end{eqnarray} In three dimensions the self-duality relation is defined by the equation\footnote{The anti-self-dual case $E_i=-B_i$ can be treated similarly.} \begin{equation} E_i= B_i, \end{equation} where $E_i$ and $B_i$ are the electric and the magnetic $SU(\infty)$ colour fields.
Since \begin{equation} E_i = \frac{\partial A_i}{\partial t}, \quad i=1,2,3 \label{12} \end{equation} and \begin{equation} B_i=\frac{1}{2}\varepsilon_{ijk}\{A_j,A_k\}, \label{13b} \end{equation} where $\varepsilon_{ijk}$ is the antisymmetric tensor in three dimensions, one obtains the following equations \begin{equation} \dot{A}_i=\frac{1}{2}\varepsilon_{ijk}\{A_j,A_k\}, \quad i,j,k=1,2,3. \label{16} \end{equation} These equations solve the Gauss constraints and the second-order Euclidean equations of motion for the bosonic part of the supermembrane (fermionic DOF set to zero) in the light-cone gauge~\cite{rev}. In what follows we discuss methods of solution of the above three-dimensional system. It has been suggested in ref.~\cite{fl1} that one can use quaternions to transform the above equations into a matrix differential one. We define the matrix \beq A = A_i \sigma_i, \, i=1,2,3, \label{AM} \eeq where $\sigma_i$ are the standard Pauli matrices. The matrix function $A$ satisfies the equation \beq \dot{A}= -\frac{\imath}2\{A,A\}. \label{ME} \eeq In the case of the sphere, which has been analysed in ref.~\cite{fl1}, the Darboux coordinates are $\xi_1=\cos\theta$, $\xi_2=\phi$. The infinite-dimensional group $SDiff(S^2)$ has $SO(3)$ as its only finite-dimensional subgroup, generated by the three functions $e_1= \cos\phi \sin\theta$, $e_2=\sin\phi \sin\theta$, $e_3=\cos\theta$: \beq \{e_i,e_j\} = - \varepsilon_{ijk} e_k. \label{PBe} \eeq Looking for factorized $SO(3)$-symmetric solutions, we set $A= T_i(t)e_i$, which implies \beq \dot{T}_i = \frac{\imath}4 \varepsilon_{ijk}[T_j,T_k], \label{nahm} \eeq that is, the Nahm equation for an $SU(2)$ monopole of magnetic charge $k=2$~\cite{Nahm}. Each choice of solution of the Nahm equations for magnetic charge $k=2$ (an eight-dimensional moduli space) thus yields a solution of the self-duality equations.
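The $SO(3)$ bracket relation (\ref{PBe}) can be checked directly in the Darboux coordinates $\xi_1=\cos\theta$, $\xi_2=\phi$. The following sympy verification is our own editorial sketch (the variable names are ours):

```python
# Verify {e_i, e_j} = -eps_{ijk} e_k for e_1 = cos(phi) sin(theta),
# e_2 = sin(phi) sin(theta), e_3 = cos(theta), with the Poisson bracket
# taken in the Darboux coordinates xi1 = cos(theta), xi2 = phi.
import sympy as sp

xi1, xi2 = sp.symbols('xi1 xi2')
sin_theta = sp.sqrt(1 - xi1**2)   # sin(theta) expressed via xi1 = cos(theta)
e = [sp.cos(xi2) * sin_theta, sp.sin(xi2) * sin_theta, xi1]

def pb(f, g):
    """Poisson bracket on S^2 in the canonical coordinates (xi1, xi2)."""
    return sp.diff(f, xi1) * sp.diff(g, xi2) - sp.diff(g, xi1) * sp.diff(f, xi2)

for i in range(3):
    for j in range(3):
        rhs = -sum(sp.LeviCivita(i + 1, j + 1, k + 1) * e[k] for k in range(3))
        assert sp.simplify(pb(e[i], e[j]) - rhs) == 0
```

The same bracket is the one used in (\ref{6}), with $(\sigma_1,\sigma_2)$ playing the role of $(\xi_1,\xi_2)$ on the sphere.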
This system of equations is known to be integrable, and particular solutions for specific boundary conditions at $t=0$, $t=2$ (simple poles with $SU(2)$ matrices as residues) can be expressed in terms of elliptic functions~\cite{sut}. In ref.~\cite{fl1}, zero total angular momentum (axially symmetric) solutions of the system (\ref{16}) have been explicitly determined in terms of the functions $e_i$ and the solutions of the $SU(2)$ Toda equation. In the following we will show that the requirement of axial symmetry on the above system leads to a first-order system for two functions, which plays the role of the Lax pair for the continuous axially symmetric Toda equation. Indeed, the ansatz \bea A_1 = R(\sigma_1,t) \cos \sigma_2 ,& A_2 = R(\sigma_1,t) \sin \sigma_2 ,& A_3 = z(\sigma_1,t) \eea leads to the system \bea \dot{z}&=& R R'\label{Zeq}\\ \dot{R}&=& - R z'\label{Req} \eea where the prime denotes differentiation with respect to $\sigma_1$ (i.e.\ $\frac{\partial}{\partial \sigma_1}$)\footnote{ We observe that, if we replace $\sigma_2$ by $n \sigma_2$, $n$ integer, then this implies that $t\ra n t$ in the original solution.}. Combining equations (\ref{Zeq}) and (\ref{Req}) we obtain the axially symmetric continuous Toda equation \beq \frac{d^2\Psi}{d t^2}+ \frac{d^2{e^{\Psi}}}{d \sigma_1^2}=0, \label{,.} \eeq where $R^2 = e^{\Psi}$. Solutions of this equation have been discussed in the literature in connection with the self-dual 4d Einstein metrics with rotational and axial Killing vectors~\cite{Toda,RW}. Here, though, we note that $\sigma_1$ runs in a compact interval: ($0, 2\pi$) for the torus and ($-1,1$) for the sphere. At this point, we want to provide a specific example of a solution with separation of variables of the Toda equation, in the case of spherical topology ($\sigma_1 =\cos\theta$, $\sigma_2=\phi$).
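Before turning to that example, the reduction just performed can be checked symbolically. The sketch below (our own sympy verification, not part of the paper) confirms that the axially symmetric ansatz turns (\ref{16}) into (\ref{Zeq})--(\ref{Req}), and that these two first-order equations imply the Toda equation for $\Psi=\ln R^2$:

```python
# Check that A1 = R cos(s2), A2 = R sin(s2), A3 = z with R = R(s1,t),
# z = z(s1,t) reduces dA_i/dt = (1/2) eps_{ijk} {A_j, A_k} to
# z_t = R R_{s1} and R_t = -R z_{s1}.
import sympy as sp

s1, s2, t = sp.symbols('sigma1 sigma2 t')
R = sp.Function('R')(s1, t)
z = sp.Function('z')(s1, t)
A = [R * sp.cos(s2), R * sp.sin(s2), z]

def pb(f, g):
    return sp.diff(f, s1) * sp.diff(g, s2) - sp.diff(g, s1) * sp.diff(f, s2)

first_order = {sp.diff(z, t): R * sp.diff(R, s1),    # z_t = R R'
               sp.diff(R, t): -R * sp.diff(z, s1)}   # R_t = -R z'
for i in range(3):
    sd = sp.diff(A[i], t) - sp.Rational(1, 2) * sum(
        sp.LeviCivita(i + 1, j + 1, k + 1) * pb(A[j], A[k])
        for j in range(3) for k in range(3))
    assert sp.simplify(sd.subs(first_order)) == 0

# Toda equation: Psi_t = 2 R_t / R = -2 z_{s1}, hence
# Psi_tt = -2 (z_t)_{s1} = -2 (R R_{s1})_{s1}, while (e^Psi)_{s1 s1}
# = (R^2)_{s1 s1}; the two contributions cancel identically.
assert sp.simplify(-2 * sp.diff(R * sp.diff(R, s1), s1)
                   + sp.diff(R**2, s1, 2)) == 0
```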
Separation of variables $R(\sigma_1,t)= R_1(\sigma_1)R_2(t)$ corresponds to $\Psi (\sigma_1,t)= \Theta (\sigma_1)+ T(t)$, which leads to \bea \frac{d^2T}{d t^2} - k e^T =0\label{T1}\\ \frac{d^2e^{\Theta}}{d \sigma_1^2} +k =0\label{The1} \eea Multiplying (\ref{T1}) by $\dot{T}$ and integrating, we obtain \beq \frac{dT}{dt} = \sqrt{2 k}\left( e^T +\frac{\nu}{k}\right)^{1/2} \label{T2}, \eeq where $\nu$ is a new constant. Equation (\ref{T2}) is easily solved, making use of the transformation $e^T = \frac{\nu}{k}\phi^{-2}$, the final result being \bea R(\theta,t)&=& \kappa\frac{\sin\theta}{\sinh\left[\kappa (t_0-t)\right]}\\ z(\theta,t)&=& \kappa \coth\left[\kappa (t_0-t)\right]\cos\theta, \eea where $\kappa$ is a new constant. Interestingly, this solution coincides with that of~\cite{fl1} representing axisymmetric ellipsoids, which was derived from the $SU(2)$ Toda equation (with respect to the time $t$). We now exhibit a variation of the method of ref.~\cite{ward}, where, by inverting the non-linear system (\ref{Zeq}), (\ref{Req}), we construct a linear one and determine all solutions. Indeed, by going from the pair of variables ($R,z$) to ($S,T$), which we take to define the inverse of the mapping $(\sigma_1,t)\ra (R,z)$, we find \bea \frac{\partial S }{\partial u} - \frac{\partial T}{\partial v }& =& 0\\ \frac{\partial S}{\partial v} + u \frac{\partial T }{\partial u}&=&0\label{inv}, \eea where $u=R^2$ and $v=2 z$. This system is linear and we can easily separate the variables $u$ and $v$, $S = S_1(u) S_2(v),\,\, T = T_1(u) T_2(v)$. We introduce two constants of separation, \bea \begin{array}{cc} {{\partial S_1 }/{\partial u}=\lambda {T_1}},& -u {\partial T_1}/{\partial u}=\mu S_1 \\ {{\partial T_2}/{\partial v}=\lambda {S_2}},& {\partial S_2 }/{\partial v}=\mu T_2. \end{array} \label{sep} \eea We see that $S_2$ and $T_2$ are trigonometric (hyperbolic) functions of $v$, depending on whether the sign of the product $\lambda\cdot\mu$ is negative (positive).
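The separated solution above can be tested against the Toda equation itself, which is insensitive to the direction of time, so the check is independent of the sign convention inside $t_0-t$. A sympy sketch of this verification (ours, not part of the paper):

```python
# Verify that e^Psi = R^2 = kappa^2 (1 - s1^2) / sinh(kappa (t0 - t))^2,
# with s1 = cos(theta), satisfies Psi_tt + (e^Psi)_{s1 s1} = 0.
import sympy as sp

s1, t, kappa, t0 = sp.symbols('sigma1 t kappa t_0', positive=True)
R2 = kappa**2 * (1 - s1**2) / sp.sinh(kappa * (t0 - t))**2   # e^Psi
Psi = sp.log(R2)
toda = sp.diff(Psi, t, 2) + sp.diff(R2, s1, 2)
assert sp.simplify(toda) == 0
```

Here $\Psi_{tt}=2\kappa^2/\sinh^2[\kappa(t_0-t)]$ exactly cancels $(e^\Psi)_{\sigma_1\sigma_1}=-2\kappa^2/\sinh^2[\kappa(t_0-t)]$, as the assertion confirms.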
Also, from the first-order equations for $S_1$ and $T_1$, assuming analyticity around $u=0$, we obtain unique solutions $T_1\propto J_0(k_0 R)$ and $S_1\propto R J_1(k_0 R)$, with $k_0 = \sqrt{\lambda\cdot\mu}$. By appropriate linear combinations of the solutions $S_1,T_1$ and $S_2,T_2$, we can determine functions $S$ and $T$ which, by inversion, give functions $R,z$, periodic in $\sigma_1$. As a demonstration, consider the solution \bea S = \imath A \cos (k_0 z) R J_1 (\imath k_0 R) + k_1 z \\ T = A \sin (k_0 z) J_0 (\imath k_0 R) - k_1 \ln R \eea where $A,k_1,k_0$ are real constants. If space is compactified in the $z$-direction with length $L$, and we want $\sigma_1$ to range from 0 to $2\pi$, we choose $k_1 =2\pi/L$ and $k_0 =nk_1 $ for some integer $n$. The above then represents a membrane with $n$ branches extending to $R=\infty$, which, at some critical time, collides with itself and separates into a finite piece with toroidal topology, exhibiting $n$ ripples within the period $L$, and $n$ infinite pieces that fly away. We leave the question of explicit constructions for future work. We should note, though, that the linearization method of ref.~\cite{ward} should be examined in more detail in order to construct other interesting examples. Finally, we discuss two sorts of toroidal compactification in three dimensions, where double compactification yields self-dual string solutions. First, when the three-dimensional space topology is $R^2\times S^1$, we doubly compactify the membrane~\cite{1}. We choose as an example $A_3=n \sigma_2$ and $A_{1,2}= A_{1,2}(\sigma_1,t)$. Then it is straightforward to see that $A_{1}+\imath A_2$ must be an analytic function of $\sigma_1-\imath n t$, where $n$ is the winding number. These are world-sheet string instantons. The second compactification is on the three-dimensional torus $T^3$, where windings for various embeddings of the toroidal membrane lead to string excitations with non-zero center-of-mass momentum.
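The $J_0/J_1$ form of the separated solutions $S_1,T_1$ above rests on the standard Bessel identities $\frac{d}{dR}\,[R\,J_1(kR)]=kR\,J_0(kR)$ and $\frac{d}{dR}\,J_0(kR)=-k\,J_1(kR)$; with $u=R^2$ (so $\partial_u=\frac{1}{2R}\partial_R$) they turn the first-order pair in (\ref{sep}) into one another. A quick numerical confirmation with sympy (our own sketch):

```python
# Spot-check the Bessel identities underlying S1 ~ R J1(k R), T1 ~ J0(k R).
import sympy as sp

R, k = sp.symbols('R k', positive=True)
expr1 = sp.diff(R * sp.besselj(1, k * R), R) - k * R * sp.besselj(0, k * R)
expr2 = sp.diff(sp.besselj(0, k * R), R) + k * sp.besselj(1, k * R)
for Rv, kv in [(1.3, 2.7), (0.4, 5.0)]:
    vals = {R: Rv, k: kv}
    assert abs(float(expr1.subs(vals).evalf())) < 1e-12
    assert abs(float(expr2.subs(vals).evalf())) < 1e-12
```

(The check is numerical at sample points because sympy leaves the derivative of $J_1$ in the mixed $J_0$/$J_2$ form, which only collapses via the recurrence $J_0+J_2=2J_1/(kR)$.)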
We discuss this case below, where more general seven-dimensional embeddings are studied. We now extend our discussion to seven dimensions, where the fully antisymmetric symbol of three dimensions, $\varepsilon_{ijk}$, in eqs.~(\ref{16}) is replaced by the corresponding octonionic structure constants $\Psi_{ijk}$~\cite{cfz,fl}: \begin{equation} \dot{X}_i = \frac{1}{2} \Psi_{ijk}\{X_j,X_k\}, \label{osce} \end{equation} where the indices run from 1 to 7, while $\Psi_{ijk}$ is completely antisymmetric and has the value 1 for the following combinations of indices: \beq \Psi_{ijk}=\left\{\begin{array}{ccccccc}1&2&4&3&6&5&7\\ 2&4&3&6&5&7&1\\ 3&6&5&7&1&2&4 \end{array}\right. \label{2.1} \eeq The second-order Euclidean equations and the Gauss law $\{\dot{X}_i,X_i\}= 0$ result automatically by making use of the cyclic symmetry of $\Psi_{ijk}$. In ref.~\cite{fl}, one class of three-dimensional solutions which are embedded in the seven-dimensional system was found according to the identifications \beq X_3\ra A_3, \;\;\; X_{\pm}\ra A_{\pm}/\sqrt{3}\label{7to3} \eeq where the seven coordinates $X_{i},(i=1,2,...,7)$ are grouped in terms of the complex coordinates $X_{\pm} =X_1\pm \imath X_2$, $Y_{\pm} =X_4\pm \imath X_5$ and $Z_{\pm} =X_6\pm \imath X_7$, and we have made the ansatz that ${X}_+=Z_+ = \imath Y_-$, while $A_{\pm,3}$ is the three-dimensional solution. The seven-dimensional solution is essentially the three-dimensional one rotated by an orthogonal transformation in 7-space. Therefore, any three-dimensional self-dual solution automatically generates a corresponding seven-dimensional one. The generalization to the string-like solution of the self-duality equation (\ref{osce}) in seven dimensions is straightforward. We assume the form \beq X_i(\sigma_{1,2},t) = A_i \sigma_1 +B_i \sigma_2 +P_i t+ f_i(\sigma_1,\sigma_2,t) \eeq with $i=1,...,7$, $f$ a periodic function of $\sigma_{1,2}$, and $A,B$ integer vectors.
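As a cross-check of the table (\ref{2.1}), added by us, one can verify by computer that the seven oriented triples define a totally antisymmetric tensor in which every unordered pair of indices lies on exactly one line (the Fano-plane property), and which obeys the standard contraction identity $\Psi_{ipq}\Psi_{jpq}=6\,\delta_{ij}$:

```python
from itertools import permutations

# the seven oriented lines read off from the columns of (2.1)
triples = [(1,2,3),(2,4,6),(4,3,5),(3,6,7),(6,5,1),(5,7,2),(7,1,4)]

def parity(seq):                     # sign of a permutation of (0,1,2)
    inv = sum(1 for i in range(3) for j in range(i+1, 3) if seq[i] > seq[j])
    return -1 if inv % 2 else 1

Psi = [[[0]*8 for _ in range(8)] for _ in range(8)]
for t in triples:
    for p in permutations(range(3)):
        Psi[t[p[0]]][t[p[1]]][t[p[2]]] = parity(p)   # total antisymmetry built in

# each unordered pair {i,j}, i != j, lies on exactly one line
pairs_ok = all(sum(abs(Psi[i][j][k]) for k in range(1, 8)) == (1 if i != j else 0)
               for i in range(1, 8) for j in range(1, 8))

# contraction identity Psi_{ipq} Psi_{jpq} = 6 delta_{ij}
contr_ok = all(sum(Psi[i][p][q]*Psi[j][p][q] for p in range(1, 8) for q in range(1, 8))
               == (6 if i == j else 0) for i in range(1, 8) for j in range(1, 8))
print(pairs_ok, contr_ok)
```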
Then we obtain \bea P_i & =& \Psi_{ijk} A_j B_k\label{Pi}\\ \dot{f}_i & = & \Psi_{ijk} \left( A_j \frac{\partial}{\partial\sigma_2}-B_j \frac{\partial}{\partial\sigma_1}\right) f_k +\frac{1}{2}\Psi_{ijk}\{f_j,f_k\} \label{Pi1} \eea Since $f$ is a periodic function with respect to $\sigma_{1,2}$, we can expand it in terms of an infinite number of strings, depending on the coordinate $\sigma_1$: \beq f_i(\sigma_1,\sigma_2,t) = \sum_nX_i^n(\sigma_1,t) e^{in\sigma_2}. \eeq Then, from the self-duality equations (\ref{Pi},\ref{Pi1}) we find that the winding number of the membrane is related to the center-of-mass momentum, which is transverse to the compactification directions $A$ and $B$. Also, the infinite number of strings are coupled through the following equations \beq \dot{X}_i^n(\sigma_1,t) = \Psi_{ijk} \left(A_j n - B_j \frac{\partial}{\partial\sigma_1}\right) X_k^n +\frac{ \imath}{ 2}\Psi_{ijk} \sum_{n_1+n_2=n}\left(n_2 \frac{\partial X_j^{n_1}}{\partial\sigma_1}X_k^{n_2}-n_1X_j^{n_1} \frac{\partial X_k^{n_2}}{\partial\sigma_1}\right) \eeq The string-like solution corresponds to the particular case $ {\partial f_i}/{\partial \sigma_2}=0$, where we obtain \beq X_i^0 = X_i(\sigma_1,t)\ra \dot{X}_i = \Psi_{ijk}B_k \frac{\partial X_j}{\partial\sigma_1}. \eeq This equation is formally solved in vector form by \beq X (\sigma_1, t) = e^{t M \frac{\partial }{\partial\sigma_1}} X (\sigma_1, 0) \label{form} \eeq where we defined the $7 \times 7$ matrix $M_{ij} = \Psi_{ijk}B_k$. Explicit solutions are found by expanding $X_i$ in terms of the eigenvectors of $M$. In fact, since $M$ is real and antisymmetric, the real 7-dimensional vector space decomposes into three orthogonal two-dimensional subspaces, each corresponding to a pair of imaginary eigenvalues $\pm \imath\lambda$, and a one-dimensional subspace, in the direction of $B_i$, corresponding to the zero eigenvalue. 
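The transversality of the momentum and the quoted spectral structure of $M$ can be confirmed numerically. The sketch below (ours; $A$ and $B$ are arbitrary integer test vectors) checks $P\cdot A=P\cdot B=0$ for $P_i=\Psi_{ijk}A_jB_k$ of (\ref{Pi}), as well as $MB=0$ and $(M^2)_{ij}=-B^2\delta_{ij}+B_iB_j$ for $M_{ij}=\Psi_{ijk}B_k$:

```python
from itertools import permutations

triples = [(1,2,3),(2,4,6),(4,3,5),(3,6,7),(6,5,1),(5,7,2),(7,1,4)]
def parity(seq):
    inv = sum(1 for i in range(3) for j in range(i+1, 3) if seq[i] > seq[j])
    return -1 if inv % 2 else 1
Psi = [[[0]*8 for _ in range(8)] for _ in range(8)]
for t in triples:
    for p in permutations(range(3)):
        Psi[t[p[0]]][t[p[1]]][t[p[2]]] = parity(p)

A = [0, 1, 0, 2, 0, 1, 0, 3]          # components A_1..A_7 (slot 0 unused)
B = [0, 0, 2, 1, 1, 0, 0, 2]
P = [0]*8
for i in range(1, 8):
    P[i] = sum(Psi[i][j][k]*A[j]*B[k] for j in range(1, 8) for k in range(1, 8))
PA = sum(P[i]*A[i] for i in range(1, 8))      # should vanish by antisymmetry
PB = sum(P[i]*B[i] for i in range(1, 8))

B2 = sum(b*b for b in B)
M = [[sum(Psi[i][j][k]*B[k] for k in range(1, 8)) for j in range(1, 8)]
     for i in range(1, 8)]
MB_zero = all(sum(M[i][j]*B[j+1] for j in range(7)) == 0 for i in range(7))
M2 = [[sum(M[i][k]*M[k][j] for k in range(7)) for j in range(7)] for i in range(7)]
M2_ok = all(M2[i][j] == -B2*(i == j) + B[i+1]*B[j+1]
            for i in range(7) for j in range(7))
print(PA, PB, MB_zero, M2_ok)
```

Together with $MB=0$, the relation $M^2=-B^2\,\mathbb{1}+BB^T$ implies $M^3=-B^2M$, so the nonzero eigenvalues are indeed three pairs $\pm\imath|B|$.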
Since, in addition, $(M^2)_{ij} = - B^2 \delta_{ij} + B_i B_j$ (as can be checked), we see that the imaginary eigenvalue pairs are all $\pm \imath |B|$. Therefore the problem decomposes into three 3-dimensional problems (one for each subspace) of the kind we solved before. The general solution is then \bea &X_1^{(n)} +\imath X_2^{(n)} = F_n ( \sigma_1 -\imath B t ) ~,~~~n=1,2,3\\ &X^{(0)} = |B| t \eea where $(X_1^{(n)} , X_2^{(n)} )$ are the projections of the membrane coordinates on the $n$-th two-dimensional eigenspace and $X^{(0)} = X_i B_i / |B|$ is the projection on $B_i$. As an example, if we choose $B_i$ in the third direction, $B_i = B \delta_{i3}$, we have \bea &X_1 + i X_2 = F_1 ( \sigma_1 -\imath B t )\\ &X_5 + i X_4 = F_2 ( \sigma_1 -\imath B t )\\ &X_6 + i X_7 = F_3 ( \sigma_1 -\imath B t )\\ &X_3 = B t. \eea Considering, now, the case when at $t=0$ we have a proper (not string-like) membrane configuration, with its periodic part dependent on both variables, we write \beq X_i = f_{i}^{cl}+ f_i, \eeq where $f_i^{cl} = A_i \sigma_1 +B_i \sigma_2$. The equation of the general case (\ref{osce}) can be written in a symbolic form, by defining the matrix differential operator \beq L_f^{ik} = \Psi_{ijk} \left(\frac{\partial f_j}{\partial\sigma_1} \frac{\partial}{\partial\sigma_2}- \frac{\partial f_j}{\partial\sigma_2} \frac{\partial}{\partial\sigma_1}\right) \label{oper} \eeq as a vector equation \beq \dot{f} = (L_{f^{cl}} +\frac 12 L_f) f. \eeq It is possible to solve this non-linear matrix differential system by iteration of the solution of its linear part, \beq \dot{g}= L_{f^{cl}} g . 
\label{homg} \eeq The above differential system can be written as a matrix integral equation as follows \beq f= g + \frac 12e^{t L_{f^{cl}}} \int^t e^{-t' L_{f^{cl}}} L_f f dt'\label{inte} \eeq It is easy to show that the infinite iteration of the solution $g$ solves the non-linear differential system and, moreover, when the initial configuration is a string, the second part of the integral equation is zero and the problem is reduced to the homogeneous case. The general solution of the homogeneous system (\ref{homg}) is \beq g(t) = e^{t L_{f^{cl}}} g(t=0), \eeq where $f_i(t=0)= g_i(t=0)$. At this point, we would like to note that in ref.~\cite{fl}, for the case of zero winding, we have been able to separate the time and the parameter dependence of the coordinates of the octonionic self-dual membrane. The time equations are generalizations of the Nahm matrix equations (\ref{nahm}), where in the place of the three $SU(2)$ matrices $T_i$, a pair $T_i, S_i$ appears. A generalization of the Euler top equations using octonions has been proposed in ref.~\cite{fair}, where it was shown that this system is integrable, and the explicit set of seven conservation laws, including their algebraic relation, was provided. This system of equations is a specific case of the generalized Nahm equations when $T_i, S_i$ are proportional to the Pauli matrices. Thus, for every solution of the generalized Euler top system, one can obtain the corresponding self-dual membrane. We close our short analysis by pointing out the existence of a different kind of self-duality equations, also satisfying the second-order Euclidean equations, which was introduced for self-dual Yang-Mills fields in ref.~\cite{iva}.\footnote{ We thank T. Ivanova for bringing her work with A. Popov to our attention.} This system of equations could be generalized to membranes embedded in dimensions $D=dim(G)$, where $G$ is any Lie algebra.
This system of self-duality equations is an integrable one (as was pointed out to us by T. Ivanova), but its geometrical significance for the dynamics of the self-dual membrane is not obvious to us. On the other hand, it is interesting to see what type of world-volume membrane instantons are obtained by this method. We would like to conclude with a few remarks. A systematic approach to the solutions of the seven-dimensional equations has been proposed in the case of toroidal compactifications, which turn out to provide world-volume membrane instantons; these play an important role in the understanding of the vacuum structure of supermembrane theory. The question of the surviving supersymmetries for various classes of solutions is an important problem for the determination of the BPS states of the supermembrane. The richness of the self-duality equations concerning string excitations suggests that they are probably the right framework for examining the non-perturbative unification of string interactions. This goes along the lines of an old suggestion that supermembranes are string solitons or coherent states of interacting strings. It remains to be seen if the strong-coupling problem of string interactions is tamed by the determination of the correct non-perturbative string vacuum.
\section{Introduction} Recently, there has been huge progress in understanding out-of-equilibrium dynamics in isolated many-body quantum systems, both theoretically and experimentally. Different experimental setups, in particular experiments on ultracold atoms, can be found in the review article \cite{review} and references therein. Theoretically, one-dimensional integrable models play an essential role in understanding relaxation processes in many-body quantum systems. Many of them show strong correlations, and there are (infinitely many) non-trivial conservation laws that strongly constrain the dynamics. Despite these constraints, and although the time evolution is unitary, it is believed that local observables generically approach stationary values. The underlying assumption is that the system relaxes to a so-called generalized Gibbs ensemble (GGE) \cite{2008_Rigol}, which depends upon as many parameters as the number of conservation laws. Calculating these parameters (even when the GGE is truncated) is, in general, a difficult problem \cite{2012_Mossel_JPA_45, 2013_Pozsgay}. In the case of the one-dimensional spin-1/2 XXZ model one can avoid their explicit calculation by working with the so-called generating function \cite{Fagotti_1311.5216, 2013_Fagotti}, and using the quantum transfer matrix technique \cite{1993_Klumper, 2002_Klumper} and the related method of calculating short-distance correlation functions \cite{2007_Boos, 2008_Boos, 2010_Trippe}. However, the recently-proposed quench action approach \cite{2013_Caux_PRL_110} does not rely on an underlying assumption for the steady-state ensemble and hence circumvents those difficulties. Within this method the steady state can be exactly calculated in the thermodynamic limit. It determines stationary expectation values of operators as well as their full time dependencies. One of the main requirements for this approach is the knowledge of the large-size scaling of overlaps of the initial state with energy eigenstates.
For many integrable models the calculation of scalar products between different Bethe states \cite{1982_Korepin, 1984_Izergin, 1987_Izergin, 1990_Slavnov_TMP_79_82, 2007_Kitanine} is possible due to underlying algebraic structures (algebraic Bethe ansatz \cite{1979_Faddeev}, see e.g.~the textbook \cite{KorepinBOOK}). However, until recently very little was known about scalar products of eigenstates of different Hamiltonians. In two cases, namely the Lieb-Liniger Bose gas \cite{1963_Lieb_PR_130_1} and the one-dimensional spin-1/2 XXZ model \cite{1928_Heisi}, analytic expressions for such scalar products that are treatable in the thermodynamic limit were recently discovered \cite{XXZpaper, LLpaper}. They are all given by determinants of `Gaudin-like' form \cite{1981_Gaudin_PRD_23}. Hence, their scaling with large system size can be extracted, which then allows one to study interaction quench problems following the approach of~\cite{LLpaper, 2013_Caux_PRL_110, XXZpaper2}, as well as some thermodynamic equilibrium properties of spin chains as in~\cite{2012_Kozlowski_JSTAT_P05021}. More specifically, in~\cite{LLpaper} the authors present a formula for overlaps of Lieb-Liniger Bethe states with the state of spatially uniformly distributed bosons (BEC state), the ground state of the non-interacting Bose gas. They checked their result analytically up to eight particles. In~\cite{XXZpaper} the same authors show that the overlap of the N\'eel state, the ground state of the antiferromagnetic Ising model, with XXZ Bethe states can be expressed similarly. One of the aims of the present paper is to give a rigorous proof of the Lieb-Liniger overlap formula for an arbitrary number of particles by using the proven results for XXZ overlaps of~\cite{XXZpaper}. In~\cite{XXZpaper} only the N\'eel state, which lies in the zero-magnetization sector of the XXZ spin chain, is considered as an initial state.
Here we shall present a formula for overlaps of Bethe states with so-called $q$-raised N\'eel states, which lie in non-zero magnetization sectors. We consider overlaps with different initial states, namely the $q$-raised dimer and $q$-dimer states, as well. The paper is organized as follows. In chapter \ref{sec:XXZ_basics} we define the main objects of the algebraic Bethe ansatz of the XXZ model and we present the most important formulas that are needed in the following chapters. We introduce different initial states for which we can express the overlaps with XXZ Bethe states by a determinant of Gaudin type. We further discuss the special scaling limit of the XXZ spin chain which leads to the Lieb-Liniger Bose gas. In chapter \ref{sec:overlaps} we present the overlap formulas for those initial states. We show that one of these determinant expressions can be evaluated in the scaling limit to Lieb-Liniger, which eventually proves the recently-proposed Lieb-Liniger overlap formula of~\cite{LLpaper}. \section{Algebraic structures of the XXZ model and scaling limit to Lieb-Liniger}\label{sec:XXZ_basics} The Hamiltonian of the one-dimensional spin-1/2 XXZ model is given by \begin{equation}\label{eq:Hamiltonian_XXZ} H = \sum_{j=1}^{N}\left(\sigma_{j}^{x}\sigma_{j+1}^{x}+\sigma_{j}^{y}\sigma_{j+1}^{y}+\Delta ( \sigma_{j}^{z}\sigma_{j+1}^{z}-1)\right)\: , \end{equation} where periodic boundary conditions $\sigma_{N+1}^{\alpha}=\sigma_{1}^{\alpha}$, $\alpha = x,y,z$, are imposed. We parametrize the anisotropy parameter of the model by $\Delta=\cosh(\eta)=(q+q^{-1})/2$. The XXZ model is integrable. It corresponds to a two-dimensional classical 6-vertex model \cite{BaxterBOOK}, which means that it is solvable using Bethe ansatz techniques \cite{1931_Bethe}, especially the algebraic Bethe ansatz \cite{1979_Faddeev, KorepinBOOK}.
One of the basic ideas of the algebraic Bethe ansatz is that the Hamiltonian of the model under consideration can be constructed as a member of an infinite series of conserved quantities which can be obtained from a family of commuting matrices. The transfer matrix $t$ depends on a spectral parameter $\lambda$ and is defined as the trace of the so-called monodromy matrix. The commutativity $[t(\lambda),t(\mu)]=0$ is related to an underlying algebraic structure of the monodromy matrix, the Yang-Baxter algebra, which is a set of quadratic relations defined by the so-called $R$-matrix. The latter can be interpreted as a vertex operator of a classical vertex model, which is, in case of the spin-1/2 XXZ chain, the $R$-matrix of the 6-vertex model \cite{BaxterBOOK}. \subsection{Algebraic Bethe Ansatz for XXZ} The integrable structure of the spin-1/2 XXZ chain is related to the Yang-Baxter algebra that is defined as the free associative algebra of generators $T^{\alpha}_{\beta}(\lambda)$, $\alpha,\beta=1,\ldots,d$, modulo the quadratic relations (see e.g.~the textbooks \cite{KorepinBOOK, HubbardBOOK}) \begin{equation}\label{eq:YBA} \check{R}(\lambda-\mu)\left(T(\lambda)\otimes T(\mu)\right) = \left(T(\mu) \otimes T(\lambda)\right)\check{R}(\lambda-\mu)\: . \end{equation} The $d\times d$ matrix $T(\lambda)$ is called monodromy matrix and has the generators of the Yang-Baxter algebra as entries. $\lambda$ is the spectral parameter. The R-matrix $\check{R}(\lambda)$ is a solution of the Yang-Baxter equation (in braid form) \cite{BaxterBOOK} \begin{equation}\label{eq:YBE} \left(\check{R}(\lambda)\otimes\mathds{1}\right) \left(\mathds{1}\otimes \check{R}(\lambda+\mu)\right)\left(\check{R}(\mu)\otimes\mathds{1}\right) = \left(\mathds{1}\otimes \check{R}(\mu)\right)\left(\check{R}(\lambda+\mu)\otimes\mathds{1}\right)\left(\mathds{1}\otimes \check{R}(\lambda)\right) \end{equation} with the unity matrix $\mathds{1}$. 
In case of the XXZ model $d=2$ and the $R$-matrix is given by \begin{equation}\label{eq:R_matrix} \check{R}(\lambda)=\frac{1}{\sinh(\lambda+\eta)}\left(\begin{array}{cccc} \sinh(\lambda+\eta) & 0 & 0 & 0\\ 0 & \sinh(\eta) & \sinh(\lambda) & 0\\ 0 & \sinh(\lambda) & \sinh(\eta) & 0\\ 0 & 0 & 0 & \sinh(\lambda+\eta) \end{array}\right)\: , \end{equation} which is the $R$-matrix of the 6-vertex model. The complex parameter $\eta$ is determined by the anisotropy parameter $\Delta = \cosh{\eta}$. For real $\eta \neq 0$ we have $\Delta> 1$ and we are in the antiferromagnetic gapped regime. For purely imaginary $\eta$ we have $-1\leq \Delta \leq 1$ and we are in the gapless regime. One can construct an explicit representation of the Yang-Baxter algebra \eqref{eq:YBA} using the explicit form of the $R$-matrix \eqref{eq:R_matrix}. Using the permutation operator $P$ and the Pauli matrices $\sigma^{\alpha}$, $\alpha=z,+,-$, the $R$-matrix can be written as \begin{equation}\label{eq:def_R_matrix} R(\lambda)=P\check{R }(\lambda-\eta/2) = \frac{\sinh\left(\lambda+\frac{\eta}{2}\sigma^{z}\otimes\sigma^{z}\right)}{\sinh(\lambda+\eta/2)} +\frac{\sinh(\eta)\left(\sigma^{+}\otimes\sigma^{-} +\sigma^{-}\otimes\sigma^{+}\right)}{\sinh(\lambda+\eta/2)}\: . \end{equation} We introduce an auxiliary space $\mathbb{C}^2$ and index it with the letter $a$. We label the local quantum spaces with indices $n=1,\ldots,N$. The Lax operator on lattice site $n$ is defined as a $2\times 2$ matrix in the auxiliary space \begin{equation}\label{eq:def_Lax} L_n(\lambda) = R_{an}(\lambda) = \frac{1}{\sinh(\lambda+\eta/2)}\left(\begin{array}{cc} \sinh\big(\lambda+\frac{\eta}{2}\sigma_n^z\big) & \sinh(\eta)\sigma_n^- \\[1ex] \sinh(\eta)\sigma_n^+ & \sinh\big(\lambda-\frac{\eta}{2}\sigma_n^z \big) \end{array}\right)\: . 
\end{equation} The monodromy matrix is the product (in auxiliary space) of $N$ Lax operators \cite{KorepinBOOK}, \begin{equation}\label{eq:def_monodromy} T(\lambda)=\prod_{n=1}^N L_n(\lambda) = L_1(\lambda)\ldots L_N(\lambda)=: \left(\!\!\begin{array}{c@{\hspace{1.3ex}}c} A(\lambda) & B(\lambda) \\[0.7ex] C(\lambda) & D(\lambda) \end{array}\!\!\right)\: . \end{equation} It is a $2\times 2$ matrix with entries that are operators in the Hilbert space $(\mathbb{C}^2)^{\otimes N}$ of the XXZ spin chain, the $N$-fold tensor product of local spin-1/2 representation spaces $\mathbb{C}^2$. Using definitions \eqref{eq:def_R_matrix} and \eqref{eq:def_Lax} and the Yang-Baxter equation \eqref{eq:YBE} it is obvious that each Lax operator $L_n(\lambda)$, $n=1,\ldots,N$, is a representation of the Yang-Baxter algebra \eqref{eq:YBA}. Hence, the monodromy matrix \eqref{eq:def_monodromy} as a product of Lax operators acting on different lattice sites is a representation as well. The transfer matrix is defined as the trace over the auxiliary space of the monodromy matrix, \begin{equation}\label{eq:transfer_matrix} t(\lambda)=\text{tr}_a\big(T(\lambda)\big)=A(\lambda)+D(\lambda)\: . \end{equation} Multiplying equation \eqref{eq:YBA} with the inverse of $\check{R}(\lambda-\mu)$ from the right and taking the trace on both sides we easily find that the transfer matrices build a commutative family, \begin{equation}\label{eq:transfer_matrix_commutative_family} t(\lambda)=\text{tr}\left(T(\lambda)\right) \quad\Rightarrow\quad [t(\lambda),t(\mu)]=0\: . \end{equation} From this commutativity one can easily see that the coefficients $J_m=\left.\frac{\partial^m}{\partial\lambda^m}\ln(t(\lambda))\right|_{\lambda=\eta/2}$ in an expansion of $\ln(t(\lambda))$ around $\lambda=\eta/2$ commute with each other. They are called the conserved currents of the XXZ spin chain and they form a commutative subalgebra of the Yang-Baxter algebra. 
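As an illustration (added by us), the braid Yang-Baxter equation \eqref{eq:YBE} for the 6-vertex $R$-matrix \eqref{eq:R_matrix} can be verified numerically at generic complex spectral parameters, using only pure-Python $8\times 8$ matrix algebra:

```python
import cmath

eta = 0.7 + 0.3j                      # generic anisotropy parameter

def Rc(lam):
    """Braid-form 6-vertex R-matrix of (eq:R_matrix)."""
    a, b, c = cmath.sinh(lam + eta), cmath.sinh(lam), cmath.sinh(eta)
    rows = [[a, 0, 0, 0],
            [0, c, b, 0],
            [0, b, c, 0],
            [0, 0, 0, a]]
    return [[x/a for x in row] for row in rows]

def kron(X, Y):
    ny = len(Y); n = len(X)*ny
    return [[X[i//ny][j//ny]*Y[i % ny][j % ny] for j in range(n)] for i in range(n)]

def mm(X, Y):
    n = len(X)
    return [[sum(X[i][k]*Y[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

I2 = [[1, 0], [0, 1]]
lam, mu = 0.4 - 0.2j, -0.3 + 0.5j
lhs = mm(mm(kron(Rc(lam), I2), kron(I2, Rc(lam + mu))), kron(Rc(mu), I2))
rhs = mm(mm(kron(I2, Rc(mu)), kron(Rc(lam + mu), I2)), kron(I2, Rc(lam)))
diff = max(abs(lhs[i][j] - rhs[i][j]) for i in range(8) for j in range(8))
print(diff)   # vanishes to machine precision
```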
Together with the explicit expression \eqref{eq:def_Lax} one finds that the Hamiltonian \eqref{eq:Hamiltonian_XXZ} is given by $J_1$, \begin{equation}\label{eq:Hamiltonian_XXZ2} H = 2\sinh(\eta)J_1 = 2\sinh(\eta)\left.\frac{\partial}{\partial\lambda}\ln\left(t(\lambda)\right)\right|_{\lambda=\eta/2}\: . \end{equation} In order to construct eigenstates of the Hamiltonian (and all other conserved currents $J_m$) we need a pseudo vacuum $|0\rangle$ onto which the monodromy matrix acts triangularly, {\it i.e.}~$C(\lambda)|0\rangle = 0$. For this one can simply choose the fully-polarized state $|0\rangle=|\!\uparrow\ldots\uparrow\rangle=|\!\uparrow\rangle^{\otimes N}$. A Bethe state $|\{\lambda_j\}_{j=1}^M\rangle$ is defined as a product of $B$-operators from definition \eqref{eq:def_monodromy}, with (arbitrary) spectral parameters $\{\lambda_j\}_{j=1}^M$, acting on the pseudo vacuum, \begin{equation}\label{eq:Bethe_state} |\{\lambda_j\}_{j=1}^{M}\rangle = \left[\prod\limits_{j=1}^M B(\lambda_j)\right]|0\rangle\: . \end{equation} It is an eigenstate of the transfer matrix \eqref{eq:transfer_matrix}, and thus of the Hamiltonian \eqref{eq:Hamiltonian_XXZ}, if the parameters $\lambda_j$, $j=1,\ldots,M$, fulfill the Bethe equations \begin{equation}\label{eq:BAE} \left(\frac{\sinh(\lambda_j+\eta/2)}{\sinh(\lambda_j-\eta/2)}\right)^N=-\prod_{k=1}^M\frac{\sinh(\lambda_j-\lambda_k+\eta)}{\sinh(\lambda_j-\lambda_k-\eta)}\: , \qquad j=1,\ldots,M \: . \end{equation} A solution $\{\lambda_j\}_{j=1}^M$ to these coupled algebraic equations with $\lambda_j \neq \lambda_k$ for all $j \neq k$ is called a set of Bethe roots. According to the conventions used in~\cite{XXZpaper} we shall call Bethe states `on-shell' if $\{\lambda_j\}_{j=1}^M$ is a set of Bethe roots, and `off-shell' otherwise.
The eigenvalues of the transfer matrix $t$ and of the Hamiltonian $H$ are respectively given by \begin{subequations} \begin{align} \tau(\lambda) &= \prod_{k=1}^M\frac{\sinh(\lambda-\lambda_k-\eta)}{\sinh(\lambda-\lambda_k)} + \left[\frac{\sinh\left(\lambda-\eta/2\right)}{\sinh\left(\lambda+\eta/2\right)}\right]^N\prod_{k=1}^M\frac{\sinh(\lambda-\lambda_k+\eta)}{\sinh(\lambda-\lambda_k)}\: ,\\ E &= 2\sinh(\eta)\left.\frac{\partial}{\partial\lambda}\ln\left(\tau(\lambda)\right)\right|_{\lambda=\eta/2} = \sum_{k=1}^M \frac{2\sinh^2(\eta)}{\sinh(\lambda_k+\eta/2)\sinh(\lambda_k-\eta/2)}\: . \end{align} \end{subequations} A state of the form \eqref{eq:Bethe_state} is also an eigenstate of the magnetization $S^z = \sum_{n=1}^N \sigma_n^z/2$ with eigenvalue $N/2-M$. In the following we will call the space spanned by Bethe states with a fixed number $M$ of spectral parameters the sector of fixed magnetization $S^z=N/2-M$. Furthermore, a Bethe state is called parity invariant if the set of spectral parameters fulfills the symmetry $\{\lambda_j\}_{j=1}^M= \{-\lambda_j\}_{j=1}^M$. The norm of an on-shell Bethe state is given by \begin{subequations}\label{eq:norm_Bethe_state} \begin{align} \|\{\lambda_j\}_{j=1}^M\| &= \sqrt{\langle \{\lambda_j\}_{j=1}^M| \{\lambda_j\}_{j=1}^M \rangle}\: , \\ \label{eq:norm_Bethe_state_b} \langle \{\lambda_j\}_{j=1}^M| \{\lambda_j\}_{j=1}^M \rangle &= \sinh^M(\eta) \prod_{\substack{j,k=1\\j\neq k}}^M \frac{\sinh(\lambda_j - \lambda_k + \eta)}{\sinh(\lambda_j - \lambda_k)} \det{}_{\!M} (G_{jk}) \: ,\\ G_{jk} &= \label{eq:Gaudin_matrix} \delta_{jk}\left(NK_{\eta/2}(\lambda_j)-\sum_{l=1}^{M}K_\eta(\lambda_j-\lambda_l)\right) + K_\eta(\lambda_j-\lambda_k)\: , \end{align} \end{subequations} where $K_\eta(\lambda)=\frac{\sinh(2\eta)}{\sinh(\lambda+\eta)\sinh(\lambda-\eta)}$ is the derivative of $\theta(\lambda)=i\ln\big[\frac{\sinh(\lambda+\eta)}{\sinh(\lambda-\eta)}\big]$, the scattering matrix of the XXZ model. 
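The statements above can be made completely explicit in the smallest nontrivial case $N=2$, $M=1$. The following sketch of ours builds the monodromy matrix from the normalized Lax operators \eqref{eq:def_Lax} and checks that the transfer matrices commute, that $\lambda_1=0$ solves the Bethe equations \eqref{eq:BAE}, and that $B(0)|0\rangle$ is an eigenstate of $t(\mu)$ with the eigenvalue $\tau(\mu)$ quoted above:

```python
import cmath

eta = 0.8                       # real eta: gapped regime, Delta = cosh(eta) > 1
sh = cmath.sinh

def kron(X, Y):
    ny = len(Y); n = len(X)*ny
    return [[X[i//ny][j//ny]*Y[i % ny][j % ny] for j in range(n)] for i in range(n)]

def mm(X, Y):
    n = len(X)
    return [[sum(X[i][k]*Y[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def madd(X, Y, s=1):
    n = len(X)
    return [[X[i][j] + s*Y[i][j] for j in range(n)] for i in range(n)]

I2 = [[1, 0], [0, 1]]

def lax(lam):
    """One-site entries A, B, C, D of the normalized Lax operator (eq:def_Lax)."""
    d = sh(lam + eta/2)
    return {'A': [[sh(lam+eta/2)/d, 0], [0, sh(lam-eta/2)/d]],
            'B': [[0, 0], [sh(eta)/d, 0]],        # ~ sigma^-
            'C': [[0, sh(eta)/d], [0, 0]],        # ~ sigma^+
            'D': [[sh(lam-eta/2)/d, 0], [0, sh(lam+eta/2)/d]]}

def monodromy(lam):
    """T(lam) = L_1(lam) L_2(lam): 2x2 in auxiliary space, 4x4 operator entries."""
    e = lax(lam)
    L1 = {k: kron(e[k], I2) for k in 'ABCD'}
    L2 = {k: kron(I2, e[k]) for k in 'ABCD'}
    return {'A': madd(mm(L1['A'], L2['A']), mm(L1['B'], L2['C'])),
            'B': madd(mm(L1['A'], L2['B']), mm(L1['B'], L2['D'])),
            'C': madd(mm(L1['C'], L2['A']), mm(L1['D'], L2['C'])),
            'D': madd(mm(L1['C'], L2['B']), mm(L1['D'], L2['D']))}

def t(lam):
    T = monodromy(lam)
    return madd(T['A'], T['D'])

mu = 0.35 - 0.6j
t1, t2 = t(0.23 + 0.1j), t(mu)

# (i) commuting transfer matrices, (eq:transfer_matrix_commutative_family)
comm = madd(mm(t1, t2), mm(t2, t1), s=-1)
c_err = max(abs(x) for row in comm for x in row)

# (ii) lambda_1 = 0 solves the Bethe equations (eq:BAE) for N = 2, M = 1
bae_err = abs((sh(eta/2)/sh(-eta/2))**2 - (-sh(eta)/sh(-eta)))

# (iii) B(0)|0> is an eigenstate of t(mu) with the quoted eigenvalue tau(mu)
v = [monodromy(0.0)['B'][i][0] for i in range(4)]    # B(0) acting on |up,up>
tau = sh(mu-eta)/sh(mu) + (sh(mu-eta/2)/sh(mu+eta/2))**2*sh(mu+eta)/sh(mu)
w = [sum(t2[i][j]*v[j] for j in range(4)) for i in range(4)]
e_err = max(abs(w[i] - tau*v[i]) for i in range(4))
print(c_err, bae_err, e_err)
```

Here $B(0)|0\rangle \propto |\!\uparrow\downarrow\rangle - |\!\downarrow\uparrow\rangle$, and at $\mu=\eta/2$ one recovers $\tau=-1$, the shift eigenvalue of this state.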
The norm formula was first suggested by Gaudin in~\cite{1981_Gaudin_PRD_23} and then rigorously proven by Korepin in~\cite{1982_Korepin}. \subsection{Connection to the fundamental representation of $U_q(sl_2)$} Let us consider $B$- and $C$-operators in the limit of an infinitely large spectral parameter. We write the monodromy matrix \eqref{eq:def_monodromy} in the following way ($q=e^\eta$, $s_n^z= \sigma_n^z/2$, $s_n^\pm = \sigma_n^\pm$): \begin{align} \left(\!\!\begin{array}{c@{\hspace{1ex}}c} A(\lambda) & B(\lambda) \\[0.7ex] C(\lambda) & D(\lambda) \end{array}\!\!\right) &= \frac{1}{\sinh^N(\lambda+\eta/2)}\prod_{n=1}^N \left(\!\begin{array}{c@{\hspace{0ex}}c}\sinh\big(\lambda+ \eta s_n^z\big) & \sinh(\eta)s_n^- \\[1.0ex] \sinh(\eta)s_n^+ & \sinh\big(\lambda- \eta s_n^z \big) \end{array}\!\right)\notag\\[2ex] &\sim q^{\mp N/2}\prod_{n=1}^N \left[\left(\!\!\begin{array}{c@{\hspace{0.5ex}}c} q^{\pm s_n^z} & 0\\[0.5ex] 0 & q^{\mp s_n^z} \end{array}\!\!\right) \pm 2e^{\mp\lambda}\sinh(\eta)\left(\!\begin{array}{c@{\hspace{0.5ex}}c} 0 & s_n^- \\[0.5ex] s_n^+ & 0\end{array}\!\right) \right]\: , \end{align} where the two signs indicate the different behavior for $\lambda\to\pm\infty$. 
Therefore, we get \begin{subequations}\label{eq:Sq_operators} \begin{align} S_q^- &= \lim_{\lambda\to+\infty} \left(\frac{q^{+N/2}\sinh(\lambda)B(\lambda)}{\sinh(\eta)} \right) = \sum_{n=1}^N \left[\prod_{j=1}^{n-1} q^{+s_j^z}\right] s_n^- \left[\prod_{j=n+1}^N q^{-s_j^z}\right]\: , \\ \tilde{S}_q^- &= \lim_{\lambda\to-\infty} \left( \frac{q^{-N/2}\sinh(\lambda)B(\lambda)}{\sinh(\eta)} \right) = \sum_{n=1}^N \left[\prod_{j=1}^{n-1} q^{-s_j^z}\right] s_n^- \left[\prod_{j=n+1}^N q^{+s_j^z}\right]\: ,\\ S_q^+ &= \lim_{\lambda\to-\infty} \left( \frac{q^{-N/2}\sinh(\lambda)C(\lambda)}{\sinh(\eta)} \right) = \sum_{n=1}^N \left[\prod_{j=1}^{n-1} q^{+s_j^z}\right] s_n^+ \left[\prod_{j=n+1}^N q^{-s_j^z}\right]\: ,\\ \tilde{S}_q^+ &= \lim_{\lambda\to+\infty} \left( \frac{q^{+N/2}\sinh(\lambda)C(\lambda)}{\sinh(\eta)} \right) = \sum_{n=1}^N \left[\prod_{j=1}^{n-1} q^{-s_j^z}\right] s_n^+ \left[\prod_{j=n+1}^N q^{+s_j^z}\right]\: , \end{align} \end{subequations} which are $U_q(sl_2)$ symmetry operators \cite{1990_Pasquier}. We will use the raising and lowering operators $S_q^{\pm}$ and $\tilde{S}_q^{\pm}$ in the next section \ref{sec:initial_states} to create $q$-raised N\'eel and dimer states. The operators $q^{2s^z}$ and $s^\pm$ satisfy the relations $q^{2s^z}s^\pm = q^{\pm 2}s^\pm q^{2s^z}$ and $s^+s^--s^-s^+ = (q^{2s^z}-q^{-2s^z})/(q-q^{-1})$, and they form the so-called fundamental representation of $U_q(sl_2)$. 
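A direct numerical check of ours that the operators \eqref{eq:Sq_operators} indeed generate a $U_q(sl_2)$ structure: on an $N=3$ chain one finds $[S_q^+,S_q^-]=(q^{2S^z}-q^{-2S^z})/(q-q^{-1})$, the global analogue of the fundamental-representation relation quoted above ($q$ is an arbitrary real test value):

```python
q, N = 1.7, 3
dim = 2**N
up, dn = 0, 1
sz = lambda bit: 0.5 if bit == up else -0.5
bits_of = lambda b: [(b >> (N-1-j)) & 1 for j in range(N)]

def sq_matrix(raising):
    """Matrix of S_q^+ (raising=True) or S_q^- from (eq:Sq_operators)."""
    M = [[0.0]*dim for _ in range(dim)]
    for b in range(dim):
        bits = bits_of(b)
        for n in range(N):
            if bits[n] != (dn if raising else up):
                continue                       # s_n^{+/-} must act non-trivially
            w = 1.0
            for j in range(n):
                w *= q**(sz(bits[j]))          # q^{+s_j^z} to the left of site n
            for j in range(n+1, N):
                w *= q**(-sz(bits[j]))         # q^{-s_j^z} to the right
            M[b ^ (1 << (N-1-n))][b] += w
    return M

Sp, Sm = sq_matrix(True), sq_matrix(False)
Z = [sum(sz(x) for x in bits_of(b)) for b in range(dim)]    # S^z eigenvalues
comm = [[sum(Sp[i][k]*Sm[k][j] - Sm[i][k]*Sp[k][j] for k in range(dim))
         for j in range(dim)] for i in range(dim)]
err = max(abs(comm[i][j] - ((q**(2*Z[i]) - q**(-2*Z[i]))/(q - 1/q) if i == j else 0.0))
          for i in range(dim) for j in range(dim))
print(err)
```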
Since $U_q(sl_2)$ has the structure of a Hopf algebra \cite{1985_Drinfeld, 1985_Jimbo} we can use the two co-multiplications, defined by \begin{align*} \Delta(q^{2s^z}) &= q^{2s^z} \otimes q^{2s^z}\: , & \Delta(s^+) &= q^{2s^z}\otimes s^+ + s^+\otimes\mathds{1}\: , & \Delta(s^-) &= \mathds{1}\otimes s^- + s^-\otimes q^{-2s^z}\: ,\\[1ex] \tilde\Delta(q^{2s^z}) &= q^{2s^z} \otimes q^{2s^z}\: , & \tilde\Delta(s^+) &= \mathds{1}\otimes s^+ + s^+\otimes q^{2s^z}\: , & \tilde\Delta(s^-) &= q^{-2s^z}\otimes s^- + s^-\otimes \mathds{1}\: , \end{align*} to rewrite the operators $S_q^{\pm}$, $\tilde{S}_q^{\pm}$ in the following way \begin{subequations}\label{eq:S_operators} \begin{align} S_q^+ q^{1/2+\sum_{j=1}^Ns_j^z} &= \sum_{n=1}^N (q^{2s^z})^{\otimes n-1}\otimes s^+ \otimes \mathds{1}^{\otimes N-n} = \Delta^{N-1}(s^+)\: ,\\ q^{-1/2-\sum_{j=1}^Ns_j^z}S_q^- &= \sum_{n=1}^N \mathds{1}^{\otimes n-1}\otimes s^- \otimes (q^{-2s^z})^{\otimes N-n} = \Delta^{N-1}(s^-)\: ,\\ \tilde{S}_q^+ q^{1/2+\sum_{j=1}^Ns_j^z} &= \sum_{n=1}^N \mathds{1}^{\otimes n-1}\otimes s^+ \otimes (q^{2s^z})^{\otimes N-n} = \tilde\Delta^{N-1}(s^+)\: ,\\ q^{-1/2-\sum_{j=1}^Ns_j^z} \tilde{S}_q^- &= \sum_{n=1}^N (q^{-2s^z})^{\otimes n-1}\otimes s^- \otimes \mathds{1}^{\otimes N-n} = \tilde\Delta^{N-1}(s^-)\: . \end{align} \end{subequations} We will need this representation of $q$-raising and $q$-lowering operators to investigate the limit $q\to-1$ in section~\ref{sec:scaling_limit}. There are in principle problems with periodic boundary conditions, which is discussed in~\cite{1990_Pasquier}, but for our purposes, especially in the limits $N\to\infty$ and $q\to-1$, they can be ignored. \subsection{Different initial states}\label{sec:initial_states} We are interested in overlaps of special initial states $\left|\Psi\right\rangle$ with parity-invariant Bethe states. 
Note that, for some of the initial states, non-parity-invariant Bethe states also have non-zero overlaps, which is important for non-equilibrium dynamics. For convenience we choose in the following $N$ divisible by four and $M$ even, and denote parity-invariant Bethe states by $|\{\pm\lambda_j\}_{j=1}^{m}\rangle$, $m=M/2$. We want to calculate overlaps $\langle \Psi | \{\pm\lambda_j\}_{j=1}^{m}\rangle$. Some states for which a Gaudin-like determinant expression exists (see section~\ref{sec:overlaps}) are \cite{Pozsgay_1309.4593} \begin{subequations}\label{eq:initial_states} \begin{itemize} \item the N{\'e}el and the anti-N\'eel state \begin{equation}\label{eq:Neel} \left|\Psi_N\right\rangle = \left|\uparrow\downarrow \uparrow\downarrow\ldots\right\rangle\: , \qquad \left|\Psi_{AN}\right\rangle=\left|\downarrow\uparrow\downarrow\uparrow \ldots \right\rangle\: , \end{equation} and especially their symmetric combination $\left|\Psi_0\right\rangle = \frac{1}{\sqrt{2}}(\left|\Psi_N\right\rangle + \left|\Psi_{AN}\right\rangle)$, which we call the zero-momentum N{\'e}el state, \item the dimer state \begin{equation}\label{qe:dimer} \left|\Psi_D\right\rangle = \bigotimes_{j=1}^{N/2}\frac{\left|\uparrow\downarrow\right\rangle-\left|\downarrow\uparrow\right\rangle}{\sqrt{2}}\: , \end{equation} \item and the $q$-dimer state \begin{equation}\label{eq:qdimer} \left|\Psi_{qD}\right\rangle = \bigotimes_{j=1}^{N/2}\frac{q^{1/2}\left|\uparrow\downarrow\right\rangle-q^{-1/2}\left|\downarrow\uparrow\right\rangle}{\sqrt{|q|+|q|^{-1}}}\: , \end{equation} where here and in the following the value of $q$ is fixed by the anisotropy parameter of the Hamiltonian \eqref{eq:Hamiltonian_XXZ}, $\Delta = \cosh(\eta) = (q+q^{-1})/2$. \end{itemize} \end{subequations} They all lie in the sector of zero magnetization, $S^z=N/2-M=0$, and they only have non-vanishing overlaps with Bethe states $|\{\pm\lambda_j\}_{j=1}^{m}\rangle$ if $m=M/2=N/4$.
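As a small consistency check (ours), for $N=4$ and real $q$ the $q$-dimer state \eqref{eq:qdimer} has unit norm and zero magnetization, and it is annihilated by the operator $\tilde S_q^+$ of \eqref{eq:Sq_operators}:

```python
import math

q, N = 1.6, 4
dim = 2**N
up, dn = 0, 1
sz = lambda bit: 0.5 if bit == up else -0.5
bits_of = lambda b: [(b >> (N-1-j)) & 1 for j in range(N)]

# build |Psi_qD> = prod over pairs of (q^{1/2}|ud> - q^{-1/2}|du>)/sqrt(|q|+1/|q|)
pair = {(up, dn): q**0.5, (dn, up): -q**(-0.5)}
nrm = math.sqrt(abs(q) + 1/abs(q))
psi = []
for b in range(dim):
    bits = bits_of(b)
    amp = 1.0
    for j in range(0, N, 2):
        amp *= pair.get((bits[j], bits[j+1]), 0.0)/nrm
    psi.append(amp)

norm2 = sum(a*a for a in psi)
mag_err = max(abs(sum(sz(x) for x in bits_of(b))*psi[b]) for b in range(dim))

# act with tilde-S_q^+ of (eq:Sq_operators) and check annihilation
tpsi = [0.0]*dim
for b in range(dim):
    if psi[b] == 0.0:
        continue
    bits = bits_of(b)
    for n in range(N):
        if bits[n] != dn:              # s_n^+ needs a down spin at site n
            continue
        w = psi[b]
        for j in range(n):             # tilde operators carry reversed exponents
            w *= q**(-sz(bits[j]))
        for j in range(n+1, N):
            w *= q**(sz(bits[j]))
        tpsi[b ^ (1 << (N-1-n))] += w
ann_err = max(abs(x) for x in tpsi)
print(norm2, mag_err, ann_err)
```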
The corresponding (unnormalized) $2n$-fold $q$-raised states are \begin{subequations}\label{eq:q-raised_states} \begin{itemize} \item the $q$-raised N\'eel state \begin{equation}\label{eq:q-raised_Neel} |\Psi_N^{(n)}\rangle = \left(S_q^+\tilde{S}_q^+\right)^{n}\left|\Psi_N\right\rangle\: , \end{equation} \item the $q$-raised dimer state \begin{equation}\label{eq:q-raised_dimer} |\Psi_{D}^{(n)}\rangle = \left(S_q^+\tilde{S}_q^+\right)^{n}\left|\Psi_D\right\rangle\: , \end{equation} \item and the $q$-raised $q$-dimer state \begin{equation}\label{eq:q-raised_qdimer} |\Psi_{qD}^{(n)}\rangle = \left(S_q^+\right)^{2n}\left|\Psi_{qD}\right\rangle\: . \end{equation} \end{itemize} \end{subequations} Note that $\tilde{S}_q^\pm|\Psi_{qD}\rangle = 0$. All of these initial states have non-vanishing magnetization $S^z=2n$ and we necessarily have $m=M/2=N/4-n$. To calculate overlaps of these states with Bethe ket states we need the corresponding bra states. Since $\left(S_q^+\right)^\dagger= S_q^-$ and $\left(\tilde{S}_q^+\right)^\dagger= \tilde{S}_q^-$ commute, the bra states can be simply obtained by acting with $\left(S_q^-\tilde{S}_q^-\right)^{n}$, $\left(S_q^-\right)^{2n}$ from the right on $\left\langle\Psi_{X}\right|$ for $X=N, D, qD$, respectively. \subsection{Scaling limit to the Lieb-Liniger model}\label{sec:scaling_limit} The scaling limit of the spin-1/2 XXZ chain to the Lieb-Liniger Bose gas is given by \cite{GaudinBOOK, 1987_Golzer, 2007_Seel, Pozsgay_JStatMech_P11017} \begin{equation}\label{eq:scaling_limit} \eta = i\pi - i\epsilon\: , \qquad N = cL/\epsilon^2\: ,\qquad \lambda_j \to \epsilon\lambda_j/c\: ,\qquad \epsilon\to 0\: . \end{equation} The Bethe equations \eqref{eq:BAE} for a finite number $M$ of rapidities become ($N$ even) \begin{equation} e^{iL\lambda_j} = -\prod_{k=1}^M\frac{\lambda_j-\lambda_k+ic}{\lambda_j-\lambda_k-ic}\: , \qquad j=1,\ldots,M\: . \end{equation} These are the Bethe equations of the Lieb-Liniger model \cite{1963_Lieb_PR_130_1}.
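The scaling limit \eqref{eq:scaling_limit} can be illustrated numerically (our sketch; $c$, $L$ and the rapidities are test values). Taking the reciprocal of both sides of \eqref{eq:BAE} and then $\epsilon\to 0$, the driving term tends to $e^{iL\lambda_j}$ and each scattering factor to $(\lambda_j-\lambda_k+ic)/(\lambda_j-\lambda_k-ic)$, reproducing the Lieb-Liniger equations above:

```python
import cmath

c, L = 2.0, 1.0                      # test values of the LL coupling and length
lam, lam2 = 0.7, -0.4                # two Lieb-Liniger rapidities
eps = 1e-3
eta = 1j*cmath.pi - 1j*eps
N = int(round(c*L/eps**2))           # N = cL/eps^2, an even integer here

# driving term of (eq:BAE), inverted: ((sinh(x+eta/2)/sinh(x-eta/2))^N)^(-1)
x = eps*lam/c                        # rescaled rapidity, lam -> eps*lam/c
drive = cmath.exp(-N*cmath.log(cmath.sinh(x + eta/2)/cmath.sinh(x - eta/2)))
d_err = abs(drive - cmath.exp(1j*L*lam))

# one scattering factor, also inverted
y = eps*(lam - lam2)/c
scatt = cmath.sinh(y - eta)/cmath.sinh(y + eta)
s_err = abs(scatt - (lam - lam2 + 1j*c)/(lam - lam2 - 1j*c))
print(d_err, s_err)   # both small, of order eps^2
```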
Since $q=e^\eta\to-1$ in the limit \eqref{eq:scaling_limit} the operators $S_q^\pm$, $\tilde{S}_q^\pm$ become staggered $SU(2)$ symmetry operators (up to a trivial prefactor) \begin{equation} S_{q}^\pm, \tilde{S}_{q}^\pm \to S_{\text{st}}^\pm=\sum_{n=1}^N(-1)^n s_n^\pm\: , \end{equation} which can be seen using the representation \eqref{eq:S_operators} for the $q$-raising operators. Since the operators $s_n^\pm$ act locally as spin raising and spin lowering operators and as they act non-trivially only on even or only on odd lattice sites, $S^\pm_{\text{st}}$ act on the N\'eel state \eqref{eq:Neel} as usual global $SU(2)$ spin raising and lowering operators $S^\pm$. We eventually obtain for the ($N/2-2m$)-fold $q$-raised N\'eel state \eqref{eq:q-raised_Neel} \begin{equation} \left\langle \Psi_N \right|(S_q^{-}\tilde{S}_q^{-})^{N/4-m} = \left\langle \Psi_N \right|(S^{-})^{N/2-2m}\: . \end{equation} This is a state with $2m$ uniformly-distributed down spins. In the scaling limit to Lieb-Liniger it corresponds to the state of $N_{LL}=2m$ spatially uniformly-distributed bosons, the so-called BEC state of~\cite{LLpaper}. \section{Overlaps for $q$-raised states}\label{sec:overlaps} In order to obtain an expression for the overlap of a $q$-raised state \eqref{eq:q-raised_states} with a normalized parity-invariant XXZ on-shell Bethe state in the non-zero magnetization sector we start in section~\ref{sec:old_overlap} with the overlap of a zero magnetization state \eqref{eq:initial_states} with an unnormalized parity-invariant off-shell state $|\{\pm\lambda_j\}_{j=1}^{N/4} \rangle$ as in~\cite{XXZpaper}. 
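The statement that the staggered operator acts on the N\'eel state exactly like the global raising operator can be verified directly on a small chain: since the down spins of $|\Psi_N\rangle$ sit only on even sites (sign $+1$ in $S_{\text{st}}^+$), one has $(S_{\text{st}}^+)^k|\Psi_N\rangle = (S^+)^k|\Psi_N\rangle$ for all $k$ (for even powers any relative signs drop out in any case). A small numerical check ($N=6$ for illustration; sites are labeled $1,\dots,N$, so the Python index $n$ carries the sign $(-1)^{n+1}$):

```python
import numpy as np
from functools import reduce

kron_list = lambda ops: reduce(np.kron, ops)

N = 6
I2 = np.eye(2)
sp = np.array([[0.0, 1.0], [0.0, 0.0]])      # s^+ in the basis (|up>, |down>)

def site_op(op, n):
    """Operator op acting on site n (Python index) of an N-site chain."""
    return kron_list([op if k == n else I2 for k in range(N)])

S_plus = sum(site_op(sp, n) for n in range(N))                    # global S^+
S_st   = sum((-1) ** (n + 1) * site_op(sp, n) for n in range(N))  # staggered S^+

# Neel state |up down up down ...>, up = (1,0), down = (0,1)
up, dn = np.array([1.0, 0.0]), np.array([0.0, 1.0])
neel = kron_list([up if n % 2 == 0 else dn for n in range(N)])

for k in range(1, N // 2 + 1):
    lhs = np.linalg.matrix_power(S_st, k) @ neel
    rhs = np.linalg.matrix_power(S_plus, k) @ neel
    assert np.allclose(lhs, rhs)
```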
The calculation of overlaps of $q$-raised states can be reduced to the calculation of overlaps in the zero-magnetization sector where, according to equation~\eqref{eq:Sq_operators}, some of the spectral parameters of the Bethe state, $\{\mu_j\}_{j=1}^{2n}$, $2n=N/2-2m$, are sent to infinity, \begin{subequations} \begin{align}\label{eq:OL_q_to_normal} \langle \Psi_{N,D}^{(n)} | \{\pm\lambda_j\}_{j=1}^{m}\rangle &= \lim\nolimits_{\{\mu_j\to\infty\}_{j=1}^{n}} (-1)^n\prod_{j=1}^n\frac{\sinh^2(\mu_j)}{\sinh^2(\eta)}\langle \Psi_{N,D} | \{\pm\lambda_j\}_{j=1}^{m}\cup\{\pm\mu_j\}_{j=1}^{n}\rangle\: ,\\ \langle \Psi_{qD}^{(n)} |\{\pm\lambda_j\}_{j=1}^{m}\rangle &= \lim\nolimits_{\{\mu_j\to\infty\}_{j=1}^{2n}}q^{nN}\prod_{j=1}^{2n}\frac{\sinh(\mu_j)}{\sinh(\eta)}\langle \Psi_{qD} | \{\pm\lambda_j\}_{j=1}^{m}\cup\{\mu_j\}_{j=1}^{2n}\rangle\: . \end{align} \end{subequations} The minus sign $(-1)^n$ in the first equation comes from the fact that the Bethe state is parity-invariant and that we always send two parameters $\pm \mu_j$ to $\pm\infty$ at the same time. \subsection{Connection between different initial states} In~\cite{Pozsgay_1309.4593} it is shown that the overlaps of the different initial states \eqref{eq:initial_states} with an XXZ Bethe state (not necessarily parity invariant) are related to each other, \begin{multline}\label{eq:OL_rel_normal} \langle \Psi_N|\{\lambda_j\}_{j=1}^{N/2}\rangle\prod_{j=1}^{N/2}\frac{\sinh(\eta)/\sqrt{2}}{\sinh(\eta/2+\lambda_j)} = \langle \Psi_D|\{\lambda_j\}_{j=1}^{N/2}\rangle \prod_{j=1}^{N/2}\frac{\cosh(\frac{\eta}{2})}{\cosh(\lambda_j)} \\ = \langle \Psi_{qD}|\{\lambda_j\}_{j=1}^{N/2}\rangle \prod_{j=1}^{N/2}\frac{\sqrt{\cosh(\eta)}}{\exp(\lambda_j)}\: . \end{multline} The last equation is only true for $\Delta>1$. For $\Delta <1$ the square root $\sqrt{\cosh(\eta)}$ disappears. Similar relations are true for the corresponding $q$-raised states \eqref{eq:q-raised_states}. 
In this case we send pairs of parameters to infinity: $\pm\mu_j\to\pm\infty$, $j=1,\ldots,n$, for the N\'eel and the dimer state, and $\mu_{j}\to\infty$, $j=1,\ldots,2n$ for the $q$-dimer state. The rest of the parameters belong to the parity-invariant Bethe state denoted by $|\lambda_\pm\rangle = |\{\pm\lambda_j\}_{j=1}^{m}\rangle$. The divergent factors cancel and we obtain the relations ($m+n=N/4$) \begin{equation}\label{eq:OL_rel_qdimer} \frac{(-2)^{N/4}\langle \Psi_N^{(n)}|\lambda_{\pm}\rangle}{[\sinh^2(\eta)]^{-n}}\prod_{j=1}^{m}E(\lambda_j) = \frac{\langle \Psi_D^{(n)}|\lambda_\pm\rangle}{[\cosh^2(\frac{\eta}{2})]^{-n}} \prod_{j=1}^{m}\frac{\cosh^2(\frac{\eta}{2})}{\cosh^2(\lambda_j)} = \frac{\langle \Psi_{qD}^{(n)}|\lambda_\pm\rangle}{[\cosh(\eta)]^{-n-m}} \end{equation} with $E(\lambda) = \frac{\sinh^2(\eta)}{\sinh(\lambda+\eta/2) \sinh(\lambda-\eta/2)}$. Furthermore, in the scaling limit \eqref{eq:scaling_limit}, all factors in front of the overlaps become independent of the rapidities $\{\lambda_j\}_{j=1}^m$, and the relation between the overlaps of the $q$-raised N\'eel state and of the $q$-raised dimer state just reads \begin{equation} \langle \Psi_D^{(n)}|\lambda_{\pm}\rangle = 2^{N/4}\langle \Psi_N^{(n)}|\lambda_\pm\rangle\: . \end{equation} Due to this relation we only need to consider in the following $q$-raised N\'eel states. 
\subsection{Overlap of the N{\'e}el state with an off-shell Bethe state}\label{sec:old_overlap} The overlap of the N\'eel state \eqref{eq:Neel} with an unnormalized parity-invariant XXZ off-shell state \eqref{eq:Bethe_state} is given by \cite{XXZpaper} \begin{subequations}\label{eq:overlap_XXZ_offshell} \begin{equation} \langle \Psi_N |\{\pm\lambda_j\}_{j=1}^{N/4}\rangle = \gamma\det{}_{\!N/4}(G_{jk}^{+})\: , \end{equation} where the prefactor $\gamma$ and the matrix $G_{jk}^+$ read \begin{align}\label{eq:gamma} \gamma &= \left[\prod_{j=1}^{N/4}\frac{\sine{\lambda_j}{+\eta/2}\sine{\lambda_j}{-\eta/2}}{\sine{2\lambda_j}{0}^2}\right]\left[\prod_{\substack{j>k=1\\ \ \sigma=\pm}}^{N/4}\frac{\sine{\lambda_j+\sigma\lambda_k}{+\eta}\sine{\lambda_j+\sigma\lambda_k}{-\eta}}{\sine{\lambda_j+\sigma\lambda_k}{0}^2}\right]\\[3ex] G_{jk}^{+} &= \delta_{jk}\left(N\sine{0}{\eta}K_{\eta/2}(\lambda_j)-\sum_{l=1}^{N/4}\sine{0}{\eta}K_\eta^{+}(\lambda_j,\lambda_l)\right) + \sine{0}{\eta}K_\eta^{+}(\lambda_j,\lambda_k)\notag \\ \label{eq:Gaudin_plus} &\qquad\quad + \delta_{jk}\frac{\sine{2\lambda_j}{+\eta}\,\mathfrak{A}_j+\sine{2\lambda_j}{-\eta}\,\bar{\mathfrak{A}}_j}{\sine{2\lambda_j}{0}} + (1-\delta_{jk})f_{jk}\: , \quad\qquad j,k=1,\ldots,N/4\\[3ex] \label{eq:f_jk} f_{jk} &= \mathfrak{A}_k\left( \frac{\sine{2\lambda_j}{+\eta} \sine{0}{\eta}}{\sine{\lambda_j+\lambda_k}{0}\sine{\lambda_j-\lambda_k}{+\eta}} - \frac{\sine{2\lambda_j}{-\eta}\sine{0}{\eta}}{\sine{\lambda_j-\lambda_k}{0}\sine{\lambda_j+\lambda_k}{-\eta}} \right) + \mathfrak{A}_k\bar{\mathfrak{A}}_j \left(\frac{\sine{2\lambda_j}{-\eta}\sine{0}{\eta}}{\sine{\lambda_j-\lambda_k}{0}\sine{\lambda_j+\lambda_k}{-\eta}}\right) \notag\\ &\quad - \bar{\mathfrak{A}}_j\left(\frac{\sine{2\lambda_j}{-\eta}\sine{0}{\eta}}{\sine{\lambda_j-\lambda_k}{0}\sine{\lambda_j+\lambda_k}{-\eta}} + \frac{\sine{2\lambda_j}{-\eta}\sine{0}{\eta}}{\sine{\lambda_j+\lambda_k}{0}\sine{\lambda_j-\lambda_k}{-\eta}}\right) \end{align} with 
$K_\eta^{+}(\lambda,\mu)=K_\eta(\lambda+\mu)+K_\eta(\lambda-\mu)$ and $K_\eta(\lambda)=\frac{\sine{0}{2\eta}}{\sine{\lambda}{+\eta}\sine{\lambda}{-\eta}}$. We also introduced the shortcuts $\sine{\lambda}{\eta}=\sinh(\lambda+\eta)$ and \begin{equation}\label{eq:func_a_tilde} \mathfrak{A}_j = 1 + \mathfrak{a}_j\: ,\quad \bar{\mathfrak{A}}_j = 1 + \mathfrak{a}_j^{-1}\: ,\quad \mathfrak{a}_j = \left[\prod_{\substack{k=1\\ \ \sigma=\pm}}^{N/4}\frac{\sine{\lambda_j-\sigma\lambda_k}{-\eta}}{\sine{\lambda_j-\sigma\lambda_k}{+\eta}}\right]\left(\frac{\sine{\lambda_j}{+\eta/2}}{\sine{\lambda_j}{-\eta/2}}\right)^{N}\: . \end{equation} \end{subequations} Note that there is a difference of a factor $\sqrt{2}$ in $\gamma$ as compared to~\cite{XXZpaper} since here we consider the N\'eel state instead of the symmetric combination of N\'eel and anti-N\'eel. Here the parameters $\lambda_j$, $j=1,\ldots, N/4$, are arbitrary complex numbers. \subsection{Determinant expression for the overlap of a q-raised N\'eel state} First, we split the set of rapidities in formula \eqref{eq:overlap_XXZ_offshell} into two subsets labeled by $\{\pm\lambda_j\}_{j=1}^m$ and $\{\pm\mu_j\}_{j=1}^n$, $m+n=N/4$. To get the overlap of the $(N/2-2m)$-fold $q$-raised N\'eel state $\langle\Psi_N^{(N/4-m)}|$ with a parity-invariant Bethe state $|\{\pm\lambda_j\}_{j=1}^m\rangle$, we then have to take the limits $\mu_j\to\infty$ as in equation~\eqref{eq:OL_q_to_normal}. We shall do this step by step. We start with the $\mu$-dependent part of the factor $\gamma$ in equation~\eqref{eq:gamma}. 
Together with the normalization factor in equation~\eqref{eq:OL_q_to_normal} it becomes \begin{equation}\label{eq:gamma_mu_to_infty} \gamma_\mu = (-1)^n\left[\prod_{j=1}^n\frac{\sine{\mu_j}{0}^2}{\sine{0}{\eta}^2}\right]\left[\prod_{j=1}^{n}\frac{\sine{\mu_j}{+\eta/2}\sine{\mu_j}{-\eta/2}}{\sine{2\mu_j}{0}^2}\right]\left[\prod_{\substack{j>k=1\\ \ \sigma=\pm}}^{n}\frac{\sine{\mu_j+\sigma\mu_k}{+\eta}\sine{\mu_j+\sigma\mu_k}{-\eta}}{\sine{\mu_j+\sigma\mu_k}{0}^2}\right]\: , \end{equation} where we already neglected in the last product all factors containing one $\mu$- and one $\lambda$-parameter since they all become unity in the limit $\mu\to\infty$. We now send the $\mu$-parameters to infinity in such a way that all differences and sums of $\mu$'s are infinity. Then the third product becomes unity as well and we have $\gamma_\mu \to \gamma_\infty = (-1)^n4^{-n}\sine{0}{\eta}^{-2n}$. In total, \begin{equation}\label{eq:new_gamma} \gamma = \gamma_\infty\hat{\gamma}=\frac{(-1)^n}{4^n\sine{0}{\eta}^{2n}} \left[\prod_{j=1}^{m}\frac{\sine{\lambda_j}{+\eta/2}\sine{\lambda_j}{-\eta/2}}{\sine{2\lambda_j}{0}^2}\right]\left[\prod_{\substack{j>k=1\\ \ \sigma=\pm}}^{m}\frac{\sine{\lambda_j+\sigma\lambda_k}{+\eta}\sine{\lambda_j+\sigma\lambda_k}{-\eta}}{\sine{\lambda_j+\sigma\lambda_k}{0}^2}\right]\: . \end{equation} The second step is to calculate the determinant of the matrix $G_{jk}^+$ in equation~\eqref{eq:overlap_XXZ_offshell} in the limits $\mu_j\to\infty$, $j=1,\ldots,n$. We immediately see that all $K^{+}$-terms vanish as long as one of the two arguments is one of the $\mu$-parameters. Furthermore, in the first $m$ rows and last $n$ columns, {\it i.e.}~$\lambda_j$ finite and $\lambda_k=\mu_{k-m}$, the terms $f_{jk}$ in equation~\eqref{eq:f_jk} vanish since the symbols $\mathfrak{A}_j$ are bounded and all factors inside the brackets vanish. 
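The per-pair limit behind $\gamma_\mu\to\gamma_\infty$ can be confirmed numerically: for a single pair $\pm\mu$, the product of the first two bracketed factors in equation~\eqref{eq:gamma_mu_to_infty} tends to $1/(4\sinh^2\eta)$ as $\mu\to\infty$ (the values of $\eta$ and $\mu$ below are arbitrary illustrations).

```python
import numpy as np

eta, mu = 0.7, 20.0   # mu large enough that e^{-2 mu} corrections are negligible

# One-pair factor: [sinh^2(mu)/sinh^2(eta)] * sinh(mu+eta/2) sinh(mu-eta/2) / sinh^2(2 mu)
per_pair = (np.sinh(mu)**2 / np.sinh(eta)**2) \
         * np.sinh(mu + eta/2) * np.sinh(mu - eta/2) / np.sinh(2*mu)**2

assert np.isclose(per_pair, 1.0 / (4 * np.sinh(eta)**2))
```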
Hence, the entire upper right $m\times n$ block of the matrix $G_{jk}^+$ is zero, and the determinant decomposes into the product of two determinants. One is just the determinant of a reduced Gaudin-like matrix, \begin{align} \hat{G}_{jk}^{+} &= \delta_{jk}\left(N\sine{0}{\eta}K_{\eta/2}(\lambda_j)-\sum_{l=1}^{m}\sine{0}{\eta}K_\eta^{+}(\lambda_j,\lambda_l)\right) + \sine{0}{\eta}K_\eta^{+}(\lambda_j,\lambda_k)\notag \\ \label{eq:new_Gaudin_matrix} &\qquad\quad + \delta_{jk}\frac{\sine{2\lambda_j}{+\eta}\,\mathfrak{A}_j+\sine{2\lambda_j}{-\eta}\,\bar{\mathfrak{A}}_j}{\sine{2\lambda_j}{0}} + (1-\delta_{jk})f_{jk}\: , \quad\qquad j,k=1,\ldots,m\: , \end{align} where $K_\eta$, $K_\eta^+$, $f_{jk}$, $\mathfrak{A}_j = 1 +\mathfrak{a}_j$, $\bar{\mathfrak{A}}_j = 1 +\mathfrak{a}_j^{-1}$ are defined as before (see equations~\eqref{eq:f_jk},~\eqref{eq:func_a_tilde}) and the symbols $\mathfrak{a}_j$, $j=1,\ldots,m$, reduce to \begin{equation}\label{eq:new_func_a_tilde} \mathfrak{a}_j = \left[\prod_{\substack{k=1\\ \ \sigma=\pm}}^{m}\frac{\sine{\lambda_j-\sigma\lambda_k}{-\eta}}{\sine{\lambda_j-\sigma\lambda_k}{+\eta}}\right]\left(\frac{\sine{\lambda_j}{+\eta/2}}{\sine{\lambda_j}{-\eta/2}}\right)^{N}\: . \end{equation} The other determinant can be easily evaluated. Fixing a special order of limits $\mu_j\to\infty$, $j=1,\ldots,n$, in such a way that $\mu_k-\mu_j\to+\infty$ for $j>k$, the lower right $n\times n$ block of the matrix $G_{jk}^+$ becomes a triangular matrix and the determinant is just the product of all diagonal elements $D_j$. We thus have $\det{}_{\!N/4}(G_{jk}^+) = \det{}_{\!m}(\hat{G}_{jk}^+)\prod_{j=1}^n D_j$. The next task is to calculate these diagonal elements.
Using the previously-introduced order of limits, which we denote by $\lim\nolimits_{\mu}$, we obtain \begin{equation} \mathfrak{a}_j=\lim\nolimits_\mu\left\{ \left[\prod_{\substack{k=1\\ \ \sigma=\pm}}^{n}\frac{\sine{\mu_j-\sigma\mu_k}{-\eta}}{\sine{\mu_j-\sigma\mu_k}{+\eta}}\right] \left[\prod_{\substack{k=1\\ \ \sigma=\pm}}^{m}\frac{\sine{\mu_j-\sigma\lambda_k}{-\eta}}{\sine{\mu_j-\sigma\lambda_k}{+\eta}}\right] \left(\frac{\sine{\mu_j}{+\eta/2}}{\sine{\mu_j}{-\eta/2}}\right)^{N} \right\} = -e^{4\eta(j-1/2)}\: . \end{equation} Therefore, the diagonal elements can be written as [see the third term in equation~\eqref{eq:Gaudin_plus}], \begin{multline}\label{eq:new_factor} D_j = e^\eta \mathfrak{A}_j + e^{-\eta}\bar{\mathfrak{A}}_j = e^\eta (1-e^{4\eta(j-1/2)}) + e^{-\eta}(1-e^{-4\eta(j-1/2)})\\ = -4\sinh((2j-1)\eta)\sinh(2j\eta)\: . \end{multline} Together with the $n$-dependent part $\gamma_\infty$ of the $\gamma$-factor the product of all diagonal elements becomes \begin{equation} \frac{(-1)^n}{4^n\sinh^{2n}(\eta)}\prod_{j=1}^n D_j = \prod_{j=1}^{n}\frac{\sinh((2j-1)\eta)\sinh(2j\eta)}{\sinh^2(\eta)} = \prod_{j=1}^{2n}\frac{q^j-q^{-j}}{q-q^{-1}} = [2n]_q!\: . \end{equation} As a final result we obtain the overlap of the normalized $q$-raised N\'eel state with normalized parity-invariant on-shell Bethe states. 
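The last identity, rewriting the product of diagonal elements as a $q$-deformed factorial, is straightforward to verify numerically, using $q=e^\eta$ and $[k]_q=(q^k-q^{-k})/(q-q^{-1})$ (the values of $\eta$ and $n$ below are arbitrary illustrations):

```python
import numpy as np

eta = 0.7          # Delta = cosh(eta) > 1
q = np.exp(eta)
n = 3

# Product of diagonal elements (with the gamma_infty prefactor absorbed)
lhs = np.prod([np.sinh((2*j - 1)*eta) * np.sinh(2*j*eta) / np.sinh(eta)**2
               for j in range(1, n + 1)])

# q-deformed factorial [2n]_q!
rhs = np.prod([(q**j - q**(-j)) / (q - q**(-1)) for j in range(1, 2*n + 1)])

assert np.isclose(lhs, rhs)
```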
All together, using norm formula \eqref{eq:norm_Bethe_state} of an on-shell Bethe state, we have [$\hat{\gamma}$ and $\hat{G}_{jk}$ are defined in equations~\eqref{eq:new_gamma} and \eqref{eq:new_Gaudin_matrix}] \begin{subequations}\label{eq:new_overlap} \begin{align}\label{eq:new_overlap_a} \frac{\langle \Psi_N^{(n)} |\{\pm\lambda_j\}_{j=1}^{m}\rangle}{\|\Psi_N^{(n)} \| \|\{\pm\lambda_j\}_{j=1}^m\|} &= \frac{[2n]_q!}{\|\Psi_N^{(n)}\|} \frac{\hat{\gamma}\det{}_{\!m}(\hat{G}_{jk}^{+})}{\|\{\pm\lambda_j\}_{j=1}^m\|} \notag \\ &= \frac{[2n]_q!}{\|\Psi_N^{(n)}\|} \left[\prod_{j=1}^{m}\frac{\sqrt{\tanh(\lambda_j+\frac{\eta}{2}) \tanh(\lambda_j-\frac{\eta}{2})}}{2\sinh(2\lambda_j)}\right]\sqrt{ \frac{\det_{m}(\hat{G}_{jk}^{+})}{\det_{m}(\hat{G}_{jk}^{-})}} \end{align} where \begin{equation} \hat{G}_{jk}^\pm = \delta_{jk}\left(NK_{\eta/2}(\lambda_j)-\sum_{l=1}^{m}K_\eta^+(\lambda_j,\lambda_l)\right) + K_\eta^\pm(\lambda_j,\lambda_k)\: , \quad j,k=1,\ldots,m\: , \end{equation} \end{subequations} and $K_\eta^\pm$, $K_\eta$ are defined as before. Here the parameters $\lambda_j$, $j=1,\ldots,m$, are Bethe roots but still, in general, complex numbers (string solutions). $\|\Psi_N^{(n)}\|$ is the norm of the $2n$-fold $q$-raised N\'eel state. We calculate this norm in the limit $q\to -1$ in section~\ref{sec:proof_LL}. We can use overlap formula \eqref{eq:new_overlap} for $q$-raised N{\'e}el states to prove the formula for overlaps of Lieb-Liniger Bethe states with the BEC state of one-dimensional free Bosons, which was recently discovered in~\cite{LLpaper}. \subsection{Scaling to Lieb-Liniger and proof of the BEC Lieb-Liniger overlap formula}\label{sec:proof_LL} In this section we prove the Lieb-Liniger overlap formula of~\cite{LLpaper} for an arbitrary even number of bosons. 
We have already seen at the end of section~\ref{sec:scaling_limit} that, in the scaling limit of the XXZ spin chain to the Lieb-Liniger Bose gas, the $(N/2-N_{LL})$-fold $q$-raised N\'eel state scales to the BEC state of $N_{LL}$ bosons. In the following we make this identification precise. The normalized BEC state is given by the $N\to\infty$ limit of the state with $N_{LL}$ uniformly distributed down spins, \begin{equation} |BEC\rangle\ \widehat{=}\ \begin{pmatrix} N \\ N_{LL}\end{pmatrix}^{-1/2}\sum_{\{n_j\}_{j=1}^{N_{LL}}}\sigma^-_{n_1}\ldots\sigma_{n_{N_{LL}}}^-|\uparrow\ldots\uparrow\rangle\: . \end{equation} The sum is over all $N \choose N_{LL}$ subsets $\{n_j\}_{j=1}^{N_{LL}}$ of the first $N$ integers. The normalized $(N/2-N_{LL})$-fold $q$-raised N\'eel state reads \begin{equation} \frac{ (S^+)^{N/2-N_{LL}}|\Psi_N\rangle}{\| (S^+)^{N/2-N_{LL}}|\Psi_N\rangle \| } = \begin{pmatrix} N/2 \\ N_{LL}\end{pmatrix}^{-1/2}\sum_{\substack{\{n_j\}_{j=1}^{N_{LL}} \\ n_j \text{ even}}} \sigma_{n_1}^-\ldots\sigma_{n_{N_{LL}}}^-|\uparrow\ldots\uparrow\rangle\: . \end{equation} Here the sum is over all $N/2 \choose N_{LL}$ subsets of even integers from $1$ to $N$ because in the $q$-raised N\'eel state the down spins sit only on even lattice sites. In the large $N$ limit the ratio of numbers of local spin basis states can be calculated by means of Stirling's formula (note that $N_{LL}$ is finite), \begin{equation} \lim_{N\to\infty}\left[ \begin{pmatrix} N \\ N_{LL}\end{pmatrix} \left/ \begin{pmatrix} N/2 \\ N_{LL} \end{pmatrix}\right.\right] = \lim_{N\to\infty}\left[\frac{N!\:(\frac{N}{2}-N_{LL})!}{(\frac{N}{2})!(N-N_{LL})!}\right] = 2^{N_{LL}}\: . \end{equation} In the scaling limit we can identify the $q$-raised N\'eel state itself with the BEC state. In order to do this we have to multiply overlaps of this state with a factor $2^{N_{LL}}$ that takes account of the contribution of all `missing' states which also scale to the BEC state in the dilute limit.
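The convergence of this binomial ratio to $2^{N_{LL}}$ can be checked quickly with Python's exact integer binomial coefficients (the value $N_{LL}=4$ is an arbitrary illustration):

```python
from math import comb

N_LL = 4
# Ratio binom(N, N_LL) / binom(N/2, N_LL) for fixed N_LL and growing N
ratios = [comb(N, N_LL) / comb(N // 2, N_LL) for N in (10**2, 10**4, 10**6)]

# The ratio approaches 2**N_LL = 16 as N grows, with O(N_LL^2 / N) corrections
assert abs(ratios[-1] - 2**N_LL) < 1e-3
```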
We further have to divide by a factor $\sqrt{2^{N_{LL}}}$ that corrects for the norm of the state. Both factors together therefore lead to a corrective factor of $2^{N_{LL}/2}$. Between XXZ and Lieb-Liniger Bethe states there is a one-to-one correspondence \cite{1987_Golzer}. Furthermore, the Gaudin matrix of norm formula \eqref{eq:norm_Bethe_state} turns into the Gaudin matrix of the Lieb-Liniger norm \cite{KorepinBOOK}. Similarly, the modified Gaudin matrices $\hat{G}_{jk}^\pm$ turn into the corresponding Lieb-Liniger matrices. The norm of the initial state is given by $\|\Psi_N^{(n)}\| = (2n)! \sqrt{N/2 \choose 2n}= (2n)! \sqrt{N/2 \choose 2m}$. Using the scaling limit \eqref{eq:scaling_limit} we obtain for the prefactor in equation~\eqref{eq:new_overlap_a}, where we omit a factor $(-1)^n$ coming from the $q$-deformed factorial when $q\to-1$, and where we use the corrective factor $2^{N_{LL}/2}$, \begin{align} 2^{N_{LL}/2}\frac{[2n]_q!}{\|\Psi_N^{(n)}\|} & \left[\prod_{j=1}^{m} \frac{\sqrt{\tanh(\lambda_j+\frac{\eta}{2}) \tanh(\lambda_j-\frac{\eta}{2})}}{2\sinh(2\lambda_j)}\right] \notag\\ &\to \frac{2^{N_{LL}/2}}{\sqrt{N/2 \choose 2m}} \left[\prod_{j=1}^{m}\frac{\sqrt{\coth(\epsilon\lambda_j/c-\frac{i\epsilon}{2}) \coth(\epsilon\lambda_j/c+\frac{i\epsilon}{2})}}{2\sinh(2\epsilon\lambda_j/c)}\right] \notag \\[1ex] &\to \frac{2^{N_{LL}/2}}{4^m\epsilon^{2m}}\sqrt{\frac{(N/2-2m)!}{(N/2)!}}\sqrt{(2m)!} \left[\prod_{j=1}^{m}\frac{1}{\frac{\lambda_j}{c}\sqrt{\frac{\lambda_j^2}{c^2}+\frac{1}{4}}}\right] \notag \\[2ex] &\to \frac{2^{N_{LL}/2}}{N^m\epsilon^{2m}}\frac{\sqrt{(2m)!}}{2^m} \left[\prod_{j=1}^{m}\frac{1}{\frac{\lambda_j}{c}\sqrt{\frac{\lambda_j^2}{c^2}+\frac{1}{4}}}\right] \to \frac{(cL)^{-N_{LL}/2}\sqrt{N_{LL}!}}{\displaystyle \prod_{j=1}^{N_{LL}/2}\frac{\lambda_j}{c}\sqrt{\frac{\lambda_j^2}{c^2}+\frac{1}{4}}}\: . \end{align} In the second last step we used Stirling's formula and in the last step we plugged in $N=cL/\epsilon^2$ and $m=N_{LL}/2$. 
We eventually combine this with the determinants in the scaling limit to \begin{equation}\label{eq:Overlaps} \frac{\langle BEC | \{\pm\lambda_j\}_{j=1}^{N_{LL}/2} \rangle}{\|\{\pm\lambda_j\}_{j=1}^{N_{LL}/2} \|} = \frac{\sqrt{ (cL)^{-N_{LL}}N_{LL}!}} { {\displaystyle \prod\limits_{j=1}^{N_{LL}/2} \frac{\lambda_j}{c} \sqrt{\frac{\lambda_j^2}{c^2} + \frac{1}{4} } } } \sqrt{\frac{\det_{j,k=1}^{N_{LL}/2} \tilde{G}^{+}_{jk}} { \det_{j,k=1}^{N_{LL}/2}\tilde{G}_{jk}^- } }\: . \end{equation} The matrices $\tilde{G}_{jk}^{\pm}$ are similar to the Gaudin matrix $G_{jk}$ of the Lieb-Liniger model \cite{KorepinBOOK, 1981_Gaudin_PRD_23}, but with a different kernel: \begin{equation}\label{eq:Gaudin_pm_LL} \tilde{G}^\pm_{jk} = \delta_{jk} \Big( L + \sum_{l=1}^{N_{LL}/2} \tilde{K}^+(\lambda_{j},\lambda_{l}) \Big) - \tilde{K}^\pm (\lambda_{j},\lambda_{k}) \: , \end{equation} where $\tilde{K}^{\pm}(\lambda,\mu) = \tilde{K}(\lambda - \mu) \pm \tilde{K}(\lambda + \mu)$ and $\tilde{K}(\lambda) = 2c/(\lambda^2 + c^2)$. Hence we proved, starting from the XXZ off-shell formula (\ref{eq:overlap_XXZ_offshell}), the formula for overlaps of the BEC state with Bethe states of the Lieb-Liniger Bose gas \cite{LLpaper} for an arbitrary number of bosons $N_{LL}$. Note that in~\cite{LLpaper} the quotient of determinants is presented in a different way, but can be easily transformed into our representation using the relation \begin{equation} \det{}_{\!N}\left(\!\!\begin{array}{cc} A & B \\ B & A \end{array}\!\!\right) = \det{}_{\!N/2}(A+B)\det{}_{\!N/2}(A-B) \end{equation} for block matrices. Note furthermore that equation~\eqref{eq:Overlaps} holds for any solution of LL Bethe equations, irrespective of whether the Bethe roots are purely real numbers (as is the case in the repulsive regime $c>0$ of the Lieb-Liniger Bose gas) or form complex string solutions (which can occur in the attractive regime $c<0$). 
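Equation~\eqref{eq:Overlaps} is straightforward to evaluate numerically. The sketch below builds the matrices $\tilde G^\pm_{jk}$ of equation~\eqref{eq:Gaudin_pm_LL} and evaluates the right-hand side of equation~\eqref{eq:Overlaps} for a set of sample real rapidities (the values of $c$, $L$ and $\lambda_j$ are arbitrary illustrations, not solutions of the Bethe equations); it also checks the block-determinant relation on random matrices.

```python
import numpy as np
from math import factorial

def K_t(lam, c):
    """Lieb-Liniger kernel K~(lambda) = 2c / (lambda^2 + c^2)."""
    return 2 * c / (lam**2 + c**2)

def G_tilde(lams, c, L, sign):
    """Matrices G~^{+-}: delta_{jk}(L + sum_l K~^+(l_j,l_l)) - K~^{+-}(l_j,l_k)."""
    m = len(lams)
    G = np.zeros((m, m))
    for j in range(m):
        for k in range(m):
            G[j, k] = -(K_t(lams[j] - lams[k], c) + sign * K_t(lams[j] + lams[k], c))
        G[j, j] += L + sum(K_t(lams[j] - l, c) + K_t(lams[j] + l, c) for l in lams)
    return G

c, L = 1.0, 10.0
lams = np.array([0.5, 1.3, 2.1])          # sample positive rapidities
N_LL = 2 * len(lams)

pref = np.sqrt((c * L)**(-N_LL) * factorial(N_LL)) \
     / np.prod(lams / c * np.sqrt(lams**2 / c**2 + 0.25))
overlap = pref * np.sqrt(np.linalg.det(G_tilde(lams, c, L, +1))
                         / np.linalg.det(G_tilde(lams, c, L, -1)))
assert np.isfinite(overlap) and overlap > 0

# Block-determinant identity used to match the representation of the LL paper:
rng = np.random.default_rng(0)
A, B = rng.normal(size=(3, 3)), rng.normal(size=(3, 3))
M = np.block([[A, B], [B, A]])
assert np.isclose(np.linalg.det(M),
                  np.linalg.det(A + B) * np.linalg.det(A - B))
```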
\section{Summary} In this paper we presented a rigorous proof of the BEC Lieb-Liniger overlap formula of~\cite{LLpaper} using the formula for overlaps of the N\'eel state with XXZ off-shell Bethe states, which was proven in~\cite{XXZpaper}. We sent parameters to infinity to recover global symmetry operators that act, in the scaling limit to Lieb-Liniger, on the N\'eel state as global $SU(2)$ operators. In this way the number of down spins could be reduced to a fixed finite number, and the resulting state could be identified with the initial state of finitely many uniformly-distributed bosons. This allowed us to obtain the formula for overlaps of Lieb-Liniger Bethe states with this initial state, which has a nice application in the context of the KPZ equation \cite{Calabrese_1402.1278}, related to the attractive Lieb-Liniger Bose gas. Another nice application is the solution of the interaction quench to repulsive bosons in~\cite{LLpaper} using the so-called quench action approach \cite{2013_Caux_PRL_110}. Furthermore, using the results of~\cite{Pozsgay_1309.4593}, we related overlaps of $q$-raised N\'eel states to overlaps of different initial states which, like the $q$-raised N\'eel state itself, lie in a non-zero magnetization sector of the spin chain. This extends the results of~\cite{XXZpaper}, where only the N\'eel state was considered. The connection between the overlaps for the two different models, the XXZ spin chain and the Lieb-Liniger Bose gas, opens a way to discover more initial states for the Lieb-Liniger model. One could, for example, create non-uniform states by combining the scaling limit to Lieb-Liniger with the limit in which rapidities are sent to infinity. This can lead to different initial states and, of course, to different Gaudin-like determinant expressions, depending on how we send the rapidities to infinity.
The leading behavior of these determinants in the thermodynamic limit can be evaluated, which then allows for an exact analysis of non-equilibrium dynamics using the quench action approach proposed in~\cite{2013_Caux_PRL_110}. Within this method the time dependence of expectation values of certain operators is in principle accessible. In particular, in the large-time limit they can be expressed as expectation values with respect to a single state, the so-called saddle-point state. Since correlation functions for both models, the spin-1/2 XXZ chain and the Lieb-Liniger Bose gas, are related to each other \cite{2007_Seel, Pozsgay_JStatMech_P11017}, it would be interesting to investigate them on the saddle-point state. We will address these questions in an upcoming publication \cite{XXZpaper2}. \vspace{3ex} \section*{Acknowledgements} I would like to express my gratitude to Frank G{\"o}hmann, Jean-S{\'e}bastien Caux, Jacopo De Nardis, and Bram Wouters for useful discussions. I thank Pasquale Calabrese for pointing out a sign mistake in equation~\eqref{eq:Gaudin_pm_LL}. I also thank the Netherlands Organisation for Scientific Research (NWO) for financial support. \section*{References}
\section{Introduction} Human-level intelligence has two remarkable hallmarks: quick learning and slow forgetting. Humans can efficiently learn to recognize new concepts from a few examples without forgetting prior knowledge. Ideally, an artificial agent should demonstrate the same capabilities, learning continually from small volumes of data and preserving what it has learned. We call this human-like learning scenario \emph{continual low-shot learning}, which can be seen as a generalization of standard \emph{continual learning} (CL) \cite{li2017learning,rebuffi2017icarl}. The comparison between standard CL and continual low-shot learning is illustrated in Figure \ref{Fig:continual_low_shot}. \begin{figure}[h] \centering \includegraphics[width=0.4\textheight]{continual_lowshot_learning.png} \caption{The comparison between standard continual learning and continual low-shot learning for image classification. The top row is standard CL, in which each task has plentiful training data. The bottom row is continual low-shot learning, where only a handful of training samples are available for each task. \label{Fig:continual_low_shot}} \end{figure} The characteristics of the continual low-shot learning problem can be formulated as follows: \begin{enumerate}[(i)] \item Non-stationary data. A model is trained on a whole data stream in which new task data become available at different phases. Compared with previous tasks, the new task data could have a different data distribution and different categories. \item Efficiency. During training and testing, the system resource consumption and computational complexity should be bounded. For example, when the model learns new tasks, it cannot revisit old task data, which enables quick learning and saves storage. \item Small size of data. The volume of training samples could be small (e.g. a few or a few dozen training samples). \end{enumerate} The first two criteria are important properties of standard continual learning.
The third criterion generalizes CL to address low-shot learning. This generalization is important in many practical scenarios. For example, in realistic vision applications (e.g. classification, detection), labeled training data are usually scarce and only become available incrementally due to the high cost of data labeling. It is therefore beneficial if a model can effectively learn from a small amount of data and continually evolve as new data become available. Despite its importance, little literature has discussed this practical and more human-like learning problem. In continual low-shot learning, a model should demonstrate good performance on the entire data stream, where the volume of each task is small. Hence, learning efficiently from limited training data and simultaneously preserving learned knowledge are both crucial. Efficient learning means that a model can quickly extract the intrinsic knowledge from limited data and generalize. Knowledge preservation entails that learning from new data should not negatively interfere with learned knowledge. Such interference, however, is inevitable, since the architecture of a deep learning model is highly coupled. Given the paucity of supervision from old task data, learning from new data usually causes severe negative interference, and the performance on previous tasks quickly deteriorates; this is the so-called \emph{catastrophic forgetting} \cite{mccloskey1989catastrophic}. These two properties, efficient learning and knowledge preservation, usually conflict with each other, and it is challenging to find the optimal trade-off. In this work, we propose a novel algorithm that addresses this challenge from two aspects. (1) In contrast to prior methods, which focus on how to reduce forgetting \cite{zenke2017continual,kirkpatrick2017overcoming,aljundi2018memory}, we strengthen model adaptation via a multi-step optimization procedure.
This procedure can efficiently learn meta knowledge from a small amount of data, and the strong adaptation also leaves more room for the learning-forgetting compromise. (2) Instead of applying a fixed hyperparameter to balance the learning objective and regularization terms, we develop a dynamic balance strategy that alters the optimization gradients. This dynamic strategy provides a comparable or better trade-off between learning and forgetting, and thus further improves overall performance. For knowledge preservation, we adopt parameter-regularization-based approaches, which measure the importance of model parameters and penalize their change during new task training. Compared with other approaches such as model expansion \cite{aljundi2017expert,rusu2016progressive} and gradient regularization \cite{lopez2017gradient,chaudhry2018efficient}, parameter regularization is more computationally efficient and does not access previous task data. We implement our model-agnostic algorithm MetaCL based on three state-of-the-art parameter regularization methods: EWC \cite{kirkpatrick2017overcoming}, PI \cite{zenke2017continual}, and MAS \cite{aljundi2018memory}. Extensive experiments show that our approach can further improve these baselines. In summary, the main contributions of this work include \begin{itemize} \item We design a model-agnostic algorithm, MetaCL, which strengthens model adaptation in continual low-shot learning without using any data from previous tasks. \item We develop a dynamic balance strategy that adaptively penalizes parameter changes to stabilize optimization gradients and achieve a better trade-off between current task learning and previous task forgetting. \item We compare our approach with existing algorithms under various experimental settings and analyze them in terms of accuracy, forgetting, and adaptation. \end{itemize} \section{Related Work} Our approach builds on the insights of model adaptation and knowledge preservation.
These two characteristics have mainly been addressed in the meta learning and continual learning fields. We briefly discuss both. \textbf{Meta learning}. The main goal of meta learning is to endow a model with strong adaptation ability, so that a model trained on one domain (the so-called meta-training dataset) can be quickly transferred to new domains (the meta-testing dataset) where only a few labeled data (the support set) are available. The existing methods generally fall into three categories: metric-based, model-based, and optimization-based. Metric-based approaches \cite{vinyals2016matching,snell2017prototypical,sung2018learning} try to learn a similarity metric so that the model can obtain more general and intrinsic knowledge. Model-based approaches \cite{santoro2016meta,munkhdalai2017meta} achieve adaptation by altering model components. Optimization-based methods \cite{finn2017model} apply new optimization algorithms to find a good initialization. However, all of the above approaches only consider how to learn from few-shot data and disregard knowledge preservation. More recently, \cite{gidaris2018dynamic} implemented a meta-learning model through a similarity-based classifier and weight generator. It protects the performance on the meta-training dataset after fine-tuning on the support set. \textbf{Nevertheless, our continual low-shot learning differs from meta learning in two significant aspects. First, in continual low-shot learning there is no extra dataset (i.e. meta-training dataset) from which to obtain prior knowledge. Second, instead of only two different datasets/tasks, in continual low-shot learning the model faces a theoretically unlimited number of tasks}. So the existing meta learning methods cannot be directly applied to solve our problem. \textbf{Continual learning}, on the other hand, mainly focuses on how to remedy catastrophic forgetting when a model learns new tasks.
Most existing work addresses this problem from two aspects: model decoupling and model regularization. \cite{aljundi2017expert,aljundi2018selfless} decouple the model to decrease interference when learning new data. Model regularization methods \cite{li2017learning,kirkpatrick2017overcoming} add an extra regularization term to preserve learned knowledge. In spite of their effectiveness in knowledge preservation, these methods neglect low-shot scenarios and adaptation ability. Later, \cite{lopez2017gradient,chaudhry2018riemannian} observed the compromise between learning and forgetting, but they did not develop a strategy to explicitly enhance learning and adaptation ability. In contrast to prior methods, we address continual low-shot learning and propose a model-agnostic algorithm that strengthens adaptation and provides a better trade-off between learning and forgetting. Our method neither modifies the network architecture nor relies on external experience memory. This makes our method memory-efficient and easy to extend to other existing models and applications. \section{Approach} We aim to train a model that obtains strong adaptation and preserves its performance on previous tasks. In the following, we define the problem setup and present our approach in a classification context, but the idea can be extended to other learning problems. \subsection{Continual Low-shot Learning Problem Setup} The goal of continual low-shot learning is to train a model that can not only quickly adapt to a new task using a small amount of data but also demonstrate high performance on previous tasks. In particular, the model $f_\theta$, which is parameterized by $\theta \in \mathbb{R}^p$, is trained on a stream of data $(x_i, y_i, t_j)$, where $t_j \in \mathcal{T}$ $(j=1,2,\ldots,n)$ is the task descriptor and $(x_i, y_i) \in \mathcal{X}_j$ is a data point in task $j$. In continual low-shot learning, the volume of training data for each task is small.
Besides, the model $f_\theta$ can only see the training dataset $\mathcal{X}_j$ when learning task $j$. Formally, the objective function can be written as: \begin{equation} \label{Eqn:obj} \min_\theta L(f_\theta, \mathcal{X}, \mathcal{T}) = \sum_{t_j \in \mathcal{T}} \sum_{(x_i, y_i) \in \mathcal{X}_j} \ell(f_\theta(x_i, t_j), y_i) \end{equation} where $\ell(\cdot, \cdot)$ is the loss function, e.g. cross-entropy in image classification. For simplicity, we will use $\ell(\theta)$ to denote $\ell(f_\theta(x_i, t_j), y_i)$ in the following formulations. If all task data were available in one training phase, we could trivially train on all data to minimize the objective in Eq. \ref{Eqn:obj} (a.k.a. \emph{joint training}). In continual low-shot learning, however, only the current task data can be accessed during a training stage. Under such incomplete supervision, the model is prone to catastrophic forgetting. \subsection{Reducing Forgetting} To alleviate the forgetting problem, we adopt parameter regularization-based methods, which measure the importance of each parameter on prior tasks and penalize its change during new task training. As indicated in \cite{chaudhry2018riemannian}, this kind of method is more memory efficient and scalable than activation (output) regularization \cite{rebuffi2017icarl,li2017learning} and network expansion methods \cite{yoon2018lifelong,rusu2016progressive,aljundi2017expert}. Generally, the parameter regularization for learning task $t_j$ can be formulated as below: \begin{equation} \label{Eqn:final_obj} L_{t_j} = \sum_{(x_i, y_i) \in \mathcal{X}_j} [\ell(f_\theta(x_i, t_j), y_i) + \beta \sum_{k=1}^p \Omega_k (\theta_k - \bar{\theta}_k)^2 ] \end{equation} where $\Omega_k$ is the importance measure for the $k$-th parameter $\theta_k$ (out of $p$ parameters in the model) and $\bar{\theta}_k$ is the pretrained parameter from the previous tasks $t_1, t_2, ..., t_{j-1}$.
$\beta$ is a hyperparameter which balances learning the current task $j$ against forgetting previous tasks. The larger $\beta$ is, the stronger the knowledge preservation and the weaker the knowledge update. There are two key problems in parameter regularization: (1) how to calculate the importance measure $\Omega_k$ and (2) how to set a proper hyperparameter $\beta$ to obtain a good trade-off. Much of the literature \cite{lee2017overcoming,zenke2017continual,aljundi2018memory,chaudhry2018riemannian} has addressed the first problem, but few works discuss the second. In this work, we develop a dynamic balance strategy that addresses the latter problem. \subsection{Dynamic Balance Strategy} Eq. \ref{Eqn:final_obj} contains two terms for every data point. The first term $\ell(\theta) \coloneqq \ell(f_\theta(x_i, t_j), y_i)$ drives the model toward current task learning. The second regularization term $ \ell^{reg}(\theta) \coloneqq \sum_{k=1}^p \Omega_k (\theta_k - \bar{\theta}_k)^2$ preserves the previous task knowledge. A fixed hyperparameter $\beta$ is applied to balance current task learning and old knowledge preservation. This simple balance strategy is widely adopted in many existing model regularization methods such as \cite{zenke2017continual,kirkpatrick2017overcoming,aljundi2018memory}. However, manually searching for a proper hyperparameter is time-consuming. Besides, if the gradients of those two terms are unstable, a fixed hyperparameter may not be able to provide a good compromise between $\ell(\theta)$ and $\ell^{reg}(\theta)$ over the entire data stream (a concrete example is given in the Experiments section). To mitigate these problems, we propose a dynamic balance strategy which adaptively adjusts the gradient direction to compromise between current task learning and knowledge preservation.
The key intuition behind this strategy is that a good balance can be reached if we can find an optimization direction $g_x$ which satisfies the following two conditions: (1) $g_x$ is as close as possible to the gradient of current task learning $g_1 = \frac{\partial \ell(\theta)}{\partial \theta}$; (2) optimizing along $g_x$ should not increase the second regularization term $\ell^{reg}$ for knowledge preservation. Assuming the objective function is locally linear (which holds for small optimization steps), we can formulate the above intuition as a constrained optimization problem: \begin{align} \label{Eqn:Dynamic_balancing_obj} \min_{g_x} \frac{1}{2} \|g_x - g_1 \|^2 \notag \\ s.t. \quad \langle g_x, g_2 \rangle \ge 0 \end{align} where $g_2 = \frac{\partial \ell^{reg}(\theta)}{\partial \theta}$ and the operator $\langle \cdot, \cdot \rangle$ is the dot product. The objective in Eq. \ref{Eqn:Dynamic_balancing_obj} states that $g_x$ should be as close as possible to $g_1$ in the squared $\ell_2$ norm. The constraint requires that the angle between $g_x$ and $g_2$ be no larger than $90^{\circ}$, so that optimizing along $g_x$ does not increase the second regularization term $\ell^{reg}$. Since $g_x$ has $p$ variables (the number of parameters in the neural network), it is intractable to solve Eq. \ref{Eqn:Dynamic_balancing_obj} directly. Applying quadratic programming duality \cite{dorn1960duality}, Eq. \ref{Eqn:Dynamic_balancing_obj} can be converted to its dual space (please see Appendix A for the detailed derivation): \begin{align} \label{Eqn:Dynamic_balancing_dual_obj} \min_{\lambda} \frac{1}{2} & g_2^T g_2 \lambda^2 + g_1^T g_2 \lambda \notag \\ s.t. \quad & \lambda \ge 0, \notag \\ & g_x = \lambda g_2 + g_1 \end{align} where $\lambda$ is a Lagrange multiplier. Eq. \ref{Eqn:Dynamic_balancing_dual_obj} is a simple one-variable quadratic optimization.
The optimal $\lambda$ is \begin{align} \label{Eqn:Dynamic_balancing_lambda_solution} \lambda = \begin{cases} 0 & \mbox{if $ g_1^T g_2 \ge 0 $}\\ - \frac{g_1^T g_2}{g_2^T g_2} & \mbox{if $ g_1^T g_2 < 0 $} \end{cases} \end{align} Then, we can calculate the optimal $g_x = g_1 + \lambda g_2$. As a comparison, the gradient in the fixed balance strategy is $g = g_1 + \beta g_2$, whereas the dynamic balance strategy uses the gradient $g_x = g_1 + \lambda g_2$ with the adaptive weight $\lambda$ given by Eq. \ref{Eqn:Dynamic_balancing_lambda_solution}. Fig. \ref{Fig:dynamic} shows the difference between the two strategies. \begin{figure}[h] \centering \begin{tabular}{c} \includegraphics[width=0.35\textheight]{d1.png} \\ (a) Fixed balance strategy. $\beta$ is a fixed fraction (e.g. 0.6). \\ \includegraphics[width=0.35\textheight]{d2.png} \\ (b) Dynamic balance strategy. $\lambda$ is dynamically determined. \end{tabular} \caption{The difference between the two strategies. The dynamic balance strategy provides a more reliable optimization direction $g_x$, even as $g_2$ grows during the optimization procedure. \label{Fig:dynamic}} \end{figure} Since $g_1$ and $g_2$ depend on the current parameters and data point, $\lambda$ can vary and adaptively balance $\ell(\theta)$ and $\ell^{reg}(\theta)$ throughout the training procedure. In practice, we found that adding a small constant $\gamma > 0$ to the adaptive weight $\lambda$ further strengthens knowledge preservation. \subsection{Strengthening Adaptation} If there were sufficient training data in task $j$, we could directly train a model based on Eq. \ref{Eqn:final_obj} and achieve desirable results. But this assumption does not hold in the continual low-shot learning problem, where the size of training data for a task is small. To address this low-shot learning problem, the model needs to adequately exploit the intrinsic features of the limited data.
One way to do so is to maximize the inner product between gradients of different data points within a task: \begin{equation} \label{Eqn:inner_g} \max \frac{\partial \ell(f_\theta(x_u, t_j), y_u)}{\partial \theta} \cdot \frac{\partial \ell(f_\theta(x_v, t_j), y_v)}{\partial \theta} \end{equation} Eq. \ref{Eqn:inner_g} leads the learning procedure to find common features among different data points rather than just fitting a single data point. Combining Eq. \ref{Eqn:inner_g} and Eq. \ref{Eqn:final_obj}, we optimize the following new objective: \begin{equation} \label{Eqn:new_obj} L_{t_j} = \sum_{u, v \in \mathcal{X}_j} [\ell_u(\theta) + \ell_v(\theta) - \alpha \frac{\partial \ell_u(\theta)}{\partial \theta} \cdot \frac{\partial \ell_v(\theta)}{\partial \theta} + \beta \ell^{reg}(\theta)] \end{equation} where $\ell_u(\theta), \ell_v(\theta)$ denote the losses at data points $(x_u, y_u), (x_v, y_v)$ respectively. Optimizing Eq. \ref{Eqn:new_obj} requires the second derivative w.r.t. $\theta$, which is expensive to calculate. Inspired by the recent meta-learning algorithm Reptile \cite{nichol2018first}, we design a multi-step optimization algorithm that bypasses the second-derivative calculation and seamlessly integrates with parameter importance measurement. The complete MetaCL is outlined in Algorithm \ref{Alg:MetaCL_beta}. \begin{algorithm} \caption{MetaCL-$\beta$ (fixed balance version)} \label{Alg:MetaCL_beta} \begin{algorithmic} \REQUIRE{The training data $\mathcal{X}_j$ in task $t_j$, the model $f$ with pretrained parameter $\bar{\theta}$. Step size hyperparameters $\alpha, \eta$. Balance hyperparameter $\beta$.} \ENSURE{The new model parameter $\theta^*$} \STATE $f_{\theta} \gets$ load the pretrained parameter $\bar{\theta}$. \FOR{ epoch$=1, 2, ...$} \FOR{ mini-batch $B$ in $\mathcal{X}_j$ } \item Randomly split mini-batch $B$ to mini-bundles $b_1, b_2, ..., b_m$. \item // Inner loop optimization.
\FOR{ $i=1,2,...,m$} \item $\theta^{i} = \theta^{i-1} - \alpha \ell_{b_i}^{'}(\theta^{i-1}) $. (Note that $\theta^{0} \equiv \theta$) \ENDFOR \item // The gradient for current task learning \item $g_1 = (\theta - \theta^{m})/(\alpha * m)$ \item // The gradient for reducing forgetting \item $g_2 = \ell^{reg '}(\theta)$ \item Calculate $g = g_1 + \beta g_2$ \item Update $\theta \gets \theta - \eta * g$ \ENDFOR \ENDFOR \STATE $\theta^* = \theta$ \end{algorithmic} \end{algorithm} \textbf{Algorithm analysis}. Algorithm \ref{Alg:MetaCL_beta} implicitly satisfies the objective in Eq. \ref{Eqn:new_obj}. Consider the current task learning gradient $g_1$ to see how it works. If we sum up all the mini-bundle updates in the inner loop of Algorithm \ref{Alg:MetaCL_beta}, we have \begin{equation} \theta^0 - \theta^m = \theta - \theta^m = \alpha \sum_{i=1}^{m} \ell_{b_i}^{'}(\theta^{i-1}) \end{equation} Therefore, the gradient $g_1$ can be rewritten as: \begin{equation} \label{Eqn:g_1} g_1 = \frac{\theta-\theta^m}{\alpha m} = \frac{1}{m} \sum_{i=1}^{m} \ell_{b_i}^{'}(\theta^{i-1}) \end{equation} By applying a Taylor series expansion to $\ell_{b_i}^{'}(\theta^{i-1})$, we have \begin{align} \label{Eqn:taylor_exp} \ell_{b_i}^{'}(\theta^{i-1}) &= \ell_{b_i}^{'}(\theta^{0}) + \ell_{b_i}^{''}(\theta^{0})(\theta^{i-1} - \theta^0) + O(\Vert \theta^{i-1} - \theta^0\Vert^2) \notag \\ &\approx \ell_{b_i}^{'}(\theta) + \ell_{b_i}^{''}(\theta)(\theta^{i-1} - \theta^0) \notag \\ &= \ell_{b_i}^{'}(\theta) - \alpha \ell_{b_i}^{''}(\theta) \sum_{k=1}^{i-1} \ell_{b_k}^{'}(\theta^{k-1}) \end{align} Applying the Taylor series expansion to $\ell_{b_k}^{'}(\theta^{k-1})$ again: \begin{equation} \label{Eqn:taylor_exp2} \ell_{b_k}^{'}(\theta^{k-1}) = \ell_{b_k}^{'}(\theta^0) + O(\Vert \theta^{k-1} - \theta^0\Vert) \approx \ell_{b_k}^{'}(\theta) \end{equation} These approximations hold if $m$ and $\alpha$ are small (i.e. small updates in the inner loop optimization). Substituting Eq.
\ref{Eqn:taylor_exp2} into Eq. \ref{Eqn:taylor_exp}, we have: \begin{equation} \label{Eqn:b_i} \ell_{b_i}^{'}(\theta^{i-1}) \approx \ell_{b_i}^{'}(\theta) - \alpha \ell_{b_i}^{''}(\theta) \sum_{k=1}^{i-1} \ell_{b_k}^{'}(\theta) \end{equation} Since the mini-batches and mini-bundles are randomly sampled, the subscripts can be exchanged (in expectation): $\ell_{b_i}^{''}(\theta)\ell_{b_k}^{'}(\theta) = \ell_{b_k}^{''}(\theta)\ell_{b_i}^{'}(\theta)$. Therefore, Eq. \ref{Eqn:b_i} can be converted to \begin{align} \label{Eqn:b_i_final} \ell_{b_i}^{'}(\theta^{i-1}) &\approx \ell_{b_i}^{'}(\theta) - \alpha \ell_{b_i}^{''}(\theta) \sum_{k=1}^{i-1} \ell_{b_k}^{'}(\theta) \notag \\ &= \ell_{b_i}^{'}(\theta) - \frac{1}{2}\alpha \sum_{k=1}^{i-1} (\ell_{b_i}^{''}(\theta)\ell_{b_k}^{'}(\theta) + \ell_{b_k}^{''}(\theta)\ell_{b_i}^{'}(\theta) ) \notag \\ &= \ell_{b_i}^{'}(\theta) - \frac{1}{2}\alpha \sum_{k=1}^{i-1} \frac{\partial \ell_{b_i}^{'}(\theta)\ell_{b_k}^{'}(\theta) }{\partial \theta} \end{align} Substituting Eq. \ref{Eqn:b_i_final} into Eq. \ref{Eqn:g_1}, we obtain \begin{equation} g_1 = \frac{1}{m} \sum_{i=1}^{m} [ \ell_{b_i}^{'}(\theta) - \frac{1}{2}\alpha \sum_{k=1}^{i-1} \frac{\partial \ell_{b_i}^{'}(\theta)\ell_{b_k}^{'}(\theta) }{\partial \theta} ] \end{equation} $\ell_{b_i}^{'}(\theta)$ is the gradient that minimizes the loss on mini-bundle $b_i$. The second term $\sum_{k=1}^{i-1} \frac{\partial \ell_{b_i}^{'}(\theta)\ell_{b_k}^{'}(\theta) }{\partial \theta}$ is the inner product between gradients of different mini-bundles. It indicates that the model is optimized not only to fit the current mini-bundle but also to learn the common features among different mini-bundles. This common feature learning, which can be seen as meta knowledge, strengthens adaptation and generalization. When $m=2$, $g_1$ can be seen as the gradient for current task learning in the objective Eq. \ref{Eqn:new_obj}.
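To make the update above concrete, the following is a minimal, self-contained Python sketch of one such step: a Reptile-style inner loop over mini-bundles yields $g_1$, the quadratic regularizer yields $g_2$, and the dynamic weight follows Eq. \ref{Eqn:Dynamic_balancing_lambda_solution}. The function and variable names, the list-based parameters and the toy per-bundle loss in the usage note are illustrative assumptions, not the paper's actual implementation:

```python
# Toy sketch of a single MetaCL update: a Reptile-style inner loop over
# mini-bundles produces g1, the regularizer gradient gives g2, and the
# dynamic weight lambda follows Eq. (5). All names are illustrative.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def metacl_step(theta, theta_bar, omega, bundles, grad_fn,
                alpha=0.01, eta=0.1, gamma=0.0):
    # Inner loop: one SGD step per mini-bundle (theta^0 = theta).
    inner = list(theta)
    for b in bundles:
        g = grad_fn(inner, b)
        inner = [p - alpha * gi for p, gi in zip(inner, g)]
    m = len(bundles)
    # g1 = (theta - theta^m) / (alpha * m): effective current-task gradient.
    g1 = [(p0 - pm) / (alpha * m) for p0, pm in zip(theta, inner)]
    # g2: gradient of the regularizer sum_k Omega_k (theta_k - theta_bar_k)^2.
    g2 = [2.0 * w * (p - pb) for w, p, pb in zip(omega, theta, theta_bar)]
    # Dynamic weight lambda (Eq. 5), plus an optional small constant gamma.
    g12, g22 = dot(g1, g2), dot(g2, g2)
    lam = (0.0 if (g12 >= 0.0 or g22 == 0.0) else -g12 / g22) + gamma
    gx = [a + lam * b for a, b in zip(g1, g2)]
    return [p - eta * g for p, g in zip(theta, gx)], lam
```

For example, with a scalar toy loss $\ell_b(\theta) = (\theta - b)^2$ per bundle, \texttt{grad\_fn = lambda th, b: [2 * (th[0] - b)]}; whenever $g_1^T g_2 < 0$ and $\gamma = 0$, the returned $g_x$ is orthogonal to $g_2$, so the regularizer does not increase to first order.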
As explained in the previous subsection, the fixed balance strategy may cause several problems, and the dynamic balance is more desirable when the optimization gradients are unstable. We integrate this dynamic balance strategy into our MetaCL algorithm, called MetaCL-$\lambda$, which is summarized in Algorithm \ref{Alg:MetaCL_lambda}. \begin{algorithm} \caption{MetaCL-$\lambda$ (dynamic balance version)} \label{Alg:MetaCL_lambda} \begin{algorithmic} \REQUIRE{The training data $\mathcal{X}_j$ in task $t_j$, the model $f$ with pretrained parameter $\bar{\theta}$. Step size hyperparameters $\alpha, \eta$.} \ENSURE{The new model parameter $\theta^*$} \STATE $f_{\theta} \gets$ load the pretrained parameter $\bar{\theta}$. \FOR{ epoch$=1, 2, ...$} \FOR{ mini-batch $B$ in $\mathcal{X}_j$ } \item Randomly split mini-batch $B$ to mini-bundles $b_1, b_2, ..., b_m$. \item // Inner loop optimization. \FOR{ $i=1,2,...,m$} \item $\theta^{i} = \theta^{i-1} - \alpha \ell_{b_i}^{'}(\theta^{i-1}) $. (Note that $\theta^{0} \equiv \theta$) \ENDFOR \item // The gradient for current task learning \item $g_1 = (\theta - \theta^{m})/(\alpha * m)$ \item // The gradient for reducing forgetting \item $g_2 = \ell^{reg '}(\theta)$ \item Calculate $\lambda$ using Eq. \ref{Eqn:Dynamic_balancing_lambda_solution}. \item Calculate the optimization gradient $g_x = g_1 + \lambda g_2$. \item Update $\theta \gets \theta - \eta * g_x$ \ENDFOR \ENDFOR \STATE $\theta^* = \theta$ \end{algorithmic} \end{algorithm} \section{Experiments} We conduct experiments to evaluate the baselines and our proposed MetaCL on various public benchmarks and settings. \subsection{Datasets} We use three datasets: \emph{Permuted MNIST} \cite{kirkpatrick2017overcoming}, \emph{CIFAR100} \cite{krizhevsky2009learning} and \emph{CUB} \cite{WahCUB_200_2011}.
Permuted MNIST is a variant of the standard handwritten digits dataset, MNIST \cite{lecun1998mnist}, where the data in each task are arranged by a fixed permutation of pixels, so the data distributions of different tasks are unrelated. The CIFAR100 dataset contains 60k 32$\times$32 images with 100 different classes. The CUB dataset has roughly 12k high-resolution images with 200 fine-grained bird classes. These datasets have been widely used to evaluate continual learning methods \cite{zenke2017continual,aljundi2018memory}. The original training sets are large. To simulate the low-shot setting, we sample the first $K$ images from each class to create a small volume of training data and use the original test data for evaluation. Note that when $K=1, 5$, the setting is similar to 1-shot and 5-shot meta-learning \cite{finn2017model}. In contrast to meta learning, however, our continual low-shot learning problem does not have a meta-training dataset for learning prior knowledge before learning consecutive task streams. We observe that no algorithm can effectively learn from scratch without overfitting when $K=1, 5$. In this work, we typically sample $K=10, 20$, and leave the extreme low-shot settings $K=1, 5$ for future study. \begin{figure*}[h] \centering \setlength\tabcolsep{0.5pt} \begin{tabular}{cccc} \includegraphics[width=0.19\textheight]{mnist_200_average_acc.png} & \includegraphics[width=0.19\textheight]{mnist_5000_average_acc.png} & \includegraphics[width=0.19\textheight]{cifar_200_average_acc.png} & \includegraphics[width=0.19\textheight]{cifar_5000_average_acc.png} \end{tabular} \caption{The average accuracy as more tasks are learned, for different $K$. The parameter regularization based methods relieve knowledge forgetting, and the MetaCL algorithm can further improve model performance, especially in the low-shot setting.
\label{Fig:average_acc}} \end{figure*} \subsection{Metrics} We use the following metrics for quantitative evaluation: \textbf{Average Accuracy (ACC)}: if we define $a_{i,j}$ as the test accuracy on task $j$ after incrementally training the model from task $1$ to $i$, the average accuracy on task $i$ can be calculated by $\frac{1}{i}\sum_{j=1}^{i} a_{i,j}$. We are interested in the final average accuracy after all $n$ tasks have been trained: \begin{equation} ACC = \frac{1}{n}\sum_{j=1}^{n} a_{n,j} \end{equation} \textbf{Backward Transfer (BT)}: We adopt the forgetting measure in \cite{chaudhry2018riemannian} to calculate the backward transfer: \begin{equation} \label{Eqn:BT} BT = \frac{1}{n-1} \sum_{j=1}^{n-1} \left[ \min_{i\in \{1, 2, ..., n-1\}} \left( a_{n,j} - a_{i,j} \right) \right] \end{equation} If $BT > 0$, positive backward transfer occurs, which means that learning subsequent tasks helps improve performance on prior tasks. If $BT < 0$, on the other hand, negative backward transfer causes performance deterioration on previous tasks. \textbf{Forward Adaptation (FA)}: The forward adaptation we calculate here is similar to the intransigence measure \cite{chaudhry2018riemannian} and the forward transfer \cite{lopez2017gradient}, but we train a randomly initialized model on a single task's data as the reference model. The forward adaptation can be formulated as below: \begin{equation} FA = \frac{1}{n} \sum_{i=1}^{n} \left( a_{i,i} - a_{i}^{*} \right) \end{equation} where $a_{i}^{*}$ is the accuracy of the reference model trained on task $i$ only. We use $a_{i}^{*}$ instead of the joint training accuracy in \cite{chaudhry2018riemannian} because $a_{i}^{*}$ depends only on task $i$, which makes it easier to see how learning previous tasks affects learning the current task. For example, if $a_{i,i} - a_{i}^{*} > 0$, the knowledge from previous tasks facilitates current task learning (i.e. positive forward adaptation).
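For concreteness, the three metrics can be computed directly from the accuracy matrix $a_{i,j}$; below is a small Python sketch (the names are illustrative, and as an assumption we evaluate the minimum in $BT$ only over stages $i \ge j$ at which task $j$ has already been trained):

```python
def continual_metrics(acc, ref):
    """acc[i][j]: accuracy on task j after training tasks 1..i+1 (j <= i).
    ref[i]: accuracy of a reference model trained on task i alone."""
    n = len(acc)
    # ACC: average accuracy over all tasks after the final task is trained.
    ACC = sum(acc[n - 1][j] for j in range(n)) / n
    # BT (Eq. 12): negative values indicate forgetting of earlier tasks.
    BT = sum(min(acc[n - 1][j] - acc[i][j] for i in range(j, n - 1))
             for j in range(n - 1)) / (n - 1)
    # FA: how much previous-task knowledge helps (or hurts) each new task.
    FA = sum(acc[i][i] - ref[i] for i in range(n)) / n
    return ACC, BT, FA
```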
\subsection{Baselines} We apply three state-of-the-art parameter regularization based methods, EWC \cite{kirkpatrick2017overcoming}, PI \cite{zenke2017continual}, and MAS \cite{aljundi2018memory}, to estimate the parameter importance. We implement our algorithms MetaCL-$\beta$ and MetaCL-$\lambda$ on top of their importance estimations, denoted \{EWC, PI, MAS\}-MetaCL-\{$\beta$, $\lambda$\} (please refer to Appendix B for implementation details). We compare them against their original methods (i.e. EWC, PI, MAS) and straightforward fine-tuning. \subsection{Results} The experiments are conducted on the Permuted MNIST, CIFAR-100 and CUB datasets. We follow the single-head protocol on Permuted MNIST and the multi-head protocol on CIFAR-100 and CUB. The difference between the single-head and multi-head protocols is whether the task descriptor is available \cite{chaudhry2018riemannian}. For the statistics of these datasets, please refer to Appendix C. We run all methods 3 times and compute the 95\% confidence intervals using the standard deviation across the runs.
\begin{table}[h] \centering \small \caption{Experiment results on Permuted MNIST dataset \label{Tab:MNIST}} \begin{tabular}{|c|ccc|} \hline & \multicolumn{3}{|c|}{ Permuted MNIST ($K=20$) } \\ Method & ACC (\%) & BT (\%) & FA (\%) \\ \hline Fine tune & 46.8 $\pm$ 0.6 & \textbf{-14.8 $\pm$ 1.1} & -0.3 $\pm$ 0.8 \\ MetaCL, w/o \emph{reg} & \textbf{48.0 $\pm$ 0.8} & -16.8 $\pm$ 0.8 & \textbf{3.3 $\pm$ 0.8} \\ \hline MAS & 55.7 $\pm$ 1.1 & -6.2 $\pm$ 0.9 & 0.2 $\pm$ 0.5 \\ MAS-MetaCL-$\beta$ & 56.9 $\pm$ 0.7 & -6.0 $\pm$ 0.7 & 1.3 $\pm$ 0.6 \\ MAS-MetaCL-$\lambda$ & \textbf{57.7 $\pm$ 0.6} & \textbf{-5.4 $\pm$ 0.2} & \textbf{1.4 $\pm$ 0.3} \\ \hline PI & 50.6 $\pm$ 0.9 & -7.2 $\pm$ 1.0 & -4.6 $\pm$ 0.8 \\ PI-MetaCL-$\beta$ & 54.1 $\pm$ 0.7 & \textbf{-6.3 $\pm$ 0.4} & -1.9 $\pm$ 0.5 \\ PI-MetaCL-$\lambda$ & \textbf{56.0 $\pm$ 0.4} & -7.8 $\pm$ 0.5 & \textbf{2.1 $\pm$ 0.9} \\ \hline EWC & 50.4 $\pm$ 0.6 & -12.7 $\pm$ 0.5 & 1.5 $\pm$ 0.4 \\ EWC-MetaCL-$\beta$ & 53.3 $\pm$ 1.2 & -10.0 $\pm$ 0.5 & 1.8 $\pm$ 0.9 \\ EWC-MetaCL-$\lambda$ & \textbf{53.8 $\pm$ 0.8} & \textbf{-9.8 $\pm$ 1.0} & \textbf{2.5 $\pm$ 0.1} \\ \hline \end{tabular} \end{table} \begin{table}[h] \centering \small \caption{Experiment results on CIFAR-100 dataset \label{Tab:CIFAR}} \begin{tabular}{|c|ccc|} \hline & \multicolumn{3}{|c|}{ CIFAR-100 ($K=20$) } \\ Method & ACC (\%) & BT (\%) & FA (\%) \\ \hline Fine tune & 22.9 $\pm$ 0.8 & -14.0 $\pm$ 0.7 & 4.1 $\pm$ 0.8 \\ MetaCL, w/o \emph{reg} & \textbf{27.5 $\pm$ 1.4} & \textbf{-11.7 $\pm$ 1.3} & \textbf{7.0 $\pm$ 0.6} \\ \hline MAS & 34.2 $\pm$ 0.4 & 1.4 $\pm$ 0.5 & -0.7 $\pm$ 1.0 \\ MAS-MetaCL-$\beta$ & 37.0 $\pm$ 0.4 & \textbf{1.9 $\pm$ 0.2} & 1.4 $\pm$ 0.4 \\ MAS-MetaCL-$\lambda$ & \textbf{37.5 $\pm$ 0.9} & 1.4 $\pm$ 0.4 & \textbf{2.9 $\pm$ 1.1} \\ \hline PI & 34.8 $\pm$ 0.7 & 1.3 $\pm$ 0.5 & -0.5 $\pm$ 0.9 \\ PI-MetaCL-$\beta$ & 39.0 $\pm$ 1.1 & 2.3 $\pm$ 0.8 & 2.8 $\pm$ 0.3 \\ PI-MetaCL-$\lambda$ & \textbf{39.8 $\pm$ 0.4} & \textbf{2.4 $\pm$ 0.3} & 
\textbf{4.5 $\pm$ 1.3} \\ \hline EWC & 33.5 $\pm$ 0.6 & 1.8 $\pm$ 0.5 & -2.4 $\pm$ 0.9 \\ EWC-MetaCL-$\beta$ & 37.5 $\pm$ 0.3 & 1.9 $\pm$ 0.2 & 1.2 $\pm$ 0.7 \\ EWC-MetaCL-$\lambda$ & \textbf{38.3 $\pm$ 0.5} & \textbf{2.1 $\pm$ 0.2} & \textbf{2.0 $\pm$ 0.4} \\ \hline \end{tabular} \end{table} \begin{table}[h] \centering \small \caption{Experiment results on CUB dataset \label{Tab:CUB}} \begin{tabular}{|c|ccc|} \hline & \multicolumn{3}{|c|}{ CUB ($K=10$) } \\ Method & ACC (\%) & BT (\%) & FA (\%) \\ \hline Fine tune & 9.8 $\pm$ 0.8 & \textbf{-32.1 $\pm$ 0.9} & -15.3 $\pm$ 0.7 \\ MetaCL, w/o \emph{reg} & \textbf{11.4 $\pm$ 0.4} & -36.2 $\pm$ 0.7 & \textbf{-9.1 $\pm$ 0.6} \\ \hline MAS & 26.4 $\pm$ 1.0 & \textbf{-21.7 $\pm$ 1.3} & -7.9 $\pm$ 1.1 \\ MAS-MetaCL-$\beta$ & 30.4 $\pm$ 1.2 & -22.5 $\pm$ 2.0 & -2.0 $\pm$ 1.3 \\ MAS-MetaCL-$\lambda$ & \textbf{30.7 $\pm$ 1.2} & -23.6 $\pm$ 1.4 & \textbf{-0.3 $\pm$ 1.2} \\ \hline PI & 38.1 $\pm$ 1.0 & -8.3 $\pm$ 1.3 & -9.6 $\pm$ 1.2 \\ PI-MetaCL-$\beta$ & 46.1 $\pm$ 1.4 & -3.8 $\pm$ 0.9 & -3.3 $\pm$ 0.6 \\ PI-MetaCL-$\lambda$ & \textbf{48.7 $\pm$ 1.3} & \textbf{-3.0 $\pm$ 0.7} & \textbf{-2.9 $\pm$ 1.2} \\ \hline EWC & 32.7 $\pm$ 1.1 & -7.6 $\pm$ 2.6 & -17.4 $\pm$ 2.2 \\ EWC-MetaCL-$\beta$ & 44.9 $\pm$ 0.2 & -3.1 $\pm$ 0.2 & -8.6 $\pm$ 1.1 \\ EWC-MetaCL-$\lambda$ & \textbf{45.7 $\pm$ 0.7} & \textbf{-2.3 $\pm$ 1.1} & \textbf{-8.5 $\pm$ 1.1} \\ \hline \end{tabular} \end{table} The experiment results on these three datasets are outlined in Tab. \ref{Tab:MNIST}, \ref{Tab:CIFAR} and \ref{Tab:CUB}. Since our algorithms are integrated with various parameter regularization methods, comparisons should be made within the same regularization method to fairly verify the effectiveness of our approach. When there is no regularization for knowledge preservation, MetaCL w/o \emph{reg} demonstrates better ACC and stronger forward adaptation than straightforward fine-tuning, at a small cost in BT.
This demonstrates that MetaCL can exploit the intrinsic features and further strengthen adaptation. When we consider parameter regularization, BT improves significantly. For example, on the CIFAR-100 dataset (Tab. \ref{Tab:CIFAR}), MAS, PI and EWC all achieve better BT than fine-tuning (from -14.0\% to around 1.5\%). In addition, after applying the MetaCL algorithms on these regularization methods, all three metrics ACC, BT and FA are improved. On the CUB dataset (Tab. \ref{Tab:CUB}), EWC-MetaCL-{$\beta, \lambda$} outperform the original EWC by more than 10\% in ACC. Finally, compared with the fixed balance strategy MetaCL-$\beta$, the dynamic balance MetaCL-$\lambda$ achieves a comparable or better trade-off between BT and FA, and thus further improves ACC. \textbf{Performance with Different $K$}. We evaluate our algorithms on different sizes of training data to comprehensively assess performance. The evaluations are conducted on Permuted MNIST and CIFAR-100 with $K=20, 50, 200, 500$, in which $K=20, 50$ can be seen as low-shot scenarios and $K=200, 500$ as standard training. Fig. \ref{Fig:average_acc} shows how the average accuracy changes as more tasks are learned. The tables in Appendix D document all evaluation results. Compared with large training sets, our algorithms provide more improvement in low-shot scenarios. For example, on CIFAR-100, PI-MetaCL-{$\beta, \lambda$} outperform the original PI by a 5\% ACC margin at $K=20, 50$, 3\% at $K=200$ and 1\% at $K=500$. This is because the standard training procedure can achieve good generalization on large datasets, but it lacks the ability to obtain enough intrinsic knowledge from low-shot data. \textbf{Learning Speed Comparison}. The MetaCL algorithms not only enhance forward adaptation but also speed up the learning procedure. We run validation on the CIFAR-100 test data every epoch and record the validation accuracy to indicate the learning speed and model performance. Fig.
\ref{Fig:learning_speed} illustrates the learning curves with and without the MetaCL algorithm. The curves of the MetaCL methods are always above those of the original approaches (i.e. the orange, blue and green curves), which indicates faster learning and higher accuracy. \begin{figure}[h] \centering \setlength\tabcolsep{0.5pt} \begin{tabular}{cc} \includegraphics[width=0.18\textheight]{learning_speed_cifar_200_task1.png} & \includegraphics[width=0.18\textheight]{learning_speed_cifar_200_task10.png} \end{tabular} \caption{The learning speed and average accuracy comparison among different methods. The MetaCL methods can exploit the intrinsic features within the limited data and achieve faster learning and better model performance. \label{Fig:learning_speed}} \end{figure} \textbf{Regularization Strategy Analysis}. With the dynamic balance strategy, MetaCL-$\lambda$ generally outperforms the fixed balance method. On Permuted MNIST with $K=50$, PI-MetaCL-$\lambda$ surpasses PI-MetaCL-$\beta$ by over 8\% in terms of ACC (please see Table 3 in Appendix D). We take this experiment as an example to analyze the optimization gradients and demonstrate the effectiveness of our new balance strategy. As illustrated in Fig. \ref{Fig:opt_degree}, all methods have a similar compromise at the beginning (left figure, learning task 2). But as more tasks are learned (right figure), the fixed balance strategy struggles to learn the current task 20 (i.e. the angle $\langle g_1, g_x\rangle$ is large) and cannot provide a stable compromise between the current learning objective $\ell(\theta)$ and the regularization term $\ell^{reg}(\theta)$ (i.e. the angles $\langle g_1, g_x\rangle$ and $\langle g_2, g_x \rangle$ fluctuate dramatically). In comparison, the dynamic balance method (purple and grey curves) gives a more stable and better trade-off.
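The gradient angles used in this analysis are plain angles between gradient vectors; a small illustrative Python helper (not from the paper's codebase) is:

```python
import math

def grad_angle(u, v):
    """Angle in degrees between two gradient vectors u and v."""
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    # Clamp to [-1, 1] to guard against floating-point drift.
    return math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
```

As a sanity check, when $\lambda$ is chosen by Eq. \ref{Eqn:Dynamic_balancing_lambda_solution} and $g_1^T g_2 < 0$, the direction $g_x = g_1 + \lambda g_2$ satisfies $\langle g_x, g_2 \rangle = 0$, i.e. an angle of exactly $90^{\circ}$.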
\begin{figure}[h] \centering \includegraphics[width=0.37\textheight]{opt_degree.png} \caption{The gradient angles ($\langle g_1, g_x\rangle$, $\langle g_2, g_x \rangle$) during the optimization procedure. The dynamic balance strategy provides a more stable and better compromise. \label{Fig:opt_degree}} \end{figure} \section{Conclusion} In this paper, we generalize standard continual learning to the low-shot scenario. The low-shot setting is more practical and human-like. To address the challenges it brings, we develop a new algorithm that can exploit the intrinsic features within limited training data and strengthen adaptation ability. To provide a better compromise between learning and forgetting, a new dynamic balance strategy has been proposed. With these two technical components, our algorithm further improves upon existing state-of-the-art methods. In future work, an interesting and more challenging direction is to further decrease the training data (e.g. 1-shot and 5-shot) in continual learning. A possible solution for such extreme cases is to design new models that further exploit intrinsic information, such as the feature spatial relationships in capsule networks \cite{sabour2017dynamic}. \bibliographystyle{aaai}
\section*{Appendix \thesection\protect\indent \parbox[t]{11.15cm}{#1}} \addcontentsline{toc}{section}{Appendix \thesection\ \ \ #1}} \usepackage{accents} \newcommand{\dbtilde}[1]{\accentset{\approx}{#1}} \def{\bf{e}}{{\bf{e}}} \def{\cal{H}}{{\cal{H}}} \def{\cal{G}}{{\cal{G}}} \def{\mathbb{I}}{{\mathbb{I}}} \def{\mathbb{C}}{{\mathbb{C}}} \def{\mathbb{R}}{{\mathbb{R}}} \def\slashed {F}{{\slashed{F}}} \def{\bf{E}}{{\bf{E}}} \def{\bar{w}}{{\bar{w}}} \def\slashed {F}{\slashed {F}} \def\slashed {\gX}{\slashed {\gX}} \def\slashed {G}{\slashed {G}} \def\slashed {\gY}{\slashed {\gY}} \def\slashed {h}{\slashed {h}} \def\slashed {F}{\slashed {F}} \def\slashed {\gF}{\slashed {\gF}} \def\slashed {Q}{\slashed {Q}} \def\slashed {\gQ}{\slashed {\gQ}} \def\slashed{\FH}{\slashed{\FH}} \def\slashed{H}{\slashed{H}} \def\slashed{\gH}{\slashed{\gH}} \def\slashed{\gG}{\slashed{\gG}} \def{\hat{\nabla}} {{\hat{\nabla}}} \def{\tilde{\nabla}}{{\tilde{\nabla}}} \def{\cal D}{{\cal D}} \def{\buildrel{\circ} \over \nabla}{{\buildrel{\circ} \over \nabla}} \newcommand{\begin{eqnarray}}{\begin{eqnarray}} \newcommand{\end{eqnarray}}{\end{eqnarray}} \begin{document} \begin{titlepage} \begin{center} \vspace*{-1.0cm} \hfill DMUS-MP-22-05 \\ \vspace{2.0cm} \renewcommand{\thefootnote}{\fnsymbol{footnote}} {\Large{\bf $D=11$ $dS_5$ backgrounds with enhanced supersymmetry}} \vskip1cm \vskip 1.3cm D. Farotti and J. Gutowski \vskip 1cm {\small{\it Department of Mathematics, University of Surrey \\ Guildford, GU2 7XH, UK.}\\ \texttt{[email protected], [email protected]}} \end{center} \bigskip \begin{center} {\bf Abstract} \end{center} We classify all warped $dS_5$ backgrounds in $D=11$ supergravity with enhanced supersymmetry. 
We show that backgrounds preserving $N=16$ supersymmetries consist of either a stack of M5 branes with transverse space $\mathbb{R}^5$, or a generalized M5-brane configuration with transverse space $\mathbb{R} \times N_4$, where $N_4$ is a hyper-K\"ahler manifold and the $M5$-brane harmonic function is determined by a hyper-K\"ahler potential on $N_4$. Moreover, we find that there are no backgrounds preserving exactly $N=24$ supersymmetries. Backgrounds preserving $N=32$ supersymmetries correspond to either $\mathbb{R}^{1,10}$ or $AdS_7\times S^4$. \end{titlepage} \section{Introduction} Supersymmetry enhancement is known to play a particularly important role in the context of the geometric properties of supersymmetric black holes. It has been shown that for many supergravity theories, the near-horizon limits of supersymmetric extremal black holes{\footnote{In $N=2$, $D=4$ and $N=2$, $D=5$ supergravity, supersymmetric black holes are automatically extreme, however in $D=11$ supergravity this need not be the case.}} (and also branes) exhibit supersymmetry enhancement, which in turn imposes additional conditions on the geometry near to the event horizon. In particular, as a consequence of the enhanced supersymmetry, the isometry algebra enlarges in the near-horizon limit, containing a subalgebra isomorphic to $\mathfrak{sl}(2, \mathbb{R})$. In the context of $D=11$ and type IIA (including massive IIA) supergravity, it has been shown that near-horizon geometries with smooth fields preserve an even number of supersymmetries \cite{Gutowski:2013kma, Gran:2014fsa, Gran:2014yqa}. The proof for this utilizes Lichnerowicz type theorems for certain generalized Dirac operators defined on the horizon spatial cross-section, which is assumed to be compact and without boundary. The index of these Dirac operators vanishes, which then establishes the supersymmetry enhancement. Alternatively, one may consider the construction of a classification of highly supersymmetric solutions. 
In theories such as $D=11$ supergravity, solutions preserving the minimal $N=1$ supersymmetry have rather weak conditions on the geometry \cite{Gauntlett:2002fz, Gauntlett:2003wb}. In contrast, it is reasonable to expect that the classification of solutions with many supersymmetries will produce a much more restricted set of geometries. An important result is the homogeneity theorem, which states that backgrounds preserving $N>16$ supersymmetry are locally homogeneous \cite{Figueroa-OFarrill:2012kws}, i.e. the tangent space at each point is spanned by the Killing vectors which are constructed as bilinears of the Killing spinors. The theorem has been proven for $D=11$ supergravity and type II $D=10$ supergravities, and holds for many other theories as well. Using an adaptation of the homogeneity theorem, combined with an analysis of the associated superalgebras, it has been shown that there are no $N>16$ smooth near-horizon geometries with non-trivial fluxes and also no warped $AdS_2$ backgrounds in ten or eleven dimensions \cite{Gran:2017qus}. This non-existence theorem applies provided that the horizon section, or the internal manifold, respectively, are compact and without boundary. Moreover, in \cite{Figueroa-OFarrill:2011tnp} homogeneous $D=11$ backgrounds which are symmetric have been classified up to local isometry. This provides a classification of $N>16$ symmetric backgrounds in $D=11$ supergravity. Furthermore, using spinorial geometry techniques it has been shown that all $D=11$ backgrounds preserving $N=30,31$ supersymmetries and all type IIB backgrounds preserving $N>28$ supersymmetries are maximally supersymmetric \cite{Gran:2006cn, Gran:2010tj,Gran:2007eu}; also there is a unique plane wave solution in IIB supergravity preserving $N=28$ supersymmetry \cite{Gran:2009cz}. 
The spinorial geometry technique is a powerful tool to solve the Killing spinor equations (KSE) of supergravity theories and can be adapted to backgrounds with near maximal number of supersymmetries. It is based on the use of the gauge covariance of the KSE, together with a representation of the Clifford algebra, in an appropriate oscillator basis, acting on spinors which correspond to differential forms \cite{Gillard:2004xq, Gran:2005wu}. In this paper, we shall classify the warped product $dS_5 \times_w M_6$ solutions in $D=11$ supergravity which exhibit enhanced supersymmetry. In \cite{Farotti:2022xsd} it was shown that supersymmetric warped product $dS_5 \times_w M_6$ solutions must preserve $N=8k$ supersymmetries for $k=1,2,3,4$. Minimal supersymmetry therefore corresponds to $N=8$ supersymmetry - these were fully classified in \cite{Farotti:2022xsd}. Furthermore, it was also noted that the only possible $N=32$ warped-product $dS_5 \times_w M_6$ solutions are $\mathbb{R}^{1,10}$ with vanishing 4-form, or the maximally supersymmetric $AdS_7 \times S^4$ solution. Hence, in this paper we shall primarily be concerned with the $N=16$ and $N=24$ cases. We shall prove that there are no exactly $N=24$ solutions. This is somewhat analogous to the analysis for warped product $AdS_5 \times_w M_6$ solutions in \cite{Beck:2016lwk}, in which it was proven that there are no $N=24$ warped product $AdS_5$ solutions. However, there are differences between the $AdS_5$ and $dS_5$ analysis. The non-existence of $N=24$ $AdS_5$ solutions was established via an adapted version of the homogeneity theorem in \cite{Figueroa-OFarrill:2012kws}, together with a maximum principle argument which utilizes certain (assumed) global properties of the internal space. In contrast, for the $dS_5$ solutions, the local geometric conditions are sufficiently strong to allow an explicit integration of the Killing spinor equation along two of the directions of $M_6$. 
The analysis of the Killing spinor equation then simplifies significantly to the counting of certain parallel spinors on a hyper-K\"ahler manifold. This enables the case of $N=24$ supersymmetries to be excluded by direct inspection. Moreover, this notable simplification also allows for the full classification of the $N=16$ $dS_5$ solutions. We find two classes of $N=16$ $dS_5 \times_w M_6$ backgrounds. The first is a special class of solutions constructed in \cite{Farotti:2022xsd} for which the 4-form is parallel; the geometry is a generalized M5-brane configuration with transverse space $\mathbb{R} \times N_4$, where $N_4$ is a hyper-K\"ahler manifold and the $M5$-brane harmonic function is determined by a hyper-K\"ahler potential on $N_4$. The second class is a stack of M5 branes with transverse space $\mathbb{R}^5$ \cite{Gueven:1992hh}. Further recent progress towards classifying the $N=16$ $AdS_5$ warped product solutions has been made in \cite{Papadopoulos:2020mbw}, which develops a systematic examination of spacetimes, and other fields, which are invariant under the action of certain R-symmetry groups. It is known that there are many no-go theorems which imply non-existence of de Sitter solutions in supergravity \cite{gbds, deWit:1986mwo, Maldacena:2000mw}, in cases for which the warp factor and fluxes are smooth, and the internal manifold is smooth and compact without boundary. Our motivation is therefore to construct a systematic classification of supersymmetric de Sitter solutions in supergravity theories from a purely local perspective. In addition, we shall not assume that the spinors factorize into products of spinors on $dS_n$ and on the internal space, as it is known that such factorizations can produce a miscounting of supersymmetries \cite{Gran:2016zxk}. This classification programme has already been fully completed in the case of heterotic supergravity, including the first order corrections in $\alpha'$ \cite{Farotti:2022twf}.
For heterotic warped product de Sitter geometries, the warped product $dS_2$ solutions are in 1-1 correspondence with the direct product $AdS_3$ solutions classified in \cite{Beck:2015gqa}; moreover all warped product $dS_n$ solutions for $3 \leq n \leq 9$ are direct product $\mathbb{R}^{1,n} \times M_{9-n}$ backgrounds. This is consistent with the restrictions on heterotic $dS_n$ solutions for $n \geq 4$ found in \cite{Kutasov:2015eba}; it is also clear from the analysis of \cite{Farotti:2022twf} that the warped product $dS_2$ and $dS_3$ solutions are also highly restricted in heterotic supergravity. The warped product $dS_n$ solutions in $D=11$ supergravity exhibit similar foliation properties for $5 \leq n \leq 10$. In the case of $n=5$, $dS_5$ arises as a (conformal) foliation of $\mathbb{R}^{1,5}$, corresponding to the directions along the M5-brane worldvolume. In contrast, the warped product $dS_4$ solutions with minimal $N=8$ supersymmetry have been classified in \cite{DiGioia:2022bqg}, and $dS_4$ does not arise as an analogous foliation of $AdS_5$ or $\mathbb{R}^{1,4}$. The plan of this paper is as follows: in section 2 we summarize some key aspects of the classification of warped product $dS_5$ solutions, $dS_5 \times_w M_6$, preserving the minimal $N=8$ supersymmetry, constructed in \cite{Farotti:2022xsd}, in which it is shown that all such solutions are generalized $M5$-brane solutions for which the 5-dimensional transverse manifold is $\mathbb{R} \times N_4$, where $N_4$ is a hyper-K\"ahler manifold. A particularly useful special case, which turns out to have enhanced supersymmetry, for which the 4-form $F$ is covariantly constant, is also presented. In section 3, we use the results presented in section 2 to explicitly integrate the Killing spinor equations, acting on a generic spinor, along $M_6$. This produces a gravitino-type equation and an algebraic condition.
The analysis then splits into two subcases, depending on the properties of the algebraic condition. In each of these subcases, the Killing spinor equations can ultimately be reduced to counting parallel spinors on the hyper-K\"ahler manifold $N_4$, enabling all of the solutions with enhanced supersymmetry to be fully classified. The results are summarized in section 4. \section{$D=11$ warped product $dS_5$ backgrounds} In this section, we summarize the key results about warped $dS_5\times_w M_6$ backgrounds in $D=11$ supergravity \cite{Farotti:2022xsd}, which were derived for solutions preserving the minimal $N=8$ supersymmetry. First of all, the 11-dimensional metric is given by \begin{eqnarray} ds^2(M_{11})=A^2ds^2(dS_5)+ds^2(M_6) \label{11dmetric} \end{eqnarray} where $A$ is a function of the co-ordinates of the internal Riemannian manifold $M_6$ and \begin{eqnarray} ds^2(dS_5)=\frac{1}{\big(1+\frac{k}{4}|x|^2\big)^2}\eta_{\mu\nu}dx^{\mu}dx^{\nu}~,~~~\mu,\nu=0,1,\dots, 4 \end{eqnarray} is the metric of 5-dimensional de-Sitter spacetime, with $|x|^2=\eta_{\mu\nu}x^{\mu}x^{\nu}$ and $k=\frac{1}{\ell ^2}$. We require that the Lie derivative of the 4-form flux $F$ with respect to all of the isometries of $dS_5$ vanishes, and consequently \begin{eqnarray} F = X \end{eqnarray} where $X$ is a 4-form on $M_6$ whose components depend only on the co-ordinates of $M_6$. Moreover, as we have mentioned previously, in what follows we do not make any assumption on the smoothness of $A$ or the 4-form flux, nor do we require the internal manifold to be smooth or compact without boundary. Rather, the analysis is entirely local. \\ \indent Let us introduce the space-time vielbein \begin{eqnarray} \textbf{e}^{\mu}=\frac{A}{1+\frac{k}{4}|x|^2}dx^{\mu}~,~~~~~\textbf{e}^a=e^a_{~\alpha}(y)dy^{\alpha} \label{viel1} \end{eqnarray} where $a=5,6,\dots ,\sharp$ is a frame index on $M_6$ and we have denoted by $y^{\alpha}$ the co-ordinates on $M_6$.
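For orientation, it may be useful to record the curvature conventions implied by this form of the metric. The expression above is the standard stereographic form of a 5-dimensional spacetime of constant sectional curvature $k$, so that (a standard fact, stated here for convenience rather than derived in the text)

```latex
\begin{eqnarray}
R_{\mu\nu\rho\sigma}=k\big(g_{\mu\rho}g_{\nu\sigma}-g_{\mu\sigma}g_{\nu\rho}\big)~,~~~
R_{\mu\nu}=4k\,g_{\mu\nu}~,~~~R=20k~.
\end{eqnarray}
```

In particular, $dS_5$ has de Sitter radius $\ell$, consistent with $k=\frac{1}{\ell^2}$.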
The bosonic field equations of $D=11$ supergravity \begin{eqnarray} R_{AB}=\frac{1}{12}F_{AL_1L_2L_3}F_B^{~L_1L_2L_3}-\frac{1}{144}g_{AB}F^2 \end{eqnarray} \begin{eqnarray} d\star_{11}F-\frac{1}{2}F\wedge F=0 \label{4formEOMS} \end{eqnarray} and the Bianchi identities \begin{eqnarray} dF=0 \end{eqnarray} can be reduced on $M_6$ yielding \begin{eqnarray} 4kA^{-2}-A^{-1}\widetilde{\nabla}^a\widetilde{\nabla}_a A-4A^{-2}(\widetilde{\nabla}A)^2+\frac{1}{12}G^2=0 \label{eins456} \end{eqnarray} \begin{eqnarray} \widetilde{R}_{ab}=5A^{-1}\widetilde{\nabla}_a\widetilde{\nabla}_b A+\frac{1}{6}G^2\delta_{ab}-\frac{1}{2}G_{cb}G^c_{~a} \label{eins457} \end{eqnarray} \begin{eqnarray} \tilde{d}(A^5 G)=0 \label{gaugeds5} \end{eqnarray} \begin{eqnarray} \tilde{d}\star_6 G=0 \label{bianchids5} \end{eqnarray} where $\widetilde{\nabla}$ denotes the Levi-Civita connection on $M_6$, $\tilde{d}$ is the exterior derivative on $M_6$, and \begin{eqnarray} G=\star_6 X~. \label{gdef} \end{eqnarray} Furthermore, the KSE of $D=11$ supergravity \begin{eqnarray} \bigg(\nabla_A-\frac{1}{288}\Gamma_A^{~~B_1B_2B_3B_4}F_{B_1B_2B_3B_4}+\frac{1}{36}F_{AB_1B_2B_3}\Gamma^{B_1B_2B_3}\bigg)\epsilon=0 \label{KSE1} \end{eqnarray} can be integrated along the de-Sitter directions yielding \begin{eqnarray} \epsilon=\big(1+\frac{k}{4}||x||^2\big)^{-\frac{1}{2}}\bigg(1+x^{\mu}\Gamma_{\mu}\big(-\frac{1}{2}\widetilde{\slashed{\nabla}}A+\frac{A}{288}\slashed{X}\big)\bigg)\psi \end{eqnarray} where $\psi$ is a 32-component Majorana spinor on $M_6$ which satisfies \begin{eqnarray} \widetilde{\nabla}_a\psi=\bigg(-\frac{1}{12}G_{ab}\Gamma^b+\frac{1}{12}\Gamma_a^{~~bc}G_{bc}\bigg)\Gamma^{(7)}\psi \label{killingG} \end{eqnarray} and \begin{eqnarray} \Gamma^{(7)}=\frac{1}{6!}\epsilon_{a_1a_2\dots a_6}\Gamma^{a_1a_2\dots a_6} \end{eqnarray} is the highest rank Gamma matrix on $M_6$. 
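In the manipulations which follow, some elementary properties of $\Gamma^{(7)}$ are used repeatedly; we record them here for convenience. Since $M_6$ is even-dimensional and Euclidean, $\Gamma^{(7)}$ anticommutes with each of the six frame gamma matrices, and squaring the product of six anticommuting, unit-square gamma matrices produces a factor $(-1)^{15}$. Hence

```latex
\begin{eqnarray}
\{\Gamma^a,\Gamma^{(7)}\}=0~,~~~
\big(\Gamma^{(7)}\big)^2=-1~,~~~
\big(\Gamma^6\Gamma^{(7)}\big)^2=1~.
\end{eqnarray}
```

In particular, $\frac{1}{2}\big(1\pm\Gamma^6\Gamma^{(7)}\big)$ are projection operators, which underlies the decomposition of spinors into $\pm1$ eigenspaces of $\Gamma^6\Gamma^{(7)}$ used in section 3.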
As shown in \cite{Farotti:2022xsd}, we can pick co-ordinates $\{t,s,z^i\}$, with $i=1,2,3,4$ on $M_6$ such that \begin{eqnarray} ds^2(M_6)=f^{-4}(s,z)ds^2+\frac{1}{k}f^2(s,z)dt^2+f^{-4}(s,z)ds^2(N_4) \label{M6rescaleds} \end{eqnarray} where $N_4$ is a 4-dimensional hyper-K\"ahler manifold, with metric tensor \begin{eqnarray} ds^2(N_4)=h_{ij}(z)dz^idz^j~. \end{eqnarray} Moreover, the warp factor $A$ is given by \begin{eqnarray} A=t\cdot f(s,z) \label{Afine} \end{eqnarray} and the 2-form flux $G$ is \begin{eqnarray} G=\frac{6}{\sqrt{k}}df\wedge dt~. \label{Gcoordinate} \end{eqnarray} Setting $f=H^{-{1 \over 6}}$, and using \eqref{M6rescaleds} and \eqref{Afine}, the 11-dimensional metric tensor \eqref{11dmetric} is given by \begin{eqnarray} ds^2(M_{11}) = H^{-{1 \over 3}} ds^2({\mathbb{R}}^{1,5}) + H^{{2 \over 3}} ds^2 ({\mathbb{R}} \times N_4) \label{metricHH} \end{eqnarray} and the Einstein equation \eqref{eins456} simplifies to \begin{eqnarray} \Box_5 H =0 \label{harmonicH} \end{eqnarray} where $\Box_5$ denotes the Laplacian on ${\mathbb{R}} \times N_4$. Moreover, \eqref{Gcoordinate} yields \begin{eqnarray} F = \star_5 dH \label{4formH} \end{eqnarray} where $\star_5$ is the Hodge dual on ${\mathbb{R}} \times N_4$. The geometry given by \eqref{metricHH}-\eqref{4formH} corresponds to that of a generalized M5-brane configuration, with transverse space ${\mathbb{R}} \times N_4$ \cite{Gauntlett:1997pk}.\\ \subsection{Solutions with parallel 4-form} A sub-class of these solutions corresponds to those backgrounds for which the 4-form $F$ is covariantly constant with respect to the 11-dimensional Levi-Civita connection; as we shall prove in section 3.1, this class of solutions actually has enhanced supersymmetry. These backgrounds satisfy \begin{eqnarray} \widetilde{\nabla} G=0~. \label{covconstG} \end{eqnarray} Note that \eqref{covconstG} implies \begin{eqnarray} G^2=c^2 \label{GC} \end{eqnarray} with $c$ constant.
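The implication \eqref{covconstG} $\Rightarrow$ \eqref{GC} is immediate: since the Levi-Civita connection $\widetilde{\nabla}$ is metric-compatible,

```latex
\begin{eqnarray}
\widetilde{\nabla}_a\big(G^2\big)=\widetilde{\nabla}_a\big(G_{bc}G^{bc}\big)
=2\,G^{bc}\,\widetilde{\nabla}_aG_{bc}=0
\end{eqnarray}
```

so $G^2$ is constant on $M_6$.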
The condition \eqref{covconstG} yields the following set of PDEs \begin{eqnarray} \frac{\partial^2f}{\partial s^2}+f^{-1}\bigg(\frac{\partial f}{\partial s}\bigg)^2-2f^{-1}h^{ij}\frac{\partial f}{\partial z^i}\frac{\partial f}{\partial z^j}=0 \label{pde1} \end{eqnarray} \begin{eqnarray} \frac{\partial^2 f}{\partial s\partial z^i}+3f^{-1}\frac{\partial f}{\partial s}\frac{\partial f}{\partial z^i}=0 \label{pde2} \end{eqnarray} \begin{eqnarray} {\buildrel{\circ} \over \nabla}_i {\buildrel{\circ} \over \nabla}_j f+3f^{-1}\frac{\partial f}{\partial z^i}\frac{\partial f}{\partial z^j}-2f^{-1}h_{ij}\bigg(\bigg(\frac{\partial f}{\partial s}\bigg)^2+h^{kl}\frac{\partial f}{\partial z^k}\frac{\partial f}{\partial z^l}\bigg)=0~. \label{pde3} \end{eqnarray} Equations \eqref{pde1}-\eqref{pde3}, supplemented by \eqref{GC}, are equivalent to \begin{eqnarray} f(s,z)=\sqrt{2}\bigg(c^2s^2+P(z)\bigg)^{\frac{1}{4}} \label{fszizi2} \end{eqnarray} \begin{eqnarray} ({\buildrel{\circ} \over \nabla} P)^2=4c^2P \label{usefulPP2} \end{eqnarray} \begin{eqnarray} {\buildrel{\circ} \over \nabla}_i {\buildrel{\circ} \over \nabla}_j P=2c^2 h_{ij} \label{usefulhij2} \end{eqnarray} where ${\buildrel{\circ} \over \nabla}$ denotes the Levi-Civita connection on $N_4$. In particular, by taking $c=0$ and $N_4=\mathbb{R}^4$, we recover the maximally supersymmetric solution $\mathbb{R}^{1,10}$ with vanishing 4-form.\footnote{The other maximally supersymmetric solution which is a warped product $dS_5$ solution is $AdS_7\times S^4$, which arises as the near-horizon limit of the standard M5-brane, with transverse space $\mathbb{R}^5$.} Moreover, \eqref{usefulhij2} implies that ${P \over 2c^2}$ is a hyper-K\"ahler potential for $N_4$ \cite{Swann}. \section{Integration of the KSE on $M_6$} In this section, we integrate the KSE \eqref{killingG} along two of the directions of $M_6$, corresponding to the co-ordinates $s$ and $t$.
The resulting reduced Killing spinor equations ultimately are, after some further manipulation, equivalent to requiring the existence of certain parallel spinors on $N_4$. As we shall see, this enables us to classify $dS_5$ backgrounds with extended supersymmetry. First of all, let us introduce the vielbein on $M_6$ \begin{eqnarray} \tilde{\textbf{e}}^5=f^{-2}ds ~,~~~\tilde{\textbf{e}}^6=\frac{1}{\sqrt{k}}f dt~,~~~\tilde{\textbf{e}}^I=f^{-2}\buildrel{\circ} \over e^I \label{vielbeinm6} \end{eqnarray} where $\buildrel{\circ} \over e^I=\buildrel{\circ} \over e^I_{~i}dz^i$ and \begin{eqnarray} h_{ij}=\delta_{IJ}\buildrel{\circ} \over e^I_{~i}\buildrel{\circ} \over e^J_{~j}~. \end{eqnarray} Using \eqref{vielbeinm6}, equation \eqref{M6rescaleds} reads \begin{eqnarray} ds^2(M_6)=(\tilde{\textbf{e}}^5)^2+(\tilde{\textbf{e}}^6)^2+\delta_{IJ}\tilde{\textbf{e}}^I\tilde{\textbf{e}}^J~. \end{eqnarray} The non-vanishing components of the spin connection on $M_6$ with respect to the frame \eqref{vielbeinm6} are given by \begin{eqnarray} \widetilde{\Omega}_{6,65}=f\frac{\partial f}{\partial s} \label{spinm61} \end{eqnarray} \begin{eqnarray} \widetilde{\Omega}_{6,6I}=-\frac{1}{2}\widetilde{\Omega}_{5,5I}=f\buildrel{\circ} \over e_If \end{eqnarray} \begin{eqnarray} \widetilde{\Omega}_{I,5J}=2f\frac{\partial f}{\partial s}\delta_{IJ} \end{eqnarray} \begin{eqnarray} \widetilde{\Omega}_{I,JK}=f^2\buildrel{\circ} \over\Omega_{I,JK}-4f\delta_{I[J}\buildrel{\circ} \over e_{K]} f \label{spinm6fin} \end{eqnarray} where $\buildrel{\circ} \over\Omega_{I,JK}$ is the spin connection on $N_4$.\\ \indent Using \eqref{spinm61}-\eqref{spinm6fin} and \eqref{Gcoordinate}, the KSE \eqref{killingG} read \begin{eqnarray} \frac{\partial\psi}{\partial t}=\frac{1}{2\sqrt{k}}\bigg(f^2\frac{\partial f}{\partial s}\Gamma^5+f^2\buildrel{\circ} \over \nabla_{I} f \Gamma^I\bigg)\big(\Gamma^6+\Gamma^{(7)}\big)\psi \label{partialt} \end{eqnarray} \begin{eqnarray} \frac{\partial \psi}{\partial
s}=f^{-1}\buildrel{\circ} \over \nabla_{I} f\Gamma^6\Gamma^5\Gamma^I\big(\Gamma^{6}+\Gamma^{(7)}\big)\psi-\frac{1}{2}f^{-1}\frac{\partial f}{\partial s} \Gamma^6 \Gamma^{(7)}\psi \label{partials} \end{eqnarray} \begin{eqnarray} \buildrel{\circ} \over \nabla_I \psi&=&f^{-1}\frac{\partial f}{\partial s}\Gamma^6\Gamma_I \Gamma^5\big(\Gamma^6+\Gamma^{(7)}\big)\psi+f^{-1}\buildrel{\circ} \over \nabla_{J}f\Gamma^6\Gamma_I^{~~J}\big(\Gamma^6+\Gamma^{(7)}\big)\psi \nonumber \\ &-&\frac{1}{2}f^{-1}\buildrel{\circ} \over \nabla_{I} f\Gamma^6\Gamma^{(7)}\psi~. \label{partiali} \end{eqnarray} Since $f$ does not depend on $t$, equation \eqref{partialt} can be easily integrated, yielding \begin{eqnarray} \psi=e^{t\mathcal{X}}\eta~,~~~~~\frac{\partial\eta}{\partial t}=0 \label{psitx} \end{eqnarray} where $\eta$ is a 32-component Majorana spinor and \begin{eqnarray} \mathcal{X}:=\frac{1}{2\sqrt{k}}\bigg(f^2\frac{\partial f}{\partial s}\Gamma^5+f^2\buildrel{\circ} \over \nabla_{I} f \Gamma^I\bigg)\big(\Gamma^6+\Gamma^{(7)}\big)~. \end{eqnarray} Notice that $\mathcal{X}^2=0$, thus $e^{t\mathcal{X}}=1+t\mathcal{X}$ and \eqref{psitx} yields \begin{eqnarray} \psi=\eta+\frac{t}{2\sqrt{k}}\bigg(f^2\frac{\partial f}{\partial s}\Gamma^5+f^2\buildrel{\circ} \over \nabla_{I} f \Gamma^I\bigg)\big(\Gamma^6+\Gamma^{(7)}\big)\eta~. \label{psitintegrated} \end{eqnarray} We can rewrite \eqref{partials}, \eqref{partiali} and \eqref{psitintegrated} covariantly on $\mathbb{R}\times N_4$ as follows \begin{eqnarray} D_{\alpha}\psi=f^{-1}D_{\beta}f\Gamma^6\Gamma_{\alpha}^{~~\beta}(\Gamma^6+\Gamma^{(7)})\psi-\frac{1}{2}f^{-1}D_{\alpha}f\Gamma^6\Gamma^{(7)}\psi \label{KillingN51} \end{eqnarray} and \begin{eqnarray} \psi=\eta+\frac{t}{2\sqrt{k}}f^2D_{\alpha}f\Gamma^{\alpha}\big(\Gamma^6+\Gamma^{(7)}\big)\eta \label{KillingN52} \end{eqnarray} where $D$ is the Levi-Civita connection on $\mathbb{R}\times N_4$ and $\alpha,\beta$ are frame indices on $\mathbb{R}\times N_4$. 
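The nilpotency of $\mathcal{X}$ used above can be verified directly. Writing $\mathcal{X}=W\big(\Gamma^6+\Gamma^{(7)}\big)$ with $W=\frac{1}{2\sqrt{k}}\big(f^2\frac{\partial f}{\partial s}\Gamma^5+f^2\buildrel{\circ} \over \nabla_{I} f \Gamma^I\big)$, and noting that $\Gamma^5$ and $\Gamma^I$ anticommute with both $\Gamma^6$ and $\Gamma^{(7)}$,

```latex
\begin{eqnarray}
\mathcal{X}^2=W\big(\Gamma^6+\Gamma^{(7)}\big)W\big(\Gamma^6+\Gamma^{(7)}\big)
=-W^2\big(\Gamma^6+\Gamma^{(7)}\big)^2
=-W^2\Big(1+\big(\Gamma^{(7)}\big)^2\Big)=0
\end{eqnarray}
```

on using $\{\Gamma^6,\Gamma^{(7)}\}=0$ and $\big(\Gamma^{(7)}\big)^2=-1$.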
Inserting \eqref{KillingN52} into \eqref{KillingN51}, the vanishing of the $t$-independent terms and the terms linear in $t$ yields \begin{eqnarray} D_{\alpha}\eta=f^{-1}D_{\beta}f\Gamma^6\Gamma_{\alpha}^{~~\beta}(\Gamma^6+\Gamma^{(7)})\eta-\frac{1}{2}f^{-1}D_{\alpha}f\Gamma^6\Gamma^{(7)}\eta \label{KillingN53} \end{eqnarray} and \begin{eqnarray} \mathcal{H}_{\alpha\beta}\Gamma^{\beta}(\Gamma^6+\Gamma^{(7)})\eta=0 \label{KillingN54} \end{eqnarray} respectively, where we have defined \begin{eqnarray} \mathcal{H}_{\alpha\beta}:=f^2D_{\alpha}D_{\beta}f+3fD_{\alpha}fD_{\beta}f-2f(Df)^2\delta_{\alpha\beta}~. \label{definHIJ} \end{eqnarray} Two distinct cases must be considered, depending on whether $\mathcal{H}_{\alpha\beta}$ vanishes or not. \subsection{$\mathcal{H}_{\alpha\beta}=0$} If $\mathcal{H}_{\alpha\beta}=0$, then \eqref{KillingN54} is automatically satisfied and \eqref{definHIJ} implies \begin{eqnarray} f^2D_{\alpha}D_{\beta}f+3fD_{\alpha}fD_{\beta}f-2f(Df)^2\delta_{\alpha\beta}=0~. \label{KillingN55} \end{eqnarray} Decomposing \eqref{KillingN55} on $N_4$, we find \eqref{pde1}-\eqref{pde3}, hence $\widetilde{\nabla} G=0$ and equations \eqref{fszizi2}-\eqref{usefulhij2} hold. In the following, it is convenient to write $\eta$ as \begin{eqnarray} \eta=\eta^+ +\eta^- \label{chir20} \end{eqnarray} where \begin{eqnarray} \Gamma^6\Gamma^{(7)}\eta^{\pm}=\pm \eta^{\pm}~. \label{chir21} \end{eqnarray} Using \eqref{chir20} and \eqref{chir21}, equation \eqref{KillingN53} yields \begin{eqnarray} D_{\alpha}\eta^+=2f^{-1}D_{\beta}f\Gamma_{\alpha}^{~~\beta}\eta^+-\frac{1}{2}f^{-1}D_{\alpha}f\eta^+ \label{chir1} \end{eqnarray} and \begin{eqnarray} D_{\alpha}\eta^-=\frac{1}{2}f^{-1}D_{\alpha}f\eta^-~. \label{chir2} \end{eqnarray} Let us perform two different conformal transformations on the spinors $\eta^+$ and $\eta^-$ as follows \begin{eqnarray} \hat{\eta}^+=f^{\frac{1}{2}}\eta^+~,~~~~\hat{\eta}^-=f^{-\frac{1}{2}}\eta^-~.
\label{chir24} \end{eqnarray} Substituting \eqref{chir24} into \eqref{chir1} and \eqref{chir2}, we find \begin{eqnarray} D_{\alpha}\hat{\eta}^+=2f^{-1}D_{\beta}f\Gamma_{\alpha}^{~~\beta}\hat{\eta}^+ \label{chir3} \end{eqnarray} and \begin{eqnarray} D_{\alpha}\hat{\eta}^-=0 \label{chir4} \end{eqnarray} respectively. Equation \eqref{chir4} implies that \begin{eqnarray} \hat{\eta}^-=\sigma^- \label{chir233} \end{eqnarray} where $\sigma^-$ is a 32-component Majorana spinor independent of $s$ which is covariantly constant on $N_4$, i.e. \begin{eqnarray} \buildrel{\circ} \over \nabla_I \sigma^-=0~. \label{chir25} \end{eqnarray} Let us now analyze \eqref{chir3}. The 5-component of \eqref{chir3} yields \begin{eqnarray} \frac{\partial \hat{\eta}^+}{\partial s}=\frac{1}{2(c^2s^2+P(z))}\buildrel{\circ} \over\nabla_I P\Gamma^5\Gamma^I \hat{\eta}^+ \label{chir6} \end{eqnarray} where we have implemented \eqref{fszizi2}. We shall explicitly integrate this condition by setting \begin{eqnarray} \hat{\eta}^+=\exp\bigg(u(s,z)\buildrel{\circ} \over \nabla_I P\Gamma^5\Gamma^I\bigg)\phi^+~,~~~~~~~~\frac{\partial \phi^+}{\partial s}=0 \ . \label{chir7} \end{eqnarray} On substituting this into \eqref{chir6}, one obtains \begin{eqnarray} \frac{\partial u}{\partial s}=\frac{1}{2(c^2 s^2+P(z))}~. \label{chir10} \end{eqnarray} Notice that $P$ is positive by virtue of \eqref{usefulPP2}. Hence, a solution to equation \eqref{chir10} is given by \begin{eqnarray} u(s,z)=\frac{1}{2c\sqrt{P(z)}}\arctan\bigg(\frac{cs}{\sqrt{P(z)}}\bigg)~. \label{chir11} \end{eqnarray} Moreover, equation \eqref{chir7} implies \begin{eqnarray} \hat{\eta}^+=\bigg\{\cos\bigg(2c\sqrt{P(z)} u(s,z)\bigg)\mathbb{I}+\frac{1}{2c\sqrt{P(z)}}\sin\bigg(2c\sqrt{P(z)}u(s,z)\bigg)\buildrel{\circ} \over \nabla_I P\Gamma^5\Gamma^I \bigg\}\phi^+~.
\nonumber \\ \label{chir12} \end{eqnarray} Inserting \eqref{chir11} into \eqref{chir12}, we get \begin{eqnarray} \hat{\eta}^+=\frac{1}{\sqrt{c^2s^2+P(z)}}\bigg(\sqrt{P(z)}\mathbb{I}+\frac{s}{2\sqrt{P(z)}}\buildrel{\circ} \over \nabla_I P\Gamma^5\Gamma^I\bigg)\phi^+~. \nonumber \\ \label{chir13} \end{eqnarray} The $I$-component of \eqref{chir3} is given by \begin{eqnarray} \buildrel{\circ} \over \nabla_I \hat{\eta}^+=\frac{c^2s}{c^2s^2+P(z)}\Gamma_I\Gamma^5\hat{\eta}^++\frac{1}{2(c^2s^2+P(z))}\buildrel{\circ} \over \nabla_J P\Gamma_I^{~~J}\hat{\eta}^+~. \label{chir14} \end{eqnarray} Inserting \eqref{chir13} into \eqref{chir14} and using \eqref{usefulhij2}, we get \begin{eqnarray} \buildrel{\circ} \over \nabla_I \sigma^+=0 \label{chir16} \end{eqnarray} where $\sigma^+$ is given by \begin{eqnarray} \phi^+=\frac{1}{\sqrt{P(z)}}\Gamma^I\buildrel{\circ} \over \nabla_I P\sigma^+~. \label{chir15} \end{eqnarray} Inserting \eqref{chir15} into \eqref{chir13}, we obtain \begin{eqnarray} \hat{\eta}^+=\frac{1}{\sqrt{c^2s^2+P(z)}}\bigg(\Gamma^I\buildrel{\circ} \over \nabla_I P+2c^2s\Gamma^5\bigg)\sigma^+~. \label{chir17} \end{eqnarray} Moreover, \eqref{KillingN52} is equivalent to \begin{eqnarray} \psi=\eta^++\eta^-+\frac{1}{\sqrt{2k}}t(c^2s^2+P(z))^{-\frac{1}{4}}\bigg(2c^2s\Gamma^5+\Gamma^I\buildrel{\circ} \over \nabla_I P\bigg)\Gamma^6\eta^+ \label{chir22} \end{eqnarray} where we have used \eqref{chir20} and \eqref{chir21}. Using \eqref{chir24}, \eqref{chir233} and \eqref{chir17}, equation \eqref{chir22} yields \begin{eqnarray} \psi&=&2^{-\frac{1}{4}}\big(c^2s^2+P(z)\big)^{-\frac{5}{8}}\bigg(2c^2s\Gamma^5+\Gamma^I\buildrel{\circ} \over \nabla_I P\bigg)\sigma^+ +2^{\frac{1}{4}}\big(c^2s^2+P(z)\big)^{\frac{1}{8}}\sigma^- \nonumber \\ &-&\frac{2^{\frac{5}{4}}c^2}{\sqrt{k}}t\big(c^2s^2+P(z)\big)^{\frac{1}{8}}\Gamma^6\sigma^+~.
\label{chir40} \end{eqnarray} Defining $\check{\sigma}^+:=2^{-\frac{1}{4}}\sigma^+$ and $\check{\sigma}^-:=2^{\frac{1}{4}}\sigma^-$ and dropping the check for simplicity, \eqref{chir40} is equivalent to \begin{eqnarray} \psi&=&\big(c^2s^2+P(z)\big)^{-\frac{5}{8}}\bigg(2c^2s\Gamma^5+\Gamma^I\buildrel{\circ} \over \nabla_I P\bigg)\sigma^+ +\big(c^2s^2+P(z)\big)^{\frac{1}{8}}\sigma^- \nonumber \\ &-&2c^2\sqrt{\frac{2}{k}}~t\big(c^2s^2+P(z)\big)^{\frac{1}{8}}\Gamma^6\sigma^+~. \label{psic1} \end{eqnarray} Let us count the number of supersymmetries preserved by these backgrounds. To this end, define \begin{eqnarray} \mathcal{S}^{\pm}:=\textrm{span}\{\sigma^{\pm}\} \end{eqnarray} where $\sigma^{\pm}$ satisfy $\Gamma^6\Gamma^{(7)}\sigma^{\pm}=\pm \sigma^{\pm}$ and \begin{eqnarray} \buildrel{\circ} \over \nabla_I \sigma^{\pm}=0~. \end{eqnarray} Notice that if $\sigma^{\pm}\in\mathcal{S}^{\pm}$, then $\Gamma_{\mu\nu}\sigma^{\pm}\in\mathcal{S}^{\pm}$, where $\mu,\nu$ denote the de-Sitter directions. Hence, using the argument of Section 2.2 of \cite{Farotti:2022xsd}, it follows that \begin{eqnarray} \textrm{dim}~\mathcal{S}^{\pm}=8k^{\pm}~,~~~k^{\pm}=1,2~. \label{count3} \end{eqnarray} Moreover, if $\sigma^+\in\mathcal{S}^+$, then $\Gamma_{\mu}\sigma^+\in\mathcal{S}^-$, hence \begin{eqnarray} \textrm{dim}~\mathcal{S}^{+}=\textrm{dim}~\mathcal{S}^{-} \label{observation} \end{eqnarray} that is $k^+=k^-:=k$. Using \eqref{psic1}, \eqref{count3} and \eqref{observation}, it follows that the number of supersymmetries is \begin{eqnarray} N=\textrm{dim}~\mathcal{S}^{+}+\textrm{dim}~\mathcal{S}^{-}=16k~~~~~~k=1,2~. \label{count1} \end{eqnarray} Notice that equation \eqref{count1} implies that in this class of solutions there are no backgrounds preserving exactly $N=24$ supersymmetries. 
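As an independent consistency check on this class of backgrounds (a short computation, not required for the argument above), one can verify directly that $H=\frac{1}{8}\big(c^2s^2+P(z)\big)^{-\frac{3}{2}}$, as in \eqref{HP}, is harmonic on $\mathbb{R}\times N_4$, as required by \eqref{harmonicH}. Setting $Q:=c^2s^2+P$ and using ${\buildrel{\circ} \over \nabla}^2P=8c^2$ (the trace of \eqref{usefulhij2}) together with \eqref{usefulPP2}, one finds

```latex
\begin{eqnarray}
\partial_s^2 H&=&-\frac{3c^2}{8}Q^{-\frac{5}{2}}+\frac{15c^4s^2}{8}Q^{-\frac{7}{2}} \nonumber \\
{\buildrel{\circ} \over \nabla}^2 H&=&-\frac{3c^2}{2}Q^{-\frac{5}{2}}+\frac{15c^2P}{8}Q^{-\frac{7}{2}}
\end{eqnarray}
```

so that $\Box_5 H=\partial_s^2H+{\buildrel{\circ} \over \nabla}^2H=-\frac{15c^2}{8}Q^{-\frac{5}{2}}+\frac{15c^2}{8}\big(c^2s^2+P\big)Q^{-\frac{7}{2}}=0$.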
To summarize, for this class of solutions, the metric is given by \begin{eqnarray} ds^2(M_{11})=H^{-\frac{1}{3}}ds^2(\mathbb{R}^{1,5})+H^{\frac{2}{3}}ds^2(\mathbb{R}\times N_4) \label{metricH} \end{eqnarray} where $N_4$ is a hyper-K\"ahler manifold and $H$ satisfies \eqref{KillingN55}, that is \begin{eqnarray} -3HD_{\alpha}D_{\beta}H+5D_{\alpha}HD_{\beta}H-(DH)^2\delta_{\alpha\beta}=0 \label{hessianH} \end{eqnarray} where the functions $f$ and $H$ are related by $f=H^{-{1 \over 6}}$. In particular, \eqref{hessianH} implies that $H$ is harmonic on $\mathbb{R} \times N_4$, in agreement with \eqref{harmonicH}. Moreover, the 4-form is given by \eqref{4formH} and is covariantly constant with respect to the Levi-Civita connection of 11-dimensional spacetime. In this case, it follows that the geometry is that of a generalized $M5$-brane configuration, for which the transverse space is ${\mathbb{R}} \times N_4$, and the harmonic function $H$ on $\mathbb{R} \times N_4$ is determined in terms of a hyper-K\"ahler potential $P$ on $N_4$ via \begin{eqnarray} H=\frac{1}{8}\big(c^2s^2+P(z)\big)^{-\frac{3}{2}} \label{HP} \end{eqnarray} where $P$ satisfies \eqref{usefulhij2}. \subsection{$\mathcal{H}_{\alpha\beta}\ne 0$} If $\mathcal{H}_{\alpha\beta}\ne 0$, then \eqref{KillingN54} yields \begin{eqnarray} (\Gamma^6+\Gamma^{(7)})\eta=0 \label{count10} \end{eqnarray} hence $\eta=\eta^-$. Using \eqref{count10}, \eqref{KillingN53} simplifies to \begin{eqnarray} D_{\alpha}\chi^-=0 \label{aux3491} \end{eqnarray} where we have defined $\chi^-:=f^{-\frac{1}{2}}\eta^-$. Equation \eqref{aux3491} implies that \begin{eqnarray} \chi^-=\sigma^- \label{chir23} \end{eqnarray} where $\sigma^-$ is a spinor independent of $s$ which satisfies \begin{eqnarray} \buildrel{\circ} \over \nabla_I \sigma^-=0~, \end{eqnarray} as in \eqref{chir25}. Using \eqref{chir23}, equation \eqref{KillingN52} yields \begin{eqnarray} \psi=f^{\frac{1}{2}}\sigma^-~.
\label{chir26} \end{eqnarray} Equations \eqref{count3} and \eqref{chir26} imply that the number of supersymmetries preserved by these backgrounds is \begin{eqnarray} N=8k^{-}~,~~~~k^{-}=1,2~. \label{countfinal} \end{eqnarray} For this class of solutions, the metric tensor is given by \begin{eqnarray} ds^2(M_{11})=H^{-\frac{1}{3}}ds^2(\mathbb{R}^{1,5})+H^{\frac{2}{3}}ds^2(\mathbb{R}\times N_4) \label{metricc} \end{eqnarray} where $N_4$ is a hyper-K\"ahler manifold and $H$ is harmonic on $\mathbb{R}\times N_4$, i.e.\ it satisfies \eqref{harmonicH}. Moreover, the 4-form is given by \eqref{4formH}. \\ \indent Notice that equation \eqref{countfinal} implies that $N=24$ is excluded in this class of solutions as well, hence there are no $D=11$ warped $dS_5$ backgrounds preserving exactly $N=24$ supersymmetries. Moreover, the case $N=8$ has already been analyzed in \cite{Farotti:2022xsd}. Hence, we are left to consider $N=16$. In this case, \eqref{countfinal} implies that there are 16 linearly independent negative chirality spinors $\sigma^-$ which are covariantly constant on $N_4$. Using \eqref{observation}, it follows that there are also 16 positive chirality spinors $\sigma^+$ which are covariantly constant on $N_4$. Hence there are 32 linearly independent covariantly constant spinors on $N_4$. This implies $N_4=\mathbb{R}^4$, and \eqref{metricc}, \eqref{4formH} and \eqref{harmonicH} yield \begin{eqnarray} &&ds^2(M_{11})=H^{-\frac{1}{3}}ds^2(\mathbb{R}^{1,5})+H^{\frac{2}{3}}ds^2(\mathbb{R}^5) \nonumber \\ &&F=\star_5 dH \nonumber \\ &&\Box_5 H=0~. \label{M5brane} \end{eqnarray} The configuration \eqref{M5brane} corresponds to the standard M5-brane, which indeed preserves 16 supersymmetries in the bulk \cite{Gueven:1992hh}. \section{Conclusion} In this work, we have fully classified the warped product $dS_5$ backgrounds in $D=11$ supergravity with enhanced supersymmetry.
It is known from \cite{Farotti:2022xsd} that supersymmetric warped product backgrounds must preserve $N=8k$ supersymmetries for $k=1,2,3,4$. Our analysis has established the following results for each of the possible fractions of preserved supersymmetry: \begin{itemize} \item{$N=8$:} The $N=8$ solutions were classified in \cite{Farotti:2022xsd}, and the results for this case are summarized in Section 2. The geometries are generalized $M5$-brane solutions, for which the transverse space is $\mathbb{R} \times N_4$, where $N_4$ is a hyper-K\"ahler manifold. \item{$N=16$:} There are two possibilities for $N=16$ supersymmetry, depending on whether $\mathcal{H}_{\alpha\beta}$, defined in \eqref{definHIJ}, vanishes. \begin{itemize} \item[(i)] If $\mathcal{H}_{\alpha\beta}=0$, then the $N=16$ solutions have the property that the 4-form $F$ is covariantly constant with respect to the 11-dimensional Levi-Civita connection and is given by \eqref{4formH}. Such solutions have been discussed in Section 5.4 of \cite{Farotti:2022xsd}; here we establish that these solutions actually have enhanced supersymmetry. The geometry corresponds to a generalized M5-brane configuration \eqref{metricH}, with transverse space ${\mathbb{R}} \times N_4$, for which the harmonic function $H$ on $\mathbb{R} \times N_4$ is determined in terms of a hyper-K\"ahler potential $P$ on $N_4$ via \eqref{HP}. \item[(ii)] If, however, $\mathcal{H}_{\alpha\beta}\ne0$, then we find that the bosonic fields are given by \eqref{M5brane}. This configuration corresponds to a stack of M5-branes with transverse space $\mathbb{R}^5$ \cite{Gueven:1992hh}. \end{itemize} \item{$N=24$:} There are no warped product $dS_5$ solutions preserving exactly $N=24$ supersymmetries. \item{$N=32$:} This case has been considered in \cite{Farotti:2022xsd} and it has been shown that the only possibilities are $\mathbb{R}^{1,10}$ with vanishing 4-form, or the maximally supersymmetric $AdS_7 \times S^4$ solution.
\end{itemize} \setcounter{section}{0} \setcounter{subsection}{0} \section*{Acknowledgments} DF is partially supported by the STFC DTP Grant ST/S505742. \section*{Data Management} No additional research data beyond those presented and cited in this work are needed to validate its findings.
\section{Introduction} The study of higher dimensional branes is an important part of the ongoing quest to understand the structure of superstring theory. It is generally motivated by the possible roles branes play in understanding the string-theoretic origins of entropy, duality symmetries, the AdS/CFT correspondence, and so on \cite{Aharony:1999ti}. Particularly useful is how branes relate to models of dimensional reduction and/or large extra dimensions. It has long been the hope that a fully quantum mechanical theory of branes will lay the groundwork for a full exposition of nonperturbative string theory \cite{Duff:1999rk}. It is within this general view of `beyond the standard model' physics that much work has been done to classify all possible fully or partially supersymmetric brane configurations. This is usually performed within the boundaries of supergravity theory which, while admittedly a classical low energy version of the full string theory, is nonperturbative and hence exposes properties of branes that would be difficult to explore in a perturbative approach. Furthermore, the possible interpretation of our universe as a 3-brane embedded in a higher dimensional bulk adds even more interest to brane theory and has understandably generated a lot of research in recent years, starting with the seminal work by Randall and Sundrum \cite{Randall:1999ee}. Since then, various models of `brane-cosmology' have been proposed \cite{Brax:2003fv, Maartens:2010ar, Roane:2007zz}.
Most of these present studies of expanding (supersymmetric or non-supersymmetric) brane-universes via various stages of their evolution: inflation, re-heating, slow acceleration, \emph{etc.}, as well as possible `explanations' for the big bang itself (\emph{e.g.} \cite{Flanagan:1999cu, Binetruy:1999hy, Saaidi:2010jw, Saaidi:2012ri, Lidsey:2000mt, Choudhury:2012ib, Maia:2008yya, Okada:2014eva, Cordero:2011zz, Capistrano:2011zz, Amarilla:2009rs, Carmeli:1900zzc, Koyama:2007rx, Antoniadis:2007hp, McFadden:2005mq, Garriga:2001qt, Rasanen:2001hf, Khoury:2001wf, Falkowski:2000er}). It seems that the universe has always been in some form of accelerating expansion. While the current stage of slow acceleration is attributed to the cosmological constant/vacuum energy, the far more abrupt inflationary acceleration is explained by assuming the existence of the so-called `inflaton', a scalar field active in the early universe \cite{astro-ph/9805201, Linde:1994yf}. Various models within the string theory landscape have been presented to explain either the current value of the cosmological constant or the inflaton field (e.g. \cite{AvilanV.:2010ri, 1203.0307, Park:2013nu, Kallosh:2010xz, Gong:2006be}). However, there do not seem to be any studies that attempt to trace the history of the universe from inflation through the current phase; in other words, no single model exists to explain why the universe passes through accelerating phases of various magnitudes. In this paper, we study a 3-brane embedded in five-dimensional ungauged $\mathcal{N}=2$ supergravity with bulk hypermultiplets and find that the moduli of the complex structure of the underlying Calabi-Yau (CY) space act as a possible source for the various cosmological stages of said brane. The brane is vacuous, \emph{i.e.} devoid of all matter and radiation.
We show that its spatial scale, described by a Robertson-Walker-like scale factor $a\left(t\right)$, depends on the norm of the moduli of the CY complex structure; hence, by considering various forms for $a\left(t\right)$, one can calculate, in reverse, the behavior of the moduli. We consider a generalized expansion model, where the brane is allowed to go through an inflationary phase, followed by a slow accelerative expansion, and find that these stages correlate with a behavior of the moduli that seems to suggest a high degree of instability. The norm of the moduli begins at a very high value, then rapidly decays (synchronously with inflation), tending to a constant value at late times. From a cosmological perspective this is in agreement with the conjecture of the early production of heavy moduli and their subsequent decay, most likely into gravitinos, as required by the phenomenology of the early universe. The configuration studied satisfies the Bogomol'nyi-Prasad-Sommerfield (BPS) condition at each instant and breaks half of the supersymmetries under certain constraints, which we also derive. There also seems to be considerable freedom as to the exact form of the hypermultiplet fields, depending on the explicit form of the moduli as well as on a bulk harmonic function. In addition to being an interesting result from the point of view of pure brane theory, one hopes that this will lay the groundwork for further investigation of the possible effect of the complex structure moduli \cite{Hayashi:2014aua} on the evolution of our universe. \section{$D=5$ $\mathcal{N}=2$ supergravity with hypermultiplets} \label{theory} The dimensional reduction of $D=11$ supergravity theory over a Calabi-Yau 3-fold $\mathcal{M}$ with nontrivial complex structure moduli yields an $\mathcal{N}=2$ supergravity theory in $D=5$ with a set of scalar fields and their supersymmetric partners collectively known as the \emph{hypermultiplets} (see \cite{Emam:2010kt} for a review and additional references).
It should be noted that the other matter sector in the theory, the vector multiplets, trivially decouples from the hypermultiplets and can simply be set to zero, as we do here. The hypermultiplets are composed in part of the \emph{universal hypermultiplet} $\left(\varphi, \sigma, \zeta^0, \tilde \zeta_0\right)$, so called because it appears irrespective of the detailed structure of the sub-manifold. The field $\varphi$ is known as the universal axion and is magnetically dual to a three-form gauge field, while the dilaton $\sigma$ is proportional to the natural logarithm of the volume of $\mathcal{M}$. The rest of the hypermultiplets are $\left(z^i, z^{\bar i}, \zeta^i, \tilde \zeta_i: i=1,\ldots, h_{2,1}\right)$, where the $z$'s are identified with the complex structure moduli of $\mathcal{M}$, and $h_{2,1}$ is the Hodge number giving the dimension of the manifold of the Calabi-Yau's complex structure moduli, $\mathcal{M}_C$. The `bar' over an index denotes complex conjugation. The fields $\left(\zeta^I, \tilde\zeta_I: I=0,\ldots,h_{2,1}\right)$ are known as the axions and arise as a result of the $D=11$ Chern-Simons term. The supersymmetric partners, known as the hyperini, complete the hypermultiplets. The theory has a very rich structure that arises from the intricate topology of $\mathcal{M}$. Of particular usefulness is its symplectic covariance. Specifically, the axions $\left(\zeta^I, \tilde\zeta_I\right)$ can be defined as components of the symplectic vector \begin{equation}\label{DefOfSympVect} \left| \Xi \right\rangle = \left( {\begin{array}{*{20}c} {\,\,\,\,\,\zeta ^I } \\ -{\tilde \zeta _I } \\ \end{array}} \right), \end{equation} such that the symplectic scalar product is defined by, for example, \begin{equation} \left\langle {{\Xi }} \mathrel{\left | {\vphantom {{\Xi } d\Xi }} \right.
\kern-\nulldelimiterspace} {d\Xi } \right\rangle = \zeta^I d\tilde \zeta_I - \tilde \zeta_I d\zeta^I,\label{DefOfSympScalarProduct} \end{equation} where $d$ is the spacetime exterior derivative $\left(d=dx^\mu\partial_\mu:\mu=0,\ldots,4\right)$. A `rotation' in symplectic space is defined by the matrix element \begin{eqnarray} \left\langle {\partial _\mu \Xi } \right|{\bf\Lambda} \left| {\partial ^\mu \Xi } \right\rangle \star \mathbf{1} &=& \left\langle {d\Xi } \right|\mathop {\bf\Lambda} \limits_ \wedge \left| {\star d\Xi } \right\rangle \nonumber\\ &=& 2\left\langle {{d\Xi }} \mathrel{\left | {\vphantom {{d\Xi } V}} \right. \kern-\nulldelimiterspace} {V} \right\rangle \mathop {}\limits_ \wedge \left\langle {{\bar V}} \mathrel{\left | {\vphantom {{\bar V} {\star d\Xi }}} \right. \kern-\nulldelimiterspace} {{\star d\Xi }} \right\rangle + 2G^{i\bar j} \left\langle {{d\Xi }} \mathrel{\left | {\vphantom {{d\Xi } {U_{\bar j} }}} \right. \kern-\nulldelimiterspace} {{U_{\bar j} }} \right\rangle \mathop {}\limits_ \wedge \left\langle {{U_i }} \mathrel{\left | {\vphantom {{U_i } {\star d\Xi }}} \right. \kern-\nulldelimiterspace} {{\star d\Xi }} \right\rangle - i\left\langle {d\Xi } \right.\mathop |\limits_ \wedge \left. {\star d\Xi } \right\rangle,\label{DefOfRotInSympSpace} \end{eqnarray} where $\star$ is the $D=5$ Hodge duality operator, and $G_{i\bar j}$ is a special K\"{a}hler metric on $\mathcal{M}_C$. 
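As a purely illustrative aside (not part of the original construction): the product (\ref{DefOfSympScalarProduct}) is the standard antisymmetric symplectic pairing on a $2\left(h_{2,1}+1\right)$-dimensional space, which can be sketched numerically as follows; the dimension ($h_{2,1}=2$) and the test vectors below are arbitrary.

```python
import numpy as np

def symp_product(a, b):
    """Symplectic product <A|B> = a_zeta . b_zetatilde - a_zetatilde . b_zeta,
    for vectors stored as (zeta^0..zeta^h, zetatilde_0..zetatilde_h)."""
    n = len(a) // 2
    return np.dot(a[:n], b[n:]) - np.dot(a[n:], b[:n])

rng = np.random.default_rng(0)
A = rng.standard_normal(6)  # e.g. h_{2,1} = 2, so 2*(h_{2,1} + 1) = 6 components
B = rng.standard_normal(6)

# Antisymmetry of the pairing: <A|B> = -<B|A>, and in particular <A|A> = 0.
assert np.isclose(symp_product(A, B), -symp_product(B, A))
assert abs(symp_product(A, A)) < 1e-12
```

The two assertions encode the defining antisymmetry $\left\langle A \mid B \right\rangle = -\left\langle B \mid A \right\rangle$ and its corollary $\left\langle A \mid A \right\rangle = 0$.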
The symplectic basis vectors $\left| V \right\rangle $, $\left| {U_i } \right\rangle $ and their complex conjugates are defined by \begin{equation} \left| V \right\rangle = e^{\frac{\mathcal{K}}{2}} \left( {\begin{array}{*{20}c} {Z^I } \\ {F_I } \\ \end{array}} \right),\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\left| {\bar V} \right\rangle = e^{\frac{\mathcal{K}}{2}} \left( {\begin{array}{*{20}c} {\bar Z^I } \\ {\bar F_I } \\ \end{array}} \right)\label{DefOfVAndVBar} \end{equation} where $\mathcal{K}$ is the K\"{a}hler potential on $\mathcal{M}_C$, $\left( {Z,F} \right)$ are the periods of the Calabi-Yau's holomorphic volume form, and \begin{eqnarray} \left| {U_i } \right\rangle &=& \left| \nabla _i V \right\rangle=\left|\left[ {\partial _i + \frac{1}{2}\left( {\partial _i \mathcal{K}} \right)} \right] V \right\rangle \nonumber\\ \left| {U_{\bar i} } \right\rangle &=& \left|\nabla _{\bar i} {\bar V} \right\rangle=\left|\left[ {\partial _{\bar i} + \frac{1}{2}\left( {\partial _{\bar i} \mathcal{K}} \right)} \right] {\bar V} \right\rangle\label{DefOfUAndUBar} \end{eqnarray} where the derivatives are with respect to the moduli $\left(z^i, z^{\bar i}\right)$. These vectors satisfy the following conditions: \begin{eqnarray} \left\langle {{\bar V}} \mathrel{\left | {\vphantom {{\bar V} V}} \right. \kern-\nulldelimiterspace} {V} \right\rangle &=& i\nonumber\\ \left|\nabla _i {\bar V} \right\rangle &=& \left|\nabla _{\bar i} V \right\rangle =0\nonumber\\ \left\langle {{U_i }} \mathrel{\left | {\vphantom {{U_i } {U_j }}} \right. \kern-\nulldelimiterspace} {{U_j }} \right\rangle &=& \left\langle {{U_{\bar i} }} \mathrel{\left | {\vphantom {{U_{\bar i} } {U_{\bar j} }}} \right. \kern-\nulldelimiterspace} {{U_{\bar j} }} \right\rangle =0\nonumber\\ \left\langle {\bar V} \mathrel{\left | {\vphantom {\bar V {U_i }}} \right. \kern-\nulldelimiterspace} {{U_i }} \right\rangle &=& \left\langle {V} \mathrel{\left | {\vphantom {V {U_{\bar i} }}} \right. 
\kern-\nulldelimiterspace} {{U_{\bar i} }} \right\rangle = \left\langle { V} \mathrel{\left | {\vphantom { V {U_i }}} \right. \kern-\nulldelimiterspace} {{U_i }} \right\rangle=\left\langle {\bar V} \mathrel{\left | {\vphantom {\bar V {U_{\bar i} }}} \right. \kern-\nulldelimiterspace} {{U_{\bar i} }} \right\rangle= 0,\nonumber\\ \left|\nabla _{\bar j} {U_i } \right\rangle &=& G_{i\bar j} \left| V \right\rangle ,\quad \quad \left|\nabla _i {U_{\bar j} } \right\rangle = G_{i\bar j} \left| {\bar V} \right\rangle,\nonumber\\ G_{i\bar j}&=& \left( {\partial _i \partial _{\bar j} \mathcal{K}} \right)=- i \left\langle {{U_i }} \mathrel{\left | {\vphantom {{U_i } {U_{\bar j} }}} \right. \kern-\nulldelimiterspace} {{U_{\bar j} }} \right\rangle. \end{eqnarray} The origin of these identities lies in special K\"{a}hler geometry. In our previous work \cite{Emam:2009xj}, we derived the following useful formulae: \begin{eqnarray} dG_{i\bar j} &=& G_{k\bar j} \Gamma _{ri}^k dz^r + G_{i\bar k} \Gamma _{\bar r\bar j}^{\bar k} dz^{\bar r} \nonumber\\ dG^{i\bar j} &=& - G^{p\bar j} \Gamma _{rp}^i dz^r - G^{i\bar p} \Gamma _{\bar r\bar p}^{\bar j} dz^{\bar r} \nonumber\\ \left| {dV} \right\rangle &=& dz^i \left| {U_i } \right\rangle - i\mathfrak{Im} \left[ {\left( {\partial_i \mathcal{K}} \right)dz^i} \right]\left| V \right\rangle \nonumber \\ \left| {d\bar V} \right\rangle &=& dz^{\bar i} \left| {U_{\bar i} } \right\rangle + i\mathfrak{Im} \left[ {\left( {\partial_i \mathcal{K}} \right)dz^i} \right]\left| {\bar V} \right\rangle \nonumber \\ \left| {dU_i } \right\rangle &=& G_{i\bar j} dz^{\bar j} \left| V \right\rangle + \Gamma _{ik}^r dz^k \left| {U_r } \right\rangle+G^{j\bar l} C_{ijk} dz^k \left| {U_{\bar l} } \right\rangle - i\mathfrak{Im} \left[ {\left( {\partial_i \mathcal{K}} \right)dz^i} \right]\left| {U_i } \right\rangle \nonumber \\ \left| {dU_{\bar i} } \right\rangle &=& G_{j\bar i} dz^j \left| {\bar V} \right\rangle + \Gamma _{\bar i\bar k}^{\bar r} dz^{\bar k} \left| 
{U_{\bar r} } \right\rangle + G^{l\bar j} C_{\bar i\bar j\bar k} dz^{\bar k} \left| {U_l } \right\rangle + i\mathfrak{Im} \left[ {\left( {\partial_i \mathcal{K}} \right)dz^i} \right]\left| {U_{\bar i} } \right\rangle\nonumber \\ {\bf \Lambda } &=& 2\left| V \right\rangle \left\langle {\bar V} \right| + 2G^{i\bar j} \left| {U_{\bar j} } \right\rangle \left\langle {U_i } \right| -i\nonumber\\ {\bf \Lambda }^{-1} &=& -2\left| V \right\rangle \left\langle {\bar V} \right| - 2G^{i\bar j} \left| {U_{\bar j} } \right\rangle \left\langle {U_i } \right| +i\nonumber\\ \partial_i {\bf \Lambda } &=& 2\left| {U_i } \right\rangle \left\langle {\bar V} \right|+2\left| {\bar V} \right\rangle \left\langle {U_i } \right| + 2G^{j\bar r} G^{k\bar p} C_{ijk} \left| {U_{\bar r} } \right\rangle \left\langle {U_{\bar p} } \right|. \end{eqnarray} The quantities $C_{ijk}$ are the components of the totally symmetric tensor that appears in the curvature tensor of $\mathcal{M}_C$. In this language, the bosonic part of the action is given by: \begin{eqnarray} S_5 &=& \int\limits_5 {\left[ {R\star \mathbf{1} - \frac{1}{2}d\sigma \wedge\star d\sigma - G_{i\bar j} dz^i \wedge\star dz^{\bar j} } \right.} + e^\sigma \left\langle {d\Xi } \right|\mathop {\bf\Lambda} \limits_ \wedge \left| {\star d\Xi } \right\rangle\nonumber\\ & &\left. {\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad - \frac{1}{2} e^{2\sigma } \left[ {d\varphi + \left\langle {\Xi } \mathrel{\left | {\vphantom {\Xi {d\Xi }}} \right. \kern-\nulldelimiterspace} {{d\Xi }} \right\rangle} \right] \wedge \star\left[ {d\varphi + \left\langle {\Xi } \mathrel{\left | {\vphantom {\Xi {d\Xi }}} \right. 
\kern-\nulldelimiterspace} {{d\Xi }} \right\rangle} \right] } \right].\label{action} \end{eqnarray} The variation of the action yields the following field equations for $\sigma$, $\left(z^i,z^{\bar i}\right)$, $\left| \Xi \right\rangle$ and $\varphi$ respectively: \begin{eqnarray} \left( {\Delta \sigma } \right)\star \mathbf{1} + e^\sigma \left\langle {d\Xi } \right|\mathop {\bf\Lambda} \limits_ \wedge \left| {\star d\Xi } \right\rangle - e^{2\sigma }\left[ {d\varphi + \left\langle {\Xi } \mathrel{\left | {\vphantom {\Xi {d\Xi }}} \right. \kern-\nulldelimiterspace} {{d\Xi }} \right\rangle} \right]\wedge\star\left[ {d\varphi + \left\langle {\Xi } \mathrel{\left | {\vphantom {\Xi {d\Xi }}} \right. \kern-\nulldelimiterspace} {{d\Xi }} \right\rangle} \right] &=& 0\label{DilatonEOM}\\ \left( {\Delta z^i } \right)\star \mathbf{1} + \Gamma _{jk}^i dz^j \wedge \star dz^k + \frac{1}{2}e^\sigma G^{i\bar j} {\partial _{\bar j} \left\langle {d\Xi } \right|\mathop {\bf\Lambda} \limits_ \wedge \left| {\star d\Xi } \right\rangle} &=& 0 \nonumber\\ \left( {\Delta z^{\bar i} } \right)\star \mathbf{1} + \Gamma _{\bar j\bar k}^{\bar i} dz^{\bar j} \wedge \star dz^{\bar k} + \frac{1}{2}e^\sigma G^{\bar ij} {\partial _j \left\langle {d\Xi } \right|\mathop {\bf\Lambda} \limits_ \wedge \left| {\star d\Xi } \right\rangle} &=& 0\label{ZZBarEOM} \\ d^{\dag} \left\{ {e^\sigma \left| {{\bf\Lambda} d\Xi } \right\rangle - e^{2\sigma } \left[ {d\varphi + \left\langle {\Xi } \mathrel{\left | {\vphantom {\Xi {d\Xi }}}\right. \kern-\nulldelimiterspace} {{d\Xi }} \right\rangle } \right]\left| \Xi \right\rangle } \right\} &=& 0\label{AxionsEOM}\\ d^{\dag} \left[ {e^{2\sigma } d\varphi + e^{2\sigma } \left\langle {\Xi } \mathrel{\left | {\vphantom {\Xi {d\Xi }}} \right. 
\kern-\nulldelimiterspace} {{d\Xi }} \right\rangle} \right] &=& 0\label{aEOM} \end{eqnarray} where $d^\dagger$ is the $D=5$ adjoint exterior derivative, $\Delta$ is the Laplace--de Rham operator and $\Gamma _{jk}^i$ is a connection on $\mathcal{M}_C$. The full action is invariant under the following SUSY transformations: \begin{eqnarray} \delta _\epsilon \psi ^1 &=& D \epsilon _1 + \frac{1}{4}\left\{ {i {e^{\sigma } \left[ {d\varphi + \left\langle {\Xi } \mathrel{\left | {\vphantom {\Xi {d\Xi }}} \right. \kern-\nulldelimiterspace} {{d\Xi }} \right\rangle } \right]}- Y} \right\}\epsilon _1 - e^{\frac{\sigma }{2}} \left\langle {{\bar V}} \mathrel{\left | {\vphantom {{\bar V} {d\Xi }}} \right. \kern-\nulldelimiterspace} {{d\Xi }} \right\rangle\epsilon _2 \nonumber\\ \delta _\epsilon \psi ^2 &=& D \epsilon _2 - \frac{1}{4}\left\{ {i {e^{\sigma } \left[ {d\varphi + \left\langle {\Xi } \mathrel{\left | {\vphantom {\Xi {d\Xi }}} \right. \kern-\nulldelimiterspace} {{d\Xi }} \right\rangle } \right]}- Y} \right\}\epsilon _2 + e^{\frac{\sigma }{2}} \left\langle {V} \mathrel{\left | {\vphantom {V {d\Xi }}} \right. \kern-\nulldelimiterspace} {{d\Xi }} \right\rangle \epsilon _1 \label{SUSYGraviton} \\ \delta _\epsilon \xi _1^0 &=& e^{\frac{\sigma }{2}} \left\langle {V} \mathrel{\left | {\vphantom {V {\partial _\mu \Xi }}} \right. \kern-\nulldelimiterspace} {{\partial _\mu \Xi }} \right\rangle \Gamma ^\mu \epsilon _1 - \left\{ {\frac{1}{2}\left( {\partial _\mu \sigma } \right) - \frac{i}{2} e^{\sigma } \left[ {\left(\partial _\mu \varphi\right) + \left\langle {\Xi } \mathrel{\left | {\vphantom {\Xi {\partial _\mu \Xi }}} \right. \kern-\nulldelimiterspace} {{\partial _\mu \Xi }} \right\rangle } \right]} \right\}\Gamma ^\mu \epsilon _2 \nonumber\\ \delta _\epsilon \xi _2^0 &=& e^{\frac{\sigma }{2}} \left\langle {{\bar V}} \mathrel{\left | {\vphantom {{\bar V} {\partial _\mu \Xi }}} \right.
\kern-\nulldelimiterspace} {{\partial _\mu \Xi }} \right\rangle \Gamma ^\mu \epsilon _2 + \left\{ {\frac{1}{2}\left( {\partial _\mu \sigma } \right) + \frac{i}{2} e^{\sigma } \left[ {\left(\partial _\mu \varphi\right) + \left\langle {\Xi } \mathrel{\left | {\vphantom {\Xi {\partial _\mu \Xi }}} \right. \kern-\nulldelimiterspace} {{\partial _\mu \Xi }} \right\rangle } \right]} \right\}\Gamma ^\mu \epsilon _1\label{SUSYHyperon1} \\ \delta _\epsilon \xi _1^{\hat i} &=& e^{\frac{\sigma }{2}} e^{\hat ij} \left\langle {{U_j }} \mathrel{\left | {\vphantom {{U_j } {\partial _\mu \Xi }}} \right. \kern-\nulldelimiterspace} {{\partial _\mu \Xi }} \right\rangle \Gamma ^\mu \epsilon _1 - e_{\,\,\,\bar j}^{\hat i} \left( {\partial _\mu z^{\bar j} } \right)\Gamma ^\mu \epsilon _2 \nonumber\\ \delta _\epsilon \xi _2^{\hat i} &=& e^{\frac{\sigma }{2}} e^{\hat i\bar j} \left\langle {{U_{\bar j} }} \mathrel{\left | {\vphantom {{U_{\bar j} } {\partial _\mu \Xi }}} \right. \kern-\nulldelimiterspace} {{\partial _\mu \Xi }} \right\rangle \Gamma ^\mu \epsilon _2 + e_{\,\,\,j}^{\hat i} \left( {\partial _\mu z^j } \right)\Gamma ^\mu \epsilon _1,\label{SUSYHyperon2} \end{eqnarray} where $\left(\psi ^1, \psi ^2\right)$ are the two gravitini and $\left(\xi _1^I, \xi _2^I\right)$ are the hyperini. The quantity $Y$ is defined by: \begin{equation} Y = \frac{{\bar Z^I N_{IJ} {d Z^J } - Z^I N_{IJ} {d \bar Z^J } }}{{\bar Z^I N_{IJ} Z^J }},\label{DefOfY} \end{equation} where $N_{IJ} = \mathfrak{Im} \left({\partial_IF_J } \right)$. The $e$'s are the beins of the special K\"{a}hler metric $G_{i\bar j}$, the $\epsilon$'s are the five-dimensional $\mathcal{N}=2$ SUSY spinors and the $\Gamma$'s are the usual Dirac matrices. 
The covariant derivative $D$ is given by $D=dx^\mu\left( \partial _\mu + \frac{1}{4}\omega _\mu^{\,\,\,\,\hat \mu\hat \nu} \Gamma _{\hat \mu\hat \nu}\right)\label{DefOfCovDerivative}$ as usual, where the $\omega$'s are the spin connections and the hatted indices are frame indices in a flat tangent space. Finally, the stress tensor is: \begin{eqnarray} T_{\mu \nu } &=& -\frac{1}{2}\left( {\partial _\mu \sigma } \right)\left( {\partial _\nu \sigma } \right) + \frac{1}{4}g_{\mu \nu } \left( {\partial _\alpha \sigma } \right)\left( {\partial ^\alpha \sigma } \right) + e^\sigma \left\langle {\partial _\mu \Xi } \right|{\bf\Lambda} \left| {\partial _\nu \Xi } \right\rangle - \frac{1}{2}e^{\sigma } g_{\mu \nu } \left\langle {\partial _\alpha \Xi } \right|{\bf\Lambda} \left| {\partial ^\alpha \Xi } \right\rangle \nonumber\\ & & - \frac{1}{2}e^{2\sigma } \left[ {\left( {\partial _\mu \varphi} \right) + \left\langle {\Xi } \mathrel{\left | {\vphantom {\Xi {\partial _\mu \Xi }}} \right. \kern-\nulldelimiterspace} {{\partial _\mu \Xi }} \right\rangle } \right]\left[ {\left( {\partial _\nu \varphi} \right) + \left\langle {\Xi } \mathrel{\left | {\vphantom {\Xi {\partial _\nu \Xi }}} \right. \kern-\nulldelimiterspace} {{\partial _\nu \Xi }} \right\rangle } \right] + \frac{1}{4}e^{2\sigma } g_{\mu \nu } \left[ {\left( {\partial _\alpha \varphi} \right) + \left\langle {\Xi } \mathrel{\left | {\vphantom {\Xi {\partial _\alpha \Xi }}} \right. \kern-\nulldelimiterspace} {{\partial _\alpha \Xi }} \right\rangle } \right]\left[ {\left( {\partial ^\alpha \varphi} \right) + \left\langle {\Xi } \mathrel{\left | {\vphantom {\Xi {\partial ^\alpha \Xi }}} \right. 
\kern-\nulldelimiterspace} {{\partial ^\alpha \Xi }} \right\rangle } \right]\nonumber\\ & & - G_{i\bar j} \left( {\partial _\mu z^i } \right)\left( {\partial _\nu z^{\bar j} } \right) + \frac{1}{2}g_{\mu \nu } G_{i\bar j} \left( {\partial _\alpha z^i } \right)\left( {\partial ^\alpha z^{\bar j} } \right).\label{StressTensor} \end{eqnarray} \section{Brane dynamics and bulk field configurations}\label{Braneanalysis} We begin with a metric of the form \begin{equation} ds^2 = - e^{2\alpha \left( {t,y} \right)} dt^2 + e^{2\beta \left( {t,y} \right)} \left( {dr^2 + r^2 d\Omega ^2 } \right) + e^{2\gamma \left( {t,y} \right)} dy^2\label{GeneralBrane} \end{equation} where ${d\Omega ^2 = d\theta ^2 + \sin ^2 \left( \theta \right)d\phi ^2 }$. This metric may be interpreted as representing a single 3-brane located at $y=0$ in the transverse space; it may also represent a stack of $N$ branes located at various values of $y=y_I$ $\left(I = 1, \ldots ,N \in \mathbb{Z}\right)$, where the warp functions $\alpha$, $\beta$, and $\gamma$ are rewritten such that $y\rightarrow \sum\limits_{I = 1}^N {\left| {y - y_I } \right|} $. Either way, we will eventually focus on the four-dimensional $\left(t, r, \theta, \phi\right)$ dynamics, effectively evaluating the warp functions at a specific, but arbitrary, $y$ value. We also note that a metric of the form (\ref{GeneralBrane}) was shown in \cite{Kallosh:2001du} to be exactly the type needed for a consistent BPS cosmology. In addition, a model along similar lines was proposed and studied in \cite{Kabat:2001qt}. The brane (or branes) is assumed to be completely vacuous for the sake of simplicity, and as such merely acts as a toy model of a universe. To connect to a possible cosmological application, a more realistic approach is needed, such as invoking the presence of the usual perfect fluid and a cosmological constant on the brane's surface, as well as possibly in the bulk (\emph{e.g.} \cite{Canestaro:2013xsa}).
Based on this metric, the components of the Einstein tensor are \begin{eqnarray} G_{tt} &=& 3\left( {\dot \beta ^2 + \dot \beta \dot \gamma } \right) - 3e^{2\left( {\alpha - \gamma } \right)} \left( {\beta '' + 2\beta '^2 - \beta '\gamma '} \right) \nonumber\\ G_{rr} &=& - e^{2\left( {\beta - \alpha } \right)} \left[ {2\ddot \beta + 3\dot \beta ^2 + \ddot \gamma + \dot \gamma ^2 + 2\dot \beta \left( {\dot \gamma - \dot \alpha } \right) - \dot \alpha \dot \gamma } \right] \nonumber\\ & &+ e^{2\left( {\beta - \gamma } \right)} \left[ {2\beta '' + 3\beta '^2 + \alpha '' + \alpha '^2 + 2\beta '\left( {\alpha ' - \gamma '} \right) - \alpha '\gamma '} \right] \nonumber\\ G_{yy} &=& 3\left( {\beta '^2 + \beta '\alpha '} \right) - 3e^{2\left( {\gamma - \alpha } \right)} \left( {\ddot \beta + 2\dot \beta ^2 - \dot \beta \dot \alpha } \right) \nonumber\\ G_{yt} &=& 3\left( {\dot \beta \alpha ' + \beta '\dot \gamma - \dot \beta \beta ' - \dot \beta '} \right), \end{eqnarray} where a prime is a derivative with respect to $y$ and a dot is a derivative with respect to $t$. We are interested in bosonic configurations that preserve some supersymmetry, so the stress tensor (\ref{StressTensor}) can be considerably simplified by considering the vanishing of the supersymmetric variations (\ref{SUSYHyperon1}, \ref{SUSYHyperon2}), which may be rewritten in matrix form as follows \begin{equation} \left[ {\begin{array}{*{20}c} {2e^{\frac{\sigma }{2}} \left\langle {V} \mathrel{\left | {\vphantom {V {\partial _\mu \Xi }}} \right. \kern-\nulldelimiterspace} {{\partial _\mu \Xi }} \right\rangle \Gamma ^\mu} & {-\left\{ {\left( {\partial _\mu \sigma } \right) - i e^{\sigma } \left[ {\left(\partial _\mu \varphi\right) + \left\langle {\Xi } \mathrel{\left | {\vphantom {\Xi {\partial _\mu \Xi }}} \right. 
\kern-\nulldelimiterspace} {{\partial _\mu \Xi }} \right\rangle } \right]} \right\}\Gamma ^\mu} \\ {} & {} \\ {\left\{ {\left( {\partial _\nu \sigma } \right) + i e^{\sigma } \left[ {\left(\partial _\nu \varphi\right) + \left\langle {\Xi } \mathrel{\left | {\vphantom {\Xi {\partial _\nu \Xi }}} \right. \kern-\nulldelimiterspace} {{\partial _\nu \Xi }} \right\rangle } \right]} \right\}\Gamma ^\nu} & {2e^{\frac{\sigma }{2}} \left\langle {{\bar V}} \mathrel{\left | {\vphantom {{\bar V} {\partial _\nu \Xi }}} \right. \kern-\nulldelimiterspace} {{\partial _\nu \Xi }} \right\rangle \Gamma ^\nu} \\ \end{array}} \right]\left( {\begin{array}{*{20}c} {\epsilon _1 } \\ {} \\ {\epsilon _2 } \\ \end{array}} \right) = 0 \end{equation} \begin{equation} \left[ {\begin{array}{*{20}c} {e^{\frac{\sigma }{2}} e^{\hat ij} \left\langle {{U_j }} \mathrel{\left | {\vphantom {{U_j } {\partial _\mu \Xi }}} \right. \kern-\nulldelimiterspace} {{\partial _\mu \Xi }} \right\rangle \Gamma ^\mu } & {} & {-e_{\,\,\,\bar j}^{\hat i} \left( {\partial _\mu z^{\bar j} } \right)\Gamma ^\mu} \\ {} & {} & {} \\ {e_{\,\,\,k}^{\hat j} \left( {\partial _\nu z^k } \right)\Gamma ^\nu} & {} & {e^{\frac{\sigma }{2}} e^{\hat j\bar k} \left\langle {{U_{\bar k} }} \mathrel{\left | {\vphantom {{U_{\bar k} } {\partial _\nu \Xi }}} \right. \kern-\nulldelimiterspace} {{\partial _\nu \Xi }} \right\rangle \Gamma ^\nu} \\ \end{array}} \right]\left( {\begin{array}{*{20}c} {\epsilon _1 } \\ {} \\ {\epsilon _2 } \\ \end{array}} \right) = 0. \end{equation} The vanishing of the determinants gives the conditions: \begin{eqnarray} d\sigma \wedge \star d\sigma + e^{2\sigma } \left[ {d\varphi + \left\langle {\Xi } \mathrel{\left | {\vphantom {\Xi {d\Xi }}} \right. \kern-\nulldelimiterspace} {{d\Xi }} \right\rangle } \right] \wedge \star\left[ {d\varphi + \left\langle {\Xi } \mathrel{\left | {\vphantom {\Xi {d\Xi }}} \right.
\kern-\nulldelimiterspace} {{d\Xi }} \right\rangle } \right] + 4e^\sigma \left\langle {V} \mathrel{\left | {\vphantom {V {d\Xi }}} \right. \kern-\nulldelimiterspace} {{d\Xi }} \right\rangle \wedge \left\langle {{\bar V}} \mathrel{\left | {\vphantom {{\bar V} {\star d\Xi }}} \right. \kern-\nulldelimiterspace} {{\star d\Xi }} \right\rangle &=& 0 \nonumber\\ G_{i\bar j} dz^i \wedge \star dz^{\bar j} + e^\sigma G^{i\bar j} \left\langle {{U_i }} \mathrel{\left | {\vphantom {{U_i } {d\Xi }}} \right. \kern-\nulldelimiterspace} {{d\Xi }} \right\rangle \wedge \left\langle {{U_{\bar j} }} \mathrel{\left | {\vphantom {{U_{\bar j} } {\star d\Xi }}} \right. \kern-\nulldelimiterspace} {{\star d\Xi }} \right\rangle &=& 0.\label{FromSUSY} \end{eqnarray} Using this with (\ref{DefOfRotInSympSpace}) we find \begin{equation} e^\sigma \left\langle {d\Xi } \right|\mathop {\bf\Lambda} \limits_ \wedge \left| {\star d\Xi } \right\rangle = \frac{1}{2}d\sigma \wedge \star d\sigma + \frac{1}{2}e^{2\sigma } \left[ {d\varphi + \left\langle {\Xi } \mathrel{\left | {\vphantom {\Xi {d\Xi }}} \right. \kern-\nulldelimiterspace} {{d\Xi }} \right\rangle } \right] \wedge \star\left[ {d\varphi + \left\langle {\Xi } \mathrel{\left | {\vphantom {\Xi {d\Xi }}} \right. \kern-\nulldelimiterspace} {{d\Xi }} \right\rangle } \right] + 2G_{i\bar j} dz^i \wedge \star dz^{\bar j},\label{Rotation} \end{equation} where we have used $\left\langle {d\Xi } \right.\mathop |\limits_ \wedge \left. {\star d\Xi } \right\rangle = 0$ as required by the reality of the axions. 
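For completeness, we spell out the intermediate step (a routine computation, included here for the reader's convenience). Inserting (\ref{FromSUSY}) into (\ref{DefOfRotInSympSpace}), and using the antisymmetry of the symplectic product together with the symmetry $\omega_1 \wedge \star\, \omega_2 = \omega_2 \wedge \star\, \omega_1$ valid for any one-forms $\omega_1$, $\omega_2$, the two nonvanishing terms on the right hand side of (\ref{DefOfRotInSympSpace}) become \begin{eqnarray} -2e^\sigma \left\langle V \mid d\Xi \right\rangle \wedge \left\langle \bar V \mid \star d\Xi \right\rangle &=& \frac{1}{2}d\sigma \wedge \star d\sigma + \frac{1}{2}e^{2\sigma } \left[ {d\varphi + \left\langle \Xi \mid d\Xi \right\rangle } \right] \wedge \star\left[ {d\varphi + \left\langle \Xi \mid d\Xi \right\rangle } \right]\nonumber\\ -2e^\sigma G^{i\bar j} \left\langle U_i \mid d\Xi \right\rangle \wedge \left\langle U_{\bar j} \mid \star d\Xi \right\rangle &=& 2G_{i\bar j}\, dz^i \wedge \star dz^{\bar j},\nonumber \end{eqnarray} which together reproduce (\ref{Rotation}).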
Using (\ref{Rotation}) in (\ref{StressTensor}) eliminates all terms involving $\sigma$, $\left| \Xi \right\rangle$ and $\varphi$, leaving the dynamics to depend \emph{only} on the complex structure moduli $\left(z^i,z^{\bar i}\right)$: \begin{equation} T_{\mu \nu } = G_{i\bar j} \left( {\partial _\mu z^i } \right)\left( {\partial _\nu z^{\bar j} } \right) - \frac{1}{2}g_{\mu \nu } G_{i\bar j} \left( {\partial _\alpha z^i } \right)\left( {\partial ^\alpha z^{\bar j} } \right). \end{equation} The Einstein equations then yield \begin{eqnarray} \frac{1}{2}G_{i\bar j} \dot z^i \dot z^{\bar j} &=& - \left[ {2\ddot \beta + 3\dot \beta ^2 + \ddot \gamma + \dot \gamma ^2 + 2\dot \beta \left( {\dot \gamma - \dot \alpha } \right) - \dot \alpha \dot \gamma } \right]\nonumber\\ &=& - 3\left( {\ddot \beta + 2\dot \beta ^2 - \dot \beta \dot \alpha } \right)\nonumber\\ &=& 3\left( {\dot \beta ^2 + \dot \beta \dot \gamma } \right)\label{Gtt}\\ \frac{1}{2}G_{i\bar j} {z^i}' {z^{\bar j}}' &=& -\left[{ 2\beta '' + 3\beta '^2 + \alpha '' + \alpha '^2 + 2\beta '\left( {\alpha ' - \gamma '} \right) - \alpha '\gamma ' }\right]\nonumber\\ &=& - 3\left( {\beta '' + 2\beta '^2 - \beta '\gamma '} \right)\nonumber\\ &=& 3\left( {\beta '^2 + \beta '\alpha '} \right)\label{G11}\\ G_{i\bar j} {z^i}' \dot z^{\bar j} &=& 3\left( {\dot \beta \alpha ' + \beta '\dot \gamma - \dot \beta \beta ' - \dot \beta '} \right).\label{G21} \end{eqnarray} The right hand side equalities in (\ref{Gtt}, \ref{G11}) lead to \begin{eqnarray} \frac{{\ddot \gamma }}{{\dot \gamma }} &=& \frac{{\ddot \beta }}{{\dot \beta }}, \quad\quad \ddot \gamma + \dot \gamma ^2 - \dot \alpha \dot \gamma + 3\dot \beta \dot \gamma = 0 \nonumber\\ \frac{{\alpha ''}}{{\alpha '}} &=& \frac{{\beta ''}}{{\beta '}}, \quad\quad \alpha '' + \alpha '^2 - \alpha '\gamma ' + 3\beta '\alpha ' = 0.\label{AlphaBetaGamma} \end{eqnarray} Now some analysis of the field equations, independently of the metric, can be done as follows. 
Equations (\ref{AxionsEOM}) and (\ref{aEOM}) are already first integrals, which may be integrated to give \begin{equation} d\varphi +\left\langle {\Xi } \mathrel{\left | {\vphantom {\Xi {d\Xi }}} \right. \kern-\nulldelimiterspace} {{d\Xi }} \right\rangle = n e^{-2\sigma } dh,\label{universalaxionsolution} \end{equation} where $h$ is harmonic in $\left(t,y\right)$, \emph{i.e.} satisfies $\Delta h = 0$, and $n \in \mathbb{R}$. Similarly: \begin{equation} e^\sigma \left| {{\bf\Lambda} d\Xi } \right\rangle - n dh\left| \Xi \right\rangle = s \left| {dK} \right\rangle \,\,\,\,\,{\rm where}\,\,\,\,\,\left| {\Delta K} \right\rangle = 0\,\,\,\,\,{\rm and}\,\,\,\,\,s \in \mathbb{R}.\label{Axion-K} \end{equation} To find an expression for the axions, we look again at the vanishing of the hyperini transformations (\ref{SUSYHyperon1}) and (\ref{SUSYHyperon2}) and make the simplifying assumption $\epsilon_1=\pm\epsilon_2$. This leads to: \begin{eqnarray} \left\langle {V} \mathrel{\left | {\vphantom {V {d\Xi }}} \right. \kern-\nulldelimiterspace} {{d\Xi }} \right\rangle &=& \frac{1}{2}e^{ - \frac{\sigma }{2}} d\sigma - \frac{{in}}{2}e^{ - \frac{3}{2}\sigma } dh\nonumber\\ \left\langle {{\bar V}} \mathrel{\left | {\vphantom {{\bar V} {d\Xi }}} \right. \kern-\nulldelimiterspace} {{d\Xi }} \right\rangle &=& \frac{1}{2}e^{ - \frac{\sigma }{2}} d\sigma + \frac{{in}}{2}e^{ - \frac{3}{2}\sigma } dh \nonumber\\ \left\langle {{U_i }} \mathrel{\left | {\vphantom {{U_i } {d\Xi }}} \right. \kern-\nulldelimiterspace} {{d\Xi }} \right\rangle &=& e^{ - \frac{\sigma }{2}} G_{i\bar j} dz^{\bar j}\nonumber\\ \left\langle {{U_{\bar j} }} \mathrel{\left | {\vphantom {{U_{\bar j} } {d\Xi }}} \right. \kern-\nulldelimiterspace} {{d\Xi }} \right\rangle &=& e^{ - \frac{\sigma }{2}} G_{i\bar j} dz^i. 
\end{eqnarray} These are the symplectic components of the full vector: \begin{equation} \left| {d\Xi } \right\rangle = e^{ - \frac{\sigma }{2}} \mathfrak{Re} \left[ {\left( {ne^{ - \sigma } dh- id\sigma} \right)\left| V \right\rangle + 2i\left| {U_i } \right\rangle dz^i } \right].\label{dXi} \end{equation} The reality condition $\overline{\left| {d\Xi } \right\rangle} = \left| {d\Xi } \right\rangle $ as well as the Bianchi identity on the axions are trivially satisfied. Substituting (\ref{dXi}) in (\ref{Axion-K}), we get \begin{equation} \left| \Xi \right\rangle dh = \frac{1}{n }e^{ - \frac{\sigma }{2}} \mathfrak{Re}\left[\left(d\sigma+in e^{-\sigma}dh\right)\left| V \right\rangle \right] + \frac{2}{n }e^{ - \frac{\sigma }{2}} \mathfrak{Re}\left[\left| U_i \right\rangle dz^i\right] - \frac{s}{n}e^{-\sigma}\left| {dK} \right\rangle .\label{42} \end{equation} These general constraints on the axions are in fact as far as one can get here. The exact solutions depend on the moduli and the symplectic basis vectors, which in turn require knowledge of the underlying Calabi-Yau metric, which is of course unknown. On the other hand, the harmonic function $h$, which arises from $\Delta h =0$: \begin{equation} e^{\left( {\alpha - \gamma } \right)} \left[ {h'' + \left( {\alpha ' + 3\beta ' - \gamma '} \right)h'} \right] = e^{\left( {\gamma - \alpha } \right)} \left[ {\ddot h - \left( {\dot \alpha - 3\dot \beta - \dot \gamma } \right)\dot h} \right]\label{HarmonicH} \end{equation} can be found, as we will see. Finally, the axions also depend on the dilaton. It can be shown that a simple ansatz for the dilaton (such as $\sigma\propto \ln h$, much used in the literature) is not satisfactory in this case and leads to trivial moduli. In fact, as we will see, the dilaton field equation turns out to be too complicated to solve generally. However, a certain special case solution can be written. 
\section{The cosmology of a single brane}\label{cosmology} The equations derived in the previous sections (specifically \ref{Gtt}, \ref{G11}, \ref{G21}, \ref{AlphaBetaGamma} and \ref{HarmonicH}) are the basic equations governing the dynamics of the multi-brane spacetime (\ref{GeneralBrane}). We may assume that the warp functions as well as $h$ are separable as follows: \begin{eqnarray} e^{\beta \left( {t,y} \right)} &=& a\left( t \right)F\left( y \right)\nonumber\\ e^{\gamma \left( {t,y} \right)} &=& b\left( t \right)K\left( y \right)\nonumber\\ e^{\alpha \left( {t,y} \right)} &=& c\left( t \right)N\left( y \right)\nonumber\\ h\left( {t,y} \right) &=& k\left( t \right)M\left( y \right).\label{Separation} \end{eqnarray} Our interest is the dynamics of a single brane out of an infinite number of possible 3-branes along $y$, so we will evaluate the functions $F\left( y \right)$, $K\left( y \right)$, $N\left( y \right)$ and $M\left( y \right)$ near the brane of interest and normalize the result to unity, \emph{i.e.} $F\left( 0 \right) = 1$ and so on, where the brane under study is located at $y=0$. 
The metric then becomes more Robertson-Walker like: \begin{equation} ds^2 = - c^2 \left( t \right)dt^2 + a^2 \left( t \right) \left( {dr^2 + r^2 d\Omega ^2 } \right) + b^2 \left( t \right) dy^2, \end{equation} and equations (\ref{Gtt}, \ref{G11}, \ref{G21}, \ref{AlphaBetaGamma} and \ref{HarmonicH}) simply reduce to: \begin{eqnarray} \frac{{\ddot b}}{{\dot b}} - \frac{{\dot b}}{b} &=& \frac{{\ddot a}}{{\dot a}} - \frac{{\dot a}}{a} \nonumber\\ \left( {\frac{{\dot c}}{c}} \right)&=&\left( {\frac{{\ddot b}}{\dot b}} \right)+3\left( {\frac{{\dot a}}{a}} \right)\label{ScaleEquations1}\\ G_{i\bar j} \dot z^i \dot z^{\bar j}&=& 6\left[ {\left( {\frac{{\dot a}}{a}} \right)^2 + \left( {\frac{{\dot a}}{a}} \right)\left( {\frac{{\dot b}}{b}} \right)} \right]\label{ScaleEquations2}\\ G_{i\bar j} {z^i}' {z^{\bar j}}'&=&G_{i\bar j} {z^i}' \dot z^{\bar j} =0.\label{ScaleEquations3}\\ \ddot k - \left[ {\left( {\frac{{\ddot a}}{\dot a}} \right) - \left( {\frac{{\dot a}}{a}} \right)} \right]\dot k &=& p^2\left( {\frac{c}{b}} \right)^2 k, \quad\quad p \in \mathbb{R}.\label{ScaleEquations4} \end{eqnarray} Equations (\ref{ScaleEquations1}) can be exactly solved in terms of the brane's scale factor $a\left(t\right)$ as follows: \begin{eqnarray} b\left( t \right) &=& G_2 a^{G_1 } \label{b_equation}\\ c\left( t \right) &=& G_3 \dot aa^{2 + G_1 }\label{c_equation} \end{eqnarray} where $G_i \in \mathbb{R}$ are arbitrary integration constants. As noted earlier, complete solutions of the scalar fields of the hypermultiplets would require full knowledge of the structure of the underlying manifold $\mathcal{M}$. Some insight may be gained by solving equation (\ref{ScaleEquations4}) for $k$, since the harmonic function is the only connection between the metric's warp factors and the hypermultiplets in the bulk. Fortunately, this is easily done: For the special case of the vanishing of the separability constant $p=0$, we find \begin{equation} k\left( t \right) = G_5 + G_6 \ln a. 
\end{equation} While for a general $p\ne 0$: \begin{equation} k\left( t \right)=G_5 I_0\left(\frac{p G_3 }{3 G_2} a^3 \right)+ G_6 K_0\left(\frac{p G_3 }{3 G_2}a^3\right), \end{equation} where $I_0$ and $K_0$ are the modified Bessel functions of the first and second kinds respectively. In the previous section we found partially explicit forms for the axions, all dependent on $k$ and the moduli. If we direct our attention to the dilaton, its time dependence can be found from (\ref{DilatonEOM}): \begin{equation} \ddot \sigma + \frac{1}{2}\dot \sigma ^2 = \frac{{n^2 }}{2}e^{ - 2\sigma } \dot k^2 - 12\left( {G_1 + 1} \right)\left( {\frac{{\dot a}}{a}} \right)^2, \end{equation} which is unfortunately too complicated to solve in terms of an arbitrary $a$. We did find, however, one (rather trivial) solution for the case $p=0$, $G_1=-1$ and assuming $a\left(t\right) \propto e^{\omega t}$, where $\omega \in \mathbb{R}$: \begin{equation} \sigma \left( t, 0 \right) = \ln \left[ {\frac{{G_7 }}{4}t^2 + \frac{{G_7 G_8 }}{2}t + \frac{{G_7 G_8^2 }}{4} + \frac{{n^2 \omega ^2 G_6^2 }}{{G_7 }}} \right]. \end{equation} Now, while again using the assumption $\epsilon_1=\pm\epsilon_2=\epsilon$, the vanishing of the gravitini equations (\ref{SUSYGraviton}) gives the following time dependent form for the near-brane spinors \begin{equation} \epsilon\left( t, 0 \right) = e^{\frac{\sigma }{2} + \frac{3}{4}in\Omega k - \Upsilon } \hat \epsilon, \end{equation} where $\hat \epsilon$ is an arbitrary constant spinor and the functions $\Upsilon$ and $\Omega$ are solutions of $\dot \Upsilon = Y$ and $\frac{d}{{dt}}\left( {\Omega k} \right) = e^{ - \sigma } \dot k$. From a cosmological perspective, the major result here follows from the moduli expression (\ref{ScaleEquations2}): \begin{equation} \left( {\frac{{\dot a}}{a}} \right)^2=\frac{G_{i\bar j} \dot z^i \dot z^{\bar j}}{6\left(G_1+1\right)}.\label{ModuliAsLambda} \end{equation} This is a surprisingly simple Friedmann-type equation. 
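The Bessel-function solution for $k$ above can be checked numerically without any knowledge of the Calabi-Yau data. The Python sketch below is purely illustrative: the constants $p$, $G_2$, $G_3$ and the test scale factor $a(t)=t^2$ (chosen so that the friction term in (\ref{ScaleEquations4}) does not vanish) are arbitrary assumptions, and the derivatives are taken by finite differences.

```python
import numpy as np
from scipy.special import iv  # modified Bessel function I_0

# Arbitrary illustrative constants (any positive values work)
p, G2, G3 = 0.7, 1.3, 0.9
q = p * G3 / (3.0 * G2)

# Test scale factor with a non-vanishing friction term in the ODE
a   = lambda t: t**2
ad  = lambda t: 2.0*t
add = lambda t: 2.0 + 0.0*t

# Candidate solution k = I_0(q a^3); the K_0 branch works the same way
k = lambda t: iv(0, q * a(t)**3)

def d(f, t, h=1e-4):   # central finite differences
    return (f(t+h) - f(t-h)) / (2*h)

def dd(f, t, h=1e-4):
    return (f(t+h) - 2*f(t) + f(t-h)) / h**2

t0 = 1.1
# ODE: k'' - (a''/a' - a'/a) k' = p^2 (c/b)^2 k, with c/b = (G3/G2) a' a^2
lhs = dd(k, t0) - (add(t0)/ad(t0) - ad(t0)/a(t0)) * d(k, t0)
rhs = p**2 * (G3/G2)**2 * ad(t0)**2 * a(t0)**4 * k(t0)
print(abs(lhs - rhs) / abs(rhs))  # small: the ODE holds to finite-difference accuracy
```

The residual vanishes for any smooth $a(t)$, reflecting the fact that the substitution $s=\frac{pG_3}{3G_2}a^3$ reduces (\ref{ScaleEquations4}) to the modified Bessel equation of order zero.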
It states that the square of the brane-universe's Hubble parameter is proportional to the kinetic norm of the complex structure moduli (clearly this requires $G_1>-1$). Unless $G_{i\bar j} \dot z^i \dot z^{\bar j}$ vanishes, one can correlate the value of the brane's acceleration at any time with the evolution of the space of complex structure moduli $\mathcal{M}_C$. Thinking in terms of the cosmology of our own universe, one could ask what form the moduli should take. From a phenomenological perspective, massive unstable moduli (of any type) must have been densely produced in the Big Bang itself, and the heavier they were the faster they must have decayed early on \cite{Dine:2006ii,Bodeker:2006ij}. While moduli stabilization is necessary for other reasons \cite{Dudas:2012wi}, most realistic scenarios involve rapidly decaying moduli. Using (\ref{ModuliAsLambda}), we find that any reasonable choice of $a$ necessarily leads to the decay of the moduli, with varying degrees of instability, as expected. For example: \begin{eqnarray} a &=& t^\omega \quad\quad \rightarrow\quad\quad G_{i\bar j} \dot z^i \dot z^{\bar j}=\frac{6\omega ^2 \left( G_1 + 1\right)}{t^2 }\nonumber\\ a &=& e^{\omega t} \quad\quad \rightarrow\quad\quad G_{i\bar j} \dot z^i \dot z^{\bar j}=6\omega ^2 \left( G_1 + 1\right)\nonumber\\ a &=& \ln\left(\omega t\right) \quad\quad \rightarrow\quad\quad G_{i\bar j} \dot z^i \dot z^{\bar j}=\frac{6\left( {G_1 + 1} \right)}{t^2 \ln ^2 \left( {\omega t} \right)}, \end{eqnarray} and so on. More interestingly, one can choose a form of $a$ that represents different values of the acceleration in the $t\rightarrow 0$ and $t\rightarrow \infty$ epochs, for example: \begin{equation} a\left( t \right) = e^{t/\omega} - e^{ - \kappa t}, \quad\quad \omega ,\kappa \in \mathbb{R}^{+}.\label{Acceleration} \end{equation} The explicit values of the constants $\omega$ and $\kappa$ dictate the dominant accelerative behavior of the brane at early as well as late times. For example, larger values of $\kappa$ lead to more extreme early inflationary acceleration, while larger values of $\omega$ lead to slower accelerations at later times. A model such as (\ref{Acceleration}) leads to \begin{equation} G_{i\bar j} \dot z^i \dot z^{\bar j}=\frac{6\left(G_1 + 1\right)}{\omega ^2 }\left( {\frac{{e^{t/\omega} + \omega \kappa e^{ - \kappa t} }}{{e^{t/\omega} - e^{ - \kappa t} }}} \right)^2,\label{ModuliDecay} \end{equation} which is initially very large and then converges to a constant: \begin{equation} \mathop {\lim }\limits_{t \to \infty } G_{i\bar j} \dot z^i \dot z^{\bar j}= \frac{6\left(G_1 + 1\right)}{\omega ^2 }. \end{equation} Reversing the logic, this suggests that the complex structure moduli are highly unstable, decaying very rapidly at very early times, synchronous with an inflationary period. The decay rate is controlled by $\kappa$, which may then be thought of as related to the mass of the moduli. At later times, however, a slowly convergent value of the moduli coincides with a slow accelerative expansion. From that perspective, the value of $\omega$ can possibly be thought of as related to the original density of the moduli. This behavior is in perfect agreement with the prevalent understanding in the literature: highly dense massive moduli produced in the very early hot universe and rapidly decaying to an almost constant value correlate with the accelerative expansion of the universe.
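The closed form (\ref{ModuliDecay}) and its late-time limit follow from (\ref{ModuliAsLambda}) by elementary differentiation, which can be sanity-checked numerically. In the Python sketch below the parameter values $\omega$, $\kappa$, $G_1$ are arbitrary illustrative choices.

```python
import numpy as np

# Hypothetical parameter choices for illustration
omega, kappa, G1 = 2.0, 5.0, 0.5

a    = lambda t: np.exp(t/omega) - np.exp(-kappa*t)
adot = lambda t: np.exp(t/omega)/omega + kappa*np.exp(-kappa*t)

def moduli_norm(t):
    # 6 (G1+1) (adot/a)^2, i.e. the moduli norm read off the Friedmann-type equation
    return 6.0*(G1 + 1.0)*(adot(t)/a(t))**2

def closed_form(t):
    # right-hand side of the decay formula for this choice of a(t)
    num = np.exp(t/omega) + omega*kappa*np.exp(-kappa*t)
    den = np.exp(t/omega) - np.exp(-kappa*t)
    return 6.0*(G1 + 1.0)/omega**2 * (num/den)**2

ts = np.linspace(0.2, 30.0, 200)
assert np.allclose(moduli_norm(ts), closed_form(ts))       # the two expressions agree
print(closed_form(30.0), 6.0*(G1 + 1.0)/omega**2)          # late-time value -> 6(G1+1)/omega^2
```

The first printed number converges to the second, $6(G_1+1)/\omega^2$, confirming the constant late-time moduli norm quoted above.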
Whether this correlation is a direct causal relation or just a by-product of a more complex mechanism is an important question best left to future study. Fig.~(\ref{fig:1}) gives a comparative sketch of $a\left( t \right)$ and $G_{i\bar j} \dot z^i \dot z^{\bar j}$ based on this argument. \begin{figure}[hp] \centering \includegraphics[scale=0.6]{Hubble_Workshop_VI} \caption{The correlation of the brane's scale factor (\ref{Acceleration}) with the norm of the moduli (\ref{ModuliDecay}).} \label{fig:1} \end{figure} \pagebreak \section{Conclusion} We have constructed a multi 3-brane embedding in ungauged five dimensional $\mathcal{N}=2$ supergravity. We derived the dynamical equations of this spacetime and studied the general form of the hypermultiplet scalars in the bulk. A natural difficulty that arises in this and similar calculations is that the hypermultiplet fields depend on the unknown form of the underlying Calabi-Yau space. The best one can do is to find constraints on the fields, rather than explicit solutions. The symplectic structure of the theory was used to simplify these relations. An added difficulty is the complexity of the resulting constraints, particularly on the dilaton in this case. We did find one analytical solution for the dilaton's time dependence, which unfortunately suffers from being too trivial. The other quantity that the hypermultiplet scalars depend on is an arbitrary function harmonic in the bulk dimension. Fortunately this function can be found analytically in terms of the metric warp functions. We also showed that if one focuses on only one of the branes, its time evolution depends solely on the norm of the moduli of the complex structure of the Calabi-Yau. In cosmological terms, the moduli act as both a cosmological constant (or a correction thereof) and an inflaton potential. Focusing on specific samples of brane-universe evolution, the moduli exhibit what seems to be an instability.
At very early times, their norm has a very large value that decays and converges to a constant quantity (which may be vanishing) at later times. In one particular case, the decay is very rapid and correlates directly with an early inflationary epoch. In terms of a possible application of these results to our own universe's cosmological evolution, we note that the early production and decay of heavy moduli is required to explain the phenomenology of the early universe, in perfect agreement with our conclusions. There is still, however, a lot of work to be done. For example, the correlation between the brane's accelerative behavior and that of the decay of the moduli is largely unexplained. More in-depth analysis of the flow and evolution of moduli needs to be performed before a specific mechanism explaining this behavior can be pinpointed. In addition, the brane we studied was assumed vacuous, devoid of all matter or radiation. A more realistic model is needed before one can make a concrete connection with our own universe's cosmology. All of this we plan to explore in future work. \pagebreak
\section{Introduction} Since the influential paper of Morris and Thorne \cite{Morris}, one of the most fascinating features of the theory of general relativity is the potential existence of space-times with wormholes. It is believed that they are short-cuts between otherwise distant or unconnected regions of the universe. Topologically, wormhole space-times are the same as those of black holes (BHs); however, a wormhole's throat, which possesses a minimal surface maintained throughout the time evolution, allows travelers to pass in both directions. The throat is held open by the presence of a phantom field \cite{Caldwell}. Namely, phantom energy is precisely what is needed to support traversable wormholes. However, this exotic quantity violates the null energy condition, and it is a signature of the dark energy that dominates our Universe \cite{Caroll}. In the mid-1970s, Stephen Hawking looked into whether BHs could radiate thermally according to quantum mechanics, using the Wick rotation method \cite{Hawking,Hawking',Hawking''}. Throughout space, short-lived \textquotedblleft virtual\textquotedblright\ particles (pairs of a real particle and an anti-particle) continually pop into and out of existence. Hawking realized that if the anti-particle falls into a BH while the real one escapes, the BH would emit radiation, glowing like a dying ember. Heuristically, there exist several derivations of the Hawking radiation (HR), such as the Damour-Ruffini method, the Hamilton-Jacobi (HJ) method, and the Parikh-Wilczek tunneling method (PWTM) (see for instance \cite{Hawking1,Damour,Wilczek,Wilczek1,Mann,Mann1,Christensen,Zhang1}). The reader is referred to \cite{review} for a topical review. Meanwhile, it is worth noting that the PWTM is only applicable to a future outer trapping horizon of the wormhole \cite{hawyard2009}.
All these methods can be used to calculate the emission/absorption probabilities of particles penetrating a particular surface (the event horizon) of the BH from the inside to the outside, or vice versa, via the following relation: \begin{equation} \Gamma =e^{-2ImS/\hslash }, \label{1} \end{equation} where $S$ is the action of the classically forbidden trajectory. Thus, the Hawking temperature is derived from the tunneling rate of the emitted particles (see for example \cite{Jing,Mann3,yang,Kruglov1,ran2,sharif,ran,ChenZhou,ali2,ali1}). The remainder of this paper is organized as follows. In Sec. II, we introduce the 3+1 dimensional TLWH \cite{kim}. Section III analyzes the Proca equation for massive vector particles in the past outer trapping horizon \cite{hawyard94,hayward,hayward1} geometry of the TLWH. We show that the Proca equations, amalgamated with the HJ method, can be reduced to a single equation, which makes it possible to compute the emission/absorption probabilities of the spin-1 particles. Then, we read off the tunneling rate of the radiated particles and use it to derive the Hawking temperature of the TLWH. Finally, in Sec. IV, the conclusions are summarized and further comments are added. \section{TLWH in 3+1 Dimensions} There is an analog of the BHs with the wormhole topology \cite{Topology}. However, instead of an event horizon, a wormhole must have a throat, which allows particles to pass through it in both directions. To construct the throat of a wormhole, exotic matter is required. Since a wormhole has two ends, the inside particles can naturally radiate from both ends. To study the HR of the TLWH, we consider a general spherically symmetric and dynamic wormhole with a past outer trapping horizon.
As shown in \cite{kim}, this local metric can be expressed in terms of the generalized retarded Eddington-Finkelstein coordinates as \begin{equation} ds^{2}=-Cdu^{2}-2dudr+r^{2}\left( d\theta ^{2}+Bd\varphi ^{2}\right) , \label{2} \end{equation} where $C=1-2M/r$ and $B=\sin ^{2}\theta $. $M$ represents the gravitational energy in a space with this symmetry, which is the so-called Misner-Sharp energy \cite{MisnerSharp}. It is defined by $M=\frac{1}{2}r\left(1-\partial ^{a}r\partial _{a}r\right)$, which becomes $M=\frac{1}{2}r$ on a trapping horizon. Moreover, in the retarded coordinates the marginal surfaces on which $C=0$ (at the horizon: $r=r_{0}$) are past marginal surfaces \cite{kim}. \section{HR of Vector Particles From 3+1 Dimensional TLWH} We begin this section by introducing the Proca equation for a curved space-time \cite{K2,ali3}: \begin{equation} \frac{1}{\sqrt{-g}}\frac{\partial \left( \sqrt{-g}\psi ^{\nu \mu }\right) }{\partial x^{\mu }}+\frac{m^{2}}{\hbar ^{2}}\psi ^{\nu }=0, \label{3} \end{equation} where the wave function in 3+1 dimensions is given by $\psi _{\nu }=(\psi _{0},\psi _{1},\psi _{2},\psi _{3})$. Next, within the framework of the WKB approximation, we substitute the following HJ ansatz into Eq. (3): \begin{equation} \psi _{\nu }=\left( c_{0},c_{1},c_{2},c_{3}\right) e^{\frac{i}{\hbar }S(u,r,\theta ,\phi )}, \label{4} \end{equation} where $\left( c_{0},c_{1},c_{2},c_{3}\right) $ denote arbitrary real constants. The action $S(u,r,\theta ,\phi )$ is expanded as \begin{equation} S(u,r,\theta ,\phi )=S_{0}(u,r,\theta ,\phi )+\hbar S_{1}(u,r,\theta ,\phi )+\hbar ^{2}S_{2}(u,r,\theta ,\phi )+\ldots
\label{5} \end{equation} Since metric (2) is symmetric, we have the Killing vectors $\partial _{\theta }$ and $\partial _{\phi }$. So, one can apply the separation of variables method to the action $S_{0}(u,r,\theta ,\phi )$: \begin{equation} S_{0}=Eu-W(r)-j\theta -k\phi , \label{6} \end{equation} where $E$ and $(j,k)$ are the energy and real angular constants, respectively. After inserting Eqs. (4), (5), and (6) into Eq. (3), we obtain a matrix equation $\Delta \left( c_{0},c_{1},c_{2},c_{3}\right) ^{T}=0$ (to leading order in $\hbar $), which has the following non-zero components: \begin{eqnarray} \Delta _{11} &=&2B\left[ \partial _{r}W(r)\right] ^{2}r^{2},\ \notag \\ \Delta _{12} &=&\Delta _{21}=2m^{2}r^{2}B+2B\partial _{r}W(r)Er^{2}+2Bj^{2}+2k^{2}, \notag \\ \Delta _{13} &=&-\frac{2\Delta _{31}}{r^{2}}=-2Bj\partial _{r}W(r),\ \notag \\ \Delta _{14} &=&\frac{\Delta _{41}}{Br^{2}}=-2k\partial _{r}W(r), \notag \\ \Delta _{22} &=&-2BCm^{2}r^{2}+2E^{2}r^{2}B-2j^{2}BC-2k^{2}C,\ \label{7n} \\ \Delta _{23} &=&\frac{-2\Delta _{32}}{r^{2}}=2jBC\partial _{r}W(r)+2EjB, \notag \\ \Delta _{24} &=&\frac{\Delta _{42}}{Br^{2}}=2kC\partial _{r}W(r)+2kE, \notag \\ \Delta _{33} &=&m^{2}r^{2}B+2BEr^{2}\partial _{r}W(r)+r^{2}BC\left[ \partial _{r}W(r)\right] ^{2}+k^{2}, \notag \\ \Delta _{34} &=&\frac{-\Delta _{43}}{2B}=-kj, \notag \\ \Delta _{44} &=&-2r^{2}BC\left[ \partial _{r}W(r)\right] ^{2}-4BEr^{2}\partial _{r}W(r)-2B(m^{2}r^{2}+j^{2}). \notag \end{eqnarray} A non-trivial solution exists only if the determinant of the $\Delta $-matrix vanishes ($\mbox{det}\Delta =0$). Hence, we get \begin{equation} \mbox{det}\Delta =64Bm^{2}r^{2}\left\{ \frac{1}{2}r^{2}BC\left[ \partial _{r}W(r)\right] ^{2}+BEr^{2}\partial _{r}W(r)+\frac{B}{2}\left( m^{2}r^{2}+j^{2}\right) +\frac{k^{2}}{2}\right\} ^{3}=0. \label{8n} \end{equation} Solving Eq.
(8) for $W(r)$ yields \begin{equation} W_{\pm }(r)=\int \left( \frac{-E}{C}\pm \sqrt{\frac{E^{2}}{C^{2}}-\frac{m^{2}}{C}-\frac{j^{2}}{Cr^{2}}-\frac{k^{2}}{CBr^{2}}}\right) dr. \label{9} \end{equation} In the vicinity of the horizon ($r\rightarrow r_{0}$), the above integral takes the following form: \begin{equation} W_{\pm }(r)\simeq \int \left( \frac{-E}{C}\pm \frac{E}{C}\right) dr. \label{10} \end{equation} According to Eq. (1), the probabilities of the emitted/absorbed particles depend on the imaginary part of the action. Since $C=0$ on the horizon, Eq. (10) has a pole there. The associated contribution is obtained by deforming the contour of integration into the upper half $r$-plane. In short, at the horizon, Eq. (10) becomes \begin{equation} W_{\pm }=i\pi \left( \frac{-E}{2\kappa |_{H}}\pm \frac{E}{2\kappa |_{H}}\right). \label{11} \end{equation} Whence \begin{equation} ImS=ImW_{\pm }, \label{12} \end{equation} where $\kappa |_{H}=\partial _{r}C/2$ is the surface gravity at the horizon. It should be noted that since the throat is an outer trapping horizon, $\kappa |_{H}$ is a positive quantity \cite{kim,kim2}. If we set the probability of absorption to $100\%$ (i.e., $\Gamma _{absorption}\approx e^{-2ImW}\approx 1$), so that $W_{+}$ describes the ingoing particles and consequently $W_{-}$ stands for the outgoing ones, we can compute the tunneling rate \cite{Mann1,K2,ran2,sharif,ran} of the vector particles as \begin{equation} \Gamma =\frac{\Gamma _{emission}}{\Gamma _{absorption}}=\Gamma _{emission}\approx e^{-2ImW_{-}}=e^{\frac{2\pi E}{\kappa |_{H}}}. \label{13} \end{equation} Comparing Eq. (13) with the Boltzmann factor $\Gamma \approx e^{-\beta E}$ ($\beta $ is the inverse temperature), we then have \begin{equation} T|_{H}=-\frac{\kappa |_{H}}{2\pi }, \label{14} \end{equation} where $T|_{H}$ is the Hawking temperature of the TLWH. However, $T|_{H}$ is negative, as formerly stated in \cite{kim,kim2}.
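The contour computation leading to Eq. (14) can be reproduced symbolically. The SymPy sketch below is illustrative only: it evaluates the half-pole contribution of the outgoing integrand of Eq. (10) for the metric function $C=1-2M/r$ and recovers both $\mathrm{Im}\,W_{-}=-\pi E/\kappa|_{H}$ and the (negative) temperature $T|_{H}=-1/(8\pi M)$.

```python
import sympy as sp

r, M, E = sp.symbols('r M E', positive=True)

C = 1 - 2*M/r                       # metric function of Eq. (2); horizon at r0 = 2M
kappa = sp.diff(C, r)/2             # surface gravity kappa|_H = C'(r)/2
kappa_H = kappa.subs(r, 2*M)        # = 1/(4M) > 0 at the past outer trapping horizon
assert sp.simplify(kappa_H - 1/(4*M)) == 0

# Outgoing integrand of Eq. (10) is -2E/C; near the horizon C ~ 2 kappa_H (r - 2M),
# so deforming into the upper half r-plane picks up i*pi times the residue.
residue = sp.limit((r - 2*M) * (-2*E/C), r, 2*M)   # = -4*E*M
ImW_minus = sp.im(sp.I * sp.pi * residue)          # Im W_- = -pi E / kappa|_H
assert sp.simplify(ImW_minus + sp.pi*E/kappa_H) == 0

T_H = -kappa_H/(2*sp.pi)            # Eq. (14): the (negative) Hawking temperature
print(ImW_minus, T_H)
```

The printed results, $-4\pi E M$ and $-1/(8\pi M)$, match the text: $\Gamma=e^{-2\mathrm{Im}W_-}=e^{2\pi E/\kappa|_H}$, giving the negative temperature discussed next.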
The main reason for this negative value is the phantom energy \cite{kim,yurov}, which is located at the throat of the wormhole. Furthermore, because of the phantom energy, ordinary matter can travel backward in time \cite{kim2}. \section{Conclusion} In summary, we have calculated the HR of massive vector particles from the TLWH in 3+1 dimensions. To this end, we have used the Proca equation. The probabilities of the vector particles crossing the trapping horizon of the TLWH have been obtained by applying the HJ method. The tunneling rate of the vector particles has then been computed, and by comparing it with the Boltzmann factor the Hawking temperature of the TLWH has been derived. Although the computed temperature is negative, our result is consistent with the results of \cite{kim,kim2}. Remarkably, we infer from the negative $T|_{H}$ that the past outer trapping horizon of the TLWH radiates thermal phantom energy. On the other hand, it is a fact that phantom energy radiation must decrease both the size of the throat of the wormhole and its entropy. However, this does not constitute a problem, because the total entropy of the universe always increases, which prevents the violation of the second law of thermodynamics \cite{Diaz}. \bigskip
\section{Introduction}\label{sec:INT} Eigenvalues of random matrices form a strongly correlated point process. One manifestation of this fact is the unusually small fluctuation of their linear statistics making the eigenvalue process distinctly different from a Poisson point process. Suppose that the \(n\times n\) random matrix \(X\) has i.i.d.\ entries of zero mean and variance \(1/n\). The empirical density of the eigenvalues \(\{ \sigma_i\}_{i=1}^n\) converges to a limit distribution; it is the uniform distribution on the unit disk in the non-Hermitian case \emph{(circular law)} and the semicircular density in the Hermitian case \emph{(Wigner semicircle law)}. For test functions \(f\) defined on the spectrum one may consider the fluctuation of the linear statistics and one expects that \begin{equation}\label{eq:linst} L_n(f):= \sum_{i=1}^n f(\sigma_i) - \E \sum_{i=1}^n f(\sigma_i) \sim \mathcal{N} (0, V_f) \end{equation} converges to a centred normal distribution as \(n\to \infty\). The variance \(V_f\) is expected to depend only on the second and fourth moments of the single entry distribution. Note that, unlike in the usual central limit theorem, there is no \(1/\sqrt{n}\) rescaling in~\eqref{eq:linst} which is a quantitative indication of a strong correlation. The main result of the current paper is the proof of~\eqref{eq:linst} for non-Hermitian random matrices with complex i.i.d.\ entries and for general test functions \(f\). We give an explicit formula for \(V_f\) that involves the fourth cumulant of \(X\) as well, disproving a conjecture by Chafa{\"\i}~\cite{chefai}. By polarisation, from~\eqref{eq:linst} it also follows that the limiting joint distribution of \((L_n(f_1), L_n(f_2), \ldots , L_n(f_k))\) for a fixed number of test functions is jointly Gaussian. We remark that another manifestation of the strong eigenvalue correlation is the repulsion between neighbouring eigenvalues. 
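The suppressed fluctuation in~\eqref{eq:linst} is easy to see in a small simulation. The Python sketch below is purely illustrative: the test function \(f(z)=(\mathfrak{Re}\,z)^2\), the matrix sizes, and the number of trials are arbitrary choices. It estimates the variance of \(L_n(f)\) for complex Ginibre matrices and shows that it stays bounded as \(n\) grows, in contrast with the linear-in-\(n\) growth a Poisson point process with \(n\) points would exhibit.

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda z: np.real(z)**2          # smooth test function on the spectrum

def var_linear_statistic(n, trials=150):
    vals = []
    for _ in range(trials):
        # complex Ginibre: i.i.d. complex entries, mean 0, variance 1/n
        X = (rng.standard_normal((n, n)) + 1j*rng.standard_normal((n, n))) / np.sqrt(2*n)
        vals.append(np.sum(f(np.linalg.eigvals(X))))
    return np.var(vals)

for n in (40, 80, 160):
    print(n, var_linear_statistic(n))   # stays O(1) in n: no 1/sqrt(n) normalisation needed
```

The estimated variance is roughly constant across \(n\), a Monte Carlo illustration of the strong eigenvalue correlations described above.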
For Gaussian ensembles the local repulsion is directly seen from the well-known determinantal structure of the joint distribution of all eigenvalues; both in the non-Hermitian \emph{Ginibre} case and in the Hermitian \emph{GUE/GOE} case. In the spirit of \emph{Wigner-Dyson-Mehta universality} of the local correlation functions~\cite{MR0220494} level repulsion should also hold for random matrices with general distributions. While for the Hermitian case the universality has been rigorously established for a large class of random matrices (see e.g.~\cite{MR3699468} for a recent monograph), the analogous result for the non-Hermitian case is still open in the bulk spectrum (see, however,~\cite{MR4221653} for the edge regime and~\cite{MR3306005} for entry distributions whose first four moments match the Gaussian). These two manifestations of the eigenvalue correlations cannot be deduced from each other, however the proofs often share common tools. For \(n\)-independent test functions \(f\),~\eqref{eq:linst} apparently involves understanding the eigenvalues only on the macroscopic scales, while the level repulsion is expressly a property on the microscopic scale of individual eigenvalues. However the suppression of the usual \(\sqrt{n}\) fluctuation is due to delicate correlations on all scales, so~\eqref{eq:linst} also requires understanding local scales. Hermitian random matrices are much easier to handle, hence fluctuation results of the type~\eqref{eq:linst} have been gradually obtained for more and more general matrix ensembles as well as for broader classes of test functions, see, e.g.~\cite{MR1487983,MR2189081,MR1411619,MR2561434,MR2829615} and~\cite{MR3116567} for the weakest regularity conditions on \(f\). Considering \(n\)-dependent test functions, Gaussian fluctuations have been detected even on mesoscopic scales~\cite{MR1678012,MR1689027, MR3959983, MR3678478, MR4009708, MR3852256, MR4187127,MR4255183,2001.07661}. 
Non-Hermitian random matrices pose serious challenges, mainly because their eigenvalues are potentially very unstable. When \(X\) has i.i.d.\ centred Gaussian entries with variance \(1/n\) (this is called the \emph{Ginibre ensemble}), the explicit determinantal formulas for the correlation functions may be used to compute the distribution of the linear statistics \(L_n(f)\). Forrester in~\cite{MR1687948} proved~\eqref{eq:linst} for the complex Ginibre ensemble and radially symmetric \(f\), and obtained the variance \(V_f = (4\pi)^{-1}\int_{\mathbf{D}} \abs{\nabla f}^2 \operatorname{d}\!{}^2z\), where \(\mathbf{D} \) is the unit disk. He also gave a heuristic argument based on Coulomb gas theory for general \(f\); his calculations predicted an additional boundary term \(\frac{1}{2}\norm{f}_{\dot{H}^{1/2}(\partial \mathbf{D})}^2\) in the variance \(V_f\). Rider considered test functions \(f\) depending only on the angle~\cite{MR2095933}; when \(f\not\in H^1(\mathbf{D})\), the variance \(V_f\) accordingly grows with \(\log n\) (similar growth is proved for \(f=\log\) in~\cite{MR3161483}). Finally, Rider and Vir\'ag in~\cite{MR2361453} rigorously verified Forrester's prediction for general \(f\in C^1(\mathbf{D})\) using a cumulant formula for determinantal processes found first by Costin and Lebowitz~\cite{MR3155254} and extended by Soshnikov~\cite{MR1894104}. They also presented a \emph{Gaussian free field (GFF)} interpretation of the result that we extend in Section~\ref{sec GFF}.
The domain of analyticity was optimized in~\cite{MR3540493}, where extensions to elliptic ensembles were also proven. Polynomial test functions via the alternative moment method were considered by Nourdin and Peccati in~\cite{MR2738319}. The analytic method of~\cite{MR2294978} was recently extended by Coston and O'Rourke~\cite{MR4125967} to fluctuations of linear statistics for \emph{products} of i.i.d.\ matrices. However, these methods fail for a larger class of test functions. Since the first four moments of the matrix elements fully determine the limiting eigenvalue statistics, Tao and Vu were able to compare the fluctuation of the local eigenvalue density for a general non-Gaussian \(X\) with that of a Ginibre matrix~\cite[Corollary 10]{MR3306005}, assuming the first four moments of \(X\) match those of the complex Ginibre ensemble. This method was extended by Kopel~\cite[Corollary 1]{1510.02987} to general smooth test functions, with an additional study on the real eigenvalues when \(X\) is real (see also the work of Simm for polynomial statistics of the real eigenvalues~\cite{MR3612267}). Our result removes the limitations of both previous approaches: we allow general test functions and a general distribution for the matrix elements without constraints on matching moments. We remark that the dependence of the variance \(V_f\) on the fourth cumulant of the single matrix entry escaped all previous works. The Ginibre ensemble, with its vanishing fourth cumulant, clearly cannot capture this dependence. Interestingly, even though the fourth cumulant is in general not zero in the work of Rider and Silverstein~\cite{MR2294978}, it is multiplied by a functional of \(f\) that happens to vanish for analytic functions (see~\eqref{eq:cov},~\eqref{eq:expv} and Remark~\ref{rem:compo} later). Hence that result did not detect the precise role of the fourth cumulant either.
This may have motivated the conjecture~\cite{chefai} that the variance does not depend on the fourth cumulant at all. In order to focus on the main new ideas, in this paper we consider the problem only for \(X\) with genuinely complex entries. Our method also works for real matrices where the real axis in the spectrum plays a special role that modifies the exact formula for the expectation and the variance \(V_f\) in~\eqref{eq:linst}. This leads to some additional technical complications that we have resolved in a separate work~\cite{MR4235475} which contains the real version of our main Theorem~\ref{theo:CLT}. Finally, we remark that the problem of fluctuations of linear statistics has been considered for \(\beta\)-log-gases in one and two dimensions; these are closely related to the eigenvalues of the Hermitian, resp.\ non-Hermitian Gaussian matrices for classical values \(\beta=1,2,4\) and for quadratic potential. In fact, in two dimensions the logarithmic interaction also corresponds to the Coulomb gas from statistical physics. Results analogous to~\eqref{eq:linst} in one dimension were obtained e.g.\ in~\cite{MR1487983, MR3063494, 1303.1045, MR3865662, MR4021234,MR3885548, MR4009708, MR4168391}. In two dimensions similar results have been established both in the macroscopic~\cite{MR3788208} and in the mesoscopic~\cite{MR4063572} regimes. We now outline the main ideas in our approach. We use Girko's formula~\cite{MR773436} in the form given in~\cite{MR3306005} to express linear eigenvalue statistics of \(X\) in terms of resolvents of a family of \(2n\times 2n\) Hermitian matrices \begin{equation}\label{eq:linz1} H^z:= \begin{pmatrix} 0 & X-z \\ X^*-\overline{z} & 0 \end{pmatrix} \end{equation} parametrized by \(z\in \mathbf{C} \). 
This formula asserts that \begin{equation}\label{girko} \sum_{\sigma\in \Spec(X)} f(\sigma) = -\frac{1}{4\pi} \int_{\mathbf{C} } \Delta f(z)\int_0^\infty \Im \Tr G^z(\mathrm{i}\eta)\operatorname{d}\!{}\eta \operatorname{d}\!{}^2 z \end{equation} for any smooth, compactly supported test function \(f\) (the apparent divergence of the \(\eta\)-integral at infinity can easily be removed, see~\eqref{eq:GirkosplitA}). Here we set \(G^z(w):= (H^z-w)^{-1}\) to be the resolvent of \(H^z\). We have thus transformed our problem to a Hermitian one and all tools and results developed for Hermitian ensembles in recent years are available. Utilizing Girko's formula requires a good understanding of the resolvent of \(H^z\) along the imaginary axis for all \(\eta>0\). On very small scales \(\eta\ll n^{-1}\), there are no eigenvalues, thus \(\Im \Tr G^z(\mathrm{i}\eta)\) is negligible. All other scales \(\eta\gtrsim n^{-1}\) need to be controlled carefully since \emph{a priori} they could all contribute to the fluctuation of \(L_n(f)\), even though \emph{a posteriori} we find that the entire variance comes from scales \(\eta\sim 1\). In the mesoscopic regime \(\eta\gg n^{-1}\), \emph{local laws} from~\cite{MR3770875, 1907.13631} accurately describe the leading order deterministic behaviour of \(\frac{1}{n}\Tr G^z(\mathrm{i} \eta)\) and even the matrix elements \(G^z_{ab}(\mathrm{i}\eta)\); now we need to identify the next order fluctuating term in the local law. In other words we need to prove a central limit theorem for the traces of resolvents \(G^z\). In fact, based upon~\eqref{girko}, for the \(k\)-th moments of \(L_n(f)\) we need the joint distribution of \(\Tr G^{z_l}(\mathrm{i} \eta)\) for different spectral parameters \(z_1, z_2, \ldots, z_k\). This is one of our main technical achievements.
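For the reader's convenience we record the standard (formal) mechanism behind~\eqref{girko}; none of the following identities is new. First, if \((X-z)v=su\) and \((X-z)^*u=sv\) is a singular pair of \(X-z\) with \(s\ge 0\), then \(H^z(u,\pm v)^t=\pm s\,(u,\pm v)^t\), so the spectrum of \(H^z\) consists of the signed singular values of \(X-z\) and in particular \(\abs{\det H^z}=\abs{\det(X-z)}^2\). Second, since \((2\pi)^{-1}\log\abs{z}\) is the fundamental solution of the Laplacian on \(\mathbf{C}\),
\[
\sum_{\sigma\in\Spec(X)} f(\sigma)=\frac{1}{2\pi}\int_{\mathbf{C}}\Delta f(z)\log\abs{\det(X-z)}\operatorname{d}\!{}^2 z=\frac{1}{4\pi}\int_{\mathbf{C}}\Delta f(z)\log\abs{\det H^z}\operatorname{d}\!{}^2 z.
\]
Third, by spectral decomposition,
\[
\partial_\eta\log\abs{\det(H^z-\mathrm{i}\eta)}=\sum_i\frac{\eta}{(\lambda_i^z)^2+\eta^2}=\Im\Tr G^z(\mathrm{i}\eta),
\]
where \(\lambda_i^z\) denote the eigenvalues of \(H^z\); integrating in \(\eta\) from \(0\) to \(\infty\) formally yields~\eqref{girko}, with the divergence at \(\eta=\infty\) removed by the cut-off in the regularised version~\eqref{eq:GirkosplitA}.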
Note that the asymptotic joint Gaussianity of traces of Wigner resolvents \(\Tr (H-w_1)^{-1}, \Tr (H-w_2)^{-1}, \ldots\) at different spectral parameters has been obtained in~\cite{MR4095015, MR3678478}. However, the method behind these results is not applicable since the role of the spectral parameter \(z\) in~\eqref{eq:linz1} is very different from \(w\); it is in an off-diagonal position, thus these resolvents do not commute and they are not in the spectral resolution of a single matrix. The microscopic regime, \(\eta\sim n^{-1}\), is much more involved than the mesoscopic one. Local laws and their fluctuations are not sufficient; we need to trace the effect of the individual eigenvalues \(0\le \lambda_1^z\le \lambda_2^z\le \cdots\) of \(H^z\) near zero (the spectrum of \(H^z\) is symmetric, so we may focus on the positive eigenvalues). Moreover, we need their \emph{joint} distribution for different \(z\) parameters, which, for arbitrary \(z\)'s, is not known even in the Ginibre case. We prove, however, that \(\lambda_1^z\) and \(\lambda_1^{z'}\) are asymptotically independent if \(z\) and \(z'\) are far away, say \(\abs{z-z'}\ge n^{-1/100}\). A similar result holds simultaneously for several small eigenvalues. Notice that due to the \(z\)-integration in~\eqref{girko}, when the \(k\)-th moment of \(L_n(f)\) is computed, the integration variables \(z_1, z_2, \ldots, z_k\) are typically far away from each other. The resulting independence of the spectra of \(H^{z_1}\), \(H^{z_2}, \ldots \) near zero ensures that the microscopic regime eventually does not contribute to the fluctuation of \(L_n(f)\). The proof of the independence of \(\lambda_1^z\) and \(\lambda_1^{z'}\) relies on the analysis of the \emph{Dyson Brownian motion} (DBM) developed in recent years~\cite{MR3699468} for the proof of the Wigner-Dyson-Mehta universality conjecture for Wigner matrices.
The key mechanism is the fast local equilibration of the eigenvalues \({\bm \lambda}^z(t):= \{ \lambda_i^z(t)\}\) along the stochastic flow generated by adding a small time-dependent Gaussian component to the original matrix. This Gaussian component can then be removed by the \emph{Green function comparison theorem} (GFT). One of the main technical results of~\cite{MR3916329} (motivated by the analogous analysis in~\cite{MR3914908} for Wigner matrices that relied on coupling and homogenisation ideas introduced first in~\cite{MR3541852}) asserts that for any fixed \(z\) the DBM process \({\bm \lambda}^z(t)\) can be pathwise approximated by a similar DBM with a different initial condition by \emph{exactly} coupling the driving Brownian motions in their DBMs. We extend this idea to simultaneously tracking \({\bm \lambda}^z(t)\) and \({\bm \lambda}^{z'}(t)\) by their independent Ginibre counterparts. The evolutions of \({\bm \lambda}^z(t)\) and \({\bm \lambda}^{z'}(t)\) are not independent since their driving Brownian motions are correlated; the correlation is given by the eigenfunction overlap \(\braket{ u_i^z, u_j^{z'}}\braket{ v_j^{z'}, v_i^z}\) where \(w_i^z = (u_i^z, v_i^z)\in \mathbf{C}^n\times \mathbf{C}^n\) denotes the eigenvector of \(H^z\) belonging to \(\lambda_i^z\). However, this overlap turns out to be small if \(z\) and \(z'\) are far away and \(i\) is not too large. Thus the analysis of the microscopic regime has two ingredients: (i) extending the coupling idea to driving Brownian motions whose distributions are not identical but close to each other; and (ii) proving the smallness of the overlap. While (i) can be achieved by relatively minor modifications to the proofs in~\cite{MR3916329}, (ii) requires developing a new type of local law. Indeed, the overlap can be estimated in terms of traces of products of resolvents, \(\Tr G^z( \mathrm{i}\eta) G^{z'}( \mathrm{i}\eta')\) with \(\eta, \eta'\sim n^{-1+\epsilon}\) in the mesoscopic regime.
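To indicate why products of resolvents control the overlaps (a simple spectral decomposition, stated here only heuristically and suppressing the two-component structure of \(w_i^z\)): writing \(\Im G^z(\mathrm{i}\eta)=\sum_i \frac{\eta}{(\lambda_i^z)^2+\eta^2}\, w_i^z (w_i^z)^*\), we have
\[
\Tr\bigl[\Im G^z(\mathrm{i}\eta)\,\Im G^{z'}(\mathrm{i}\eta')\bigr]=\sum_{i,j}\frac{\eta}{(\lambda_i^z)^2+\eta^2}\cdot\frac{\eta'}{(\lambda_j^{z'})^2+\eta'^2}\,\abs{\braket{ w_i^z, w_j^{z'}}}^2,
\]
so an upper bound on the left hand side at \(\eta,\eta'\sim n^{-1+\epsilon}\) forces the overlaps of the low-lying eigenvectors to be small.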
Customary local laws, however, do not apply to a quantity involving \emph{products} of resolvents. In fact, even the leading deterministic term needs to be identified by solving a new type of deterministic Dyson equation. We first show the stability of this new equation using the lower bound on \(\abs{z-z'}\). Then we prove the necessary high probability bound for the error term in the Dyson equation by a diagrammatic cumulant expansion adapted to the new situation of products of resolvents. The key novelty is to extract quantitatively the effect of the weak correlation between \(G^z\) and \(G^{z'}\) when \(z\) and \(z'\) are far away from each other. We close this section with an important remark concerning the proofs for Hermitian versus non-Hermitian matrices. Similarly to Girko's formula~\eqref{girko}, the linear eigenvalue statistics for Hermitian matrices are also expressed by an integral of the resolvents over all spectral parameters. However, in the corresponding Helffer-Sj\"ostrand formula, sufficient regularity of \(f\) directly neutralizes the potentially singular behaviour of the resolvent near the real axis, giving rise to CLT results even with suboptimal control on the resolvent in the mesoscopic regime. A similar trade-off in~\eqref{girko} is not apparent; it is unclear if and how the integration in \(z\) could help regularize the \(\eta\) integral. This is a fundamental difference between CLTs for Hermitian and non-Hermitian ensembles that explains the abundance of Hermitian results in contrast to the scarcity of available non-Hermitian CLTs. \subsection*{Acknowledgement} L.E.\ would like to thank Nathana\"el Berestycki, and D.S.\ would like to thank Nina Holden for valuable discussions on the Gaussian free field. \subsection*{Notations and conventions} We introduce some notations we use throughout the paper. For integers \(k\in\mathbf{N} \) we use the notation \([k]:= \{1,\dots, k\}\).
We write \(\mathbf{H} \) for the upper half-plane \(\mathbf{H} := \set{z\in\mathbf{C} \given \Im z>0}\), \(\mathbf{D}\subset \mathbf{C}\) for the open unit disk, and for any \(z\in\mathbf{C} \) we use the notation \(\operatorname{d}\!{}^2 z:= 2^{-1} \mathrm{i}(\operatorname{d}\!{} z\wedge \operatorname{d}\!{} \overline{z})\) for the two dimensional volume form on \(\mathbf{C} \). For positive quantities \(f,g\) we write \(f\lesssim g\) and \(f\sim g\) if \(f \le C g\) or \(c g\le f\le Cg\), respectively, for some constants \(c,C>0\) which depend only on the constants appearing in~\eqref{eq:hmb}. For any two positive real numbers \(\omega_*,\omega^*\in\mathbf{R} _+\) we write \(\omega_*\ll \omega^*\) if \(\omega_*\le c \omega^*\) for some small constant \(0<c\le 1/100\). We denote vectors by bold-faced lower case Roman letters \({\bm x}, {\bm y}\in\mathbf{C} ^k\), for some \(k\in\mathbf{N}\). Vector and matrix norms, \(\norm{\bm{x}}\) and \(\norm{A}\), indicate the usual Euclidean norm and the corresponding induced matrix norm. For any \(2n\times 2n\) matrix \(A\) we use the notation \(\braket{ A}:= (2n)^{-1}\Tr A\) to denote the normalized trace of \(A\). Moreover, for vectors \({\bm x}, {\bm y}\in\mathbf{C} ^n\) and matrices \(A,B\in \mathbf{C} ^{2n\times 2n}\) we define \[ \braket{ {\bm x},{\bm y}}:= \sum_i \overline{x}_i y_i, \qquad \braket{ A,B}:= \braket{ A^*B}. \] We will use the concept of ``with very high probability'' meaning that for any fixed \(D>0\) the probability of the event is bigger than \(1-n^{-D}\) if \(n\ge n_0(D)\). Moreover, we use the convention that \(\xi>0\) denotes an arbitrarily small constant which is independent of \(n\).
\section{Main results}\label{sec:MR} We consider \emph{complex i.i.d.\ matrices} \(X\), i.e.\ \(n\times n\) matrices whose entries are independent and identically distributed as \smash{\(x_{ab}\stackrel{d}{=} n^{-1/2}\chi\)} for some complex random variable \(\chi\), satisfying the following: \begin{assumption}\label{ass:1} We assume that \(\E \chi=\E \chi^2=0\) and \(\E \abs{\chi}^2=1\). In addition we assume the existence of high moments, i.e.\ that there exist constants \(C_p>0\), for any \(p\in\mathbf{N} \), such that \begin{equation}\label{eq:hmb} \E \abs{\chi}^p\le C_p. \end{equation} \end{assumption} The \emph{circular law}~\cite{MR1428519, MR863545, MR2663633, MR3813992, MR866352, MR773436, MR2575411, MR2409368} asserts that the empirical distribution of eigenvalues \(\{\sigma_i\}_{i=1}^n\) of a complex i.i.d.\ matrix \(X\) converges to the uniform distribution on the unit disk \(\mathbf{D}\), i.e. \begin{equation}\label{eq:circlaw} \lim_{n\to \infty}\frac{1}{n}\sum_{i=1}^n f(\sigma_i)=\frac{1}{\pi}\int_\mathbf{D} f(z)\operatorname{d}\!{}^2z, \end{equation} with very high probability for any continuous bounded function \(f\). Our main result is a central limit theorem for the centred \emph{linear statistics} \begin{equation}\label{eq:linstatmainr} L_n(f):=\sum_{i=1}^n f(\sigma_i)-\E \sum_{i=1}^n f(\sigma_i) \end{equation} for general complex i.i.d.\ matrices and generic test functions \(f\). In order to state the result we introduce some notations and certain Sobolev spaces. We fix some open bounded \(\Omega\subset\mathbf{C}\) containing the closed unit disk \(\overline{\mathbf{D}}\subset \Omega\) and having a piecewise \(C^1\)-boundary, or, more generally, any boundary satisfying the \emph{cone property} (see e.g.~\cite[Section~8.7]{MR1817225}). 
We consider test functions in the Sobolev space \(H^{2+\delta}_0(\Omega)\), which is defined as the completion of the smooth compactly supported functions \(C_c^\infty(\Omega)\) under the norm \[\norm{f}_{H^{2+\delta}(\Omega)}:= \norm{(1+\abs{\xi})^{2+\delta} \widehat f(\xi)}_{L^2(\mathbf{C})}\] and we note that by Sobolev embedding such functions are continuously differentiable, and vanish at the boundary of \(\Omega\). For notational convenience we identify \(f\in H^{2+\delta}_0(\Omega)\) with its extension to all of \(\mathbf{C}\) obtained from setting \(f\equiv 0\) in \(\mathbf{C}\setminus\Omega\). We note that our results can trivially be extended to bounded test functions with non-compact support since due to~\cite[Theorem 2.1]{1907.13631}, with high probability, all eigenvalues satisfy \(\abs{\sigma_i}\le 1+\epsilon\) and therefore non-compactly supported test functions can simply be smoothly cut off. For \(h\) defined on the boundary of the unit disk \(\partial\mathbf{D} \) we define its Fourier transform \begin{equation}\label{eq:furtra} \widehat{h}(k)=\frac{1}{2\pi}\int_0^{2\pi} h(e^{\mathrm{i}\theta}) e^{-\mathrm{i} \theta k}\operatorname{d}\!{}\theta, \qquad k\in\mathbf{Z} . \end{equation} For \(f,g\in H_0^{2+\delta}(\Omega)\) we define the homogeneous semi-inner products \begin{equation}\label{eq:h12norm} \begin{split} \braket{ g,f}_{\dot{H}^{1/2}(\partial\mathbf{D} )}&:= \sum_{k\in\mathbf{Z} }\abs{k} \widehat{f}(k) \overline{\widehat{g}(k)}, \qquad \norm{ f}^2_{\dot{H}^{1/2}(\partial\mathbf{D})}:= \braket{ f,f}_{\dot{H}^{1/2}(\partial\mathbf{D} )}, \end{split} \end{equation} where, with a slight abuse of notation, we identified \(f\) and \(g\) with their restrictions to \(\partial \mathbf{D} \).
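As a concrete illustration of~\eqref{eq:h12norm} (not used in the proofs): for \(h(e^{\mathrm{i}\theta})=e^{\mathrm{i} k\theta}\) the only nonzero Fourier coefficient is \(\widehat h(k)=1\), so \(\norm{h}^2_{\dot H^{1/2}(\partial \mathbf{D})}=\abs{k}\). In particular, for \(f(z)=z\) (after a smooth cut-off outside a neighbourhood of \(\overline{\mathbf{D}}\), which is immaterial by the spectral radius bound quoted above) the variance in Theorem~\ref{theo:CLT} below evaluates to
\[
V_f=\frac{1}{4\pi}\int_{\mathbf{D}}\abs{\nabla f}^2\operatorname{d}\!{}^2 z+\frac{1}{2}\norm{f}^2_{\dot H^{1/2}(\partial\mathbf{D})}=\frac{2\pi}{4\pi}+\frac{1}{2}=1,
\]
the \(\kappa_4\)-term vanishing since \(\int_{\mathbf{D}} z\operatorname{d}\!{}^2 z=\int_0^{2\pi}e^{\mathrm{i}\theta}\operatorname{d}\!{}\theta=0\). This is consistent with the elementary computation: here \(L_n(f)=\Tr X-\E\Tr X\) and \(\E\abs{\Tr X-\E\Tr X}^2=\sum_{i=1}^n\E\abs{x_{ii}}^2=1\).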
\begin{theorem}[Central Limit Theorem for linear statistics]\label{theo:CLT} Let \(X\) be a complex \(n\times n\) i.i.d.\ matrix satisfying Assumption~\ref{ass:1} with eigenvalues \(\{\sigma_i\}_{i=1}^n\), and denote the fourth \emph{cumulant} of \(\chi\) by \(\kappa_4:= \E\abs{\chi}^4-2\). Fix \(\delta>0\), an open complex domain \(\Omega\) with \(\overline\mathbf{D}\subset\Omega\subset\mathbf{C}\) and a complex valued test function \(f\in H_0^{2+\delta}(\Omega)\). Then the centred linear statistics \(L_n(f)\), defined in~\eqref{eq:linstatmainr}, converges \[L_n(f) \Longrightarrow L(f), \] to a complex Gaussian random variable \(L(f)\) with expectation \(\E L(f)=0\) and variance \(\E \abs{L(f)}^2=C(f,f)=: V_f\) and \(\E L(f)^2 = C(\overline f, f)\), where \begin{equation}\label{eq:cov} \begin{split} C\left( g,f\right)&:= \frac{1}{4\pi}\braket{ \nabla g,\nabla f}_{L^2(\mathbf{D} )}+\frac{1}{2}\braket{ g,f}_{\dot{H}^{1/2}(\partial\mathbf{D} )} \\ &\quad + \kappa_4 \left(\frac{1}{\pi}\int_\mathbf{D} \overline{g(z)}\operatorname{d}\!{}^2 z- \frac{1}{2\pi}\int_0^{2\pi} \overline{g(e^{\mathrm{i}\theta})}\operatorname{d}\!{} \theta\right) \\ &\qquad\times\left(\frac{1}{\pi}\int_\mathbf{D} f(z)\operatorname{d}\!{}^2 z-\frac{1}{2\pi}\int_0^{2\pi} f(e^{\mathrm{i}\theta})\operatorname{d}\!{}\theta\right). \end{split} \end{equation} More precisely, any finite moment of \(L_n(f)\) converges at a rate \(n^{-c(k)}\), for some small \(c(k)>0\), i.e. \begin{equation}\label{eq moment convergence} \E L_n(f)^k \overline{L_n(f)}^l = \E L(f)^k \overline{ L(f)}^l +\mathcal{O}\left( n^{-c(k+l)}\right). \end{equation} Moreover, the expectation in~\eqref{eq:linstatmainr} is given by \begin{equation}\label{eq:expv} \E \sum_{i=1}^n f(\sigma_i)=\frac{n}{\pi}\int_\mathbf{D} f(z)\operatorname{d}\!{}^2 z-\frac{\kappa_4}{\pi}\int_\mathbf{D} f(z)(2\abs{z}^2-1)\operatorname{d}\!{}^2 z+\mathcal{O}\left( n^{-c}\right) \end{equation} for some small constant \(c>0\). 
The implicit constants in the error terms in~\eqref{eq moment convergence}--\eqref{eq:expv} depend on the \(H^{2+\delta}\)-norm of \(f\) and \(C_p\) from~\eqref{eq:hmb}. \end{theorem} \begin{remark}[\(V_f\) is strictly positive]\label{remark Vf pos} The variance \(V_f=\E \abs{L(f)}^2\) in Theorem~\ref{theo:CLT} is strictly positive. Indeed, by the Cauchy-Schwarz inequality it follows that \[ \abs*{\frac{1}{\pi}\int_\mathbf{D} f(z)\, \operatorname{d}\!{} ^2z-\frac{1}{2\pi}\int_0^{2\pi} f(e^{\mathrm{i}\theta})\, \operatorname{d}\!{}\theta}^2\le\frac{1}{8\pi} \int_\mathbf{D} \abs*{ \nabla f}^2\, \operatorname{d}\!{}^2 z. \] Hence, since \(\kappa_4\ge -1\) in~\eqref{eq:cov}, this shows that \[ V_f\ge\frac{1}{8\pi} \int_\mathbf{D} \abs*{ \nabla f}^2\, \operatorname{d}\!{}^2 z + \frac{1}{2}\norm{ f}^2_{\dot{H}^{1/2}(\partial\mathbf{D})}>0. \] \end{remark} By polarisation, a multivariate Central Limit Theorem readily follows from Theorem~\ref{theo:CLT}: \begin{corollary}\label{cor:multCLT} Let \(X\) be an \(n\times n\) i.i.d.\ complex matrix satisfying Assumption~\ref{ass:1}, and let \(L_n(f)\) be defined in~\eqref{eq:linstatmainr}. For a fixed open bounded complex domain \(\Omega\) with \(\overline\mathbf{D}\subset\Omega\subset\mathbf{C}\), \(\delta>0\), \(p\in\mathbf{N} \) and for any finite collection of test functions \(f^{(1)},\dots,f^{(p)} \in H_0^{2+\delta}(\Omega)\) the vector \begin{equation}\label{eq:muclt} (L_n(f^{(1)}),\dots,L_n(f^{(p)}))\Longrightarrow (L(f^{(1)}),\dots,L(f^{(p)})), \end{equation} converges to a multivariate complex Gaussian of zero expectation \(\E L(f)=0\) and covariance \(\E L(f) \overline{L(g)}=\E L(f) L(\overline{g})=C(f,g) \) with \(C\) as in~\eqref{eq:cov}.
Moreover, for any mixed \(k\)-moments we have an effective convergence rate of order \(n^{-c(k)}\), as in~\eqref{eq moment convergence}. \end{corollary} \begin{remark}\label{rem:compo} We may compare Theorem~\ref{theo:CLT} with the previous results in~\cite[Theorem 1]{MR2361453} and~\cite[Theorem 1.1]{MR2294978}: \begin{enumerate}[label=(\roman*)] \item Note that for a single \(f\colon\mathbf{C} \to \mathbf{R} \) in the Ginibre case, i.e.\ \(\kappa_4=0\), Theorem~\ref{theo:CLT} implies~\cite[Theorem 1]{MR2361453} with \(\sigma_f^2+ \widetilde{\sigma}_f^2=C(f,f)\), using the notation therein and with \(C(f,f)\) defined in~\eqref{eq:cov}. \item If additionally \(f\) is complex analytic in a neighbourhood of \(\overline{\mathbf{D} }\), using the notation \(\partial:= \partial_z\), the expressions in~\eqref{eq:cov} and~\eqref{eq:expv} of Theorem~\ref{theo:CLT} simplify to \begin{equation}\label{eq:comr} \E \sum_{i=1}^n f(\sigma_i)=nf(0)+\mathcal{O}\left(n^{-\delta'}\right), \quad C\left(f,g\right)= \frac{1}{\pi}\int_\mathbf{D} \partial f(z) \overline{\partial g(z)}\operatorname{d}\!{}^2 z, \end{equation} where we used that for any \(f,g\) complex analytic in a neighbourhood of \(\overline{\mathbf{D} }\) we have \begin{equation} \label{eq:chan} \frac{1}{2\pi}\int_\mathbf{D} \braket{ \nabla g,\nabla f }\operatorname{d}\!{}^2 z=\frac{1}{\pi}\int_{\mathbf{D} } \partial f(z)\overline{\partial g(z)} \operatorname{d}\!{}^2 z=\sum_{k\in \mathbf{Z} } \abs{k} \widehat{f\restriction_{\partial \mathbf{D} }}(k) \overline{\widehat{g\restriction_{\partial \mathbf{D} }}(k)}, \end{equation} and that \[ \frac{1}{\pi}\int_\mathbf{D} f(z)\operatorname{d}\!{}^2 z=\frac{1}{2\pi}\int_0^{2\pi} f(e^{\mathrm{i}\theta})\operatorname{d}\!{} \theta=f(0). \] The second equality in~\eqref{eq:chan} follows by writing \(f\) and \(g\) in Fourier series. The result in~\eqref{eq:comr} exactly agrees with~\cite[Theorem 1.1]{MR2294978}.
\end{enumerate} \end{remark} \begin{remark}[Mesoscopic regime]\label{rem:meso} We formulated our result for \emph{macroscopic} linear statistics, i.e.\ for test functions \(f\) that are independent of \(n\). One may also consider \emph{mesoscopic} linear statistics, when \(f(\sigma)\) is replaced with \( \varphi( n^a(\sigma-z_0))\) for some fixed scale \(a>0\), reference point \(z_0\in \mathbf{D}\) and function \(\varphi \in H^{2+\delta}(\mathbf{C})\). Our proof can directly handle this situation for any small \(a\le 1/500\)\footnote{The upper bound \(1/500\) for \(a\) is a crude overestimate; we did not optimise it along the proof. The actual value of \(a\) comes from the fact that it has to be smaller than \(\omega_d\) (see Proposition~\ref{prop:indmr}) and from Lemma~\ref{lem:overb} (which is the main input of Proposition~\ref{prop:indmr}) it follows that \(\omega_d\le 1/100\).}, say, since all our error terms are effective as a small power of \(1/n\). For \(a>0\) the leading contribution to the variance \(V_f\) comes solely from the \(\norm{\nabla f}^2\) term in~\eqref{eq:cov}; in particular, the effect of the fourth cumulant is negligible. \end{remark} \subsection{Connection to the Gaussian free field}\label{sec GFF} It has been observed in~\cite{MR2361453} that for the special case \(\kappa_4=0\) the limiting random field \(L(f)\) can be viewed as a variant of the \emph{Gaussian free field}~\cite{MR2322706}. The Gaussian free field on some bounded domain \(\Omega\subset\mathbf{C}\) can formally be defined as a \emph{Gaussian Hilbert space} of random variables \(h(f)\) indexed by functions in the homogeneous Sobolev space \(f\in \dot H_0^1(\Omega)\) such that the map \(f\mapsto h(f)\) is linear and \begin{equation} \E h(f) = 0 ,\quad \E \overline{h(f)} h(g) = \braket{f,g}_{\dot H^1(\Omega)}.
\end{equation} Here for \(\Omega\subset\mathbf{C}\) we defined the homogeneous Sobolev space \(\dot H_0^1(\Omega)\) as the completion of smooth compactly supported functions \(C_c^\infty(\Omega)\) with respect to the semi-inner product \[\braket{g,f}_{\dot{H}^{1}(\Omega)}:= \braket{\nabla g,\nabla f}_{L^2(\Omega)}, \qquad \norm{f}_{\dot{H}^{1}(\Omega)}^2:= \braket{f,f}_{\dot{H}^{1}(\Omega)}.\] By the Poincar\'e inequality the space \(\dot H_0^1(\Omega)\) is in fact a Hilbert space and as a vector space coincides with the usual Sobolev space \(H_0^1(\Omega)\) with an equivalent norm but a different scalar product. Since \(\overline{\mathbf{D}}\subset\Omega\), the Sobolev space \(\dot H^1_0(\Omega)\) can be orthogonally decomposed as \[ \dot H_0^1(\Omega) = \dot H_{0}^1(\mathbf{D}) \oplus \dot H_{0}^1(\overline\mathbf{D}^c) \oplus \dot H_0^1((\partial\mathbf{D})^c)^\perp,\] where the complements are understood as the complements within \(\Omega\). The orthogonal complement \smash{\(\dot H_0^1((\partial\mathbf{D})^c)^\perp\)} is (see e.g.~\cite[Thm.~2.17]{MR2322706}) given by the closed subspace of functions which are harmonic in \(\mathbf{D}\cup\overline\mathbf{D}^c=(\partial\mathbf{D})^c\), i.e.\ away from the unit circle. For closed subspaces \(S\subset \dot H^1_0(\Omega)\) we denote the orthogonal projection onto \(S\) by \(P_S\). Then by orthogonality and conformal symmetry it follows~\cite[Lemma 3.1]{MR2361453}\footnote{In Eq.~(3.1), and in the last displayed equation of the proof of Lemma 3.1, factors of \(2\) are missing.
In the notation of~\cite{MR2361453} the correct equations read \[\frac{1}{2}\norm{P_{H}f}_{H^1(\mathbf{C})}^2 = \norm{P_H f}_{H^1(\mathbf{U})}^2 = 2\pi \norm{f}_{H^{1/2}(\partial\mathbf U)}^2 \quad \text{and}\quad \braket{g_1,g_2}_{H^1(\mathbf U)}=2\pi\braket{g_1,g_2}_{H^{1/2}(\partial\mathbf U)}.\]} that \begin{gather} \begin{aligned} \norm*{ P_{ \dot H_{0}^1(\mathbf{D})} f + P_{ \dot H_0^1((\partial\mathbf{D})^c)^\perp} f}_{ \dot H^1(\Omega)}^2 &= \norm{ f }^2_{ \dot H^1(\mathbf{D})} + \norm{ P_{ \dot H_0^1((\partial\mathbf{D})^c)^\perp} f}_{ \dot H^1(\mathbf{D})}^2 \\ &= \norm{ f }^2_{ \dot H^1(\mathbf{D})} + 2\pi \norm{f }_{\dot H^{1/2}(\partial\mathbf{D})}^2, \end{aligned}\label{eq f decomp}\raisetag{-4em} \end{gather} where we canonically identify \(f\in \dot H_0^1(\Omega)\) with its restriction to \(\mathbf{D}\). If \(\kappa_4=0\), then the rhs.\ of~\eqref{eq f decomp} is precisely \(4\pi C(f,f)\) and therefore \(L(f)\) can be interpreted~\cite[Corollary 1.2]{MR2361453} as the projection \begin{equation} \label{eq L h proj} L = (4\pi)^{-1/2} P h, \qquad P:= \Bigl(P_{ \dot H_{0}^1(\mathbf{D})} + P_{ \dot H_0^1((\partial\mathbf{D})^c)^\perp} \Bigr) \end{equation} of the Gaussian free field \(h\) onto \( \dot H_{0}^1(\mathbf{D}) \oplus \dot H_0^1((\partial\mathbf{D})^c)^\perp\), i.e.\ the Gaussian free field conditioned to be harmonic in \(\mathbf{D}^c\). 
The projection~\eqref{eq L h proj} is defined via duality, i.e.\ \((Ph)(f) := h(Pf)\) so that indeed \[ \E\abs*{\left[\frac{1}{\sqrt{4\pi}} P h\right](f) }^2 = \frac{1}{4\pi} \Bigl( \norm{ f }^2_{ \dot H^1(\mathbf{D})} + 2\pi \norm{f }_{\dot H^{1/2}(\partial\mathbf{D})}^2 \Bigr) = C(f,f) = \E \abs{L(f)}^2.\] If \(\kappa_4> 0\), then \(L\) can be interpreted as the sum \begin{equation}\label{eq L kappa4 pos} L = \frac{1}{\sqrt{4\pi}} Ph + \sqrt{\kappa_4} \Bigl(\braket{\cdot}_{\mathbf{D}}-\braket{\cdot}_{\partial\mathbf{D}}\Bigr) \Xi \end{equation} of the Gaussian free field \(Ph\) conditioned to be harmonic in \(\mathbf{D}^c\), and an independent standard real Gaussian \(\Xi\) multiplied by the difference of the averaging functionals \(\braket{\cdot}_\mathbf{D}\), \(\braket{\cdot}_{\partial\mathbf{D}}\) on \(\mathbf{D}\) and \(\partial\mathbf{D}\). For \(\kappa_4<0\) there seems to be no direct interpretation of \(L\) similar to~\eqref{eq L kappa4 pos}. \section{Proof strategy} For the proof of Theorem~\ref{theo:CLT} we study the \(2n \times 2n\) matrix \(H^z\) defined in~\eqref{eq:linz1}, that is the Hermitisation of \(X-z\). Denote by \(\{\lambda^z_{\pm i}\}_{i=1}^n\) the eigenvalues of \(H^z\) labelled in increasing order (we omit the index \(i=0\) for notational convenience). As a consequence of the block structure of \(H^z\) its spectrum is symmetric with respect to zero, i.e.\ \(\lambda^z_{-i}=-\lambda^z_i\) for any \(i\in [n]\). Let \(G(w)=G^z(w):= (H^z-w)^{-1}\) denote the resolvent of \(H^z\) with \(\eta=\Im w\ne 0\). It is well known (e.g.\ see~\cite{MR3770875, 1907.13631}) that \(G^z\) becomes approximately deterministic, as \(n\to \infty\), and its limit is expressed via the unique solution of the scalar equation \begin{equation}\label{eq m} - \frac{1}{m^z} = w + m^z -\frac{\abs{z}^2}{w + m^z}, \quad \eta\Im m^z(w) >0,\quad \eta=\Im w\ne 0, \end{equation} which is a special case of the \emph{matrix Dyson equation} (MDE), see e.g.~\cite{MR3916109}.
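As a quick consistency check (a standard observation, not needed later): at \(z=0\) equation~\eqref{eq m} reduces to \((m^0)^2+w\,m^0+1=0\), whose solution with the branch fixed by \(\eta\,\Im m^0>0\) is
\[
m^0(w)=\frac{-w+\sqrt{w^2-4}}{2},
\]
the Stieltjes transform of the semicircle law \(\rho^0(E)=\frac{1}{2\pi}\sqrt{4-E^2}\,\bm{1}_{\abs{E}\le 2}\). This matches the fact that the spectrum of \(H^0\) is the symmetrised singular value distribution of \(X\); moreover \(m^0(\mathrm{i}\eta)\to\mathrm{i}\) as \(\eta\to0\), so \(\Im m^0\sim 1\) there, in agreement with the asymptotics~\eqref{eq:expm} below at \(z=0\).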
We note that on the imaginary axis \(m^z(\mathrm{i}\eta)=\mathrm{i}\Im m^z(\mathrm{i}\eta)\). To find the limit of \(G^z\) we define a \(2n\times 2n\) block-matrix \begin{equation}\label{eq M} M^z(w) := \begin{pmatrix} m^z(w) & -z u^z(w) \\ - \overline z u^z(w) & m^z(w) \end{pmatrix}, \quad u^z(w):= \frac{m^z(w)}{w+ m^z(w)}, \end{equation} where each block is understood to be a scalar multiple of the \(n\times n\) identity matrix. We note that \(m,u,M\) are uniformly bounded in \(z,w\), i.e. \begin{equation}\label{eq M bound} \norm{M^z(w)}+\abs{m^z(w)}+\abs{u^z(w)}\lesssim 1. \end{equation} Indeed, taking the imaginary part of~\eqref{eq m} we have (dropping \(z, w\)) \begin{equation}\label{beta ast def} \beta_\ast \Im m = (1-\beta_\ast) \Im w,\qquad \beta_\ast := 1-\abs{m}^2-\abs{u}^2\abs{z}^2, \end{equation} which implies \begin{equation}\label{mubound} \abs{m}^2 + \abs{u}^2\abs{z}^2< 1, \end{equation} as \(\Im m\) and \(\Im w\) have the same sign. Note that~\eqref{mubound} saturates if \(\Im w\to 0\) and \(\Re w\) is in the support of the \emph{self-consistent density of states}, \(\rho^z(E):=\pi^{-1} \Im m^z(E+\mathrm{i} 0)\). Moreover,~\eqref{eq m} is equivalent to \(u= -m^2 + u^2 \abs{z}^2\), thus \(\abs{u}<1\) and~\eqref{eq M bound} follows. For our analysis the derivative \(m'(w)\) in the \(w\)-variable plays a central role and we note that by taking the derivative of~\eqref{eq m} we obtain \begin{equation}\label{beta def} m' = \frac{1-\beta}{\beta}, \qquad \beta:= 1-m^2-u^2 \abs{z}^2. 
\end{equation} On the imaginary axis, \(w=\mathrm{i}\eta\), where by taking the real part of~\eqref{eq m} it follows that \(\Re m(\mathrm{i}\eta)=0\), we can use~\cite[Eq.~(3.13)]{1907.13631} \begin{equation} \label{eq:expm} \Im m(\mathrm{i}\eta)\sim \begin{cases} \eta^{1/3}+\abs{1-\abs{z}^2}^{1/2} &\text{if}\quad \abs{z}\le 1, \\ \frac{\eta}{\abs{z}^2-1+\eta^{2/3}} &\text{if}\quad \abs{z}> 1 \end{cases},\qquad \eta\lesssim 1 \end{equation} to obtain asymptotics for \begin{equation} \label{eq:bbou} \beta_\ast \sim \frac{\eta}{\Im m}, \quad \beta = \beta_\ast + 2 (\Im m)^2, \qquad \eta\lesssim 1. \end{equation} The optimal local law from~\cite[Theorem 5.2]{MR3770875} and~\cite[Theorem 5.2]{1907.13631}\footnote{The local laws in~\cite[Theorem 5.2]{MR3770875} and~\cite[Theorem 5.2]{1907.13631} have been proven for \(\eta\ge \eta_f(z)\), with \(\eta_f(z)\) being the fluctuation scale defined in~\cite[Eq.~(5.2)]{1907.13631}, but they can easily be extended to any \(\eta>0\) by a standard argument, see~\cite[Appendix A]{MR4221653}.}, which for the application in Girko's formula~\eqref{girko} is only needed on the imaginary axis, asserts that \(G^z\approx M^z\) in the following sense: \begin{theorem}[Optimal local law for \(G\)]\label{theo:Gll} The resolvent \(G^z\) is very well approximated by the deterministic matrix \(M^z\) in the sense \begin{equation}\label{single local law} \abs{\braket{(G^z(\mathrm{i} \eta)-M^z(\mathrm{i} \eta))A}} \le \frac{\norm{ A} n^\xi }{n\eta}, \qquad \abs{\braket{\bm{x},(G^z(\mathrm{i} \eta)-M^z(\mathrm{i} \eta))\bm{y}}}\le \frac{\norm{\bm{x}}\norm{\bm{y}}n^\xi}{\sqrt{n\eta}}, \end{equation} with very high probability, uniformly for \(\eta>0\) and for any deterministic matrices and vectors \(A,\bm{x},\bm{y}\).
\end{theorem} The matrix \(H^z\) can be related to the linear statistics of eigenvalues \(\sigma_i\) of \(X\) via the precise (regularised) version of Girko's Hermitisation formula~\eqref{girko} \begin{gather} \begin{aligned} L_n(f)&=\frac{1}{4\pi} \int_\mathbf{C} \Delta f(z) \Big[\log\abs{\det (H^z-\mathrm{i} T)}-\E \log \abs{\det (H^z-\mathrm{i} T)}\Big]\operatorname{d}\!{}^2 z \\ &\quad-\frac{n}{2\pi \mathrm{i}} \int_\mathbf{C} \Delta f \left[\left(\int_0^{\eta_0}+\int_{\eta_0}^{\eta_c}+\int_{\eta_c}^T \right) \bigl[\braket{ G^z(\mathrm{i}\eta)-\E G^z(\mathrm{i}\eta)}\bigr]\operatorname{d}\!{} \eta\right] \operatorname{d}\!{}^2z \\ &=: J_T+I_0^{\eta_0}+I_{\eta_0}^{\eta_c}+I_{\eta_c}^T, \end{aligned}\label{eq:GirkosplitA}\raisetag{-5em} \end{gather} for \begin{equation}\label{eq:scales} \eta_0:= n^{-1-\delta_0}, \quad \eta_c:= n^{-1+\delta_1}, \end{equation} and some very large \(T>0\), say \(T=n^{100}\). Note that in~\eqref{eq:GirkosplitA} we used that \(\braket{ G^z(\mathrm{i}\eta)}=\mathrm{i}\braket{ \Im G^z(\mathrm{i}\eta)}\) by spectral symmetry. The test function \(f\colon\mathbf{C} \to \mathbf{C} \) is in \(H^{2+\delta}\) and compactly supported. \(J_T\) in~\eqref{eq:GirkosplitA} consists of the first line on the rhs., whilst \(I_0^{\eta_0},I_{\eta_0}^{\eta_c},I_{\eta_c}^T\) correspond to the three different \(\eta\)-regimes in the second line of the rhs.\ of~\eqref{eq:GirkosplitA}. \begin{remark} We remark that in~\eqref{eq:GirkosplitA} we split the \(\eta\)-regimes in a different way compared to~\cite[Eq.~(32)]{MR4221653}. We also use a different notation to identify the \(\eta\)-scales: here we use the notation \(J_T, I_0^{\eta_0}, I_{\eta_0}^{\eta_c}, I_{\eta_c}^T\), whilst in~\cite[Eq.~(32)]{MR4221653} we used the notation \(I_1, I_2, I_3, I_4\). \end{remark} The different regimes in~\eqref{eq:GirkosplitA} will be treated using different techniques.
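The spectral symmetry claim used here can be seen directly (an elementary check): pairing the symmetric eigenvalues \(\pm\lambda_i^z\) of \(H^z\) gives
\[
\frac{1}{\lambda-\mathrm{i}\eta}+\frac{1}{-\lambda-\mathrm{i}\eta}=\frac{2\mathrm{i}\eta}{\lambda^2+\eta^2},
\]
hence \(\Tr G^z(\mathrm{i}\eta)=\mathrm{i}\sum_{i=1}^{n}\frac{2\eta}{(\lambda_i^z)^2+\eta^2}\) is purely imaginary, i.e.\ \(\braket{ G^z(\mathrm{i}\eta)}=\mathrm{i}\braket{ \Im G^z(\mathrm{i}\eta)}\).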
More precisely, the integral \(J_T\) is easily estimated as in~\cite[Proof of Theorem 2.3]{1907.13631}, which uses similar computations to~\cite[Proof of Theorem 2.5]{MR3770875}. The term \(I_0^{\eta_0}\) is estimated using the fact that with high probability there are no eigenvalues in the regime \([0,\eta_0]\); this follows from~\cite[Theorem 3.2]{MR2684367}. Alternatively (see Remark~\ref{rem:2ass} and Remark~\ref{rem:altern} later), the contribution of the regime \(I_0^{\eta_0}\) can be estimated without resorting to the quite sophisticated proof of~\cite[Theorem 3.2]{MR2684367} if the entries of \(X\) satisfy the additional assumption~\eqref{eq:addass}. More precisely, this can be achieved using~\cite[Proposition 5.7]{MR3770875} (which follows by adapting the proof of~\cite[Lemma 4.12]{MR2908617}) to bound the very small regime \([0,n^{-l}]\), for some large \(l\in\mathbf{N} \), and then using~\cite[Corollary 4]{1908.01653} to bound the regime \([n^{-l},\eta_0]\). The main novel work is done for the integrals \(I_{\eta_0}^{\eta_c}\) and \(I_{\eta_c}^T\). The main contribution to \(L_n(f)\) comes from the mesoscopic regime in \(I_{\eta_c}^T\), which is analysed using the following Central Limit Theorem for resolvents. \begin{proposition}[CLT for resolvents]\label{prop:CLTresm} Let \(\epsilon,\xi>0\) be arbitrary.
Then for \(z_1,\dots,z_p\in\mathbf{C}\) and \(\eta_1,\dots,\eta_p\ge n^{\xi-1} \max_{i\ne j}\abs{z_i-z_j}^{-2}\), denoting the pairings on \([p]\) by \(\Pi_p\), we have \begin{gather} \begin{aligned} \E\prod_{i\in[p]} \braket{G_i-\E G_i} &= \sum_{P\in \Pi_p}\prod_{\{i,j\}\in P} \E \braket{G_i-\E G_i}\braket{G_j-\E G_j} + \mathcal{O}\left(\Psi\right) \\ &= \frac{1}{n^p}\sum_{P\in \Pi_p}\prod_{\{i,j\}\in P} \frac{V_{i,j}+\kappa_4 U_i U_j}{2}+ \mathcal{O}\left(\Psi\right), \end{aligned}\label{eq CLT resovlent}\raisetag{-5em} \end{gather} where \(G_i=G^{z_i}(\mathrm{i} \eta_i)\), \begin{equation}\label{eq psi error} \Psi:= \frac{n^\epsilon}{(n\eta_*)^{1/2}}\frac{1}{\min_{i\ne j}\abs{z_i-z_j}^4}\prod_{i\in[p]}\frac{1}{\abs{1-\abs{z_i}}n\eta_i}, \end{equation} \(\eta_*:= \min_i\eta_i\), and \(V_{i,j}=V_{i,j}(z_i,z_j,\eta_i,\eta_j)\) and \(U_i=U_i(z_i,\eta_i)\) are defined as \begin{equation} \label{eq:exder} \begin{split} V_{i,j}&:= \frac{1}{2}\partial_{\eta_i}\partial_{\eta_j} \log \bigl[ 1+(u_i u_j\abs{z_i}\abs{z_j})^2-m_i^2 m_j^2-2u_i u_j\Re z_i\overline{z_j}\bigr], \\ U_i&:= \frac{\mathrm{i}}{\sqrt{2}}\partial_{\eta_i} m_i^2, \end{split} \end{equation} with \(m_i=m^{z_i}(\mathrm{i}\eta_i)\) and \(u_i=u^{z_i}(\mathrm{i}\eta_i)\). Moreover, the expectation of \(G\) is given by \begin{equation}\label{prop clt exp} \braket{\E G}= \braket{M} - \frac{\mathrm{i}\kappa_4}{4n}\partial_\eta(m^4) + \mathcal{O}\Bigl(\frac{1}{\abs{1-\abs{z}}n^{3/2} (1+\eta)}+\frac{1}{\abs{1-\abs{z}}(n\eta)^2}\Bigr). \end{equation} \end{proposition} \begin{remark} In Section~\ref{sec:PCLT} we will apply this proposition in the regime where \(\min_{i\ne j}\abs{z_i-z_j}\) is quite large, i.e.\ it is at least \(n^{-\delta}\), for some small \(\delta>0\), hence we did not optimise the estimates for the opposite regime. 
However, using the more precise~\cite[Lemma 6.1]{MR4235475} instead of Lemma~\ref{lemma:betaM} within the proof, one can immediately strengthen Proposition~\ref{prop:CLTresm} on two accounts. First, the condition on \(\eta_*=\min\eta_i\) can be relaxed to \[\eta_*\gtrsim n^{\xi-1} \Bigl(\min_{i\ne j} \abs{z_i-z_j}^2 + \eta_*\Bigr)^{-1}.\] Second, the denominator \(\min_{i\ne j} \abs{z_i-z_j}^4\) in~\eqref{eq psi error} can be improved to \[\Bigl(\min_{i\ne j} \abs{z_i-z_j}^2 +\eta_*\Bigr)^2.\] \end{remark} In order to show that the contribution of \(I_{\eta_0}^{\eta_c}\) to \(L_n(f)\) is negligible, in Proposition~\ref{prop:indmr} we prove that \(\braket{ G^{z_1}(\mathrm{i}\eta_1)}\) and \(\braket{ G^{z_2}(\mathrm{i}\eta_2)}\) are asymptotically independent if \(z_1\) and \(z_2\) are sufficiently far from each other and well inside \(\mathbf{D}\), and \(\eta_0 \le \eta_1, \eta_2 \le \eta_c\). \begin{proposition}[Independence of resolvents with small imaginary part]\label{prop:indmr} Fix \(p\in \mathbf{N}\). For any sufficiently small \(\omega_d,\omega_h,\omega_f>0\) such that \(\omega_h\ll \omega_f\), there exist \(\omega,\widehat{\omega}, \delta_0,\delta_1>0\), satisfying \(\omega_h\ll \delta_m\ll \widehat{\omega}\ll \omega\ll\omega_f\) for \(m=0,1\), such that for any \(\abs{z_l}\le 1-n^{-\omega_h}\), \(\abs{z_l-z_m}\ge n^{-\omega_d}\), with \(l,m \in [p]\), \(l\ne m\), it holds \begin{equation} \label{eq:indtrlm} \E \prod_{l=1}^p \braket{ G^{z_l}(\mathrm{i}\eta_l)}=\prod_{l=1}^p\E \braket{ G^{z_l}(\mathrm{i}\eta_l)}+\mathcal{O}\left(\frac{n^{p(\omega_h+\delta_0)+\delta_1}}{n^{\omega}}+\frac{n^{\omega_f+3\delta_0}}{\sqrt{n}}\right), \end{equation} for any \(\eta_1,\dots,\eta_p\in [n^{-1-\delta_0},n^{-1+\delta_1}]\). \end{proposition} The paper is organised as follows: In Section~\ref{sec:PCLT} we conclude Theorem~\ref{theo:CLT} by combining Propositions~\ref{prop:CLTresm} and~\ref{prop:indmr}.
In Section~\ref{sec local law G2} we prove a local law for \(G_1A G_2\), for a deterministic matrix \(A\). In Section~\ref{sec:CLTres}, using the result in Section~\ref{sec local law G2} as an input, we prove Proposition~\ref{prop:CLTresm}, the Central Limit Theorem for resolvents. In Section~\ref{sec:IND} we prove Proposition~\ref{prop:indmr} using the fact that the correlation between the small eigenvalues of \(H^{z_1}\) and \(H^{z_2}\) is ``small'' if \(z_1\), \(z_2\) are far from each other, as a consequence of the local law in Section~\ref{sec local law G2}. \section{Central limit theorem for linear statistics}\label{sec:PCLT} In this section, using Propositions~\ref{prop:CLTresm}--\ref{prop:indmr} as inputs, we prove our main result, Theorem~\ref{theo:CLT}. \subsection{Preliminary reductions in Girko's formula} In this section we prove that the main contribution to \(L_n(f)\) in~\eqref{eq:GirkosplitA} comes from the regime \(I_{\eta_c}^T\). This is made rigorous in the following lemma. \begin{lemma}\label{lem:i4} Fix \(p\in \mathbf{N} \) and some bounded open \(\overline\mathbf{D}\subset\Omega\subset\mathbf{C}\), and for any \(l\in [p]\) let \(f^{(l)}\in H_0^{2+\delta}(\Omega)\). Then \begin{equation} \label{eq:allrsma} \E \prod_{l=1}^p L_n\bigl(f^{(l)}\bigr)= \E\prod_{l=1}^p I_{\eta_c}^T\bigl( f^{(l)}\bigr)+\mathcal{O}\left( n^{-c(p)}\right), \end{equation} for some small \(c(p)>0\), with \(L_n(f^{(l)})\) and \(I_{\eta_c}^T( f^{(l)})\) defined in~\eqref{eq:GirkosplitA}. The constant in \(\mathcal{O}(\cdot)\) may depend on \(p\) and on the \(L^2\)-norm of \(\Delta f^{(1)},\dots, \Delta f^{(p)}\).
\end{lemma} \begin{remark}\label{rem:2ass} In the remainder of this section we need to ensure that with high probability the matrix \(H^z\), defined in~\eqref{eq:linz1}, does not have eigenvalues very close to zero, i.e.\ that \begin{equation} \label{eq:exversmall} \Prob \left(\Spec(H^z)\cap \left[ -n^{-l},n^{-l}\right] \ne \emptyset\right)\le C_l n^{-l/2}, \end{equation} for any \(l\ge 2\) uniformly in \(\abs{z}\le 1\). The bound~\eqref{eq:exversmall} directly follows from~\cite[Theorem 3.2]{MR2684367}. Alternatively,~\eqref{eq:exversmall} follows by~\cite[Proposition 5.7]{MR3770875} (which follows adapting the proof of~\cite[Lemma 4.12]{MR2908617}), without resorting to the quite sophisticated proof of~\cite[Theorem 3.2]{MR2684367}, under the additional assumption that there exist \(\alpha, \beta>0\) such that the random variable \(\chi\) has a density \(g\colon\mathbf{C} \to \interval{co}{0,\infty}\) which satisfies \begin{equation} \label{eq:addass} g\in L^{1+\alpha}(\mathbf{C} ), \qquad \norm{ g}_{L^{1+\alpha}(\mathbf{C} )}\le n^\beta. \end{equation} \end{remark} We start by proving \emph{a priori} bounds for the integrals defined in~\eqref{eq:GirkosplitA}. \begin{lemma}\label{lem:aprior} Fix some bounded open \(\overline\mathbf{D}\subset\Omega\subset \mathbf{C}\) and let \(f\in H_0^{2+\delta}(\Omega)\). Then for any \(\xi>0\) the bounds \begin{equation}\label{eq:apb} \abs{J_T}\le \frac{n^{1+\xi}\norm{ \Delta f}_{L^1(\Omega)}}{T^2}, \qquad \abs*{I_0^{\eta_0}}+\abs*{I_{\eta_0}^{\eta_c}}+\abs{I_{\eta_c}^T}\le n^\xi \norm{ \Delta f}_{L^2(\Omega)} \abs{\Omega}^{1/2}, \end{equation} hold with very high probability, where \(\abs{\Omega}\) denotes the Lebesgue measure of the set \(\Omega\). \end{lemma} \begin{proof} The proof of the bound for \(J_T\) is identical to~\cite[Proof of Theorem 2.3]{1907.13631} and is therefore omitted. The bound for \(I_0^{\eta_0}, I_{\eta_0}^{\eta_c}, I_{\eta_c}^T\) relies on the local law of Theorem~\ref{theo:Gll}.
More precisely, by Theorem~\ref{theo:Gll} and~\eqref{prop clt exp} of Proposition~\ref{prop:CLTresm} it follows that \begin{equation} \label{eq:llexp} \abs*{\braket{ G^z-\E G^z}}\le \frac{n^\xi}{n\eta}, \end{equation} with very high probability uniformly in \(\eta>0\) and \(\abs{z}\le C\) for some large \(C>0\). First of all we remove the regime \([0,n^{-l}]\) by~\cite[Theorem 3.2]{MR2684367}, i.e.\ its contribution is smaller than \(n^{-l}\), for some large \(l\in\mathbf{N} \), with very high probability. Alternatively, this can be achieved by~\cite[Proposition 5.7]{MR3770875} under the additional assumption~\eqref{eq:addass} in Remark~\ref{rem:2ass}. Then for any \(a,b\ge n^{-l}\), by~\eqref{eq:llexp}, we have \begin{equation} \label{eq:impbbfin} n\abs*{\int_\Omega \operatorname{d}\!{}^2 z \Delta f(z)\int_a^b \operatorname{d}\!{} \eta \bigl[\braket{ G(\mathrm{i}\eta) -\E G(\mathrm{i}\eta)}\bigr] }\lesssim n^\xi \abs{\Omega}^{1/2} \norm{ \Delta f}_{L^2(\Omega)}, \end{equation} with very high probability. This concludes the proof of the second bound in~\eqref{eq:apb}. \end{proof} We have a better bound for \(I_0^{\eta_0}\), \(I_{\eta_0}^{\eta_c}\) which holds true in expectation. \begin{lemma}\label{lem:bbexp} Fix some bounded open \(\overline\mathbf{D}\subset\Omega\subset \mathbf{C}\) and let \(f\in H_0^{2+\delta}(\Omega)\). Then there exists \(\delta'>0\) such that \begin{equation} \label{eq:impexb} \E \abs*{I_0^{\eta_0}}+\E \abs*{I_{\eta_0}^{\eta_c}}\le n^{-\delta'}\norm{ \Delta f}_{L^2(\Omega)}. \end{equation} \end{lemma} \begin{proof}[Proof of Lemma~\ref{lem:i4}] Lemma~\ref{lem:i4} readily follows (see e.g.~\cite[Lemma 4.2]{MR4221653}) combining Lemma~\ref{lem:aprior} and Lemma~\ref{lem:bbexp}. \end{proof} We conclude this section with the proof of Lemma~\ref{lem:bbexp}. 
\begin{proof}[Proof of Lemma~\ref{lem:bbexp}] The bound for \(\E \abs*{I_0^{\eta_0}}\) immediately follows by~\cite[Theorem 3.2]{MR2684367} (see also Remark~\ref{rem:altern} for an alternative proof). By the local law outside the spectrum, given in the second part of~\cite[Theorem 5.2]{1907.13631}, it follows that for \(0<\gamma<1/2\) we have \begin{equation} \label{eq:betll} \abs*{\braket{ G^z(\mathrm{i}\eta)- M^z(\mathrm{i} \eta)}} \le \frac{n^\xi}{n^{1+\gamma/3}\eta}, \end{equation} uniformly for all \(\abs{z}^2\ge 1+(n^\gamma \eta)^{2/3}+n^{(\gamma-1)/2}\), \(\eta>0\), and \(\abs{z}\le 1+\tau^*\), for some \(\tau^*\sim 1\). We remark that the local law~\eqref{eq:betll} was initially proven only for \(\eta\) above the fluctuation scale \(\eta_f(z)\), which is defined in~\cite[Eq.~(5.2)]{1907.13631}, but it can easily be extended to any \(\eta>0\) using the monotonicity of the function \(\eta \mapsto \eta \braket{ \Im G(\mathrm{i}\eta)}\) and the fact that \begin{equation} \label{eq:detb} \abs*{n^\xi \eta_f(z)\braket{ M^z(\mathrm{i} n^\xi\eta_f(z)) }} +\abs*{ \eta \braket{ M^z(\mathrm{i} \eta) }}\lesssim n^{2\xi} \frac{\eta_f(z)^2}{\abs{z}^2-1}, \end{equation} uniformly in \(\eta>0\), since \(\Im M^z(\mathrm{i}\eta)=\Im m^z(\mathrm{i}\eta) I\) by~\eqref{eq M}, with \(I\) the \(2n\times 2n\) identity matrix, and \(\Im m^z(\mathrm{i}\eta)\le \eta(\abs{z}^2-1)^{-1}\) by~\cite[Eq.~(3.13)]{1907.13631}. Note that we included the additional term \(n^{(\gamma-1)/2}\) in the lower bound for \(\abs{z}^2\), compared with~\cite[Theorem 5.2]{1907.13631}, in order to ensure that the rhs.\ in~\eqref{eq:detb}, divided by \(\eta\), is smaller than the error term in~\eqref{eq:betll}.
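For the reader's convenience we spell out the monotonicity step just mentioned; this is only a sketch, in which constants and the precise \(n^\xi\)-factors are not tracked. By the spectral decomposition of \(H^z\), \[ \eta \braket{ \Im G^z(\mathrm{i}\eta)}=\frac{1}{2n}\sum_{\lambda\in\Spec(H^z)}\frac{\eta^2}{\lambda^2+\eta^2}, \] which is monotone increasing in \(\eta\). Hence, setting \(\eta':= n^\xi\eta_f(z)\), for any \(0<\eta\le \eta'\) we obtain \[ \eta \braket{ \Im G^z(\mathrm{i}\eta)}\le \eta' \braket{ \Im G^z(\mathrm{i}\eta')}\le \eta' \braket{ \Im M^z(\mathrm{i}\eta')}+\frac{n^\xi}{n^{1+\gamma/3}}, \] where the last inequality uses~\eqref{eq:betll} at \(\eta'\), a scale at which it is available; together with~\eqref{eq:detb} this extends~\eqref{eq:betll} to all \(\eta>0\).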
Next, in order to bound \(\E \abs{I_{\eta_0}^{\eta_c}}\), we consider \begin{align} \label{eq:bound2} \E &\abs{I_{\eta_0}^{\eta_c}}^2={}-\frac{n^2}{4\pi^2}\int_\mathbf{C} \operatorname{d}\!{}^2 z_1 (\Delta f)(z_1)\int_\mathbf{C} \operatorname{d}\!{}^2 z_2 (\Delta\overline{f})(z_2) \int_{\eta_0}^{\eta_c} \operatorname{d}\!{} \eta_1 \int_{\eta_0}^{\eta_c} \operatorname{d}\!{} \eta_2 \, F \\ F&=F(z_1,z_2,\eta_1,\eta_2):={} \E \Big[\braket{ G^{z_1}(\mathrm{i}\eta_1) -\E G^{z_1}(\mathrm{i}\eta_1)}\braket{ G^{z_2}(\mathrm{i}\eta_2) -\E G^{z_2}(\mathrm{i}\eta_2)}\Big]. \end{align} By~\eqref{eq:impbbfin} it follows that the regimes \(1-n^{-2\omega_h}\le \abs{z_l}^2 \le 1+n^{-2\omega_h}\), with \(l=1,2\), and \(\abs{z_1-z_2}\le n^{-\omega_d}\) in~\eqref{eq:bound2}, with \(\omega_h, \omega_d\) defined in Proposition~\ref{prop:indmr}, are bounded by \(n^{-2\omega_h+\xi}\) and \(n^{-\omega_d/2+\xi}\), respectively. Moreover, the contribution from the regime \(\abs{z_l}\ge 1+n^{-2\omega_h}\) is also bounded by \(n^{-2\omega_h+\xi}\) using~\eqref{eq:betll} with \(\gamma\le 1-3\omega_h-2\delta_1\), say \(\gamma=1/4\). After collecting these error terms we conclude that \begin{equation} \label{eq:bound3} \begin{split} \E \abs{I_{\eta_0}^{\eta_c}}^2&=\frac{n^2}{4\pi^2}\int_{\abs{z_1}\le 1-n^{-\omega_h}} \operatorname{d}\!{}^2 z_1 \Delta f(z_1)\int_{\substack{\abs{z_2}\le 1-n^{-\omega_h}, \\ \abs{z_2-z_1}\ge n^{-\omega_d}}} \operatorname{d}\!{}^2 z_2 \Delta \overline{f(z_2)} \\ &\qquad\quad\times \int_{\eta_0}^{\eta_c} \operatorname{d}\!{} \eta_1 \int_{\eta_0}^{\eta_c} \operatorname{d}\!{} \eta_2 F+\mathcal{O}\left(\frac{n^\xi}{n^{\omega_h}}+\frac{n^\xi}{n^{\omega_d/2}} \right). \end{split} \end{equation} We remark that the implicit constant in \(\mathcal{O}(\cdot)\) in~\eqref{eq:bound3} and in the remainder of the proof may depend on \(\norm{ \Delta f}_{L^2(\Omega)}\). 
Then by Proposition~\ref{prop:indmr} it follows that \begin{equation} \label{eq:yetao} \E \Big[ \braket{ G^{z_1}(\mathrm{i}\eta_1) -\E G^{z_1}(\mathrm{i}\eta_1)}\braket{ G^{z_2}(\mathrm{i}\eta_2) -\E G^{z_2}(\mathrm{i}\eta_2)}\Big]=\mathcal{O}\left(\frac{n^{c(\omega_h+\delta_0)+\delta_1}}{n^{\omega}}\right), \end{equation} with \(\omega_h\ll\delta_0\ll \omega\). Hence, plugging~\eqref{eq:yetao} into~\eqref{eq:bound3} it follows that \begin{equation} \label{eq:yetao2} \E \abs{I_{\eta_0}^{\eta_c}}^2=\mathcal{O}\left(\frac{n^{c(\omega_h+\delta_0)+2\delta_1}}{n^{\omega}}\right). \end{equation} This concludes the proof, under the assumption \(\omega_h\ll\delta_m\ll \omega\), \(m=0,1\), of Proposition~\ref{prop:indmr} (see Section~\ref{rem:s} later for a summary of all the scales involved in the proof of Proposition~\ref{prop:indmr}). \end{proof} \begin{remark}[Alternative proof of the bound for \(\E \abs{I_0^{\eta_0}}\)]\label{rem:altern} Under the additional assumption~\eqref{eq:addass} in Remark~\ref{rem:2ass}, we can prove the same bound for \(\E \abs{I_0^{\eta_0}}\) in~\eqref{eq:impexb} without relying on the fairly sophisticated proof of~\cite[Theorem 3.2]{MR2684367}. In order to bound \(\E \abs{I_0^{\eta_0}}\) we first remove the regime \(\eta\in [0,n^{-l}]\) as in the proof of Lemma~\ref{lem:aprior}. Then, using~\eqref{eq:impbbfin} to bound the integral over the regime \(\abs{1-\abs{z}^2}\le n^{-2\omega_h}\), with \(\omega_h\) defined in Proposition~\ref{prop:indmr}, and~\eqref{eq:betll} for the regime \(\abs{z}^2\ge 1+n^{-2\omega_h}\), we conclude that \begin{equation} \label{eq:bound1} \E \abs{I_0^{\eta_0}} =\E \frac{n}{2\pi}\int_{\abs{z}\le 1-n^{-2\omega_h}} \abs*{ \Delta f} \abs*{\int_0^{\eta_0} \braket{ G^z -\E G^z}\operatorname{d}\!{} \eta }\operatorname{d}\!{}^2 z+ \mathcal{O}\left( \frac{n^{\xi}}{n^{\omega_h}}\right).
\end{equation} By universality of the smallest eigenvalue of \(H^z\) (which directly follows by Proposition~\ref{pro:ciala} for any fixed \(\abs{z}^2\le 1-n^{-2\omega_h}\); see also~\cite{MR3916329}), and the bound in~\cite[Corollary 2.4]{1908.01653} we have that \[ \Prob \left(\lambda_1^z\le \eta_0 \right)\le n^{-\delta_0/4}, \] with \(\eta_0=n^{-1-\delta_0}\) and \(\omega_h\ll \delta_0\). This concludes the bound in~\eqref{eq:impexb} for \(I_0^{\eta_0}\), following exactly the same proof as~\cite[Lemma 4.6]{MR4221653} and using~\eqref{eq:bound1}. We warn the reader that in~\cite[Corollary 2.4]{1908.01653} \(\lambda_1\) denotes the smallest eigenvalue of \((X-z)(X-z)^*\), whilst here \(\lambda_1^z\) denotes the smallest (positive) eigenvalue of \(H^z\). \end{remark} \subsection{Computation of the expectation in Theorem~\ref{theo:CLT}}\label{sec:exexex} In this section we compute the expectation \(\E \sum_i f(\sigma_i)\) in~\eqref{eq:expv} using the computation of \(\E \braket{ G }\) in~\eqref{prop clt exp} of Proposition~\ref{prop:CLTresm} as an input. More precisely, we prove the following lemma. Note that~\eqref{eq:exval} proves~\eqref{eq:expv} in Theorem~\ref{theo:CLT}. \begin{lemma}\label{lem:compe} Fix some bounded open \(\overline\mathbf{D}\subset\Omega\subset \mathbf{C}\), let \(f\in H_0^{2+\delta}(\Omega)\), and set \(\kappa_4:= n^2[\E \abs{x_{11}}^4-2(\E \abs{x_{11}}^2)^2]\). Then \begin{equation} \label{eq:exval} \E \sum_{i=1}^n f(\sigma_i)=\frac{n}{\pi}\int_\mathbf{D} f(z)\operatorname{d}\!{}^2 z-\frac{\kappa_4}{\pi}\int_\mathbf{D} f(z)(2\abs{z}^2-1)\operatorname{d}\!{}^2 z+\mathcal{O}\left( n^{-\delta'}\right), \end{equation} for some small \(\delta'>0\).
\end{lemma} \begin{proof} By the circular law (e.g.\ see~\cite[Eq.~(2.7)]{MR3770875},~\cite[Theorem 2.3]{1907.13631}) it immediately follows that \begin{equation}\label{eq:claw} \sum_{i=1}^n f(\sigma_i)-\frac{n}{\pi}\int_\mathbf{D} f(z)\operatorname{d}\!{}^2 z=\mathcal{O}(n^\xi), \end{equation} with very high probability. Hence, in order to prove~\eqref{eq:exval} we need to identify the sub-leading term in the expectation of~\eqref{eq:claw}, which is not present in the Ginibre case since \(\kappa_4=0\). First of all by Lemma~\ref{lem:i4} it follows that the main contribution in Girko's formula comes from \(I_{\eta_c}^T\). Since the error term in~\eqref{prop clt exp} is not affordable for \(1-\abs{z}\) very close to zero, we remove the regime \(\abs{1-\abs{z}^2}\le n^{-2\nu}\) in the \(z\)-integral by~\eqref{eq:impbbfin} at the expense of an error term \(n^{-\nu+\xi}\), for some very small \(\nu>0\) we will choose shortly. The regime \(\abs{1-\abs{z}^2}\ge n^{-2\nu}\), instead, is computed using~\eqref{prop clt exp}. Hence, collecting these error terms we conclude that there exists \(\delta'>0\) such that \begin{gather} \begin{aligned} &\E \sum_i f(\sigma_i)-\frac{n}{\pi}\int_\mathbf{D} f(z) \operatorname{d}\!{}^2 z \\ &\qquad = -\frac{n}{2\pi\mathrm{i}} \int_{\abs{1-\abs{z}^2}\ge n^{-2\nu}} \operatorname{d}\!{}^2 z \Delta f \int_{\eta_c}^T \operatorname{d}\!{} \eta\, \E\braket{G-M}+\mathcal{O}\left( n^{-\delta'}+ n^{-\nu+\xi} \right) \\ &\qquad =\frac{\kappa_4}{8\pi} \int \operatorname{d}\!{}^2 z \Delta f \int_0^\infty \operatorname{d}\!{} \eta\, \partial_\eta(m^4)+\mathcal{O}\left( n^{-\delta'} + \frac{n^{2\nu}}{n\eta_c}+n^{2\nu}\eta_c+n^{-\nu+\xi}\right) \\ & \qquad=- \frac{\kappa_4}{\pi} \int_\mathbf{D} f(z) (2\abs{z}^2-1)\operatorname{d}\!{}^2 z+\mathcal{O}\left( n^{-\delta'} + \frac{n^{2\nu}}{n\eta_c}+n^{2\nu}\eta_c+n^{-\nu+\xi}\right), \end{aligned}\label{eq:med}\raisetag{-6.5em} \end{gather} with \(\eta_c=n^{-1+\delta_1}\) defined in~\eqref{eq:scales}. 
To go from the second to the third line we used~\eqref{prop clt exp}, and then we added back the regimes \(\eta\in [0,\eta_c]\) and \(\eta\ge T\), and the regime \(\abs{1-\abs{z}^2}\le n^{-2\nu}\) in the \(z\)-integration at the price of a negligible error. In particular, in the \(\eta\)-integration we used that \(\abs{\partial_\eta(m^4)}\lesssim n^{2\nu}\) in the regime \(\eta\in [0,\eta_c]\), by~\eqref{beta def}--\eqref{eq:bbou}, and that using \(\abs{m}\le \eta^{-1}\) we have \(\abs{\partial_\eta(m^4)}\lesssim \eta^{-5}\) by~\eqref{beta def}, in the regime \(\eta\ge T\). Choosing \(\nu, \delta'>0\) so that \(\nu\ll\delta_1 \ll \delta'\) we conclude the proof of~\eqref{eq:exval}. \end{proof} \subsection{Computation of the second and higher moments in Theorem~\ref{theo:CLT}}\label{sec variance computation} In this section we conclude the proof of Theorem~\ref{theo:CLT}, i.e.\ we compute \begin{gather} \begin{aligned} \E \prod_{i\in[p]} L_n(f^{(i)}) &= \E\prod_{i\in[p]} I_{\eta_c}^T(f^{(i)}) + \mathcal{O}(n^{-c(p)}) \\ &= \E\prod_{i\in[p]} \biggl[-\frac{n}{2\pi \mathrm{i}}\int_\mathbf{C} \Delta f^{(i)}(z) \int_{\eta_c}^T \braket{G^z(\mathrm{i}\eta)-\E G^z(\mathrm{i}\eta)} \operatorname{d}\!{} \eta\operatorname{d}\!{}^2 z\biggr] \\ &\quad+ \mathcal{O}(n^{-c(p)}) \end{aligned}\label{eq: prod clt}\raisetag{-5em} \end{gather} to leading order using~\eqref{eq CLT resovlent}. \begin{lemma}\label{lem:b} Let \(f^{(i)}\) be as in Theorem~\ref{theo:CLT} and set \(f^{(i)}=f\) or \(f^{(i)}=\overline{f}\) for any \(i\in [p]\), and recall that \(\Pi_p\) denotes the set of pairings on \([p]\). 
Then \begin{equation} \label{eq:bast} \begin{split} &\E\prod_{i\in[p]} \biggl[-\frac{n}{2\pi\mathrm{i}}\int_\mathbf{C} \Delta f^{(i)}(z) \int_{\eta_c}^T \braket{G^{z}(\mathrm{i}\eta)-\E G^z(\mathrm{i}\eta)} \operatorname{d}\!{} \eta\operatorname{d}\!{}^2 z\biggr] \\ &= \sum_{P\in \Pi_p} \prod_{\{i,j\}\in P} \biggl[-\int_\mathbf{C} \operatorname{d}\!{}^2 z_i \Delta f^{(i)} \int_\mathbf{C}\operatorname{d}\!{}^2 z_j\Delta f^{(j)} \int_{0}^\infty \operatorname{d}\!{}\eta_i\int_{0}^\infty\operatorname{d}\!{}\eta_j \frac{V_{i,j}+\kappa_4 U_i U_j}{8\pi^2}\biggr] \\ &\qquad + \mathcal{O}(n^{-c(p)}), \end{split} \end{equation} for some small \(c(p)>0\), where \(V_{i,j}\) and \(U_i\) are as in~\eqref{eq:exder}. The implicit constant in \(\mathcal{O}(\cdot)\) may depend on \(p\). \end{lemma} \begin{proof} In order to prove the lemma we have to check that the integral of the error term in~\eqref{eq CLT resovlent} is at most of size \(n^{-c(p)}\), and that the integral of \(V_{i,j}+\kappa_4 U_i U_j\) for \(\eta_i\le \eta_c\) or \(\eta_i\ge T\) is similarly negligible. In the remainder of the proof we assume that \(p\) is even, since the terms with \(p\) odd are of lower order by~\eqref{eq CLT resovlent}. 
Note that, by the explicit form of \(m_i, u_i\) in~\eqref{eq m}--\eqref{eq M}, the definition of \(V_{i,j}\), \(U_i, U_j\) in~\eqref{eq:exder}, and the identity \(-m_i^2+\abs{z_i}^2u_i^2=u_i\), we have \[ V_{i,j}=\frac{1}{2}\partial_{\eta_i}\partial_{\eta_j}\log \left(1-u_i u_j\Big[ 1-\abs{z_i-z_j}^2+(1-u_i)\abs{z_i}^2+(1-u_j)\abs{z_j}^2\Big]\right), \] and, using \(\abs{\partial_{\eta_i} m_i}\le [\Im m^{z_i}(\mathrm{i}\eta_i)+\eta_i]^{-2}\) by~\eqref{beta def}--\eqref{eq:bbou}, we conclude (see also~\eqref{eq:bbst1}--\eqref{eq:bbst2} later) \begin{equation} \label{eq:VWbound} \abs{V_{i,j}}\lesssim \frac{[(\Im m^{z_i}(\mathrm{i}\eta_i)+\eta_i)(\Im m^{z_j}(\mathrm{i}\eta_j)+\eta_j)]^{-2}}{[\abs{z_i-z_j}^2+(\eta_i+\eta_j)(\min \{\Im m^{z_i}, \Im m^{z_j} \}^2)]^2},\,\, \abs{U_i}\lesssim \frac{1}{\Im m^{z_i}(\mathrm{i}\eta_i)^2+\eta_i^3}. \end{equation} Using the bound~\eqref{eq:impbbfin} to remove the regime \(Z_i:= \set{ \abs{1-\abs{z_i}^2}\le n^{-2\nu}}\) for any \(i\in[p]\), for some small \(\nu>0\), we conclude that the lhs.\ of~\eqref{eq:bast} is equal to \begin{equation} \label{eq:wickprod} \frac{(-n)^p}{(2\pi \mathrm{i})^p}\prod_{i\in [p]}\int_{Z_i^c}\operatorname{d}\!{}^2 z_i \Delta f^{(i)}(z_i) \E \prod_{i\in[p]} \int_{\eta_c}^T \braket{G^{z_i}(\mathrm{i}\eta_i)-\E G^{z_i}(\mathrm{i}\eta_i)} \operatorname{d}\!{} \eta_i + \mathcal{O}\left(\frac{n^{p\xi}}{n^\nu}\right), \end{equation} for any very small \(\xi>0\).
Additionally, since the error term \(\Psi\) defined in~\eqref{eq psi error} behaves badly for small \(\abs{z_i-z_j}\), we remove the regime \[ \widehat{Z}_i:= \bigcup_{j<i}\set{ z_j : \,\abs{z_i-z_j}\le n^{-2\nu}} \] in each \(z_i\)-integral in~\eqref{eq:wickprod} using~\eqref{eq:impbbfin}, and, denoting \(f^{(i)}=f^{(i)}(z_i)\), get \begin{equation} \label{eq:wickprod2} \frac{(-n)^p}{(2\pi \mathrm{i})^p} \prod_{i\in[p]}\int_{Z_i^c \cap \widehat{Z}_i^c}\operatorname{d}\!{}^2 z_i \Delta f^{(i)}\E \prod_{i\in[p]} \int_{\eta_c}^T \braket{G^{z_i}(\mathrm{i}\eta_i)-\E G^{z_i}(\mathrm{i}\eta_i)} \operatorname{d}\!{} \eta_i+ \mathcal{O}\left(\frac{n^{p\xi}}{n^\nu}\right). \end{equation} Plugging~\eqref{eq CLT resovlent} into~\eqref{eq:wickprod2}, and using the first bound in~\eqref{eq:apb} to remove the regime \(\eta_i\ge T\) for the lhs.\ of~\eqref{eq:bast} we get \begin{gather} \begin{aligned} &\frac{1}{(2\pi \mathrm{i})^p} \prod_{i\in[p]}\int_{Z_i^c \cap \widehat{Z}_i^c}\operatorname{d}\!{}^2 z_i \Delta f^{(i)}\sum_{P\in \Pi_p} \prod_{\{i,j\}\in P} \int_0^\infty\int_0^\infty -\frac{V_{i,j}+\kappa_4 U_i U_j}{8\pi^2} \operatorname{d}\!{} \eta_j\operatorname{d}\!{} \eta_i\\ &\qquad\qquad + \mathcal{O}\left(\frac{n^{p\xi}}{n^\nu}+\frac{n^{20\nu p+\delta_1}}{n}+\frac{n^{\xi p+2p\nu}}{n^{\delta_1/2}}\right), \end{aligned}\label{eq:wickprod3}\raisetag{-5em} \end{gather} where \(\eta_c=n^{-1+\delta_1}\), the second last error term comes from adding back the regimes \(\eta_i\in [0,\eta_c]\) using that \[ \abs{V_{i,j}}\le \frac{n^{20\nu}}{(1+\eta_i^2)(1+\eta_j^2)}, \qquad \abs{U_i}\le \frac{n^{4\nu}}{1+\eta_i^3}, \] for \(z_i\in Z_i^c\cap \widehat{Z}_i^c\) and \(z_j\in Z_j^c\cap \widehat{Z}_j^c\) by~\eqref{eq:VWbound}. The last error term in~\eqref{eq:wickprod3} comes from the integral of \(\Psi\), with \(\Psi\) defined in~\eqref{eq psi error}. Finally, we perform the \(\eta\)-integrations using the explicit formulas~\eqref{eq:expintV} and~\eqref{eq:expintW} below. 
After that, we add back the domains \(Z_i\) and \(\widehat{Z}_i\) for \(i\in [p]\) at a negligible error, since these domains have volume of order \(n^{-2\nu}\), \(\Delta f^{(i)}\in L^2\), and the logarithmic singularities from~\eqref{eq:expintV} are integrable. This concludes the proof of~\eqref{eq:bast}, choosing \(\nu\) so that \(\nu\ll \delta_1\ll 1\). \end{proof} In the next three sub-sections we compute the integrals in~\eqref{eq:bast} for all pairs \(i,j\). To make our notation simpler we use only the indices \(1,2\), i.e.\ we compute the integral of \(V_{1,2}\) and \(U_1U_2\). \subsubsection{Computation of the \((\eta_1,\eta_2)\)-integrals} Using the relations in~\eqref{eq:exder} we explicitly compute the \((\eta_1,\eta_2)\)-integral of \(V_{1,2}\): \begin{gather} \begin{aligned} &-\int_0^\infty \int_0^\infty V_{1,2}\, \operatorname{d}\!{}\eta_1 \operatorname{d}\!{} \eta_2 =-\frac{1}{2}\log A\restriction_{\substack{\eta_1=0,\\ \eta_2=0}} \\ &\quad=\Theta(z_1,z_2):= \frac{1}{2}\begin{cases} -\log \abs{z_1-z_2}^2,& \abs{z_1},\abs{z_2} \le 1, \\ \log \abs{z_l}^{2}-\log\abs{z_1-z_2}^2,& \abs{z_m} \le 1, \abs{z_l}>1, \\ \log \abs{z_1 z_2}^{2}-\log\abs{1-z_1\overline{z}_2}^2, & \abs{z_1}, \abs{z_2}>1, \end{cases} \end{aligned}\label{eq:expintV}\raisetag{-4em} \end{gather} where in the second case \(\{l,m\}=\{1,2\}\) with \(l\) the index such that \(\abs{z_l}>1\), and \(A(\eta_1,\eta_2,z_1,z_2)\) is defined by \[ A(\eta_1,\eta_2,z_1,z_2):=1+(u_1u_2\abs{z_1}\abs{z_2})^2-m_1^2 m_2^2-2u_1u_2\Re z_1\overline{z}_2. \] Then the \(\eta_i\)-integral of \(U_i\), for \(i\in\{1,2\}\), is given by \begin{equation}\label{eq:expintW} \int_0^\infty U_i\, \operatorname{d}\!{} \eta_i=\frac{\mathrm{i}}{\sqrt{2}}(1-\abs{z_i}^2). \end{equation} Before proceeding we rewrite \(\Theta(z_1,z_2)\) as \[ \begin{split} 2\Theta(z_1,z_2)&=-\log\abs{z_1-z_2}^2+\log\abs{z_1}^{2} \bm1(\abs{z_1}> 1)+\log\abs{z_2}^{2} \bm1(\abs{z_2}> 1)\\ &\quad +\left[\log\abs{z_1-z_2}^2-\log \abs{1-z_1\overline{z}_2}^2 \right]\bm1(\abs{z_1},\abs{z_2}> 1).
\end{split} \] In the remainder of this section we use the notations \[ \operatorname{d}\!{} z:= \operatorname{d}\!{} x+\mathrm{i} \operatorname{d}\!{} y, \quad \operatorname{d}\!{}\overline{z}:=\operatorname{d}\!{} x-\mathrm{i} \operatorname{d}\!{} y, \quad \partial_z :=\frac{\partial_x-\mathrm{i}\partial_y}{2}, \quad \partial_{\overline{z}} :=\frac{\partial_x+\mathrm{i}\partial_y}{2}, \] and \(\partial_l:= \partial_{z_l}\), \(\overline{\partial}_l:=\partial_{\overline{z}_l}\). With this notation \(\Delta_{z_l}=4\partial_{z_l}\partial_{\overline{z}_l}\). We split the computation of the leading term in the rhs.\ of~\eqref{eq:bast} into two parts: the integral of \(V_{1,2}\), and the integral of \(U_1U_2\). \subsubsection{Computation of the \((z_1,z_2)\)-integral of \(V_{1,2}\)} In this section we compute the integral of \(V_{1,2}\) in~\eqref{eq:bast}. To simplify notation, in the remainder of this section we write \(f\) and \(g\) instead of \(f^{(1)}\) and \(f^{(2)}\), with \(f\) as in Theorem~\ref{theo:CLT} and \(g=f\) or \(g=\overline{f}\). \begin{lemma}\label{lem:vi} Let \(V_{1,2}\) be defined in~\eqref{eq:exder}, then \begin{equation} \label{eq:finV} \begin{split} &-\frac{1}{8\pi^2}\int_\mathbf{C} \operatorname{d}\!{}^2 z_1\int_\mathbf{C} \operatorname{d}\!{}^2 z_2 \Delta f(z_1) \Delta \overline{g(z_2)} \int_0^{\infty} \operatorname{d}\!{} \eta_1\int_0^{\infty} \operatorname{d}\!{} \eta_2 V_{1,2} \\ &\qquad =\frac{1}{4\pi} \int_\mathbf{D} \braket{ \nabla g, \nabla f} \operatorname{d}\!{}^2 z+\frac{1}{2}\sum_{m\in\mathbf{Z} } \abs{m} \widehat{f\restriction_{\partial \mathbf{D} }}(m) \overline{\widehat{g\restriction_{\partial \mathbf{D} }}}(m). \end{split} \end{equation} \end{lemma} Note that the rhs.\ of~\eqref{eq:finV} gives exactly the first two terms in~\eqref{eq:cov}.
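We also note, for orientation, that the boundary term in~\eqref{eq:finV} can be read as a homogeneous \(\dot H^{1/2}(\partial\mathbf{D})\) pairing of the boundary restrictions, namely \[ \sum_{m\in\mathbf{Z} } \abs{m}\, \widehat{f\restriction_{\partial \mathbf{D} }}(m)\, \overline{\widehat{g\restriction_{\partial \mathbf{D} }}}(m)=\braket{ f\restriction_{\partial \mathbf{D} },\, g\restriction_{\partial \mathbf{D} }}_{\dot H^{1/2}(\partial \mathbf{D} )}, \] with the convention \(\norm{h}_{\dot H^{1/2}(\partial\mathbf{D})}^2=\sum_{m\in\mathbf{Z}} \abs{m}\,\abs{\widehat h(m)}^2\); this identity is purely definitional and is not used in the proof.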
Using the expression of \(V_{1,2}\) in~\eqref{eq:exder} and the computation of its \((\eta_1,\eta_2)\)-integral in~\eqref{eq:expintV}, we have that \begin{equation}\label{eq:comp1} \begin{split} &-\frac{1}{8\pi^2}\int_\mathbf{C} \operatorname{d}\!{}^2 z_1\int_\mathbf{C} \operatorname{d}\!{}^2 z_2 \Delta f(z_1) \Delta \overline{g(z_2)} \int_0^{\infty} \operatorname{d}\!{} \eta_1\int_0^{\infty} \operatorname{d}\!{} \eta_2 V_{1,2} \\ &\qquad = \frac{2}{\pi^2}\int_\mathbf{C} \operatorname{d}\!{}^2 z_1\int_\mathbf{C} \operatorname{d}\!{}^2 z_2 \partial_1\overline{\partial}_1 f(z_1)\partial_2\overline{\partial}_2 \overline{g(z_2)} \Theta(z_1,z_2), \end{split} \end{equation} where \(\Theta(z_1,z_2)\) is defined in the rhs.\ of~\eqref{eq:expintV}. We compute the rhs.\ of~\eqref{eq:comp1} as stated in Lemma~\ref{lem:intbp}. The proof of this lemma is postponed to Appendix~\ref{sec:INTBP}. \begin{lemma}\label{lem:intbp} Let \(\Theta(z_1,z_2)\) be defined in~\eqref{eq:expintV}, then we have that \begin{gather} \begin{aligned} &\frac{2}{\pi^2}\int_\mathbf{C} \operatorname{d}\!{}^2 z_1\int_\mathbf{C} \operatorname{d}\!{}^2 z_2 \partial_1\overline{\partial}_1 f(z_1)\partial_2\overline{\partial}_2 \overline{g(z_2)} \Theta(z_1,z_2) =\frac{1}{4\pi} \int_\mathbf{D} \braket{ \nabla g, \nabla f} \operatorname{d}\!{}^2 z \\ &\qquad\quad+ \lim_{\epsilon\to 0}\Bigg[ \frac{1}{2\pi^2} \int_{\abs{z_1}\ge 1} \operatorname{d}\!{}^2 z_1 \int_{\substack{\abs{1-z_1\overline{z}_2}\ge \epsilon, \\ \abs{z_2}\ge 1}} \operatorname{d}\!{}^2 z_2 \, \partial_1 f(z_1) \overline{\partial}_2 \overline{g(z_2)} \frac{1}{(1-\overline{z}_1z_2)^2} \\ &\qquad\qquad\quad+ \frac{1}{2\pi^2} \int_{\abs{z_1}\ge 1} \operatorname{d}\!{}^2 z_1 \int_{\substack{\abs{1-z_1\overline{z}_2}\ge \epsilon, \\ \abs{z_2}\ge 1}} \operatorname{d}\!{}^2 z_2\, \overline{\partial}_1 f(z_1) \partial_2 \overline{g(z_2)} \frac{1}{(1-z_1\overline{z}_2)^2} \Bigg].
\end{aligned}\label{eq:intbp}\raisetag{-5em} \end{gather} \end{lemma} \begin{proof}[Proof of Lemma~\ref{lem:vi}] By Lemma~\ref{lem:intbp} it follows that to prove Lemma~\ref{lem:vi} it is enough to compute the last two lines in the rhs.\ of~\eqref{eq:intbp}. Note that using the change of variables \(\overline{z}_1\to 1/{\overline{z}_1}\), \(z_2\to 1/z_2\) the integral in the rhs.\ of~\eqref{eq:intbp} is equal to the same integral on the domain \(\abs{z_1},\abs{z_2}\le 1\), \(\abs{1-z_1\overline{z}_2}\ge \epsilon\). By a standard density argument, using that \(f,g\in H_0^{2+\delta}\), it is enough to compute the limit in~\eqref{eq:intbp} only for polynomials, hence, from now on, we consider polynomials \(f\), \(g\) of the form \begin{equation} \label{eq:poly} f(z_1)=\sum_{k,l\ge 0} z_1^k \overline{z}_1^l a_{kl}, \qquad g(z_2)=\sum_{k,l\ge 0} z_2^k \overline{z}_2^l b_{kl}, \end{equation} for some coefficients \(a_{kl}, b_{kl}\in \mathbf{C} \). We remark that the summations in~\eqref{eq:poly} are finite since \(f\) and \(g\) are polynomials. 
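The moment computation used in the next step is elementary, and we record it here for convenience (a short polar-coordinates check, with \(z=re^{\mathrm{i}\theta}\)): \[ \int_{\abs{z}\le 1} z^\alpha \overline{z}^\beta \operatorname{d}\!{}^2 z=\int_0^1 r^{\alpha+\beta+1}\operatorname{d}\!{} r\int_0^{2\pi} e^{\mathrm{i}(\alpha-\beta)\theta}\operatorname{d}\!{} \theta=\frac{2\pi}{\alpha+\beta+2}\,\delta_{\alpha,\beta}=\frac{\pi}{\alpha+1}\,\delta_{\alpha,\beta}. \] Taking the product of two such integrals, and noting that the excluded set \(\set{\abs{1-z_1\overline{z}_2}< \epsilon}\) gives a vanishing contribution as \(\epsilon\to 0\) by dominated convergence (the integrand is bounded on the polydisc), yields the two-disc identity displayed below.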
Then, using that \[ \lim_{\epsilon\to 0}\int_{\abs{z_1}\le 1} \int_{\substack{\abs{1-z_1\overline{z}_2}\ge \epsilon, \\ \abs{z_2}\le 1}} z_1^\alpha \overline{z}_1^\beta z_2^{\alpha'} \overline{z}_2^{\beta'}\operatorname{d}\!{}^2 z_1 \operatorname{d}\!{}^2 z_2=\frac{\pi^2}{(\alpha+1)(\alpha'+1)}\delta_{\alpha,\beta}\delta_{\alpha',\beta'}, \] we compute the limit in the rhs.\ of~\eqref{eq:intbp} as follows \begin{gather} \begin{aligned} &\lim_{\epsilon\to 0}\sum_{k,l,k',l', m\ge 0} \frac{1}{2\pi^2}\int_{\abs{z_1}\le 1}\int_{\substack{\abs{1-z_1\overline{z}_2}\ge \epsilon, \\ \abs{z_2}\le 1}} \operatorname{d}\!{}^2 z_1 \operatorname{d}\!{}^2 z_2 \, m a_{kl} \overline{b_{k'l'}} \\ &\qquad\qquad\qquad\times\Big[k k' z_1^{k-1}\overline{z}_1^{l+m-1} z_2^{l'+m-1} \overline{z}_2^{k'-1}+l l' z_1^{k+m-1} \overline{z}_1^{l-1}z_2^{k'+m-1}\overline{z}_2^{l'-1}\Big] \\ &\qquad= \frac{1}{2}\sum_{\substack{k,l,k',l', \\ m\ge0}} m a_{kl} \overline{b_{k'l'}}\Big[\delta_{k,l+m}\delta_{k',l'+m} +\delta_{k,l-m}\delta_{k',l'-m}\Big] \\ &\qquad= \frac{1}{2}\sum_{\substack{k,l,k',l'\ge0, \\ m\in\mathbf{Z}} } \abs{m} a_{kl} \overline{b_{k'l'}}\delta_{k,l+m}\delta_{k',l'+m}. \end{aligned}\label{eq:comp3}\raisetag{-5em} \end{gather} On the other hand, \begin{equation} \label{eq:h12} \sum_{m\in\mathbf{Z} } \abs{m} \widehat{f\restriction_{\partial \mathbf{D} }}(m) \overline{\widehat{g\restriction_{\partial \mathbf{D} }}}(m)= \sum_{m\in\mathbf{Z} } \abs{m} \sum_{k,l,k',l'\ge 0} a_{kl}\overline{b_{k'l'}}\delta_{m,k-l} \delta_{m,k'-l'}, \end{equation} where \[ \widehat{f\restriction_{\partial \mathbf{D} }}(k):= \frac{1}{2\pi}\int_0^{2\pi} f\restriction_{\partial \mathbf{D} }(e^{\mathrm{i}\theta}) e^{-\mathrm{i}k\theta}\operatorname{d}\!{} \theta, \quad f\restriction_{\partial \mathbf{D} }(e^{\mathrm{i}\theta})=\sum_{k\in\mathbf{Z} } \widehat{f\restriction_{\partial \mathbf{D} }}(k) e^{\mathrm{i}k\theta}. \] Finally, combining~\eqref{eq:intbp} and~\eqref{eq:comp3}--\eqref{eq:h12}, we conclude the proof of~\eqref{eq:finV}.
\end{proof} \subsubsection{Computation of the \((z_1,z_2)\)-integral of \(U_1 U_2\)} In order to conclude the proof of Theorem~\ref{theo:CLT}, in this section we compute the integral of \(U_1 U_2\) in~\eqref{eq:bast}. Similarly to the previous section, we use the notation \(f\) and \(g\), instead of \(f^{(1)}\), \(f^{(2)}\), with \(f\) in Theorem~\ref{theo:CLT} and \(g=f\) or \(g=\overline{f}\). \begin{lemma}\label{lem:wi} Let \(\kappa_4=n^2[\E \abs{x_{11}}^4-2(\E \abs{x_{11}}^2)^2]\), and let \(U_1\), \(U_2\) be defined in~\eqref{eq:exder}, then \begin{gather} \begin{aligned} &-\frac{\kappa_4}{8\pi^2}\int_\mathbf{C} \operatorname{d}\!{}^2 z_1\int_\mathbf{C} \operatorname{d}\!{}^2 z_2 \Delta f(z_1) \Delta \overline{g(z_2)} \int_0^{\infty} \operatorname{d}\!{} \eta_1\int_0^{\infty} \operatorname{d}\!{} \eta_2 U_1 U_2 \\ &\qquad =\kappa_4 \left(\frac{1}{\pi}\int_\mathbf{D} f(z)\operatorname{d}\!{}^2 z-\widehat{f\restriction_{\partial\mathbf{D} }}(0)\right)\left(\frac{1}{\pi}\int_\mathbf{D} \overline{g(z)}\operatorname{d}\!{}^2 z- \overline{\widehat{g\restriction_{\partial\mathbf{D} }}}(0)\right). \end{aligned}\label{eq:finW}\raisetag{-3.5em} \end{gather} \end{lemma} \begin{proof}[Proof of Theorem~\ref{theo:CLT}] Theorem~\ref{theo:CLT} readily follows combining Lemma~\ref{lem:b}, Lemma~\ref{lem:vi} and Lemma~\ref{lem:wi}. \end{proof} \begin{proof}[Proof of Lemma~\ref{lem:wi}] First of all, we recall the following integration by parts formulas \begin{equation} \label{eq:ibp} \int_\mathbf{D} \partial_z f(z,\overline{z}) \operatorname{d}\!{}^2 z=\frac{\mathrm{i}}{2}\int_{\partial \mathbf{D} } f(z,\overline{z}) \operatorname{d}\!{}\overline{z}, \quad \int_\mathbf{D} \partial_{\overline{z}} f(z,\overline{z})\operatorname{d}\!{}^2 z=-\frac{\mathrm{i}}{2}\int_{\partial\mathbf{D} } f(z,\overline{z}) \operatorname{d}\!{} z.
\end{equation} Then, using the computation of the \(\eta\)-integral of \(U\) in~\eqref{eq:expintW}, and integration by parts~\eqref{eq:ibp} twice, we conclude that \[ \begin{split} \int_\mathbf{C} \Delta f\int_0^\infty U\, \operatorname{d}\!{} \eta\operatorname{d}\!{}^2 z &= \mathrm{i} 2\sqrt{2} \int_{\mathbf{D} } \partial\overline{\partial} f(z) (1-\abs{z}^2) \operatorname{d}\!{}^2 z=\mathrm{i} 2\sqrt{2} \int_{\mathbf{D} } \overline{\partial} f(z) \overline{z}\operatorname{d}\!{}^2z \\ &=- \mathrm{i} 2\sqrt{2} \left( \int_{\mathbf{D} } f(z)\,\operatorname{d}\!{}^2z +\frac{\mathrm{i}}{2}\int_{\partial\mathbf{D} } f(z) \overline{z}\operatorname{d}\!{} z\right) \\ &=-\mathrm{i} 2\sqrt{2} \left( \int_{\mathbf{D} } f(z)\operatorname{d}\!{}^2 z -\pi \widehat{f\restriction_{\partial\mathbf{D} }}(0)\right). \end{split} \] This concludes the proof of the lemma. \end{proof} \section{Local law for products of resolvents}\label{sec local law G2} The main technical result of this section is a local law for \emph{products} of resolvents with different spectral parameters \(z_1\ne z_2\). Our goal is to find a deterministic approximation to \(\braket{AG^{z_1} B G^{z_2} }\) for generic bounded deterministic matrices \(A,B\). Due to the correlation between the two resolvents, the deterministic approximation to \(\braket{A G^{z_1}B G^{z_2}}\) is not simply \(\braket{A M^{z_1}B M^{z_2}}\). In the context of linear statistics, such local laws for products of resolvents have previously been obtained, e.g.\ for Wigner matrices in~\cite{MR3805203} and for sample-covariance matrices in~\cite{MR4119592}, albeit with weaker error bounds. In the current non-Hermitian setting we need such a local law twice: for the resolvent CLT in Proposition~\ref{prop:CLTresm}, and for the asymptotic independence of resolvents in Proposition~\ref{prop:indmr}.
The key point for the latter is to obtain an improvement in the error term for mesoscopic separation \(\abs{z_1-z_2}\sim n^{-\epsilon}\), a fine effect that has not been captured before. Our proof applies verbatim to both real and complex i.i.d.\ matrices, as well as to resolvents \(G^z(w)\) evaluated at an arbitrary spectral parameter \(w\in \mathbf{H} \). We therefore work with this more general setup in this section, even though for the application in the proofs of Propositions~\ref{prop:CLTresm}--\ref{prop:indmr} this generality is not necessary. We recall from~\cite{1907.13631} that with the shorthand notations \begin{equation}\label{G_i def} G_i:= G^{z_i}(w_i),\quad M_i := M^{z_i}(w_i), \end{equation} the deviation of \(G_i\) from \(M_i\) is computed from the identity \begin{equation}\label{G deviation} G_i = M_i -M_i \underline{W G_i} + M_i \SS[G_i-M_i] G_i,\quad W := \begin{pmatrix} 0&X\\X^\ast &0 \end{pmatrix}.\end{equation} The relation~\eqref{G deviation} requires some definitions. First, the linear \emph{covariance} or \emph{self-energy operator} \(\SS\colon\mathbf{C}^{2n\times 2n}\to\mathbf{C}^{2n\times 2n}\) is given by \begin{equation}\label{S def} \SS\biggl[\begin{pmatrix} A & B\\ C & D \end{pmatrix}\biggr] := \widetilde\E \widetilde W \begin{pmatrix} A & B\\ C & D \end{pmatrix} \widetilde W= \begin{pmatrix} \braket{D} & 0\\ 0 & \braket{A} \end{pmatrix} ,\quad \widetilde W=\begin{pmatrix} 0& \widetilde X\\ \widetilde X^\ast & 0 \end{pmatrix}, \end{equation} where \(\widetilde X\sim\mathrm{Gin}_\mathbf{C}\); that is, \(\SS\) averages the diagonal blocks and swaps them. Here \(\mathrm{Gin}_\mathbf{C}\) stands for the standard complex Ginibre ensemble. The last equality in~\eqref{S def} follows directly from \(\E \widetilde x_{ab}^2=0\), \(\E\abs{\widetilde x_{ab}}^2=n^{-1}\).
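As a sanity check (ours, not part of the argument), the block formula in~\eqref{S def} can be confirmed by Monte Carlo averaging \(\widetilde W M \widetilde W\) over complex Ginibre samples; here \(\braket{\cdot}\) is taken as the normalised trace of the corresponding \(n\times n\) block, and all variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, samples = 3, 40000

# A fixed deterministic 2n x 2n test matrix with n x n blocks A, B, C, D.
A, B, C, D = (rng.standard_normal((n, n)) for _ in range(4))
M = np.block([[A, B], [C, D]])

acc = np.zeros((2 * n, 2 * n), dtype=complex)
for _ in range(samples):
    # Complex Ginibre scaling: E x = 0, E x^2 = 0, E |x|^2 = 1/n.
    X = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2 * n)
    W = np.block([[np.zeros((n, n)), X], [X.conj().T, np.zeros((n, n))]])
    acc += W @ M @ W
S_mc = acc / samples

# Prediction from the display: diag(<D> I, <A> I), <.> = normalised block trace.
S_pred = np.block([[np.trace(D) / n * np.eye(n), np.zeros((n, n))],
                   [np.zeros((n, n)), np.trace(A) / n * np.eye(n)]])
mc_err = np.abs(S_mc - S_pred).max()
```

The off-diagonal blocks vanish in expectation precisely because \(\E \widetilde x_{ab}^2=0\), which is the mechanism behind the last equality in~\eqref{S def}.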
Second, underlining denotes, for any given function \(f\colon\mathbf{C}^{2n\times 2n}\to\mathbf{C}^{2n\times 2n}\), the \emph{self-renormalisation} \(\underline{W f(W)}\) defined by \begin{equation}\label{self renom} \underline{W f(W)}:= W f(W) - \widetilde\E \widetilde W (\partial_{\widetilde W} f)(W), \end{equation} where \(\partial\) indicates a directional derivative in the direction \(\widetilde W\) and \(\widetilde W\) denotes an independent random matrix as in~\eqref{S def} with \(\widetilde X\) a complex Ginibre matrix with expectation \(\widetilde\E\). Note that we use complex Ginibre \(\widetilde X\) irrespective of the symmetry class of \(X\). Therefore, using the resolvent identity, it follows that \[\underline{W G} = WG + \widetilde\E \widetilde W G \widetilde W G = WG+\SS[G]G.\] We now use~\eqref{G deviation} and~\eqref{self renom} to compute \begin{gather} \begin{aligned} G_1 B G_2 &= M_1 B G_2 - M_1 \underline{W G_1} B G_2 + M_1 \SS[G_1-M_1]G_1 B G_2\\ &= M_1B M_2 + M_1 B(G_2-M_2) - M_1 \underline{W G_1 B G_2} + M_1 \SS[ G_1 B G_2 ]M_2 \\ &\qquad + M_1 \SS[G_1 B G_2](G_2-M_2) + M_1 \SS[G_1-M_1]G_1 B G_2, \end{aligned}\label{G12 devition}\raisetag{-3em} \end{gather} where, in the second equality, we used \[ \begin{split} \underline{W G_1 B G_2} &= W G_1 B G_2 + \SS[G_1] G_1 B G_2 + \SS[G_1 B G_2] G_2 \\ &= \underline{W G_1} B G_2 + \SS[G_1 B G_2] G_2. \end{split} \] Assuming that the self-renormalised terms and the ones involving \(G_i-M_i\) in~\eqref{G12 devition} are small,~\eqref{G12 devition} implies \begin{equation}\label{G12 approx local law} G_1 B G_2 \approx M_B^{z_1,z_2}, \end{equation} where \begin{equation}\label{eq M12 def} M_B^{z_1,z_2}(w_1,w_2):= (1-M^{z_1}(w_1)\SS[\cdot]M^{z_2}(w_2))^{-1}[M^{z_1}( w_1)B M^{z_2}(w_2)]. 
\end{equation} We define the corresponding \emph{\(2\)-body stability operator} \begin{equation}\label{eq:stabop12} \widehat\mathcal{B}=\widehat\mathcal{B}_{12}=\widehat\mathcal{B}_{12}(z_1,z_2, w_1,w_2):= 1-M_1 \SS[\cdot]M_2, \end{equation} acting on the space of \(2n \times 2n\) matrices equipped with the usual Euclidean matrix norm which induces a natural norm for \(\widehat\mathcal{B}\). Our main technical result of this section is making~\eqref{G12 approx local law} rigorous in the sense of Theorem~\ref{thm local law G2} below. To keep notations compact, we first introduce a commonly used (see, e.g.~\cite{MR3068390}) notion of high-probability bound. \begin{definition}[Stochastic Domination]\label{def:stochDom} If \[X=\tuple*{ X^{(n)}(u) \given n\in\mathbf{N}, u\in U^{(n)} }\quad\text{and}\quad Y=\tuple*{ Y^{(n)}(u) \given n\in\mathbf{N}, u\in U^{(n)} }\] are families of non-negative random variables indexed by \(n\), and possibly some parameter \(u\), then we say that \(X\) is stochastically dominated by \(Y\), if for all \(\epsilon, D>0\) we have \[\sup_{u\in U^{(n)}} \Prob\left[X^{(n)}(u)>n^\epsilon Y^{(n)}(u)\right]\leq n^{-D}\] for large enough \(n\geq n_0(\epsilon,D)\). In this case we use the notation \(X\prec Y\). \end{definition} \begin{theorem}\label{thm local law G2} Fix \(z_1,z_2\in\mathbf{C}\) and \(w_1,w_2\in\mathbf{C}\) with \(\abs{\eta_i}:=\abs{\Im w_i}\ge n^{-1}\) such that \[ \eta_*:= \min\{\abs{\eta_1},\abs{\eta_2}\}\ge n^{-1+\epsilon}\norm{\widehat\mathcal{B}_{12}^{-1}} \] for some \(\epsilon>0\). Assume that \(G^{z_1}(w_1),G^{z_2}(w_2)\) satisfy the local laws in the form \[ \abs{\braket{A(G^{z_i}-M^{z_i})}} \prec \frac{\norm{A}}{n\abs{\eta_i}}, \quad \abs{\braket{\bm{x},(G^{z_i}-M^{z_i})\bm{y}}} \prec \frac{\norm{\bm{x}}\norm{\bm{y}}}{\sqrt{n\abs{\eta_i}}}\] for any bounded deterministic matrix and vectors \(A,\bm{x},\bm{y}\). 
Then, for any bounded deterministic matrix \(B\), with \(\norm{B}\lesssim 1\), the product of resolvents \( G^{z_1}B G^{z_2}=G^{z_1}(w_1)BG^{z_2}(w_2)\) is approximated by \(M_B^{z_1,z_2} =M_B^{z_1,z_2}(w_1,w_2)\) defined in~\eqref{eq M12 def} in the sense that \begin{gather} \begin{aligned} \abs{\braket{A (G^{z_1} BG^{z_2}-M_B^{z_1,z_2})}}&\prec \frac{\norm{A}\norm{\widehat\mathcal{B}_{12}^{-1}}}{n\eta_\ast \abs{\eta_1\eta_2}^{1/2} }\\ &\;\; \times\Bigl(\eta_\ast^{1/12}+\eta_\ast^{1/4}\norm{\widehat\mathcal{B}_{12}^{-1}} +\frac{1}{\sqrt{n\eta_\ast}}+\frac{\norm{\widehat\mathcal{B}_{12}^{-1}}^{1/4}}{(n\eta_\ast)^{1/4}}\Bigr), \\ \abs{\braket{\bm{x},(G^{z_1}BG^{z_2}-M_B^{z_1,z_2})\bm{y}}}&\prec \frac{\norm{\bm{x}} \norm{\bm{y}}\norm{\widehat\mathcal{B}_{12}^{-1}}}{(n\eta_\ast)^{1/2}\abs{\eta_1\eta_2}^{1/2}} \end{aligned}\label{final local law}\raisetag{-5em} \end{gather} for any deterministic \(A,\bm{x},\bm{y}\). \end{theorem} The estimates in~\eqref{final local law} will be complemented by an upper bound on \(\norm{\widehat\mathcal{B}^{-1}}\) in Lemma~\ref{lemma:betaM}, where we will prove in particular that \(\norm{\widehat\mathcal{B}^{-1}}\lesssim n^{2\delta}\) whenever \(\abs{z_1-z_2}\gtrsim n^{-\delta}\), for some small fixed \(\delta>0\). The proof of Theorem~\ref{thm local law G2} will follow from a bootstrap argument once the main input, the following high-probability bound on \(\underline{W G_1 B G_2}\), has been established. \begin{proposition}\label{prop prob bound} Under the assumptions of Theorem~\ref{thm local law G2}, the following estimates hold uniformly in \(n^{-1}\lesssim \abs{\eta_1},\abs{\eta_2}\lesssim 1\). \begin{subequations} \begin{enumerate}[label=(\roman*)] \item We have the isotropic bound \begin{equation}\label{prop iso bound} \abs{\braket{\bm{x},\underline{WG_1 B G_2} \bm{y}}} \prec \frac{1}{( n\eta_\ast)^{1/2}\abs{\eta_1\eta_2}^{1/2}} \end{equation} uniformly for deterministic vectors \(\bm{x},\bm{y}\) and a deterministic matrix \(B\) with \(\norm{\bm{x}}+\norm{\bm{y}}+\norm{B}\le1\).
\item Assume that for some positive deterministic \(\theta=\theta(z_1,z_2,\eta_\ast)\) an a priori bound \begin{equation}\label{eq a priori theta}\abs{\braket{A G_1 B G_2}}\prec\theta \end{equation} has already been established uniformly in deterministic matrices \(\norm{A}+\norm{B}\le 1\). Then we have the improved averaged bound \begin{equation}\label{prop av bound} \abs{\braket{\underline{W G_1 B G_2 A}}} \prec \frac{1}{n\eta_\ast \abs{\eta_1\eta_2}^{1/2}}\Bigl((\theta\eta_\ast)^{1/4}+\frac{1}{\sqrt{n\eta_\ast}}+\eta_\ast^{1/12}\Bigr), \end{equation} again uniformly in deterministic matrices \(\norm{A}+\norm{B}\le 1\). \end{enumerate} \end{subequations} \end{proposition} \begin{proof}[Proof of Theorem~\ref{thm local law G2}] We note that from~\eqref{eq M12 def} and~\eqref{eq M bound} we have \begin{equation}\label{eq M12 bound} \norm{M_B^{z_1,z_2}}\lesssim \norm{\widehat\mathcal{B}^{-1}} \end{equation} and abbreviate \(G_{12}:= G_1 B G_2\), \(M_{12}:= M_B^{z_1,z_2}\). We now assume an a priori bound \(\abs{\braket{G_{12}A}}\prec \theta_1\), i.e.\ that~\eqref{eq a priori theta} holds with \(\theta=\theta_1\). In the first step we may take \(\theta_1=\abs{\eta_1\eta_2}^{-1/2}\) due to the local law for \(G_i\) from which it follows that \[ \begin{split} \abs{\braket{A G_1 B G_2}}&\le \sqrt{\braket{A G_1 G_1^\ast A^\ast}}\sqrt{\braket{B G_2 G_2^\ast B^\ast}} \\ &=\frac{1}{\sqrt{\abs{\eta_1}\abs{\eta_2}}}\sqrt{\braket{A \Im G_1 A^\ast}}\sqrt{\braket{B \Im G_2 B^\ast}} \prec\theta_1. 
\end{split} \] By~\eqref{G12 devition} and~\eqref{eq M12 def} we have \begin{gather} \begin{aligned} \widehat\mathcal{B}[G_{12}-M_{12}]&= M_1 B (G_2-M_2)-M_1\underline{WG_{12}}+M_1\SS[G_{12}](G_2-M_2) \\ &\quad +M_1\SS[G_1-M_1]G_{12}, \end{aligned}\label{eq B G-M}\raisetag{-3em} \end{gather} and from~\eqref{single local law} and~\eqref{prop av bound} we obtain \[\begin{split} \abs{\braket{A(G_{12}-M_{12})}} &= \abs{\braket{A^\ast, \widehat\mathcal{B}^{-1}\widehat\mathcal{B}[G_{12}-M_{12}] }} = \abs{\braket{(\widehat\mathcal{B}^{\ast})^{-1}[A^\ast]^\ast \widehat\mathcal{B}[G_{12}-M_{12}]}} \\ & \prec \norm{\widehat\mathcal{B}^{-1}} \Bigl[ \frac{1}{n\eta_\ast} + \frac{(\theta_1\eta_\ast)^{1/4}+(\sqrt{n\eta_\ast})^{-1}+\eta_\ast^{1/12}}{n\eta_\ast\abs{\eta_1\eta_2}^{1/2}}+ \frac{\theta_1}{n\eta_\ast} \Bigr]. \end{split}\] For the terms involving \(G_i-M_i\) we used that \(\SS[R]=\braket{R E_2}E_1 + \braket{R E_1} E_2\) with the \(2n\times 2n\) block matrices \begin{equation}\label{E1 E2 def} E_1=\begin{pmatrix} 1&0\\0&0 \end{pmatrix},\qquad E_2=\begin{pmatrix} 0&0\\0&1 \end{pmatrix}, \end{equation} i.e.\ that \(\SS\) effectively acts as a trace, so that the averaged bounds are applicable. Therefore with~\eqref{eq M12 bound} it follows that \begin{equation}\label{eq G12 iter} \abs{\braket{G_{12}A}} \prec \theta_2 := \norm{\widehat\mathcal{B}^{-1}} \Bigl[1+ \frac{1}{n\eta_\ast} + \frac{(\theta_1\eta_\ast)^{1/4}+(\sqrt{n\eta_\ast})^{-1}+\eta_\ast^{1/12}}{n\eta_\ast\abs{\eta_1\eta_2}^{1/2}}+ \frac{\theta_1}{n\eta_\ast}\Bigr]. \end{equation} By iterating~\eqref{eq G12 iter} we can use \(\abs{\braket{G_{12}A}}\prec\theta_2\ll\theta_1\) as new input in~\eqref{eq a priori theta} to obtain \(\abs{\braket{G_{12}A}}\prec\theta_3\ll\theta_2\) since \(n\eta_\ast\gg\norm{\widehat\mathcal{B}^{-1}}\). 
Here \(\theta_j\), for \(j=3,4,\dots\), is defined iteratively by replacing \(\theta_1\) with \(\theta_{j-1}\) in the rhs.\ of the defining equation for \(\theta_2\) in~\eqref{eq G12 iter}. This improvement continues until the fixed point of this iteration, i.e.\ until \(\theta_N^{3/4}\) approaches \(\norm{\widehat\mathcal{B}^{-1}}n^{-1}\eta_\ast^{-7/4}\). For any given \(\xi> 0\), after finitely many steps \(N=N(\xi)\) the iteration stabilizes to \[ \theta_\ast \lesssim n^\xi\biggl[ \norm{\widehat\mathcal{B}^{-1}} + \frac{ \norm{\widehat\mathcal{B}^{-1}}}{n\eta_\ast}\frac{\eta_\ast^{1/12}}{\abs{\eta_1\eta_2}^{1/2}} + \frac{1}{\eta_\ast}\Bigl(\frac{ \norm{\widehat\mathcal{B}^{-1}}}{n\eta_\ast}\Bigr)^{4/3}\biggr], \] from which \[ \abs{\braket{A(G_{12}-M_{12})}}\prec \frac{\norm{\widehat\mathcal{B}^{-1}}}{ n\eta_\ast\abs{\eta_1\eta_2}^{1/2}}\Bigl( \eta_\ast^{1/12}+\eta_\ast^{1/4}\norm{\widehat\mathcal{B}^{-1}} +\frac{1}{\sqrt{n\eta_\ast}}+\Bigl(\frac{\norm{\widehat\mathcal{B}^{-1}}}{n\eta_\ast}\Bigr)^{1/4}\Bigr), \] and therefore the averaged bound in~\eqref{final local law} follows. For the isotropic bound in~\eqref{final local law} note that \[\braket{\bm{x},(G_{12}-M_{12})\bm{y}} = \Tr \bigl[(\widehat\mathcal{B}^\ast)^{-1}[\bm{x}\bm{y}^\ast]\bigr]^\ast\widehat\mathcal{B}[G_{12}-M_{12}] \] and that due to the block-structure of \(\widehat\mathcal{B}\) we have \[(\widehat\mathcal{B}^\ast)^{-1}[\bm{x}\bm{y}^\ast] = \sum_{i=1}^4 \bm{x}_i \bm{y}_i^\ast,\qquad \norm{\bm{x}_i}\norm{\bm{y}_i}\lesssim \norm{\widehat\mathcal{B}^{-1}},\] for some vectors \(\bm{x}_i,\bm{y}_i\). The isotropic bound in~\eqref{final local law} thus follows in combination with the isotropic bound in~\eqref{single local law},~\eqref{eq B G-M} and~\eqref{prop iso bound} applied to the pairs of vectors \(\bm{x}_i,\bm{y}_i\). This completes the proof of the theorem modulo the proof of Proposition~\ref{prop prob bound}. 
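The bootstrap above can also be illustrated numerically. With illustrative parameter values of our choosing (not taken from the paper) satisfying \(n\eta_\ast\gg\norm{\widehat\mathcal{B}^{-1}}\), iterating the defining map for \(\theta_2\) in~\eqref{eq G12 iter} with \(\abs{\eta_1\eta_2}^{1/2}=\eta_\ast\) produces a rapidly decreasing sequence that stabilizes after a handful of steps:

```python
# Illustrative parameter values (ours, not from the paper); they satisfy
# the assumption n * eta >> ||B12^{-1}|| needed for the bootstrap to improve.
n, eta, Binv = 10**6, 1e-2, 10.0   # eta plays the role of eta_* = |eta_1 eta_2|^{1/2}

def F(theta):
    # Right-hand side of the defining relation for theta_2 in the iteration.
    return Binv * (1 + 1 / (n * eta)
                   + ((theta * eta) ** 0.25 + (n * eta) ** -0.5 + eta ** (1 / 12))
                   / (n * eta * eta)
                   + theta / (n * eta))

thetas = [1 / eta]              # theta_1 = |eta_1 eta_2|^{-1/2}
for _ in range(8):
    thetas.append(F(thetas[-1]))
```

Since the \(\theta\)-dependent terms carry small prefactors of order \((n\eta_\ast)^{-1}\), the map is a contraction near its fixed point, which is why finitely many steps suffice for any target accuracy \(n^\xi\).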
\end{proof} \subsection{Probabilistic bound and the proof of Proposition~\ref{prop prob bound}}\label{section prob bound} We follow the graphical expansion outlined in~\cite{MR3941370,MR4134946} adapted to the current setting. We focus on the case when \(X\) has complex entries and additionally mention the few changes required when \(X\) is a real matrix. We abbreviate \(G_{12}=G_1 B G_2\) and use iterated cumulant expansions to expand \(\E\abs{\braket{\bm{x},\underline{WG_{12}}\bm{y}}}^{2p}\) and \(\E\abs{\braket{\underline{WG_{12}}A}}^{2p}\) in terms of polynomials in entries of \(G\). For the expansion of the first \(W\) we have in the complex case \begin{equation}\label{1 cum exp}\begin{split} &\E \Tr(\underline{W G_{12}}A) \Tr(\underline{W G_{12}}A)^{p-1} \Tr(A^\ast\underline{ G_{12}^\ast W})^p \\ &\quad= \frac{1}{n}\E\sum_{ab} R_{ab} \Tr(\Delta^{ab} G_{12}A) \partial_{ba} \Bigl[ \Tr(\underline{W G_{12}}A)^{p-1} \Tr(A^\ast \underline{G_{12}^\ast W})^p \Bigr] \\ &\qquad + \sum_{k\ge 2}\sum_{ab}\sum_{\bm\alpha\in\{ab,ba\}^k} \frac{\kappa(ab,\bm\alpha)}{k!} \\ &\qquad\qquad\quad \times\E\partial_{\bm\alpha} \Bigl[ \Tr(\Delta^{ab} G_{12}A) \Tr(\underline{W G_{12}}A)^{p-1} \Tr(A^\ast \underline{G_{12}^\ast W})^p \Bigr] \end{split}\end{equation} and similarly for \(\braket{\bm{x},\underline{WG_{12}}\bm{y}}\), where unspecified summations \(\sum_a\) are understood to be over \(\sum_{a\in[2n]}\), and \((\Delta^{ab})_{cd}:= \delta_{ac}\delta_{bd}\). Here we introduced the matrix \(R_{ab}:= \bm1(a\le n,b>n)+\bm 1(a>n,b\le n) \) which is the rescaled second order cumulant (variance), i.e.\ \(R_{ab}=n\kappa(ab,ba)\). For \(\bm\alpha=(\alpha_1,\dots,\alpha_k)\) we denote the joint cumulant of \(w_{ab},w_{\alpha_1},\dots,w_{\alpha_k}\) by \(\kappa(ab,\bm\alpha)\) which is non-zero only for \(\bm\alpha\in\{ab,ba\}^k\). The derivative \(\partial_{\bm\alpha}\) denotes the derivative with respect to \(w_{\alpha_1},\dots,w_{\alpha_k}\). 
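The \(k=1\) Gaussian term in~\eqref{1 cum exp} rests on the complex integration-by-parts (Stein) identity \(\E[\overline{w} f(w,\overline{w})]=\E\abs{w}^2\,\E[\partial_w f]\) for a complex Gaussian \(w\) with \(\E w^2=0\). A quick Monte Carlo check with a monomial test function (our choice) is:

```python
import numpy as np

rng = np.random.default_rng(1)
N, sigma2 = 10**6, 0.5

# Complex Gaussian with E w = 0, E w^2 = 0, E |w|^2 = sigma2.
w = np.sqrt(sigma2 / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

# Test function f(w, wbar) = w^2 * wbar, so that df/dw = 2 w wbar.
lhs = np.mean(w.conj() * (w**2 * w.conj()))   # E[ wbar f ] = E|w|^4 = 2 sigma2^2
rhs = sigma2 * np.mean(2 * w * w.conj())      # E|w|^2 * E[ df/dw ]
```

The full cumulant expansion generalizes this identity to non-Gaussian entries, with the higher cumulants \(\kappa(ab,\bm\alpha)\) collecting the corrections.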
Note that in~\eqref{1 cum exp} the \(k=1\) term differs from the \(k\ge 2\) terms in two aspects. First, we only consider the \(\partial_{ba}\) derivative since in the complex case we have \(\kappa(ab,ab)=0\). Second, the action of the derivative on the first trace is not present since it is cancelled by the \emph{self-renormalisation} of \(\underline{WG_{12}}\). In the real case~\eqref{1 cum exp} differs slightly. First, for the \(k=1\) terms both \(\partial_{ab}\) and \(\partial_{ba}\) have to be taken into account with the same weight \(R\) since \(\kappa(ab,ab)=\kappa(ab,ba)\). Second, we chose only to renormalise the effect of the \(\partial_{ba}\)-derivative and hence the \(\partial_{ab}\)-derivative acts on all traces. Thus in the real case, compared to~\eqref{1 cum exp}, there is an additional term given by \[\frac{1}{n}\E\sum_{ab}R_{ab} \partial_{ab}\Bigl[\Tr(\Delta^{ab} G_{12} A ) \Tr(\underline{W G_{12}}A)^{p-1} \Tr(A^\ast \underline{G_{12}^\ast W})^p \Bigr].\] The main difference to~\cite[Section 4]{MR3941370} and~\cite[Section 4]{MR4134946} is that therein instead of \(\underline{WG_{12}}\) the single-\(G\) renormalisation \(\underline{WG}\) was considered. With respect to the action of the derivatives there is, however, little difference between the two since we have \[ \partial_{ab}G=-G\Delta^{ab}G, \quad \partial_{ab}G_{12}=-G_1 \Delta^{ab}G_{12} - G_{12}\Delta^{ab}G_2.\] Therefore, after iterating the expansion~\eqref{1 cum exp}, we structurally obtain the same polynomials as in~\cite{MR3941370,MR4134946}, except for the slightly different combinatorics and the fact that exactly \(2p\) of the \(G\)'s are \(G_{12}\)'s and the remaining \(G\)'s are either \(G_1\) or \(G_2\).
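The two derivative identities above are exact algebraic facts about resolvents and can be checked against finite differences; the following sketch (ours) uses a small Hermitian test matrix and spectral parameters of our choosing.

```python
import numpy as np

rng = np.random.default_rng(2)
m = 6
S = rng.standard_normal((m, m))
H = (S + S.T) / 2                      # Hermitian test matrix (real symmetric)
I = np.eye(m)
w1, w2 = 0.3 + 1.0j, -0.2 + 0.5j       # spectral parameters off the real axis

G = lambda wpar: np.linalg.inv(H - wpar * I)
a, b, t = 1, 4, 1e-5
Delta = np.zeros((m, m)); Delta[a, b] = 1.0

# d/dt G(H + t Delta) at t = 0 versus the exact formula -G Delta G.
fd1 = (np.linalg.inv(H + t * Delta - w1 * I)
       - np.linalg.inv(H - t * Delta - w1 * I)) / (2 * t)
exact1 = -G(w1) @ Delta @ G(w1)

# Same check for the product G1 B G2, where both resolvents are perturbed.
Bmat = rng.standard_normal((m, m))
G12 = lambda M: np.linalg.inv(M - w1 * I) @ Bmat @ np.linalg.inv(M - w2 * I)
fd12 = (G12(H + t * Delta) - G12(H - t * Delta)) / (2 * t)
exact12 = (-G(w1) @ Delta @ G(w1) @ Bmat @ G(w2)
           - G(w1) @ Bmat @ G(w2) @ Delta @ G(w2))
```

The second identity is simply the product rule; it is the reason the expansion of \(\underline{WG_{12}}\) generates the same graph structures as the single-resolvent case, with \(G_{12}\)-edges never created or destroyed.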
Thus, using the local law for \(G_i\) in the form \begin{gather} \begin{aligned} \abs{\braket{\bm{x},G_i\bm{y}}}&\prec 1,\\ \abs{\braket{\bm{x},G_{12}\bm{y}}}&\le \sqrt{\braket{\bm{x},G_{1} G_1^\ast\bm{x}}}\sqrt{\braket{\bm{y},BG_2G_2^\ast B^\ast\bm{y}}}\\ &=\frac{1}{\sqrt{\abs{\eta_1}\abs{\eta_2}}}\sqrt{\braket{\bm{x},(\Im G_1)\bm{x}}}\sqrt{\braket{\bm{y},B(\Im G_2)B^\ast\bm{y}}} \prec \frac{1}{\sqrt{\abs{\eta_1}\abs{\eta_2}}} \end{aligned}\label{eq G trivial bound}\raisetag{-6em} \end{gather} for \(\norm{\bm{x}}+\norm{\bm{y}}\lesssim 1\), we obtain exactly the same bound as in~\cite[Eq.~(23a)]{MR3941370} times a factor of \((\abs{\eta_1}\abs{\eta_2})^{-p}\) accounting for the \(2p\) exceptional \(G_{12}\) edges, i.e. \begin{equation}\label{eq ward before} \E\abs{\braket{\bm{x},\underline{WG_{12}}\bm{y}}}^{2p} \lesssim \frac{n^\epsilon}{(n\eta_\ast)^{p}\abs{\eta_1}^p\abs{\eta_2}^p},\quad \E\abs{\braket{\underline{WG_{12}}A}}^{2p} \lesssim \frac{n^\epsilon}{(n\eta_\ast)^{2p}\abs{\eta_1}^p\abs{\eta_2}^p}. \end{equation} The isotropic bound from~\eqref{eq ward before} completes the proof of~\eqref{prop iso bound}. It remains to improve the averaged bound in~\eqref{eq ward before} in order to obtain~\eqref{prop av bound}. We first have to identify where the bound~\eqref{eq ward before} is suboptimal. By iterating the expansion~\eqref{1 cum exp} we obtain a complicated polynomial expression in terms of entries of \(G_{12},G_1,G_2\) which is most conveniently represented graphically as \begin{equation}\label{eq graph expansion} \E\abs{\braket{\underline{W G_{12}}A}}^{2p} = \sum_{\Gamma\in \mathrm{Graphs}(p)} c(\Gamma)\E\Val(\Gamma) + \mathcal{O}\Bigl(n^{-2p}\Bigr) \end{equation} for some finite collection of \(\mathrm{Graphs}(p)\). Before we precisely define the \emph{value of \(\Gamma\)}, \(\Val(\Gamma)\), we first give two examples. 
Continuing~\eqref{1 cum exp} in the case \(p=1\) we have \begin{subequations}\label{eq graph ex} \begin{equation}\label{eq graph ex a2} \begin{split} &\E\Tr(\underline{W G_{12}}A) \Tr(A^\ast \underline{G_{12}^\ast W}) \\ &= \sum_{ab} \frac{R_{ab}}{n}\E \Tr(\Delta^{ab}G_{12}A)\Tr(A^\ast G_{12}^\ast \Delta^{ba} ) \\ &\quad -\sum_{ab}\frac{R_{ab}}{n}\E \Tr(\Delta^{ab}G_{12}A) \Tr(A^\ast\underline{ G_2^\ast \Delta^{ba} G_{12}^\ast W}) \\ &\quad - \sum_{ab} \frac{R'_{ab}}{2n^{3/2}} \E \Tr(\Delta^{ab}G_1\Delta^{ba}G_{12}A) \Tr(A^\ast G_{12}^\ast \Delta^{ba}) +\cdots \end{split} \end{equation} where, for illustration, we only kept two of the three Gaussian terms (the last being when \(W\) acts on \(G_1^\ast\)) and one non-Gaussian term. For the non-Gaussian term we set \(R'_{ab}:=n^{3/2}\kappa(ab,ba,ba)\), \(\abs{R'_{ab}}\lesssim 1\). Note that in the case of i.i.d.\ matrices with \(\sqrt{n}x_{ab}\stackrel{\mathsf{d}}{=}x\), we have \(R'_{ab}=\kappa(x,\overline{x},\overline{x})\) for \(a\le n,b>n\) and \(R'_{ab}=\kappa(x,x,\overline{x})=\overline{\kappa(x,\overline{x},\overline{x})}\) for \(a>n,b\le n\). For our argument it is of no importance whether matrices representing cumulants of degree at least three, like \(R'\), are block-constant. It is important, however, that the variance \(\kappa(ab,ba)\) represented by \(R\) is block-constant since later we will perform certain resummations.
For the second term on the rhs.\ of~\eqref{eq graph ex a2} we then obtain by another cumulant expansion that \begin{equation}\label{eq graph ex a} \begin{split} &\sum_{ab}\frac{R_{ab}}{n}\E \Tr(\Delta^{ab}G_{12}A) \Tr(A^\ast\underline{ G_2^\ast \Delta^{ba} G_{12}^\ast W})\\ &= - \sum_{ab}\sum_{cd}\frac{R_{ab}R_{cd}}{n^2}\E (G_{12}\Delta^{dc}G_2A)_{ba} \Tr(A^\ast G_2^\ast \Delta^{ba}G_{12}^\ast \Delta^{cd})+\cdots\\ & \;\;- \sum_{ab}\sum_{cd} \frac{R_{ab}R'_{cd}}{2!n^{5/2}} \E (G_{12}\Delta^{dc}G_2A)_{ba} \Tr (A^\ast G_2^\ast\Delta^{ba}G_{12}^\ast\Delta^{dc}G_{1}^\ast\Delta^{cd}), \end{split} \end{equation} where we kept one of the two Gaussian terms and one third order term. After writing out the traces,~\eqref{eq graph ex a2}--\eqref{eq graph ex a} become \begin{equation} \begin{split} &\sum_{ab}\frac{R_{ab}}{n} \E (G_{12}A)_{ba} (A^\ast G_{12}^\ast)_{ab}+\cdots \\ &- \sum_{ab} \frac{R'_{ab}}{n^{3/2}} \E (G_1)_{bb} (G_{12}A)_{aa} (A^\ast G_{12}^\ast)_{ab} \\ & + \sum_{ab}\sum_{cd} \frac{R_{ab}R_{cd}}{n^2} \E (G_{12})_{bd} (G_2A)_{ca} (A^\ast G_2^\ast)_{d b} (G_{12}^\ast)_{ac}\\ & + \sum_{ab}\sum_{cd}\frac{R_{ab}R'_{cd}}{2!n^{5/2}} \E (G_{12})_{bd} (G_{2}A)_{ca} (G_1^\ast)_{cc} (A^\ast G_2^\ast)_{db} (G_{12}^\ast)_{ad}. \end{split} \end{equation} \end{subequations} If \(X\) is real, then in~\eqref{eq graph ex} some additional terms appear since \(\kappa(ab,ab)=\kappa(ab,ba)\) in the real case, while \(\kappa(ab,ab)=0\) in the complex case. In the first equality of~\eqref{eq graph ex} this results in additional terms like \begin{equation}\label{eq real additional terms} \begin{split} \sum_{ab}\frac{R_{ab}}{n} \E \Bigl(& - \Tr(\Delta^{ab}G_1 \Delta^{ab}G_{12}A)\Tr(A^\ast \underline{G_{12}^\ast W}) \\ &+ \Tr( \Delta^{ab}G_{12}A)\Tr(A^\ast G_{12}^\ast \Delta^{ab}) \\ &- \Tr( \Delta^{ab}G_{12}A)\Tr(A^\ast \underline{G_2^\ast \Delta^{ab} G_{12}^\ast W}) +\dots \Bigr). 
\end{split} \end{equation} Out of the three terms in~\eqref{eq real additional terms}, however, only the first one is qualitatively different from the terms already considered in~\eqref{eq graph ex} since the other two are simply transpositions of already existing terms. After another expansion of the first term in~\eqref{eq real additional terms} we obtain terms like \begin{equation}\label{eq real additional terms 2} \begin{split} &\sum_{ab}\frac{R_{ab}}{n} (G_{12}A)_{ba}(A^\ast G_{12}^\ast)_{ba} +\cdots \\ & + \sum_{ab}\sum_{cd}\frac{R_{ab}R_{cd}}{n^2} (G_1)_{ba}(G_{12}A)_{ba} (A^\ast G_2^\ast)_{dc} (G_{12}^\ast)_{dc}\\ & + \sum_{ab}\sum_{cd}\frac{R_{ab}R_{cd}'}{2!n^{5/2}} (G_{12})_{bc}(G_2A)_{da} (A^\ast G_2^\ast)_{da} (G_{12}^\ast)_{bd} (G_2^\ast)_{cc} \end{split} \end{equation} specific to the real case. Now we explain how to encode~\eqref{eq graph ex} in the graphical formalism~\eqref{eq graph expansion}. The summation labels \(a_i,b_i\) correspond to vertices, while matrix entries correspond to edges between respective labelled vertices. We distinguish between the cumulant- or \(\kappa\)-edges \(E_\kappa\), like \(R,R'\) and \(G\)-edges \(E_G\), like \((A^\ast G_2^\ast)_{db}\) or \((G_{12}^\ast)_{ab}\), but do not graphically distinguish between \(G_1,G_{12}\), \(A^\ast G_2^\ast\), etc. 
The four terms from the rhs.\ of~\eqref{eq graph ex} would thus be represented as \begin{equation}\label{graph example} \ssGraph{a1[label=\(a\)] --[g,gray,dotted] b1[label=\(b\)]; a1 --[g,bl] b1; b1 --[g,bl] a1;},\quad \ssGraph{a1[label=\(a\)] --[g,gray,dotted] b1[label=\(b\)]; a1 --[g,bl] b1; a1 --[g,gll] a1; b1 --[g,glr] b1;}, \quad \sGraph{a1[label=\(a\)] --[gray, dotted] b1[label=left:\(b\)]; a2[label=above:\(c\)] --[gray, dotted] b2[label=left:\(d\)]; b1 --[bl] b2; a2 --[bl] a1; b2 --[bl] b1; a1 --[bl] a2; }\quad \text{and}\quad\sGraph{a1[label=above:\(a\)] --[gray, dotted] b1[label=left:\(b\)];b1 --[br] b2; a2[label=left:\(c\)] --[gray,dotted] b2[label=above:\(d\)]; a2 --[] a1; a2 --[glt] a2; b2 --[br] b1; a1 --[] b2; }, \end{equation} where the edges from \(E_G\) are solid and those from \(E_\kappa\) dotted. Similarly, the three examples from~\eqref{eq real additional terms 2} would be represented as \begin{equation}\label{eq real graph ex} \ssGraph{a1[label=\(a\)] --[g,gray,dotted] b1[label=\(b\)]; b1 --[g,br] a1; b1 --[g,bl] a1;},\quad \sGraph{a1[label=above:\(a\)] --[gray, dotted] b1[label=right:\(b\)]; a2[label=below:\(c\)] --[gray, dotted] b2[label=left:\(d\)]; b1 --[g,bl] a1; b1 --[g,br] a1; b2 --[g,bl] a2; b2 --[g,br] a2; }\quad \text{and}\quad \sGraph{a1[label=below:\(a\)] --[gray, dotted] b1[label=left:\(b\)]; a2[label=above:\(c\)] --[gray, dotted] b2[label=right:\(d\)]; b1 --[g] a2; b2 --[g,bl] a1; b2 --[g,br] a1; b1 --[g] b2; a2 --[glr] a2; }. \end{equation} It is not hard to see that after iteratively performing cumulant expansions up to order \(4p\) for each remaining \(W\) we obtain a finite collection of polynomial expressions in \(R\) and \(G\) which correspond to graphs \(\Gamma\) from a certain set \(\mathrm{Graphs}(p)\) with the following properties. We consider a directed graph \(\Gamma = (V, E_\kappa\cup E_G)\) with an even number \(\abs{V}=2k\) of vertices, where \(k\) is the number of cumulant expansions along the iteration. 
The edge set is partitioned into two types of disjoint edges: the elements of \(E_\kappa\) are called \emph{cumulant edges} and the elements of \(E_G\) are called \emph{\(G\)-edges}. For \(u\in V\) we define the \(G\)-degree of \(u\) as \[\begin{split} d_G(u):={}& d_G^\mathrm{out}(u) +d_G^\mathrm{in} (u),\\ d_G^\mathrm{out}(u):={}& \abs{\set{v\in V \given (uv)\in E_G}},\quad d_G^\mathrm{in}(u):=\abs{\set{v\in V \given (vu)\in E_G}}. \end{split}\] We now record some structural attributes. \begin{enumerate}[label=(A\arabic*)] \item\label{perfect matching} The graph \((V,E_\kappa)\) is a perfect matching, and in particular \(\abs{V}=2\abs{E_\kappa}\). For convenience we label the vertices by \(u_1,\dots,u_k,v_1,\dots,v_k\) with cumulant edges \((u_1v_1),\dots,(u_k v_k)\). The ordering of the elements of \(E_\kappa\) indicated by \(1,\dots,k\) is arbitrary and irrelevant. \item\label{number of kappa edges} The number of \(\kappa\)-edges is bounded by \(\abs{E_\kappa}\le 2p\), and therefore \(\abs{V}\le 4p\). \item\label{degree equal} For each \((u_i v_i)\in E_\kappa\), the \emph{\(G\)-degree} of both vertices agrees, i.e.\ \(d_G(u_i)=d_G(v_i)=: d_G(i)\). Furthermore, the \(G\)-degree satisfies \(2\le d_G(i)\le 4p\). Note that loops \((uu)\) contribute a value of \(2\) to the degree. \item\label{no loops} If \(d_G(i)=2\), then no loops are adjacent to either \(u_i\) or \(v_i\). \item\label{number of G edges} We distinguish two types of \(G\)-edges \(E_G=E_G^1\cup E_G^2\) whose numbers are given by \[\abs{E_G^2}=2p, \quad \abs{E_G^1}=\sum_i d_G(i)-2p, \quad \abs{E_G}=\abs{E_G^1}+\abs{E_G^2}.\] \end{enumerate} Note that in the examples~\eqref{eq real graph ex} above we had \(\abs{E_\kappa}=1\) in the first and \(\abs{E_\kappa}=2\) in the other two cases. For the degrees we had \(d_G(1)=2\) in the first case, \(d_G(1)=d_G(2)=2\) in the second case, and \(d_G(1)=2, d_G(2)=3\) in the third case.
The number of \(G\)-edges involving \(G_{12}\) is \(2\) in all cases, while the number of remaining \(G\)-edges is \(0\), \(2\) and \(3\), respectively, in agreement with~\ref{number of G edges}. We now explain how we relate the graphs to the polynomial expressions they represent. \begin{enumerate}[label=(I\arabic*)] \item\label{vertex inter} Each vertex \(u\in V\) corresponds to a summation \(\sum_{a\in[2n]}\) with a label \(a\) assigned to the vertex \(u\). \item\label{edge inter} Each \(G\)-edge \((uv)\in E_G^1\) represents a matrix \(\mathcal{G}^{(uv)}=A_1 G_i A_2\) or \(\mathcal{G}^{(uv)}=A_1 G_i^\ast A_2\) for some norm-bounded deterministic matrices \(A_1,A_2\). Each \(G\)-edge \((uv)\in E_G^2\) represents a matrix \(\mathcal{G}^{(uv)}=A_1 G_{12} A_2\) or \(\mathcal{G}^{(uv)}=A_1 G_{12}^\ast A_2\) for norm bounded matrices \(A_1,A_2\). We denote the matrices \(\mathcal{G}^{(uv)}\) with a calligraphic ``G'' to avoid confusion with the ordinary resolvent matrix \(G\). \item\label{kappa edge rule} Each \(\kappa\)-edge \((uv)\) represents the matrix \[ R_{ab}^{(uv)} = \kappa(\underbrace{\sqrt{n}w_{ab},\dots,\sqrt{n}w_{ab}}_{d_G^\mathrm{in}(u)}, \underbrace{\sqrt{n}\overline{w_{ab}},\dots,\sqrt{n}\overline{w_{ab}}}_{d_G^\mathrm{out}(u)} ),\] where \(d_G^\mathrm{in}(u)=d_G^\mathrm{out}(v)\) and \(d_G^\mathrm{out}(u)=d_G^\mathrm{in}(v)\) are the in- and out degrees of \(u,v\). \item\label{value inter} Given a graph \(\Gamma\) we define its value\footnote{In~\cite{MR4134946} we defined the value with an expectation so that~\eqref{eq graph expansion} holds without expectation. 
In the present paper we follow the convention of~\cite{MR3941370} and consider the value as a random variable.} as \begin{equation}\label{Val Gamma def} \Val(\Gamma) := n^{-2p} \prod_{(u_i v_i)\in E_\kappa} \biggl(\sum_{a_i,b_i\in[2n]} n^{-d_G(i)/2}R^{(u_i v_i)}_{a_i b_i}\biggr) \prod_{(u_i v_i)\in E_G} \mathcal{G}^{(u_i v_i)}_{a_i b_i},\end{equation} where \(R^{(u_i v_i )}\) is as in~\ref{kappa edge rule} and \(a_i,b_i\) are the summation indices associated with \(u_i,v_i\). \end{enumerate} \begin{proof}[Proof of~\eqref{eq graph expansion}] In order to prove~\eqref{eq graph expansion} we have to check that the graphs representing the polynomial expressions of the cumulant expansion up to order \(4p\) indeed have the attributes~\ref{perfect matching}--\ref{number of G edges}. Here~\ref{perfect matching}--\ref{degree equal} follow directly from the construction, with the lower bound \(d_G(i)\ge 2\) being a consequence of \(\E w_{ab}=0\) and the upper bound \(d_G(i)\le 4p\) being a consequence of the fact that we trivially truncate the expansion after the \(4p\)-th cumulant. The error terms from the truncation are estimated trivially using~\eqref{eq G trivial bound}. The fact~\ref{no loops} that no \(G\)-loops may be adjacent to degree two \(\kappa\)-edges follows since due to the self-renormalisation \(\underline{WG_{12}}\) the second cumulant of \(W\) can only act on some \(W\) or \(G\) in another trace, or if it acts on some \(G\) in its own trace then it generates a \(\kappa(ab,ab)\) factor (only possible when \(X\) is real). In the latter case one of the two vertices has two outgoing, and the other one two incoming \(G\)-edges, and in particular no loops are adjacent to either of them. The counting of \(G_{12}\)-edges in \(E_G^2\) in~\ref{number of G edges} is trivial since along the procedure no \(G_{12}\)-edges can be created or removed. 
For the counting of \(G_i\) edges in \(E_G^1\) note that the action of the \(k\)-th order cumulant in the expansion of \(\underline{WG_{12}}\) may remove \(k_1\) \(W\)'s and may create additional \(k_2\) \(G_i\)'s with \(k=k_1+k_2\), \(k_1\ge 1\). Therefore, since the number of \(G_i\) edges is initially \(0\), and the number of \(W\)'s is reduced from \(2p\) to \(0\), the second equality in~\ref{number of G edges} follows. It now remains to check that with the interpretations~\ref{vertex inter}--\ref{value inter} the values of the constructed graphs are consistent in the sense of~\eqref{eq graph expansion}. The constant \(c(\Gamma)\sim 1\) accounts for combinatorial factors in the iterated cumulant expansions and the multiplicity of identical graphs. The factor \(n^{-2p}\) in~\ref{value inter} comes from the \(2p\) normalised traces. The relation~\ref{kappa edge rule} follows from the fact that the \(k\)-th order cumulant of \(k_1\) copies of \(w_{ab}\) and \(k_2\) copies of \(\overline{w_{ab}}=w_{ba}\) comes together with \(k_1\) copies of \(\Delta^{ab}\) and \(k_2\) copies of \(\Delta^{ba}\). Thus \(a\) is the first index of some \(G\) a total of \(k_2\) times, while the remaining \(k_1\) times the first index is \(b\), and for the second indices the roles are reversed. \end{proof} Having established the properties of the graphs and the formula~\eqref{eq graph expansion}, we now estimate the value of any individual graph. \subsubsection*{Naive estimate} We first introduce the so-called \emph{naive estimate}, \(\NEst(\Gamma)\), of a graph \(\Gamma\) as the bound on its value obtained by estimating the factors in~\eqref{Val Gamma def} as \(\abs{\mathcal{G}^e_{ab}}\prec 1\) for \(e\in E_G^1\), \(\abs{\mathcal{G}^e_{ab}}\prec (\abs{\eta_1}\abs{\eta_2})^{-1/2}\) for \(e\in E_G^2\) and \(\abs{R^{e}_{ab}}\lesssim 1\), and by estimating each summation by its size.
Thus, we obtain \begin{equation}\label{eq Gamma naive est} \begin{split} \Val(\Gamma)\prec \NEst(\Gamma):&=\frac{1}{n^{2p}\abs{\eta_1}^p\abs{\eta_2}^{p}} \prod_i \Bigl(n^{2-d_G(i)/2}\Bigr) \\ &\le \frac{n^{\abs{E_\kappa^2}} n^{\abs{E_\kappa^3}/2}}{n^{2p}\abs{\eta_1}^p\abs{\eta_2}^{p}} \le \frac{1}{\abs{\eta_1}^p\abs{\eta_2}^{p}}, \end{split} \end{equation} where \[E_\kappa^j:=\set{(u_i,v_i)\given d_G(i)=j}\] is the set of degree \(j\) \(\kappa\)-edges, and in the last inequality we used \(\abs{E_\kappa^2}+\abs{E_\kappa^3}\le \abs{E_\kappa}\le 2p\). \subsubsection*{Ward estimate} The first improvement over the naive estimate comes from the effect that sums of resolvent entries are typically smaller than the individual entries times the summation size. This effect can easily be seen from the \emph{Ward} or resolvent identity \(G^\ast G=\Im G/\eta=(G-G^\ast)/(2\mathrm{i}\eta)\). Indeed, the naive estimate of \(\sum_a G_{ab}\) is \(n\) using \(\abs{G_{ab}}\prec 1\). However, using the Ward identity we can improve this to \[ \biggl\lvert\sum_a G_{ab}\biggr\rvert \le \sqrt{2n} \sqrt{\sum_{a}\abs{G_{ab}}^2} = \sqrt{2n} \sqrt{ (G^\ast G)_{bb} } = \sqrt{\frac{2n}{\eta}} \sqrt{(\Im G)_{bb}} \prec n \frac{1}{\sqrt{n\eta}}, \] i.e.\ by a factor of \((n\eta)^{-1/2}\). Similarly, we can gain two such factors if the summation index \(a\) appears in two \(G\)-factors off-diagonally, i.e.\ \[ \biggl\lvert\sum_a (G_1)_{ab} (G_2)_{ca}\biggr\rvert \le \sqrt{(G_1^\ast G_1)_{bb}}\sqrt{(G_2 G_2^\ast)_{cc}}\prec n\frac{1}{n\eta}. \] However, it is impossible to gain more than two such factors per summation. We note that we have the same gain also for summations of \(G_{12}\). For example, the naive estimate on \(\sum_{a}(G_{12})_{ab}\) is \(n\abs{\eta_1\eta_2}^{-1/2}\) since \(\abs{(G_{12})_{ab}}\prec\abs{\eta_1\eta_2}^{-1/2}\). 
Using the Ward identity, we obtain an improved bound of \[ \begin{split} \biggl\lvert\sum_a (G_{12})_{ab}\biggr\rvert &\le \sqrt{2n}\sqrt{(G_{12}^\ast G_{12})_{bb}} =\sqrt{\frac{2n}{\abs{\eta_1}}} \sqrt{(G_2^\ast B^\ast(\Im G_1)B G_2 )_{bb}} \\ &\lesssim \sqrt{\frac{n}{\abs{\eta_1}^2}} \sqrt{(G_2^\ast G_2 )_{bb}} \prec \frac{\sqrt{n}}{\abs{\eta_1}\abs{\eta_2}^{1/2}}\le\frac{n}{\abs{\eta_1\eta_2}^{1/2}}\frac{1}{\sqrt{n\eta_\ast}}, \end{split} \] where we recall \(\eta_\ast=\min\{\abs{\eta_1},\abs{\eta_2}\}\). Each of these improvements is associated with a specific \(G\)-edge with the restriction that one cannot gain simultaneously from more than two edges adjacent to any given vertex \(u\in V\) while summing up the index \(a\) associated with \(u\). Note, however, that globally it is nevertheless possible to gain from arbitrarily many \(G\)-edges adjacent to any given vertex, as long as the summation order is chosen correctly. In order to count the number of edges giving rise to such improvements we recall a basic definition~\cite{MR266812} from graph theory. \begin{definition} For \(k\ge 1\) a graph \(\Gamma=(V,E)\) is called \emph{\(k\)-degenerate} if any induced subgraph has minimal degree at most \(k\). \end{definition} The relevance of this definition in the context of counting the number of gains of \((n\eta_\ast)^{-1/2}\) lies in the following equivalent characterisation~\cite{MR193025}. \begin{lemma}\label{lemma equiv coloring deg} A graph \(\Gamma=(V,E)\) is \(k\)-degenerate if and only if there exists an ordering of vertices \(\{v_1,\dots,v_n\}=V\) such that for each \(m\in[n]\) it holds that \begin{equation}\deg_{\Gamma[\{v_1,\dots,v_m\}]}(v_m)\le k \label{vertex ordering}\end{equation} where for \(V'\subset V\), \(\Gamma[V']\) denotes the induced subgraph on the vertex set \(V'\). \end{lemma} We consider a subset of non-loop edges \(E_\mathrm{Ward}\subset E_G\setminus\set{(vv)\given v\in V}\) for which Ward improvements will be obtained.
We claim that if \(\Gamma_\mathrm{Ward}=(V,E_\mathrm{Ward})\) is \(2\)-degenerate, then we may gain a factor of \((n\eta_\ast)^{-1/2}\) from each edge in \(E_\mathrm{Ward}\). Indeed, take the ordering \(\{v_1,\dots,v_{2\abs{E_\kappa}}\}\) guaranteed to exist in Lemma~\ref{lemma equiv coloring deg} and first sum up the index \(a_1\) associated with \(v_1\). Since \(\Gamma_\mathrm{Ward}\) is \(2\)-degenerate there are at most two edges from \(E_\mathrm{Ward}\) adjacent to \(v_1\) and we can gain a factor of \((n\eta_\ast)^{-1/2}\) for each of them. Next, we can sum up the index associated with vertex \(v_2\) and again gain the same factor for each edge in \(E_\mathrm{Ward}\) adjacent to \(v_2\). Continuing this way we see that in total we can gain a factor of \((n\eta_\ast)^{-\abs{E_\mathrm{Ward}}/2}\) over the naive bound~\eqref{eq Gamma naive est}. \begin{definition}[Ward estimate]\label{def ward est} For a graph \(\Gamma\) with fixed subset \(E_\mathrm{Ward}\subset E_G\) of edges we define \[\WEst(\Gamma):=\frac{\NEst(\Gamma)}{(n\eta_\ast)^{\abs{E_\mathrm{Ward}}/2}}.\] \end{definition} By considering only \(G\)-edges adjacent to \(\kappa\)-edges of degrees \(2\) and \(3\) it is possible to find such a \(2\)-degenerate set with \[\abs{E_\mathrm{Ward}} = \sum_{i} (4-d_G(i))_+\] elements, cf.~\cite[Lemma 4.7]{MR4134946}. 
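The ordering of Lemma~\ref{lemma equiv coloring deg}, and hence the summation order used above, can be computed by the standard greedy peeling construction. The following sketch is a toy illustration (vertex and edge data are made up); it repeatedly removes a minimum-degree vertex and returns the reversed removal order, for which each vertex has at most \(k\) earlier neighbours.

```python
def degeneracy_ordering(vertices, edges):
    """Greedy peel: repeatedly remove a minimum-degree vertex.  Reversing the
    removal order gives v_1, ..., v_m such that each v_m has at most k
    neighbours among v_1, ..., v_{m-1}, where k is the degeneracy."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    peel, k = [], 0
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))  # a minimum-degree vertex
        k = max(k, len(adj[v]))                  # largest degree at removal
        peel.append(v)
        for u in adj.pop(v):
            adj[u].discard(v)
    return peel[::-1], k

# A 4-cycle plus a hub attached to three cycle vertices: 2-degenerate
order, k = degeneracy_ordering(
    [1, 2, 3, 4, 5],
    [(1, 2), (2, 3), (3, 4), (4, 1), (5, 1), (5, 2), (5, 3)])
assert k == 2
```

Summing the vertex indices in this order is exactly the strategy in the iterative argument above: at the moment a vertex is summed, at most \(k\) of its \(E_\mathrm{Ward}\)-edges are still available for a Ward gain.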
As a consequence, as compared with the first inequality in~\eqref{eq Gamma naive est}, we obtain an improved bound \begin{gather} \begin{aligned} \Val(\Gamma) &\prec \WEst(\Gamma)={} \frac{1}{n^{2p}\abs{\eta_1\eta_2}^p} (n\eta_\ast)^{-\abs{E_\mathrm{Ward}}/2}\prod_i\Bigl(n^{2-d_G(i)/2}\Bigr) \\ &={}\frac{1}{n^{2p}\abs{\eta_1\eta_2}^p} \prod_{d_G(i)=2}\Bigl(\frac{n}{n\eta_\ast}\Bigr) \prod_{d_G(i)=3}\Bigl(\frac{\sqrt{n}}{\sqrt{n\eta_\ast}}\Bigr) \prod_{d_G(i)\ge 4}\Bigl(n^{2-d_G(i)/2}\Bigr) \\ &\lesssim{} \frac{1}{(n\eta_\ast)^{2p}\abs{\eta_1\eta_2}^p} \eta_\ast^{2p+\sum_i(d_G(i)/2-2)} \lesssim \frac{1}{(n\eta_\ast)^{2p}\abs{\eta_1\eta_2}^p}, \end{aligned}\label{ward improvement}\raisetag{-5em} \end{gather} where in the penultimate inequality we used \(n^{-1}\le \eta_\ast\), and in the ultimate inequality that \(d_G(i)\ge 2\) and \(\abs{E_\kappa}\le 2p\) which implies that the exponent of \(\eta_\ast\) is non-negative and \(\eta_\ast\lesssim 1\). Thus we gained a factor of \((n\eta_\ast)^{-2p}\) over the naive estimate~\eqref{eq Gamma naive est}. \subsubsection*{Resummation improvements} The bound~\eqref{ward improvement} is optimal if \(z_1=z_2\) and if \(\eta_1,\eta_2\) have opposite signs. In the general case \(z_1\ne z_2\) we have to use two additional improvements which both rely on the fact that the summations \(\sum_{a_i,b_i}\) corresponding to \((u_i,v_i)\in E_\kappa^2\) can be written as matrix products since \(d_G(u_i)=d_G(v_i)=2\). 
Therefore we can sum up the \(G\)-edges adjacent to \((u_i v_i)\) as \begin{subequations}\label{eq summing up deg 2} \begin{equation}\label{eq summing up deg 2a} \begin{split} &\sum_{a_i b_i} G_{xa_i}G_{a_i y} G_{zb_i}G_{b_i w} R_{a_i b_i} \\ &\quad=\sum_{a_i b_i} G_{xa_i}G_{a_i y} G_{zb_i}G_{b_i w} \Bigl[\bm 1(a_i>n,b_i\le n)+\bm 1(a_i\le n,b_i>n)\Bigr]\\ &\quad =(G E_1 G)_{xy} (GE_2 G)_{zw} + (G E_2 G)_{xy} (GE_1 G)_{zw}, \end{split} \end{equation} where \(E_1\), \(E_2\) are defined in~\eqref{E1 E2 def}, in the case of four involved \(G\)'s and \(d_G^\mathrm{in}=d_G^\mathrm{out}=1\). If one vertex has two incoming, and the other two outgoing edges (which is only possible if \(X\) is real), then we similarly can sum up \begin{equation} \sum_{a b} G_{x a}G_{ya} G_{b z}G_{b w} R_{a b} = (GE_1G^t)_{xy} (G^t E_2 G)_{zw}+(GE_2G^t)_{xy} (G^t E_1 G)_{zw},\end{equation} so merely some \(G\) is replaced by its transpose \(G^t\) compared to~\eqref{eq summing up deg 2a} which will not change any estimate. In the remaining cases with two and three involved \(G\)'s we similarly have \begin{equation} \begin{split} \sum_{a b} G_{b a} G_{a b}R_{a b}& = \Tr G E_1 G E_2 + \Tr G E_2 GE_1\\ \sum_{a b} G_{x a} G_{a b} G_{b y} R_{a b}& = (G E_1 G E_2 G)_{xy} + (G E_2 G E_1 G)_{xy}. \end{split} \end{equation} \end{subequations} By carrying out all available \emph{partial summations} at degree-\(2\) vertices as in~\eqref{eq summing up deg 2} for the value \( \Val(\Gamma)\) of some graph \(\Gamma\) we obtain a collection of \emph{reduced graphs}, in which cycles of \(G\)'s are contracted to the trace of their matrix product, and chains of \(G\)'s are contracted to single edges, also representing the matrix products with two \emph{external} indices. 
We denote generic cycle-subgraphs of \(k\) edges from \(E_G\) with vertices of degree two by \(\Gamma^\circ_k\), and generic chain-subgraphs of \(k\) edges from \(E_G\) with \emph{internal} vertices of degree two and external vertices of degree at least three by \(\Gamma^-_k\). With a slight abuse of notation we denote the \emph{value} of \(\Gamma_k^\circ\) by \(\Tr\Gamma_k^\circ\), and the \emph{value} of \(\Gamma_k^{-}\) with external indices \((a,b)\) by \((\Gamma_k^-)_{ab}\), where for a fixed choice of \(E_1,E_2\) in~\eqref{eq summing up deg 2} the internal indices are summed up. The actual choice of \(E_1, E_2\) is irrelevant for our analysis, hence we will omit it from the notation. The concept of the \emph{naive} and \emph{Ward} estimates of any graph \(\Gamma\) carry over naturally to these chain and cycle-subgraphs by setting \begin{equation}\label{chain cycle ests} \begin{split} \NEst(\Gamma_k^\circ):={}&\frac{n^k}{\abs{\eta_1\eta_2}^{\abs{E_G^2(\Gamma_k^\circ)}/2}}, \quad \NEst(\Gamma_k^-):=\frac{n^{k-1}}{\abs{\eta_1\eta_2}^{\abs{E_G^2(\Gamma_k^-)}/2}}, \\ \WEst(\Gamma_k^{\circ/-})={}&\frac{\NEst(\Gamma_k^{\circ/-})}{(n\eta_\ast)^{\abs{E_\mathrm{Ward}(\Gamma_k^{\circ/-})}/2}}, \,\,\, E_\mathrm{Ward}(\Gamma_k^{\circ/-})=E_G(\Gamma_k^{\circ/-})\cap E_\mathrm{Ward}(\Gamma). 
\end{split} \end{equation} After contracting the chain- and cycle-subgraphs we obtain \(2^{\abs{E_\kappa^2}}\) reduced graphs \(\Gamma_\mathrm{red}\) on the vertex set \[ V(\Gamma_\mathrm{red}):= \set{v\in V(\Gamma) \given d_G(v)\ge 3}\] with \(\kappa\)-edges \[ E_\kappa(\Gamma_\mathrm{red}):= E_\kappa^{\ge 3}(\Gamma) \] and \(G\)-edges \[E_G(\Gamma_\mathrm{red}) := \set{(uv)\in E_G(\Gamma)\given \min\{d_G(u),d_G(v)\}\ge 3} \cup E_G^\mathrm{chain}(\Gamma_\mathrm{red}),\] with additional \emph{chain-edges} \[ \begin{split} E_G^\mathrm{chain}(\Gamma_\mathrm{red}):&= \set*{(u_1u_{k+1})\given \parbox{18em}{\(k\ge 2\), \(u_1,u_{k+1}\in V(\Gamma_\mathrm{red})\), \(\exists \Gamma_k^-\subset \Gamma\), \(V(\Gamma_k^-)=(u_1,\dots, u_{k+1})\) }}. \end{split} \] The additional chain edges \((u_1u_{k+1})\in E_G^\mathrm{chain}\) naturally represent the matrices \[\mathcal{G}^{(u_1u_{k+1})} := \bigl((\Gamma_k^-)_{ab}\bigr)_{a,b\in[2n]}\] whose entries are the values of the chain-subgraphs. Note that due to the presence of \(E_1,E_2\) in~\eqref{eq summing up deg 2} the matrices associated with some \(G\)-edges can be multiplied by \(E_1,E_2\). However, since in the definition~\ref{edge inter} of \(G\)-edges the multiplication with generic bounded deterministic matrices is implicitly allowed, this additional multiplication will not be visible in the notation. Note that the reduced graphs contain only vertices of at least degree three, and only \(\kappa\)-edges from \(E_\kappa^{\ge 3}\). 
The definition of value, naive estimate and Ward estimate naturally extend to the reduced graphs and we have \begin{equation} \label{eq reduced graph cal} \Val(\Gamma)= \sum \Val(\Gamma_\mathrm{red}) \prod_{\Gamma_k^\circ\subset\Gamma} \Tr\Gamma_k^\circ \end{equation} and \begin{equation} \begin{split} \NEst(\Gamma) &= \NEst(\Gamma_\mathrm{red}) \prod_{\Gamma_k^\circ\subset\Gamma} \NEst(\Gamma_k^\circ), \\ \WEst(\Gamma)&= \WEst(\Gamma_\mathrm{red}) \prod_{\Gamma_k^\circ\subset\Gamma} \WEst(\Gamma_k^\circ). \end{split} \end{equation} The irrelevant summation in~\eqref{eq reduced graph cal} of size \(2^{\abs{E_\kappa^2}}\) is due to the sums in~\eqref{eq summing up deg 2}. Let us revisit the examples~\eqref{graph example} to illustrate the summation procedure. The first two graphs in~\eqref{graph example} only have degree-\(2\) vertices, so that the reduced graphs are empty with value \(n^{-2p}=n^{-2}\), hence \[ \Val(\Gamma)=\frac{1}{n^2}\sum \Tr \Gamma_2^\circ \qquad \Val(\Gamma)=\frac{1}{n^2}\sum (\Tr\Gamma_2^\circ) (\Tr \Gamma_2^\circ),\] where the summation is over two and, respectively, four terms. The third graph in~\eqref{graph example} results in no traces but in four reduced graphs \[\Val(\Gamma) = \sum \Val(\hspace{-1em}\sGraph{a --[gray,dotted] b; a --[glt] a; b --[glt,double] b; a --[bl,double] b;}\hspace{-1em}),\] where for convenience we highlighted the chain-edges \(E_G^\mathrm{chain}\) representing \(\Gamma_k^-\) by double lines (note that the two endpoints of a chain edge may coincide, but it is not interpreted as a cycle graph since this common vertex has degree more than two, so it is not summed up into a trace along the reduction process). 
Finally, to illustrate the reduction for a more complicated graph, we have \[ \Val\left(\sGraph{ a1[label=left:\(a_1\)]--[gray,dotted]b1[label=left:\(b_1\)]; a2[label=right:\(a_2\)]--[gray,dotted]b2[label=right:\(b_2\)]; a3[label=below:\(a_3\)]--[gray,dotted]b3[label=left:\(b_3\)]; a3--a2--a1--[bl]b2--a1; a3--[glr] a3 --[bl] b3 --[glt] b3; b1 --[glb] b1 --[] b3; a4[label=right:\(a_4\)] --[gray,dotted] b4[label=left:\(b_4\)] --[bl] a4 --[bl] b4; }\right)= \sum (\Tr \Gamma_2^-) \Val\left(\sGraph{ a1[label=left:\(a_1\)]--[gray,dotted]b1[label=left:\(b_1\)]; a3[label=right:\(a_3\)]--[gray,dotted] b3[label=right:\(b_3\)]; a3--[double] a1--[glt,double]a1; a3--[glt] a3 --[bl] b3 --[glt] b3; b1 --[glr] b1 --[] b3;}\right) \] where we labelled the vertices for convenience, and the summation on the rhs.\ is over four assignments of \(E_1,E_2\). Since we have already established a bound on \(\Val(\Gamma)\prec\WEst(\Gamma)\) we only have to identify the additional gain from the resummation compared to the \emph{Ward-estimate}~\eqref{ward improvement}. We will need to exploit two additional effects: \begin{enumerate}[label=(\roman*)] \item\label{suboptimal Ward} The Ward-estimate is sub-optimal whenever, after resummation, we have some contracted cycle \(\Tr \Gamma_k^\circ\) or a reduced graph with a chain-edge \(\Gamma_k^-\) with \(k\ge 3\). \item\label{G12 Ward} When estimating \(\Tr \Gamma_k^\circ\), \(k\ge2\) with \(\Gamma_k^\circ\) containing some \(G_{12}\), then also the improved bound from~\ref{suboptimal Ward} is sub-optimal and there is an additional gain from using the a priori bound \(\abs{\braket{G_{12}A}}\prec \theta \). \end{enumerate} We now make the additional gains~\ref{suboptimal Ward}--\ref{G12 Ward} precise. \begin{lemma}\label{gain lemma} For \(k\ge 2\) let \(\Gamma_k^\circ\) and \(\Gamma_k^-\) be some cycle and chain subgraphs. 
\begin{enumerate}[label=(\roman*)] \begin{subequations} \item We have \begin{equation}\label{long Gk gain} \abs{\Tr \Gamma_k^\circ} \prec (n\eta_\ast)^{-(k-2)/2} \WEst(\Gamma_k^\circ) \end{equation} and for all \(a,b\) \begin{equation}\label{long Gk iso gain} \abs{(\Gamma_k^-)_{ab}}\prec (n\eta_\ast)^{-(k-2)/2} \WEst(\Gamma_k^-). \end{equation} \item If \(\Gamma_k^\circ\) contains at least one \(G_{12}\) then we have a further improvement of \((\eta_\ast\theta)^{1/2}\), i.e. \begin{equation}\label{long Gk gain theta} \abs{\Tr \Gamma_k^\circ} \prec \sqrt{\eta_\ast\theta} (n\eta_\ast)^{-(k-2)/2} \WEst(\Gamma_k^\circ), \end{equation} \end{subequations} where \(\theta\) is as in~\eqref{eq a priori theta}. \end{enumerate} \end{lemma} The proof of Lemma~\ref{gain lemma} follows from the following optimal bound on general products \(G_{j_1\dots j_k}\) of resolvents and generic deterministic matrices. \begin{lemma}\label{lemma general products} Let \(w_1,w_2,\dots\), \(z_1,z_2,\dots\) denote arbitrary spectral parameters with \(\eta_i=\Im w_i>0\). With \(G_j=G^{z_j}(w_{j})\) we then denote generic products of resolvents \(G_{j_1},\dots G_{j_k}\) or their adjoints/transpositions (in that order) with arbitrary bounded deterministic matrices in between by \(G_{j_1\dots j_k}\), e.g.\ \(G_{1i1}=A_1G_1A_2G_i A_3G_1A_4\). \begin{enumerate}[label=(\roman*)] \begin{subequations} \item For \(j_1,\dots j_k\) we have the isotropic bound \begin{equation}\label{eq general iso bound} \abs{\braket{\bm{x},G_{j_1\dots j_k}\bm{y}}} \prec \norm{\bm{x}}\norm{\bm{y}}\sqrt{\eta_{j_1}\eta_{j_k}}\Bigl(\prod_{n=1}^k \eta_{j_n}\Bigr)^{-1}. \end{equation} \item For \(j_1,\dots,j_k\) and any \(1\le s< t\le k\) we have the averaged bound \begin{equation}\label{eq general av bound} \abs{\braket{G_{j_1\dots j_k}}} \prec \sqrt{\eta_{j_{s}}\eta_{j_{t}}}\Bigl(\prod_{n=1}^k \eta_{j_n}\Bigr)^{-1}. 
\end{equation} \end{subequations} \end{enumerate} \end{lemma} Lemma~\ref{lemma general products} for example implies \(\abs{(G_{1i})_{ab}}\prec (\eta_1\eta_i)^{-1/2}\) or \(\abs{(G_{i1i})_{ab}}\prec (\eta_1\eta_i)^{-1}\). Note that the averaged bound~\eqref{eq general av bound} can be applied more flexibly by choosing \(s,t\) freely, e.g. \[\abs{\braket{G_{1i1i}}}\prec \min\{\eta_1^{-1}\eta_i^{-2},\eta_1^{-2}\eta_i^{-1}\},\] while \(\abs{\braket{\bm{x},G_{1i1i}\bm{y}}}\prec \norm{\bm{x}}\norm{\bm{y}} (\eta_1\eta_i)^{-3/2}\). \begin{proof}[Proof of Lemma~\ref{lemma general products}] We begin with \[ \begin{split} &\abs{\braket{\bm{x},G_{j_1\dots j_k}\bm{y}}} \\ &\quad\le \sqrt{\braket{\bm{x},G_{j_1}G_{j_1}^\ast \bm{x}}}\sqrt{\braket{\bm{y},G_{j_2 \dots j_k}^\ast G_{j_2 \dots j_k}\bm{y}}} \prec \frac{\norm{\bm{x}}}{\sqrt{\eta_{j_1}}} \sqrt{\braket{\bm{y},G_{j_2 \dots j_k}^\ast G_{j_2 \dots j_k}\bm{y}}}\\ & \quad\lesssim \frac{\norm{\bm{x}}}{\sqrt{\eta_{j_1}}} \frac{1}{\eta_{j_2}} \sqrt{\braket{\bm{y},G_{j_3 \dots j_k}^\ast G_{j_3 \dots j_k}\bm{y}}} \lesssim \dots \\ &\quad \lesssim \frac{\norm{\bm{x}}}{\sqrt{\eta_{j_1}}} \frac{1}{\eta_{j_2}\dots \eta_{j_{k-1}}} \sqrt{\braket{\bm{y},G_{j_k}^\ast G_{j_k}\bm{y}}} \prec \frac{\norm{\bm{x}}\norm{\bm{y}}}{\sqrt{\eta_{j_1}\eta_{j_k}}} \frac{1}{\eta_{j_2}\dots \eta_{j_{k-1}}}, \end{split} \] where in each step we estimated the middle \(G_{j_2}^\ast G_{j_2}, G_{j_3}^\ast G_{j_3},\dots\) terms trivially by \(1/\eta_{j_2}^2,1/\eta_{j_3}^2,\dots\), and in the last step we used Ward estimate. This proves~\eqref{eq general iso bound}. We now turn to~\eqref{eq general av bound} where by cyclicity without loss of generality we may assume \(s=1\). 
Thus \[\begin{split} \abs{\braket{G_{j_1\dots j_k}}} &\le \sqrt{\braket{G_{j_1\dots j_{t-1}}G_{j_1\dots j_{t-1}}^\ast}}\sqrt{\braket{ G_{j_{t}\dots j_{k}}^\ast G_{j_{t}\dots j_{k}}}} \\ &=\sqrt{\braket{G_{j_1\dots j_{t-1}}G_{j_1\dots j_{t-1}}^\ast}}\sqrt{\braket{ G_{j_{t}\dots j_{k}}G_{j_{t}\dots j_{k}}^\ast}} \\ &\lesssim \Bigl(\prod_{n\ne 1,t} \frac{1}{\eta_{j_n}}\Bigr) \sqrt{\braket{G_{j_1}G_{j_1}^\ast}}\sqrt{\braket{G_{j_{t}}G_{j_{t}}^\ast}} \prec \frac{1}{\sqrt{\eta_{j_{1}}\eta_{j_{t}}}}\Bigl(\prod_{n\ne 1,t} \frac{1}{\eta_{j_n}}\Bigr), \end{split}\] where in the second step we used cyclicity of the trace, in the third step the norm-estimate, and in the last step the Ward-estimate. \end{proof} \begin{proof}[Proof of Lemma~\ref{gain lemma}] For the proof of~\eqref{long Gk gain} we recall from the definition of the Ward-estimate in~\eqref{chain cycle ests} that for a cycle \(\Gamma_k^\circ\) we have \[ \WEst(\Gamma_k^\circ)\ge \frac{\NEst(\Gamma_k^\circ)}{(n\eta_\ast)^{k/2}} = \frac{n^{k/2}}{\abs{\eta_1\eta_2}^{\abs{E_G^2(\Gamma_k^\circ)}/2}} \frac{1}{\eta_\ast^{k/2}} \] since \(\abs{E_\mathrm{Ward}(\Gamma_k^\circ)}\le \abs{E_G(\Gamma_k^\circ)}\le k\). Thus, together with~\eqref{eq general av bound} and interpreting \(\Tr \Gamma_k^\circ\) as a trace of a product of \(k+\abs{E_G^2(\Gamma_k^\circ)}\) factors of \(G\)'s we conclude \begin{equation}\label{Tr Gamma circ}\abs{\Tr\Gamma_k^\circ} \prec \frac{n}{\abs{\eta_1\eta_2}^{\abs{E_G^2(\Gamma_k^\circ)}}\eta_\ast^{k-\abs{E_G^2(\Gamma_k^\circ)}-1}}\le \frac{n}{\abs{\eta_1\eta_2}^{\abs{E_G^2(\Gamma_k^\circ)}/2}\eta_\ast^{k-1}}\le \frac{\WEst(\Gamma_k^\circ)}{(n\eta_\ast)^{k/2-1}}.\end{equation} Note that Lemma~\ref{lemma general products} is applicable here even though therein (for convenience) it was assumed that all spectral parameters \(w_i\) have positive imaginary parts.
However, the lemma also applies to spectral parameters with negative imaginary parts since it allows for adjoints and \(G^z(\overline w)=(G^z(w))^\ast\). The first inequality in~\eqref{Tr Gamma circ} elementarily follows from~\eqref{eq general av bound} by distinguishing the cases \(\abs{E_G^2}=k,k-1\) or \(\le k-2\), and always choosing \(s\) and \(t\) such that the \(\sqrt{\eta_{j_s} \eta_{j_t}}\) factor contains the highest possible \(\eta_\ast\) power. Similarly to~\eqref{Tr Gamma circ}, for~\eqref{long Gk iso gain} we have, using~\eqref{eq general iso bound}, \begin{equation}\label{Tr Gamma chain} \abs{(\Gamma_k^-)_{ab}} \prec \frac{n^{k-1}}{\abs{\eta_1\eta_2}^{\abs{E_G^2(\Gamma_k^-)}/2}} \frac{1}{(n\eta_\ast)^{k/2}}\le \frac{\WEst(\Gamma_k^-)}{(n\eta_\ast)^{k/2-1}}. \end{equation} For the proof of~\eqref{long Gk gain theta} we use a Cauchy-Schwarz estimate to isolate a single \(G_{12}\) factor from the remaining \(G\)'s in \(\Gamma_k^\circ\). We may represent the ``square'' of all the remaining factors by an appropriate cycle graph \(\Gamma_{2(k-1)}^\circ\) of length \(2(k-1)\) with \(\abs{E_G^2(\Gamma_{2(k-1)}^\circ)}=2(\abs{E_G^2(\Gamma_k^\circ)}-1)\). We obtain \[ \begin{split} \abs{\Tr \Gamma_k^\circ} &\le \sqrt{\Tr(G_{12}G_{12}^\ast)}\sqrt{\abs{\Tr \Gamma_{2(k-1)}^\circ}} = \sqrt{\Tr G_1^\ast G_1 B G_2 G_2^\ast B^\ast }\sqrt{\abs{\Tr \Gamma_{2(k-1)}^\circ}} \\ &= \frac{ \sqrt{\Tr (\Im G_1)B(\Im G_2)B^\ast} \sqrt{\abs{\Tr \Gamma^\circ_{2(k-1)}}}}{\sqrt{\abs{\eta_1\eta_2}}} \\ &\prec \frac{\sqrt{\theta n}}{\sqrt{\abs{\eta_1\eta_2}}} \frac{\sqrt{n}}{\abs{\eta_1\eta_2}^{\abs{E_G^2(\Gamma_k^\circ)}/2-1/2 } \eta_\ast^{k-3/2} } \\ &\le \sqrt{\eta_\ast\theta} (n\eta_\ast)^{-(k-2)/2} \WEst(\Gamma_k^\circ) \end{split}\] where in the penultimate step we wrote out \(\Im G=(G-G^\ast)/(2\mathrm{i})\) in order to use~\eqref{eq a priori theta}, and used~\eqref{Tr Gamma circ} for \(\Gamma_{2(k-1)}^\circ\).
\end{proof} Now it remains to count the gains from applying Lemma~\ref{gain lemma} for each cycle- and chain subgraph of \(\Gamma\). We claim that \begin{subequations} \begin{equation}\label{eq ward-estimate gain} \WEst(\Gamma) \le \bigl(\eta_\ast^{1/6}\bigr)^{d_{\ge3}} \frac{1}{(n\eta_\ast)^{2p}\abs{\eta_1\eta_2}^{p}}, \qquad d_{\ge3}:= \sum_{d_G(i)\ge 3 } d_G(i). \end{equation} Furthermore, suppose that \(\Gamma\) has \(c\) degree-\(2\) cycles \(\Gamma_k^\circ\), so that by~\ref{degree equal} the number \(c':= \abs{E_\kappa^2}-c\) satisfies \(0\le c'\le \abs{E_\kappa^2}\). Then we claim that \begin{equation}\label{eq value gain} \abs{\Val(\Gamma)} \prec \Bigl(\frac{1}{n\eta_\ast}\Bigr)^{(c'-d_{\ge3}/2)_+} \bigl(\sqrt{\eta_\ast\theta}\bigr)^{(p-c'-d_{\ge3}/2)_+} \WEst(\Gamma). \end{equation} \end{subequations} Assuming~\eqref{eq ward-estimate gain}--\eqref{eq value gain} it follows immediately that \[ \abs{\Val(\Gamma)} \prec \frac{1}{(n\eta_\ast)^{2p}\abs{\eta_1\eta_2}^{p}} \Bigl(\sqrt{\eta_\ast\theta} + \frac{1}{n\eta_\ast} + \eta_\ast^{1/6}\Bigr)^p,\] implying~\eqref{prop av bound}. In order to complete the proof of Proposition~\ref{prop prob bound} it remains to verify~\eqref{eq ward-estimate gain} and~\eqref{eq value gain}. \begin{proof}[Proof of~\eqref{eq ward-estimate gain}] This follows immediately from the penultimate inequality in~\eqref{ward improvement} and \[\eta_\ast^{2p+\sum_i(d_G(i)/2-2)}\le \eta_\ast^{\sum_i (d_G(i)/2-1)} = \eta_\ast^{\frac{1}{2}\sum_{d_G(i)\ge 3} (d_G(i)-2) }\le \eta_\ast^{\frac{1}{6}\sum_{d_G(i)\ge 3} d_G(i)},\] where we used~\ref{number of kappa edges} in the first inequality. \end{proof} \begin{proof}[Proof of~\eqref{eq value gain}] For cycles \(\Gamma_k^\circ\) or chain-edges \(\Gamma_k^-\) in the reduced graph we say that \(\Gamma_k^{\circ/-}\) has \((k-2)_+\) \emph{excess \(G\)-edges}.
Note that for cycles \(\Gamma_k^\circ\) every additional \(G\) beyond the minimal number \(k\ge 2\) is counted as an excess \(G\)-edge, while for chain-edges \(\Gamma_k^-\) the first additional \(G\) beyond the minimal number \(k\ge 1\) is not counted as an excess \(G\)-edge. We claim that: \begin{enumerate}[label=(C\arabic*)] \item\label{count excess} The total number of excess \(G\)-edges is at least \(2c'-d_{\ge 3}\). \item\label{count 12} There are at least \(p-c'-d_{\ge3}/2\) cycles in \(\Gamma\) containing \(G_{12}\). \end{enumerate} Since the vertices of the reduced graph are \(u_i,v_i\) for \(d_G(i)\ge 3\), it follows that the reduced graph has \(\sum_{d_G(i)\ge 3} (d_G(u_i)+d_G(v_i))/2=d_{\ge 3}\) edges while the total number of \(G\)'s beyond the minimally required \(G\)'s (i.e.\ two for cycles and one for edges) is \(2c'\). Thus in the worst case there are at least \(2c'-d_{\ge3}\) excess \(G\)-edges, confirming~\ref{count excess}. The total number of \(G_{12}\)'s is \(2p\), while the total number of \(G_i\)'s is \(2 \abs{E_\kappa^2}+d_{\ge3}-2p\), according to~\ref{number of G edges}. For fixed \(c\) the number of cycles with \(G_{12}\)'s is minimised in the case when all \(G_i\)'s are in cycles of length \(2\) which results in \(\abs{E_\kappa^2}-p+\lfloor d_{\ge3}/2\rfloor\) cycles without \(G_{12}\)'s. Thus, there are at least \[ c - \Bigl(\abs{E_\kappa^2}-p+\lfloor d_{\ge3}/2\rfloor\Bigr)= p - c' - \lfloor d_{\ge 3}/2\rfloor \ge p - c' - d_{\ge 3}/2 \] cycles with some \(G_{12}\), confirming also~\ref{count 12}. The claim~\eqref{eq value gain} follows from~\ref{count excess}--\ref{count 12} in combination with Lemma~\ref{gain lemma}. \end{proof} \section{Central limit theorem for resolvents}\label{sec:CLTres} The goal of this section is to prove the CLT for resolvents, as stated in~Proposition~\ref{prop:CLTresm}. 
We begin by analysing the \(2\)-body stability operator \(\widehat\mathcal{B}\) from~\eqref{eq:stabop12}, as well as its special case, the \(1\)-body stability operator \begin{equation}\label{cB def} \mathcal{B}:=\widehat\mathcal{B}(z,z,w,w)=1-M\SS[\cdot]M. \end{equation} Note that, in contrast to the previous Section~\ref{sec local law G2}, all spectral parameters \(\eta,\eta_1,\dots,\eta_p\) considered in the present section are positive, and in fact satisfy \(\eta,\eta_i\ge 1/n\). \begin{lemma}\label{lemma:betaM} For \(w_1=\mathrm{i}\eta_1,w_2=\mathrm{i}\eta_2\in\mathrm{i}\mathbf{R}\setminus\{0\}\) and \(z_1,z_2\in\mathbf{C}\) we have \begin{equation}\label{beta ast bound} \norm{\widehat\mathcal{B}^{-1}}^{-1} \gtrsim (\abs{\eta_1}+\abs{\eta_2})\min\set{(\Im m_1)^2, (\Im m_2)^2 } + \abs{z_1-z_2}^2. \end{equation} Moreover, for \(z_1=z_2=z\) and \(w_1=w_2=\mathrm{i}\eta\) the operator \(\mathcal{B}=\widehat\mathcal{B}\) has two non-trivial eigenvalues \(\beta,\beta_\ast\) with \(\beta,\beta_\ast\) as in~\eqref{beta ast def},~\eqref{beta def}, and the remaining eigenvalues being \(1\). \end{lemma} \begin{proof} Throughout the proof we assume that \(\eta_1,\eta_2>0\); all other cases are completely analogous. With the shorthand notations \(m_i:= m^{z_i}(w_i), u_i:= u^{z_i}(w_i)\) and the partial trace \(\Tr_2\colon\mathbf{C}^{2n\times 2n}\to \mathbf{C}^4\), which collects the four normalised block traces into a \(4\)-dimensional vector, the stability operator \(\widehat\mathcal{B}\), written as a \(4\times 4\) matrix, is given by \begin{equation}\label{eq Bhat decomp} \widehat\mathcal{B} = 1- \Tr_2^{-1}\circ\begin{pmatrix} T_1 & 0 \\ T_2 & 0 \end{pmatrix}\circ \Tr_2, \quad \Tr_2 \begin{pmatrix} R_{11}&R_{12}\\ R_{21}&R_{22} \end{pmatrix}:= \begin{pmatrix} \braket{R_{11}}\\\braket{R_{22}}\\\braket{R_{12}}\\\braket{R_{21}} \end{pmatrix}.
\end{equation} Here we defined \[ T_1 := \begin{pmatrix} z_1 \overline{z_2} u_1 u_2 & m_1 m_2\\ m_1 m_2 & \overline{z_1} z_2 u_1 u_2 \end{pmatrix}, \quad T_2 := \begin{pmatrix} -z_1 u_1 m_2 & -z_2 u_2 m_1 \\ -\overline{z_2} u_2 m_1 & -\overline{z_1} u_1 m_2 \end{pmatrix}, \] and \(\Tr_2^{-1}\) is understood to map \(\mathbf{C}^4\) into \(\mathbf{C}^{2n\times 2n}\) in such a way that each \(n\times n\) block is a constant multiple of the identity matrix. From~\eqref{eq Bhat decomp} it follows that \(\widehat\mathcal{B}\) has eigenvalue \(1\) in the \(4(n^2-1)\)-dimensional kernel of \(\Tr_2\), and that the remaining four eigenvalues are \(1,1\) and the eigenvalues \(\widehat\beta,\widehat\beta_\ast\) of \(B_1:=1-T_1\), i.e.\ \begin{equation}\label{B eigs} \widehat\beta,\widehat\beta_\ast:= 1 - u_1 u_2 \Re z_1 \overline{z_2} \pm \sqrt{ m_1^2 m_2^2 - u_1^2 u_2^2 (\Im z_1 \overline{z_2})^2 }. \end{equation} Thus the claim about the \(w_1=w_2\), \(z_1=z_2\) special case follows. The bound~\eqref{beta ast bound} follows directly from \begin{equation} \label{eq:lowbeta} \abs*{\widehat{\beta}\widehat{\beta}_*}\gtrsim (\eta_1+\eta_2)\min\{(\Im m_1)^2, (\Im m_2)^2 \}+\abs{z_1-z_2}^2, \end{equation} since \(\abs{\widehat{\beta}}, \abs{\widehat{\beta}_*}\lesssim 1\) and \(\norm{\widehat\mathcal{B}^{-1}}\lesssim \norm{B_1^{-1}} = (\min\set{\abs{\widehat\beta},\abs{\widehat\beta_\ast}})^{-1}\) due to \(B_1\) being normal. We now prove~\eqref{eq:lowbeta}. By~\eqref{B eigs}, using that \(u_i=-m_i^2+u_i^2 \abs{z_i}^2\) repeatedly, it follows that \begin{equation} \label{eq:bbst1} \begin{split} \widehat{\beta}\widehat{\beta}_* &=1-u_1u_2\Big[ 1-\abs{z_1-z_2}^2+(1-u_1)\abs{z_1}^2+(1-u_2)\abs{z_2}^2\Big] \\ &=u_1u_2\abs{z_1-z_2}^2+(1-u_1)(1-u_2)-m_1^2u_2\left( \frac{1}{u_1}-1\right) \\ &\quad-m_2^2u_1\left( \frac{1}{u_2}-1\right). 
\end{split} \end{equation} Then, using \(1-u_i=\eta_i/(\eta_i+\Im m_i)\gtrsim \eta_i/(\Im m_i)\), that \(m_i=\mathrm{i} \Im m_i\), and assuming \(u_1,u_2\in [\delta,1]\), for some small fixed \(\delta>0\), we get that \begin{equation} \label{eq:bbst2} \begin{split} \abs*{\widehat{\beta}\widehat{\beta}_*}&\gtrsim \abs{z_1-z_2}^2+(\Im m_1)^2 (1-u_1)+(\Im m_2)^2 (1-u_2)\\ &\gtrsim \abs{z_1-z_2}^2+\min\{(\Im m_1)^2, (\Im m_2)^2 \}(2-u_1-u_2) \\ &\gtrsim \abs{z_1-z_2}^2+\min\{(\Im m_1)^2, (\Im m_2)^2 \} \left( \frac{\eta_1}{\Im m_1}+\frac{\eta_2}{\Im m_2}\right). \end{split} \end{equation} If instead at least one \(u_i\in [0,\delta]\) then, by the second equality in the display above, the bound~\eqref{eq:lowbeta} is trivial. \end{proof} We now turn to the computation of the expectation \(\E\braket{G^z(\mathrm{i}\eta)}\) to higher precision beyond the approximation \(\braket{G}\approx\braket{M}\). Recall the definition of the \(1\)-body stability operator from~\eqref{cB def} with non-trivial eigenvalues \(\beta,\beta_\ast\) as in~\eqref{beta ast def},~\eqref{beta def}. \begin{lemma}\label{lemma exp} For \(\kappa_4\ne0\) we have a correction of order \(n^{-1}\) to \(\E\braket{G}\) of the form \begin{subequations} \begin{equation}\label{G-M next order} \begin{split} \E\braket{G} = \braket{M} + \mathcal{E} + \mathcal{O}\Bigl(\frac{1}{\abs{\beta}}\Bigl(\frac{1}{n^{3/2} (1+\eta)}+\frac{1}{(n\eta)^2}\Bigr)\Bigr), \end{split} \end{equation} where \begin{equation}\label{beta bound} \frac{1}{\abs{\beta}} = \norm{(\mathcal{B}^{\ast})^{-1}[1]}\lesssim \frac{1}{\abs{1-\abs{z}^2}+\eta^{2/3}} \end{equation} and \begin{equation}\label{cE def} \mathcal{E} := \frac{\kappa_4}{n} m^3\Bigl(\frac{1}{1-m^2-\abs{z}^2}-1\Bigr)=-\frac{\mathrm{i}\kappa_4}{4n}\partial_\eta(m^4). 
\end{equation} \end{subequations} \end{lemma} \begin{proof} Using~\eqref{G deviation} we find \begin{gather} \begin{aligned} \braket{G-M} &= \braket{1,\mathcal{B}^{-1}\mathcal{B}[G-M]} = \braket{(\mathcal{B}^\ast)^{-1}[1],\mathcal{B}[G-M] } \\ &= -\braket{M^\ast (\mathcal{B}^\ast)^{-1}[1], \underline{WG} } + \braket{M^\ast (\mathcal{B}^\ast)^{-1}[1], \SS[G-M](G-M) } \\ &= -\braket{M^\ast (\mathcal{B}^\ast)^{-1}[1],\underline{WG}} + \mathcal{O}_\prec \Bigl(\frac{\norm{(\mathcal{B}^\ast)^{-1}[1]}}{(n\eta)^{2}}\Bigr). \end{aligned}\label{eq G-M exp}\raisetag{-3em} \end{gather} With \[A:=\big( (\mathcal{B}^\ast)^{-1}[1]\big)^\ast M\] we find from the explicit formula for \(\mathcal{B}\) given in~\eqref{eq Bhat decomp} and~\eqref{beta def} that \begin{equation}\label{MA eq} \braket{MA}= \frac{1-\beta}{\beta} =\frac{1}{1-m^2-\abs{z}^2 u^2}-1=-\mathrm{i}\partial_\eta m, \end{equation} and, using a cumulant expansion we find \begin{equation}\label{eq single WGA exp} \E\braket{\underline{WG}A} = \sum_{k\ge 2}\sum_{ab}\sum_{\bm\alpha\in\{ab,ba\}^k} \frac{\kappa(ba,\bm \alpha)}{k!} \E \partial_{\bm\alpha}\braket{\Delta^{ba} G A}.\end{equation} We first consider \(k=2\) where by parity at least one \(G\) factor is off-diagonal, e.g. \[ \frac{1}{n^{5/2}}\sum_{a\le n}\sum_{b>n} \E G_{ab}G_{aa}(GA)_{bb}\] and similarly for \(a>n\), \(b\le n\). By writing \(G=M+G-M\) and using the isotropic structure of the local law~\eqref{single local law} we obtain \[ \begin{split} &\frac{1}{n^{5/2}}\sum_{a\le n}\sum_{b>n} \E G_{ab}G_{aa}(GA)_{bb} \\ &= \frac{1}{n^{5/2}}\E m (MA)_{n+1,n+1}\braket{E_1\bm 1,G E_2\bm 1} + \mathcal{O}_\prec\Bigl( n^2 n^{-5/2} (n\eta)^{-3/2} \abs{\beta}^{-1}\Bigr)\\ &=\mathcal{O}_\prec\Bigl( \frac{1}{\abs{\beta}n^{3/2} (1+\eta)} + \frac{1}{\abs{\beta}n^2\eta^{3/2}}\Bigr), \end{split} \] where \(\bm1=(1,\dots,1)\) denotes the constant vector of norm \(\norm{\bm1}=\sqrt{2n}\). 
Thus we can bound all \(k=2\) terms by \(\abs{\beta}^{-1}\bigl(n^{-3/2} (1+\eta)^{-1}+n^{-2}\eta^{-3/2}\bigr)\). For \(k\ge 4\) we can afford bounding each \(G\) entrywise and obtain bounds of \(\abs{\beta}^{-1}n^{-3/2}\). Finally, for the \(k=3\) term there is an assignment \((\bm\alpha)=(ab,ba,ab)\) for which all \(G\)'s are diagonal and which contributes a leading order term given by \begin{equation} -\frac{\kappa_4}{2n^3}\sideset{}{'}\sum_{ab}M_{aa}M_{bb}M_{aa}(MA)_{bb}= -\frac{\kappa_4 }{n}\braket{M}^3\braket{MA}, \label{eq psum def}\end{equation} where \[ \sideset{}{'}\sum_{ab}:= \sum_{a\le n}\sum_{b>n}+\sum_{a>n}\sum_{b\le n}, \] and thus \begin{gather} \begin{aligned} \sum_{k\ge 2}\sum_{ab}\sum_{\bm\alpha\in\{ab,ba\}^k} \frac{\kappa(ba,\bm \alpha) }{k!} \E\partial_{\bm\alpha}\braket{\Delta^{ba}GA} &= -\frac{\kappa_4 }{n}\braket{M}^3\braket{MA} \\ &\quad+ \mathcal{O}\Bigl(\frac{1}{\abs{\beta} n^{3/2} (1+\eta)}+\frac{1}{\abs{\beta} n^{2}\eta^{3/2}}\Bigr), \end{aligned}\label{g-m single ref}\raisetag{-5em} \end{gather} concluding the proof. \end{proof} We now turn to the computation of higher moments, which, to leading order, by Lemma~\ref{lemma exp} is equivalent to computing \[ \E\prod_{i\in[p]}\braket{G_i-M_i-\mathcal{E}_i},\quad \mathcal{E}_i:= \frac{\kappa_4}{n}\braket{M_i}^3\braket{M_i A_i},\quad A_i:= \big( (\mathcal{B}_i^\ast)^{-1}[1]\big)^\ast M_i,\] with \(G_i,M_i\) as in~\eqref{G_i def} for \(z_1,\dots,z_p\in\mathbf{C}\), \(\eta_1,\dots,\eta_p>1/n\).
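The scalar identity \(\braket{MA}=1/(1-m^2-\abs{z}^2u^2)-1=-\mathrm{i}\partial_\eta m\) from~\eqref{MA eq} can also be verified numerically. The following is a minimal sketch, not part of the proof, assuming only the relations \(m=\mathrm{i}\Im m\), \(u=\Im m/(\eta+\Im m)\) and \(u=-m^2+u^2\abs{z}^2\) recorded in the proof of~\eqref{beta ast bound} above; the function names are ad hoc.

```python
def im_m(eta, absz2, iters=2000):
    # Solve for a = Im m(i*eta): combining u = a/(eta+a) with u = -m^2 + u^2|z|^2
    # and m = i*a gives the fixed-point map a -> (eta+a)/((eta+a)^2 + |z|^2),
    # which is a contraction for the parameters used here.
    a = 1.0
    for _ in range(iters):
        a = (eta + a) / ((eta + a) ** 2 + absz2)
    return a

def lhs(eta, absz2):
    # <MA> = 1/(1 - m^2 - |z|^2 u^2) - 1, which is real since m = i*Im m.
    a = im_m(eta, absz2)
    u = a / (eta + a)
    return 1.0 / (1.0 + a ** 2 - absz2 * u ** 2) - 1.0

def rhs(eta, absz2, h=1e-6):
    # -i * d/d(eta) m(i*eta) = d/d(eta) Im m(i*eta), by central differences.
    return (im_m(eta + h, absz2) - im_m(eta - h, absz2)) / (2.0 * h)

for eta, absz2 in [(0.3, 0.0), (0.5, 0.36), (1.0, 0.81)]:
    assert abs(lhs(eta, absz2) - rhs(eta, absz2)) < 1e-5
```

At \(\eta\downarrow0\) and \(\abs{z}<1\) the fixed point converges to \(\Im m=(1-\abs{z}^2)^{1/2}\), consistent with the limiting density used later.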
Using Lemma~\ref{lemma exp}, Eq.~\eqref{eq G-M exp}, \(\abs{\mathcal{E}_i}\lesssim 1/n\) and the high-probability bound \begin{equation}\label{eq a priori WGA} \abs{\braket{\underline{WG_i A_i}}}\prec \frac{1}{\abs{\beta_i}n \eta_i} \end{equation} we have \begin{equation} \prod_{i\in[p]}\braket{G_i-\E G_i} = \prod_{i\in[p]}\braket{-\underline{WG_i}A_i-\mathcal{E}_i} + \mathcal{O}_\prec\Bigl(\frac{\psi}{n\eta}\Bigr), \quad \psi:= \prod_{i\in[p]}\frac{1}{\abs{\beta_i}n\abs{\eta_i}}.\label{eq G-M G-M reduction} \end{equation} In order to prove Proposition~\ref{prop:CLTresm} we need to compute the leading order term in the local law bound \begin{equation}\label{eq prod WGA naive} \abs*{\prod_{i\in[p]}\braket{-\underline{WG_i}A_i-\mathcal{E}_i} }\prec \psi. \end{equation} \begin{proof}[Proof of Proposition~\ref{prop:CLTresm}] To simplify notations we will not carry the \(\beta_i\)-dependence within the proof because each \(A_i\) is of size \(\norm{A_i}\lesssim\abs{\beta_i}^{-1}\) and the whole estimate is linear in each \(\abs{\beta_i}^{-1}\). We first perform a cumulant expansion in \(\underline{WG_1}\) to compute \begin{gather} \begin{aligned} &\E \prod_{i\in[p]}\braket{-\underline{W G_i}A_i-\mathcal{E}_i} \\ &\quad= -\braket{\mathcal{E}_1}\E\prod_{i\ne 1}\braket{-\underline{W G_i}A_i-\mathcal{E}_i} \\ &\qquad+ \sum_{i\ne1}\E\widetilde\E \braket{-\widetilde W G_1 A_1}\braket{-\widetilde W G_i A_i+\underline{WG_i \widetilde W G_i} A_i}\prod_{j\ne 1,i}\braket{-\underline{W G_j}A_j-\mathcal{E}_j}\\ & \qquad +\sum_{k\ge 2}\sum_{ab}\sum_{\bm\alpha\in\{ab,ba\}^k} \frac{\kappa(ba,\bm \alpha)}{k!} \E \partial_{\bm\alpha}\Bigl[\braket{-\Delta^{ba}G_1A_1}\prod_{i\ne 1}\braket{-\underline{W G_i}A_i-\mathcal{E}_i} \Bigr], \end{aligned}\label{eq g-m 2 first exp}\raisetag{-6em} \end{gather} where \(\widetilde W\) denotes an independent copy of \(W\) with expectation \(\widetilde\E\), and the underline is understood with respect to \(W\) and not \(\widetilde W\). 
We now consider the terms of~\eqref{eq g-m 2 first exp} one by one. For the second term on the rhs.\ we use the identity \begin{equation}\label{eq tr W tr W} \E\braket{W A}\braket{W B} = \frac{1}{2n^2} \braket{A E_1 B E_2 + A E_2 B E_1 } = \frac{\braket{A E B E'}}{2n^2}, \end{equation} where we recall the block matrix definition from~\eqref{E1 E2 def} and follow the convention that \(E,E'\) are summed over both choices \((E,E')=(E_1,E_2),(E_2,E_1)\). Thus we obtain \begin{equation}\label{eq Wick Gauss term} \begin{split} & \widetilde\E \braket{-\widetilde W G_1 A_1}\braket{-\widetilde W G_i A_i+\underline{WG_i \widetilde W G_i} A_i} \\ &\quad= \frac{1}{2n^2}\braket{G_1 A_1 E G_i A_i E'- G_1 A_1 E \underline{G_i A_i W G_i E'}}\\ &\quad = \frac{1}{2n^2}\braket{G_1 A_1 E G_i A_i E' + G_1\SS[G_1 A_1 E G_i A_i ] G_i E' - \underline{G_1 A_1 E G_i A_i W G_i E'}}. \end{split} \end{equation} Here the self-renormalisation in the last term is defined analogously to~\eqref{self renom}, i.e. \[ \underline{f(W)Wg(W)}:= f(W)Wg(W)-\widetilde\E (\partial_{\widetilde W}f)(W)\widetilde W g(W) - \widetilde\E f(W)\widetilde W (\partial_{\widetilde W}g)(W),\] which is only well-defined if it is clear to which \(W\) the action is associated, i.e.\ \(\underline{WWf(W)}\) would be ambiguous. However, we only use the self-renormalisation notation for \(f(W),g(W)\) being (products of) resolvents and deterministic matrices, so no ambiguities should arise. 
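The identity~\eqref{eq tr W tr W} is also easy to confirm by Monte-Carlo sampling. The following sketch is illustrative only and rests on assumptions not fixed by this excerpt: \(\braket{R}:=\Tr R/(2n)\), the matrices \(E_1,E_2\) in~\eqref{E1 E2 def} are the orthogonal projections onto the two \(n\times n\) blocks, and \(W\) is taken to be the Hermitisation of a complex Ginibre matrix with \(\E x_{ab}^2=0\), \(\E\abs{x_{ab}}^2=1/n\).

```python
import numpy as np

rng = np.random.default_rng(0)
n, batch = 3, 400_000

# Assumed conventions (not fixed by this excerpt): <R> = Tr(R)/(2n), and
# E1, E2 are the orthogonal projections onto the first/second n x n block.
E1 = np.diag([1.0] * n + [0.0] * n)
E2 = np.diag([0.0] * n + [1.0] * n)
A = rng.standard_normal((2 * n, 2 * n)) + 1j * rng.standard_normal((2 * n, 2 * n))
B = rng.standard_normal((2 * n, 2 * n)) + 1j * rng.standard_normal((2 * n, 2 * n))

# Batch of Hermitisations W = [[0, X], [X^*, 0]] with E|X_ab|^2 = 1/n.
X = (rng.standard_normal((batch, n, n))
     + 1j * rng.standard_normal((batch, n, n))) / np.sqrt(2 * n)

def tr_WM(M):
    # Tr(W M) = sum_ij X_ij M_{n+j,i} + conj(X)_ij M_{i,n+j}, vectorised over the batch.
    return (np.einsum('bij,ji->b', X, M[n:, :n])
            + np.einsum('bij,ij->b', X.conj(), M[:n, n:]))

lhs = np.mean(tr_WM(A) * tr_WM(B)) / (2 * n) ** 2           # E <WA><WB>
rhs = np.trace(A @ E1 @ B @ E2 + A @ E2 @ B @ E1) / (2 * n) / (2 * n ** 2)
assert abs(lhs - rhs) < 1e-2 * (1 + abs(rhs))
```

Note that only the second moments \(\E x_{ab}^2=0\), \(\E\abs{x_{ab}}^2=1/n\) enter, so the identity holds for arbitrary deterministic \(A,B\).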
For the first two terms in~\eqref{eq Wick Gauss term} we use \(\norm{M^{z_1,z_i}_{A_1E}}\lesssim \norm{\widehat\mathcal{B}_{1i}^{-1}}\lesssim \abs{z_1-z_i}^{-2}\) due to~\eqref{beta ast bound} and the first bound in~\eqref{final local law} from Theorem~\ref{thm local law G2} (estimating the big bracket by \(1\)) to obtain \begin{equation}\label{eq GG local law application} \begin{split} &\braket{G_1 A_1 E G_i A_i E' + G_1\SS[G_1 A_1 E G_i A_i ] G_i E'} \\ &\qquad\qquad\qquad = \braket{M^{z_1,z_i}_{A_1E} A_i E'+ M^{z_i,z_1}_{E'} \SS[M^{z_1,z_i}_{A_1E}A_i]} \\ &\qquad\qquad\qquad\quad+ \mathcal{O}_\prec\Bigl(\frac{1}{n\abs{z_1-z_i}^4\eta_\ast^{1i} \abs{\eta_1\eta_i}^{1/2}}+\frac{1}{n^2\abs{z_1-z_i}^4(\eta_\ast^{1i})^2 \abs{\eta_1\eta_i}}\Bigr), \end{split} \end{equation} where \(\eta_\ast^{1i}:=\min\{\eta_1,\eta_i\}\). For the last term in~\eqref{eq Wick Gauss term} we claim that \begin{equation}\label{var trace bound} \E\abs{\braket{\underline{G_1 A_1 E G_i A_i W G_i E'}}}^2\lesssim \Bigl(\frac{1}{n\eta_1\eta_i\eta_\ast^{1i}}\Bigr)^2, \end{equation} the proof of which we present after concluding the proof of the proposition. Thus, using~\eqref{var trace bound} together with~\eqref{eq a priori WGA}, \[ \begin{split} &\abs*{n^{-2}\E \braket{\underline{G_1 A_1 E G_i A_i W G_i E'}}\prod_{j\ne 1,i} \braket{-\underline{W G_j}A_j-\mathcal{E}_j}}\\ &\qquad \lesssim \frac{n^\epsilon}{n^2} \biggl[\prod_{j\ne 1,i}\frac{1}{n\eta_j}\biggr]\Bigl(\E\abs{\braket{\underline{G_1 A_1 E G_i A_i W G_i E'}}}^2\Bigr)^{1/2} \\ &\qquad\lesssim \frac{n^\epsilon}{n\eta_\ast^{1i}} \prod_{j}\frac{1}{n\eta_j} \le \frac{n^\epsilon\psi}{n\eta_\ast}.
\end{split}\] Together with~\eqref{eq prod WGA naive} and~\eqref{eq Wick Gauss term}--\eqref{eq GG local law application} we obtain \begin{equation}\label{eq Wick Gauss term 2} \begin{split} &\E\widetilde\E \braket{-\widetilde W G_1 A_1}\braket{-\widetilde W G_i A_i+\underline{WG_i \widetilde W G_i} A_i}\prod_{j\ne 1,i}\braket{-\underline{W G_j}A_j-\mathcal{E}_j}\\ &\qquad\quad= \frac{V_{1,i}}{2n^2} \E \prod_{j\ne 1,i}\braket{-\underline{W G_j}A_j-\mathcal{E}_j} \\ &\qquad\quad\quad+ \landauO*{\psi n^\epsilon \Bigl(\frac{1}{n\eta_\ast}+\frac{\abs{\eta_1\eta_i}^{1/2}}{n\eta_\ast^{1i}\abs{z_1-z_i}^4}+\frac{1}{(n\eta_\ast^{1i})^2\abs{z_1-z_i}^4}\Bigr)} \end{split} \end{equation} since, by an explicit computation, the rhs.\ of~\eqref{eq GG local law application} is given by \(V_{1,i}\) as defined in~\eqref{eq:exder}. Indeed, from the explicit formula for \(\mathcal{B}\) it follows that the main term on the rhs.\ of~\eqref{eq GG local law application} can be written as \(\widetilde V_{1,i}\), where \begin{equation} \label{eq:vw} \begin{split} \widetilde V_{i,j}:&= \frac{2m_i m_j\bigl[2u_i u_j \Re z_i\overline{z_j} + (u_i u_j \abs{z_i} \abs{z_j})^2\bigl[s_i s_j-4\bigr]\bigr]}{ t_i t_j\bigl[1+(u_i u_j \abs{z_i} \abs{z_j})^2-m_i^2m_j^2-2u_i u_j\Re z_i\overline{z_j}\bigr]^2 } \\ & +\frac{2m_i m_j(m_i^2+u_i^2\abs{z_i}^2)(m_j^2+u_j^2\abs{z_j}^2) }{t_i t_j\bigl[1+(u_i u_j \abs{z_i} \abs{z_j})^2-m_i^2m_j^2-2u_i u_j\Re z_i\overline{z_j}\bigr]^2 }, \end{split} \end{equation} using the notations \(t_i:=1-m_i^2-u_i^2\abs{z_i}^2\), \(s_i:=m_i^2-u_i^2\abs{z_i}^2\). By an explicit computation using the equation~\eqref{eq m} for \(m_i,m_j\) it can be checked that \(\widetilde V_{i,j}\) can be written as a derivative and is given by \(\widetilde V_{i,j}=V_{i,j}\) with \(V_{i,j}\) from~\eqref{eq:exder}. Next, we consider the third term on the rhs.\ of~\eqref{eq g-m 2 first exp} for \(k=2\) and \(k\ge3\) separately.
We first claim the auxiliary bound \begin{equation}\label{eq aux var iso bound} \abs{\braket{\bm{x},\underline{G B W G}\bm{y} }} \prec \frac{\norm{\bm{x}}\norm{\bm{y}}\norm{B}}{n^{1/2}\eta^{3/2}}. \end{equation} Note that~\eqref{eq aux var iso bound} is very similar to~\eqref{prop iso bound} except that in~\eqref{eq aux var iso bound} both \(G\)'s have the same spectral parameters \(z,\eta\) and the order of \(W\) and \(G\) is interchanged. The proof of~\eqref{eq aux var iso bound} is, however, very similar and we leave details to the reader. After performing the \(\bm\alpha\)-derivative in~\eqref{eq g-m 2 first exp} via the Leibniz rule, we obtain a product of \(t\ge 1\) traces of the types \(\braket{(\Delta G_i)^{k_i}A_i}\) and \(\braket{\underline{W(G_i \Delta)^{k_i}G_i}A_i}\) with \(k_i\ge0\), \(\sum k_i=k+1\), and \(p-t\) traces of the type \( \braket{\underline{W G_i A_i}+\mathcal{E}_i} \). For the term with multiple self-renormalised \(G\)'s, i.e.\ \(\braket{\underline{W(G_i \Delta)^{k_i}G_i}A_i}\) with \(k_i\ge 1\) we rewrite \begin{gather} \begin{aligned} \braket{\underline{W(G \Delta)^{k}G}A} &= \braket{\underline{GAW(G \Delta)^{k}}} \\ &= \braket{\underline{GAWG}\Delta (G \Delta)^{k-1}}+\sum_{j=1}^{k-1}\braket{GA\SS[(G\Delta)^j G] (G \Delta)^{k-j}} \\ &= \braket{\underline{GAWG}\Delta (G \Delta)^{k-1}}+\sum_{j=1}^{k-1}\braket{GA E (G \Delta)^{k-j}}\braket{GE'(G\Delta)^j }. 
\end{aligned}\label{long WG rewrite}\raisetag{-4em} \end{gather} \subsubsection*{Case \(k=2\), \(t=1\).} In this case the only possible term is given by \(\braket{\Delta G_1\Delta G_1 \Delta G_1 A_1}\) where by parity at least one \(G=G_1\) is off-diagonal and in the worst case (only one off-diagonal factor) we estimate \[ \begin{split} n^{-1-3/2} \sum_{a\le n}\sum_{b>n} G_{aa} G_{bb} (GA)_{ab} &= \frac{m^2}{n^{5/2}}\braket{E_1\bm 1,GAE_2\bm1} + \mathcal{O}_\prec\Bigl(\frac{1}{n^{1/2}}\frac{1}{(n\eta_1)^{3/2}}\Bigr)\\ &=\mathcal{O}_\prec\Bigl(\frac{1}{n^{3/2}}+\frac{1}{n^2\eta_1^{3/2}}\Bigr), \end{split}\] after replacing \(G_{aa}=m+(G-M)_{aa}\) and using the isotropic structure of the local law in~\eqref{single local law}, and similarly for \(\sum_{a>n}\sum_{b\le n}\). \subsubsection*{Case \(k=2\), \(t=2\).} In this case there are \(2+2\) possible terms \[ \begin{split} &\braket{\Delta G_1 \Delta G_1 A_1} \braket{\Delta G_i A_i+\underline{W G_i \Delta G_i}A_i}\\ &\qquad+\braket{\Delta G_1 A_1} \braket{\Delta G_i\Delta G_i A_i+\underline{W G_i \Delta G_i\Delta G_i }A_i}. \end{split}\] For the first two, in the worst case, we have the estimate \[ \begin{split} &\frac{1}{n^{7/2}} \sideset{}{'}\sum_{ab} (G_1)_{aa} (G_{1}A_1)_{bb} \Bigl((G_i A_i)_{ab}+(\underline{G_i A_i W G_i})_{ab}\Bigr) \\ &\qquad\qquad = \mathcal{O}_\prec\Bigl(\frac{1}{n^{5/2}}+\frac{1}{n^3\eta_1\eta_i^{3/2}}\Bigr) \end{split} \] using~\eqref{eq aux var iso bound}, where we recall the definition of \(\sum'\) from~\eqref{eq psum def}. Similarly, using~\eqref{long WG rewrite} and~\eqref{eq aux var iso bound} for the latter two terms, we have the bound \[ \begin{split} &\frac{1}{n^{7/2}} \E\sideset{}{'}\sum_{ab} (G_{1}A_1)_{ab} \Bigl((\underline{G_i A_i WG_i})_{aa} (G_i)_{bb} +\frac{(G_i A_i E G_i)_{ab} (G_i E' G_i)_{ab}}{n}\Bigr) \\ &\qquad= \mathcal{O}_\prec\Bigl(\frac{1}{n^{3}\eta_1^{1/2}\eta_i^2}\Bigr).
\end{split} \] \subsubsection*{Case \(k=2\), \(t=3\).} In this final \(k=2\) case we have to consider four terms \[\braket{\Delta G_1A_1}\braket{\Delta G_i A_i +\underline{W G_i \Delta G_i A_i} }\braket{\Delta G_j A_j +\underline{W G_j \Delta G_j A_j} },\] which, using~\eqref{eq aux var iso bound}, we estimate by \[ \begin{split} &\frac{1}{n^{9/2}}\sideset{}{'}\sum_{ab} (G_1A_1)_{ab}\Bigl((G_i A_i)_{ab}+(\underline{G_i A_i W G_i})_{ab}\Bigr)\Bigl((G_j A_j)_{ab}+(\underline{G_j A_j W G_j})_{ab}\Bigr)\\ &\quad = \mathcal{O}_\prec\Bigl(\frac{1}{n^{4} \eta_1^{1/2}\eta_i^{3/2}\eta_j^{3/2}} \Bigr). \end{split}\] By inserting the above estimates back into~\eqref{eq g-m 2 first exp}, after estimating all untouched traces by \(n^\epsilon/(n\eta_i)\) with high probability using~\eqref{eq a priori WGA}, we obtain \begin{equation}\label{k=2 est} \begin{split} &\sum_{k= 2}\sum_{ab}\sum_{\bm\alpha\in\{ab,ba\}^k} \frac{\kappa(ba,\bm \alpha)}{k!} \E \partial_{\bm\alpha}\Bigl[\braket{-\Delta^{ba}G_1A_1}\prod_{i\ne 1}\braket{-\underline{W G_i}A_i-\mathcal{E}_i} \Bigr]\\ &\qquad = \mathcal{O}\Bigl(\frac{\psi n^\epsilon}{\sqrt{n\eta_\ast}}\Bigr). \end{split} \end{equation} \subsubsection*{Case \(k\ge 3\).} In the case \(k\ge 3\), after the action of the derivative in~\eqref{eq g-m 2 first exp} there are \(1\le t\le k+1\) traces involving some \(\Delta\). By writing the normalised traces involving \(\Delta\) as matrix entries we obtain a prefactor of \(n^{-t-(k+1)/2}\) and a \(\sum_{ab}\)-summation over entries of \(k+1\) matrices of the type \(G\), \(GA\), \(\underline{GAWG}\) such that each summation index appears exactly \(k+1\) times. There are some additional terms from the last sum in~\eqref{long WG rewrite} which are smaller by a factor \((n\eta)^{-1}\) and which can be bounded exactly as in the \(k=2\) case. If there are only diagonal \(G\) or \(GA\)-terms, then we have a naive bound of \(n^{-t-(k-3)/2}\) and therefore potentially some leading-order contribution in case \(k=3\).
If, however, \(k>3\), or there are some off-diagonal \(G,GA\) or some \(\underline{GAWG}\) terms, then, using~\eqref{eq aux var iso bound} we obtain an improvement of at least \((n\eta)^{-1/2}\) over the naive bound~\eqref{eq prod WGA naive}. For \(k=3\), by parity, the only possibility of having four diagonal \(G,GA\) factors is distributing the four \(\Delta\)'s either into a single trace or two traces with two \(\Delta\)'s each. Thus the relevant terms are \[ \braket{\Delta G_1 \Delta G_1 \Delta G_1 \Delta G_1 A_1},\quad \braket{\Delta G_1 \Delta G_1 A_1}\braket{\Delta G_i \Delta G_i A_i}. \] For the first one we recall from~\eqref{g-m single ref} for \(k=3\) that \begin{equation}\label{eq kappa4 1} \sum_{ab}\sum_{\bm\alpha} \kappa(ba,\bm\alpha) \braket{\Delta^{ba} G_1 \Delta^{\alpha_1} G_1 \Delta^{\alpha_2} G_1 \Delta^{\alpha_3} G_1 A_1}=\mathcal{E}_1 + \mathcal{O}_\prec\Bigl(\frac{1}{n^{3/2}}+\frac{1}{n^2 \eta_1^{3/2}}\Bigr). \end{equation} For the second one we note that only choosing \(\bm\alpha=(ab,ab,ba),(ab,ba,ab)\) gives four diagonal factors, while any other choice gives at least two off-diagonal factors.
Thus \begin{equation} \label{eq kappa4 2} \begin{split} &\sum_{ab}\sum_{\bm\alpha} \kappa(ba,\bm\alpha) \braket{\Delta^{ba} G_1 \Delta^{\alpha_1} G_1}\braket{\Delta^{\alpha_2} G_i \Delta^{\alpha_3} G_i A_i} \\ & = \frac{\kappa_4}{n^{2}}\sideset{}{'}\sum_{ab} \braket{\Delta^{ba} G_1 \Delta^{ab} G_1 A_1}\bigl[\braket{\Delta^{ab} G_i \Delta^{ba} G_i A_i}+\braket{\Delta^{ba} G_i \Delta^{ab} G_i A_i}\bigl]+ \mathcal{O}_\prec(\mathcal{E})\\ & = \frac{\kappa_4}{4n^4}\sideset{}{'}\sum_{ab} (G_1)_{aa} (G_1 A_1)_{bb} \bigl[ (G_i)_{bb} (G_i A_i)_{aa}+ (G_i)_{aa}(G_i A_i)_{bb}\bigl]+ \mathcal{O}_\prec(\mathcal{E})\\ & = \frac{\kappa_4}{4n^4}\sideset{}{'}\sum_{ab} m_1 m_i (M_1 A_1)_{bb} \bigl[ (M_i A_i)_{aa}+ (M_i A_i)_{bb}\bigl]+ \mathcal{O}_\prec\left(\sqrt{n\eta_*}\mathcal{E}\right)\\ & = \frac{\kappa_4}{n^2} \braket{M_1} \braket{M_i} \braket{M_1 A_1} \braket{ M_i A_i} + \mathcal{O}_\prec\left(\frac{1}{n^{5/2}\eta_*^{1/2}}\right), \end{split} \end{equation} where \(\mathcal{E}:=(n^3\eta_*)^{-1}\). We recall from~\eqref{MA eq} that \[ \braket{M_1} \braket{M_i} \braket{M_1 A_1} \braket{ M_i A_i}= \frac{1}{2}U_1 U_i \] with \(U_{i}\) defined in~\eqref{eq:exder}. Thus, we can conclude for the \(k\ge3\) terms in~\eqref{eq g-m 2 first exp} that \begin{gather} \begin{aligned} &\sum_{k \ge 3}\sum_{ab}\sum_{\bm\alpha\in\{ab,ba\}^k} \frac{\kappa(ba,\bm \alpha)}{k!} \E \partial_{\bm\alpha}\Bigl[\braket{-\Delta^{ba}G_1A_1}\prod_{i\ne 1}\braket{-\underline{W G_i}A_i-\mathcal{E}_i} \Bigr]\\ &\qquad = \braket{\mathcal{E}_1} \E\prod_{i\ne 1}\braket{-\underline{W G_i}A_i-\mathcal{E}_i} + \sum_{i\ne 1} \frac{\kappa_4 U_{1} U_i }{2n^2} \E\prod_{j\ne 1,i} \braket{-\underline{W G_j}A_j-\mathcal{E}_j} \\ &\qquad\quad+ \mathcal{O}\Bigl(\frac{\psi n^\epsilon}{(n\eta_\ast)^{1/2}}\Bigr). 
\end{aligned}\label{eq kappa ge 3 conclusion}\raisetag{-5em} \end{gather} By combining~\eqref{eq g-m 2 first exp} with~\eqref{eq Wick Gauss term 2},~\eqref{k=2 est} and~\eqref{eq kappa ge 3 conclusion} we obtain \begin{gather} \begin{aligned} \E\prod_{i}\braket{-\underline{W G_i}A_i-\mathcal{E}_i} &= \sum_{i\ne 1} \frac{V_{1,i}+\kappa_4 U_{1}U_i}{2n^2} \E\prod_{j\ne 1,i} \braket{-\underline{W G_j}A_j-\mathcal{E}_j} \\ &\quad + \landauO*{\frac{\psi n^\epsilon }{\sqrt{n\eta_\ast}}+\frac{\psi n^\epsilon }{n\eta_\ast^{1/2}\abs{z_1-z_i}^4}+\frac{\psi n^\epsilon }{(n\eta_\ast)^2\abs{z_1-z_i}^4}}, \end{aligned}\raisetag{-5em} \end{gather} and thus by induction \begin{gather} \begin{aligned} \E\prod_{i}\braket{-\underline{W G_i}A_i-\mathcal{E}_i} &= \frac{1}{n^{p}}\sum_{P\in\Pi_p}\prod_{\{i,j\}\in P} \frac{V_{i,j}+\kappa_4 U_i U_j}{2} \\ &\quad + \landauO*{\frac{\psi n^\epsilon }{\sqrt{n\eta_\ast}}+\frac{\psi n^\epsilon }{n\eta_\ast^{1/2}\abs{z_1-z_i}^4}+\frac{\psi n^\epsilon }{(n\eta_\ast)^2\abs{z_1-z_i}^4}}, \end{aligned}\raisetag{-5em} \end{gather} from which, together with~\eqref{eq G-M G-M reduction}, the formula for \(\E \prod_i \braket{G_i-\E G_i}\) in the second line of~\eqref{eq CLT resovlent} follows, modulo the proof of~\eqref{var trace bound}. The remaining equality in~\eqref{eq CLT resovlent} then follows from applying the very same expansion to each element of the pairing. Finally,~\eqref{prop clt exp} follows directly from Lemma~\ref{lemma exp}. \end{proof} \begin{proof}[Proof of~\eqref{var trace bound}] Using the notation of Lemma~\ref{lemma general products}, our goal is to prove that \begin{equation} \label{eq G1Gii claim} \E\abs{\braket{\underline{W G_{i1i} }}}^2 \lesssim \Bigl(\frac{1}{n\eta_1\eta_i\eta_\ast^{1i}}\Bigr)^2. \end{equation} Since only \(\eta_1,\eta_i\) play a role within the proof of~\eqref{var trace bound}, we drop the indices from \(\eta_\ast^{1i}\) and simply write \(\eta_\ast=\eta_\ast^{1i}\).
Using a cumulant expansion we compute \begin{equation}\label{G1Gii first} \begin{split} &\E\abs{\braket{\underline{W G_{i1i}}}}^2 \\ &= \E\widetilde\E \braket{\widetilde W G_{i1i} } \Bigl(\braket{\widetilde W G_{i1i} } + \braket{\underline{W G_i \widetilde W G_{i1i} }+\underline{W G_{i1} \widetilde W G_{1i} }+\underline{W G_{i1i} \widetilde W G_i }}\Bigr)\\ &\quad + \sum_{k\ge 2}\mathcal{O}\Bigl(\frac{1}{n^{(k+1)/2}}\Bigr)\sideset{}{'}\sum_{ab} \sum_{k_1+k_2=k-1}\sum_{\bm\alpha_1,\bm\alpha_2} \E\braket{\Delta^{ab}\partial_{\bm\alpha_1} G_{i1i}}\braket{\Delta^{ab}\partial_{\bm\alpha_2} G_{i1i}} \\ &\quad + \sum_{k\ge 2}\mathcal{O}\Bigl(\frac{1}{n^{(k+1)/2}}\Bigr)\sideset{}{'}\sum_{ab} \sum_{k_1+k_2=k}\sum_{\bm\alpha_1,\bm\alpha_2} \E\braket{\Delta^{ab}\partial_{\bm\alpha_1} G_{i1i}}\braket{\underline{W\partial_{\bm\alpha_2} G_{i1i}}}, \end{split} \end{equation} where \(\bm\alpha_i\) is understood to be summed over \(\bm\alpha_i\in\{ab,ba\}^{k_i}\). In~\eqref{G1Gii first} we only kept the scaling \(\abs{\kappa(ab,\bm\alpha)}\lesssim n^{-(k+1)/2}\) of the cumulants, and absorbed combinatorial factors such as \(k!\) into \(\mathcal{O}(\cdot)\). We first consider those terms in~\eqref{G1Gii first} which no longer contain self-renormalisations \(\underline{Wf(W)}\), since these do not have to be expanded further. For the very first term we obtain \begin{equation}\label{Gi1i single Gauss} \widetilde\E \braket{\widetilde W G_{i1i} } \braket{\widetilde W G_{i1i} } = \frac{\braket{G_{i1ii1i}}}{n^2} =\mathcal{O}_\prec\Bigl( \frac{1}{n^2\eta_1^2\eta_i^3}\Bigr). \end{equation} To bound products of \(G_1\) and \(G_i\) we use Lemma~\ref{lemma general products}.
For the second line on the rhs.\ of~\eqref{G1Gii first} we have to estimate \[ \begin{split} \mathcal{O}\Bigl(\frac{1}{n^{(k+1)/2+2}}\Bigr)\sum_{k\ge 2}\sideset{}{'}\sum_{ab} \sum_{k_1+k_2=k-1}\sum_{\bm\alpha_1,\bm\alpha_2} \E (\partial_{\bm\alpha_1} (G_{i1i})_{ba}) (\partial_{\bm\alpha_2} (G_{i1i})_{ba}) \end{split} \] and we note that without derivatives we have the estimate \(\abs{(G_{i1i})_{ba}}\prec (\eta_1\eta_i)^{-1}\). Additional derivatives do not affect this bound: if e.g.\ a \(G_i\) is differentiated, we obtain one additional \(G_i\), and the chain splits into one product of \(G\)'s ending with \(G_i\) and one product beginning with \(G_i\). Due to the structure of the estimate~\eqref{eq general iso bound} the bound thus remains invariant. For example \(\abs{(\partial_{ab} G_{i1i})_{ba}}=\abs{(G_{i})_{bb}(G_{i1i})_{aa}+\dots}\prec (\eta_1\eta_i)^{-1}\). Thus, by estimating the sum trivially we obtain \begin{equation}\label{Gi1i single cum} \frac{1}{n^{(k+1)/2}} \sum_{\substack{k_1+k_2=k-1\\ k\ge 2}}\sideset{}{'}\sum_{ab} \sum_{\bm\alpha_1,\bm\alpha_2} \E\braket{\Delta^{ab}\partial_{\bm\alpha_1} G_{i1i}}\braket{\Delta^{ab}\partial_{\bm\alpha_2} G_{i1i}} = \mathcal{O}_\prec\Bigl(\frac{1}{n^{3/2}\eta_1^2\eta_i^{2}}\Bigr) \end{equation} since \(k\ge 2\). It remains to consider the third line on the rhs.\ of~\eqref{G1Gii first} and the remaining terms from the first line. In both cases we perform a second cumulant expansion and again treat separately the Gaussian (i.e.\ the second-order cumulant) term and the terms from higher-order cumulants. Since the two consecutive cumulant expansions commute it is clearly sufficient to consider the Gaussian term for the first line, and the full expansion for the third line.
We begin with the latter and compute \begin{equation}\label{eq Gi1i cum second exp} \begin{split} &\E\braket{\Delta^{ab}\partial_{\bm\alpha_1} G_{i1i}}\braket{\underline{W\partial_{\bm\alpha_2} G_{i1i}}} \\ &\quad = \widetilde\E\E \braket{\Delta^{ab}\partial_{\bm\alpha_1}(G_i\widetilde W G_{i1i}+G_{i1}\widetilde W G_{1i}+G_{i1i}\widetilde W G_{i})} \braket{\widetilde W\partial_{\bm\alpha_2} G_{i1i}} \\ &\qquad + \sum_{l\ge 2}\sideset{}{'}\sum_{cd}\sum_{\bm\beta_1,\bm\beta_2} \E \braket{\Delta^{ab}\partial_{\bm\alpha_1}\partial_{\bm\beta_1} G_{i1i}}\braket{\Delta^{cd}\partial_{\bm\alpha_2}\partial_{\bm\beta_2} G_{i1i}}\\ &\quad = \frac{1}{n^2}\E \braket{\partial_{\bm\alpha_1}(G_{i1i}\Delta^{ab}G_i+G_{1i}\Delta^{ab}G_{i1}+G_{i}\Delta^{ab}G_{i1i})\partial_{\bm\alpha_2}(G_{i1i})}\\ &\qquad + \sum_{l\ge 2}\sideset{}{'}\sum_{cd}\sum_{\bm\beta_1,\bm\beta_2} \E \braket{\Delta^{ab}\partial_{\bm\alpha_1}\partial_{\bm\beta_1} G_{i1i}}\braket{\Delta^{cd}\partial_{\bm\alpha_2}\partial_{\bm\beta_2} G_{i1i}}, \end{split} \end{equation} where \(\bm\beta_i\) are understood to be summed over \(\bm\beta_i\in\{cd,dc\}^{l_i}\) with \(l_1+l_2=l\). After inserting the first line of~\eqref{eq Gi1i cum second exp} back into~\eqref{G1Gii first} we obtain an overall factor of \(n^{-3-(k+1)/2}\) as well as the \(\sum_{ab}\)-summation over some \(\partial_{\bm\alpha} (\mathcal{G})_{ab}\), where \(\mathcal{G}\) is a product of either \(2+5\) or \(3+4\) \(G_1\)'s and \(G_i\)'s respectively with \(G_i\) in beginning and end. We can bound \(\abs{\partial_{\bm\alpha} (\mathcal{G})_{ab}}\prec \eta_1^{-2}\eta_i^{-4}+\eta_1^{-3}\eta_i^{-3}\le\eta_1^{-2}\eta_i^{-2}\eta_\ast^{-2}\) and thus can estimate the sum by \(n^{-5/2}\eta_1^{-2}\eta_i^{-2}\eta_\ast^{-2}\) since \(k\ge 2\). Here we used~\eqref{eq general iso bound} to estimate all matrix elements of the form \(\mathcal{G}'_{ab}, \mathcal{G}'_{aa},\dots\) emerging after performing the derivative \(\partial_{\bm \alpha} (\mathcal{G})_{ab}\). 
Now we turn to the second line of~\eqref{eq Gi1i cum second exp} when inserted back into~\eqref{G1Gii first}, where we obtain a total prefactor of \(n^{-(k+l)/2-3}\), a summation \(\sum_{abcd}\) over \((\partial_{\bm\alpha_1}\partial_{\bm\beta_1}G_{i1i})_{ab}(\partial_{\bm\alpha_2}\partial_{\bm\beta_2}G_{i1i})_{cd}\). In case \(k=l=2\), by parity, after performing the derivatives at least two factors are off-diagonal, while in case \(k+l=5\) at least one factor is off-diagonal. Thus we obtain a bound of \(n^{1-(k+l)/2}\eta_1^{-2}\eta_i^{-2}\) multiplied by a Ward-improvement of \((n\eta_\ast)^{-1}\) in the first, and \((n\eta_\ast)^{-1/2}\) in the second case. Thus we conclude \begin{equation}\label{eq Gi1i cum further exp} \frac{1}{n^{(k+1)/2}} \sum_{\substack{k_1+k_2=k \\ k\ge 2}}\sideset{}{'}\sum_{ab} \sum_{\bm\alpha_1,\bm\alpha_2} \E\braket{\Delta^{ab}\partial_{\bm\alpha_1} G_{i1i}}\braket{\underline{W\partial_{\bm\alpha_2} G_{i1i}}} = \mathcal{O}\Bigl(\frac{1}{n^2\eta_1^2\eta_i^2\eta_\ast^2}\Bigr). \end{equation} Finally, we consider the Gaussian part of the cumulant expansion of the remaining terms in the first line of~\eqref{G1Gii first}, for which we obtain \begin{equation}\label{eq gauss gauss} \begin{split} \frac{1}{n^2}\widetilde \E\braket{ (G_{i1i} \widetilde W G_i+G_{1i}\widetilde WG_{i1}+G_{i}\widetilde W G_{i1i})^2 } = O_\prec\Bigl(\frac{1}{n^2 \eta_1^2\eta_i^2\eta_\ast^2}\Bigr) \end{split} \end{equation} since \[ \begin{split} \abs{\braket{G_i G_i}}&\prec \frac{1}{\eta_i},\quad \abs{\braket{G_i G_{i1}}}\prec \frac{1}{\eta_1\eta_i},\quad \abs{\braket{G_i G_{i1i}}}\prec\frac{1}{\eta_1\eta_i^2}, \\ \abs{\braket{G_{1i}G_{1i}}}&\prec \frac{1}{\eta_1^2\eta_i}, \quad \abs{\braket{G_{1i}G_{i1i}}}\prec \frac{1}{\eta_1^2\eta_i^2}, \quad \abs{\braket{G_{i1i}G_{i1i}}}\prec \frac{1}{\eta_1^2\eta_i^3} \end{split} \] due to~\eqref{eq general av bound}. 
By combining~\eqref{Gi1i single Gauss}--\eqref{eq gauss gauss} we conclude the proof of~\eqref{var trace bound} using~\eqref{G1Gii first}. \end{proof} \section{Independence of the small eigenvalues of \texorpdfstring{\(H^{z_1}\)}{Hz1} and \texorpdfstring{\(H^{z_2}\)}{Hz2}}\label{sec:IND} Given an \(n\times n\) i.i.d.\ complex matrix \(X\), for any \(z\in\mathbf{C}\) we recall that the Hermitisation of \(X-z\) is given by \begin{equation}\label{eq:her} H^z:= \left( \begin{matrix} 0 & X-z \\ X^*-\overline{z} & 0 \end{matrix}\right). \end{equation} The block structure of \(H^z\) induces a spectrum symmetric with respect to zero, i.e.\ denoting by \(\{\lambda_{\pm i}^z\}_{i=1}^n\) the eigenvalues of \(H^z\), we have \(\lambda_{-i}^z=-\lambda_i^z\) for any \(i\in[n]\). Denote the resolvent of \(H^z\) by \(G^z\), i.e.\ on the imaginary axis \(G^z\) is defined by \(G^z(\mathrm{i} \eta):= (H^z-\mathrm{i}\eta)^{-1}\), with \(\eta>0\). \begin{convention}\label{rem:no0} We omitted the index \(i=0\) in the definition of the eigenvalues of \(H^z\). In the remainder of this section we always assume that all indices are non-zero, e.g.\ we use the notation \[ \sum_{j=-n}^n := \sum_{j=-n}^{-1}+ \sum_{j=1}^n. \] Similarly, by \(\abs{i}\le A\), for some \(A>0\), we mean \(0<\abs{i}\le A\), etc. \end{convention} The main result of this section is the proof of Proposition~\ref{prop:indmr}, which follows from Proposition~\ref{prop:indeig} and the rigidity estimates in Section~\ref{sec:ririri}. \begin{proposition}\label{prop:indeig} Fix \(p\in\mathbf{N}\).
For any sufficiently small constants \(\omega_d,\omega_f, \omega_h>0\) such that \(\omega_h\ll \omega_f\), there exist \(\omega, \widehat{\omega}, \delta_0,\delta_1>0\) with \(\omega_h\ll \delta_m\ll \widehat{\omega}\ll \omega\ll \omega_f\), for \(m=0,1\), such that for any fixed \(z_1,\dots,z_p\in \mathbf{C}\) with \(\abs{z_l}\le 1-n^{-\omega_h}\) and \(\abs{z_l-z_m}\ge n^{-\omega_d}\), for \(l,m\in [p]\), \(l\ne m\), it holds that \begin{equation}\label{eq:indA} \begin{split} \E &\prod_{l=1}^p \frac{1}{n}\sum_{\abs{i_l}\le n^{\widehat{\omega}}} \frac{\eta_l}{(\lambda_{i_l}^{z_l})^2+\eta_l^2}=\prod_{l=1}^p \E \frac{1}{n}\sum_{\abs{i_l}\le n^{\widehat{\omega}}} \frac{\eta_l}{(\lambda_{i_l}^{z_l})^2+\eta_l^2} \\ &\quad+\mathcal{O}\left(\frac{n^{\widehat{\omega}}}{n^{1+\omega}} \sum_{l=1}^p\frac{1}{\eta_l}\times\prod_{m=1}^p\left( 1+\frac{n^\xi}{n\eta_m}\right)+\frac{n^{p\xi+2\delta_0} n^{\omega_f}}{n^{3/2}}\sum_{l=1}^p \frac{1}{\eta_l}+\frac{n^{p\delta_0+\delta_1}}{n^{\widehat{\omega}}}\right), \end{split} \end{equation} for any \(\xi>0\), where \(\eta_1,\dots, \eta_p\in [n^{-1-\delta_0},n^{-1+\delta_1}]\) and the implicit constant in \(\mathcal{O}(\cdot)\) may depend on \(p\). \end{proposition} We recall that the eigenvalues of \(H^z\) are labelled by \(\lambda_{-n}\le \dots\le \lambda_{-1}\le\lambda_1\le\dots \le \lambda_n\), hence the summation over \(\abs{i_l}\le n^{\widehat{\omega}}\) in~\eqref{eq:indA} is over the smallest (in absolute value) eigenvalues of \(H^z\). The remainder of Section~\ref{sec:IND} is organised as follows: in Section~\ref{sec:ririri} we state rigidity of the eigenvalues of the matrices \(H^{z_l}\) and a local law for \(\Tr G^{z_l}\); then, using these results and Proposition~\ref{prop:indeig}, we conclude the proof of Proposition~\ref{prop:indmr}. In Section~\ref{sec:PO} we state the main technical results needed to prove Proposition~\ref{prop:indeig} and conclude its proof.
In Section~\ref{sec:BOUNDE} we estimate the overlaps of eigenvectors, corresponding to small indices, of \(H^{z_l}\), \(H^{z_m}\) for \(l\ne m\); this is the main input to prove the asymptotic independence in Proposition~\ref{prop:indeig}. In Section~\ref{sec:fixun} we present Proposition~\ref{pro:ciala}, a modification of the pathwise coupling of DBMs from~\cite{MR3914908,MR3541852} (adapted to the \(2\times 2\) matrix model~\eqref{eq:her} in~\cite{MR3916329}), which is needed to deal with the (small) correlation of \({\bm \lambda}^{z_l}\), the eigenvalues of \(H^{z_l}\), for different \(l\)'s. In Section~\ref{sec:noncelpiu} we prove some technical lemmata used in Section~\ref{sec:PO}. Finally, in Section~\ref{sec:proofFEU} we prove Proposition~\ref{pro:ciala}. \subsection{Rigidity of eigenvalues and proof of Proposition~\ref{prop:indmr}}\label{sec:ririri} In this section, before proceeding with the actual proof of Proposition~\ref{prop:indeig}, we state the local law away from the imaginary axis, proven in~\cite{MR4235475}, that will be used in the following sections. We remark that the averaged and entry-wise version of this local law for \(\abs{z}\le 1-\epsilon\), for some small fixed \(\epsilon>0\), has already been established in~\cite[Theorem 3.4]{MR3230002}. \begin{proposition}[Theorem 3.1 of~\cite{MR4235475}]\label{theo:trll} Let \(\omega_h>0\) be sufficiently small, and define \(\delta_l:= 1-\abs{z_l}^2\). Then with very high probability it holds that \begin{equation} \label{eq:lll} \abs*{\frac{1}{2n}\sum_{1\le\abs{i}\le n} \frac{1}{\lambda_i^{z_l}-w}-m^{z_l}(w)}\le \frac{\delta_l^{-100}n^\xi}{n\Im w}, \end{equation} uniformly in \(\abs{z_l}^2\le 1-n^{-\omega_h}\) and \(0< \Im w\le 10\). Here \(m^{z_l}\) denotes the solution of~\eqref{eq m}. \end{proposition} Note that the quantities \(\delta_l:= 1-\abs{z_l}^2\) introduced in Proposition~\ref{theo:trll} are not to be confused with the exponents \(\delta_0, \delta_1\) introduced in Proposition~\ref{prop:indeig}.
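As a quick numerical illustration of Proposition~\ref{theo:trll} on the imaginary axis, one can compare \(\Im\braket{G^z(\mathrm{i}\eta)}\) for a sampled matrix with \(\Im m^z(\mathrm{i}\eta)\). The sketch below is not part of the argument; it assumes complex Gaussian entries with \(\E\abs{x_{ab}}^2=1/n\) and uses a scalar fixed-point form of~\eqref{eq m} obtained from the relations \(m=\mathrm{i}\Im m\), \(u=\Im m/(\eta+\Im m)\), \(u=-m^2+u^2\abs{z}^2\) recorded earlier.

```python
import numpy as np

rng = np.random.default_rng(1)
n, z, eta = 400, 0.6, 0.5

def im_m(eta, absz2, iters=2000):
    # Fixed point for a = Im m^z(i*eta); a rearrangement of u = -m^2 + u^2|z|^2
    # with m = i*a and u = a/(eta+a).
    a = 1.0
    for _ in range(iters):
        a = (eta + a) / ((eta + a) ** 2 + absz2)
    return a

# Consistency with the limiting density: Im m^z(0) = sqrt(1-|z|^2) for |z| < 1.
assert abs(im_m(1e-9, z ** 2) - np.sqrt(1 - z ** 2)) < 1e-6

# The eigenvalues of H^z are +/- the singular values of X - z.
X = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2 * n)
s = np.linalg.svd(X - z * np.eye(n), compute_uv=False)
im_avg_G = np.mean(eta / (s ** 2 + eta ** 2))  # = Im <G^z(i*eta)>
assert abs(im_avg_G - im_m(eta, z ** 2)) < 0.05
```

At \(\eta=1/2\) the two quantities agree up to an error of order \((n\eta)^{-1}\), consistent with~\eqref{eq:lll}.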
Let \(\{\lambda^z_{\pm i}\}_{i=1}^n\) denote the eigenvalues of \(H^z\), and recall that \(\rho^z(E)=\pi^{-1} \Im m^z(E+\mathrm{i} 0)\) is the limiting (self-consistent) density of states. Then by Proposition~\ref{theo:trll} the rigidity of \(\lambda^z_i\) follows by a standard application of the Helffer-Sj\"ostrand formula (see e.g.~\cite[Lemma 7.1, Theorem 7.6]{MR3068390} or~\cite[Section 5]{MR2871147} for a detailed derivation): \begin{equation}\label{eq:rigneed} \abs*{\lambda_i^{z}-\gamma_i^z }\le \frac{\delta^{-100} n^\xi}{n}, \qquad \abs{i}\le cn, \end{equation} with \(c>0\) a small constant and \(\delta:= 1-\abs{z}^2\), with very high probability, uniformly in \(\abs{z}\le 1-n^{-\omega_h}\). The quantiles \(\gamma_i^z\) are defined by \begin{equation}\label{eq:defquant} \frac{i}{n}=\int_0^{\gamma_i^z} \rho^z(E)\operatorname{d}\!{} E, \qquad 1\le i \le n, \end{equation} and \(\gamma_{-i}^z:= -\gamma_i^z\) for \(-n\le i \le -1\). Note that by~\eqref{eq:defquant} it follows that \(\gamma_i^z\sim i/(n\rho^z(0))\) for \(\abs{i}\le n^{1-10\omega_h}\), where \(\rho^z(0)=\Im m^z(0)=(1-\abs{z}^2)^{1/2}\) for \(\abs{z}< 1\) by~\eqref{eq:expm}. Using the rigidity bound in~\eqref{eq:rigneed} together with Proposition~\ref{prop:indeig}, we now conclude the proof of Proposition~\ref{prop:indmr}. \begin{proof}[Proof of Proposition~\ref{prop:indmr}] Let \(z_1,\dots, z_p\) be such that \(\abs{z_l}\le 1-n^{-\omega_h}\) and \(\abs{z_l-z_m}\ge n^{-\omega_d}\), for any \(l,m\in [p]\) with \(l\ne m\), where \(\omega_d,\omega_h\) are defined in Proposition~\ref{prop:indmr}. Let \(\omega,\widehat{\omega}, \delta_0,\delta_1\) be as in Proposition~\ref{prop:indeig}, i.e. \[ \omega_h\ll \delta_m\ll \widehat{\omega}\ll \omega\ll \omega_f, \] for \(m=0,1\). For a detailed summary of all the different scales in the proof of Proposition~\ref{prop:indeig}, and hence of Proposition~\ref{prop:indmr}, see Section~\ref{rem:s} below.
Write \begin{equation} \label{eq:dedrez} \braket{ G^{z_l}(\mathrm{i}\eta_l) }=\frac{\mathrm{i}}{2n}\left[\sum_{\abs{i}\le n^{\widehat{\omega}}}+\sum_{n^{\widehat{\omega}}< \abs{i}\le n}\right] \frac{\eta_l}{(\lambda_i^{z_l})^2+\eta_l^2}, \end{equation} for \(\eta_l\in [n^{-1-\delta_0},n^{-1+\delta_1}]\). As a consequence of Proposition~\ref{prop:indeig}, the summations over \(\abs{i}\le n^{\widehat{\omega}}\) are asymptotically independent for different \(l\)'s. We now prove that the sum over \(n^{\widehat{\omega}}<\abs{i}\le n\) in~\eqref{eq:dedrez} is bounded by \(n^{-c}\) for some small constant \(c>0\). Since \(\omega_h\ll \widehat{\omega}\), the rigidity of the eigenvalues in~\eqref{eq:rigneed} holds for \(n^{\widehat{\omega}}\le \abs{i}\le n^{1-10\omega_h}\); hence we conclude the following bound with very high probability: \begin{equation} \label{eq:rigbb} \frac{1}{n}\sum_{n^{\widehat{\omega}}\le \abs{i} \le n} \frac{\eta_l}{(\lambda_i^{z_l})^2+\eta_l^2}\lesssim n^{40\omega_h}\sum_{n^{\widehat{\omega}}\le\abs{i}\le n} \frac{n\eta_l }{i^2 (\rho^{z_l}(0))^2}\lesssim \frac{n^{\delta_1+40\omega_h}}{n^{\widehat{\omega}}}, \end{equation} where we used that \((\lambda_i^z)^2+\eta^2\gtrsim n^{-40\omega_h}\) for \(n^{1-10\omega_h}\le \abs{i}\le n\), and that \(\eta_l\in[n^{-1-\delta_0},n^{-1+\delta_1}]\). In particular, in~\eqref{eq:rigbb} we used that by~\eqref{eq:defquant} it follows that \(\gamma_i^{z_l}\sim i/(n\rho^{z_l}(0))\) for \(\abs{i}\le n^{1-10\omega_h}\), where \(\rho^{z_l}(0)=\Im m^{z_l}(0)=(1-\abs{z_l}^2)^{1/2}\) for \(\abs{z_l}< 1\) by~\eqref{eq:expm}.
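To make the mechanism behind~\eqref{eq:rigbb} transparent, here is a hedged back-of-the-envelope version of the tail estimate, using only the quantile scaling \(\gamma_i^{z_l}\sim i/(n\rho^{z_l}(0))\) and \(\eta_l\le n^{-1+\delta_1}\); the additional factors \(n^{40\omega_h}\) and \((\rho^{z_l}(0))^{-2}\) in~\eqref{eq:rigbb} absorb the rigidity error and the regime \(\abs{i}\ge n^{1-10\omega_h}\).

```latex
% Back-of-the-envelope tail estimate: with \lambda_i^{z_l} of order
% \gamma_i^{z_l} \sim i/(n \rho^{z_l}(0)) and \rho^{z_l}(0) \le 1,
\[
\frac{1}{n}\sum_{n^{\widehat{\omega}}\le \abs{i}\le n^{1-10\omega_h}}
\frac{\eta_l}{(\lambda_i^{z_l})^2+\eta_l^2}
\lesssim \sum_{i\ge n^{\widehat{\omega}}}\frac{n\eta_l\,(\rho^{z_l}(0))^2}{i^2}
\lesssim \frac{n\eta_l}{n^{\widehat{\omega}}}
\le \frac{n^{\delta_1}}{n^{\widehat{\omega}}},
\]
% using \sum_{i \ge N} i^{-2} \lesssim 1/N; this matches the right-hand side
% of (eq:rigbb) up to the factor n^{40 \omega_h}.
```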
Combining~\eqref{eq:dedrez}--\eqref{eq:rigbb} with Proposition~\ref{prop:indeig} we immediately conclude that \[ \begin{split} \E \prod_{l=1}^p \braket{ G^{z_l}(\mathrm{i}\eta_l) }&= \E \prod_{l=1}^p \frac{\mathrm{i}}{2n}\sum_{\abs{i}\le n^{\widehat{\omega}}} \frac{\eta_l}{(\lambda_i^{z_l})^2+\eta_l^2}+\mathcal{O}\left( \frac{n^{\delta_1+40\omega_h}}{n^{\widehat{\omega}}} \right) \\ &=\prod_{l=1}^p\E \frac{\mathrm{i}}{2n}\sum_{\abs{i}\le n^{\widehat{\omega}}} \frac{\eta_l}{(\lambda_i^{z_l})^2+\eta_l^2}+\mathcal{O}\left(\frac{n^{p\delta_0+\widehat{\omega}}}{n^{\omega}}+ \frac{n^{\delta_1+40\omega_h}}{n^{\widehat{\omega}}} \right) \\ &=\prod_{l=1}^p\E \braket{ G^{z_l}(\mathrm{i}\eta_l)}+\mathcal{O}\left( \frac{n^{\delta_1+40\omega_h}}{n^{\widehat{\omega}}}+\frac{n^{p\delta_0+\widehat{\omega}}}{n^{\omega}} \right). \end{split} \] This concludes the proof of Proposition~\ref{prop:indmr} since \(\omega_h\ll \delta_m\ll \widehat{\omega}\ll\omega\), with \(m=0,1\). \end{proof} We conclude Section~\ref{sec:ririri} with some properties of \(m^z\), the unique solution of~\eqref{eq m}. Fix \(z\in \mathbf{C} \), and consider the \(2n \times 2n\) matrix \(A+F\), with \(F\) a Wigner matrix, whose entries are centred random variables of variance \((2n)^{-1}\), and \(A\) is a deterministic diagonal matrix \(A:= \diag(\abs{z},\dots,\abs{z},-\abs{z},\dots,-\abs{z})\). Then by~\cite[Eq.~(2.1)]{MR4026551},~\cite[Eq.~(2.2)]{MR4134946} it follows that the corresponding \emph{Dyson equation} is given by \begin{equation}\label{eq:dyseq} \begin{cases} -\frac{1}{m_1}= w-\abs{z}+\frac{m_1+m_2}{2} \\ -\frac{1}{m_2}= w+\abs{z}+\frac{m_1+m_2}{2}, \end{cases} \end{equation} which has a unique solution under the assumption \(\Im m_1, \Im m_2>0\). By~\eqref{eq:dyseq} it readily follows that \(m^z\), the solution of~\eqref{eq m}, satisfies \begin{equation}\label{eq:relwig} m^z(w)=\frac{m_1(w)+m_2(w)}{2}. 
\end{equation} In particular, this implies that all the regularity properties of \(m_1+m_2\) (see e.g.~\cite[Theorem 2.4, Lemma A.7]{MR4031100},~\cite[Proposition 2.3, Lemma A.1]{MR4164728}) hold for \(m^z\) as well, e.g.\ \(m^z\) is \(1/3\)-H\"older continuous for any \(z\in\mathbf{C} \). \subsection{Overview of the proof of Proposition~\ref{prop:indeig}}\label{sec:PO} The main result of this section is Proposition~\ref{prop:indeig}; its proof is divided into two further subsections. In Lemma~\ref{lem:GFTGFT}, we prove that we can add a common small Ginibre component to the matrices \(H^{z_l}\), with \(l\in [p]\), \(p\in\mathbf{N} \), without changing their joint eigenvalue distribution much. In Section~\ref{sec:COMPPRO}, we introduce comparison processes for the process defined in~\eqref{eq:DBMeA} below, with initial data \({\bm \lambda}^{z_l}=\{\lambda_{\pm i}^{z_l}\}_{i=1}^n\), where we recall that \(\{\lambda_i^{z_l}\}_{i=1}^n\) are the singular values of \(\check{X}_{t_f}-z_l\), and \(\lambda_{-i}^{z_l}=-\lambda_i^{z_l}\) (the matrix \(\check{X}_{t_f}\) is defined in~\eqref{eq:consOU} below). Finally, in Section~\ref{sec:INDFI} we conclude the proof of Proposition~\ref{prop:indeig}. Additionally, in Section~\ref{rem:s} we summarize the different scales used in the proof of Proposition~\ref{prop:indeig}. Let \(X\) be an i.i.d.\ complex \(n\times n\) matrix, and run the Ornstein-Uhlenbeck (OU) flow \begin{equation}\label{eq:OUflow} d\widehat{X}_t=-\frac{1}{2}\widehat{X}_t \operatorname{d}\!{} t+\frac{\operatorname{d}\!{} \widehat{B}_t}{\sqrt{n}}, \qquad \widehat{X}_0=X, \end{equation} for a time \begin{equation}\label{eq:time1} t_f:= \frac{n^{\omega_f}}{n}, \end{equation} with some small exponent \(\omega_f>0\) given in Proposition~\ref{prop:indeig}, in order to add a small Gaussian component to \(X\).
\(\widehat{B}_t\) in~\eqref{eq:OUflow} is a standard matrix valued complex Brownian motion independent of \(\widehat{X}_0\), i.e.\ \(\sqrt{2}\Re \widehat{B}_{ab}\), \(\sqrt{2}\Im \widehat{B}_{ab}\) are independent standard real Brownian motions for any \(a,b\in [n]\). Then we construct an i.i.d.\ matrix \(\check{X}_{t_f}\) such that \begin{equation}\label{eq:consOU} \widehat{X}_{t_f}\stackrel{d}{=}\check{X}_{t_f}+\sqrt{ct_f} U, \end{equation} for some constant \(c>0\) very close to \(1\), and \(U\) is a complex Ginibre matrix independent of \(\check{X}_{t_f}\). Next, we define the matrix flow \begin{equation}\label{eq:DBMmA} \operatorname{d}\!{} X_t=\frac{\operatorname{d}\!{} B_t}{\sqrt{n}}, \quad X_0=\check{X}_{t_f}, \end{equation} where \(B_t\) is a standard matrix valued complex Brownian motion independent of \(X_0\) and \(\widehat{B}_t\). Note that by construction \(X_{ct_f}\) is such that \begin{equation}\label{eq:impGFT} X_{ct_f}\stackrel{d}{=}\widehat{X}_{t_f}. \end{equation} Define the matrix \(H_t^{z_l}\) as in~\eqref{eq:her} replacing \(X-z\) by \(X_t-z_l\), for any \(l\in [p]\), then the flow in~\eqref{eq:DBMmA} induces the following DBM flow on the eigenvalues of \(H_t^{z_l}\) (cf.~\cite[Eq.~(5.8)]{MR2919197}): \begin{equation}\label{eq:DBMeA} \operatorname{d}\!{} \lambda_i^{z_l}(t)=\sqrt{\frac{1}{2 n}}\operatorname{d}\!{} b_i^{z_l}+\frac{1}{2n}\sum_{j\ne i} \frac{1}{\lambda_i^{z_l}(t)-\lambda_j^{z_l}(t)} \operatorname{d}\!{} t, \qquad 1\le \abs{i}\le n, \end{equation} with initial data \(\{\lambda_{\pm i}^{z_l}(0)\}_{i=1}^n\), where \(\lambda_i^{z_l}(0)\), with \(i\in [n]\) and \(l\in [p]\), are the singular values of \(\check{X}_{t_f}-z_l\), and \(\lambda_{-i}^{z_l}=-\lambda_i^{z_l}\). The well-posedness of~\eqref{eq:DBMeA} follows by~\cite[Appendix A]{MR3916329}. 
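Since the spectrum of \(H_t^{z_l}\) is symmetric, \(\lambda_{-j}^{z_l}=-\lambda_j^{z_l}\), the interaction term in~\eqref{eq:DBMeA} can be rewritten so that the special \(j=-i\) term becomes explicit; the following identity (for \(i>0\)) is a purely illustrative rewriting of the drift.

```latex
% Splitting the sum in (eq:DBMeA) over positive and negative j, for i > 0:
\[
\frac{1}{2n}\sum_{j\ne i}\frac{1}{\lambda_i^{z_l}(t)-\lambda_j^{z_l}(t)}
=\frac{1}{2n}\sum_{\substack{j=1 \\ j\ne i}}^{n}
 \left(\frac{1}{\lambda_i^{z_l}(t)-\lambda_j^{z_l}(t)}
 +\frac{1}{\lambda_i^{z_l}(t)+\lambda_j^{z_l}(t)}\right)
 +\frac{1}{4n\,\lambda_i^{z_l}(t)},
\]
% where the last term comes from j = -i, since
% 1/(\lambda_i - \lambda_{-i}) = 1/(2 \lambda_i).
```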
It follows from this derivation that the Brownian motions \(\{b_i^{z_l}\}_{i=1}^n\), omitting the \(t\)-dependence, are defined as \begin{equation}\label{eq:formbm} \operatorname{d}\!{} b_i^{z_l}:= \sqrt{2}\left(\operatorname{d}\!{} B_{ii}^{z_l}+\operatorname{d}\!{} \overline{B_{ii}^{z_l}}\right), \qquad \operatorname{d}\!{} B_{ij}^{z_l}:= \sum_{a,b=1}^n \overline{ u_i^{z_l}(a)} \operatorname{d}\!{} B_{ab}v_j^{z_l}(b), \end{equation} where \(({\bm u}_i^{z_l},\pm {\bm v}_i^{z_l})\) are the orthonormal eigenvectors of \(H_t^{z_l}\) with corresponding eigenvalues \(\lambda_{\pm i}^{z_l}\), and \(B_{ab}\) are the entries of the Brownian motion defined in~\eqref{eq:DBMmA}. For negative indices we define \(b_{-i}^{z_l}:= -b_i^{z_l}\). It follows from~\eqref{eq:formbm} that for each fixed \(l\) the collection of Brownian motions \({\bm b}^{z_l}=\{b_i^{z_l}\}_{i=1}^n\) consists of i.i.d.\ Brownian motions; however, the families \({\bm b}^{z_l}\) are not independent for different \(l\)'s, and in fact their joint distribution is not necessarily Gaussian. The derivation of~\eqref{eq:DBMeA} follows standard steps; see e.g.~\cite[Section 12.2]{MR3699468}. For the convenience of the reader we include this derivation in Appendix~\ref{sec:derdbm}. \begin{remark} We point out that in the formula~\cite[Eq.~(3.9)]{MR3916329} analogous to~\eqref{eq:DBMeA} the term \(j=-i\) in~\eqref{eq:DBMeA} is apparently missing. This additional term does not influence the results in~\cite[Section 3]{MR3916329} (which are proven for the real DBM, for which the term \(j=-i\) is actually not present). \end{remark} As a consequence of~\eqref{eq:impGFT} we conclude the following lemma.
\begin{lemma}\label{lem:GFTGFT} Let \({\bm \lambda}^{z_l}=\{\lambda_{\pm i}^{z_l}\}_{i=1}^n\) be the eigenvalues of \(H^{z_l}\), and let \({\bm \lambda}^{z_l}(t)\) be the solution of~\eqref{eq:DBMeA} with initial data \({\bm \lambda}^{z_l}\). Then \begin{equation} \label{eq:stgft2} \begin{split} \E \prod_{l=1}^p \frac{1}{n}\sum_{\abs{i_l}\le n^{\widehat{\omega}}} \frac{\eta_l}{(\lambda_{i_l}^{z_l})^2+\eta_l^2}&=\E \prod_{l=1}^p \frac{1}{n}\sum_{\abs{i_l}\le n^{\widehat{\omega}}} \frac{\eta_l}{(\lambda_{i_l}^{z_l}(ct_f))^2+\eta_l^2} \\ &\quad+\mathcal{O}\left(\frac{n^{p\xi+2\delta_0} t_f}{n^{1/2}}\sum_{l=1}^p \frac{1}{\eta_l}+\frac{n^{p\delta_0+\delta_1}}{n^{\widehat{\omega}}}\right), \end{split} \end{equation} for any sufficiently small \(\widehat{\omega}, \delta_0,\delta_1>0\) such that \(\delta_m\ll \widehat{\omega}\), for \(m=0,1\), where \(\eta_l\in [n^{-1-\delta_0},n^{-1+\delta_1}]\) and \(t_f\) is defined in~\eqref{eq:time1}. \begin{proof} The equality in~\eqref{eq:stgft2} follows by a standard Green's function comparison (GFT) argument (e.g.\ see~\cite[Proposition 3.1]{MR4221653}) for \(\braket{ G^{z_l}(\mathrm{i}\eta_l)}\), combined with the same argument as in the proof of Proposition~\ref{prop:indmr}, using the local law~\cite[Theorem 5.1]{MR3770875} and~\eqref{eq:impGFT}, to show that the summation over \(n^{\widehat{\omega}}<\abs{i}\le n\) is negligible. We remark that the GFT used in this lemma is much easier than the one in~\cite[Proposition 3.1]{MR4221653}, since here we use GFT only for a very short time \(t_f\sim n^{-1+\omega_f}\), for a very small \(\omega_f>0\), whilst in~\cite[Proposition 3.1]{MR4221653} the GFT is considered up to a time \(t=+\infty\). The scaling in the error term in~\cite[Proposition 3.1]{MR4221653} is different from the error term in~\eqref{eq:stgft2}, since the scaling therein refers to the cusp scaling.
\end{proof} \end{lemma} \subsubsection{Definition of the comparison processes for \({\bm \lambda}^{z_l}(t)\)}\label{sec:COMPPRO} The philosophy behind the proof of Proposition~\ref{prop:indeig} is to compare the distribution of \({\bm \lambda}^{z_l}(t)=\{\lambda_{\pm i}^{z_l}(t)\}_{i=1}^n\), the strong solutions of~\eqref{eq:DBMeA} for \(l\in [p]\), which are correlated for different \(l\)'s and realized on a probability space \(\Omega_b\), with carefully constructed independent processes \({\bm \mu}^{(l)}(t)=\{\mu^{(l)}_{\pm i}(t)\}_{i=1}^n\) on a different probability space \(\Omega_\beta\). We choose \({\bm \mu}^{(l)}(t)\) to be the solution of \begin{equation}\label{eq:ginev} \operatorname{d}\!{}\mu_i^{(l)}(t)=\frac{\operatorname{d}\!{} \beta_i^{(l)}}{\sqrt{2n}}+\frac{1}{2n}\sum_{j\ne i} \frac{1}{\mu_i^{(l)}(t)-\mu_j^{(l)}(t)} \operatorname{d}\!{} t, \quad \mu_i^{(l)}(0)=\mu_i^{(l)}, \end{equation} for \(\abs{i}\le n\), with \(\mu_i^{(l)}\) the eigenvalues of the matrix \[ H^{(l)}:= \left(\begin{matrix} 0 & X^{(l)} \\ (X^{(l)})^* & 0 \end{matrix}\right) \] where \(X^{(l)}\) are independent Ginibre matrices, \({\bm \beta}^{(l)}=\{\beta_i^{(l)}\}_{i=1}^n\) are independent vectors of i.i.d.\ standard real Brownian motions, and \(\beta_{-i}^{(l)}=-\beta_i^{(l)}\). We let \(\mathcal{F}_{\beta,t}\) denote the common filtration of the Brownian motions \({\bm \beta}^{(l)}\) on \(\Omega_\beta\). In the remainder of this section we define two processes \( \widetilde{{\bm \lambda}}^{(l)}\), \( \widetilde{{\bm \mu}}^{(l)}\) so that, for large enough times \(t\ge 0\) and small indices \(i\), \(\widetilde{\lambda}_i^{(l)}(t)\) and \(\widetilde{\mu}_i^{(l)}(t)\) will be close to \(\lambda^{z_l}_i(t)\) and \(\mu_i^{(l)}(t)\), respectively, with very high probability.
Additionally, the processes \(\widetilde{{\bm \lambda}}^{(l)}\), \( \widetilde{{\bm \mu}}^{(l)}\) will be such that they have the same joint distribution: \begin{equation}\label{eq:nefdis} \left( \widetilde{{\bm \lambda}}^{(1)}(t),\dots, \widetilde{{\bm \lambda}}^{(p)}(t)\right)_{t\ge 0}\stackrel{d}{=}\left(\widetilde{{\bm \mu}}^{(1)}(t),\dots, \widetilde{{\bm \mu}}^{(p)}(t)\right)_{t\ge 0}. \end{equation} Fix \(\omega_A>0\) and define the process \(\widetilde{\bm \lambda}^{(l)}(t)\) to be the solution of \begin{equation}\label{eq:nuproc} \operatorname{d}\!{} \widetilde{\lambda}^{(l)}_i(t)=\frac{1}{2n}\sum_{j\ne i} \frac{1}{\widetilde{\lambda}^{(l)}_i(t)-\widetilde{\lambda}^{(l)}_j(t)} \operatorname{d}\!{} t+\begin{cases} \sqrt{\frac{1}{2 n}}\operatorname{d}\!{} b_i^{z_l} &\text{if} \quad \abs{i}\le n^{\omega_A} \\ \sqrt{\frac{1}{2 n}}\operatorname{d}\!{} \widetilde{b}_i^{(l)} &\text{if} \quad n^{\omega_A}< \abs{i}\le n, \end{cases} \end{equation} with initial data \(\widetilde{\bm \lambda}^{(l)}(0)\) being the singular values, taken with positive and negative sign, of independent Ginibre matrices \(\widetilde{Y}^{(l)}\) independent of \({\bm \lambda}^{z_l}(0)\). Here \(\operatorname{d}\!{} b_i^{z_l}\) is from~\eqref{eq:DBMeA}; this is used for small indices. For large indices we define the driving Brownian motions to be an independent collection \(\set{\{\widetilde{b}_i^{(l)}\}_{i=n^{\omega_A}+1}^n \given l\in [p]}\) of \(p\) vector-valued i.i.d.\ standard real Brownian motions which are also independent of \(\set{\{b_{\pm i}^{z_l}\}_{i=1}^n\given l\in [p]}\), and we set \(\widetilde{b}_{-i}^{(l)}=-\widetilde{b}_i^{(l)}\). The Brownian motions \({\bm b}^{z_l}\), with \(l \in [p]\), and \(\set{\{\widetilde{b}_i^{(l)}\}_{i=n^{\omega_A}+1}^n \given l\in [p]}\) are defined on a common probability space that we continue to denote by \(\Omega_b\) with the common filtration \(\mathcal{F}_{b,t}\).
We conclude this section by defining \(\widetilde{{\bm \mu}}^{(l)}(t)\), the comparison process of \({\bm \mu}^{(l)}(t)\). It is given as the solution of the following DBM\@: \begin{equation}\label{eq:nuproc2} \operatorname{d}\!{} \widetilde{\mu}^{(l)}_i(t)=\frac{1}{2n}\sum_{j\ne i} \frac{1}{\widetilde{\mu}^{(l)}_i(t)-\widetilde{\mu}^{(l)}_j(t)} \operatorname{d}\!{} t+\begin{cases} \sqrt{\frac{1}{2 n}}\operatorname{d}\!{} \zeta_i^{z_l} &\text{if} \quad \abs{i}\le n^{\omega_A} \\ \sqrt{\frac{1}{2 n}}\operatorname{d}\!{} \widetilde{\zeta}_i^{(l)} &\text{if} \quad n^{\omega_A}< \abs{i}\le n, \end{cases} \end{equation} with initial data \(\widetilde{\bm \mu}^{(l)}(0)\) given by the singular values of independent Ginibre matrices \(Y^{(l)}\), which are also independent of \(\widetilde{Y}^{(l)}\). We now explain how to construct the driving Brownian motions in~\eqref{eq:nuproc2} so that~\eqref{eq:nefdis} is satisfied. We only consider positive indices, since the negative indices are defined by symmetry. For indices \(n^{\omega_A}< i\le n\) we choose \(\{\widetilde{\zeta}_{\pm i}^{(l)}\}_{i=n^{\omega_A}+1}^n\) to be independent families (for different \(l\)'s) of i.i.d.\ Brownian motions, defined on the same probability space as \(\{{\bm \beta}^{(l)}: l\in [p]\}\), that are independent of the Brownian motions \(\{ \beta^{(l)}_{\pm i}\}_{i=1}^n\) used in~\eqref{eq:ginev}. For indices \(1\le i \le n^{\omega_A}\) the families \(\set{\{\zeta_i^{z_l}\}_{i=1}^{n^{\omega_A}}\given l\in [p]}\) will be constructed from the independent families \(\set{\{\beta_i^{(l)}\}_{i=1}^{n^{\omega_A}} \given l\in [p]}\) as follows. Arranging \(\set{\{\beta_i^{(l)}\}_{i=1}^{n^{\omega_A}}\given l\in [p]}\) into a single vector, we define the \(pn^{\omega_A}\)-dimensional vector \begin{equation}\label{eq:vecla} \underline{\beta}:=(\beta_1^{(1)},\dots,\beta_{n^{\omega_A}}^{(1)},\dots, \beta_1^{(p)}, \dots, \beta_{n^{\omega_A}}^{(p)}).
\end{equation} Similarly we define the \(pn^{\omega_A}\)-dimensional vector \begin{equation}\label{eq:vecla1} \underline{b}:=(b_1^{z_1},\dots,b_{n^{\omega_A}}^{z_1},\dots, b_1^{z_p}, \dots, b_{n^{\omega_A}}^{z_p}) \end{equation} which is a continuous martingale. To make our notation easier, in the following we assume that \(n^{\omega_A}\in\mathbf{N} \). For any \(i,j\in [pn^{\omega_A}]\), we use the notation \begin{equation}\label{eq:frakind} i=(l-1) n^{\omega_A}+\mathfrak{i}, \qquad j=(m-1) n^{\omega_A}+\mathfrak{j}, \end{equation} with \(l,m \in [p]\) and \(\mathfrak{i}, \mathfrak{j}\in [n^{\omega_A}]\). Note that in the definitions in~\eqref{eq:frakind} we used \((l-1), (m-1)\) instead of \(l,m\) so that \(l\) and \(m\) exactly indicate in which block of the matrix \(C(t)\) in~\eqref{eq:matC} the indices \(i,j\) are. With this notation, the covariance matrix of the increments of \( \underline{b}\) is the matrix \(C(t)\), consisting of \(p^2\) blocks of size \(n^{\omega_A}\), defined as \begin{equation}\label{eq:matC} C_{ij}(t) \operatorname{d}\!{} t:= \Exp*{\operatorname{d}\!{} b_{\mathfrak{i}}^{z_l} \operatorname{d}\!{} b_{\mathfrak{j}}^{z_m}\given\mathcal{F}_{b,t}} =\begin{cases} \Theta_{\mathfrak{i}\mathfrak{j}}^{z_l,z_m}(t) \operatorname{d}\!{} t &\text{if} \quad l\ne m, \\ \delta_{\mathfrak{i}\mathfrak{j}} \operatorname{d}\!{} t&\text{if} \quad l=m. \end{cases} \end{equation} Here \begin{equation} \label{eq:ovcorr} \Theta_{\mathfrak{i}\mathfrak{j}}^{z_l,z_m}(t):= 4\Re\bigl[\braket{ {\bm u}_{\mathfrak{i}}^{z_l}(t),{\bm u}_{\mathfrak{j}}^{z_m}(t)}\braket{ {\bm v}_{\mathfrak{i}}^{z_m}(t),{\bm v}_{\mathfrak{j}}^{z_l}(t)} \bigr], \end{equation} with \(\{{\bm w}_{\pm i} \}_{i\in [n]}=\{({\bm u}_i^{z_l}(t), \pm {\bm v}_i^{z_l}(t))\}_{i\in [n]}\) the orthonormal eigenvectors of \(H_t^{z_l}\). Note that \(\{{\bm w}_i \}_{\abs{i}\le n}\) are not well-defined if \(H_t^{z_l}\) has multiple eigenvalues.
However, without loss of generality, we can assume that almost surely \(H_t^{z_l}\) does not have multiple eigenvalues for any \(l\in [p]\), as a consequence of~\cite[Lemma 6.2]{MR4242226} (which is the adaptation of~\cite[Proposition 2.3]{MR4009717} to the \(2\times 2\) block structure of \(H_t^{z_l}\)). By Doob's martingale representation theorem~\cite[Theorem 18.12]{MR1876169} there exists a standard Brownian motion \( {\bm \theta}_t \in \mathbf{R}^{pn^{\omega_A}} \) realized on an extension \( (\widetilde\Omega_b, \widetilde{\mathcal{F}}_{b,t} )\) of the original filtrated probability space \( (\Omega_b, \mathcal{F}_{b,t}) \) such that \( \operatorname{d}\!{} \underline{\bm b}= \sqrt{C} \operatorname{d}\!{} {\bm \theta}\). Here \( {\bm \theta}_t \) and \( C(t)\) are adapted to the filtration \( \widetilde{\mathcal{F}}_{b,t} \); note that \( C=C(t)\) is a positive semi-definite matrix and \( \sqrt{C}\) denotes its positive semi-definite matrix square root. For the clarity of the presentation, the original processes \( {\bm \lambda}^{z_l}\) and the comparison processes \({\bm \mu}^{(l)}\) will be realized on completely different probability spaces. We thus construct another copy \( (\Omega_\beta, \mathcal{F}_{\beta,t} )\) of the filtrated probability space \( (\widetilde\Omega_b, \widetilde{\mathcal{F}}_{b,t} )\) and we construct a matrix valued process \(C^\#(t)\) and a Brownian motion \( \underline{\beta} \) on \(( \Omega_\beta, \mathcal{F}_{\beta,t}) \) such that \( (C^\#(t), \underline{\beta}(t) ) \) are adapted to the filtration \(\mathcal{F}_{\beta,t}\) and they have the same joint distribution as \( (C(t), {\bm \theta}(t)) \). The Brownian motion \( \underline{\beta} \) is used in~\eqref{eq:ginev} for small indices.
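To visualize the correlation structure in~\eqref{eq:matC}, consider the simplest case \(p=2\): writing \(\Theta^{z_1,z_2}(t)\) for the \(n^{\omega_A}\times n^{\omega_A}\) matrix with entries \(\Theta_{\mathfrak{i}\mathfrak{j}}^{z_1,z_2}(t)\) from~\eqref{eq:ovcorr}, the covariance matrix takes the block form below; this is a purely illustrative restatement of the definition.

```latex
% Block form of the covariance matrix (eq:matC) for p = 2; the identity blocks
% reflect that each family b^{z_l} consists of i.i.d. Brownian motions, while
% the off-diagonal blocks carry the eigenvector overlaps. By symmetry of the
% covariance, \Theta^{z_2,z_1}(t) is the transpose of \Theta^{z_1,z_2}(t).
\[
C(t)=\begin{pmatrix}
 I & \Theta^{z_1,z_2}(t) \\
 \Theta^{z_2,z_1}(t) & I
\end{pmatrix}.
\]
```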
Define the process \begin{equation}\label{eq:defzeta} \underline{\zeta}(t):=\int_0^t\sqrt{C^\#(s)} \operatorname{d}\!{} \underline{\beta}(s),\quad \underline{\zeta}=(\zeta_1^{z_1},\dots,\zeta_{n^{\omega_A}}^{z_1},\dots, \zeta_1^{z_p}, \dots, \zeta_{n^{\omega_A}}^{z_p}), \end{equation} on the probability space \(\Omega_\beta\) and define \(\zeta_{-i}^{z_l}:=-\zeta_i^{z_l}\) for any \(1\le i\le n^{\omega_A}\), \(l\in[p]\). Since \(\underline{\beta}\) is a vector of i.i.d.\ Brownian motions, we clearly have \begin{equation}\label{eq:newocov} \Exp*{\operatorname{d}\!{} \zeta_{\mathfrak{i}}^{z_l}(t)\operatorname{d}\!{} \zeta_{\mathfrak{j}}^{z_m}(t)\given \mathcal{F}_{\beta,t}}= C^\#(t)_{ij} \operatorname{d}\!{} t, \qquad \abs{\mathfrak{i}},\abs{\mathfrak{j}}\le n^{\omega_A}. \end{equation} By construction we see that the processes \( ( \{b_{\pm i}^{z_l}\}_{i=1}^{n^{\omega_A}})_{l=1}^p \) and \( ( \{\zeta_{\pm i}^{z_l}\}_{i=1}^{n^{\omega_A}} )_{l=1}^p \) have the same distribution. Furthermore, since by definition the two collections \[\set*{\{\widetilde{b}_{\pm i}^{(l)}\}_{i=n^{\omega_A}+1}^n, \{\widetilde{\zeta}^{(l)}_{\pm i}\}_{i=n^{\omega_A}+1}^n \given l\in [p]}\] are independent of \[\set*{\{b_{\pm i}^{z_l}\}_{i=1}^{n^{\omega_A}}, \{\beta_{\pm i}^{(l)}\}_{i=1}^{n^{\omega_A}}\given l\in [p]}\] and among each other, we have \begin{equation}\label{eq:BMsost} \left( \{b_{\pm i}^{z_l}\}_{i=1}^{n^{\omega_A}}, \{\widetilde{b}_{\pm i}^{(l)}\}_{i=n^{\omega_A}+1}^n\right)_{l=1}^p\stackrel{d}{=}\left( \{\zeta_{\pm i}^{z_l}\}_{i=1}^{n^{\omega_A}}, \{\widetilde{\zeta}_{\pm i}^{(l)}\}_{i=n^{\omega_A}+1}^n\right)_{l=1}^p. \end{equation} Finally, by the definitions in~\eqref{eq:nuproc},~\eqref{eq:nuproc2}, and~\eqref{eq:BMsost}, it follows that the Dyson Brownian motions \(\widetilde{\bm \lambda}^{(l)}\) and \(\widetilde{\bm \mu}^{(l)}\) have the same distribution, i.e.
\begin{equation}\label{eq:samedistr} \left(\widetilde{\bm \lambda}^{(1)}(t), \dots, \widetilde{\bm \lambda}^{(p)}(t)\right)\stackrel{d}{=} \left(\widetilde{\bm \mu}^{(1)}(t), \dots, \widetilde{\bm \mu}^{(p)}(t)\right) \end{equation} since their initial conditions, as well as their driving processes~\eqref{eq:BMsost}, agree in distribution. Note that these processes are Brownian motions for each fixed \( l\) since \( C_{ij}(t)=\delta_{\mathfrak{i}\mathfrak{j}} \) if \( l=m\), but jointly they are not necessarily Gaussian due to the non-trivial correlation \( \Theta_{\mathfrak{i}\mathfrak{j}}^{z_l,z_m} \) in~\eqref{eq:matC}. \subsubsection{Proof of Proposition~\ref{prop:indeig}}\label{sec:INDFI} In this section we conclude the proof of Proposition~\ref{prop:indeig} using the comparison processes defined in Section~\ref{sec:COMPPRO}. More precisely, we use that the processes \({\bm \lambda}^{z_l}(t)\), \(\widetilde{\bm \lambda}^{(l)}(t)\) and \({\bm \mu}^{(l)}(t)\), \(\widetilde{\bm \mu}^{(l)}(t)\) are close pathwise at time \(t_f\), as stated below in Lemma~\ref{lem:firststepmason} and Lemma~\ref{lem:secondstepmason}, respectively. The proofs of these lemmas are postponed to Section~\ref{sec:noncelpiu}. They will be a consequence of Proposition~\ref{pro:ciala}, which is an adaptation to our case of the main technical estimate of~\cite{MR3914908}. The main input is the bound on the eigenvector overlap in Lemma~\ref{lem:overb}, since it gives an upper bound on the correlation structure in~\eqref{eq:newocov}. Let \(\rho_{sc}(E) =\frac{1}{2\pi}\sqrt{4-E^2}\) denote the semicircle density. \begin{lemma}\label{lem:firststepmason} Fix \(p\in \mathbf{N} \), and let \({\bm \lambda}^{z_l}(t)\), \(\widetilde{\bm \lambda}^{(l)}(t)\), with \(l\in [p]\), be the processes defined in~\eqref{eq:DBMeA} and~\eqref{eq:nuproc}, respectively. 
For any small \(\omega_h,\omega_f>0\) such that \(\omega_h\ll \omega_f\) there exist \(\omega, \widehat{\omega}>0\) with \(\omega_h\ll \widehat{\omega}\ll \omega\ll \omega_f\), such that for any \(\abs{z_l}\le 1-n^{-\omega_h}\) it holds \begin{equation} \label{eq:firshpb} \abs*{\rho^{z_l}(0)\lambda_i^{z_l}(ct_f)-\rho_{sc}(0) \widetilde{\lambda}_i^{(l)}(ct_f) }\le n^{-1-\omega}, \qquad \abs{i}\le n^{\widehat{\omega}}, \end{equation} with very high probability, where \(t_f:= n^{-1+\omega_f}\) and \(c>0\) is defined in~\eqref{eq:impGFT}. \end{lemma} \begin{lemma}\label{lem:secondstepmason} Fix \(p\in \mathbf{N} \), and let \({\bm \mu}^{(l)}(t)\), \(\widetilde{\bm \mu}^{(l)}(t)\), with \(l\in [p]\), be the processes defined in~\eqref{eq:ginev} and~\eqref{eq:nuproc2}, respectively. For any small \(\omega_h,\omega_f, \omega_d>0\) such that \(\omega_h\ll \omega_f\) there exist \(\omega, \widehat{\omega}>0\) with \(\omega_h\ll \widehat{\omega}\ll \omega\ll \omega_f\), such that for any \(\abs{z_l}\le 1-n^{-\omega_h}\), \(\abs{z_l-z_m}\ge n^{-\omega_d}\), with \(l\ne m\), it holds \begin{equation} \label{eq:firshpb2} \abs*{\mu_i^{(l)}(ct_f)- \widetilde{\mu}_i^{(l)}(ct_f) }\le n^{-1-\omega}, \qquad \abs{i}\le n^{\widehat{\omega}}, \end{equation} with very high probability, where \(t_f:= n^{-1+\omega_f}\) and \(c>0\) is defined in~\eqref{eq:impGFT}. \end{lemma} \begin{proof}[Proof of Proposition~\ref{prop:indeig}] In the following we omit the trivial scaling factors \(\rho^{z_l}(0)\), \(\rho_{sc}(0)\) in the second term in the lhs.\ of~\eqref{eq:firshpb} to make our notation easier. 
We recall that by Lemma~\ref{lem:GFTGFT} we have \begin{equation}\label{eq:stgft} \begin{split} \E \prod_{l=1}^p \frac{1}{n}\sum_{\abs{i_l}\le n^{\widehat{\omega}}} \frac{\eta_l}{(\lambda_{i_l}^{z_l})^2+\eta_l^2}&=\E \prod_{l=1}^p \frac{1}{n}\sum_{\abs{i_l}\le n^{\widehat{\omega}}} \frac{\eta_l}{(\lambda_{i_l}^{z_l}(ct_f))^2+\eta_l^2} \\ &\quad +\mathcal{O}\left(\frac{n^{p\xi+2\delta_0} t_f}{n^{1/2}}\sum_{l=1}^p \frac{1}{\eta_l}+\frac{n^{p\delta_0+\delta_1}}{n^{\widehat{\omega}}}\right), \end{split} \end{equation} where \(\lambda_i^{z_l}(t)\) is the solution of~\eqref{eq:DBMeA} with initial data \(\lambda_i^{z_l}\). Next we replace \(\lambda_i^{z_l}(t)\) with \(\widetilde{\lambda}_i^{(l)}(t)\) for small indices by using Lemma~\ref{lem:firststepmason}; this is formulated in the following lemma whose detailed proof is postponed to the end of this section. \begin{lemma}\label{lem:stanc} Fix \(p\in\mathbf{N} \), and let \(\lambda_i^{z_l}(t)\), \(\widetilde{\lambda}_i^{(l)}(t)\), with \(l\in [p]\), be the solutions of~\eqref{eq:DBMeA} and~\eqref{eq:nuproc}, respectively. Then \begin{equation}\label{eq:hhh1} \E \prod_{l=1}^p \frac{1}{n}\sum_{\abs{i_l}\le n^{\widehat{\omega}}} \frac{\eta_l}{(\lambda_{i_l}^{z_l})^2+\eta_l^2}=\E \prod_{l=1}^p \frac{1}{n}\sum_{\abs{i_l}\le n^{\widehat{\omega}}} \frac{\eta_l}{(\widetilde{\lambda}_{i_l}^{(l)}(ct_f))^2+\eta_l^2}+\mathcal{O}(\Psi), \end{equation} where \(\lambda_{i_l}^{z_l}=\lambda_{i_l}^{z_l}(0)\), \(t_f=n^{-1+\omega_f}\), and the error term is given by \[ \Psi:= \frac{n^{\widehat{\omega}}}{n^{1+\omega}}\left(\sum_{l=1}^p \frac{1}{\eta_l}\right)\cdot \prod_{l=1}^p \left(1+\frac{n^\xi}{n\eta_l}\right)+\frac{n^{p\xi+2\delta_0} t_f}{n^{1/2}}\sum_{l=1}^p \frac{1}{\eta_l}+\frac{n^{p\delta_0+\delta_1}}{n^{\widehat{\omega}}}.
\] \end{lemma} By~\eqref{eq:samedistr} it readily follows that \begin{equation} \label{eq:hhh2} \E \prod_{l=1}^p \frac{1}{n}\sum_{\abs{i_l}\le n^{\widehat{\omega}}} \frac{\eta_l}{(\widetilde{\lambda}_{i_l}^{(l)}(ct_f))^2+\eta_l^2}=\E \prod_{l=1}^p \frac{1}{n}\sum_{\abs{i_l}\le n^{\widehat{\omega}}} \frac{\eta_l}{(\widetilde{\mu}_{i_l}^{(l)}(ct_f))^2+\eta_l^2}. \end{equation} Moreover, by~\eqref{eq:firshpb2}, similarly to Lemma~\ref{lem:stanc}, we conclude \begin{equation} \label{eq:hhh3} \E \prod_{l=1}^p \frac{1}{n}\sum_{\abs{i_l}\le n^{\widehat{\omega}}} \frac{\eta_l}{(\widetilde{\mu}_{i_l}^{(l)}(ct_f))^2+\eta_l^2}=\E \prod_{l=1}^p \frac{1}{n}\sum_{\abs{i_l}\le n^{\widehat{\omega}}} \frac{\eta_l}{(\mu_{i_l}^{(l)}(ct_f))^2+\eta_l^2}+\mathcal{O}(\Psi). \end{equation} Additionally, by the definition of the processes \({\bm \mu}^{(l)}(t)\) in~\eqref{eq:ginev} it follows that \({\bm \mu}^{(l)}(t)\), \({\bm \mu}^{(m)}(t)\) are independent for \(l\ne m\), and hence \begin{equation} \label{eq:hhh6} \E \prod_{l=1}^p \frac{1}{n}\sum_{\abs{i_l}\le n^{\widehat{\omega}}} \frac{\eta_l}{(\mu_{i_l}^{(l)}(ct_f))^2+\eta_l^2}= \prod_{l=1}^p \E \frac{1}{n}\sum_{\abs{i_l}\le n^{\widehat{\omega}}} \frac{\eta_l}{(\mu_{i_l}^{(l)}(ct_f))^2+\eta_l^2}. \end{equation} Combining~\eqref{eq:hhh1}--\eqref{eq:hhh6}, we get \begin{equation} \label{eq:hhh4} \E \prod_{l=1}^p \frac{1}{n}\sum_{\abs{i_l}\le n^{\widehat{\omega}}} \frac{\eta_l}{(\lambda_{i_l}^{z_l})^2+\eta_l^2}= \prod_{l=1}^p \E \frac{1}{n}\sum_{\abs{i_l}\le n^{\widehat{\omega}}} \frac{\eta_l}{(\mu_{i_l}^{(l)}(ct_f))^2+\eta_l^2}+\mathcal{O}(\Psi).
\end{equation} Then, by computations similar to the ones in~\eqref{eq:stgft}--\eqref{eq:hhh4}, we conclude that \begin{equation} \label{eq:hhh5} \prod_{l=1}^p \E \frac{1}{n}\sum_{\abs{i_l}\le n^{\widehat{\omega}}} \frac{\eta_l}{(\lambda_{i_l}^{z_l})^2+\eta_l^2}= \prod_{l=1}^p \E \frac{1}{n}\sum_{\abs{i_l}\le n^{\widehat{\omega}}} \frac{\eta_l}{(\mu_{i_l}^{(l)}(ct_f))^2+\eta_l^2}+\mathcal{O}(\Psi). \end{equation} We remark that in order to prove~\eqref{eq:hhh5} it would not be necessary to introduce the additional comparison processes \(\widetilde{\bm \lambda}^{(l)}\) and \(\widetilde{\bm \mu}^{(l)}\) of Section~\ref{sec:COMPPRO}, since in~\eqref{eq:hhh5} the product is outside the expectation, so one can compare the expectations one by one; the correlation between these processes for different \(l\)'s plays no role. Hence, already the usual coupling (see e.g.~\cite{MR3541852,MR3916329, MR3914908}) between the processes \({\bm \lambda}^{z_l}(t)\), \({\bm \mu}^{(l)}(t)\) defined in~\eqref{eq:DBMeA} and~\eqref{eq:ginev}, respectively, would be sufficient to prove~\eqref{eq:hhh5}. Finally, combining~\eqref{eq:hhh4}--\eqref{eq:hhh5} we conclude the proof of Proposition~\ref{prop:indeig}. \end{proof} \begin{proof}[Proof of Lemma~\ref{lem:stanc}] We present the proof for \(p=2\) to keep our presentation simple; the case \(p\ge 3\) proceeds in exactly the same way. In order to make our notation shorter, for \(l\in \{1,2\}\), we define \[ T_{i_l}^{(l)} := \frac{\eta_l}{ (\lambda_{i_l}^{z_l}(ct_f))^2 + \eta_l^2}. \] Similarly, replacing \(\lambda_{i_l}^{z_l}(ct_f)\) with \(\widetilde{\lambda}_{i_l}^{(l)}(ct_f)\), we define \(\widetilde{T}_{i_l}^{(l)}\).
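The telescoping estimate~\eqref{eq:thirdstemason} relies, in its first inequality, on an elementary identity for the quantities \(T_{i_l}^{(l)}\); we record it explicitly as a hedged intermediate step, with the time argument \(ct_f\) and the indices suppressed.

```latex
% Elementary identity behind (eq:thirdstemason): abbreviating
% \lambda = \lambda_{i_l}^{z_l}(ct_f), \widetilde{\lambda} = \widetilde{\lambda}_{i_l}^{(l)}(ct_f),
% T = \eta_l/(\lambda^2+\eta_l^2) and \widetilde{T} = \eta_l/(\widetilde{\lambda}^2+\eta_l^2),
\[
T-\widetilde{T}
=\frac{\eta_l\bigl(\widetilde{\lambda}^{2}-\lambda^{2}\bigr)}
      {(\lambda^{2}+\eta_l^{2})(\widetilde{\lambda}^{2}+\eta_l^{2})}
=\frac{T\,\widetilde{T}}{\eta_l}\,\bigl(\widetilde{\lambda}^{2}-\lambda^{2}\bigr),
\]
% so the pathwise closeness of \lambda and \widetilde{\lambda} from (eq:firshpb)
% translates directly into the factor T \widetilde{T} |\widetilde{\lambda}^2 - \lambda^2| / \eta_l
% appearing in (eq:thirdstemason).
```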
Then, by a telescopic sum, we have \begin{gather} \begin{aligned} &\abs*{\E \prod_{l=1}^{2}\frac{1}{n}\sum_{\abs{i_l}\le n^{\widehat{\omega}}} T_{i_l}^{(l)}-\E \prod_{l=1}^{2}\frac{1}{n}\sum_{\abs{i_l}\le n^{\widehat{\omega}}} \widetilde{T}_{i_l}^{(l)} }\\ &\quad= \frac{1}{n^2}\abs*{\E \sum_{\abs{i_1}, \abs{i_2}\le n^{\widehat{\omega}}} \left[T_{i_1}^{(1)}-\widetilde{T}_{i_1}^{(1)}\right] T_{i_2}^{(2)} +\E \sum_{\abs{i_1},\abs{i_2}\le n^{\widehat{\omega}}} \left[T_{i_2}^{(2)}-\widetilde{T}_{i_2}^{(2)}\right] \widetilde{T}_{i_1}^{(1)} }\\ &\quad \lesssim \sum_{\substack{l,m=1 \\ l\ne m}}^2\left(1+\frac{n^\xi}{n\eta_l}\right) \E \frac{1}{n}\sum_{\abs{i_m}\le n^{\widehat{\omega}}} \frac{T_{i_m}^{(m)}\widetilde{T}_{i_m}^{(m)}}{\eta_m}\abs*{(\widetilde{\lambda}_{i_m}^{(m)}(ct_f))^2-(\lambda_{i_m}^{z_m}(ct_f))^2} \\ &\quad \lesssim \frac{n^{\widehat{\omega}}}{n^{1+\omega}}\left(\frac{1}{\eta_1}+\frac{1}{\eta_2}\right)\cdot \prod_{l=1}^2\left(1+\frac{n^\xi}{n\eta_l}\right), \end{aligned}\label{eq:thirdstemason}\raisetag{-8em} \end{gather} where we used the local law~\eqref{theo:Gll} in the first inequality and~\eqref{eq:firshpb} in the last step. Combining~\eqref{eq:thirdstemason} with~\eqref{eq:stgft} we conclude the proof of Lemma~\ref{lem:stanc}. \end{proof} Before we continue, we summarize the scales used in the entire Section~\ref{sec:IND}. \subsubsection{Relations among the scales in the proof of Proposition~\ref{prop:indeig}}\label{rem:s} Scales in the proof of Proposition~\ref{prop:indeig} are characterized by various exponents \(\omega\) of \(n\), which, for simplicity, we will also refer to as scales. The basic input scales in the proof of Proposition~\ref{prop:indeig} are \(0<\omega_d,\omega_h,\omega_f\ll 1\); the others will depend on them. The exponents \(\omega_h,\omega_d\) are chosen within the assumptions of Lemma~\ref{lem:overb} to control the location of the \(z\)'s as \(\abs{z_l}\le 1-n^{-\omega_h}\), \(\abs{z_l-z_m}\ge n^{-\omega_d}\), with \(l\ne m\).
The exponent \(\omega_f\) defines the time \(t_f=n^{-1+\omega_f}\) so that the local equilibrium of the DBM is reached after \(t_f\). This will provide the asymptotic independence of \(\lambda_i^{z_l}\), \(\lambda_j^{z_m}\) for small indices and for \(l\ne m\). The primary scales introduced along the proof of Proposition~\ref{prop:indeig} are \(\omega\), \(\widehat{\omega}\), \(\delta_0\), \(\delta_1\), \(\omega_E\), \(\omega_B\). The scales \(\omega_E, \omega_B\) are given in Lemma~\ref{lem:overb}: \(n^{-\omega_E}\) measures the size of the eigenvector overlaps from~\eqref{eq:ovcorr} while the exponent \(\omega_B\) describes the range of indices for which these overlap estimates hold. Recall that the overlaps determine the correlations among the driving Brownian motions. The scale \(\omega\) quantifies the \(n^{-1-\omega}\) precision of the coupling between various processes. These couplings are effective only for small indices \(i\); their range is given by \(\widehat{\omega}\) as \(\abs{i}\le n^{\widehat{\omega}}\). Both these scales are much bigger than \(\omega_h\) but much smaller than \(\omega_f\). They are determined in Lemma~\ref{lem:firststepmason} and Lemma~\ref{lem:secondstepmason}; in fact both lemmas only impose an upper bound on the scales \(\omega, \widehat{\omega}\), so we can pick the smaller of them. The exponents \(\delta_0, \delta_1\) determine the range of \(\eta\in [n^{-1-\delta_0}, n^{-1+\delta_1}]\) for which Proposition~\ref{prop:indeig} holds; these are determined in Lemma~\ref{lem:GFTGFT} after \(\omega, \widehat{\omega}\) have already been fixed. These steps yield the scales \(\omega, \widehat{\omega}, \delta_0, \delta_1\) claimed in Proposition~\ref{prop:indeig} and hence also in Proposition~\ref{prop:indmr}. We summarize the order relations among all these scales as \begin{equation} \label{eq:chain} \omega_h\ll \delta_m \ll \widehat{\omega}\ll \omega\ll \omega_B\ll\omega_f\ll \omega_E\ll1, \qquad m=0,1.
\end{equation} We mention that three further auxiliary scales emerge along the proof but they play only a local, secondary role. For completeness we also list them here; they are \(\omega_1,\omega_A,\omega_l\). Their meanings are the following: \(t_1:= n^{-1+\omega_1}\), with \(\omega_1\ll \omega_f\), is the time needed for the DBM process \(x_i(t,\alpha)\), defined in~\eqref{eq:intflowA}, to reach local equilibrium, hence to prove its universality; \(t_0:=t_f-t_1\) is the initial time we run the DBM before starting with the actual proof of universality so that the solution \({\bm \lambda}^{z_l}(t_0)\) of~\eqref{eq:DBMeA} at time \(t_0\) and the density \(\operatorname{d}\!{} \rho(E,t,\alpha)\) (which we will define in Section~\ref{sec:PR}) satisfy certain technical regularity conditions~\cite[Lemma 3.3-3.5]{MR3916329},~\cite[Lemma 3.3-3.5]{MR3914908}. Note that \(t_0\sim t_f\), in fact they are almost the same. The other two scales are technical: \(\omega_l\) is the scale of the short range interaction, and \(\omega_A\) is a cut-off scale such that \(x_i(t,\alpha)\) is basically independent of \(\alpha\) for \(\abs{i}\le n^{\omega_A}\). These scales are inserted in the above chain of inequalities~\eqref{eq:chain} between \(\omega\) and \(\omega_B\) as follows \[ \omega_h\ll\delta_m\ll\widehat{\omega}\ll \omega\ll \omega_1\ll \omega_l\ll \omega_A\le \omega_B\ll \omega_f \ll \omega_E\ll 1, \quad m=0,1. \] In particular, the relation \(\omega_A\ll \omega_E\) ensures that the effect of the correlation is small, see the bound in~\eqref{eq:imperrest} later.
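For orientation only (this particular choice is purely illustrative and is not used anywhere in the proofs; only the order relations matter), one convenient way to realize such a chain is to let each exponent be a different power of a single small parameter \(\varepsilon>0\), e.g.
\[
\omega_E=\varepsilon,\quad \omega_f=\varepsilon^2,\quad \omega_A=\omega_B=\varepsilon^3,\quad \omega_l=\varepsilon^4,\quad \omega_1=\varepsilon^5,\quad \omega=\varepsilon^6,\quad \widehat{\omega}=\varepsilon^7,\quad \delta_0=\delta_1=\varepsilon^8,\quad \omega_h=\varepsilon^9;
\]
then each relation \(a\ll b\) in the chain holds in the sense that the ratio \(a/b\) tends to zero as \(\varepsilon\to 0\).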
We remark that introducing the additional initial time layer \(t_0\) is not really necessary for our proof of Proposition~\ref{prop:indeig} since the initial data \({\bm \lambda}^z(0)\) of the DBM in~\eqref{eq:DBMeA} and their deterministic density \(\rho^z\) already satisfy~\cite[Lemma 3.3-3.5]{MR3916329},~\cite[Lemma 3.3-3.5]{MR3914908} as a consequence of~\eqref{eq:lll} (see Remark~\ref{rem:t0diff} and Remark~\ref{rem:ne} for more details). We keep it only to facilitate the comparison with~\cite{MR3916329, MR3914908}. \subsection{Bound on the eigenvector overlap for large \texorpdfstring{\(\abs{z_1-z_2}\)}{|z1-z2|}}\label{sec:BOUNDE} For any \(z\in\mathbf{C} \), let \(\{{\bm w}^z_{\pm i}\}_{i=1}^n\) be the eigenvectors of the matrix \(H^z\). They are of the form \({\bm w}^z_{\pm i}=({\bm u}_i^z,\pm {\bm v}_i^z)\), with \({\bm u}_i^z, {\bm v}_i^z\in\mathbf{C} ^n\), as a consequence of the symmetry of the spectrum of \(H^z\) induced by its block structure. The main input to prove Lemmas~\ref{lem:firststepmason}--\ref{lem:secondstepmason} is the following high probability bound on the almost orthogonality of the eigenvectors belonging to distant \(z_l\), \(z_m\) parameters and eigenvalues close to zero. With the help of the Dyson Brownian motion (DBM), this information will then be used to establish the almost independence of these eigenvalues. \begin{lemma}\label{lem:overb} Let \(\{{\bm w}^{z_l}_{\pm i}\}_{i=1}^n=\{({\bm u}_i^{z_l},\pm {\bm v}_i^{z_l})\}_{i=1}^n\), for \(l=1,2\), be the eigenvectors of matrices \(H^{z_l}\) of the form~\eqref{eq:her} with i.i.d.\ entries.
Then for any sufficiently small \(\omega_d, \omega_h>0\) there exist \(\omega_B, \omega_E>0\) such that if \(\abs{z_1-z_2}\ge n^{-\omega_d}\), \(\abs{z_l}\le 1-n^{-\omega_h}\) then \begin{equation} \label{eq:bbev} \abs*{\braket{ {\bm u}_i^{z_1}, {\bm u}_j^{z_2}}}+\abs*{\braket{ {\bm v}_i^{z_1}, {\bm v}_j^{z_2}}}\le n^{-\omega_E}, \quad 1\le i,j\le n^{\omega_B}, \end{equation} with very high probability. \end{lemma} \begin{proof} Using the spectral symmetry of \(H^z\), for any \(z\in\mathbf{C} \) we write \(G^z\) in spectral decomposition as \[ G^z(\mathrm{i}\eta)=\sum_{j>0} \frac{2}{(\lambda_j^z)^2+\eta^2}\left( \begin{matrix} \mathrm{i} \eta {\bm u}_j^z ({\bm u}_j^z)^* & \lambda_j^z {\bm u}_j^z ({\bm v}_j^z)^* \\ \lambda_j^z {\bm v}_j^z({\bm u}_j^z)^* & \mathrm{i} \eta {\bm v}_j^z ({\bm v}_j^z)^* \end{matrix}\right). \] Let \(\eta \ge n^{-1}\). Then, by rigidity of the eigenvalues in~\eqref{eq:rigneed}, for any \(i_0, j_0\ge 1\) such that \(\lambda_{i_0}^{z_l},\lambda_{j_0}^{z_l}\lesssim \eta\), with \(l=1,2\), and any \(z_1, z_2\) such that \(n^{-\omega_d} \lesssim \abs{z_1-z_2}\lesssim 1\), for some \(\omega_d>0\) that we will choose shortly, it follows that \begin{gather} \begin{aligned} &\abs*{\braket{ {\bm u}_{i_0}^{z_1}, {\bm u}_{j_0}^{z_2}}}^2+\abs*{\braket{ {\bm v}_{i_0}^{z_1}, {\bm v}_{j_0}^{z_2}}}^2 \\ &\qquad\lesssim \sum_{i,j=1}^n \frac{4\eta^4}{((\lambda_i^{z_1})^2+\eta^2)((\lambda_j^{z_2})^2+\eta^2)} \left(\abs*{\braket{ {\bm u}_i^{z_1}, {\bm u}_j^{z_2}}}^2+\abs*{\braket{ {\bm v}_i^{z_1}, {\bm v}_j^{z_2}}}^2 \right) \\ &\qquad=\eta^2\Tr (\Im G^{z_1})(\Im G^{z_2}) \lesssim \frac{n^{8\omega_d/3}}{(n\eta)^{1/4}}+(\eta^{1/12}+n\eta^2) n^{2\omega_d} \\ &\qquad\lesssim \frac{n^{2\omega_d+100\omega_h}}{n^{1/23}}. \end{aligned}\label{eq:ooo}\raisetag{-6em} \end{gather} The first inequality in the second line of~\eqref{eq:ooo} is from Theorem~\ref{thm local law G2} and the lower bound on \(\abs{\widehat{\beta}_*}\) from~\eqref{beta ast bound}.
In the last inequality we choose \(\eta=n^{-12/23}\) (this choice balances the last two error terms, since \(\eta^{1/12}= n\eta^2\) precisely when \(\eta=n^{-12/23}\)), under the assumption that \(\omega_d \le 1/100\) and that \(i_0,j_0\le n^{1/5}\) (in order to make sure that the first inequality in~\eqref{eq:ooo} holds). We also used that the first term in the lhs.\ of the last inequality is always smaller than the other two for \(\eta\ge n^{-5/9}\), and in the second line of~\eqref{eq:ooo} we used that \(M_{12}\), the deterministic approximation of \(\Tr \Im G^{z_1}\Im G^{z_2}\) in Theorem~\ref{thm local law G2}, is bounded by \(\norm{ M_{12}}\lesssim \abs{z_1-z_2}^{-2}\). This concludes the proof by choosing \(\omega_B\le 1/5\) and \(\omega_d= 1/100\), which allows the choice \(\omega_E=1/23-2\omega_d-100\omega_h\), a positive exponent for \(\omega_h\) sufficiently small. \end{proof} \subsection{Pathwise coupling of DBM close to zero}\label{sec:fixun} This section contains the main technical result used in the proofs of Lemma~\ref{lem:firststepmason} and Lemma~\ref{lem:secondstepmason}. We compare the evolution of two DBMs whose driving Brownian motions are nearly the same for small indices and are independent for large indices. In Proposition~\ref{pro:ciala} we will show that the points with small indices in the two processes become very close to each other on a certain time scale \(t_f\). This time scale is chosen to be larger than the local equilibration time, but not too large, so that the independence of the driving Brownian motions for large indices does not yet have an effect on particles with small indices. \begin{remark}\label{rem:t0diff} The main result of this section (Proposition~\ref{pro:ciala}) is stated for general deterministic initial data \({\bm s}(0)\) satisfying Definition~\ref{eq:defregpro}, even though in its applications in the proof of Proposition~\ref{prop:indeig} we only consider initial data which are eigenvalues of i.i.d.\ random matrices.
\end{remark} The proof of Proposition~\ref{pro:ciala} follows the proof of fixed energy universality in~\cite{MR3541852,MR3916329,MR3914908}, adapted to the block structure~\eqref{eq:her} in~\cite{MR3916329} (see also~\cite{MR4009717,MR4242226} for further adaptations of~\cite{MR3541852,MR3914908} to different matrix models). The main novelty in our DBM analysis compared to~\cite{MR3541852,MR3916329, MR3914908} is that we analyse a process whose driving Brownian motions are allowed to be not fully coupled (see Assumption~\ref{ass:close}). Define the processes \(s_i(t)\), \(r_i(t)\) to be the solutions of \begin{equation}\label{eq:lambdapr} \operatorname{d}\!{} s_i(t)=\sqrt{\frac{1}{2 n}}\operatorname{d}\!{} \mathfrak{b}^s_i(t)+\frac{1}{2n}\sum_{j\ne i} \frac{1}{s_i(t)-s_j(t)} \operatorname{d}\!{} t, \qquad 1\le \abs{i}\le n, \end{equation} and \begin{equation}\label{eq:mupr} \operatorname{d}\!{} r_i(t)=\sqrt{\frac{1}{2 n}}\operatorname{d}\!{} \mathfrak{b}^r_i(t)+\frac{1}{2n}\sum_{j\ne i} \frac{1}{r_i(t)-r_j(t)} \operatorname{d}\!{} t, \qquad 1\le \abs{i}\le n, \end{equation} with initial data \(s_i(0)=s_i\), \(r_i(0)=r_i\), where \({\bm s}=\{s_{\pm i}\}_{i=1}^n\) and \({\bm r}=\{r_{\pm i}\}_{i=1}^n\) are two independent sets of particles such that \(s_{-i}=-s_i\) and \(r_{-i}=-r_i\) for \(i\in [n]\). The driving standard real Brownian motions \(\{\mathfrak{b}^s_i\}_{i=1}^n\), \(\{\mathfrak{b}^r_i\}_{i=1}^n\) in~\eqref{eq:lambdapr}--\eqref{eq:mupr} are two i.i.d.\ families and they are such that \(\mathfrak{b}^s_{-i}=-\mathfrak{b}^s_i\), \(\mathfrak{b}^r_{-i}=-\mathfrak{b}^r_i\) for \(i\in [n]\). For convenience we also assume that \(\{r_{\pm i}\}_{i=1}^n\) are the singular values of \(\widetilde{X}\), with \(\widetilde{X}\) a Ginibre matrix. This is not a restriction; indeed, once a process with general initial data \({\bm s}\) is shown to be close to the reference process with Ginibre initial data, then processes with any two initial data will be close.
Fix an \(n\)-dependent parameter \(K=K_n=n^{\omega_K}\), for some \(\omega_K>0\). On the correlation structure between the two families of i.i.d.\ Brownian motions \(\{\mathfrak{b}^s_i\}_{i=1}^n\), \(\{\mathfrak{b}^r_i\}_{i=1}^n\) we make the following assumptions: \begin{assumption}\label{ass:close} Suppose that the families \(\{\mathfrak{b}^s_{\pm i}\}_{i=1}^n\), \(\{\mathfrak{b}^r_{\pm i}\}_{i=1}^n\) in~\eqref{eq:lambdapr} and~\eqref{eq:mupr} are realised on a common probability space with a common filtration \(\mathcal{F}_t\). Let \begin{equation} \label{eq:defL} L_{ij}(t) \operatorname{d}\!{} t:= \Exp*{\bigl(\operatorname{d}\!{} \mathfrak{b}^s_i(t)-\operatorname{d}\!{} \mathfrak{b}^r_i(t)\bigr) \bigl(\operatorname{d}\!{} \mathfrak{b}^s_j(t)-\operatorname{d}\!{} \mathfrak{b}^r_j(t)\bigr)\given \mathcal{F}_t} \end{equation} denote the covariance of the increments conditioned on \(\mathcal{F}_t\). The processes satisfy the following assumptions: \begin{enumerate}[label=(\alph*)] \item\label{close1} \(\{ \mathfrak{b}^s_i\}_{i=1}^n\), \(\{ \mathfrak{b}^r_i\}_{i=1}^n\) are two families of i.i.d.\ standard real Brownian motions. \item\label{close2} \(\{ \mathfrak{b}^r_{\pm i}\}_{i=K+1}^n\) is independent of \(\{\mathfrak{b}^s_{\pm i}\}_{i=1}^n\), and \(\{ \mathfrak{b}^s_{\pm i}\}_{i=K+1}^n\) is independent of \(\{\mathfrak{b}^r_{\pm i}\}_{i=1}^n\). \item\label{close3} Fix \(\omega_Q>0\) so that \(\omega_K\ll \omega_Q\). We assume that the subfamilies \(\{\mathfrak{b}^s_{\pm i}\}_{i=1}^K\), \(\{\mathfrak{b}^r_{\pm i}\}_{i=1}^K\) are very strongly dependent in the sense that for any \(\abs{i}, \abs{j}\le K\) it holds \begin{equation} \label{eq:assbqv} \abs{L_{ij}(t)}\le n^{-\omega_Q} \end{equation} with very high probability for any fixed \(t\ge 0\).
\end{enumerate} \end{assumption} Furthermore, we assume that the initial data \(\{s_{\pm i}\}_{i=1}^n\) is regular in the following sense (cf.~\cite[Definition 3.1]{MR3916329},~\cite[Definition 2.1]{MR3914908}, motivated by~\cite[Definition 2.1]{MR3687212}). \begin{definition}[\((g,G)\)-regular points]\label{eq:defregpro} Fix a very small \(\nu>0\), and choose \(g\) and \(G\) such that \[ n^{-1+\nu}\le g\le n^{-2\nu}, \qquad G\le n^{-\nu}. \] A set of \(2n\) points \({\bm s}=\{s_i\}_{i=1}^{2n}\) on \(\mathbf{R} \) is called \((g,G)\)-\emph{regular} if there exist constants \(c_\nu,C_\nu>0\) such that \begin{equation} \label{eq:upbv} c_\nu \le \frac{1}{2n}\Im \sum_{i=-n}^n \frac{1}{s_i-(E+\mathrm{i} \eta)}\le C_\nu, \end{equation} for any \(\abs{E}\le G\), \(\eta \in [g, 10]\), and if there is a constant \(C_s\) large enough such that \(\norm{ {\bm s}}_\infty\le n^{C_s}\). Moreover, \(c_\nu,C_\nu\sim 1\) if \(\eta \in [g, n^{-2\nu}]\) and \(c_\nu\ge n^{-100\nu}\), \(C_\nu\le n^{100\nu}\) if \(\eta\in [n^{-2\nu},10]\). \end{definition} \begin{remark} We point out that in~\cite[Definition 3.1]{MR3916329} and~\cite[Definition 2.1]{MR3914908} the constants \(c_\nu, C_\nu\) do not depend on \(\nu>0\), but this change does not play any role since \(\nu\) will always be the smallest scale exponent involved in the analysis of the DBMs~\eqref{eq:lambdapr}--\eqref{eq:mupr}, hence negligible. \end{remark} Let \(\rho_{\mathrm{fc},t}(E)\) be the deterministic approximation of the density of the particles \(\{s_{\pm i}(t)\}_{i=1}^n\) that is obtained from the semicircular flow acting on the empirical density of the initial data \(\{s_{\pm i}(0)\}_{i=1}^n\), see~\cite[Eq.~(2.5)--(2.6)]{MR3914908}. Recall that \(\rho_{sc}(E)\) denotes the semicircular density.
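To orient the reader, we mention a simple example (purely illustrative, and not used in the proofs): the \(2n\)-quantiles of the semicircular density \(\rho_{sc}\) form a \((g,G)\)-regular set for any admissible choice of \(g\) and \(G\). Indeed, denoting these quantiles by \(\{s_i\}\) and by \(m_{sc}\) the Stieltjes transform of \(\rho_{sc}\), a Riemann sum approximation gives
\[
\frac{1}{2n}\Im \sum_{i=-n}^n \frac{1}{s_i-(E+\mathrm{i}\eta)}=\Im m_{sc}(E+\mathrm{i}\eta)+\mathcal{O}\left(\frac{1}{n\eta}\right)\sim 1,
\]
uniformly for \(\abs{E}\le n^{-\nu}\) and \(\eta\in[n^{-1+\nu},10]\), since \(\rho_{sc}(0)>0\).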
\begin{proposition}\label{pro:ciala} Let the processes \({\bm s}(t)=\{s_{\pm i}(t)\}_{i=1}^n\), \({\bm r}(t)=\{r_{\pm i}(t)\}_{i=1}^n\) be the solutions of~\eqref{eq:lambdapr} and~\eqref{eq:mupr}, respectively, and assume that the driving Brownian motions in~\eqref{eq:lambdapr}--\eqref{eq:mupr} satisfy Assumption~\ref{ass:close}. Additionally, assume that \({\bm s}(0)\) is \((g,G)\)-regular in the sense of Definition~\ref{eq:defregpro} and that \({\bm r}(0)\) are the singular values of a Ginibre matrix. Then for any small \(\nu,\omega_f>0\) such that \(\nu\ll \omega_K\ll \omega_f\ll \omega_Q\) and such that \(g n^\nu\le t_f\le n^{-\nu}G^2\), there exist \(\omega,\widehat{\omega}>0\) with \(\nu\ll\widehat{\omega}\ll \omega\ll \omega_f\) such that \begin{equation} \label{eq:hihihi} \abs*{ \rho_{\mathrm{fc},t_f}(0) s_i(t_f)- \rho_{\mathrm{sc}}(0) r_i(t_f)}\le n^{-1-\omega}, \qquad \abs{i}\le n^{\widehat{\omega}}, \end{equation} with very high probability, where \(t_f:= n^{-1+\omega_f}\). \end{proposition} The proof of Proposition~\ref{pro:ciala} is postponed to Section~\ref{sec:proofFEU}. \begin{remark}\label{ren:res} Note that, without loss of generality, it is enough to prove Proposition~\ref{pro:ciala} only for the case \(\rho_{\mathrm{fc},t_f}(0)=\rho_{sc}(0)\), since we can always rescale the time: we may define \(\widetilde{s}_i:= (\rho_{\mathrm{fc},t_f}(0)s_i/\rho_{sc}(0))\) and notice that \(\widetilde{s}_i(t)\) is a solution of the DBM~\eqref{eq:lambdapr} after rescaling the time as \(t'=(\rho_{\mathrm{fc},t_f}(0)/\rho_{sc}(0))^2t\). \end{remark} \subsection{Proof of Lemma~\ref{lem:firststepmason} and Lemma~\ref{lem:secondstepmason}}\label{sec:noncelpiu} In this section we prove that Lemmas~\ref{lem:firststepmason}--\ref{lem:secondstepmason} follow from Lemma~\ref{lem:overb} and Proposition~\ref{pro:ciala}.
\subsubsection{Application of Proposition~\ref{pro:ciala} to \({\bm{\lambda}}^{z_l}(t)\) and \(\tilde{\bm{\lambda}}^{(l)}(t)\)}\label{sec:lamtillam} In this section we prove that for any fixed \(l\) the processes \({\bm \lambda}^{z_l}(t)\) and \(\widetilde{\bm \lambda}^{(l)}(t)\) satisfy Assumption~\ref{ass:close} and Definition~\ref{eq:defregpro}, so that Lemma~\ref{lem:firststepmason} follows from Proposition~\ref{pro:ciala}. \begin{proof}[Proof of Lemma~\ref{lem:firststepmason}] For any fixed \(l\in[p]\), by the definition of the driving Brownian motions of the processes~\eqref{eq:DBMeA} and~\eqref{eq:nuproc} it is clear that they satisfy Assumption~\ref{ass:close} choosing \({\bm s}(t)={\bm \lambda}^{z_l}(t)\), \({\bm r}(t)=\widetilde{\bm \lambda}^{(l)}(t)\), and \(K=n^{\omega_A}\), since \(L_{ij}(t)\equiv 0\) for \(\abs{i}, \abs{j}\le K\). We now show that the set of points \(\{\lambda_{\pm i}^{z_l}\}_{i=1}^n\), rescaled by \(\rho^{z_l}(0)/\rho_{sc}(0)\), is \((g,G)\)-\emph{regular} for \begin{equation} \label{eq:defgG} g=n^{-1+\omega_h}\delta_l^{-100}, \qquad G=n^{-\omega_h}\delta_l^{10}, \qquad \nu=\omega_h, \end{equation} with \(\delta_l:= 1-\abs{z_l}^2\), for any \(l\in [p]\). By the local law~\eqref{eq:lll}, together with the regularity properties of \(m^{z_l}\) which follow from~\eqref{eq:relwig}, namely that \(m^{z_l}\) is \(1/3\)-H\"older continuous, we conclude that there exist constants \(c_{\omega_h},C_{\omega_h}>0\) such that \begin{equation} \label{eq:checkass} c_{\omega_h}\le \Im \frac{1}{2n} \sum_{i=-n}^n \frac{1}{[\rho^{z_l}(0)\lambda^{z_l}_i/\rho_{sc}(0)]-(E+\mathrm{i} \eta)}\le C_{\omega_h}, \end{equation} for any \(\abs{E}\le n^{-\omega_h}\delta_l^{10}\), \(n^{-1}\delta_l^{-100}\le \eta \le 10\). In particular, \(c_{\omega_h},C_{\omega_h}\sim 1\) for \(\eta\in [g, n^{-2\omega_h}]\), and \(c_{\omega_h}\gtrsim n^{-100\omega_h}\), \(C_{\omega_h}\lesssim n^{100\omega_h}\) for \(\eta\in [n^{-2\omega_h},10]\).
This implies that the set \({\bm \lambda}^{z_l}=\{\lambda_{\pm i}^{z_l}\}_{i=1}^n\) satisfies Definition~\ref{eq:defregpro}, which concludes the proof of this lemma. \end{proof} \subsubsection{Application of Proposition~\ref{pro:ciala} to \({\bm \mu}^{(l)}(t)\) and \(\tilde{\bm \mu}^{(l)}(t)\)}\label{sec:mutilmu} We now prove that for any fixed \(l\) the processes \({\bm \mu}^{(l)}(t)\) and \(\widetilde{\bm \mu}^{(l)}(t)\) satisfy Assumption~\ref{ass:close} and Definition~\ref{eq:defregpro}, so that Lemma~\ref{lem:secondstepmason} follows from Proposition~\ref{pro:ciala}. \begin{proof}[Proof of Lemma~\ref{lem:secondstepmason}] For any fixed \(l\in [p]\), we will apply Proposition~\ref{pro:ciala} with the choice \({\bm s}(t)={\bm \mu}^{(l)}(t)\), \({\bm r}(t)=\widetilde{\bm \mu}^{(l)}(t)\) and \(K=n^{\omega_A}\). Since the initial data \(s_i(0)=\mu_i^{(l)}(0)\) are the singular values of a Ginibre matrix \(X^{(l)}\), it is clear that the assumption in Definition~\ref{eq:defregpro} holds choosing \(g= n^{-1+\delta}\) and \(G=n^{-\delta}\), and \(\nu=0\), for any small \(\delta>0\) (see e.g.\ the local law in~\eqref{eq:lll}). We now check Assumption~\ref{ass:close}. By the definition of the families of i.i.d.\ Brownian motions \begin{equation} \label{eq:twofam} \left( \{\zeta_{\pm i}^{z_l}\}_{i=1}^{n^{\omega_A}}, \{\widetilde{\zeta}_{\pm i}^{(l)}\}_{i=n^{\omega_A}+1}^n\right)_{l=1}^p, \qquad \left(\{\beta_{\pm i}^{(l)}\}_{i=1}^n\right)_{l=1}^p, \end{equation} defined in~\eqref{eq:nuproc2} and~\eqref{eq:ginev}, respectively, it immediately follows that they satisfy~\ref{close1} and~\ref{close2} of Assumption~\ref{ass:close}, since \(\{\widetilde{\zeta}^{(l)}_{\pm i}\}_{i=n^{\omega_A}+1}^n\) are independent of \( \{\beta_{\pm i}^{(l)}\}_{i=1}^n\), and \( \{\beta_{\pm i}^{(l)}\}_{i=n^{\omega_A}+1}^n\) are independent of \(\{\widetilde{\zeta}^{(l)}_{\pm i}\}_{i=1}^n\), by construction.
Recall that \(\mathcal{F}_{\beta,t}\) denotes the common filtration of all the Brownian motions \({\bm \beta}^{(m)}=\{ \beta_i^{(m)}\}_{i=1}^n\), \(m\in[p]\). Finally, we prove that also~\ref{close3} of Assumption~\ref{ass:close} is satisfied. We recall the relations \(i=\mathfrak{i}+(l-1)n^{\omega_A}\) and \(j=\mathfrak{j}+(l-1)n^{\omega_A}\) from~\eqref{eq:frakind} which, for any fixed \(l\), establish a one-to-one relation between a pair \(\mathfrak{i}, \mathfrak{j}\in [n^{\omega_A}]\) and a pair \(i,j\) with \((l-1)n^{\omega_A}+1\le i,j \le l n^{\omega_A}\). By the definition of \(\{\zeta_{\pm i}^{z_l}\}_{i=1}^{n^{\omega_A}}\) it follows that \begin{equation} \label{eq:qvexw} \operatorname{d}\!{} \zeta^{z_l}_{\mathfrak{i}}-\operatorname{d}\!{} \beta_{\mathfrak{i}}^{(l)}=\sum_{m=1}^{p n^{\omega_A}} \left(\sqrt{ C^\#(t)}-I\right)_{im} \operatorname{d}\!{} (\underline{\beta})_m, \qquad 1\le \mathfrak{i} \le n^{\omega_A}, \end{equation} with \(\underline{\beta}\) defined in~\eqref{eq:vecla}, so that for any \(1\le \mathfrak{i}, \mathfrak{j} \le n^{\omega_A}\) and fixed \(l\) we have \[ \begin{split} &\Exp*{\bigl(\operatorname{d}\!{} \zeta^{z_l}_{\mathfrak{i}}-\operatorname{d}\!{} \beta_{\mathfrak{i}}^{(l)}\bigr)\bigl(\operatorname{d}\!{} \zeta^{z_l}_{\mathfrak{j}}-\operatorname{d}\!{} \beta_{\mathfrak{j}}^{(l)}\bigr)\given \mathcal{F}_{\beta,t}} \\ &\quad=\sum_{m_1,m_2=1}^{p n^{\omega_A}} \left(\sqrt{C^\#(t)}-I\right)_{im_1}\left(\sqrt{C^\#(t)}-I\right)_{jm_2} \Exp*{\operatorname{d}\!{} (\underline{\beta})_{m_1} \operatorname{d}\!{} (\underline{\beta})_{m_2}\given \mathcal{F}_{\beta,t}} \\ &\quad = \left[\left(\sqrt{C^\#(t)}-I\right)^2\right]_{ij}\operatorname{d}\!{} t, \end{split} \] since \(\sqrt{C^\#(t)}\) is real symmetric. Hence, \(L_{ij}(t)\) defined in~\eqref{eq:defL} in this case is given by \[ L_{ij}(t)= \left[\left(\sqrt{C^\#(t)}-I\right)^2\right]_{ij}.
\] Then, by the Cauchy--Schwarz inequality, we have \begin{equation} \label{eq:imperrest} \begin{split} \abs{L_{ij}(t)}& \le \left[\left(\sqrt{C^\#(t)}-I\right)^2\right]^{1/2}_{ii} \left[\left(\sqrt{C^\#(t)}-I\right)^2\right]^{1/2}_{jj} \\ & \le \Tr \left[(\sqrt{C^\#(t)}-I)^2 \right] \le \Tr \left[(C^\#(t)-I)^2 \right] \lesssim \frac{p^2n^{2\omega_A}}{n^{4\omega_E}}, \end{split} \end{equation} with very high probability, where in the third inequality we used that \((\sqrt{a}-1)^2\le (a-1)^2\) for any \(a\ge 0\), applied to the eigenvalues of \(C^\#(t)\), and in the last inequality we used that \(C^\#(t)\) and \(C(t)\) have the same distribution and the bound~\eqref{eq:bbev} of Lemma~\ref{lem:overb} holds for \(C(t)\) hence for \(C^\#(t)\) as well. This implies that for any fixed \(l\in [p]\) the two families of Brownian motions \(\{\beta^{(l)}_{\pm i}\}_{i=1}^n\) and \(( \{\zeta_{\pm i}^{z_l}\}_{i=1}^{n^{\omega_A}}, \{\widetilde{\zeta}_{\pm i}^{(l)}\}_{i=n^{\omega_A}+1}^n)\) satisfy Assumption~\ref{ass:close} with \(K=n^{\omega_A}\) and \(\omega_Q=4\omega_E-2\omega_A\). Applying Proposition~\ref{pro:ciala}, this concludes the proof of Lemma~\ref{lem:secondstepmason}. \end{proof} \subsection{Proof of Proposition~\ref{pro:ciala}}\label{sec:proofFEU} We divide the proof of Proposition~\ref{pro:ciala} into four sub-sections. In Section~\ref{sec:DIP} we introduce an interpolating process \({\bm x}(t,\alpha)\) between the processes \({\bm s}(t)\) and \({\bm r}(t)\) defined in~\eqref{eq:lambdapr}--\eqref{eq:mupr}, and in Section~\ref{sec:PR} we introduce a measure which approximates the particles \({\bm x}(t,\alpha)\) and prove their rigidity. In Section~\ref{sec:SR} we introduce a cut-off near zero (this scale will be denoted by \(\omega_A\) later) such that we only couple the dynamics of the particles with \(\abs{i}\le n^{\omega_A}\), as defined in~\ref{close3} of Assumption~\ref{ass:close}, i.e.\ we will choose \(\omega_A=\omega_K\). Additionally, we also localise the dynamics on a scale \(\omega_l\) (see Section~\ref{rem:s}) since the main contribution to the dynamics comes from the nearby particles.
We will refer to the new process \(\widehat{\bm x}(t,\alpha)\) (see~\eqref{eq:intflowshortA} later) as the \emph{short range approximation} of the process \({\bm x}(t,\alpha)\). Finally, in Section~\ref{sec:endsec} we conclude the proof of Proposition~\ref{pro:ciala}. Large parts of our proof closely follow~\cite{MR3916329, MR3914908} and for brevity we will focus on the differences. We use~\cite{MR3916329, MR3914908} as our main references since the \(2\times 2\) block matrix setup of~\cite{MR3916329} is very close to the current one and~\cite{MR3916329} itself closely follows~\cite{MR3914908}. However, we point out that many key ideas of this technique have been introduced in earlier papers on universality; e.g.\ the short range cut-off and finite speed of propagation in~\cite{MR3372074, MR3606475}, and coupling and homogenisation in~\cite{MR3541852}; for more historical references, see~\cite{MR3914908}. The main novelty of~\cite{MR3914908} itself is a mesoscopic analysis of the fundamental solution \(p_t(x,y)\) of~\eqref{eq:conteqA}, which enables the authors to prove short time universality for general deterministic initial data. They also proved the result with very high probability, unlike~\cite{MR3541852}, which relied on level repulsion estimates. We also mention a related but different, more recent technique to prove universality~\cite{1812.10376}, which has recently been adapted to the singular value setup, or equivalently to the \(2\times 2\) block matrix structure, in~\cite{1912.05473}.
\subsubsection{Definition of the interpolated process}\label{sec:DIP} For \(\alpha\in[0,1]\) we introduce the continuous interpolation process \({\bm x}(t,\alpha)\), between the processes \({\bm s}(t)\) and \({\bm r}(t)\) in~\eqref{eq:lambdapr}--\eqref{eq:mupr}, defined as the solution of the flow \begin{equation}\label{eq:intflowA} \operatorname{d}\!{} x_i(t,\alpha)= \alpha \frac{\operatorname{d}\!{} \mathfrak{b}_i^s}{\sqrt{2n}}+(1-\alpha)\frac{\operatorname{d}\!{} \mathfrak{b}_i^r}{\sqrt{2n}}+\frac{1}{2n}\sum_{j\ne i} \frac{1}{x_i(t,\alpha)-x_j(t,\alpha)}\operatorname{d}\!{} t, \end{equation} with initial data \begin{equation}\label{eq:indatA} {\bm x}(0,\alpha)=\alpha{\bm s}(t_0)+(1-\alpha){\bm r}(t_0), \end{equation} with some \(t_0\) that is slightly smaller than \(t_f\). In fact we will write \(t_0+t_1= t_f\) with \(t_1\ll t_f\), where \(t_1\) is the time scale for the equilibration of the DBM with initial condition~\eqref{eq:indatA} (see~\eqref{eq:oldt1}). To make our notation consistent with~\cite{MR3916329, MR3914908}, in the remainder of this section we assume that \(t_0=n^{-1+\omega_0}\), for some small \(\omega_0>0\), such that \(\omega_K\ll \omega_0\ll\omega_Q\). The reader can think of \(\omega_0=\omega_f\). Note that the strong solution of~\eqref{eq:intflowA} is well defined since the variance of its driving Brownian motion is smaller than \(\frac{1}{2n}(1-2\alpha(1-\alpha) n^{-\omega_Q})\) by~\eqref{eq:assbqv}, which is below the critical variance for well-posedness of the DBM since we are in the complex symmetry class (see e.g.~\cite[Lemma 4.3.3]{MR2760897}). By~\eqref{eq:intflowA} it clearly follows that \({\bm x}(t,0)={\bm r}(t+t_0)\) and \({\bm x}(t,1)={\bm s}(t+t_0)\), for any \(t\ge 0\). Note that the process~\eqref{eq:intflowA} is almost the same as~\cite[Eq.~(3.13)]{MR3914908},~\cite[Eq.~(3.13)]{MR3916329}, except for the stochastic term, which in our case depends on \(\alpha\).
Also, to make the notation clearer, we remark that in~\cite{MR3916329, MR3914908} the interpolating process is denoted by \({\bm z}(t,\alpha)\). We changed this notation to \({\bm x}(t,\alpha)\) to avoid confusion with the \(z_l\)-parameters introduced in the previous sections, where we apply Proposition~\ref{pro:ciala} to the processes defined in Section~\ref{sec:COMPPRO}. \begin{remark}\label{rem:ne} Even though all the processes \({\bm \lambda}(t)\), \(\widetilde{\bm \lambda}(t)\), \(\widetilde{\bm \mu}(t)\), \({\bm \mu}(t)\) introduced in Section~\ref{sec:COMPPRO} already satisfy~\cite[Lemma 3.3-3.5]{MR3916329},~\cite[Lemma 3.3-3.5]{MR3914908} as a consequence of the local law~\eqref{eq:lll} and the rigidity estimates~\eqref{eq:rigneed}, we decided to present the proof of Proposition~\ref{pro:ciala} for general deterministic initial data \({\bf s}(0)\) satisfying Definition~\ref{eq:defregpro} (see Remark~\ref{rem:t0diff}). Hence, an additional time \(t_0\) is needed to ensure the validity of~\cite[Lemma 3.3-3.5]{MR3916329},~\cite[Lemma 3.3-3.5]{MR3914908}. More precisely, we first let the DBMs~\eqref{eq:lambdapr}--\eqref{eq:mupr} evolve for a time \(t_0:= n^{-1+\omega_0}\), and then we consider the process~\eqref{eq:intflowA} whose initial data in~\eqref{eq:indatA} is given by a linear interpolation of the solutions of~\eqref{eq:lambdapr}--\eqref{eq:mupr} at time \(t_0\). \end{remark} Before proceeding with the analysis of~\eqref{eq:intflowA} we give some definitions and state some preliminary results necessary for its analysis.
\subsubsection{Interpolating measures and particle rigidity}\label{sec:PR} Using the convention of~\cite[Eq.~(3.10)--(3.11)]{MR3916329}, given a probability measure \(\operatorname{d}\!{} \rho(E)\), we define the \(2n\)-quantiles \(\gamma_i\) by \begin{equation}\label{eq:quantA} \begin{split} \gamma_i &:= \inf\set*{x\given \int_{-\infty}^x \operatorname{d}\!{} \rho(E)\ge \frac{n+i-1}{2n} }, \quad 1\le i \le n, \\ \gamma_i &:= \inf\set*{x\given\int_{-\infty}^x \operatorname{d}\!{} \rho(E)\ge \frac{n+i}{2n} }, \qquad -n\le i \le -1. \end{split} \end{equation} Note that \(\gamma_1=0\) if \(\operatorname{d}\!{} \rho(E)\) is symmetric with respect to \(0\). Let \(\rho_{\mathrm{fc},t}(E)\) be defined above Proposition~\ref{pro:ciala} (see e.g.~\cite[Eq.~(2.5)--(2.6)]{MR3914908} for more details), and let \(\rho_{sc}(E)\) denote the semicircular density; then by \(\gamma_i(t)\), \(\gamma_i^{sc}\) we denote the \(2n\)-quantiles, defined as in~\eqref{eq:quantA}, of \(\rho_{\mathrm{fc},t}\) and \(\rho_{sc}\), respectively. Following the construction of~\cite[Lemma 3.3-3.4, Appendix A]{MR3914908},~\cite[Section 3.2.1]{MR3916329}, we define the interpolating (random) measure \(\operatorname{d}\!{} \rho(E,t,\alpha)\) for any \(\alpha\in [0,1]\). More precisely, the measure \(\operatorname{d}\!{} \rho(E,t,\alpha)\) is deterministic close to zero, and it consists of delta functions at the positions of the particles \(x_i(t,\alpha)\) away from zero. Denote by \(\gamma_i(t,\alpha)\) the quantiles of \(\operatorname{d}\!{}\rho(E,t,\alpha)\), and by \(m(w,t,\alpha)\), with \(w\in\mathbf{H} \), its Stieltjes transform.
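As a concrete illustration of~\eqref{eq:quantA} (again purely for orientation, and not needed later), consider the semicircular density with the standard normalization \(\rho_{sc}(0)=1/\pi\). Expanding the integral in~\eqref{eq:quantA} around zero gives, for small indices,
\[
\gamma_i^{sc}\approx \frac{i-1}{2n\rho_{sc}(0)}=\frac{\pi(i-1)}{2n}, \quad 1\le i\ll n, \qquad \gamma_i^{sc}\approx \frac{\pi i}{2n}, \quad -n\ll i\le -1;
\]
in particular \(\gamma_1^{sc}=0\), and consecutive quantiles near zero are at distance approximately \(\pi/(2n)\), consistent with the quantile spacing of order \(\abs{i-j}/n\) used later.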
Fix \(q_*\in (0,1)\) throughout this section, and let \(k_0=k_0(q_*)\in\mathbf{N}\) be the largest index such that \begin{equation}\label{eq:ko} \abs{\gamma_{\pm k_0}(t_0)}, \abs{\gamma_{\pm k_0}^{sc}}\le q_*G, \end{equation} with \(G\) defined in~\eqref{eq:defgG}; then the measure \(\operatorname{d}\!{} \rho(E,t,\alpha)\) has a deterministic density (denoted by \(\rho(E,t,\alpha)\) with a slight abuse of notation) on the interval \begin{equation}\label{eq:ga} \mathcal{G}_\alpha:= [\alpha\gamma_{-k_0}(t_0)+(1-\alpha)\gamma_{-k_0}^{sc}, \alpha\gamma_{k_0}(t_0)+(1-\alpha)\gamma_{k_0}^{sc}]. \end{equation} Outside \(\mathcal{G}_\alpha\) the measure \(\operatorname{d}\!{} \rho(E,t,\alpha)\) consists of \(1/(2n)\) times delta functions at the particle locations, \(\delta_{x_i(t,\alpha)}\). \begin{remark} By the construction of \(\operatorname{d}\!{}\rho(E,t,\alpha)\) as in~\cite[Lemma 3.3-3.4, Appendix A]{MR3914908},~\cite[Section 3.2.1]{MR3916329}, all the regularity properties of \(\operatorname{d}\!{}\rho(E,t,\alpha)\), its quantiles \(\gamma_i(t,\alpha)\), and its Stieltjes transform \(m(E+\mathrm{i} \eta,t,\alpha)\) in~\cite[Lemma 3.3-3.4]{MR3914908},~\cite[Lemma 3.3-3.4]{MR3916329} hold without any change. In particular, it follows that \begin{equation} \abs{\gamma_i(t,\alpha)-\gamma_j(t,\alpha)}\sim \frac{\abs{i-j}}{n}, \qquad \abs{i},\abs{j}\le q_*G, \end{equation} with \(q_*\) defined above~\eqref{eq:ko}, and \(G\) in~\eqref{eq:defgG}. \end{remark} Define the Stieltjes transform of the empirical measure of the particle configuration \(\{x_{\pm i}(t,\alpha)\}_{i=1}^n\) by \begin{equation}\label{eq:empmA} m_n(w,t,\alpha):= \frac{1}{2n}\sum_{i=-n}^n\frac{1}{x_i(t,\alpha)-w}, \quad w\in\mathbf{H} . \end{equation} We recall that the summation does not include the term \(i=0\) (see Remark~\ref{rem:no0}). 
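Purely as an illustration of~\eqref{eq:quantA} (not used anywhere in the proofs), the following sketch computes the \(2n\)-quantiles of the semicircle density by bisection and checks that \(\gamma_1^{sc}=0\), as remarked above, and that consecutive quantiles near zero are spaced by approximately \(1/(2n\rho_{sc}(0))=\pi/(2n)\), consistent with the spacing \(\abs{\gamma_i-\gamma_j}\sim\abs{i-j}/n\); the value of \(n\) below is an arbitrary choice.

```python
import math

def F_sc(x):
    # CDF of the semicircle density rho_sc(E) = sqrt(4 - E^2)/(2*pi) on [-2, 2]
    return 0.5 + x * math.sqrt(4.0 - x * x) / (4.0 * math.pi) + math.asin(x / 2.0) / math.pi

def quantile(p):
    # invert F_sc by bisection on [-2, 2]
    lo, hi = -2.0, 2.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if F_sc(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

n = 2000
# 2n-quantiles as in (eq:quantA): gamma_i = F^{-1}((n + i - 1)/(2n)) for i >= 1
gamma = {i: quantile((n + i - 1) / (2.0 * n)) for i in range(1, 4)}

gamma1_err = abs(gamma[1])                                      # gamma_1 = 0 for a symmetric measure
spacing_ratio = (gamma[2] - gamma[1]) / (math.pi / (2.0 * n))   # ~ 1, since rho_sc(0) = 1/pi
```

Here \(F_{sc}\) is the explicit antiderivative of \(\rho_{sc}\), so the check is self-contained.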
Then, by the local law and optimal rigidity for short times for singular values in~\cite[Lemma 3.5]{MR3916329}, which was proven by adapting the short-time local laws of~\cite[Appendix A-B]{MR3914908} and~\cite[Section 3]{MR3687212}, we conclude the following local law and optimal rigidity. \begin{lemma}\label{lem:local} Fix \(q\in (0,1)\) and \(\tilde{\epsilon}>0\). Define \(\widehat{C}_q:= \{j:\abs{j}\le qk_0\}\), with \(k_0\) defined in~\eqref{eq:ko}. Then for any \(\xi>0\), with very high probability we have the optimal rigidity \begin{equation} \label{eq:rigA} \sup_{0\le t\le t_0 n^{-\tilde{\epsilon}}}\sup_{i\in \widehat{C}_q}\sup_{0\le\alpha\le 1}\abs{x_i(t,\alpha)-\gamma_i(t,\alpha)}\le \frac{n^{\xi+100\nu}}{n}, \end{equation} and the local law \begin{equation} \label{eq:trllA} \sup_{n^{-1+\tilde{\epsilon}}\le \eta\le 10} \sup_{0\le t\le t_0 n^{-\tilde{\epsilon}}}\sup_{0\le \alpha\le 1} \sup_{E\in q\mathcal{G}_\alpha}\abs{m_n(E+\mathrm{i}\eta,t,\alpha)-m(E+\mathrm{i}\eta,t,\alpha)}\le \frac{n^{\xi+100\nu}}{n\eta}, \end{equation} for sufficiently large \(n\), with \(\nu>0\) from Definition~\ref{eq:defregpro}. \end{lemma} Without loss of generality, in Lemma~\ref{lem:local} we assumed \(k_1=k_0\) in~\cite[Eq.~(3.25)--(3.26)]{MR3916329}. \subsubsection{Short range analysis}\label{sec:SR} In the remainder of this section we perform a local analysis of~\eqref{eq:intflowA}, adapting the analysis of~\cite{MR3916329, MR3914908} and explaining the minor changes needed for the flow~\eqref{eq:intflowA}, for which the driving Brownian motions \(\mathfrak{\bm b}^s\), \(\mathfrak{\bm b}^r\) satisfy Assumption~\ref{ass:close}, compared to the analysis of~\cite[Eq.~(3.13)]{MR3916329},~\cite[Eq.~(3.13)]{MR3914908}. 
More precisely, we run the DBM~\eqref{eq:intflowA} for a time \begin{equation}\label{eq:oldt1} t_1:= \frac{n^{\omega_1}}{n}, \end{equation} for any \(\omega_1>0\) such that \(\nu\ll \omega_1\ll \omega_K\), with \(\nu,\omega_K\) defined in Definition~\ref{eq:defregpro} and above Assumption~\ref{ass:close}, respectively, so that~\eqref{eq:intflowA} reaches its local equilibrium (see Section~\ref{rem:s} for a summary of the different scales). Moreover, since the dynamics of \(x_i(t,\alpha)\) is mostly influenced by the particles close to it, in the following we define a short range approximation of the process \({\bm x}(t,\alpha)\) (see~\eqref{eq:intflowshortA} later), denoted by \(\widehat{\bm x}(t,\alpha)\), and use the homogenisation theory developed in~\cite{MR3914908}, adapted in~\cite{MR3916329} to the singular value flow, for the short range kernel. \begin{remark} We do not need to define the shifted process \(\widetilde{\bm x}(t,\alpha)\) as in~\cite[Eq.~(3.29)--(3.32)]{MR3916329} and~\cite[Eq.~(3.36)--(3.40)]{MR3914908}, since in our case the measure \(\operatorname{d}\!{} \rho(E,t,\alpha)\) is symmetric with respect to \(0\) by assumption; hence, using the notation in~\cite[Eq.~(3.29)--(3.32)]{MR3916329}, we have \(\widetilde{\bm x}(t,\alpha)={\bm x}(t,\alpha)-\gamma_1(t,\alpha)={\bm x}(t,\alpha)\). From now on we only use \({\bm x}(t,\alpha)\), and the reader may identify \(\widetilde{\bm x}(t,\alpha)\equiv {\bm x}(t,\alpha)\) for a direct analogy with~\cite{MR3916329, MR3914908}. \end{remark} Our analysis will be completely local; hence we introduce a short range cut-off. Fix \(\omega_l, \omega_A>0\) so that \begin{equation}\label{eq:relscalA} 0< \omega_1 \ll \omega_l\ll \omega_A\ll \omega_0\ll \omega_Q, \end{equation} with \(\omega_1\) defined in~\eqref{eq:oldt1}, \(\omega_0\) defined below~\eqref{eq:indatA}, and \(\omega_Q\) in~\ref{close3} of Assumption~\ref{ass:close}. 
Moreover, we assume that \(\omega_A\) is such that \begin{equation}\label{eq:choosekn} K_n=n^{\omega_A}, \end{equation} with \(K_n=n^{\omega_K}\) in Assumption~\ref{ass:close}, i.e.\ \(\omega_A=\omega_K\). It would be enough to choose \(\omega_A\ll\omega_K\), but to avoid a further splitting in~\eqref{eq:intflowshortA} we assume \(\omega_K=\omega_A\). For any \(q\in(0,1)\), define the set \begin{equation}\label{eq:shortscsetA} A_q:= \set*{(i,j)\given \abs{i-j}\le n^{\omega_l} \; \text{or} \; ij>0, i\notin \widehat{C}_q, j\notin \widehat{C}_q}, \end{equation} and denote \(A_{q,(i)}:= \set{j\given(i,j)\in A_q }\). In the remainder of this section we will often use the notations \[ \sum_j^{A_{q,(i)}}:= \sum_{j\in A_{q,(i)}}, \qquad \sum_j^{A_{q,(i)}^c}:= \sum_{j\notin A_{q,(i)}}. \] Let \(q_*\in (0,1)\) be defined above~\eqref{eq:ko}; then we define the short range process \(\widehat{\bm x}(t,\alpha)\) (cf.~\cite[Eq.~(3.35)--(3.36)]{MR3916329},~\cite[Eq.~(3.45)--(3.46)]{MR3914908}) as follows: \begin{equation}\label{eq:intflowshortA} \begin{split} \operatorname{d}\!{} \widehat{x}_i(t,\alpha)&= \frac{1}{2n}\sum_j^{A_{q_*,(i)}} \frac{1}{\widehat{x}_i(t,\alpha)-\widehat{x}_j(t,\alpha)} \operatorname{d}\!{} t \\ &\quad +\begin{cases} \alpha \frac{\operatorname{d}\!{} \mathfrak{b}_i^s}{\sqrt{2n}}+(1-\alpha)\frac{\operatorname{d}\!{} \mathfrak{b}_i^r}{\sqrt{2n}} &\text{if} \quad \abs{i}\le n^{\omega_A}, \\ \alpha \frac{\operatorname{d}\!{} \mathfrak{b}_i^s}{\sqrt{2n}}+(1-\alpha)\frac{\operatorname{d}\!{} \mathfrak{b}_i^r}{\sqrt{2n}}+J_i(\alpha,t) \operatorname{d}\!{} t &\text{if} \quad n^{\omega_A}<\abs{i}\le n, \end{cases} \end{split} \end{equation} where \begin{equation}\label{eq:defJA} J_i(\alpha,t):= \frac{1}{2n}\sum_j^{A_{q_*,(i)}^c} \frac{1}{x_i(t,\alpha)-x_j(t,\alpha)}, \end{equation} with initial data \(\widehat{\bm x}(0,\alpha)={\bm x}(0,\alpha)\). 
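As a toy illustration of the structure of the index set \(A_q\) in~\eqref{eq:shortscsetA} (not used anywhere in the proofs): inside the bulk \(\widehat{C}_q\) only the diagonal band \(\abs{i-j}\le n^{\omega_l}\) belongs to \(A_q\), while outside the bulk all same-sign pairs do, and the set is symmetric in \((i,j)\). The following sketch checks this on a small example; the values of \(n\), \texttt{ell} (standing in for \(n^{\omega_l}\)) and \texttt{k} (standing in for the boundary of \(\widehat{C}_q\)) are arbitrary.

```python
# toy version of the short-range set A_q from (eq:shortscsetA)
n, ell, k = 50, 5, 20
idx = [i for i in range(-n, n + 1) if i != 0]   # the index 0 is excluded (cf. Remark rem:no0)

def in_bulk(i):
    # i belongs to C-hat_q
    return abs(i) <= k

A = {(i, j) for i in idx for j in idx
     if abs(i - j) <= ell or (i * j > 0 and not in_bulk(i) and not in_bulk(j))}

# A is symmetric: (i, j) in A iff (j, i) in A
is_symmetric = all((j, i) in A for (i, j) in A)
# for pairs with both indices in the bulk, only the band |i - j| <= ell survives
band_only_in_bulk = all(abs(i - j) <= ell
                        for (i, j) in A if in_bulk(i) and in_bulk(j))
```

Both properties hold by inspection of the defining condition; the code merely makes the case split explicit.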
Note that \begin{equation}\label{eq:boundJA} \sup_{0\le t\le t_1}\sup_{0\le\alpha\le 1}\sup_{n^{\omega_A}<\abs{i}\le n}\abs{J_i(\alpha,t)}\le \log n, \end{equation} with very high probability. \begin{remark} Note that the SDE defined in~\eqref{eq:intflowshortA} has the same form as in~\cite[Eq.~(3.70)]{MR3914908}, with \(F_i=0\) in our case, except for the stochastic term in~\eqref{eq:intflowshortA}, which looks slightly different; in particular, it depends on \(\alpha\). Nevertheless, by Assumption~\ref{ass:close}, the quadratic variation of the driving Brownian motions in~\eqref{eq:intflowshortA} is also bounded by one uniformly in \(\alpha\in[0,1]\). Moreover, the process defined in~\eqref{eq:intflowshortA} and the measure \(\operatorname{d}\!{} \rho(E,t,\alpha)\) satisfy~\cite[Eq.~(3.71)--(3.77)]{MR3914908}. \end{remark} Since the stochastic differential cancels in the difference process \(\widehat{\bm x}(t,\alpha)-{\bm x}(t,\alpha)\), \cite[Lemma 3.8]{MR3914908} applies without any modification, and it follows that \begin{equation}\label{eq:shortlongA} \sup_{0\le t\le t_1}\sup_{0\le \alpha\le 1}\sup_{\abs{i}\le n}\abs{\widehat{x}_i(t,\alpha)-x_i(t,\alpha)}\le n^{\xi+100\nu} t_1\left( \frac{1}{n^{\omega_l}}+\frac{n^{\omega_A}}{n^{\omega_0}}+\frac{1}{\sqrt{nG}}\right), \end{equation} for any \(\xi>0\) with very high probability, with \(G\) defined in~\eqref{eq:defgG}. In particular,~\eqref{eq:shortlongA} implies that the short range process \(\widehat{\bm x}(t,\alpha)\), defined in~\eqref{eq:intflowshortA}, approximates the process \({\bm x}(t,\alpha)\) defined in~\eqref{eq:intflowA} very well, i.e.\ the two processes are closer than the fluctuation scale. 
Next, in order to use the smallness of~\eqref{eq:defL}--\eqref{eq:assbqv} in Assumption~\ref{ass:close} for \(\abs{i}\le n^{\omega_A}\), we define \({\bm u}(t,\alpha):= \partial_\alpha \widehat{\bm x}(t,\alpha)\), which is the solution of the following discrete SPDE (cf.~\cite[Eq.~(3.38)]{MR3916329},~\cite[Eq.~(3.63)]{MR3914908}): \begin{equation}\label{eq:pareqA} \operatorname{d}\!{} u_i=\sum_j^{A_{q_*,(i)}} B_{ij}(u_j-u_i) \operatorname{d}\!{} t+\operatorname{d}\!{} \xi_{1,i}+\xi_{2,i} \operatorname{d}\!{} t, \qquad \text{i.e.}\qquad \operatorname{d}\!{} {\bm u}=-\mathcal{B}{\bm u} \operatorname{d}\!{} t+\operatorname{d}\!{} {\bm \xi}_1+{\bm\xi}_2 \operatorname{d}\!{} t, \end{equation} where \begin{equation}\label{eq:shortrankerA} \begin{split} B_{ij}&:= \frac{\bm1_{j\ne \pm i}}{2n(\widehat{x}_i-\widehat{x}_j)^2}, \quad \operatorname{d}\!{} \xi_{1,i}:= \frac{\operatorname{d}\!{} \mathfrak{b}_i^s}{\sqrt{2n}}-\frac{\operatorname{d}\!{} \mathfrak{b}_i^r}{\sqrt{2n}}, \\ \xi_{2,i}&:= \begin{cases} 0 &\text{if} \quad \abs{i}\le n^{\omega_A}, \\ \partial_\alpha J_i(\alpha,t)&\text{if} \quad n^{\omega_A}<\abs{i}\le n, \end{cases} \end{split} \end{equation} with \(J_i(\alpha,t)\) defined in~\eqref{eq:defJA}. We remark that the operator\footnote{The operator \(\mathcal{B}\) defined here is not to be confused with the completely unrelated one in~\eqref{cB def}.} \(\mathcal{B}\) defined via the kernel in~\eqref{eq:shortrankerA} depends on \(\alpha\) and \(t\). It is not hard to see (e.g.\ see~\cite[Eq.~(3.65), Eq.~(3.68)--(3.69)]{MR3914908}) that the forcing term \({\bm \xi}_2\) is bounded with very high probability by \(n^C\), for some \(C>0\), for \(n^{\omega_A}<\abs{i}\le n\). Note that the only difference in~\eqref{eq:pareqA} compared to~\cite[Eq.~(3.38)]{MR3916329},~\cite[Eq.~(3.63)]{MR3914908} is the additional term \(\operatorname{d}\!{} {\bm \xi}_1\), which will be negligible for our analysis. 
Let \(\mathcal{U}\) be the semigroup associated to \(\mathcal{B}\), i.e.\ if \(\partial_t {\bm v}=-\mathcal{B}{\bm v}\), then for any \(0\le s\le t\) we have that \[ v_i(t)=\sum_{j=-n}^n \mathcal{U}_{ij}(s,t,\alpha) v_j(s), \qquad \abs{i}\le n. \] The first step to analyse the equation in~\eqref{eq:pareqA} is the following finite speed of propagation estimate (cf.~\cite[Lemma 3.9]{MR3916329},~\cite[Lemma 3.7]{MR3914908}). \begin{lemma}\label{lem:finspeedA} Let \(0\le s\le t\le t_1\). Fix \(0<q_1<q_2<q_*\), with \(q_*\in (0,1)\) defined in~\eqref{eq:ko}, and \(\epsilon_1>0\) such that \(\epsilon_1\ll\omega_A\). Then for any \(\alpha\in[0,1]\) we have \begin{equation} \label{eq:finspestA} \abs{\mathcal{U}_{ji}(s,t,\alpha)}+\abs{\mathcal{U}_{ij}(s,t,\alpha)}\le n^{-D}, \end{equation} for any \(D>0\) with very high probability, if either \(i\in \widehat{C}_{q_2}\) and \(\abs{i-j}> n^{\omega_l+\epsilon_1}\), or if \(i\notin \widehat{C}_{q_2}\) and \(j\in\widehat{C}_{q_1}\). \begin{proof} The proof of this lemma follows the same lines as that of~\cite[Lemma 3.7]{MR3914908}. There are only two differences, which we point out. The first one is that~\cite[Eq.~(4.15)]{MR3914908}, using the notation therein, has to be replaced by \begin{equation} \label{eq:cpl} \sum_k v_k^2(\nu^2(\psi_k')^2+\nu\psi_k'') \Exp*{\operatorname{d}\!{} C_k(\alpha,t)\operatorname{d}\!{} C_k(\alpha,t)\given \mathcal{F}_t}, \end{equation} where \(\mathcal{F}_t\) is the filtration defined in Assumption~\ref{ass:close}, and \(C_k(\alpha,t)\) is defined as \begin{equation} \label{eq:defcka} C_k(\alpha,t):= \alpha \frac{\mathfrak{b}_k^s(t)}{\sqrt{2n}}+(1-\alpha)\frac{\mathfrak{b}_k^r(t)}{\sqrt{2n}}. \end{equation} We remark that \(\nu\) in~\eqref{eq:cpl} should not be confused with \(\nu\) in Definition~\ref{eq:defregpro}. 
Then, by the Kunita-Watanabe inequality, it is clear that \begin{equation} \label{eq:mindt} \Exp*{\operatorname{d}\!{} C_k(\alpha,t)\operatorname{d}\!{} C_k(\alpha,t)\given \mathcal{F}_t}\lesssim \frac{\operatorname{d}\!{} t}{n}, \end{equation} uniformly in \(\abs{k}\le n\), \(t\ge 0\), and \(\alpha\in [0,1]\). The bound~\eqref{eq:mindt} is the only input needed to bound~\cite[Eq.~(4.21)]{MR3914908}. The second difference is that the stochastic differential \((\sqrt{2}\operatorname{d}\!{} B_k)/\sqrt{n}\) in~\cite[Eq.~(4.21)]{MR3914908} has to be replaced by \(\operatorname{d}\!{} C_k(\alpha,t)\) defined in~\eqref{eq:defcka}. This change is inconsequential in the bound~\cite[Eq.~(4.26)]{MR3914908}, since \(\E \operatorname{d}\!{} C_k(\alpha,t)=0\). \end{proof} \end{lemma} Moreover, the results in~\cite[Lemma 3.8]{MR3916329},~\cite[Lemma 3.10]{MR3914908} hold without any change, since their proofs are completely deterministic and the stochastic differential in the definition of the process \(\widehat{\bm x}(t,\alpha)\) does not play any role. In the remainder of this section, before completing the proof of Proposition~\ref{pro:ciala}, we describe the homogenisation argument to approximate the \(t\)-dependent kernel of \(\mathcal{B}\) by a continuous kernel (denoted by \(p_t(x,y)\) below). We follow verbatim~\cite[Section 3-4]{MR3914908} and its adaptation to the singular value flow in~\cite[Section 3.4]{MR3916329}, except for the bound of the rhs.\ of~\eqref{eq:step4A}, where we handle the additional term \(\operatorname{d}\!{} {\bm \xi}_1\) in~\eqref{eq:shortrankerA}. Fix a constant \(\epsilon_B>0\) such that \(\omega_A-\epsilon_B>\omega_l\), and let \(a\in\mathbf{Z} \) be such that \(0<\abs{a}\le n^{\omega_A-\epsilon_B}\). 
Define also the equidistant points \(\gamma_j^f:= j (2n\rho_{sc}(0))^{-1}\), which approximate the quantiles \(\gamma_j(t,\alpha)\) very well for small \(j\), i.e.\ \(\abs{\gamma_j^f-\gamma_j(t,\alpha)}\lesssim n^{-1}\) for \(\abs{j}\le n^{\omega_0/2}\) (see~\cite[Eq.~(3.91)]{MR3914908}). Consider the solution of \begin{equation}\label{eq:wsolA} \partial_t w_i=-(\mathcal{B}w)_i, \quad w_i(0)=2n\delta_{ia}, \end{equation} and define the cut-off \(\eta_l:= n^{\omega_l} (2n\rho_{sc}(0))^{-1}\). Let \(p_t(x,y)\) be the fundamental solution of the equation \begin{equation}\label{eq:conteqA} \partial_t f(x)=\int_{\abs{x-y}\le \eta_l}\frac{f(y)-f(x)}{(x-y)^2}\rho_{sc}(0)\operatorname{d}\!{} y. \end{equation} The idea of the homogenisation argument is that the deterministic solution \(f\) of~\eqref{eq:conteqA} approximates very well the random solution of~\eqref{eq:wsolA}. This is formulated in terms of the solution kernels of the two equations in Proposition~\ref{prop:homA}. Following~\cite[Lemma 3.9-3.13, Corollary 3.14, Theorem 3.15-3.17]{MR3916329}, which are obtained adapting the proof of~\cite[Section 3.6]{MR3914908}, we will conclude the following proposition. \begin{proposition}\label{prop:homA} Let \(a,i\in\mathbf{Z} \) such that \(\abs{a}\le n^{\omega_A-\epsilon_B}\) and \(\abs{i-a}\le n^{\omega_l}/10\). Fix \(\epsilon_c>0\) such that \(\omega_1-\epsilon_c>0\), let \(t_1:= n^{-1+\omega_1}\) and \(t_2:= n^{-\epsilon_c}t_1\), then for any \(\alpha\in [0,1]\) and for any \(\abs{u}\le t_2\) we have \begin{equation} \label{eq:homapprA} \abs*{\mathcal{U}_{ia}(0,t_1+u,\alpha)-\frac{p_{t_1}(\gamma_i^f,\gamma_a^f)}{n}}\le \frac{n^{100\nu+\epsilon_c}}{nt_1}\left(\frac{(nt_1)^2}{n^{\omega_l}}+\frac{1}{(nt_1)^{1/10}}+\frac{1}{n^{3\epsilon_c/2}}\right), \end{equation} with very high probability. 
\begin{proof} The proof of this proposition relies on~\cite[Section 3.6]{MR3914908}, which has been adapted to the \(2\times 2\) block structure in~\cite[Lemma 3.9--3.13, Corollary 3.14, Theorem 3.15--3.17]{MR3916329}. We thus present only the differences compared to~\cite{MR3916329, MR3914908}; for a complete proof we refer the reader to these works. The only difference in the proof of this proposition compared to the proof of~\cite[Theorem 3.17]{MR3916329},~\cite[Theorem 3.11]{MR3914908} is in~\cite[Eq.~(3.121) and Eq.~(3.148) of Lemma 3.14]{MR3914908}. The main goal of~\cite[Lemma 3.14]{MR3914908} is to prove that \begin{equation} \label{eq:maindiff} \operatorname{d}\!{} \frac{1}{2n}\sum_{1\le \abs{i}\le n}(w_i-f_i)^2=-\braket{{\bm w}(t)-{\bm f}(t),\mathcal{B}({\bm w}(t)-{\bm f}(t))}\operatorname{d}\!{} t+\text{Lower order}, \end{equation} where \(f_i:=f(\widehat{x}_i(t,\alpha),t)\), with \(\widehat{x}_i(t,\alpha)\) being the solution of~\eqref{eq:intflowshortA}, and \({\bm w}(t)\), \({\bm f}(t)\) being the solutions of~\eqref{eq:wsolA} and~\eqref{eq:conteqA} with \(x=\widehat{x}_i(t,\alpha)\), respectively. 
In order to prove~\eqref{eq:maindiff}, following~\cite[Eq.~(3.121)]{MR3914908} and using the notation therein (with \(N=2n\) and replacing \(\widehat{z}_i\) by \(\widehat{x}_i\)), we compute \begin{equation} \label{eq:replace1} \begin{split} &\operatorname{d}\!{} \frac{1}{2n}\sum_{1\le \abs{i}\le n}(w_i-f_i)^2 \\ &\,\,= \frac{1}{n}\sum_{1\le \abs{i}\le n}(w_i-f_i)\left[\partial_t w_i \operatorname{d}\!{} t-(\partial_t f)(t,\widehat{x}_i)\operatorname{d}\!{} t-f'(t,\widehat{x}_i)\operatorname{d}\!{} \widehat{x}_i\right]\\ &\,\,\quad +\frac{1}{n}\sum_{1\le \abs{i}\le n} \left(-(w_i-f_i)f''(t, \widehat{x}_i)+(f'(t,\widehat{x}_i))^2\right) \Exp*{\operatorname{d}\!{} C_i(\alpha,t)\operatorname{d}\!{} C_i(\alpha,t)\given\mathcal{F}_t}, \end{split} \end{equation} where \[ C_i(\alpha,t):= \alpha \frac{\mathfrak{b}_i^s(t)}{\sqrt{2n}}+(1-\alpha)\frac{ \mathfrak{b}_i^r(t)}{\sqrt{2n}}. \] As a consequence of the slight difference in the definition of \(\operatorname{d}\!{} \widehat{x}_i\) in~\eqref{eq:intflowshortA}, compared to the definition of \(\operatorname{d}\!{} \widehat{z}_i\) in~\cite[Eq. (3.70)]{MR3914908}, the martingale term in~\eqref{eq:replace1} is given by (cf.~\cite[Eq.~(3.148)]{MR3914908}) \begin{equation} \label{eq:replace2} \operatorname{d}\!{} M_t=\frac{1}{2n}\sum_{1\le \abs{i}\le n} (w_i-f_i) f_i' \operatorname{d}\!{} C_i(\alpha,t). \end{equation} The terms in the first line of the rhs.\ of~\eqref{eq:replace1} are bounded exactly as in~\cite[Eq.~(3.124)--(3.146), (3.149)--(3.154)]{MR3914908}. It remains to estimate the second line in the rhs.\ of~\eqref{eq:replace1}. The conditional expectation in the second line of~\eqref{eq:replace1} is bounded by a constant times \(n^{-1}\operatorname{d}\!{} t\), exactly as in~\eqref{eq:mindt}. This is the only input needed to bound the terms of~\eqref{eq:replace1} as in~\cite[Eq. (3.122)-(3.123)]{MR3914908}. Hence, in order to conclude the proof of this proposition we are left with the term in~\eqref{eq:replace2}. 
The quadratic variation of the term in~\eqref{eq:replace2} is given by \[ \operatorname{d}\!{} \braket{ M}_t=\frac{1}{4n^2}\sum_{1\le \abs{i}, \abs{j}\le n} (w_i-f_i)(w_j-f_j) f_i' f_j' \Exp*{\operatorname{d}\!{} C_i(\alpha,t)\operatorname{d}\!{} C_j(\alpha,t)\given\mathcal{F}_t}, \] where the notation in~\cite[Eq.~(3.155)--(3.157)]{MR3914908} is used. By~\ref{close2} of Assumption~\ref{ass:close} it follows that \begin{gather} \begin{aligned} \operatorname{d}\!{} \braket{ M}_t&=\frac{1}{4n^2}\sum_{1\le \abs{i}, \abs{j}\le n^{\omega_A}} (w_i-f_i)(w_j-f_j) f_i' f_j' \Exp*{\operatorname{d}\!{} C_i(\alpha,t)\operatorname{d}\!{} C_j(\alpha,t)\given\mathcal{F}_t} \\ &\quad +\frac{\alpha^2+(1-\alpha)^2}{8n^3} \sum_{n^{\omega_A}< \abs{i}\le n} (w_i-f_i)^2 (f_i')^2 \operatorname{d}\!{} t. \end{aligned}\label{eq:qM1}\raisetag{-5em} \end{gather} Then, by~\ref{close3} of Assumption~\ref{ass:close}, for \(\abs{i}, \abs{j}\le n^{\omega_A}\) we have \begin{gather} \begin{aligned} \Exp*{\operatorname{d}\!{} C_i(\alpha,t)\operatorname{d}\!{} C_j(\alpha,t)\given \mathcal{F}_t}&=\bigl[\alpha^2+(1-\alpha)^2 \bigr] \frac{\delta_{ij}}{2n} \operatorname{d}\!{} t \\ &\quad+\frac{\alpha(1-\alpha)}{2n}\Exp*{\bigl(\operatorname{d}\!{} \mathfrak{b}_i^s\operatorname{d}\!{} \mathfrak{b}_j^r+\operatorname{d}\!{} \mathfrak{b}_i^r\operatorname{d}\!{} \mathfrak{b}_j^s\bigr)\given\mathcal{F}_t}, \end{aligned}\label{eq:qM2}\raisetag{-4em} \end{gather} and \begin{equation} \label{eq:qM3} \Exp*{ \operatorname{d}\!{} \mathfrak{b}_i^s\operatorname{d}\!{} \mathfrak{b}_j^r\given\mathcal{F}_t}=\Exp*{ (\operatorname{d}\!{} \mathfrak{b}_i^s-\operatorname{d}\!{} \mathfrak{b}_i^r)\operatorname{d}\!{} \mathfrak{b}_j^r\given\mathcal{F}_t}+\delta_{ij} \operatorname{d}\!{} t \lesssim (\abs{L_{ii}(t)}^{1/2}+\delta_{ij} )\operatorname{d}\!{} t, \end{equation} where in the last inequality we used the Kunita-Watanabe inequality. 
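For the reader's convenience, we record the elementary expansion behind~\eqref{eq:qM2}; it uses only bilinearity and the fact that, by~\ref{close3} of Assumption~\ref{ass:close}, in this index range the same-type covariances are \(\delta_{ij}\operatorname{d}\!{} t\):

```latex
\[
\begin{split}
\Exp*{\operatorname{d}\!{} C_i(\alpha,t)\operatorname{d}\!{} C_j(\alpha,t)\given\mathcal{F}_t}
&=\frac{\alpha^2}{2n}\Exp*{\operatorname{d}\!{} \mathfrak{b}_i^s\operatorname{d}\!{} \mathfrak{b}_j^s\given\mathcal{F}_t}
+\frac{(1-\alpha)^2}{2n}\Exp*{\operatorname{d}\!{} \mathfrak{b}_i^r\operatorname{d}\!{} \mathfrak{b}_j^r\given\mathcal{F}_t}\\
&\quad+\frac{\alpha(1-\alpha)}{2n}\Exp*{\bigl(\operatorname{d}\!{} \mathfrak{b}_i^s\operatorname{d}\!{} \mathfrak{b}_j^r
+\operatorname{d}\!{} \mathfrak{b}_i^r\operatorname{d}\!{} \mathfrak{b}_j^s\bigr)\given\mathcal{F}_t},
\end{split}
\]
```

The first two terms equal \(\bigl[\alpha^2+(1-\alpha)^2\bigr]\delta_{ij}(2n)^{-1}\operatorname{d}\!{} t\), i.e.\ the first term of~\eqref{eq:qM2}, while the last term is exactly the second one.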
% Combining~\eqref{eq:qM1}--\eqref{eq:qM3} we finally conclude that \begin{gather} \begin{aligned} \operatorname{d}\!{} \braket{ M}_t&\le \frac{1}{8n^3} \sum_{1\le \abs{i}\le n} (w_i-f_i)^2 (f_i')^2 \operatorname{d}\!{} t \\ &\quad + \frac{\alpha(1-\alpha)}{4n^3}\sum_{1\le \abs{i}, \abs{j}\le n^{\omega_A}} \abs{L_{ii}(t)}^{1/2} \abs*{ (w_i-f_i)(w_j-f_j) f_i' f_j' } \operatorname{d}\!{} t. \end{aligned}\label{eq:qM4}\raisetag{-4em} \end{gather} Since \(\alpha\in [0,1]\), \(\abs{L_{ii}(t)}\le n^{-\omega_Q}\) and \(\omega_A\ll \omega_Q\) by~\eqref{eq:assbqv} and~\eqref{eq:relscalA}--\eqref{eq:choosekn}, using Cauchy-Schwarz in~\eqref{eq:qM4}, we conclude that \begin{equation} \label{eq:qM5} \operatorname{d}\!{} \braket{ M}_t\lesssim\frac{1}{n^3} \sum_{1\le \abs{i}\le n} (w_i-f_i)^2 (f_i')^2 \operatorname{d}\!{} t, \end{equation} which is exactly the lhs.\ in~\cite[Eq.~(3.155)]{MR3914908}, hence the high probability bound in~\cite[Eq.~(3.155)]{MR3914908} follows. Then the remainder of the proof of~\cite[Lemma 3.14]{MR3914908} proceeds exactly in the same way. Given~\eqref{eq:replace1} as an input, the proof of~\eqref{eq:homapprA} is concluded following the proof of~\cite[Theorems 3.16-3.17]{MR3914908} line by line. % \end{proof} \end{proposition} \subsubsection{Proof of Proposition~\ref{pro:ciala}}\label{sec:endsec} We conclude this section with the proof of Proposition~\ref{pro:ciala} following~\cite[Section 3.6]{MR3916329}. We remark that all the estimates above hold uniformly in \(\alpha\in[0,1]\) when bounding an integrand by~\cite[Appendix E]{MR3914908}. \begin{proof}[Proof of Proposition~\ref{pro:ciala}] For any \(\abs{i}\le n\), by~\eqref{eq:shortlongA}, it follows that \begin{equation} \label{eq:step1A} s_i(t_0+t_1)-r_i(t_0+t_1)=x_i(t_1,1)-x_i(t_1,0)=\widehat{x}_i(t_1,1)-\widehat{x}_i(t_1,0)+\mathcal{O}\left(\frac{n^\xi t_1}{n^{\omega_l}} \right). 
\end{equation} We remark that in~\eqref{eq:step1A} we ignored the scaling~\eqref{eq:hihihi}, since it can be removed by a simple time-rescaling (see Remark~\ref{ren:res} for more details). Then, using that \(u_i=\partial_\alpha \widehat{x}_i\), we have that \begin{equation} \label{eq:step2A} \widehat{x}_i(t_1,1)-\widehat{x}_i(t_1,0)=\int_0^1 u_i(t_1,\alpha)\operatorname{d}\!{} \alpha. \end{equation} We recall that \({\bm u}\) is a solution of \[ \operatorname{d}\!{} {\bm u}=-\mathcal{B}{\bm u} \operatorname{d}\!{} t+\operatorname{d}\!{} {\bm \xi}_1+{\bm \xi}_2 \operatorname{d}\!{} t, \] as defined in~\eqref{eq:pareqA}--\eqref{eq:shortrankerA}, with \begin{equation} \label{eq:step3A} \abs{\xi_{2,i}(t)}\le \bm1_{\{\abs{i}> n^{\omega_A}\}}n^C, \end{equation} with very high probability, for some constant \(C>0\) and any \(0\le t\le t_1\). Define \({\bm v}={\bm v}(t)\) as the solution of \[ \partial_t {\bm v}=-\mathcal{B}{\bm v}, \quad {\bm v}(0)={\bm u}(0), \] then, omitting the \(\alpha\)-dependence from the notation, by the Duhamel formula we have \begin{equation} \label{eq:step4A} \begin{split} u_i(t_1)-v_i(t_1)&=\int_0^{t_1} \sum_{\abs{p}\le n}\mathcal{U}_{ip}(s,t_1)(\operatorname{d}\!{} \xi_{1,p}(s)+\xi_{2,p} \operatorname{d}\!{} s)\\ &=\int_0^{t_1} \sum_{\abs{p}\le n^{\omega_A}}\mathcal{U}_{ip}(s,t_1)\operatorname{d}\!{} \xi_{1,p}(s) \\ &\quad + \int_0^{t_1} \sum_{n^{\omega_A}<\abs{p}\le n}\mathcal{U}_{ip}(s,t_1)(\operatorname{d}\!{} \xi_{1,p}(s)+\xi_{2,p} \operatorname{d}\!{} s). \end{split} \end{equation} In the remainder of this section we focus on the estimate of the rhs.\ of~\eqref{eq:step4A} for \(\abs{i}\le n^{\omega_A}/2\). Note that \(\operatorname{d}\!{} \xi_{1,p}\) in~\eqref{eq:step4A} is a new term compared with~\cite[Eq.~(3.84)]{MR3916329}; we concentrate on its estimate, whilst \(\xi_{2,p}\) is estimated exactly as in~\cite[Eq.~(3.84)--(3.85)]{MR3916329}. 
The term \(\operatorname{d}\!{} \xi_{1,p}\) for \(\abs{p}\le n^{\omega_A}\) is estimated similarly to the term \((A_N\operatorname{d}\!{} B_i)/\sqrt{N}\) of~\cite[Eq.~(4.25)]{MR4009717} in~\cite[Lemma 4.2]{MR4009717}, using the notation therein. By~\eqref{eq:defL}--\eqref{eq:assbqv} in Assumption~\ref{ass:close} and the fact that \(\sqrt{2n}\operatorname{d}\!{} \xi_{1,p}=\operatorname{d}\!{} \mathfrak{b}_p^s-\operatorname{d}\!{} \mathfrak{b}_p^r\), it follows that the quadratic variation of the first term in the rhs.\ of the second equality of~\eqref{eq:step4A} is bounded by \begin{equation}\label{eq:newb1} n^{-1}\int_0^{t_1}\sum_{\abs{p},\abs{q}\le n^{\omega_A}} \mathcal{U}_{ip}(s,t_1)\mathcal{U}_{iq}(s,t_1) L_{pq}(s) \operatorname{d}\!{} s \lesssim \frac{t_1 \norm{ \mathcal{U}^* \delta_i}_1^2}{n^{1+\omega_Q}}\lesssim \frac{t_1}{n^{1+\omega_Q}}. \end{equation} In~\eqref{eq:newb1} we used that the bound \(\abs{L_{pq}(t)}\le n^{-\omega_Q}\) holds with very high probability uniformly in \(t\ge 0\) when \(L_{pq}(t)\) is integrated in time (see e.g.~\cite[Appendix~E]{MR3914908}), and, in the last inequality, the contraction of the semigroup \(\mathcal{U}\) on \(\ell^1\) to bound \( \norm{ \mathcal{U}^*\delta_i}_1^2\le 1\). The rhs.\ of~\eqref{eq:newb1} is much smaller than the rigidity scale under the assumption \(\omega_1\ll \omega_Q\) (see~\eqref{eq:relscalA}). Then, using the Burkholder-Davis-Gundy (BDG) inequality, we conclude that \begin{equation}\label{eq:newb2} \sup_{0\le t\le t_1} \abs*{\int_0^t \sum_{\abs{p}\le n^{\omega_A}}\mathcal{U}_{ip}(s,t)\operatorname{d}\!{} \xi_{1,p}(s)}\lesssim \sqrt{\frac{t_1}{n^{1+\omega_Q}}}, \end{equation} with very high probability. 
On the other hand, using the Kunita-Watanabe inequality, we bound the quadratic variation of the sum over \(\abs{p}> n^{\omega_A}\) of \(\operatorname{d}\!{} \xi_{1,p}\) in~\eqref{eq:step4A} as \begin{gather} \begin{aligned} & \frac{1}{n}\int_0^{t_1}\sum_{\substack{\abs{p}> n^{\omega_A}\\ \abs{q}> n^{\omega_A}}} \mathcal{U}_{ip}(s,t_1)\mathcal{U}_{iq}(s,t_1) \Exp*{\left(\operatorname{d}\!{} \mathfrak{b}^s_p(s)-\operatorname{d}\!{} \mathfrak{b}^r_p(s)\right)\left(\operatorname{d}\!{} \mathfrak{b}^s_q(s)-\operatorname{d}\!{} \mathfrak{b}^r_q(s)\right)\given \mathcal{F}_s} \\ &\qquad\qquad\qquad\le 4 n^{-1}\int_0^{t_1}\left(\sum_{n^{\omega_A}< \abs{p}\le n} \mathcal{U}_{ip}(s,t_1)\right)^2 \operatorname{d}\!{} s \le n^{-D}, \end{aligned}\label{eq:newb3}\raisetag{-4em} \end{gather} for any \(D>0\) with very high probability, by the finite speed of propagation estimate~\eqref{eq:finspestA}, since \(\abs{i}\le n^{\omega_A}/2\) and \(\abs{p}> n^{\omega_A}\). We then conclude a very high probability bound for the \(\operatorname{d}\!{} \xi_{1,p}\)-term in the last line of~\eqref{eq:step4A} using the BDG inequality, as in~\eqref{eq:newb2}. This concludes the bound of the new term \(\operatorname{d}\!{} {\bm \xi}_1\). The remainder of the proof of Proposition~\ref{pro:ciala} proceeds exactly as in~\cite[Eq.~(3.86)--(3.99)]{MR3916329}, hence we omit it. Since \(t_f=t_0+t_1\), choosing \(\omega=\omega_1/10\) and \(\widehat{\omega}\le \omega/10\), the above computations conclude the proof of Proposition~\ref{pro:ciala}. \end{proof}
\section{Introduction} The famous $O(3)$ model in two dimensions has various applications in condensed matter and high energy physics. From the point of view of gauge field theories, it is one of the few toy models that exhibit asymptotic freedom and dynamical mass generation. The other main feature of the $O(3)$ model is the existence of {\em solitonic solutions} \cite{polyakov:75}. They are stabilized by a topological quantum number belonging to the second homotopy group of the color two-sphere\footnote{ because every configuration of finite action can be compactified by including spatial infinity}. The solitons of any topological charge are known explicitly making use of the {\em complex structure} of the Bogomolnyi equation, see below. All these properties, including the new findings in this Letter, find immediate generalizations in $CP(N)$ models. That the soliton of unit topological charge can be parametrized by two locations (see Eq.~(\ref{eqn_inst_rat_para})) has initiated speculation as to whether it is actually made of two constituents, named `instanton quarks' \cite{belavin:79}. In the profile of the topological charge (or action) density, however, there is no sign of the constituents; rather, they generate a single lump. On the other hand, the measure on the moduli space of classical solutions can be written in terms of the constituent locations \cite{fateev:79_berg:79_diakonov:99}. At this point I invoke some knowledge from gauge theories in four dimensions: Yang-Mills instantons at finite temperature, called calorons \cite{harrington:78}, have magnetic monopoles as constituents, provided the holonomy is nontrivial \cite{kraan:98a_lee:98b}. These calorons are obtained by two effects. One is to squeeze instantons by compactifying the time-like direction to the usual circle of circumference $\beta=1/k_B T$. 
The second ingredient is the different color orientation of the instanton copies along that compact direction (for a recent review see \cite{bruckmann:07a}). A similar picture applies to doubly periodic Yang-Mills instantons \cite{ford:02a}. The $O(3)$ model has been investigated on the two-torus, too \cite{richard:83_aguado:01}. On $R^2$, fractional charge objects only exist at the expense of singularities \cite{gross:78_zhitnitsky:89}. In this Letter I will show that large instantons in the $O(3)$ model at finite temperature {\em dissociate into two constituents}, provided one allows for a nontrivial transition function in the time-like direction, taken from the global symmetry of the model. The constituents are static and of (in general different) {\em fractional topological charge} governed by the new holonomy parameter. \section{The model and its solitons on $R^2$} Conventionally, the $O(3)$ model is defined in terms of a three-vector $\phi^a$, taking values on a two-sphere $S^2_c$ in color space, $\phi^a\phi^a=1,\, a=1,2,3$ (the sum convention is used). Its action is just the usual kinetic term, whereas the integer-valued topological charge reads \begin{equation} Q=\frac{1}{8\pi}\int d^2 x\, \epsilon_{\mu\nu}\epsilon_{abc}\phi^a\partial_\mu\phi^b\partial_\nu\phi^c \in\mathbb{Z}\,,\quad \mu=1,2\,. \label{eqn_inst_Qphi} \end{equation} It is useful to introduce a complex structure, \begin{equation} \frac{\phi^1+i\phi^2}{1-\phi^3}=u(z,z^*)\,,\quad z=x_1+i x_2\,. \label{eqn_inst_wintro} \end{equation} The zeroes and poles of $u$ have the immediate interpretation of $\phi$ being on the south and north pole of $S^2_c$, respectively. Such a behavior is necessary for the field to have a winding number, since it has to fully cover $S^2_c$. Configurations of minimal action saturate the bound $S\geq 4\pi|Q|$ and fulfil first order `selfduality' equations {\em solved by $u$ being a function of $z$ (or $z^*$) alone}. 
The topological charge in terms of $u(z)$ reads \begin{equation} Q=\int d^2x\, q(x)\,,\quad q(x)=\frac{1}{\pi}\frac{1}{(1+|u|^2)^2}\left|\frac{\partial u}{\partial z}\right|^2\,. \label{eqn_inst_Qu} \end{equation} Concerning the solutions, the meromorphic ansatz, \begin{equation} u(z)=\frac{\lambda}{z-z_0}\,, \label{eqn_inst_wmero} \end{equation} has a pole at $z_0$ and a zero at infinity, which makes it plausible that this configuration has charge 1. Indeed, the profile of the topological charge density \begin{equation} q(x)=\frac{1}{\pi}\frac{\lambda^2}{(|z-z_0|^2+\lambda^2)^2} \label{eqn_inst_qprofile} \end{equation} integrates to $Q=1$. One recognizes the size $\lambda$ and the location $z_0$ of the instanton. The model has a global $O(3)$ symmetry rotating $\phi^a$ (the topological charge $Q$ from Eq.~(\ref{eqn_inst_Qphi}) for example is a triple product and thus a pseudoscalar under this symmetry). An $SO(2)$ subgroup of rotations of $(\phi^1,\phi^2)$ acts on $u$ by multiplication with a complex phase, which leaves $q$ unchanged, see Eqs.~(\ref{eqn_inst_wintro}) and (\ref{eqn_inst_Qu}). An analytic ansatz $1/u$ with the roles of pole and zero (i.e. north and south pole on $S^2_c$) interchanged gives the same profile $q(x)$. The function $u(z)$ can also have both its zero and pole at finite $z$, e.g.\ in the rational function \begin{equation} u_{\rm rat}(z)=\frac{z-\hat{z}}{z-\check{z}}\,. \label{eqn_inst_rat_para} \end{equation} This reparametrization offers the possibility of constituents at locations $z=\{\hat{z},\check{z}\}$. However, the topological density still has one lump; it is of the form (\ref{eqn_inst_qprofile}) with size $\lambda=|\hat{z}-\check{z}|/2$ around the center of mass $z_0=(\hat{z}+\check{z})/2$. \section{The case of finite temperature} Higher charge solutions are given by a simple product ansatz $\prod_{k=1}^Q\lambda/(z-z_{0,k})$. 
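As a sanity check, the profile (\ref{eqn_inst_qprofile}) indeed carries unit charge for any size $\lambda$; a short numerical integration (assuming \texttt{numpy}/\texttt{scipy}, with an arbitrary value of $\lambda$):

```python
import numpy as np
from scipy.integrate import quad

lam = 2.5  # arbitrary instanton size

# integrate Eq. (eqn_inst_qprofile) in polar coordinates around z0
q = lambda r: lam**2 / (np.pi * (r**2 + lam**2)**2)
Q, _ = quad(lambda r: q(r) * 2*np.pi*r, 0, np.inf)
print(Q)  # ~ 1.0, independently of lam
```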
For a solution in the finite temperature setting, i.e.~identifying $z\sim z+i\beta$, one would have to consider the infinite product \begin{equation} \prod_{k=-\infty}^{\infty}\frac{\lambda}{z-z_0-ik\beta}\,, \label{eqn_prod_inf} \end{equation} which is ill-defined. Neglecting an infinite factor and using the product representation of the sine function, a regularized version of (\ref{eqn_prod_inf}) is \begin{equation} \frac{\lambda}{\sinh((z-z_0)\frac{\pi}{\beta})}\,. \label{eqn_cal_naivereg} \end{equation} This function changes sign under $z \to z+i\beta$. More systematically, let $u(z)$ be a function with simple poles at $z=z_0+i k\beta$. Then the Mittag-Leffler theorem fixes the singular part of $u(z)$ uniquely, provided the residues at these points are given. For a periodic $u(z)$ all the residues must be the same, which gives $ \lambda/(\exp((z-z_0)\frac{2\pi}{\beta})-1) $ up to an analytic part. \begin{figure*} \includegraphics[width=0.32\linewidth]{q_omega0_lambda1.eps} \includegraphics[width=0.32\linewidth]{q_omega0_lambda10.eps} \includegraphics[width=0.32\linewidth]{q_omega0_lambda100.eps}\\ \includegraphics[width=0.32\linewidth]{q_omega033_lambda1.eps} \includegraphics[width=0.32\linewidth]{q_omega033_lambda10.eps} \includegraphics[width=0.32\linewidth]{q_omega033_lambda100.eps}\\ \includegraphics[width=0.32\linewidth]{q_omega05_lambda1.eps} \includegraphics[width=0.32\linewidth]{q_omega05_lambda10.eps} \includegraphics[width=0.32\linewidth]{q_omega05_lambda100.eps} \caption{Logarithm of the topological density of solitons with different size and holonomy parameter (plugging (\ref{eqn_cal_u}) into (\ref{eqn_inst_Qu}) and cut off below $e^{-5}$). From left to right $\lambda=1,\,10,\,100$ (with locations growing like $\ln \lambda$ according to Eq.~(\protect\ref{eqn_const_locs})).
From top to bottom the periodic case $\omega=0$ with one lump (the massless constituent being infinitely spread like for the Harrington-Shepard caloron), an intermediate case, $\omega=1/3$, and the antiperiodic case $\omega=1/2$ with identical constituents.} \label{fig_densities} \end{figure*} Utilizing the $SO(2)$ symmetry, let $u(z)$ be {\em periodic up to a phase} $\exp(2\pi i\omega),\: \omega\in [0,1]$. This phase fixes the relative residues at consecutive poles and the new solution is \begin{equation} u(z;\omega)=\frac{\lambda\exp(\omega(z-z_0)\frac{2\pi}{\beta})}{\exp((z-z_0)\frac{2\pi}{\beta})-1}\,. \label{eqn_cal_u} \end{equation} It is this solution that {\em gives rise to instanton constituents} for large $\lambda$. This phenomenon is in agreement with the large size regime of Yang-Mills calorons. The parameter $\omega$ gives the complex orientation of the residues in $u(z;\omega)$ along ${\rm Im}\, z$, corresponding to the color orientations of Yang-Mills instanton copies in the ADHM formalism, and will therefore be called holonomy parameter. The only difference from the case of nonabelian gauge theories is that there the nonperiodicity of the primary object, the gauge field $A_\mu(x)$, can be compensated by a time-dependent gauge transformation, whereas in the $O(3)$ model this cannot be done, as the symmetry is global rather than gauged. \section{Constituent properties} Vanishing parameter $\omega$ (as well as the equivalent $\omega=1$) refers to the periodic case, while $\omega=1/2$ is the antiperiodic case with $u(z;1/2)$ agreeing with Eq.~(\ref{eqn_cal_naivereg}) up to a factor 2. As Fig.~\ref{fig_densities} shows, the periodic case consists of one lump of action density, while the antiperiodic soliton dissociates into two identical lumps, when the size parameter $\lambda$ is large. Moreover, these constituent lumps are almost static, i.e.\ Im $z$-independent (although $u(z)$ is not).
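Both boundary behaviours are easily confirmed numerically: the naive regularization (\ref{eqn_cal_naivereg}) is antiperiodic, while (\ref{eqn_cal_u}) returns to itself up to the phase $\exp(2\pi i\omega)$. A minimal sketch (assuming \texttt{numpy}; all parameter values are arbitrary):

```python
import numpy as np

beta, omega, lam, z0 = 1.0, 0.37, 5.0, 0.2 + 0.1j
z = 0.8 + 0.3j  # arbitrary test point

# naive regularization, Eq. (eqn_cal_naivereg): changes sign under z -> z + i*beta
s = lambda z: lam / np.sinh((z - z0) * np.pi / beta)
print(np.isclose(s(z + 1j*beta), -s(z)))  # True

# holonomy solution, Eq. (eqn_cal_u): periodic up to the phase exp(2*pi*i*omega)
def u(z):
    w = (z - z0) * 2*np.pi / beta
    return lam * np.exp(omega * w) / (np.exp(w) - 1)

print(np.isclose(u(z + 1j*beta) / u(z), np.exp(2j*np.pi*omega)))  # True
```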
The general case of holonomy parameter $\omega$ reveals lumps of `masses' (that is, the energy density integrated along Re $z$) of $\omega/\beta$ and $\bar{\omega}/\beta,\:\bar{\omega}\equiv 1-\omega\in[0,1]$, in complete analogy to the YM caloron constituents. Their sum multiplied by the Im $z$-extension $\beta$ gives $Q=1$. This is best understood by rewriting Eq.~(\ref{eqn_cal_u}) as \begin{equation} u(z;\omega)=\frac{1}{\exp(\bar{\omega}(z-z_2)\frac{2\pi}{\beta})- \exp(-\omega(z-z_1)\frac{2\pi}{\beta})} \label{eqn_cal_u_rewritten} \end{equation} with locations \begin{equation} z_1=z_0-\beta\,\frac{\ln\lambda}{2\pi\omega}\,,\quad z_2=z_0+\beta\,\frac{\ln\lambda}{2\pi\bar{\omega}}\,. \label{eqn_const_locs} \end{equation} For $\lambda>1$ the ordering is Re $z_1<$ Re $z_0<$ Re $z_2$. That these locations are really the centers of constituents can be seen for large sizes $\lambda\gg 1$, where Re $z_2\gg$ Re $z_1$, such that around $z_{1,2}$ one of the terms in Eq.~(\ref{eqn_cal_u_rewritten}) is exponentially suppressed leading to \begin{equation} \lambda\gg 1:\:\: u(z;\omega)\simeq\left\{\begin{array}{ll} \exp(-\bar{\omega}(z-z_2)\frac{2\pi}{\beta}) & \mbox{for } z\simeq z_2\,,\\ -\exp(\omega(z-z_1)\frac{2\pi}{\beta}) & \mbox{for } z\simeq z_1\,. \end{array}\right. \end{equation} The corresponding constituent solution \begin{equation} u_{\rm const}(z;\omega)=\exp(\omega z \, \frac{2\pi}{\beta})\,, \label{eqn_const_1} \end{equation} gives rise to an exponentially localized profile \begin{equation} q(x)=\frac{\pi\omega^2}{\beta^2 \cosh^2(\omega \mbox{ Re\,} z\,\frac{2\pi}{\beta})}\,, \end{equation} which is static and yields $Q=\omega$. The other constituent $ \exp(-\bar{\omega} z \, \frac{2\pi}{\beta}) $ has the same profile with $\omega$ replaced by $\bar{\omega}$. Actually, $u_{\rm const}(z;\omega+m)$ with any integer $m$ is consistent with the boundary condition and gives $Q=|\omega+m|$.
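The statements above can be verified numerically: Eqs.~(\ref{eqn_cal_u}) and (\ref{eqn_cal_u_rewritten}) agree with the locations (\ref{eqn_const_locs}), and the static constituent profiles integrate to the fractional charges $\omega$ and $\bar{\omega}$. A sketch (assuming \texttt{numpy}/\texttt{scipy}; parameter values arbitrary):

```python
import numpy as np
from scipy.integrate import quad

beta, omega = 1.5, 0.3
omega_bar = 1 - omega

# charge of a single static constituent: integrate its profile over Re z;
# the trivial Im z integration over one period contributes the factor beta
def Q_const(om):
    q = lambda x: np.pi * om**2 / (beta**2 * np.cosh(om * x * 2*np.pi/beta)**2)
    return beta * quad(q, -np.inf, np.inf)[0]

print(Q_const(omega), Q_const(omega_bar))  # ~ 0.3 and ~ 0.7, summing to Q = 1

# equivalence of (eqn_cal_u) and (eqn_cal_u_rewritten) with locations (eqn_const_locs)
lam, z0 = 7.0, 0.0
z1 = z0 - beta*np.log(lam)/(2*np.pi*omega)
z2 = z0 + beta*np.log(lam)/(2*np.pi*omega_bar)
u1 = lambda z: lam*np.exp(omega*(z - z0)*2*np.pi/beta) / (np.exp((z - z0)*2*np.pi/beta) - 1)
u2 = lambda z: 1/(np.exp(omega_bar*(z - z2)*2*np.pi/beta) - np.exp(-omega*(z - z1)*2*np.pi/beta))
z = 0.4 + 0.2j
print(np.isclose(u1(z), u2(z)))  # True
```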
In the far field limit $|$Re $z|\to\infty$ again one of the terms in Eq.~(\ref{eqn_cal_u_rewritten}) is exponentially small and by neglecting it, only the nearest constituent is visible (for Re $z\to\mp\infty$ the one at $z_{1,2}$), in contrast to the YM case, where all constituents are present via algebraic tails. The fractional charge finds its counterpart in the fractional covering of the complex plane by $u(z;\omega)$, see Fig.~\ref{fig_fract_cover}. \begin{figure}[!h] \psfrag{i}{Re $z\to\infty$} \psfrag{m}{Re $z\to-\infty$} \psfrag{b}{Im $z=\beta\Rightarrow$} \psfrag{c}{arg $u(z)=2\pi\omega$} \psfrag{z}{Im $z=0$} \psfrag{r}{Re $z=$ const.} \includegraphics[width=0.7\linewidth]{fract_cover.eps} \caption{The image of the function $u_{\rm const}(z;\omega)$, Eq.~(\ref{eqn_const_1}), of a single constituent covers a fraction $\omega$ of the complex plane.} \label{fig_fract_cover} \end{figure} \section{Topology} For the topological description of the possible solitonic solutions I start with the usual argument that a finite topological charge demands an Im $z$-independent function $u(z)$ asymptotically, i.e.\ for large $|$Re $z|$. This is consistent with the nontrivial boundary condition, the phase change $\exp(2\pi i\omega)$ in Im $z$, only if the asymptotic values of $u(z)$ are zero or infinity, i.e.\ the poles $\phi_3=\pm1$ on the color sphere\footnote{ These poles are distinguished by the choice of the $SO(2)$ subgroup in the boundary condition.}.
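The fractional covering of Fig.~\ref{fig_fract_cover} can be sampled directly: over one period strip, $\arg u_{\rm const}$ only sweeps an interval of length $2\pi\omega$, and along the equator line $|u_{\rm const}|=1$ (the vertical line Re $z=0$) the phase advances by exactly $2\pi\omega$ over one period, anticipating the fractional charge discussed below. A numerical sketch (assuming \texttt{numpy}; parameters arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
beta, omega = 1.0, 0.3
u = lambda z: np.exp(omega * z * 2*np.pi / beta)  # Eq. (eqn_const_1)

# image of the fundamental strip Im z in [0, beta): a sector of opening 2*pi*omega
z = rng.uniform(-5, 5, 100_000) + 1j * rng.uniform(0, beta, 100_000)
args = np.angle(u(z)) % (2*np.pi)
print(args.max() <= 2*np.pi*omega)  # True

# phase change of u along the equator line Re z = 0 over one period
y = np.linspace(0, beta, 2001)
phi = np.unwrap(np.angle(u(1j*y)))
print((phi[-1] - phi[0]) / (2*np.pi))  # ~ 0.3 = omega
```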
The topological charge density of Eq.~(\ref{eqn_inst_Qphi}), the (pullback of the) volume form, can locally be written as a curl (exterior derivative of a one-form), \begin{eqnarray} q(x)&=&\frac{1}{4\pi} \epsilon_{\mu\nu}\sin\theta\,\partial_\mu\theta\,\partial_\nu\varphi \label{eqn_Q_spher}\\ &=&\frac{1}{4\pi} \epsilon_{\mu\nu}\partial_\mu[(\pm1-\cos\theta)\partial_\nu\varphi]\,, \label{eqn_Q_curl}\\ &&\phi_3=\cos\theta\,,\:\:\phi_1+i\phi_2=\sin\theta\, e^{i\varphi}\,,\nonumber \end{eqnarray} where the expression in the square bracket is a regular representation in spherical coordinates in $\phi$ around the north and south pole, respectively. Consequently, I now divide the coordinate space into regions, where $\phi(x)$ is on the northern and southern hemisphere, called $N$ and $S$, respectively, and use the corresponding regular expression there. Then $Q$ reduces to boundary integrals, both on the preimage of the equator separating the hemispheres (there $\cos\theta=0$ or equivalently $|u(z)|=1$) and on the boundary of space itself, \begin{eqnarray} Q&=&\frac{1}{4\pi}\int_{\phi_3(x)=0} d\vec{\sigma}\left((+1)-(-1)\right)\vec{\partial}\varphi+ \label{eqn_Q_eq}\\ &&\frac{1}{4\pi}\int dx_1 \left.(\pm1-\cos\theta)\frac{\partial\varphi}{\partial x_1} \right|^{x_2=\beta}_{x_2=0}\,.\nonumber \end{eqnarray} The second term vanishes since $\varphi(x_1,x_2+\beta)=\varphi(x_1,x_2)+2\pi\omega$ and $\theta(x_1,x_2+\beta)=\theta(x_1,x_2)$ (and because the assignment to the hemispheres is the same on both boundaries). In the first term the curves are the borders of two regions with opposite sign in Eq.~(\ref{eqn_Q_curl}), endowing the curve with opposite orientations. Thus, {\em the topological charge $Q$ is given as the sum of oriented changes in the angular variable $\varphi=\arctan(\phi_2/\phi_1)$ along the `equator lines' $\phi_3=0$ divided by $2\pi$}. For these equator lines three types of configurations are possible.
The first one is a curve stretching from one boundary of space to (the same $x_1$ at) the other boundary. It separates $S$ on its lhs.\ from $N$ on its rhs.\ or vice versa. Such a line picks up a $\varphi$-change of $\pm 2\pi\omega$ up to multiples of $2\pi$. Hence, these equator lines contribute fractional topological charges $\omega+m$ or $-\omega+n$. For the definiteness of $Q$ the integers are restricted, $m\geq 0,n\geq 1$. \begin{figure}[b] \psfrag{o}{$\omega$} \psfrag{p}{$1-\omega$} \psfrag{z1}{$z_1$} \psfrag{z2}{$z_2$} \includegraphics[width=0.8\linewidth]{eq_line_inst_large.eps} \vspace{0.5cm} \psfrag{1}{$\,1$} \includegraphics[width=0.8\linewidth]{eq_line_inst_small.eps} \caption{The distribution of the southern and northern hemisphere for a large (top) and small (bottom) instanton and the contributions to the topological charge $Q$ from the oriented equator lines. To get the picture for the individual constituents, one simply cuts the upper plot vertically in the middle.} \label{fig_topology} \end{figure} In fact, individual or well-separated constituents are examples of this phenomenon, because the equator lines run through their centers. Fig.~\ref{fig_topology} top depicts the situation for a large instanton, where $m$ and $n$ take values 0 and 1, respectively (cf. the discussion below Eq.~(\ref{eqn_const_1})). The second type of equator line returns to the same boundary. Because of the periodicity of $\phi_3$, there will be another equator line starting and ending at the same $x_1$ at the other boundary. It is easy to see that the sum of $\varphi$-changes on these two lines is an integer. A small instanton provides an example of this type of equator line with contribution 1, see Fig.~\ref{fig_topology} bottom. Finally, a closed curve encircling $N$ or $S$ is the third possibility for an equator line. It does not feel the boundary condition and therefore is well-known, e.g.\ from the Wu-Yang construction of the Dirac monopole. 
The contribution of such an equator line is again an integer, the winding number of $\varphi$ (around a zero or pole of $u(z)$). The essence of the topological considerations is that whenever the preimage of one hemisphere is an `island' in the preimage of the other hemisphere, there is an integer contribution to the topological charge (the last two types). {\em The fractional part of $Q$ emerges from equator lines between the boundaries separating different hemispheres} (for like hemispheres the contributions in Eq.~(\ref{eqn_Q_eq}) have the same sign and cancel). This has the interesting consequence that the fractional charge cannot accumulate -- say to $\pm2\omega$ -- without being accompanied by a contribution $\mp\omega$ in between (including integers $m$ and $n$), and that the fraction of the topological charge $Q$ is determined purely by the asymptotics $|x_1|\to\infty$. Different poles there give $\omega$ or $-\omega$ (examples are the individual constituents discussed in the previous section), whereas same asymptotic poles give no fractional charge (like for the instantons, see Fig.~\ref{fig_topology}). Of course, these calculations cannot determine the local distribution of the topological charge density $q(x)$; in the examples, however, the latter turns out to be concentrated around the equator lines. \section{Conclusions} At finite temperature, instantons in the $O(3)$ model reveal fractional charge constituents. These novel solutions have been given by simple analytic expressions using complex functions. In general, the topological charge consists of a fractional part $\pm\omega$, where $\omega$ is the holonomy parameter in the boundary conditions governing the masses of the static constituents, plus possible integers from undissociated instantons being time-dependent. The size parameter of large instantons transmutes into the distance of its constituents, which themselves have a size proportional to $\beta$.
This also resolves the puzzle of why the instanton quark locations -- even if they are locations of constituents in the zero temperature limit (which can be achieved) -- do not show up as individual topological lumps: for $\beta\to\infty$ the constituents become large and inevitably overlap. Put differently, in the zero temperature case there is no other scale besides the distance of the instanton quarks that could localize the latter. The finite temperature case might be viewed as the limit of elongated space-time tori (with nontrivial boundary conditions), where the smaller extension governs the size of the constituents, which can separate in the direction of the larger extension. Many of these features are similar to those of Yang-Mills calorons. It would be interesting to investigate how far this analogy goes, in particular since $CP(N)$ models can be parametrized by virtue of a gauge field. From the corresponding Yang-Mills phenomenon, {\em fermions} are expected to localize to the constituents in the background and to hop with their boundary conditions. The constituent picture should also be of relevance for understanding the physical mechanisms of the $O(3)$ model at finite temperature.\\ \noindent The author thanks P.~van Baal and A.~Wipf for discussions and acknowledges support by DFG (BR 2872/4-1).
\section{Introduction} We work over the field $\mathbb{C}$ of complex numbers. Let $X$ be a compact K\"ahler space with only rational singularities. Given an automorphism $f\in\textup{Aut}(X)$, the \textit{first dynamical degree} $d_1(f)$ is defined as the spectral radius $\rho(f^*)$ of the induced linear operator $f^*$ on the Bott-Chern cohomology space $H_{\textup{BC}}^{1,1}(X)$ (cf.~\cite[Definition 4.6.2]{BEG13}). We say that $f$ is of \textit{positive entropy} if $d_1(f)>1$. Note that the definition for $d_1(f)$ here coincides with the usual one when $X$ is smooth (cf.~the arXiv version \cite[Remark 1.2]{Zho22}). A subgroup $G\subseteq\textup{Aut}(X)$ is of \textit{positive entropy}, if every non-trivial element of $G$ is of positive entropy. Due to the existence of the $G$-equivariant resolution (cf. e.g. \cite[Theorem 2.0.1]{Wlo09}), it was proved by Dinh and Sibony that the rank of a free abelian group $G$, which is of positive entropy, is no more than $\dim X-1$; see \cite[Theorem I]{DS04}. We refer readers to \cite[Theorem 1.1]{Zha09} for the extension to the solvable group case. As mentioned in \cite{DS04}, it is interesting to consider the extremal case when $G$ achieves the maximal rank $\dim X-1$ (cf.~\cite[Problem 1.1]{Zho22}). For projective varieties, it has been intensively studied by Zhang in his series of papers (cf.~\cite{Zha09}, \cite{Zha16}). In the analytic case, we showed in \cite{Zho22} that a $\mathbb{Q}$-factorial compact K\"ahler terminal threefold $X$ admitting an action of a free abelian group $G$, which is of positive entropy and of maximal rank, is either rationally connected or bimeromorphic to a \textit{$Q$-complex torus}, i.e., there is a finite surjective morphism $T\to X$ from a complex torus $T$, which is \'etale in codimension one. In the proof of this main result \cite[Theorem 1.3]{Zho22}, however, we applied the abundance theorem for K\"ahler threefolds; see \cite[Theorem 1.1]{CHP16}.
As recently claimed in \cite{CHP21}, the proof of \cite[Theorem 1.1]{CHP16} for the case when the Kodaira dimension $\kappa(X)=0$ and the algebraic dimension $a(X)=0$ seems incomplete. In this note, we shall provide an independent proof of \cite[Theorem 1.3]{Zho22} by constructing the $G$-equivariant minimal model program ($G$-MMP for short) for K\"ahler threefolds so as to fix the resulting issue in our previous paper. Moreover, thanks to the recent inspiring progress of the minimal model theory on K\"ahler threefolds (\cite{DO22}, \cite{DH22}; cf.~\cite{HP15}, \cite{HP16}, \cite{CHP16} and \cite{CHP21}), we can strengthen \cite[Theorem 1.3]{Zho22} by weakening the condition of $X$ having terminal singularities to that of $X$ having only klt singularities (cf.~Theorem \ref{thm-main}). We refer readers to \cite[Introduction]{Zho22} and the references therein for the background of this project. In this paper, we will focus on a clean proof of the existence of the $G$-MMP on K\"ahler threefolds (cf.~Remark \ref{rmk-difference}). Before we state the main result, let us consider the following assumption. \setlength\parskip{8pt}\par \vskip 0.2pc \noindent \textbf{\hypertarget{Hypothesis}{Hypothesis}:} Let $(X,D)$ be a normal $\mathbb{Q}$-factorial compact K\"ahler threefold pair with only klt singularities (where the boundary $D$ is some effective $\mathbb{Q}$-divisor). Suppose that there is an action of a free abelian group $G\cong\mathbb{Z}^2\subseteq \textup{Aut}(X)$, which is of positive entropy and of maximal rank. Suppose further that every irreducible component of the support of $D$ is $G$-periodic. The following is the main result of this paper; see \cite[Theorem 1.3]{Zho22} for the case of terminal threefolds. \setlength\parskip{0pt} \par \vskip 0pc \noindent \begin{theorem}\label{thm-main} Let $(X,D,G)$ be a pair satisfying \hyperlink{Hypothesis}{Hypothesis}.
If $X$ is not rationally connected, then with $G$ replaced by a finite-index subgroup, the following assertions hold. \begin{enumerate} \item There is a $G$-equivariant bimeromorphic map $X\dashrightarrow Z$ (i.e., $G$ descends to a biregular action on $Z$) such that $Z$ is a $Q$-complex torus. \item $Z\cong T/H$ where $H$ is a finite group acting freely outside a finite set of a complex $3$-torus $T$. Moreover, the quotient morphism $T\to Z$ is also $G$-equivariant. \item There is no positive dimensional $G$-periodic proper subvariety of $Z$; in particular, the bimeromorphic map $X\dashrightarrow Z$ in (1) is holomorphic. \end{enumerate} \end{theorem} \begin{remark}[Differences with the previous paper]\label{rmk-difference} Comparing the present paper with \cite{Zho22}, we give a few remarks on the differences of the approaches in the proofs. \begin{enumerate} \item As mentioned in \cite[Remark 1.6]{Zho22}, we were not able to run the $G$-MMP in \cite{Zho22}. One of the difficulties we met in showing the $G$-equivariancy was that the finiteness of $(K_X+\xi)$-negative extremal rays, where $\xi$ is a nef and big class, was unknown. Our method therein was to reduce $X$ to its minimal model $X_{\textup{min}}$ with the induced action $G|_{X_{\textup{min}}}$ consisting of bimeromorphic transformations. Applying the trick of Albanese closure and the speciality of threefolds, we showed that the descended $G|_{X_{\textup{min}}}$ indeed lies in $\textup{Aut}(X_{\textup{min}})$, i.e., the elements in $G|_{X_{\textup{min}}}$ are biholomorphic. \item As kindly pointed out by Doctor Sheng Meng, one can fix the finiteness issue after applying the recent progress on the log minimal model theory for K\"ahler threefolds (cf.~\cite{DO22} and \cite{DH22}); see Proposition \ref{pro-nef-big-finite}.
Further, the full log version of the base point free theorem \cite[Theorem 1.7]{DH22} for K\"ahler threefolds provides us with a powerful tool to achieve the end product of the $G$-MMP (cf.~Proposition \ref{prop-end-pdt}). Then we can follow the strategy in \cite{Zha16} with several necessary modifications (cf.~e.g. Proposition \ref{pro-nef-big-finite}, Claim \ref{claim-equivariancy}) to confirm the existence of the $G$-MMP for K\"ahler threefolds and finally prove Theorem \ref{thm-main}. \item In contrast to the previous proof of \cite[Theorem 1.3]{Zho22}, our present proof in this paper does not rely on the incomplete case of the abundance theorem any more. It only depends on the abundance for numerically trivial log-canonical classes, which has been confirmed in \cite[Corollary 1.18]{CGP20} and \cite[Theorem 1.1 (3)]{DO22}. \item The $G$-equivariancy of the log minimal model program is of independent interest, especially in the classification problem. The existence of such a $G$-MMP indicates the fundamental building blocks of automorphism groups of K\"ahler spaces. \end{enumerate} \end{remark} \subsubsection*{\textbf{\textup{Acknowledgments}}} The author would like to deeply thank Professor De-Qi Zhang and Doctor Sheng Meng for many inspiring discussions. He would also like to thank Professor Andreas H\"oring for answering his question on the log minimal model program. He is supported by the Institute for Basic Science (IBS-R032-D1). \section{Preliminaries} In this section, we unify some notations and prepare some results for the construction of the $G$-MMP. We follow the notations and terminologies in \cite[Section 2]{KM98}, \cite{HP16}, \cite{DH22} and \cite{Zho22} (cf.~\cite[Section 2]{Zho21}). For the convenience of the reader, we recall the following definition which will be crucially used in this paper.
\begin{definition}[{\cite[Section 3]{HP16}, \cite[Section 2]{DH22}}] Let $(X,\omega)$ be a normal compact K\"ahler space, where $\omega$ is a fixed K\"ahler form, and $H_{\textup{BC}}^{1,1}(X)$ the Bott-Chern cohomology of real closed $(1,1)$-forms with local potentials, or closed $(1,1)$-currents with local potentials. Let $\textup{N}_1(X)$ be the vector space of real closed currents of bi-dimension $(1,1)$ modulo the equivalence: $T_1\equiv T_2$ if and only if $T_1(\eta)=T_2(\eta)$ for all real closed $(1,1)$-forms $\eta$ with local potentials. Let $\overline{\textup{NA}}(X)\subseteq \textup{N}_1(X)$ be the closure of the cone generated by the classes of positive closed currents of bi-dimension $(1,1)$. \begin{enumerate} \item A positive closed $(1,1)$-current $T$ with local potentials is called a \textit{K\"ahler current}, if $T\ge \varepsilon\omega$ for some $\varepsilon>0$. A $(1,1)$-class $\alpha\in H_{\textup{BC}}^{1,1}(X)$ is called \textit{big} if it contains a K\"ahler current. \item A $(1,1)$-class $\alpha\in H_{\textup{BC}}^{1,1}(X)$ is called \textit{nef} if it can be represented by a form $\eta$ with local potentials such that for every $\varepsilon>0$, there exists a $\mathcal{C}^{\infty}$ function $f_\varepsilon$ such that $\eta+dd^cf_\varepsilon\ge-\varepsilon\omega$. Denote by $\textup{Nef}(X)$ the cone generated by nef $(1,1)$-classes. \item A $(1,1)$-class $\alpha\in H_{\textup{BC}}^{1,1}(X)$ is called \textit{pseudo-effective}, if it can be represented by a positive closed $(1,1)$-current $T$ which is locally of the form $dd^cf$ for some plurisubharmonic function $f$. \item A big class $\alpha$ is called \textit{modified K\"ahler} if it contains a K\"ahler current $T$ such that the Lelong number $\nu(T,D)=0$ for every prime divisor $D$ on $X$ (cf.~\cite[Definition 2.2]{Bou04}). \end{enumerate} \end{definition} \begin{notation}\label{not-xi} Let $(X,D,G)$ be a pair satisfying \hyperlink{Hypothesis}{Hypothesis}.
Let $\pi:\widetilde{X}\to X$ be a $G$-equivariant resolution (cf.~e.g.~\cite[Theorem 2.0.1]{Wlo09}). Applying the proof of \cite[Theorems 4.3 and 4.7]{DS04} to the (lifted) action of $G$ on $\pi^*\textup{Nef}(X)$, there are three nef classes $\pi^*\xi_j$ with $j=1,2,3$ on $\widetilde{X}$ as common eigenvectors of $G$ such that $\xi_1\cdot\xi_2\cdot\xi_3\neq 0$. To be more precise, there are three characters $\chi_j:G\to\mathbb{R}_{>0}$ such that $g^*(\pi^*\xi_j)=\chi_j(g)(\pi^*\xi_j)$ for each $g\in G$. Since $X$ is a threefold, each $\xi_j$ is also nef (cf.~\cite[Lemma 3.13]{HP16}); hence there exist three nef common eigenvectors of $G$ on $\textup{Nef}(X)$. Let $\xi:=\xi_1+\xi_2+\xi_3$. Then $\xi^3>0$ and $\xi$ is thus a nef and big class on $X$ (cf.~\cite[Theorem 0.5]{DP04} and \cite[Proposition 2.6]{Zho21}). \end{notation} The following proposition plays a significant role in the proof of Theorem \ref{thm-main}. \begin{proposition}\label{pro-nef-big-finite} Let $X$ be a normal $\mathbb{Q}$-factorial compact K\"ahler threefold and $\Delta_0$ an effective $\mathbb{Q}$-divisor such that $(X,\Delta_0)$ has only klt singularities. Let $\alpha$ be a nef and big class on $X$. Suppose that one of the following conditions holds. \begin{enumerate} \item $K_X+\Delta_0$ is pseudo-effective; or \item $K_X+\Delta_0$ is not pseudo-effective but $K_X+\Delta_0+\alpha$ is a big class. \end{enumerate} Then there are only finitely many $(K_X+\Delta_0+\alpha)$-negative extremal rays, all of which are generated by rational curves. \end{proposition} Before proving the above proposition, we prepare the following lemma, which shares a similar proof as in the projective setting. \begin{lemma}[{cf.~\cite[Corollary 1.19]{KM98}}]\label{lem-compact} Let $X$ be a compact K\"ahler space with a fixed K\"ahler class $\omega$. Then the subset $S_M:=\{c\in\overline{\textup{NA}}(X)~|~\omega\cdot c\le M\}$ is compact for every positive number $M$. 
\end{lemma} \begin{proof} Let us fix a norm $||\cdot||$ on the finite dimensional vector space $\textup{N}_1(X)$, and assume that $S_M$ is not compact. Since $S_M$ is closed, there is then a sequence $\{c_i\}\subseteq S_M$ such that $||c_i||\to\infty$ as $i\to\infty$. However, the normalized set $\{\frac{c_i}{||c_i||}\}$ is bounded; thus one may pick a subsequence $\{\frac{c_{i_k}}{||c_{i_k}||}\}$ which converges to a non-zero point $c_0\in\overline{\textup{NA}}(X)$ as $k\to\infty$. So $$(\omega\cdot c_0)=\lim_{k\to\infty}\frac{(\omega\cdot c_{i_k})}{||c_{i_k}||}=0,$$ a contradiction to $\omega$ being a K\"ahler class (cf.~\cite[Proposition 3.15]{HP16}). \end{proof} \begin{proof}[Proof of Proposition \ref{pro-nef-big-finite}] Since $\alpha$ is nef and big, by \cite[Lemma 2.36]{DH22}, there exist a modified K\"ahler class $\eta$ and an effective $\mathbb{Q}$-divisor $F$ such that $\alpha=\eta+F$ and $(X,\Delta:=\Delta_0+F)$ is klt. We will show the finiteness of $(K_X+\Delta+\eta)$-negative extremal rays. First, we assume (1), i.e., $K_X+\Delta_0$ is pseudo-effective. Then every $(K_X+\Delta_0+\alpha)$-negative extremal ray $R$ is $(K_X+\Delta_0)$-negative, and hence generated by a rational curve $R=\mathbb{R}_{\ge 0}[\ell]$ (see \cite[Theorem 10.12]{DO22} and \cite[Theorem 4.2]{CHP16}). Since $(X,\Delta)$ is also klt with $K_X+\Delta$ being pseudo-effective, by \cite[Theorem 10.12]{DO22}, there is a positive number $d$ such that every $(K_X+\Delta+\eta)$-negative extremal ray $R_i$ with $R_i=\mathbb{R}_{\ge 0}[\ell_i]$ satisfies $-(K_X+\Delta)\cdot \ell_i\le d$ (note that here, we include the case $(K_X+\Delta)\cdot \ell_i\ge 0$). Therefore, $\eta\cdot \ell_i<-(K_X+\Delta)\cdot\ell_i\le d$. We shall prove that there are only finitely many such curve classes with $\eta\cdot [\ell_i]<d$.
Indeed, since $\eta$ is modified K\"ahler, applying \cite[Proposition 2.3]{Bou04} and \cite[Page 990, Footnote (5)]{CHP16} to $\eta$ (on the singular space $X$), we have a suitable resolution $\pi:\widetilde{X}\to X$ such that $\pi^*\eta=\widetilde{\eta}+E$ where $\widetilde{\eta}$ is a K\"ahler class and $E$ is an effective $\pi$-exceptional $\mathbb{R}$-divisor on $\widetilde{X}$. Suppose to the contrary that there are infinitely many curve classes $[\ell_i]$ with $\eta\cdot [\ell_i]<d$. Since $\pi(E)$ is the union of finitely many curves and points, we may assume that all such curves $\ell_i$ are not contained in $\pi(E)$. Let $\widetilde{\ell}_i$ be the proper transform of $\ell_i$ along $\pi$. Since different $\ell_i$ and $\ell_j$ lie in different numerical classes, their proper transforms $\widetilde{\ell}_i$ and $\widetilde{\ell}_j$ also lie in different numerical classes. Moreover, $\widetilde{\eta}\cdot\widetilde{\ell}_i=\pi^*\eta\cdot\widetilde{\ell}_i-E\cdot\widetilde{\ell}_i\le\eta\cdot\ell_i\le d$. So there are infinitely many curve classes $[\widetilde{\ell}_i]$ on $\widetilde{X}$ such that $\widetilde{\eta}\cdot\widetilde{\ell}_i\le d$. However, by Lemma \ref{lem-compact}, $S_d$ is compact; hence such curve classes (as a closed discrete set) in $S_d$ are finitely many, which produces a contradiction. Therefore, if $K_X+\Delta_0$ is pseudo-effective, there are only finitely many $(K_X+\Delta+\eta)=(K_X+\Delta_0+\alpha)$-negative extremal rays. Second, we assume (2), i.e., $K_X+\Delta_0$ is not pseudo-effective. Then $X$ is uniruled (cf.~\cite{Bru06}). Also, we may assume that the base of the MRC fibration of $X$ is a non-uniruled surface; otherwise, $X$ is projective (cf.~\cite[Introduction]{HP15} or \cite[Lemma 2.39]{DH22}) and our proposition then follows from Kodaira's lemma and the cone theorem. Since $K_X+\Delta+\eta$ is big by assumption, for a general fibre $F$ of the MRC fibration, $(K_X+\Delta+\eta)\cdot F>0$.
With the same argument as in the above paragraph after replacing \cite[Theorem 10.12]{DO22} by \cite[Corollary 4.10]{DH22}, we finish the proof. \end{proof} Given a nef and big class $\alpha$ on a normal compact K\"ahler space, we define the \textit{null locus} $\textup{Null}(\alpha)$ as the union of the proper subvarieties $V$ such that $\alpha|_V$ is not big (cf.~\cite{CT15}). \begin{lemma}[{cf.~\cite[Lemma 3.9]{Zha16}}]\label{lem-null-periodic} With the same assumption and notations as in \hyperlink{Hypothesis}{Hypothesis} and Notation \ref{not-xi}, the null locus of the nef and big class $\xi$ $$\textup{Null}(\xi):=\bigcup_{\xi|_V~\textup{not big}}V=\bigcup_{V~\textup{is}~ G\textup{-periodic}}V.$$ In particular, there are only finitely many $G$-periodic proper analytic subvarieties. \end{lemma} \begin{proof} For every closed subvariety $V$ on $X$, the restriction $\xi|_V$ is still nef. Hence, $\xi|_V$ being not big is equivalent to $0=(\xi|_V)^{\dim V}=\xi^{\dim V}\cdot V$ (cf.~\cite[Theorem 0.5]{DP04} and \cite[Proposition 2.6]{Zho21}). Applying a theorem of Collins and Tosatti \cite{CT15} to the pull-back of the class $\xi$ to a resolution of $X$, $\textup{Null}(\xi)$ has only finitely many irreducible components. First, let $V$ (with $d:=\dim V$) be a proper $G$-periodic subvariety. With $G$ replaced by a finite-index subgroup, we may assume $g^*V=V$ for any $g\in G$. On the one hand, by the projection formula, $\chi_1\cdot\chi_2\cdot\chi_3=1$. On the other hand, for any $j_1,j_2$, the image of $\varphi:G\to\mathbb{R}^2$ sending $g$ to $(\log\chi_{j_1}(g),\log\chi_{j_2}(g))$ is a spanning lattice (cf.~\cite[Section 4]{DS04}). As a result, for any $i_1,\cdots,i_d$ (with $d\le 2$), we can choose $g\in G$ such that $\chi_{i_k}(g)>1$ for every $i_k$. Since $g^*V=V$, the projection formula gives $\xi_{i_1}\cdots\xi_{i_d}\cdot V=\chi_{i_1}(g)\cdots\chi_{i_d}(g)\,\xi_{i_1}\cdots\xi_{i_d}\cdot V$; as $\chi_{i_1}(g)\cdots\chi_{i_d}(g)>1$ and the intersection number is non-negative by the nefness of the $\xi_{i_k}$, it follows that $\xi_{i_1}\cdots\xi_{i_d}\cdot V=0$, which implies $\xi^d\cdot V=0$. One inclusion is verified. Second, let $V$ (with $d:=\dim V$) be an irreducible closed subvariety such that $\xi^{d}\cdot V=0$.
By the nefness of $\xi_i$, we have $\xi_{i_1}\cdots\xi_{i_d}\cdot V=0$ for all $i_k$. For the same reason as above, $\xi_{i_1}\cdots\xi_{i_d}\cdot g(V)=0$ and then $\xi^{d}\cdot g(V)=0$ for any $g\in G$. So $g(V)\subseteq\textup{Null}(\xi)$ and there is a natural inclusion $V\subseteq\overline{\cup_{g\in G}g(V)}\subseteq \textup{Null}(\xi)$. Since the closure $\overline{\cup_{g\in G}g(V)}$ is $G$-stable and contained in $\textup{Null}(\xi)$, which is a union of finitely many proper subvarieties, $V$ is contained in a $G$-periodic proper subvariety; hence $\textup{Null}(\xi)$ is contained in the right-hand side. \end{proof} We end this section by recalling the following lemma from previous papers. \begin{lemma}[{\cite[Lemma 2.7]{Zho22}; cf.~\cite[Lemma 3.7]{Zha16}}]\label{lem-periodic} With the same assumption and notations as in \hyperlink{Hypothesis}{Hypothesis} and Notation \ref{not-xi}, for every $G$-periodic $(k,k)$-class $\eta$ with $k=1$ or $2$, $\xi^{3-k}\cdot \eta=0$; in particular, $\xi^2\cdot c_1(X)=\xi\cdot c_1(X)^2=0$ and $\xi\cdot \widetilde{c_2}(X)=0$ (where $\widetilde{c_2}(X)$ is the orbifold second Chern class as defined in \cite[Section 5]{GK20}). \end{lemma} \section{Equivariancy of the MMP, Proof of Theorem \ref{thm-main}} In this section, we establish the $G$-equivariant minimal model program for K\"ahler threefolds and prove Theorem \ref{thm-main}. Throughout this section, we stick to \hyperlink{Hypothesis}{Hypothesis} and Notation \ref{not-xi}. Roughly speaking, when $X$ is non-uniruled, the $G$-MMP contracts the null locus of $\xi$ so as to obtain a K\"ahler class with good properties (cf.~\cite{Zha16}). Let us begin with the proposition below, which describes the end product of such a $G$-MMP. In contrast to the projective setting, the nef and big $(1,1)$-class $\xi$ here is possibly not an $\mathbb{R}$-divisor.
As a consequence, the proof of the $G$-equivariancy of the fibration induced by the base point free theorem \cite[Theorem 1.7]{DH22} does not follow immediately from Kodaira's lemma, and further arguments are needed (cf.~Claim \ref{claim-equivariancy}). \begin{proposition}\label{prop-end-pdt} Theorem \ref{thm-main} holds true if $K_X+D+\xi$ is already nef. \end{proposition} \begin{proof}[Proof of Proposition \ref{prop-end-pdt}] In view of \cite[Theorem 1.3 (1) and its proof]{Zho22}, we only need to consider the case when $X$ is not uniruled (cf.~\cite[Lemma 2.10]{Zha09}). With $\xi$ replaced by $2\xi$ if necessary, we may assume that $K_X+D+\xi$ is also big. Since $K_X+D+\xi$ is nef, by the base point free theorem \cite[Theorem 1.7]{DH22}, there exist a proper surjective morphism with connected fibres $\psi:X\to Z$ onto a normal K\"ahler variety $Z$ and a K\"ahler class $\beta$ on $Z$ such that $K_X+D+\xi=\psi^*\beta$. By the bigness of $K_X+D+\xi$, $\psi$ is bimeromorphic. \begin{claim}\label{claim-equivariancy} $\psi$ is $G$-equivariant. \end{claim} \noindent \textbf{Proof of Claim \ref{claim-equivariancy}:} Let $R$ be a $(K_X+D+\xi)$-trivial extremal ray. If $R$ is $\xi$-positive, then taking a positive number $\varepsilon<1$, one has $$(K_X+D+(1-\varepsilon)\xi)\cdot R<(K_X+D+\xi)\cdot R=0.$$ By Proposition \ref{pro-nef-big-finite}, there are only finitely many such $(K_X+D+\xi)$-trivial but $\xi$-positive extremal rays. Therefore, after replacing $\xi$ by a multiple, we may assume that every $(K_X+D+\xi)$-trivial extremal ray is $\xi$-trivial. By \cite[Theorem 1.3]{CHP16}, every $(K_X+D+\xi)$-trivial curve is thus $\xi$-trivial. We shall follow \cite[Proof of Theorem 6.4]{DH22} to establish the $G$-equivariancy of $\psi$. By \cite[Lemma 2.36]{DH22}, there is a decomposition $\xi=\eta+F$ such that $\eta$ is modified K\"ahler and $(X,D+F)$ is still klt.
Let us first prove that the support of $F$ is $G$-periodic so that the pair $(X,D+F,G)$ still satisfies \hyperlink{Hypothesis}{Hypothesis}. Let $\pi:\widetilde{X}\to X$ be a $G$-equivariant resolution with $\pi^*\xi$ nef and big. By \cite[Theorem 3.17 (ii)]{Bou04}, there is a K\"ahler current $T\in \pi^*\xi$ with analytic singularities precisely along the non-K\"ahler locus, i.e., $\textup{Sing}\,T=E_{\textup{nK}}(\pi^*\xi)$ (cf.~\cite{Dem92} and \cite[Theorem 2.2]{CT15}). Note also that $E_{\textup{nK}}(\pi^*\xi)$ coincides with $\textup{Null}(\pi^*\xi)$ (cf.~\cite[Theorem 1.1]{CT15}). After blowing up the associated coherent ideal sheaf (with its support along $E_{\textup{nK}}(\pi^*\xi)$) and taking Hironaka's resolution of singularities (cf.~e.g.~\cite[Definition 2.5.1 and Proof of Proposition 2.3]{Bou04}), we get a (not necessarily $G$-equivariant) bimeromorphic holomorphic map $\pi':\widetilde{X}'\to \widetilde{X}$ such that $\pi'^*\pi^*\xi=\omega+E$ where $\omega$ is some K\"ahler class and $E$ is an effective $\mathbb{R}$-divisor whose $\pi'$-image has support $\textup{Supp}\,\pi'(E)=\textup{Null}(\pi^*\xi)$. By Lemma \ref{lem-null-periodic}, the support of $F$ (as the divisorial component of $\pi(\pi'(E))$; cf.~\cite[Lemma 2.36]{DH22}) is $G$-periodic. In the following, we can run a $(K_X+D+\xi)$-trivial but $(K_X+D+F)$-negative minimal model program $G$-equivariantly. \setlength\parskip{3pt}\par \vskip 0.3pc \noindent \textbf{Step 1.} First, we deal with the case when every $(K_X+D+\xi)$-trivial curve is $(K_X+D+F)$-non-negative. Then the null locus $\textup{Null}(K_X+D+\xi)$ is the union of finitely many curves (cf.~\cite[Paragraph 4, Page 48, Proof of Theorem 6.4]{DH22}). By \cite[Proposition 6.2]{DH22} (cf.~\cite[Theorem 4.2]{CHP16}), there is a proper bimeromorphic holomorphic map $\psi_0:X\to Z$ contracting $\textup{Null}(K_X+D+\xi)$ to a single point.
Moreover, $K_X+D+\xi=\psi_0^*\beta$ for some K\"ahler class $\beta$ on $Z$ (cf.~\cite[Ending part of Proof of Theorem 6.4]{DH22}). For every curve $\ell$ contracted by $\psi_0$, it follows from the projection formula that $\xi_i\cdot g(\ell)=0$, noting that each $\xi_i$ is nef and $\ell$ is also $\xi$-trivial. Then $0=(K_X+D+\xi)\cdot g(\ell)=\beta\cdot \psi_0(g(\ell))$, which implies that $g(\ell)$ is also contracted by $\psi_0$ for every $g\in G$. By the rigidity lemma (cf.~\cite[Lemma 4.1.13]{BS95}), $G$ descends to $Z$ holomorphically, i.e., $\psi_0$ is $G$-equivariant. \setlength\parskip{3pt}\par \vskip 0.3pc \noindent \textbf{Step 2.} Suppose in the following that there is a $(K_X+D+\xi)$-trivial extremal ray $R$, which is $\xi$-trivial but $(K_X+D+F)$-negative. Then it follows from \cite[Theorem 1.3]{CHP16} that such an extremal ray $R=\mathbb{R}_{\ge0}[C]$ is generated by a curve $C$ with $-(K_X+D+F)\cdot C\le 4$. Since $\xi=\eta+F$, we have $\eta\cdot C\le 4$ and thus there are only finitely many extremal rays $R$ such that $(K_X+D+\xi)\cdot R=0$, $\xi\cdot R=0$ but $(K_X+D+F)\cdot R<0$ (cf.~Proof of Proposition \ref{pro-nef-big-finite}). Since the support of $D+F$ is $G$-periodic, with $G$ replaced by a finite-index subgroup, the $G$-image of $R$ is also $(K_X+D+\xi)$-trivial, $\xi$-trivial but $(K_X+D+F)$-negative. With $G$ replaced by a finite-index subgroup again, $G$ fixes all such extremal rays and the contraction $\phi: X\to Y$ of $R$ is thus $G$-equivariant (the existence of $\phi$ is due to \cite[Theorems 1.5 and 2.18]{DH22}; cf.~\cite[Section 10]{DO22}). Furthermore, since $R$ is $\xi$-trivial, each $\xi_i=\phi^*\xi_{i,Y}$ for some nef $(1,1)$-class $\xi_{i,Y}$ on $Y$.\setlength\parskip{0pt} If $\phi$ is divisorial, then set $X_1:=Y$, $\xi_{X_1}:=\sum \xi_{i,Y}$, $\eta_1:=\phi_*\eta$, $F_1:=\phi_*F$ and $D_1:=\phi_*D$.
If $\phi$ is small, then the flipped contraction $\phi^+:X^+\to Y$ is still $G$-equivariant by the choice of $X^+:=\textup{Proj}_Y\oplus_{m\ge 0}\phi_*\mathcal{O}_X(\lfloor m(K_X+D+F)\rfloor)$; in this case, set $X_1:=X^+$, $\xi_{X_1}:=\sum (\phi^+)^*\xi_{i,Y}$ and set $\eta_1, F_1, D_1$ to be the direct images of $\eta,F,D$ under the flip $X\dashrightarrow X^+$ respectively. Then we continue this procedure with the new pair $(X_1,D_1)$, noting that $\eta_1$ is still modified K\"ahler (cf.~\cite[Lemma 2.35]{DH22}) and the pair $(X_1,D_1+F_1,G)$ satisfies \hyperlink{Hypothesis}{Hypothesis}. By \cite[Theorem 1.1]{DH22}, this program will terminate in finitely many steps and thus we finally arrive at Step 1. By the ending part of \cite[Proof of Theorem 6.4]{DH22}, the composite map $\psi:X\dashrightarrow Z$ is holomorphic and $(K_X+D+\xi)$-trivial. So we finish the proof of Claim \ref{claim-equivariancy} by showing the $G$-equivariancy step by step. \qed \setlength\parskip{6pt}\par \vskip 1pc Now we come back to proving Proposition \ref{prop-end-pdt}. By \cite[Lemma 3.3]{HP16}, every nef $G$-eigenvector satisfies $\xi_i=\psi^*\xi_{Z,i}$, and hence $\xi=\psi^*(\xi_Z:=\sum\xi_{Z,i})$. Since $K_X+D=\psi^*(K_Z+D_Z)$ and $(X,D)$ is klt, the pair $(Z,D_Z:=\psi_*(D))$ is also klt. Further, $D_Z$ is also $G$-periodic. \setlength\parskip{0pt} \begin{claim}\label{claim-nef-case} $K_Z+D_Z\sim_{\mathbb{Q}}0$. \end{claim} Suppose Claim \ref{claim-nef-case} for the time being. Then $\xi_Z\sim_\mathbb{Q} K_Z+D_Z+\xi_Z=\beta$ is a K\"ahler class. By Lemma \ref{lem-null-periodic}, there is no positive dimensional $G$-periodic subvariety on $Z$ and hence $D_Z=0$. In particular, $K_Z\sim_\mathbb{Q}0$ and Theorem \ref{thm-main} (3) is proved. Take an integer $m$ such that $mK_Z\sim 0$. Let $\tau:Z':=\textbf{Spec}\,\oplus_{i=0}^{m-1}\mathcal{O}_Z(-iK_Z)\to Z$ be the global index one cover with $K_{Z'}=\tau^*K_Z\sim 0$.
Then $Z'$ has only canonical singularities (cf.~\cite[Proposition 5.20 and Corollary 5.24]{KM98}). Clearly, one can lift $G$ to $Z'$ via its action on $K_Z$, and $\xi_{Z'}:=\tau^*\xi_Z$ is still a K\"ahler class (cf.~\cite[Proposition 3.5]{GK20}). By \cite[Proposition 3.6]{GK20} and Lemma \ref{lem-periodic}, $\widetilde{c_2}(Z')\cdot \xi_{Z'}=0$. Therefore, $Z'$ is a $Q$-complex torus (cf.~\cite[Theorem 1.1]{GK20}). By the Galois closure trick (cf.~\cite[Lemma 7.4]{GK20}), the quotient $Z$ is also a $Q$-complex torus. So Theorem \ref{thm-main} (1) is proved. Let $a:T\to Z$ be the Albanese closure (cf.~\cite[Definition 2.5]{Zho22}) such that the group $G$ (on $Z$) lifts to $G_T$ (on $T$) holomorphically. Let $H:=\textup{Gal}(T/Z)$ be the Galois group. Since each $g_T\in G_T$ normalizes $H$, the composite $a\circ g_T$ is $H$-invariant. By the universality of the quotient morphism $a$ over the \'etale locus, $a\circ g_T$ factors through $a$. Hence, the Albanese closure $a$ is $G$-equivariant with $G\cong G_T/H$. In view of Lemma \ref{lem-null-periodic}, the singular locus $\textup{Sing}\,Z$ consists of finitely many isolated points, noting that $\textup{Sing}\,Z$ is $G$-periodic and $\xi_Z$ is K\"ahler (cf.~Lemma \ref{lem-periodic}). So the group $H$ acts freely outside finitely many isolated points by the purity of the branch locus, and Theorem \ref{thm-main} (2) is proved. \setlength\parskip{4pt}\par \vskip 0.3pc \noindent \textbf{Proof of Claim \ref{claim-nef-case} (End of Proof of Proposition \ref{prop-end-pdt}):} Since $Z$ is non-uniruled by our assumption at the beginning of the proof, $K_Z$ is pseudo-effective (cf.~\cite{Bru06}). Let $m\in\mathbb{N}$ be the Cartier index of $K_Z+D_Z$ and take a $G$-equivariant resolution $\sigma:\widetilde{Z}\to Z$.
Then the null locus of $\xi_{\widetilde{Z}}:=\sigma^*\xi_Z$, which is the union of positive dimensional $G$-periodic proper subvarieties of $\widetilde{Z}$ (cf.~Lemma \ref{lem-null-periodic}), coincides with the non-K\"ahler locus of $\xi_{\widetilde{Z}}$ (cf.~\cite[Theorem 1.1]{CT15}). By Lemma \ref{lem-periodic}, $\sigma^*(K_Z+D_Z)\cdot\xi_{\widetilde{Z}}^2=0$. \setlength\parskip{0pt} We show that, for every $G$-periodic subvariety $B\subseteq\widetilde{Z}$, the restriction $\xi_{\widetilde{Z}}|_{B}\equiv 0$. By Lemma \ref{lem-periodic}, we may assume $\dim B=\dim\sigma(B)=2$, for otherwise, the triviality of $\xi_{\widetilde{Z}}|_B$ follows from that of $\xi_Z|_{\sigma(B)}$, noting that $\dim \sigma(B)\le 1$ and $\sigma(B)$ is also $G$-periodic. Then, $\sigma^*(K_Z+D_Z+\xi_Z)|_B$ is nef and big. Since both $\sigma^*(K_Z+D_Z)$ and $B$ are $G$-periodic, applying Lemma \ref{lem-periodic} once more, we have $$\sigma^*(K_Z+D_Z+\xi_Z)|_B\cdot \xi_{\widetilde{Z}}|_B=\sigma^*(K_Z+D_Z+\xi_Z)\cdot \xi_{\widetilde{Z}}\cdot B=0.$$ Thus, $(\xi_{\widetilde{Z}}|_B)^2\le 0$ and hence $(\xi_{\widetilde{Z}}|_B)^2=0$ by the nefness of $\xi_{\widetilde{Z}}$. Applying the Hodge index theorem and noting that $(\sigma^*(K_Z+D_Z+\xi_Z)|_B)^2>0$, we finally have $\xi_{\widetilde{Z}}|_B\equiv 0$. Since $\xi_{\widetilde{Z}}$ is nef and big, with the same proof as in Claim \ref{claim-equivariancy}, there is a bimeromorphic holomorphic map (which is not necessarily $G$-equivariant) $\sigma':\widetilde{Z}'\to \widetilde{Z}$ such that $\sigma'^*\xi_{\widetilde{Z}}=\omega+E$ where $\omega$ is some K\"ahler class and $E$ is an effective $\mathbb{R}$-divisor whose $\sigma'$-image has support $\textup{Supp}\,\sigma'(E)=\textup{Null}(\xi_{\widetilde{Z}})$ (which is $G$-periodic). Then $(\sigma'^*\xi_{\widetilde{Z}})|_E=(\sigma'|_E)^*(\xi_{\widetilde{Z}}|_{\sigma'(E)})\equiv 0$ by the above paragraph.
Therefore, \begin{equation}\label{eq1}\tag{\dag} 0=(\varphi:=\sigma\circ\sigma')^*(K_Z+D_Z)\cdot (\varphi^*\xi_Z)^2=\varphi^*(K_Z+D_Z)\cdot(\omega+E)\cdot\omega. \end{equation} Furthermore, the nefness of the following class $$\varphi^*(K_Z+D_Z)|_E=(\sigma'|_E)^*(\sigma^*(K_Z+D_Z)|_{\sigma'(E)})=(\sigma'|_E)^*(\sigma^*(K_Z+D_Z+\xi_Z)|_{\sigma'(E)})$$ implies that $\varphi^*(K_Z+D_Z)\cdot E\cdot\omega\ge 0$. Together with Equation (\ref{eq1}), $\varphi^*(K_Z+D_Z)\cdot\omega^2=0$. By \cite[Theorem 3.1]{DS04}, $\varphi^*(K_Z+D_Z)^2\cdot\omega\le 0$ and the equality holds if and only if $\varphi^*(K_Z+D_Z)\equiv 0$. However, for $e\gg 1$, $\varphi^*(K_Z+D_Z)+e\omega$ is also a K\"ahler class. By the pseudo-effectivity of $\varphi^*(K_Z+D_Z)$, one has $$0\le\varphi^*(K_Z+D_Z)\cdot(\varphi^*(K_Z+D_Z)+e\omega)\cdot\omega=\varphi^*(K_Z+D_Z)^2\cdot\omega\le 0.$$ So $\varphi^*(K_Z+D_Z)\equiv 0$ as classes. Since $\dim Z=3$, $K_Z+D_Z\equiv 0$ (cf.~\cite[Proposition 3.14]{HP16}). So the abundance for numerically trivial pairs gives us $K_Z+D_Z\sim_{\mathbb{Q}}0$ (cf.~\cite[Corollary 1.18]{CGP20} or \cite[Theorem 1.1]{DO22}). We finish the proof of Claim \ref{claim-nef-case}. \end{proof} \begin{proof}[Proof of Theorem \ref{thm-main}] In view of \cite[Theorem 1.3 (1)]{Zho22} and its proof, we may assume that $X$ is not uniruled. In the following, let us run the $G$-equivariant minimal model program for the pair $(X,D+\xi)$ (cf.~\hyperlink{Hypothesis}{Hypothesis} and Notation \ref{not-xi}). If $K_X+D+\xi$ is already nef, then we are done by Proposition \ref{prop-end-pdt}. We consider the case when $K_X+D+\xi$ is not nef. Then there is a $(K_X+D+\xi)$-negative extremal ray $R$. By Proposition \ref{pro-nef-big-finite}, there are only finitely many such $(K_X+D+\xi)$-negative extremal rays. Therefore, with $\xi$ replaced by a multiple, we may assume that all such extremal rays are $\xi$-trivial (and hence $\xi_i$-trivial for $i=1,2,3$).
If $R=\mathbb{R}_{\ge 0}[\ell]$ is one such ray with $(K_X+D+\xi)\cdot\ell<0$ and $\xi\cdot\ell=0$, then $\xi_i\cdot g(\ell)=0$ for each $i$ and thus $\xi\cdot g(\ell)=0$. So $(K_X+D+\xi)\cdot g(\ell)<0$ and $g_*R$ is also one of these $(K_X+D+\xi)$-negative extremal rays. With $G$ replaced by a finite-index subgroup, we may assume that $R$ is $G$-stable. Let $\phi: X\to Y$ be the contraction of $R$ (cf.~\cite[Theorems 1.5 and 2.18]{DH22}, \cite[Section 10]{DO22}). Then $G$ descends to a biregular action on $Y$ and each $\xi_i=\phi^*\xi_{Y,i}$ for some nef common $G|_Y$-eigenvector $\xi_{Y,i}$ (cf.~\cite[Lemma 3.3]{HP16}). If $\phi$ is divisorial, with $(X,D,G)$ replaced by $(Y,D_Y:=\phi(D),G|_Y)$, we come back and continue this program, noting that $(Y,D_Y:=\phi(D),G|_Y)$ satisfies \hyperlink{Hypothesis}{Hypothesis}; see \cite[Theorem 1.1]{DH22}. If $\phi$ is small, then \cite[Theorem 4.3]{CHP16} confirms the existence of the flip $\phi^+:X^+\to Y$. Indeed, $X=\textup{Proj}_Y\oplus_{m\ge 0}\phi_*\mathcal{O}_X(\lfloor-m(K_X+D_X)\rfloor)$ and $X^+=\textup{Proj}_Y\oplus_{m\ge 0}\phi_*\mathcal{O}_X(\lfloor m(K_X+D_X)\rfloor)$. Clearly, $G$ descends to $Y$ and lifts to $X^+$ holomorphically. Then with $(X,D,G)$ replaced by $(X^+,D^+:=(\phi^+)^{-1}(\phi(D)), G|_{X^+})$, we come back and continue this program, noting that we pull back $\xi_{Y,i}$ to $X^+$ to get new nef common $G|_{X^+}$-eigenvectors. Similarly, $(X^+,D^+:=(\phi^+)^{-1}(\phi(D)), G|_{X^+})$ satisfies \hyperlink{Hypothesis}{Hypothesis}; see~\cite[Theorem 1.1]{DH22}. By \cite[Theorem 1.1]{DH22}, this ($G$-equivariant) log minimal model program terminates after finitely many steps. So we finally arrive at the model $X_n$ with $K_{X_n}+D_n+\xi_n$ nef and obtain the fibration $X_n\to Z$ by the base point free theorem (cf.~\cite[Theorem 1.7]{DH22}). By Proposition \ref{prop-end-pdt}, it remains to prove that the $G$-equivariant bimeromorphic composite map $X\dashrightarrow Z$ is indeed holomorphic.
Suppose to the contrary that $X\dashrightarrow Z$ is not holomorphic. Then there exists some flip $X_i\dashrightarrow X_{i+1}:=X_i^+$ over $Y$ in the $G$-MMP, such that $Y\dashrightarrow Z$ is not holomorphic. Let $\phi_i:X_i\to Y$ be the corresponding flipping contraction of the extremal ray $R_i$ (with the flipped contraction $\phi_i^+:X_{i+1}\to Y$). By the rigidity lemma (cf.~\cite[Lemma 4.1.13]{BS95}), there is some curve $C\in R_i$ such that $(\phi_i^+)^{-1}(\phi_i(C))$ is not contracted by $X_i^+\dashrightarrow Z$. But then, the image of $(\phi_i^+)^{-1}(\phi_i(C))$ on $Z$ is a $G$-periodic curve, contradicting the first assertion of Theorem \ref{thm-main} (3). So we finish the proof of Theorem \ref{thm-main}. \end{proof}
\section{Introduction} There is a general consensus that the matter produced at RHIC behaves as a near perfect fluid \cite{perfectfluid}. One of the key findings that led to this interpretation is the strong anisotropic flow of produced hadrons and its description by ideal hydrodynamic simulations \cite{idealhydro}. Although it is too early to draw definitive conclusions, it appears that the deviations from ideal hydrodynamic behavior may be ascribed to dissipative effects. This has been suggested in many of the recent works on dissipative hydrodynamics \cite{vishydro}. Care must be taken when drawing conclusions about the viscosity of the early matter at RHIC based on hadronic observables alone. As hadrons interact strongly, they are sensitive to the later stages of the evolution, leaving an ambiguity about whether the viscous effects seen in spectra stem from the hadronic or QGP phase. On the other hand, electromagnetic probes are emitted throughout the entire space-time evolution, reach the detector without any final state interactions, and are not sensitive to the dynamics of freeze-out. As the next generation of experiments shifts towards a precision study of the matter produced at RHIC, it is imperative to have multiple observables constraining the medium properties inferred from the data. This work demonstrates that direct photons can be used to constrain the shear viscosity. It is also shown that, by neglecting the presence of viscosity, incorrect thermalization times will be extracted from experiments. There is a long history of photon calculations that we cannot summarize here. Many of the works relevant to experiment \cite{photonphem} have relied on kinetic equilibrium and others \cite{anisdilep} have studied the effects of early momentum-space anisotropies. Only recently, however, has there been a measurement \cite{:2008fqa} that is precise enough to suggest the presence of an early hot stage of matter.
\section{Photon Rates with Viscosity} In this section we show how the presence of viscosity modifies the photon spectra. In order to demonstrate the effect we look at the $2\to 2$ processes in fig.~\ref{fig:dia}. There are additional diagrams that contribute to the thermal photon rate at leading order \cite{Arnold:2002ja}, which will not be examined in this leading log analysis. The emission rate of photons having momentum $\vec{q}$ and energy $E_q=|\vec{q}|$ is \beqa E_q\frac{dN}{d{\vec q}}&=&4 \sum_f \int\frac{d^3 p_a d^3 p_b}{(2\pi)^6}\nonumber\\ &\times& f_a(p_a) f_b(p_b) \left[1\pm f_2(p_a+p_b-q)\right] E_q\frac{d\sigma}{d{\vec q}} v_{ab}\,,\spc\spc\spc\spc\spc\spc\spc \eeqa where the sum is over quark flavors and $f_a(E_a,{\vec p}_a)$ is particle species $a$'s distribution function, which is not necessarily in equilibrium. We have used the upper (lower) sign for a final state boson (fermion). \begin{figure}[hbtp] \vspace{9pt} \centerline{\hbox{ \hspace{0.0in} \includegraphics[scale=1.2]{annihilation} \hspace{0.13in} \includegraphics[scale=1.2]{compton} } } \vspace{9pt} \caption{Feynman diagrams for the annihilation process (left) and the Compton process (right).} \label{fig:dia} \end{figure} For both the Compton and annihilation process the leading log behavior arises when the exchanged momentum is soft ({\em i.e.} $ \sim gT$). In this case the amplitude will be dominated by forward scattering. Following \cite{WongBk} we can approximate the differential cross section as \beqa E_q\frac{d\sigma}{d{\vec q}}\approx E_q \sigma_{tot}(s) \delta^3(\vec{q}-\vec{p}_a)\,, \eeqa where $\sigma_{tot}$ is the total cross section of either process. Performing the integration over ${\vec p}_a$ we are left with \beqa E_q\frac{dN}{d{\vec q}}=\frac{2}{(2\pi)^6} f_a(q) \sum_f \int d^3 p_b f_b(p_b) \left[1\pm f_{2}(p_b)\right] \frac{s \sigma(s)}{E_b}\,. \label{eq:rate1} \eeqa By examining the above equation one can already see where the viscous correction will come into play.
The expression for the photon rates is proportional to the distribution function of particle $a$ (the quark). Therefore, if viscosity modifies the quark's distribution function it will modify the photon emission rates accordingly. Following \cite{Bellac} we take a one-parameter {\em ansatz} for the viscous correction to the distribution function \beqa f^a(p_a) = f_0^a(p_a)+\frac{C_a}{2T^3} f_0^a\left[ 1\pm f_0^a\right] p^\mu_a p^\nu_a \partial_{\langle \mu} u_{\nu \rangle} \,, \eeqa where $f_0^a$ is the particle's equilibrium distribution function and $C_a$ is a constant determined in Appendix A and the notation $\langle \cdots \rangle$ designates that the quantity in brackets should be symmetrized and made traceless. In general each distribution function in the above rate equation~(\ref{eq:rate1}) must be replaced by its viscous counterpart, $f_0+\delta f$, and then one must drop terms of $\mathcal{O}(\delta f^2)$. A full analysis, which will be presented elsewhere, has found that the viscous corrections to the distribution functions occurring under the phase space integral lead to corrections to the coefficient under the log. Therefore, to leading log order, we neglect the viscous correction to $f_b$ and perform the above phase space integrals in the same manner as done in \cite{WongBk, Kapusta:1991qp} where the diverging differential cross section is regulated using the re-summation technique\footnote{In principle one must also include the viscous correction when computing the resummed propagator. It turns out that these corrections can be taken into account by introducing a generalized momentum dependent thermal mass. This correction to the rates will not be enhanced by the logarithm and can therefore be neglected in this leading log approach.} of Braaten and Pisarski \cite{Braaten:1989mz}.
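To get a feel for the size of $\delta f/f_0$ before folding it into the full 2+1 dimensional evolution, one can evaluate the ansatz for idealized boost-invariant Bjorken flow, where at midrapidity the contraction reduces (in one common sign convention) to $q^\mu q^\nu \partial_{\langle \mu} u_{\nu \rangle}=q_\perp^2/(3\tau)$. The sketch below assumes this reduction, the value $C_a=1.3\,\eta/s$ (the constant used in the final rate formula), and illustrative numbers $T=300$ MeV and $\tau=0.6$ fm/c; it is a toy estimate, not the hydrodynamic profile used in the figures:

```python
import math

HBARC = 0.1973  # GeV * fm, converts proper time in fm/c to GeV^-1


def delta_f_over_f0(q_perp, T, tau_fm, eta_over_s=1.0 / (4.0 * math.pi)):
    """Relative viscous correction delta f / f0 for a massless quark at
    midrapidity, using the one-parameter ansatz with C_a = 1.3*eta/s and the
    Bjorken-flow contraction q^mu q^nu d_<mu u_nu> = q_perp^2 / (3 tau).
    All momenta and temperatures in GeV."""
    tau = tau_fm / HBARC                         # proper time in GeV^-1
    f0 = 1.0 / (math.exp(q_perp / T) + 1.0)      # equilibrium Fermi-Dirac, mu = 0
    contraction = q_perp ** 2 / (3.0 * tau)
    return 1.3 * eta_over_s / (2.0 * T ** 3) * (1.0 - f0) * contraction


# Since the leading-log rate is proportional to f_a(q), this ratio is also the
# relative viscous correction to the photon rate at that momentum.
for q in (1.0, 2.0, 2.5):  # GeV
    print(q, delta_f_over_f0(q, T=0.3, tau_fm=0.6))
```

With these toy numbers the correction grows like $q_\perp^2$ and reaches order one near $q_\perp\approx 2.5$ GeV, in line with where the text later places the breakdown of the hydrodynamic description.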
The final result of both the Compton and $\overline{q}q$ annihilation processes is \beqa E_q\frac{dN}{d{\vec q}}= \frac{5}{9} \frac{\alpha_e \alpha_s}{2\pi^2} f_a(q) T^2\ln\left[\frac{3.7388 E_q}{g^2 T}\right]\,, \label{eq:log} \eeqa where $f_a$ is the quark's off-equilibrium distribution function \beqa f^a(q) = f_0^a(q)+1.3\frac{\eta/s}{2T^3} f_0^a\left[ 1- f_0^a\right] q^\mu q^\nu \partial_{\langle \mu} u_{\nu \rangle} \,. \eeqa \section{Spectra in Ultra-relativistic Heavy-Ion Collisions} In this section the above photon rates are integrated over the space-time evolution of the collision region determined by a 2+1 dimensional boost invariant viscous hydrodynamic model \cite{Dusling:2007gi}. The same bag model equation of state and Glauber model initial conditions of \cite{Teaney:2001av} are used since it was able to predict many hadronic observables. We have considered photon production from the QGP phase only. An impact parameter of $b=6.5$ fm and $\eta/s=1/4\pi$ is used throughout. The one parameter in the rate equations, $\alpha_s$, is evaluated at the scale $\mu=\pi T$ from the two-loop $\beta$ function. Because we are using the leading-log rates, the results below $q_\perp\sim 1$ GeV are speculative. Above 1 GeV the expression under the log in eq.~\ref{eq:log} remains larger than one. There is one technicality regarding the viscous evolution model which must be discussed. We have chosen to use the ideal results for the evolution model for the viscous case as well. This amounts to neglecting the viscous corrections to the flow and temperature profiles, which tend to be small, especially in the early stages of the evolution. Even though this approach is fundamentally inconsistent (energy-momentum conservation is violated when converting from hydro to particles), the corrections are small. This procedure is convenient for two reasons. First, arbitrarily large gradients can be treated.
And second, the final multiplicities will remain unchanged so we will not have to worry about modifying the initial conditions in order to re-fit hadronic observables at finite viscosity. The entire viscous effect, in this work, comes from the modification to the rates as outlined in the previous section. A previous work \protect\cite{Dusling:2008xj} studied the emission of dileptons from a viscous medium taking into account the viscous correction to the underlying flow and temperature profiles. While the viscous correction to the hydrodynamic variables modified the yields, the shape of the spectrum (as seen through $T_{\mbox{eff}}$) was largely undistorted. This is further motivation for neglecting the viscous corrections to flow. Fig.~\ref{fig:qt} shows the thermal photon transverse momentum spectra for the hydrodynamic model having starting times of $\tau_0=0.2, 0.6, 1.0$ fm/c. The ideal results are shown as solid lines. The dominant contribution at higher momentum comes from the early stages of the evolution where the temperature is highest. This leads to the expected increase in yields at higher momentum for earlier starting times. Shown as lines with symbols are the corresponding viscous results. A hardening of the transverse momentum spectra is seen, reminiscent of the known effect of viscosity on single particle spectra \cite{Teaney:2003kp}. These results should not come as a surprise. At earlier times the collision geometry has larger gradients and the underlying quasi-particles are furthest from local thermal equilibrium. Viscosity introduces a power law correction to the spectra (reminiscent of particle production in perturbative QCD). As the system evolves, thermalization occurs by transferring momentum by \brem or collisions to softer modes until the spectra eventually become thermal. Since the relaxation time grows with energy, incomplete thermalization enhances the quark distribution at high $q_T$.
The harder distribution of quarks leads to a harder spectrum of photons. \begin{figure}[hbtp] \vspace{2pt} \centerline{\hbox{ \hspace{0.0in} \includegraphics[scale=.35]{photonpt} } } \vspace{2pt} \centerline{\hbox{ \hspace{0.4in} \includegraphics[scale=.32]{photonptrat} } } \vspace{2pt} \caption{$q_\perp$ spectra of thermal photons from the QGP (top) and the ratio of the viscous correction over the ideal result (bottom). The band in the lower figure indicates where the $q_\perp$ spectra can no longer be reliably calculated.} \label{fig:qt} \end{figure} The dominant contribution to the transverse momentum spectra at high $q_\perp$ is from the earliest times when the medium is hottest and the gradients are largest\footnote{The viscous correction is proportional to $\partial_\mu u^\mu\sim 1/\tau$ at early times}. This leads to a larger viscous correction, and the subsequent breakdown of the hydrodynamic description, at large $q_\perp$. The lower plot of fig.~\ref{fig:qt} shows the ratio of the viscous correction to the ideal result. The correction becomes of order one at $q_\perp\approx 2.5$ GeV. Fig.~\ref{fig:v2} shows the photon elliptic flow defined as \beqa v_2 \equiv \frac{\int d\phi \cos(2\phi)\spc (dN +\delta dN)}{\int d\phi\spc (dN +\delta dN)}\approx \frac{\int d\phi \cos(2\phi)\spc (dN +\delta dN)}{\int d\phi\spc dN} -\frac{\int d\phi \spc \delta dN \int d\phi \cos(2\phi)\spc dN}{(\int d\phi\spc dN)^2} \label{eq:v2def} \eeqa where $dN=dN/d^3q$ is the space-time integrated ideal photon spectra and $\delta dN=\delta dN/d^3q$ is the viscous correction. In the rightmost expression the denominator has been expanded in order to keep terms up to $\mathcal{O}(\delta f)$ only. The solid red line shows the ideal result. As explained in \cite{Chatterjee:2005de}, the total photon $v_2$ follows from a weighted average of the flow over the proper time of the evolution. A linear rise in $v_2$, as expected from hydro, is observed.
The suppression at large $q_\perp$ is due to the early non-flowing phase, which dominates the yields at large momentum. Including viscosity, shown as the line with symbols, has a large effect on the $v_2$. The ideal result is suppressed by a factor as large as $\sim 2$. In contrast to the hadronic $v_2$, where the largest effect of viscosity is at high $p_\perp$, we find large corrections at {\em all} $q_\perp$ in the case of photons. The solid magenta line in Fig.~\ref{fig:v2} shows the viscous result using the expansion in the rightmost expression of eq.~\ref{eq:v2def}. When the two results disagree, this observable can no longer be reliably calculated. This happens at $q_\perp\approx 1.5$ GeV, as shown by the solid band. \begin{figure} \includegraphics[scale=.35]{photonv2} \caption{Elliptic flow of thermal photons from the QGP.} \label{fig:v2} \end{figure} \section{Discussion} The sensitivity of the spectra to the shear viscosity makes the extraction of the thermalization time more subtle. It also makes it possible to extract $\eta/s$ from the data. In order to demonstrate the process, the inverse slope of the photon spectra is computed by a fit to $1/q_\perp dN/dq_\perp\propto \exp(-q_\perp/T_{\mbox{eff}})$ in the momentum region\footnote{Realistic model calculations \protect\cite{photonphem} find that the QGP contribution dominates the yields in this kinematic region.} $1.5 \leq q_\perp$ (GeV) $\leq 2.5$. In fig.~\ref{fig:phteff} the effective temperature $T_{\mbox{eff}}$ is plotted versus the thermalization time $\tau_0$ for $\eta/s=0, 1/4\pi$ and $2/4\pi$. Both viscosity and earlier thermalization cause the effective temperature to increase in a non-trivial way. We should clarify to the reader that what we call {\em thermalization time} is really the hydrodynamic starting time. In this work any production before $\tau_0$ has been neglected.
If one included non-equilibrium production from the early evolution prior to $\tau_0$, we would expect the strong dependence on $\tau_0$ to be reduced. From fig.~\ref{fig:phteff}, it is clear that fitting the data with ideal hydrodynamic simulations will result in the extraction of earlier thermalization times. For example, the same $T_{\mbox{eff}}$ is observed for an ideal (viscous) evolution starting at $\tau_0=$0.6 (1.0) fm/c. A precise measurement of the inverse slope could constrain a combination of $\tau_0$ and $\eta/s$. In order to discern between the ideal and viscous results shown here, a measurement must pin down $T_{\mbox{eff}}$ to within 20 MeV. The band in fig.~\ref{fig:phteff} shows the experimentally measured slope \cite{:2008fqa}, including both systematic and statistical errors, in order to demonstrate the current quality of the data. \begin{figure} \includegraphics[scale=.35]{phteff} \caption{Effective temperature of final photon spectra for starting times of $\tau_0=0.2,0.6,1.0$ and 1.4 fm/c. Lines are shown to guide the eye. The solid band is the measured slope from minimum bias data \protect\cite{:2008fqa}.} \label{fig:phteff} \end{figure} There are a number of caveats which must be discussed before a fair comparison can be made with data. The largest uncertainty comes from using the leading log results. Examining the full leading order results from \cite{Arnold:2002ja} we estimate that the leading log contribution used here comprises about fifty percent of the thermal QGP photon yields in the kinematic regions of interest. Therefore, even if the viscous correction to the remaining leading order (non leading log) results turned out to be negligible, one would still expect the above conclusions to hold at a qualitative level. \section{Conclusions} The viscous correction to thermal photons at leading log order is computed. The rates are then integrated over the space-time history of a hydrodynamic simulation.
The viscosity sets a bound on where a hydrodynamic description is reliable ($q_\perp\sim 2.5$ GeV for spectra and $q_\perp\sim 1.5$ GeV for elliptic flow). We have shown that viscosity increases $T_{\mbox{eff}}$; neglecting it therefore leads to wrong conclusions about the thermalization time. Our model calculation has shown that the photon spectra can place stringent constraints on both $\tau_0$ and $\eta/s$. \begin{acknowledgments} I am grateful to Raju Venugopalan for carefully reading this manuscript and making many valuable suggestions. I would also like to thank Ulrich Heinz, Shu Lin, Rob Pisarski, Derek Teaney and Werner Vogelsang for useful discussions. This work was supported by the US-DOE grant DE-AC02-98CH10886. \end{acknowledgments}
\section{Introduction} Twisted bilayer graphene (TBG) has emerged as a highly tunable platform to observe correlated electron behaviour, such as insulating phases or unconventional superconductivity \cite{Cao2018,Cao_Insulator,Yankowitz2019}. By twisting two sheets of graphene by an angle $\theta$, a moir\'{e} pattern emerges and gives rise to a larger superlattice unit cell. The corresponding Brillouin zone (BZ) is much smaller than the BZ of a single graphene sheet and is thus called the mini-BZ. At a `magic' twist angle of $\theta \approx 1.1 ^{\circ}$ the bands near the Fermi energy become exceptionally flat \cite{Barticevic2010,CastroNeto2007,CastroNeto2012,SpectrumTBG,MacDonald2011}. As the kinetic energy in these bands is small, electron-electron interactions become increasingly important. Because of spin and valley degeneracies, the bands in the mini-BZ are four-fold degenerate. Besides controlling the bandstructure via the twist angle, the charge carrier density in twisted bilayer graphene can also be controlled by external electrostatic gating.\\ At the beginning of 2019, Sharpe \textit{et al}.\ found experimental evidence for a ferromagnetic state at filling $\nu=3$ \cite{Sharpe2019}. They measured an anomalous Hall effect which shows a hysteresis in an external magnetic field. Many publications suggest that the interactions may lift the spin and valley degeneracies, which could lead to different kinds of magnetic order \cite{MacDonald2020,Dodaro2018,Scheuer2018,Ochi2018,bultinck2019ground,Zaletel_skyrmions,Honerkamp2019}. Later in 2019, a quantized anomalous Hall effect was measured by Serlin \textit{et al}.\ in TBG on a hexagonal boron nitride (h-BN) substrate for filling $\nu=3$ \cite{Serlin2019}. Because of the substrate, the two-fold rotation symmetry of the TBG is broken, which gaps out the Dirac cones, and the electronic bands acquire a non-zero Chern number $C$ \cite{GapsHBN,Jung2015,Zaletel2019,Zhang2019,Zhang_AQHE}.
As the resulting ground state is a fully spin- and valley-polarized Chern insulator at filling $\nu=3$ \cite{Zaletel_skyrmions,liu2019correlated,alavirad2019ferromagnetism,repellin2019ferromagnetism,WuCollective2020}, there can be other charged excitations besides simple particle-hole pairs, namely skyrmions. In the quantum Hall phase there is a sizable Mott gap for charge excitations. Experimentally, an activation gap of $\SI{30}{K}$ has been measured in transport \cite{Serlin2019,Young2020}. Similarly, the valley degree of freedom is also gapped, as its continuous rotation triggers a sign change of $\sigma_{xy}$ and thus closes the charge gap. Therefore, we do not expect topological textures involving the valley degree of freedom \cite{skyrmionQHRosch} and focus our study on the only remaining low-energy degree of freedom, the magnetization. The spin structure can be described by a continuous vector field, the classical magnetization $\hat{m}(\vec{r},t)$. A skyrmion has a non-trivial topology characterized by its winding number $W$: \begin{equation} W = \dfrac{1}{4 \pi} \int \limits_{\mathbb{R}^2} \mathrm{d}^2 r\ \hat{m}(\vec{r})\cdot\left(\dfrac{\partial \hat{m}}{\partial x} \times \dfrac{\partial \hat{m}}{\partial y}\right) \in \mathbb{Z} \label{eq:WindingNumber} \end{equation} If the Chern number of the electronic bands is independent of the spin orientation (as in the case of TBG), the skyrmion acquires a charge given by the product of the Chern and winding numbers \cite{phaseSpaceBerry}. Skyrmions are fermionic (bosonic) for odd (even) products. Interestingly, it has been argued \cite{khalaf2020charged} that superconductivity can arise from the condensation of bosonic skyrmions for $C=2$. It has long been established, both theoretically \cite{Sondhi1993,Fertig1994,Fertig1997} and experimentally \cite{quantumHallFlippedSpin}, that spin-polarized electrons in the Landau levels of quantum Hall systems can form skyrmions which carry electric charge.
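The winding number of Eq.~\eqref{eq:WindingNumber} is easy to evaluate numerically on a finite grid. The sketch below uses an assumed Belavin--Polyakov profile (a standard single-skyrmion ansatz, not one of the textures computed in this work) and recovers $|W|=1$:

```python
import numpy as np

# Grid covering the plane; the winding density decays like r^{-4},
# so a finite box approximates the integral over R^2 well.
L, N = 20.0, 400
x = np.linspace(-L, L, N)
X, Y = np.meshgrid(x, x, indexing="ij")

# Belavin-Polyakov ansatz via stereographic projection, w = (x + iy)/lam.
lam = 2.0
w = (X + 1j * Y) / lam
d = 1.0 + np.abs(w) ** 2
m = np.stack([2 * w.real / d, 2 * w.imag / d, (np.abs(w) ** 2 - 1) / d])

# Winding density m . (d_x m x d_y m), integrated on the grid.
dx = x[1] - x[0]
dmx = np.gradient(m, dx, axis=1)
dmy = np.gradient(m, dx, axis=2)
rho = np.einsum("iab,iab->ab", m, np.cross(dmx, dmy, axis=0))
W = rho.sum() * dx * dx / (4 * np.pi)
print(f"W = {W:.3f}")  # |W| close to 1 (sign set by orientation conventions)
```

The finite box and grid spacing introduce errors at the percent level; the sign of $W$ depends on the orientation conventions chosen for $\hat m$ and the plane.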
The same is true for flat bands with a finite Chern number. The electric charge density $\rho_{\mathrm{el}}$ in this case is proportional to the topological winding density $\rho$: \begin{equation} \rho_{\mathrm{el}} = C e \rho \quad \text{with} \quad \rho = \dfrac{1}{4\pi} \ \hat{m} \cdot \left( \dfrac{\partial \hat{m}}{\partial x} \times \dfrac{\partial \hat{m}}{\partial y}\right) \label{eq:ChargeDensity} \end{equation} In the following we will numerically investigate topological textures induced by gating. Besides the expected skyrmion lattices we also find novel textures, which we dub double-tetarton lattices. We study the phase diagram in a magnetic field and argue that a rapid change of the magnetization as a function of doping is a smoking-gun signature of the double-tetarton phase. \section{The model} The free energy of the magnetic sector can be described by a non-linear sigma model~\cite{Sondhi1993}: \begin{align} F[\hat{m}] =& \dfrac{J}{2}\int \limits_{\mathbb{R}^2} (\nabla \hat{m})^2 \ \mathrm{d}^2 \vec r -\int \limits_{\mathbb{R}^2} \vec{B} \cdot \hat{m}\ \mathrm{d}^2 \vec r \label{eq:EnergyFunctional} \\ & + \dfrac{U_c}{2} \int \limits_{\mathbb{R}^2} \int \limits_{\mathbb{R}^2}\dfrac{(\rho_{\mathrm{el}}(\vec{r})-\Delta \nu)(\rho_{\mathrm{el}}(\pvec{r}')-\Delta \nu)}{|\vec{r}-\pvec{r}'|}\ \mathrm{d}^2 \vec r \ \mathrm{d}^2 \vec r' \nonumber \end{align} Here all lengths are measured in units of $L_M=\sqrt{A_M} \approx \SI{12}{nm}$, where $A_M$ is the area of the moir\'e unit cell \cite{Cao2018}. The first two terms describe the spin stiffness $J$ of the ferromagnetic state and a Zeeman coupling to an external field $B$. The third term is the long-ranged Coulomb interaction, $U_c=\frac{e^2}{4 \pi \epsilon_0 \epsilon L_M}$, between (topological) charges, where $\rho$ is the topological charge density defined in Eq.~\eqref{eq:ChargeDensity}. $J$, $B$, and $U_c$ have units of energy.
$\Delta \nu$ is a background charge measured from filling $\nu=3$, which can be controlled by an external gate. We assume that the distance to the gate is much larger than the average distance of charges. In this limit, the average charge density is fixed by $\Delta \nu$: \begin{equation} \int \limits_{\mathbb{R}^2} \left( \rho(r)-\Delta \nu \right) \mathrm{d}^2r=0 \end{equation} A scaling analysis, where all lengths are rescaled by the factor $\lambda$, reveals that the Coulomb energy and Zeeman energy scale with $\lambda^{-1}$ and $\lambda^2$, respectively, while the exchange term remains invariant. Coulomb repulsion (Zeeman energy) favors large (small) skyrmions. By minimizing the energy with respect to $\lambda$, one obtains an estimate for the radius of a single skyrmion in a magnetic field \begin{equation} R \sim L_M \left( \dfrac{U_c}{B} \right)^{1/3}=R^* \label{eq:Radius} \end{equation} \section{Groundstate at $B=0$} \label{Groundstate for B=0} To determine the groundstate in the absence of a magnetic field at fixed winding number density, we performed numerical simulations for different unit cell geometries, see Appendix~\ref{simulations}. The lowest energy is found for a triangular lattice with a hexagonal unit cell shown in Fig.~\ref{fig:Groundstate_NoB}. We first note that the total winding number within the magnetic unit cell, white hexagon in Fig.~\ref{fig:Groundstate_NoB}, is $-2$, but the resulting spin configuration is {\em not} a lattice of skyrmions. \begin{figure}[t] \centering \includegraphics[width=1.0\linewidth]{fig1.png} \caption{Groundstate spin configuration for $B=0$ and $\Delta \nu \left(\frac{U_c}{J}\right)^2 =0.098$ (grey arrows: magnetization in the x-y-plane, colors: z-component of the spin with blue for up and red for down spins). The total winding number within the magnetic unit cell (white hexagon) is $W=-2$. 
The black hexagons depict the building blocks of the magnetic structure, a `double-tetarton', see text and Fig.~\ref{fig:Tetron_cover}. The figure also shows contour lines of the topological charge density.} \label{fig:Groundstate_NoB} \end{figure} The primary building block is instead the magnetic structure in the central black hexagon of Fig.~\ref{fig:Groundstate_NoB}. Here the spins cover exactly one quarter of the unit sphere {\em twice} (a skyrmion covers the full unit sphere once). When one tracks the direction of spins moving along the edge of the central black hexagon, one obtains a path, shown in Fig.~\ref{fig:Tetron_cover}, which winds twice around the north pole. In analogy to a `meron' (half of a skyrmion), we call this structure a `double-tetarton'. \begin{figure}[t] \centering \includegraphics[width=0.4 \linewidth]{fig2.pdf} \caption{The basic building block of the texture shown in Fig.~\ref{fig:Groundstate_NoB} is a `double-tetarton': the spins cover exactly one quarter of the unit sphere twice. The red line shows a path along the edge of the central black hexagon which winds twice around the colored area. The magnetic texture of the other hexagons in Fig.~\ref{fig:Groundstate_NoB} can be obtained by rotating the spins by $180^\circ$ around one of the blue axes.} \label{fig:Tetron_cover} \end{figure} Moving from one black hexagon to the six next-nearest neighbours, the magnetic structure is rotated by 180$^\circ$ around one of the three axes shown in Fig.~\ref{fig:Tetron_cover}. The group of magnetic symmetry transformations is -- up to the translations -- isomorphic to the octahedral group $O_h$, see Appendix~\ref{Symmetries}. Four double-tetartons thus make up the magnetic unit cell, which therefore has winding number $W=4 \times 2 \times \left(-\frac{1}{4}\right)=-2$. By symmetry, the groundstate has no net magnetization, $ \int m_i(\vec{r}) \mathrm{d}^2r = 0$.
This is an important observation which distinguishes our double-tetarton lattice from skyrmion lattices. \begin{figure*}[t] \begin{center} \includegraphics[width=1.0\linewidth]{fig3.pdf} \end{center} \caption{Phase diagram for magnetic textures with Coulomb interactions. One representative spin configuration (color scale as in Fig.~\ref{fig:Groundstate_NoB}, arrows indicate the helicity) and the corresponding charge density are shown for each phase. In the case of $B=0$ the double-tetarton lattice (lower left corner) is the groundstate. For small magnetic fields a hexagonal lattice has the lowest energy (lower middle picture), while at low density and large magnetic field the groundstate is a triangular lattice of skyrmions with $120^\circ$ helicity order (upper left corner). At intermediate fields we obtain a triangular lattice with striped helicity order as well as a square lattice with an `antiferromagnetic' helicity order.} \label{fig:phaseDiagram} \end{figure*} Furthermore, we can look at the contours of the electric charge density depicted in Fig.~\ref{fig:Groundstate_NoB}. The black hexagons define the unit cell of the charge density, which has minima at their centers. The spin configuration spontaneously breaks global spin-rotation invariance, and one can thus obtain other configurations just by rotating all spins (see Appendix~\ref{Symmetries}); the charge density remains invariant under such rotations. In the limit $U_c \rightarrow 0$, when the energy is only determined by the exchange interaction, the energy $E_{UC}$ per magnetic unit cell of the double-tetarton lattice is in the continuum limit exactly given by $E_{UC}=8 \pi J$, twice the energy of the Polyakov skyrmion \cite{Polyakov}. This follows from the fact that the Polyakov skyrmion provides a lower bound for the energy per winding number of topological textures in the presence of exchange interactions, and that lattices of Polyakov skyrmions provide a matching upper bound.
This is also consistent with our numerical results, where for small $U_c$ we obtain $E_{UC} \approx 8 \pi J+ 0.04 \frac{U_c}{\sqrt{\Delta \nu}}$. If this energy is smaller than twice the Mott gap (the energy required to add two electrons into higher bands), then a topological magnetic texture will form whenever the system is doped slightly. \section{Phase diagram} In Fig.~\ref{fig:phaseDiagram} the phase diagram as a function of doping and magnetic field is shown. A small magnetic field in the z-direction breaks the $O(3)$ spin-rotation invariance. Numerically we find (see Appendix~\ref{simulations}) that for a small magnetic field in the z-direction, the ground state smoothly evolves from the double-tetarton configuration shown in Fig.~\ref{fig:Groundstate_NoB}. Due to the lowered symmetry, the double-tetarton lattice can now be smoothly deformed to a hexagonal lattice of skyrmions located at the six edges of the magnetic unit cell. Each skyrmion has an internal degree of freedom, called `helicity', which can be identified with the in-plane spin direction when moving from the skyrmion center in the $+\hat x$ direction. In the hexagonal small-field phase, the helicity (arrows in Fig.~\ref{fig:phaseDiagram}) shows an antiferromagnetic order. In the opposite limit of large magnetic fields and small densities, the groundstate is given by magnetic skyrmions in a ferromagnetic background. These skyrmions are small and far apart from each other, so we can treat them as point-like particles which interact via Coulomb interactions. To minimize the Coulomb repulsion, they form a triangular lattice. For large skyrmion distance, the helicity forms a $120^\circ$ order (triangular phase A), reminiscent of the magnetic order of antiferromagnetically coupled spins on triangular lattices. Indeed, the helicities of neighbouring skyrmions are weakly (exponentially suppressed in the skyrmion distance) antiferromagnetically coupled via the ferromagnetic exchange interaction of spins.
When the skyrmion radius $R \sim (U_c/B)^{1/3}$ becomes of the same order as the skyrmion distance $\sim 1/\sqrt{\Delta \nu}$, i.e., for $B\sim U_c (\Delta \nu)^{3/2}$, the skyrmions deform and the helicity order changes to a striped state with opposite helicities (triangular phase B). Furthermore, we also obtain a centered square lattice between the hexagonal phase and the triangular skyrmion phases, see Fig.~\ref{fig:phaseDiagram}. In this phase the skyrmions show antiferromagnetic helicity order. For an order-of-magnitude estimate of experimental parameters we assume $J \sim \SI{10}{meV}$ (of the same order of magnitude as the bandwidth) and $U_c=\frac{e^2}{4 \pi \epsilon_0 L_M}\sim \SI{100}{meV}$ (assuming $\epsilon \sim 1$). In our units a magnetic field of one tesla is equivalent to $B = \SI{0.06}{meV}$. The triple point in the phase diagram Fig.~\ref{fig:phaseDiagram}, where the triangular and the square-lattice phases meet, is therefore predicted to occur at a doping of $\Delta \nu \approx 0.066 (J/U_c)^2 \sim 10^{-3}$ and a field of $B \approx 6.3 \frac{J^3}{\mu_B U_c^2} \sim \SI{10}{T}$. For a larger doping of a few percent, we expect that the system remains in the hexagonal phase for all experimentally accessible fields. \section{Magnetization} A central experimental signature \cite{quantumHallFlippedSpin} is the dependence of the magnetization per spin, $m_z$, on the charge or, equivalently, the skyrmion density. \begin{figure}[t] \centering \includegraphics[width=1.0\linewidth]{fig4.pdf} \caption{Magnetization (in units of $\mu_B$ per TBG moir\'e unit cell) as a function of the (rescaled) charge density. Inset: For $B=0$ the magnetization jumps to zero for infinitesimal doping. The jump is broadened at finite $B$. The shapes of the markers (triangular, square, or hexagonal) indicate the corresponding magnetic phases. Note that there are tiny jumps at the first-order transitions between two phases.
Curves are taken for $U_c = 5J$.} \label{fig:magnetizationFitted} \end{figure} For $B=0$ the ground state with finite winding number has zero net magnetization, as discussed above. This implies that at $T=0$ and for $B \to 0$, the magnetization jumps from a fully polarized state, $m_z=1$, to a state with zero magnetization for an arbitrarily small doping, $|\Delta \nu|>0$! The inset of Fig.~\ref{fig:magnetizationFitted} shows that at a finite $B$ field this jump is broadened to a crossover. For small $\Delta \nu$ the magnetization per skyrmion is given by \begin{align} M_{\rm sky} \approx 28 \left(\frac{U_c}{B}\right)^{2/3} \sim {R^*}^2 \quad \text{for } \Delta \nu\to 0, \label{eq:deltaM} \end{align} which diverges for $B\to 0$, consistent with Eq.~\ref{eq:Radius}. This result suggests that the magnetization $m_z$ is a function of $\Delta \nu \left(\frac{U_c}{B}\right)^{2/3}$, \begin{equation} m_z \approx f\left(\Delta \nu\left( \dfrac{U_c}{B}\right)^{2/3}\right)\label{scaling} \end{equation} which is confirmed by the scaling plot of Fig.~\ref{fig:magnetizationFitted}. Note that we obtain only tiny jumps in the magnetization when one crosses one of the first-order transitions of Fig.~\ref{fig:phaseDiagram}, and the magnetization of all phases is approximately described by the same scaling curve. For $B \to 0$, $m_z$ is linear in $B$, and therefore the scaling ansatz \eqref{scaling} predicts $f(x\to \infty) \sim x^{-3/2}$ or $m_z \sim (\Delta \nu)^{-3/2} B/U_c$ in this limit. Our analysis has ignored the effects of dipolar interactions. Remarkably, simple power-counting arguments show that dipolar interactions should become important in the limit of infinitesimal doping $\Delta \nu \to 0$. However, an analysis of the relevant prefactors shows that for realistic parameters the effects of dipolar interactions are negligible; see Appendix~\ref{Dipolar}. \section{Discussion} Twisted bilayer graphene provides a unique opportunity to discover new topological states of matter.
Importantly, the anomalous quantum Hall effect in this system observed for $\nu=3$ is not induced by spin-orbit coupling but arises from the ordering of the valley degree of freedom. Thus the spin degree of freedom can rotate without closing the gap. We have argued that for small doping away from $\nu=3$, one therefore naturally realizes a topological magnetic texture with finite winding number and zero net magnetization best described as a lattice of `double tetartons', i.e., textures which cover $1/4$ of the unit sphere two times and which are connected to neighbouring tetartons by the three two-fold rotation axes of a tetrahedron. Experimentally, the most direct way to measure topological textures in twisted bilayer graphene is to use spin-polarized scanning tunneling microscopy. Also measurements of the magnetization as a function of the gate voltage can be used: whenever the number of flipped spins per added charge is large (according to Eq.~\ref{eq:deltaM} about 300 spins flip in a field of $\SI{10}{T}$), this clearly indicates the presence of skyrmionic excitations. As the double-tetarton lattice carries zero magnetization, we predict that this number diverges in the low-temperature, low-field limit. An interesting question is whether tetartons can exist as single particles. Here it is useful to consider the analogy with merons, half-skyrmions which cover $1/2$ of the unit sphere. They are realized in two-dimensional ferromagnets with an easy-plane anisotropy as vortex states \cite{MeronsAnisotropy,MeronsEasyPlane}. Similarly, we have checked that tetartons covering exactly $1/4$ of the unit sphere naturally arise in two-dimensional ferromagnets with certain cubic anisotropies when, for example, three domains with orientation $(1,1,1)$, $(1,-1,-1)$ and $(-1,1,-1)$ meet. In (anomalous) quantum Hall systems with Chern number $1$ such textures naturally carry the charge $1/4$. 
In the future it will be interesting to investigate how such topological textures can be controlled by currents and fields and to explore possible classical and quantum liquids generated from such states. \acknowledgements The numerical simulations have been performed with the open-source micromagnetic simulation program MuMax3 \cite{MuMax3,MinimizeMuMax} with custom additions (see Appendix~\ref{simulations}) on the CHEOPS cluster at the RRZK Cologne. We thank S. Ilani, A. Vishwanath, and M. Zirnbauer for useful discussions and the DFG for financial support (CRC1238, project number 277146847, subproject C02). We thank M. Antonoyiannakis and A. Melikyan for suggesting the name tetarton.
\section{Introduction} \label{sec1} This survey deals with an area in which measure theory is intertwined with combinatorics and asymptotic analysis; its applications to dynamics, algebra, and other problems will be partly touched upon in this paper, as well as in subsequent publications. The survey contains many new problems from the dynamic theory of graphs and representation theory of groups and algebras. \subsection{A~simple example and a~difficult question} \label{ssec1.1} We begin with an elementary example that illustrates the notion of filtration and problems of combinatorial measure theory. Consider the space of all one-sided sequences of zeros and ones: $$ X=\bigl\{\{x_n\}_{n=1}^{\infty}\bigr\},\qquad x_n=0 \vee 1,\quad n=1,2,\dots, $$ i.\,e., the infinite product of the two-point space. We will regard~$X$ as a~dyadic (Cantor-like) compactum in the weak topology, and also as a~Borel space. In~$X$ we introduce the ``tail'' equivalence relation and the tail filtration of $\sigma$-subalgebras of sets. Namely, two sequences~$\{x_k\}$,~$\{x'_k\}$ are -- $n$-equivalent if $x_{k+n}=x'_{k+n}$ for all $k\geqslant 0$; and -- equivalent with respect to the tail equivalence relation if they are $n$-equivalent for some~$n$. Denote by~${\mathfrak A}_n$, $n=0,1,2,\dots$, the $\sigma$-algebra of Borel subsets in~$X$ that along with every point~$x$ contain all points $n$-equivalent to~$x$. In other words, ${\mathfrak A}_n$, $n=0,1,2,\dots$, is the $\sigma$-subalgebra of Borel sets determined by conditions on the coordinates with indices~$\ge n$. The decreasing sequence $$ {\mathfrak A}_0 \supset {\mathfrak A}_1 \supset {\mathfrak A}_2\supset \cdots $$ is called the \textit{tail Borel filtration of the space~$X$ regarded as an infinite direct product}, $$ X=\prod_{n=1}^{\infty}(0;1).
$$ If we endow the space~$X$ with an arbitrary Borel measure~$\mu$, then the same sequence of $\sigma$-subalgebras, which are now understood as $\sigma$-subalgebras of classes of sets coinciding $\operatorname{mod} 0$ (with respect to the given measure~$\mu$), provides an example of a~filtration in a~standard measure space. For instance, take~$\mu$ to be the Bernoulli measure equal to the infinite product of the measures $\theta=(1/2,1/2)$; the resulting filtration is called the \textit{dyadic Bernoulli filtration}. By Kolmogorov's famous zero--one (``all-or-none'') law, the intersection $\bigcap\limits_n{\mathfrak A}_n$ is the trivial $\sigma$-algebra~$\mathfrak N$, which contains only two classes of sets, of measure either zero or one. For every positive integer~$n$, the space $(X,\mu)$ can be represented as a~direct product of measure spaces: $$ (X,\mu)=\biggl(\,\prod_1^n\{(0;1),\theta\}\biggr) \times (X_n,\mu_n), $$ where $(X_n,\mu_n)=\displaystyle\prod_{n+1}^{\infty}\{(0;1),\theta\}$. Now we are going to state the main problem. Assume that we have a~dyadic compactum~$X'$ with a~Borel probability measure~$\mu'$ satisfying Kolmogorov's zero--one law, and for every positive integer~$n$ there is an isomorphism of measure spaces $$ (X',\mu') \simeq \biggl(\,\prod_1^n\{(0;1),\theta\}\biggr) \times (X'_n,\mu'_n), $$ where $(X'_n,\mu'_n)$ are some measure spaces. \textit{Can we claim that there exists an isomorphism~$T$ of measure spaces, $T(X',\mu')=(X,\mu)$, for which $T(X'_n,\mu')=(X_n,\mu)$ for all~$n$?} In the language to be described below, this is the question of whether a~finitely Bernoulli dyadic ergodic filtration is unique up to isomorphism. It is similar to the following question from the theory of infinite tensor products of $C^*$-algebras and perhaps looks even more natural.
Consider a~$C^*$-algebra~$\mathscr A$ and assume that there is a~decreasing sequence ${\mathscr A}_n \subset \mathscr A$ of $C^*$-subalgebras of~$\mathscr A$ whose intersection is the space of constants, $\bigcap\limits_n {\mathscr A}_n=\{\operatorname{Const}\}$, such that for all~$n$ $$ {\mathscr A}\simeq M_2(\mathbb C)^{{\otimes}n}\otimes{\mathscr A}_n $$ (here $\simeq$ means an isomorphism of $C^*$-algebras). \textit{Is it true that there exists an isomorphism} $$ {\mathscr A}\simeq\prod_{1}^{\infty} {\vphantom{\prod}}^{\!\otimes} M_2(\mathbb C) $$ \textit{that sends~${\mathscr A}_n$ to the subalgebra $\displaystyle\prod_{n+1}^{\infty}{\vphantom{\prod}}^{\!\otimes} M_2(\mathbb C)\subset {\mathscr A}$ for all~$n$?} These problems were raised by the author about fifty years ago. In the original terms, the first of them looked as follows: is it true that every filtration whose finite segments are isomorphic to finite segments of a~Bernoulli filtration, and whose $\sigma$-algebras have trivial intersection, is itself isomorphic to a~Bernoulli filtration? The problem originated from ergodic theory, as we will discuss below. In~\cite{59}, \textit{both questions were answered in the negative}; moreover, it turned out that there is a~continuum of measure spaces (respectively, $C^*$-algebras) with dyadic structures that are finitely isomorphic but pairwise nonisomorphic globally. This discovery launched an important field, which, however, still has not gained sufficient attention: the asymptotic theory of filtrations. For this survey, we have selected the most important facts, both known for a~long time and new, and, most importantly, new problems in this area. The theory of filtrations has applications to measure theory, random processes, and dynamical systems, as well as to representation theory of groups and algebras.
The main problem concerns the complexity of the asymptotic behavior at infinity of monotone sequences of algebras and a~possible classification of such sequences. The second question, about tensor products of $C^*$-algebras, will be considered in another paper; here we have mentioned it to emphasize the parallelism between problems from very different areas of mathematics. Both these problems belong to asymptotic algebraic analysis. Conceptually new ideas, as compared with the previous research on filtrations, appeared in connection with the theory of graded graphs (Bratteli diagrams); this is one of the central topics of this survey. Apparently, this connection has not been noticed earlier, though the tail filtration is a~most important object in the theory of $\operatorname{AF}$-algebras. From the standpoint of the theory of graded graphs, filtrations were considered in the author's recent paper~\cite{82}; here, on the contrary, we focus on the measure-theoretic aspect of the problem and use notions and techniques related to graphs. \subsection{Three languages of measure theory} \label{ssec1.2} Let us return to the measure-the\-o\-re\-tic statement of the first problem. If we consider the algebra~$L^{\infty}(X,\mu)$ of all classes of measurable bounded functions on the space $(X,\mu)$ coinciding $\operatorname{mod} 0$, we obtain a~filtration of subalgebras: $$ L^{\infty}(X,\mu) \equiv{\mathscr A}_0\supset {\mathscr A}_1 \supset {\mathscr A}_2 \supset \cdots, $$ where~${\mathscr A}_n$ is the subalgebra in~$L^{\infty}(X,\mu)$ consisting of all functions that depend on coordinates with indices~$\ge n$ ($n=0,1,2,\dots$). Now let us describe this example in the language of partitions. 
Let~$\xi_n$ be the partition of $(X,\mu)$ into the classes of sequences in which the coordinates with indices greater than~$n$ coincide; then ${\mathfrak A}_n$ is the $\sigma$-algebra of sets measurable with respect to the partition~$\xi_n$ (i.\,e., sets composed of elements of this partition). And ${\mathscr A}_n$ is the space of functions measurable with respect to~$\xi_n$. The fact that the sequence of partitions is decreasing means that almost every element of~$\xi_n$ is a~union of some elements of~$\xi_{n-1}$, $n=1,2,\dots$\,. (The partial order in the space of measurable partitions is discussed in Sec.~\ref{ssec2.2}.) In these terms, ${\mathscr A}_n$ is the algebra~$L^{\infty}(X/{\xi_n},\mu_{\xi_n})$, where $X/{\xi_n}$ is the quotient of~$(X,\mu)$ by~$\xi_n$ and $\mu_{\xi_n}$ is the quotient measure on~$X/{\xi_n}$. Thus, there is a~functorial equivalence of the three languages of the theory of filtrations described above: the language of filtrations of $\sigma$-subalgebras of a~measure space, the language of filtrations of subspaces of measurable functions, and the language of filtrations of measurable partitions (precise definitions will be given below). By filtrations we will mean either infinite decreasing sequences of subalgebras of a~commutative algebra with involution (for instance, $L^{\infty}(X,\mu)$), or (equivalently) infinite decreasing sequences of $\sigma$-subalgebras of sets in a~standard measure space~$(X,\mu)$, or, finally (in yet another equivalent geometric language which we will use most frequently), infinite decreasing sequences of measurable partitions. All three languages are equivalent; the difference between them is terminological.
The first context (filtrations of subalgebras of an algebra with involution) is the most general one; it admits an important generalization achieved by abandoning the commutativity of the ambient algebra and passing to $\operatorname{AF}$-algebras; as mentioned above, this generalization is not touched upon here, though it is closely related to the content of the paper. We will also consider decreasing sequences of $\sigma$-subalgebras of sets in a~standard Borel space, i.\,e., Borel filtrations. In this case, no measure on the space is given, and the problem arises of describing all measures that agree with the given filtration. This statement covers almost all problems related to invariant measures in different contexts. \subsection{Where do filtrations appear?} \label{ssec1.3} Filtrations appear in the theory of random processes, potential theory and the theory of Markov processes, ergodic theory, topological and metric dynamics.
The most interesting applications belong to the theory of graded graphs (Bratteli diagrams), the theory of $\operatorname{AF}$-algebras, and asymptotic combinatorics. Here are several general examples. \smallskip \textbf{A.~The ``past'' of random processes.} Consider an arbitrary random process, for instance, a~process~$\{x_m\}_m$ with real values and discrete time, $m \leqslant 0$ (it is convenient to index the random variables by the negative integers). The $\sigma$-algebra~${\mathfrak A}_n$ consists of all measurable sets that can be described by the ``past'' of the process, i.\,e., by the random variables with indices $m <-n$. In this context, $\{{\mathfrak A}_n\}_n$ is the \textit{past filtration of the random process}. In a~similar way, we can define the ``future'' filtration of a~random process. A~case of importance in ergodic theory is that of a~stationary filtration: $$ \{\xi_n\}_{n=0}^{\infty}\simeq \{\xi_n/{\xi_1}\}_{n=1}^{\infty}, $$ where $\simeq$ stands for metric isomorphism, i.\,e., means the existence of a~measure-pre\-serv\-ing automorphism that sends the first filtration to the second one. In this case, the process~$\{x_m\}_m$ described above is stationary, i.\,e., the corresponding probability measure is invariant under the left shift ${T\{x_m\}}_m=\{x_{m-1}\}_m$, $m<0$. In other words, the shift executes the passage to the quotient by the first partition, and the quotient filtration is isomorphic to the original one. There are several papers on the theory of stationary filtrations; for applications of the standardness criterion, see Sec.~\ref{sec5}. However, one should keep in mind that by no means all invariants of a~stationary filtration are endomorphism invariants. A~remarkable example of such an invariant is Ornstein's ``very weak Bernoulli'' (VWB) condition. \smallskip \textbf{B.~Filtrations in approximation theory.} Consider an ergodic automorphism of a~measure space~$(X,\mu)$: $T\colon X \rightarrow X$, $T\mu=\mu$.
By Rokhlin's classical theorem, it can be approximated in the uniform topology by periodic automorphisms. Periodic approximations can be amended in such a~way that the partitions into their trajectories decrease, i.\,e., form a~\textit{filtration of partitions into the trajectories of periodic approximations}. It is this construction that leads to the so-called adic realization of an automorphism~\cite{67},~\cite{68}. Many important metric invariants of the automorphism can be extracted from the properties of this filtration; this will also be discussed below. This example can be extended to amenable groups and amenable actions of arbitrary groups (see~\cite{37}). Consider a~dynamical system in a~space~$X$ with an invariant measure~$\mu$ in which ``time'' is a~countable amenable group~$G$. In other words, the group~$G$ is represented by automorphisms of the space $(X,\mu)$, i.\,e., measurable measure-preserving transformations: \begin{align*} T\colon G &\to \operatorname{Aut}(X,\mu), \\ g &\mapsto T_g\colon X\to X, \\ &\qquad\quad\;\;\mu\, \mapsto\, \mu. \end{align*} A~fundamental theorem of ergodic theory says that if the group~$G$ is amenable, i.\,e., has an invariant mean, then the orbit partition of~$G$ is tame (hyperfinite), i.\,e., there exists a~(non-unique) filtration that determines a~decreasing sequence of partitions which tends to the partition into the trajectories of the group action. There is a~particularly natural link between the theory of filtrations and the theory of actions of locally finite groups: in this case, $G=\bigcup\limits_n G_n$ is an inductive limit of finite groups, and for every action of~$G$ there is a~canonically defined sequence of partitions into the trajectories of the finite subgroups~$G_n$. Some invariants of filtrations constructed in this way, e.\,g., entropy (see below), are also invariants of the action.
The adic realization of a~group action is a~kind of alternative to the generally accepted symbolic realization of this action by a~group of shifts in a~space of functions on the group. It can be regarded as a~generalization of the odometer to an arbitrary graded graph. This realization determines an approximation of the group action by finite groups, and thus it is a~far-reaching generalization of Rokhlin's lemma on periodic approximation. \smallskip \textbf{C.~Invariant and central measures.} \label{ssecC} Assume that we have an $\mathbb N$-graded locally finite graph $\Gamma=\displaystyle\coprod_n \Gamma_n$, $\Gamma_0=\{\varnothing\}$, and let~$T(\Gamma)$ be the space of infinite paths in~$\Gamma$ starting from the vertex~$\varnothing$. The space~$T(\Gamma)$, being an inverse spectrum of finite spaces (of finite paths), is a~Cantor-like topological space, and hence is endowed with a~Borel structure. In~$T(\Gamma)$, the \textit{tail filtration} is defined: the $n$th $\sigma$-algebra consists of all sets that do not change if we take an arbitrary path from such a~set and modify its initial segment up to level~$n$. If one endows the space of paths with a~Borel measure, then the tail filtration turns into the filtration from Example~A above. But here a~new problem arises, related to \textit{measures with given cotransition probabilities}; namely, the problem of describing the central measures (see Sec.~\ref{sec7}). This problem appeared in the theory of $C^*$-algebras ($\operatorname{AF}$-algebras) and locally finite groups (the list of traces and characters), in the theory of Markov processes and invariant Markov measures (``entrance and exit boundaries'' in the sense of Dynkin), in the theory of invariant measures for dynamical systems. This circle of problems has equivalent combinatorial and geometric statements interesting in their own right. 
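For Example C, the simplest graded graph already shows how rich the set of central measures can be. The following sketch uses the Pascal graph (our choice of illustration, not discussed at this point of the text), for which the description of the central measures reduces to de Finetti's theorem.

```latex
% Illustration (our choice): the Pascal graph. Vertices of level n are the
% pairs (n,k), 0 <= k <= n; the vertex (n,k) is joined to (n+1,k) and
% (n+1,k+1). An infinite path is encoded by a 0-1 sequence (the sequence of
% "right" steps), so T(\Gamma) = \{0,1\}^{\mathbb N}. A measure is central iff
% all initial path segments ending at the same vertex (n,k) are equally
% likely, i.e., iff the coordinate sequence is exchangeable. By de Finetti's
% theorem, the ergodic central measures are exactly the Bernoulli measures:
\[
\mu_p\bigl(x_1=\varepsilon_1,\dots,x_n=\varepsilon_n\bigr)
 \;=\; p^{\,k}(1-p)^{\,n-k},
 \qquad k=\sum_{i=1}^{n}\varepsilon_i,\quad p\in[0,1].
\]
```

Thus, already for this graph, the simplex of central measures is infinite-dimensional (a copy of the measures on $[0,1]$), with the Bernoulli measures as its extreme points.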
Note that this class of examples of filtrations has important additional specific features, which we will use in what follows. One of the results of this paper is as follows (Sec.~\ref{ssec4.1}). \begin{theorem}\label{th1} Every ergodic locally finite filtration is isomorphic to a~Markov filtration. \end{theorem} This means that one can study the most interesting class of filtrations by considering only one-sided Markov chains (in general, inhomogeneous in time and with an arbitrary state space). In particular, this provides a~fruitful reduction of the theory of $\operatorname{AF}$-algebras to the probabilistic theory of Markov chains, as well as the reverse transition from the theory of Markov chains to the theory of Bratteli diagrams with their algebraic underpinning. Actually, this theorem is a~refinement of the theorem on the existence of an adic realization for actions of amenable groups. It is also appropriate to mention that theorems on tail filtrations of path spaces of graded graphs are closely related to filtrations of commutants of finite-dimensional subalgebras of $\operatorname{AF}$-algebras. \smallskip \textbf{D.~Statistical physics and inverse limits of simplices.} \label{ssecD} Finally, the most popular reason for considering filtrations is provided by the modern formalism of statistical physics, both classical and quantum. Consider a~lattice, or even an arbitrary countable graph~$\Gamma$, and represent it as a~union of finite subsets (volumes): $$ \Gamma=\bigcup_n \Gamma_n. $$ Consider the space of configurations (e.\,g., subsets) on~$\Gamma$, i.\,e., the space of functions taking values in some alphabet (e.\,g., the alphabet $\{0,1\}$). Endow it with a~structure of a~Borel space by taking a~Borel base consisting of the cylinders, i.\,e., the sets of configurations determined by conditions on a~finite part of a~configuration (its restriction to some volume~$\Gamma_n$), with arbitrary behavior outside this volume.
Then we have a~filtration in this Borel space: its $n$th $\sigma$-subalgebra consists of the cylinder sets determined by conditions on the restriction of a~configuration to the complement of the volume~$\Gamma_n$. In other words, two configurations lie in the same element of the $n$th partition if they coincide outside~$\Gamma_n$. If we have a~probability measure (statistic) on the space of configurations, then we obtain a~filtration of this measure space. A~popular scheme for studying phase transitions and other properties of statistics, adopted in recent years after the work by R.~L.~Dobrushin and others~\cite{5},~\cite{36}, is as follows: one defines not a~measure, but only a~system of conditional probabilities of the restriction of a~configuration to an arbitrary volume, under the condition that its restriction outside this volume is fixed, and studies the measures that have these conditional probabilities. This is a~method of defining measures in infinite-dimensional spaces alternative to Kolmogorov's widely accepted method via joint distributions. This scheme for studying filtrations is briefly described below (see Sec.~\ref{sec3}, especially Subsections~\ref{ssec3.5} and \ref{ssec3.6}). We emphasize that the above list of areas related to filtrations does not pretend to be exhaustive, and this survey, as well as the list of references, is far from being complete. Especially interesting and little studied links are those with combinatorics; possible material on this topic can be drawn from~\cite{54},~\cite{55}. We mean that an asymptotic combinatorial problem always involves an inductive family of problems with a~parameter (the number of objects, the collection of dimensions, etc.), but often one can also define a~filtration, whose properties are hidden quite deep, and without an analysis of these properties one cannot fully reveal the true nature of the problem. An example of such a~property is standardness, to be discussed below.
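Returning to Example D: the system of conditional probabilities just mentioned is usually written as a Gibbs specification. The following display is a standard instance of the Dobrushin scheme, with the nearest-neighbour Ising energy as our illustrative choice (it is not spelled out in the text above).

```latex
% Conditional probabilities of a configuration \tau inside a finite volume
% \Lambda, given the configuration \omega outside it (Ising energy, inverse
% temperature \beta, spins \pm 1 -- all illustrative choices):
\[
q_{\Lambda}(\tau \mid \omega)
 \;=\; \frac{\exp\bigl(-\beta H_{\Lambda}(\tau\mid\omega)\bigr)}
            {\sum_{\tau'} \exp\bigl(-\beta H_{\Lambda}(\tau'\mid\omega)\bigr)},
\qquad
H_{\Lambda}(\tau\mid\omega)
 \;=\; -\sum_{\substack{u\sim v\\ u,v\in\Lambda}} \tau_u\tau_v
   \;-\; \sum_{\substack{u\sim v\\ u\in\Lambda,\, v\notin\Lambda}} \tau_u\omega_v .
\]
% A measure \mu "has these conditional probabilities" if its conditional
% distributions on every volume coincide with q_\Lambda; the set of such
% measures may be a single point or a nontrivial simplex (phase transition).
```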
\subsection{Finite and infinite in classification problems; standardness as a~generalization of independence} \label{ssec1.4} In mathematics and physics there are many classification problems involving ``finite'' and ``infinite'' invariants, and usually the core of the classification problem is to find the latter ones. The problem under study, that of metric classification of filtrations, seems to be typical for the class of problems involving analytic, algebraic, and combinatorial components and, moreover, a~nontrivial relation between finite and infinite invariants. Perhaps the ideas described here will be useful in other problems. In what follows, classification and types of filtrations are understood in the sense of measure-preserving transformations, and equivalence is meant with respect to the group of such transformations. Instead of ``equivalent objects (filtrations),'' we will more often say ``isomorphic objects.'' Every infinite filtration of algebras (or similar objects) $$ \{{\mathfrak A}_n\}_{n=0}^{\infty}={\mathfrak A}_0 \supset {\mathfrak A}_1\supset \cdots $$ can be regarded as an infinite collection of its finite fragments, i.\,e., finite filtrations: $$ \{{\mathfrak A}_k\}_{k=0}^{n},\qquad n=0,1,2,\dots\,. $$ Assume that we can give a~classification, with respect to some equivalence relation, of finite filtrations of given length~$n$ for all finite~$n$. More exactly, assume that we can construct a~topological or Borel \textit{module space~${\mathscr M}_n$}, $n=0,1,2,\dots$, i.\,e., the space of complete invariants of finite filtrations of a~given length. We will say that two infinite filtrations are \textit{finitely isomorphic if for every~$N$ their fragments of length~$N$ are equivalent}, i.\,e., determine the same point in the space~${\mathscr M}_N$, $N=0,1,2,\dots$\;. Of course, the fact that infinite filtrations are finitely isomorphic does not in general imply that they are isomorphic.
In other words, even if the module space of infinite filtrations can be well defined, it is not in general the union of the module spaces of finite fragments of filtrations. We will consider this phenomenon in more detail below. The main subject of this paper and the new results presented here are in one way or another related to the problem of classification of infinite filtrations and to selection of classes most important for applications. There exist infinite filtrations whose isomorphism type is uniquely determined by the finite isomorphism type. For example, if the group of symmetries of a~finite fragment of a~filtration is trivial, then the (isomorphism) type of this fragment already determines the type of the whole filtration. Thus a~meaningful theory should deal only with the class of filtrations for which the group of symmetries of every infinite ``tail'' is infinite; partly because of this, we consider only locally finite filtrations (for a~definition, see Sec.~\ref{sec2}): for them, the corresponding groups are infinite, and this feature of these filtrations is already reminiscent of the Markov property. In the class of locally finite filtrations, finite isomorphism does not impose substantial restrictions on the ``infinite properties'' of filtrations; in particular, it does not even determine whether the filtration is ergodic or not. Attempts to give a~classification of infinite sequences should be preceded by building a~classification of finite filtrations (i.\,e., finite decreasing sequences of subalgebras), and in the categories we are interested in, it is either known or can easily be obtained. This classification was initiated by V.\,A.~Rokhlin (in the case of one $\sigma$-algebra or one partition), and one can use this pattern to construct invariants of finite filtrations (see, e.\,g.,~\cite{21}). We retell the classification of arbitrary finite decreasing sequences of partitions in the convenient language of graphs and trees.
In these cases, the invariants, i.\,e., the module spaces, are quite manageable: these are spaces of finite measured trees. This is the content of a~part of Sec.~\ref{sec2}. But what else should we add to finite isomorphism to obtain full isomorphism? In other words, what ``infinite invariants'' are not determined by finite ones? And do we really always need them? Are there classes of filtrations for which finite isomorphism already implies isomorphism? Note that the analysis of finite invariants does not allow one to deduce whether or not they suffice to determine the type of a~filtration. This is one of the main questions, which is rather typical for any discussions about finite and infinite. The answer to the above question is positive: the desired class is the class of so-called \textit{standard filtrations}. In the case of homogeneous (e.\,g., dyadic) filtrations, it consists, as follows immediately from the standardness criterion (see~\cite{59} and Sec.~\ref{sec4} below), of Bernoulli filtrations, i.\,e., past filtrations of arbitrary sequences of independent random variables. For general locally finite filtrations, even the statement of the problem is not so obvious and reads as follows. Let us try to generalize the problem of classification of filtrations, i.\,e., endow filtrations with a~certain structure in such a~way that the behavior of finite invariants of a~filtration in the augmented problem allows one to determine whether the ordinary finite invariants uniquely determine the type of this filtration. Filtrations for which this condition is satisfied will be called \textit{standard}: this class generalizes the notion of independence (\,= Bernoulli property). But what do this generalized problem and these generalized finite invariants look like?
The answer is that one should construct a~classification of finite fragments of filtrations together with some additional data; for instance, these can be fixed measurable functions defined on the base space, but the most appropriate choice is to take a~\textit{metric on the measure space.} Then the measured trees which are invariants of finite fragments of filtrations will be also endowed with a~metric, and these new terms allow one to state the \textit{criterion of asymptotic behavior of invariants}, called the \textit{standardness criterion}; the validity of its condition is exactly equivalent to the fact that the finite invariants completely determine the type of a~filtration. Moreover, the choice of a~specific metric does not affect the validity of this condition; the metric need only be admissible (see \cite{92},~\cite{93}). This is precisely the main idea, which we describe in Sec.~\ref{sec5}. Thus the module space is exactly the space of finite invariants, and the standardness criterion is the condition meaning that the invariants of the conditional filtrations (on the elements of the $n$th partition) converge in measure as $n$ tends to infinity. Invariants of finite filtrations all of whose partitions have finite elements are measures on spaces of trees endowed with a~measure and a~metric. Below we also state this criterion in terms similar to those of martingale theorems. At first sight, it seems that one can continue this procedure of selecting good classes, extending the problem of classification of filtrations and obtaining further classes subject to classification.
But this is not the case: outside the standardness class there is essentially one class, indivisible and very interesting, that of \textit{totally nonstandard filtrations}, for which there is still no complete classification (and perhaps it does not exist), but there are nontrivial invariants, both resembling old invariants, such as scale and (secondary) entropy, and, possibly, new ones, such as ``higher zero--one laws,'' which provide far-reaching generalizations of Kolmogorov's laws. Perhaps the terminology may change, because standardness can be understood both in the hard sense (as minimality) and in the mild sense (as proper standardness); the future will show which terminology is convenient. It is interesting to compare this with Ornstein's theory of the very weak Bernoulli property (VWB), $d$- and $f$-metrics, etc.; indeed, the VWB property is similar to standardness, the difference being only in the choice of metrics on the space of conditional measures, as well as in the fact that ergodic theory deals only with standard filtrations. We show that finite classification of filtrations is a~tame problem, i.\,e., the corresponding module space is a~reasonable Borel space. But the general problem of metric classification of arbitrary filtrations is ``wild'' in the accepted sense and seems to be just as difficult as the problem of classification of automorphisms of spaces and similar problems. \subsection{A~summary of the paper} \label{ssec1.5} This is mainly a~survey paper which partially summarizes the research on filtrations from the introduction (in 1969) of the notion of standardness (more exactly, the discovery of examples of non-Bernoulli filtrations finitely isomorphic to Bernoulli ones) and the standardness criterion. We have omitted several important topics directly related to the theory of filtrations, e.\,g., the notion of ``cosy'' filtrations in the sense of Tsirelson, recent papers by S.~Laurent, etc.
Our main purpose is, first, to extend basic results on standardness to inhomogeneous and non finitely Bernoulli filtrations and, second, to combine the theory of filtrations with the theory of graded graphs (Bratteli diagrams), and thus with the theory of $\operatorname{AF}$-algebras, combinatorics of graphs and Markov chains. Hence our main interest is in locally finite and locally representable filtrations, i.\,e., filtrations in which fibers (elements of partitions) are finite for every partition and the number of different types of conditional measures is also finite. A~detailed introduction, including Rokhlin's theory of one partition, is given in Sec.~\ref{sec2}. Links with the theory of graded graphs and Markov chains are described in Secs.~\ref{sec3} and~\ref{sec4}. It should be mentioned that the existence of these links enriches both the theory of filtrations and the theory of graphs, Markov chains, and algebras. Examples that have appeared in recent years brilliantly demonstrate this. There have appeared many interesting and complicated graphs originating from the theory of adic (i.\,e., ``graphic'') approximations of dynamical systems (the graphs of ordered and unordered pairs, graphs of words, etc.) which provide examples of nonstandard filtrations; on the other hand, graph theory is a~source of new problems related to algebras and new realizations of filtrations. The theorem on a~Markov realization of a~locally finite filtration is proved in Sec.~\ref{sec4}. The key notion of the theory is standardness. For homogeneous and finitely Bernoulli ergodic filtrations, it coincides with independence, i.\,e., a~standard filtration is the ``past'' of a~Bernoulli scheme. For general locally finite filtrations, this notion is more sophisticated, and we give its definition and a~criterion for testing it. In particular, we state the standardness criterion in terms of random processes, namely, as a~strengthening of the martingale convergence theorem. 
This is a~theorem on diagonal (in a~certain sense) convergence. There appears a~scale of strengthenings, which we interpret as a~scale of zero--one laws: at one end of this scale, we have Kolmogorov's zero--one law, and at the other end, standardness or (in the homogeneous case) the Bernoulli property. These questions are discussed in Sec.~\ref{sec5}. It should be said at once that in the case of general filtrations one can define different degrees of ``nonstandardness,'' or different degrees of closeness to standard filtrations. Nonstandardness in the homogeneous (e.\,g., dyadic) case is of most interest. In Sec.~\ref{sec6} we analyze several key examples, in particular, a~scheme of random walks in random scenery. But this work is still in progress, so this section is of a~preliminary nature. Finally, Sec.~\ref{sec7} deals with filtrations in Borel spaces and the problem of enumerating all measures with a~given equipment. This subject has wide applications; we describe its links with the theory of random walks on groups and the theory of traces of algebras and characters of groups. I~considered it useful to supplement the survey with several remarks on the interesting history of the theory of filtrations. \section{The definition of filtrations in measure spaces} \label{sec2} \subsection{Measure theory: a~Lebesgue space, basic facts} \label{ssec2.1} We consider a~certain category of measure spaces, that of \textit{Lebesgue spaces in the sense of Rokh\-lin~\cite{45}}, or standard measure spaces. A~Lebesgue space is a~space endowed with a~separable $\sigma$-algebra of sets that is complete with respect to some (and hence every) basis; on this $\sigma$-algebra, a~$\sigma$-additive probability measure is defined.
The separability of a~$\sigma$-algebra means the existence of a~countable basis of measurable sets that separates the points of the space; the definition of a~basis includes the condition that an arbitrary measurable set~$A$ should be contained in some set~$B$ from the Borel hull of the basis with $\mu(B\setminus A)=0$ (the weaker condition $\mu(B\bigtriangleup A)=0$ is not sufficient; we do not dwell on this here, see~\cite{45}). All objects and notions, such as spaces, bases, completeness, sets, functions, partitions, maps, etc., should be understood $\operatorname{mod} 0$, i.\,e., up to modifications on sets of measure zero. A~Lebesgue space is a~class of Lebesgue spaces coinciding $\operatorname{mod} 0$, a~measurable function is a~class of functions coinciding $\operatorname{mod} 0$, etc. Such an agreement makes it necessary to check that all notions, definitions, statements, proofs are well defined $\operatorname{mod} 0$, a~fact that is usually ignored by all authors; however, experienced authors have never got into trouble because of this neglect. Morphisms in the category of Lebesgue spaces are measurable (in a~clear sense) measure-preserving maps, more exactly, classes (in the image and inverse image) of maps coinciding $\operatorname{mod} 0$. In more detail, a~measurable map of a~Lebesgue space into an arbitrary ``measurable'' space, i.\,e., a~space endowed with a~$\sigma$-algebra (for instance, into a~standard Borel space with countable base, the interval $[0,1]$), is a~map such that the inverse image of a~measurable set is measurable. If the image space is also endowed with a~measure and the measure of any measurable set is equal to the measure of its inverse image, then the map is called measure-preserving. If the same is true for the inverse map, then it is called an isomorphism (in the case where the image and the inverse image coincide, an automorphism) of measure spaces.
Note that the image of a~Lebesgue space under a~measurable map is a~Le\-bes\-gue space (cf.\ the corresponding theorem for compact sets). Two objects corresponding to each other under an isomorphism and the inverse isomorphism are called isomorphic (and sometimes equivalent). The following classification theorem due to Rokhlin refined an earlier theorem of von Neumann~\cite{96}, which was of the same type but was based on a~more complicated axiomatics of measure spaces. \textit{There exists a~unique, up to isomorphism, Lebesgue space with continuous (i.\,e., having no atoms) probability measure}. The interval $[0,1]$ endowed with the Lebesgue measure is a~universal example of a~Lebesgue space with continuous measure. Every separable compact space with continuous Borel probability measure is another such example. The most convenient universal example of a~Lebesgue space is the countable product of two-point spaces, i.\,e., the dyadic compactum with an arbitrary continuous Borel measure. Thus, in the category of Lebesgue spaces, the most interesting objects (with continuous measure) are isomorphic. Note that the construction of the theory of Lebesgue spaces in the sense of Rokhlin in~\cite{45} essentially reproduces the theory of metric compacta and, even more specifically, of the dyadic compactum, with necessary modifications taking into account the specifics of measure theory. The passage to the classification of general Lebesgue spaces, i.\,e., those with an arbitrary measure, consists in adding at most countably many atoms of positive measure. Thus a~complete (metric) invariant of a~general Lebesgue space is an ordered sequence of nonnegative numbers $$ m_1\geqslant m_2 \geqslant m_3 \geqslant \dots \geqslant 0,\qquad \sum_i m_i \leqslant 1, $$ namely, the sequence of sizes of all atoms, i.\,e., the points of positive measure.
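As a small worked instance of this invariant (the numbers below are our own illustrative choice):

```latex
% On X = [0,1] take the measure (an illustrative choice)
% \mu = (1/2)\,\mathrm{Leb} + (1/4)\,\delta_a + (1/4)\,\delta_b,  a \ne b.
% The atoms have sizes 1/4 and 1/4, so the complete invariant of (X,\mu) is
\[
(m_1,m_2,m_3,\dots)\;=\;\Bigl(\tfrac14,\tfrac14,0,0,\dots\Bigr),
\qquad \sum_i m_i=\tfrac12\;<\;1,
\]
% the missing mass 1/2 being carried by the continuous part of the measure.
```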
The zero sequence corresponds to a~space with continuous measure, and a~sequence with unit sum, $\displaystyle\sum_i m_i=1$, to a~space with atomic measure. \subsection{Measurable partitions, filtrations} \label{ssec2.2} A~measurable partition~$\xi$ (see Fig.~\ref{fig1}) of a~Lebesgue space $(X,\mu)$ is the partition of~$X$ into the inverse images of points under a~measurable map $f\colon X\to R$, where $R$ is an arbitrary standard Borel space, for instance (without loss of generality), the interval $[0,1]$. The fact that a~set~$C$ is an element (block) of a~partition~$\xi$ is denoted as follows: $C\in\xi$. Partitions are also regarded $\operatorname{mod} 0$. \begin{figure}[h!] \centering \includegraphics{ver01.eps} \caption{A~measurable partition} \label{fig1} \end{figure} A~fundamental fact, which goes back to works on probability theory of the 1930s (J.~Doob and others), but was rigorously proved by V.\,A.~Rokhlin at the initiative of A.\,N.~Kolmogorov (some hints on the necessity of a~rigorous study of conditional measures are contained in his famous book~\cite{33}), is as follows. \begin{theorem}[{\rm(V.\,A.~Rokhlin~\cite{45})}] \label{th2} Every measurable partition~$\xi$ of a~Lebesgue space $(X,\mu)$ can be equipped with a~unique $\operatorname{mod} 0$ canonical system of measures~$\{\mu^C\}$, $C\in \xi$ (in a~more popular language, a~system of conditional measures), on almost all elements of~$\xi$ in such a~way that 1) almost all, with respect to the measure $\mu$, elements $(C,\mu^C)$ endowed with the conditional measures are Lebesgue spaces; the quotient space $X/\xi$ endowed with the quotient measure~$\mu_{\xi}$ is also a~Lebesgue space; 2) for every measurable set $A \subset X$, the function $\mu^C(A\cap C)$ is measurable (as a~function on $(X/\xi,\mu_{\xi})$), and the following (Fubini, repeated integration) formula holds: $$ \mu(A)=\int_{X/{\xi}}\mu^C(A\cap C)\,d\mu_{\xi}(C).
$$ \end{theorem} Recall that a~measurable partition~$\xi$ gives rise to a~unique $\sigma$-algebra~${\mathfrak A}_{\xi}$ of sets measurable with respect to~$\xi$, i.\,e., composed of elements of~$\xi$. In Lebesgue spaces, the converse is also true: every $\sigma$-subalgebra~${\mathfrak A}$ of measurable sets in a~space $(X,\mu)$ gives rise to a~unique ($\operatorname{mod} 0$) measurable partition~$\xi$, and ${\mathfrak A}_{\xi}=\mathfrak A$. In the spaces~$L^1$ and~$L^2$, we have the \textit{operator of conditional expectation} corresponding to a~measurable partition~$\xi$, i.\,e., the projection~$P_{\xi}$ (in~$L^2$, the orthogonal projection) to the subspace of functions constant on elements of~$\xi$; this is the operator of averaging over the elements of~$\xi$ endowed with the conditional measures. From an alternative point of view, the theorem on conditional measures is simply a~theorem on an integral representation of the operator of conditional expectation: $$ (P_{\xi}f)(C)=\int_C f(x)\,d\mu^C(x). $$ The left-hand side can be understood as a~function on the quotient space, but also as a~function on~$X$ belonging to the subalgebra of functions constant on elements of~$\xi$. The projection to this subalgebra is well defined, and the proof of the existence of an integral representation usually proceeds as follows: first one endows the space~$X$ with a~compact topology, or even a~metric, then considers the projection to the space of continuous functions, and, finally, approximates functions in $L^1(X,\mu)$ by continuous functions. Introducing a~partial order on the classes of partitions coinciding $\operatorname{mod} 0$ turns the collection of partitions into a~lattice, where $\xi\succ \xi'$ means that elements of~$\xi$ are finer than elements of~$\xi'$, i.\,e., that an element of~$\xi'$ is composed of elements of~$\xi$; it is this order that agrees with the order of inclusion on the spaces of measurable functions associated with partitions. 
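A worked instance of Theorem 2 and of the projection $P_{\xi}$ is the standard example of the unit square:

```latex
% The standard example: X = [0,1]^2 with Lebesgue measure, \xi the partition
% into the vertical intervals C_x = \{x\}\times[0,1]. The canonical system of
% conditional measures is \mu^{C_x} = Lebesgue measure on C_x, the quotient
% (X/\xi,\mu_\xi) is ([0,1], Leb), and the formula of the theorem becomes the
% classical Fubini theorem:
\[
\mu(A)\;=\;\int_{0}^{1}\mu^{C_x}(A\cap C_x)\,dx
      \;=\;\int_{0}^{1}\!\!\int_{0}^{1}\mathbf 1_{A}(x,y)\,dy\,dx .
\]
% The conditional expectation operator averages along the fibers:
\[
(P_{\xi}f)(x)\;=\;\int_{0}^{1} f(x,y)\,dy .
\]
```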
The greatest partition is the partition into singletons $\operatorname{mod} 0$, and the smallest one is the trivial partition (with only one nonempty component). Note that this order is opposite to that adopted in combinatorics, where the greatest partition is the trivial one and the smallest partition is that into singletons. \subsection{Classification of measurable partitions} \label{ssec2.3} The purpose of the upcoming sections is to describe metric invariants (i.\,e., give a~classification) of partitions, and then of finite filtrations, i.\,e., finite decreasing sequences of measurable partitions. The case of one partition is investigated in the well-known paper~\cite{45}, and we cite the corresponding result (with a~slightly modified statement) only for convenience of further generalizations. Note that this is perhaps the main original result of~\cite{45}. \begin{theorem}[{\rm(V.\,A.~Rokhlin~\cite{45})}] \label{th3} A~metric invariant of a~measurable partition~$\xi$ in a~continuous Lebesgue space $(X,\mu)$ is a~metric invariant of the map $$ V_{\xi}\colon(X/\xi,\mu_{\xi}) \to \Sigma, $$ where $(X/\xi,\mu_{\xi})$ is the quotient space, $\mu_{\xi}$ is the quotient of the measure~$\mu$ under the projection $X \to X/\xi$, and the map~$V_{\xi}$ sends every element $C \in \xi$ to the ordered sequence of measures of the atoms $m_1(C)\geqslant m_2(C)\geqslant\cdots$, regarded as a~point of the simplex~$\Sigma$ of all ordered series of nonnegative numbers with sum at most~$1$. \end{theorem} Thus, two measurable partitions of Lebesgue spaces are isomorphic, i.\,e., correspond to each other under an isomorphism of measure spaces, if and only if the corresponding maps~$V$ are metrically isomorphic. But we can go further and ask about the metric invariants of the map~$V_{\xi}$. 
The answer is as follows: these invariants are the $V_{\xi}$-image of the measure~$\mu_{\xi}$, which is a~Borel measure on the simplex~$\Sigma$, and the so-called multiplicities (we will not need them, so we refer the interested reader to the paper~\cite{70}, where the invariants of measurable maps are described in detail, in generalization of V.\,A.~Rokhlin's paper~\cite{46} on classification of functions). Speaking rather loosely, a~measurable partition is a~random measure space all of whose realizations are regarded as disjoint subsets in the same space. Before turning to the case of several partitions, we describe a~number of classes and examples of partitions. 1. \textit{Partitions that have independent complements}. Assume that almost all elements~$C$ of a~partition~$\xi$ are isomorphic as spaces endowed with the conditional measures~$\mu^C$, i.\,e., the $V_{\xi}$-image of the measure~$\mu_{\xi}$ is the $\delta$-measure at the point of the simplex~$\Sigma$ corresponding to the (common) type of the conditional measures~$\mu^C$. In this, and only this, case the partition~$\xi$ has an \textit{independent complement~$\xi^-$}, and the space~$(X,\mu)$ can be decomposed into a~direct product of measure spaces: $$ (X,\mu)=(X/{\xi}, \mu_{\xi})\times (X/{\xi^-},\mu_{\xi^-}), $$ where the partition~$\xi^-$ is the unique (up to isomorphism, but not geometrically) independent complement to~$\xi$. Almost every element of~$\xi$ has a~nonempty intersection with almost every element of~$\xi^-$, and the independence condition is satisfied: $$ \mu(A\cap B)=\mu_{\xi}(A)\mu_{\xi^-}(B) $$ for any sets~$A$ and~$B$ measurable with respect to~$\xi$ and~$\xi^-$, respectively. \begin{corollary}[{\rm(V.\,A.~Rokhlin~\cite{45})}] \label{cor1} A~Lebesgue space with continuous measure has a~unique, up to isomorphism, partition with continuous conditional measures. \end{corollary} This is the partition of the square $[0,1]^2$ endowed with the Lebesgue measure into vertical intervals.
A~(non-unique) independent complement is the partition into horizontal intervals. 2.\;\textit{Finite partitions}. Partitions with finitely many elements (blocks) are called finite; the complete invariant of finite partitions, in the case where $(X,\mu)$ is a~continuous measure space, is the collection of measures of the elements; in this case, $V_{\xi}\mu_{\xi}=\delta_{0}$, where $0\in\Sigma$ is the zero sequence. 3.\;\textit{Atomic partitions}. Let~$\xi$ be a~partition almost all of whose elements are finite, more exactly, almost all measures~$\mu^C$ are atomic measures with finitely many atoms; such partitions are called discrete (though it would be more appropriate to call them ``cofinite''). A~measurable partition~$\xi$ of a~Lebesgue space $(X,\mu)$ with continuous measure will be called \textit{atomic} if the number of metric types of conditional measures is finite; in other words, if the $V_{\xi}$-image of~$\mu_{\xi}$ is a~measure on~$\Sigma$ with finitely many atoms. 4.\;A~\textit{semihomogeneous partition} is an atomic partition with finite blocks and uniform conditional measures on almost every block. The invariant of a~semihomogeneous partition is the collection of positive integers equal to the numbers of points in the blocks and the measures of the sets of all blocks with a~given number of points. A~\textit{homogeneous} (dyadic, triadic, etc.)\ partition is a~partition with the same number of points in all blocks; in this case, the $V_\xi$-image of $\mu_\xi$ is the $\delta$-measure at the point $$ (1/n,\dots (n) \dots,1/n)\in \Sigma. $$ \subsection{Classification of finite filtrations} \label{ssec2.4} The study and classification of (finite or infinite) collections of $\sigma$-subalgebras, or, which is equivalent, measurable partitions, in Lebesgue spaces with continuous measures is perhaps the most general and difficult geometric problem of measure theory and probability theory.
Already the case of two measurable partitions in general position is of great interest and involves combinatorial and algebraic difficulties; we will consider it elsewhere. In this section, we first give a~classification of finite monotone sequences of measurable partitions, i.\,e., finite filtrations. It is quite simple. However, the main subject of the paper is the analysis of infinite filtrations. The study of finite filtrations in a~space with continuous measure should be preceded by the study of such filtrations in the following finite measure space: $$ (C,m),\qquad C=\{c_1,c_2,\dots,c_k\},\quad m(c_i)=m_i,\quad i=1,\dots,k. $$ A~filtration of length~$n$ in a~finite space is a~\textit{hierarchy}: $\xi_0$ is the partition of~$C$ into singletons; $\xi_1$ is a~partition of~$C$; $\xi_2$ combines some blocks of~$\xi_1$ into larger blocks,~etc.; the last partition~$\xi_{n-1}$ consists of a~single block, which is the whole space~$C$. It is convenient to think of a~hierarchy as a~tree of rank~$n$, i.\,e., with~$n$ levels, endowed with a~measure (see Fig.~\ref{fig2}). \begin{figure}[t!] \centering \includegraphics{ver02.pdf} \caption{A~finite filtration of a~finite measure space} \label{fig2} \end{figure} For $n=2$, the tree has two levels: several vertices of the zero level and a~single vertex of the first level. The vertices of the zero level are linearly ordered according to the measures of the points. In the general case, the vertices of the~$r$th level are the elements of the partition~$\xi_r$, $r=0,1,\dots,n-1$, and an edge joining a~vertex of level $r-1$ with a~vertex of level~$r$ means an inclusion between the corresponding elements of partitions. The vertices of the zero level, i.\,e., the points of the original space, are ``leaves'' of the tree.
They have the measures~$m_i$, $i=1,\dots,k$; let us introduce a~partial order on the leaves, linear on the leaves belonging to the same element of the partition~$\xi_1$, according to their measures (up to an arbitrary order on leaves of equal measure). Given these measures, one can recover, by a~simple summation, the conditional measures on the vertices of the first level, i.\,e., on the elements of the partition~$\xi_1$, and continue in the same way for the partitions $\xi_2,\dots,\xi_{n-1}$ of the subsequent levels. \begin{definition} \label{def1} Denote by $\operatorname{Tree}_n$ the space of finite single-root trees of rank~$n$ (i.\,e., trees with~$n$ levels labeled by the numbers $0,1,\dots,n-1$ and a~unique vertex of the last level as a~root) endowed with a~strictly positive probability measure on the leaves, i.\,e., on the vertices of the zero level, and hence on the vertices of all levels; the values of the measure induce a~linear order on the vertices up to an arbitrary order of vertices with equal measure. \end{definition} One may say that $\operatorname{Tree}_n$ is the space of finite filtrations of length~$n$ on finite measure spaces. In what follows, we will equip these trees with another important structure, a~metric, and the resulting category of trees endowed with a~measure and a~metric will be a~powerful tool in solving our problems. Now we are ready to describe a~complete system of invariants of a~finite filtration of an arbitrary Lebesgue space. We consider only filtrations in which all measurable partitions are atomic: $$ \{\xi_0\succ \xi_1 \succ \xi_2 \succ\dots\succ \xi_n\}=\{\xi_k\}_{k=0}^n, $$ where $\xi_0=\varepsilon$ is the partition into singletons $\operatorname{mod} 0$.
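On a small finite model, the hierarchy and the summation recovering the measures of the upper levels can be sketched as follows. This is an illustrative computation; the specific blocks and leaf measures are our own choices.

```python
from fractions import Fraction as F

# Leaves c_1,...,c_5 with measures m_i (a hierarchy of length 3).
m = {1: F(1, 4), 2: F(1, 4), 3: F(1, 8), 4: F(1, 8), 5: F(1, 4)}

# xi_0 = singletons, xi_1 coarsens xi_0, xi_2 is the whole space (the root).
xi = [
    [{1}, {2}, {3}, {4}, {5}],        # level 0: the leaves
    [{1, 2}, {3, 4, 5}],              # level 1
    [{1, 2, 3, 4, 5}],                # level 2: the root
]

def vertex_measures(levels, leaf_measure):
    """Measure of every vertex, by summation over its leaves; within each
    level the values are listed decreasingly, as in the definition of Tree_n."""
    return [sorted((sum(leaf_measure[c] for c in block) for block in level),
                   reverse=True)
            for level in levels]

tree = vertex_measures(xi, m)
# tree[1] lists the measures of the two blocks of xi_1; tree[2] is [1].
```

The ordered lists of level measures (together with the tree shape) are exactly the data recorded by a point of $\operatorname{Tree}_n$ for this toy hierarchy.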
\goodbreak The fact that such a~sequence is decreasing means that almost every element~$C_n$ of the last partition~$\xi_n$ is finite and composed of finitely many elements of the partition~$\xi_{n-1}$, i.\,e., $$ C_n=\bigcup C^i_{n-1},\qquad C^i_{n-1}\in \xi_{n-1}, $$ the elements~$C^i_{n-1}$ are unions of elements of~$\xi_{n-2}$, etc. In other words, almost every element $C_n\in \xi_n$ supports a~hierarchy, i.\,e., a~finite filtration with a~measure. Thus we have a~homomorphism $$ t_n\colon X/\xi_n \to \operatorname{Tree}_{n+1}, $$ which sends almost every element $C_n\in \xi_n$, endowed with the conditional measures of its points, to the measured tree corresponding to the finite filtration on this element induced by the filtration on the whole space. Denote the $t_n$-image of the measure~$\mu_{\xi_n}$ on~$X/\xi_n$ by~$\omega_{\tau}$. \begin{theorem} \label{th4} The complete system of isomorphism invariants of a~finite filtration $\tau \equiv \{\xi_k\}_{k=0}^n$ of atomic partitions of a~Lebesgue space with continuous measure coincides with the system of invariants of the homomorphism~$t_n$ to the space~$\operatorname{Tree}_{n+1}$ of measured trees, which consists of the measure~$\omega_{\tau}$ on~$\operatorname{Tree}_{n+1}$ obtained as the $t_n$-image of the measure~$\mu_{\xi_n}$ on $X/\xi_n$ and the multiplicity function. If the filtration has finitely many types of conditional measures, then the complete invariant is simply the atomic measure~$\omega_{\tau}$ on the set $\operatorname{Tree}_{n+1}$ of trees of rank $n+1$ (all multiplicities are continual). In the case $n=1$ (i.\,e., the case of one atomic partition~$\xi_1$), this is a~measure on ordered probability vectors, in accordance with Theorem~\ref{th3}. \end{theorem} The proof that these are invariants is obvious, since the construction used only invariants of partitions, namely, systems of conditional measures.
The completeness follows from the fact that two nonisomorphic filtrations must differ at some set of positive measure that is measurable with respect to the partition~$\xi_n$, and hence the images of the corresponding measures on~$\operatorname{Tree}_{n+1}$ do not coincide. Repeating what has been said in the case of one partition, one may think of the invariant of a~finite filtration as a~random tree endowed with a~measure. It is not difficult to extend this procedure to classification of finite filtrations with arbitrary measurable (rather than only atomic) partitions: one should pass from finite trees to countable, or even continual, trees and general measures on them. Earlier, classification of finite filtrations was considered in different terms (see, e.\,g.,~\cite{21}); the language of trees or hierarchies proposed above was not used. The classification itself is not very interesting; it could be useful if one could apply it to classification of infinite filtrations. But one cannot simply construct the inductive limit of the spaces $\operatorname{Tree}_n$ with respect to natural embeddings and consider measures on the resulting space as invariants of filtrations, and a~direct attempt to pass to the limit is of no use for classification of infinite filtrations. The reason is that when we pass to the limit, the bijectivity of the correspondence between classes of filtrations and finite invariants is lost. In fact, the information related to finite filtrations useful for classification of infinite filtrations is somewhat different: finite fragments of a~filtration should be considered not by themselves, as we did above, but together with some functions or metrics, as we will do below. \subsection{Filtrations we consider and how one can define them} \label{ssec2.5} \subsubsection{Classes and properties of filtrations} \label{sssec2.5.1} First of all, we mention the most interesting class of filtrations (to be considered in more detail in another paper). 
This is the class of filtrations in which all partitions have purely continuous conditional measures. It is of great importance for random processes and the theory of $C^*$-algebras, but it is not related to combinatorial problems. The key notion is that of ergodicity of filtrations, also called the Kolmogorov property, regularity, etc. A~filtration $$ \tau=\{{\mathfrak A}_n\} \simeq \{\xi_n\} \simeq \{L^{\infty}(\xi_n)\} $$ is called \textit{ergodic} if \begin{enumerate} \item[--] the intersection of the $\sigma$-algebras~${\mathfrak A}_n$ is the trivial $\sigma$-algebra~${\mathfrak N}$: $$ \bigcap_n {\mathfrak A}_n={\mathfrak N}, $$ \end{enumerate} or \begin{enumerate} \item[--] the measurable intersection of the partitions~$\xi_n$ is the trivial partition $$ \bigwedge_n \xi_n=\nu, $$ \end{enumerate} or \begin{enumerate} \item[--] the intersection of the algebras of functions constant on elements of the partitions~$\xi_n$ over all~$n$ consists of the constants: $$ \bigcap_n L^{\infty}(X/\xi_n)=\{\operatorname{Const}\}. $$ \end{enumerate} In our further considerations, we deal with filtrations in which all partitions have atomic conditional measures without continuous components (filtrations with continuous conditional measures are left for another paper). We distinguish the following classes: \textit{locally finite filtrations, homogeneous filtrations, and semihomogeneous filtrations}. \begin{definition} \label{def2} (i) A~filtration $\tau=\{\xi_n\}_{n=0}^{\infty}$ of a~Lebesgue space $(X,\mu)$ with continuous measure~$\mu$ is called \textit{locally finite} if all partitions~$\xi_n$, $n=0,1,2,\dots$, are atomic, i.\,e., almost all conditional measures on the elements of~$\xi_n$ are atomic, and the number of types of conditional measures is finite; in particular, for every~$n$ the number of atoms in almost all elements of~$\xi_n$ is finite (but, possibly, depends on~$n$).
(ii) A~locally finite filtration is called \textit{semihomogeneous} if the conditional measures of almost all elements of all partitions~$\xi_n$, $n=0,1,2,\dots$, are uniform distributions (i.\,e., the measures of all points of a~given element are equal).\footnote{But the conditional measures of elements of the quotient partitions $\xi_n/\xi_{n-1}$ are not necessarily uniform.} (iii) A~locally finite filtration is called \textit{homogeneous} if for every~$n$ the conditional measures of almost all elements of~$\xi_n$ are uniform and equal. If every element of~$\xi_n$ consists of~$r_n$ elements of~$\xi_{n-1}$, then the filtration is called $\{r_n\}_n$-adic; for $r_n\equiv 2$, dyadic, and for $r_n\equiv 3$, triadic. \end{definition} Relaxing the finiteness condition in the definition of a~locally finite filtration (for example, allowing elements of partitions to have countably many atoms or an arbitrary number of types of conditional measures) does not lead to serious new difficulties or essentially new phenomena in the theory of filtrations; thus we restrict ourselves to locally finite filtrations. Consider also the following special classes. \begin{definition} \label{def3} We say that a~filtration~$\tau$ is (a) a~\textit{Bernoulli} filtration if it is the tail filtration for an arbitrary sequence of independent random variables; in other words, if there exists a~sequence of independent partitions~$\eta_n$, $n=1,2,\dots$, in a~Lebesgue space with continuous measure such that $$ \xi_n=\bigvee_{k=n+1}^{\infty} \eta_k,\qquad n=0,1,2,\dots\,; $$ (b) more generally, a~\textit{Markov} filtration if it is the tail filtration of a~one-sided Markov chain with discrete time and arbitrary state space. \end{definition} Let us turn our attention to the following important class of Markov filtrations which generalizes random processes with independent increments. Consider a~group~$G$ and a~finitely supported probability measure~$m$ on~$G$.
The Markov chain~$y_n$, $n=0,1,2,\dots$, corresponding to the (say, left) random walk on~$G$ with transition probability~$m$ generates a~Markov filtration. The ergodicity of this filtration is equivalent to the triviality of the Poisson--Furstenberg boundary, and the boundary is trivial if and only if the entropy of the random walk vanishes (see~\cite{26}). More generally, assume that the group~$G$ acts on a~Lebesgue space $(Z,\nu)$ with continuous (or even $\sigma$-finite) invariant measure~$\nu$: $$ \forall\,g \quad T_g\nu=\nu. $$ This gives rise to a~one-sided Markov process~$\{z_n\}_n$ with the state space $(Z,\nu)$ and transition probabilities $$ \operatorname{Prob}(z|u)=m(g),\quad\text{where } z=T_gu, $$ and the corresponding filtration, which will be considered in the upcoming sections. If~$m$ is the uniform measure on the generators of the group, then the filtration is homogeneous. Note that the state space of this process is continual, but the filtration is locally finite. As we will see below, in this case the filtration can be defined by a~sequence of finite partitions via the basis method. \subsubsection{The basis method of defining filtrations} \label{sssec2.5.2} The most efficient method of defining filtrations, both in measure and Borel spaces, is first to introduce a~sequence of finite partitions (a~basis), whose successive products form an increasing sequence, and then to define a~filtration as the sequence of products of ``tails.'' We will call it the basis method. Consider a~sequence of finite partitions (a~basis) $$ \{\eta_n\}_{n=1}^{\infty},\qquad \bigvee_n \eta_n=\epsilon $$ (where $\epsilon$ is the partition into singletons) of a~measure or Borel space. A~filtration~$\{\xi_n\}_n$ is defined as follows: $$ \xi_n=\bigvee_{k=n+1}^{\infty} \eta_k,\qquad n=0,1,2,\dots\,.
$$ The basis method of defining filtrations naturally arises in connection with random processes with time~${\mathbb Z}_+$ and in other probabilistic situations (for example, $\eta_n$ is the partition corresponding to the $\sigma$-algebra of sets determined by the state of the process at time~$n$). We will use it systematically in Sec.~\ref{sec3} when defining tail filtrations on path spaces of graphs. We emphasize that geometrically one and the same filtration can be defined via different bases, and the problem of (metric or Borel) isomorphism of filtrations defined by the basis method is quite nontrivial. Moreover, the conditional measures of the partitions~$\xi_n$ (i.\,e., equipments, see Sec.~\ref{ssec3.3}) in the case of measure spaces can be defined only by passing to the limit. The relationship between two sequences of partitions, the increasing one (a~basis), $$ \{\zeta_n\}_n:\quad \zeta_n=\bigvee_{k=1}^n\eta_k, \quad n=1,2,\dots, $$ and the decreasing one (a~filtration), $$ \{\xi_n\}_n:\quad \xi_n=\bigvee_{k=n+1}^{\infty} \eta_k,\quad n=0,1,2,\dots, $$ which look like dual objects, is by no means symmetric: the theory of filtrations is much deeper than the theory of increasing sequences, and the most important difference is in the asymptotic behavior. Namely, passing to the limit in the theory of filtrations is much finer than in the case of increasing sequences. In Sec.~\ref{sec8}, we show an example from A.\,N.~Kolmogorov's paper with a~commentary by V.\,A.~Rokhlin, which demonstrates the absence of continuity of the product when passing to the limit in a~filtration. This fact distinguishes filtrations of subalgebras of functions from filtrations of subspaces in a~Hilbert space, where there is a~complete symmetry between increasing and decreasing sequences.
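On a finite dyadic model, the basis $\zeta_n=\bigvee_{k\le n}\eta_k$ and the tail filtration $\xi_n=\bigvee_{k>n}\eta_k$ can be computed side by side. This is a toy truncation of the space of binary sequences, with $\eta_k$ the partition by the $k$-th digit; all names are ours.

```python
from itertools import product

N = 4                                    # truncate to N binary digits
X = list(product([0, 1], repeat=N))      # a finite stand-in for {0,1}^infinity

def blocks(key):
    """Partition X by a key function; return the list of blocks."""
    out = {}
    for x in X:
        out.setdefault(key(x), []).append(x)
    return list(out.values())

# zeta_n = eta_1 v ... v eta_n : fix the first n digits (increasing sequence).
zeta = [blocks(lambda x, n=n: x[:n]) for n in range(N + 1)]
# xi_n = eta_{n+1} v eta_{n+2} v ... : fix the digits after n (decreasing).
xi = [blocks(lambda x, n=n: x[n:]) for n in range(N + 1)]

assert [len(part) for part in zeta] == [2**n for n in range(N + 1)]
assert all(len(b) == 2**n for n in range(N + 1) for b in xi[n])  # dyadic blocks
```

Here $\zeta_N$ is the partition into singletons while $\xi_N$ is the trivial partition, so on the finite truncation the two sequences are mirror images; the asymmetry discussed above appears only in the passage to the limit $N\to\infty$.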
The reason, of course, is that the lattice of subalgebras (partitions), in contrast to the lattice of subspaces, has no canonical involution similar to passing to the orthogonal complement, which sends increasing sequences to decreasing ones and vice versa. \subsection{The equivalence relation associated with a~filtration, cocycles and associated measures} \label{ssec2.6} Along with the canonical intersection of $\sigma$-algebras, which turns out to be trivial for ergodic filtrations, there is another set-theoretic intersection, and the corresponding well-defined equivalence relation on the measure space itself. Assume that we have a~locally finite filtration (or even just a~filtration with discrete elements) $\tau=\{\xi_n\}_n$. Consider the monotone limit of the measurable partitions~$\xi_n$; denote it by $\bigcap\limits_n \xi_n={\overline \xi}_{\tau}$ (as opposed to the measurable intersection $\bigwedge \xi_n$). \begin{proposition} \label{pr1} The partition $\bigcap\limits_n \xi_n$ is well defined as the monotone limit of the decreasing sequence of partitions~$\{\xi_n\}_n$. It is not measurable, since in the ergodic case there are no nonconstant measurable functions constant on elements of this partition. Nevertheless, there is a~canonical object corresponding to this partition in a~function space (e.\,g., in~$L^2(X,\mu)$): \begin{equation} \label{eq2.1} {\mathfrak C}=\biggl\{f\in L^2(X,\mu):\exists n, \ \int_{C} f(x)\,d\mu^C(x)=0\mbox{ for a.\,e.\ }C\in\xi_n\biggr\}. \end{equation} Clearly, if the integral in~\eqref{eq2.1} vanishes for some~$n$, then it also vanishes for all $m>n$. The linear space ${\mathfrak C}\subset L^2(X,\mu)$ is not closed; its closure is the orthogonal complement to the intersection $\bigcap\limits_n L^{\infty}(X/\xi_n,\mu_{\xi_n})$ (in the ergodic case, the closure is the space of all functions with zero integral). \end{proposition} \begin{proof} 1.
If we modify all partitions~$\xi_n$, $n=1,2,\dots$, on a~set of measure zero, then ${\overline \xi}_{\tau}$ will also change only on a~set of measure zero (it is important here that the partitions~$\xi_n$ decrease and their elements are finite). 2. The space~${\mathfrak C}$ is uniquely determined by the filtration and does not change under automorphisms of this filtration. The proposition is proved. \end{proof} The space~${\mathfrak C}$ can be extended to an invariant functional analog of the intersection~${\overline\xi}_{\tau}$ itself rather than the filtration. To this end, in the definition of~${\mathfrak C}$ one should take functions~$f$ for which the zero integral condition is satisfied for almost all elements~$C$ of arbitrary measurable partitions greater than the intersection: $\xi\succ {\overline \xi}_{\tau} $. This condition no longer changes when a~filtration is replaced with another filtration having the same intersection of partitions. This space is a~functional analog of a~tame partition.\footnote{This observation was made by the author in~\cite{88},~\cite{83}. Clearly, classification of tame partitions is equivalent to classification of nonclosed subspaces of the above form with respect to automorphisms, that is, unitary real multiplicative operators.} The most important property of the intersection of partitions is that its (countable) elements support a~projective conditional measure, namely, for almost all pairs of points~$x$,~$y$ of its elements, the ratio of conditional measures $\dfrac{\mu^C(x)}{\mu^C(y)}$\,, where~$C$ is an element of an arbitrary measurable partition $\xi \succ \overline \xi$, does not depend on the choice of~$\xi$. This ratio is a~well-defined cocycle, which will be discussed in more detail in the next section (see Sec.~\ref{ssec3.5}). In the case where~$\xi$ is the orbit partition for a~transformation with quasi-invariant measure (or for an amenable group), this cocycle is called the Radon--Nikodym cocycle.
See also~\cite{94}. The paradoxical role of the cocycle defined on the intersection of measurable partitions (on the tail equivalence relation) and constructed from a~given measure~$\mu$ on~$X$ (in the next section, a~measure is constructed on paths in a~graph) is that it can simultaneously be a~cocycle for other measures singular with respect to~$\mu$. Thus it turns out to be related to another notion $\operatorname{mod} 0$ and, consequently, to other classes of functions coinciding $\operatorname{mod} 0$. In other terms, this paradox can be stated as follows: let $\tau=\{\xi_n\}_n$ be a~filtration; then the conditional measures of the partitions~$\xi_n$ for all~$n$ do not uniquely determine the measure~$\mu$: there can exist other \textit{associated measures} that are singular with respect to~$\mu$, but have the same conditional measures with respect to the partitions~$\xi_n$. The notion of cocycle and the problem of describing all measures with a~given cocycle appeared in the theory of Markov processes (works of E.\,B.~Dynkin~\cite{9, 10, 11}), in ergodic theory (K.~Schmidt~\cite{52}), in the theory of graded graphs (works of the author and S.\,V.~Kerov, see~\cite{86}, and their followers). The problem of constructing all measures with a~given cocycle is a~far-reaching generalization of the traditional problem of describing all invariant measures for group actions. For more details, see~\cite{82} and Sec.~\ref{sec6}. \section{Tail filtrations in graded graphs and Markov chains} \label{sec3} In this section we introduce a~new circle of notions, which brings into the theory of filtrations a~substantially different point of view on basic notions and provides it with an enormous amount of examples.
We use $\mathbb N$-graded graphs and multigraphs (Bratteli diagrams); the theory of such graphs and their links with $C^*$-algebras were intensively studied in the 1970s--1980s (by O.~Bratteli, G.~Elliott, E.~Effros, D.~Handelman, and others, see~\cite{18},~\cite{86} and the references therein). New connections of this theory to dynamics and ergodic theory were discovered by the author in the 1970s and became a~source of new problems both in representation theory and dynamics; in part, they became the subject of asymptotic representation theory developed in the same years by the author together with S.\,V.~Kerov and their followers. In this survey and several previous papers by the author, attention is drawn, apparently for the first time, to the role of filtrations of $\sigma$-algebras and filtrations of subalgebras of $C^*$-algebras in the analysis of the structure of algebras themselves. But here we are primarily interested in the measure-theoretic structure of filtrations appearing in connection with graded graphs. \subsection{Path spaces of graded graphs (Bratteli diagrams) and Markov compacta} \label{ssec3.1} Consider a~locally finite, infinite, $\mathbb N$-graded graph~$\Gamma$ (in other words, a~Bratteli diagram). The set of vertices of~order~$n$, $n=0,1,2,\dots$, will be denoted by~$\Gamma_n$ and called the $n$th \textit{level} of the graph; $$ \Gamma=\coprod_{n\in \mathbb N} \Gamma_n, $$ where $\Gamma_0$ consists of the unique initial vertex~$\{\varnothing\}$. We assume that every vertex has at least one successor and every vertex except the initial one has at least one predecessor. In what follows, we also assume that edges of~$\Gamma$ are simple.\footnote{Considering multigraphs, i.\,e., graphs with multiple edges, in general gives nothing new for our purposes, since cotransition probabilities (equipments) introduced below replace and generalize the notion of multiplicities of edges. 
However, in what follows we use the language of multigraphs to simplify some statements.} A~graded graph~$\Gamma$ gives rise canonically to a~locally semisimple algebra~$A(\Gamma)$ over~$\mathbb C$; however, here we consider neither this algebra nor the relationship of the notions introduced below with this algebra and its representations. A~path is a~(finite or infinite) sequence of adjacent edges of the graph (for graphs without multiple edges, this is the same as a~finite or infinite sequence of adjacent vertices). The number of paths from the initial vertex to a~given vertex is finite, which means exactly that the graph is locally finite. The space of all infinite paths in a~graph~$\Gamma$ is denoted by~$T(\Gamma)$; in a~natural sense, it is an inverse limit of the spaces~$T_n(\Gamma)$ of finite paths from the initial vertex to the vertices of the $n$th level. Thus $T(\Gamma)$ is a~Cantor-like compactum. Cylinder sets in the space $T(\Gamma)$ are sets determined by conditions on initial segments of paths up to level~$n$; these sets are clopen and define a~base of a~topology. One can naturally define the notion of the \textit{tail equivalence relation~$\tau_{\Gamma}$} on~$T(\Gamma)$: two infinite paths are equivalent if they eventually coincide; we also say that these two paths lie in the same block of the tail partition. The \textit{tail filtration} (for the moment, a~Borel one, without any measure) $$ \Xi(\Gamma)=\{{\mathfrak A}_0 \supset {\mathfrak A}_1 \supset \dotsb\} $$ is the decreasing sequence of $\sigma$-algebras of Borel sets where the $\sigma$-algebra~${\mathfrak A}_n$, $n \in \mathbb N$, consists of all Borel subsets in~$T(\Gamma)$ that together with every path~$\gamma$ contain all paths coinciding with~$\gamma$ from level~$n$ on. In a~clear sense, the $\sigma$-algebra~${\mathfrak A}_n$ is complementary to the finite $\sigma$-algebra of cylinder sets of order~$n$.
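These notions can be made concrete on the Pascal graph, a toy model of our choosing: the vertices of level $n$ are the pairs $(n,k)$, $0\le k\le n$, a path is encoded by a $0$--$1$ sequence, and $\dim(n,k)$, the number of paths to $(n,k)$, is the binomial coefficient.

```python
# Toy model: the Pascal graph. A path is a 0-1 sequence; after n steps it
# sits at the vertex (n, k), where k is the number of 1's among the first n.
def vertex(path, n):
    return (n, sum(path[:n]))

def dim(n, k):
    """Number of paths from the initial vertex to (n, k): binomial(n, k)."""
    d = 1
    for i in range(k):
        d = d * (n - i) // (i + 1)
    return d

def tail_equivalent(p, q, n):
    """Two paths coincide from level n on: the same vertex at level n
    and the same letters afterwards; only the initial segments may differ."""
    return vertex(p, n) == vertex(q, n) and p[n:] == q[n:]

p = (0, 1, 1, 0, 1, 0)
q = (1, 0, 1, 0, 1, 0)     # differs from p only in the first two letters

assert tail_equivalent(p, q, 2) and not tail_equivalent(p, q, 1)
assert dim(4, 2) == 6      # six monotone lattice paths end at the vertex (4, 2)
```

A set belonging to ${\mathfrak A}_2$ must then contain `q` whenever it contains `p`, since the two paths are indistinguishable from level $2$ on.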
\subsection{Trees of paths} \label{ssec3.2} Let us make a~general remark on the path space of a~graded graph: the paths joining a~given vertex with the initial one form a~tree whose root is the given vertex, and each path (a~leaf of the tree) is endowed with a~measure. The connection between the tree constructed from the paths of a~graph and that constructed from a~measurable partition is obvious. \begin{definition} \label{def4} Every vertex~$v$ of level~$n$ in a~graded graph determines in a~natural way a~tree of rank~$n$, which is composed of the paths leading from the initial vertex to~$v$; these trees agree in a~natural sense: the tree corresponding to a~vertex~$u$ is embedded as a~subtree into the tree corresponding to a~vertex~$v$ if $v$ succeeds~$u$. Leaves of the tree correspond to paths in the graph. \end{definition} \begin{proposition} \label{pr2} The measured trees constructed in Sec.~\ref{ssec2.4} from finite fragments of an infinite filtration~$\tau$ are exactly the trees constructed from the vertices of the graph~$\Gamma(\tau)$. (The possibility of combining them into an inductive limit that determines the type of the filtration up to finite isomorphism was discussed above. However, we do not need this, since the type of the filtration is determined by the minimal model itself.) \end{proposition} Thus an infinite path in a~graded graph gives rise to an inductive limit of a~sequence of nested trees corresponding to successive vertices of this path. The sequence of the roots of these trees has no limit, and the increasing sequence of the nested sets of leaves of the trees constitutes the set of all infinite paths in the graph cofinal with the given one; the number of such paths, as well as the number of vertices at every level of the trees, goes to infinity. The measures on the leaves have no limit, but there arises a~cocycle of ratios of measures. Thus we have obtained a~rootless tree all of whose levels are infinite.
It is easier to view this tree as an infinite sequence of hierarchies on the infinite set of its leaves supporting the cocycle of ratios of conditional measures. Of course, the construction depends on the choice of an infinite path. Here one should take into account whether the filtration is ergodic or not, but we will not need to study this problem. At the same time, it is clear that this construction cannot be used to advance the classification of finite filtrations from Sec.~\ref{ssec2.4} towards a~classification of infinite filtrations, since it takes into account only finite invariants of filtrations, which do not suffice for this purpose. However, these data suffice for classification of filtrations up to \textit{finite isomorphism}, which means the existence of a~complete system of Borel invariants for this classification. \subsection{An equipment of a~multigraph and cotransition probabilities} \label{ssec3.3} We introduce an additional structure on a~(multi)graph, a~\textit{system of cotransition probabilities}, or an \textit{equipment}, $$ \Lambda=\{\lambda=\lambda_v^u; \ u\in \Gamma_n,\ v\in \Gamma_{n+1},\ (u,v)\in \operatorname{edge}(\Gamma_n,\Gamma_{n+1}), \ n=0,1,2,\dots\}, $$ by associating with each vertex $v \in \Gamma_{n+1}$ a~probability vector whose component~$\lambda_v^u$ is the probability of an edge $u\prec v$ that comes to~$v$ from the previous level; thus $\displaystyle\sum_{u\colon u\prec v}\lambda_v^u=1$; $\lambda_v^u\geqslant 0$. In the case of a~multigraph, probabilities are assigned to each edge from the set of edges joining vertices~$u$,~$v$ such that~${u\prec v}$. \begin{definition} \label{def5} An \textit{equipped (multi)graph} is a~pair $(\Gamma,\Lambda)$ consisting of a~(mul\-ti)\-graph~$\Gamma$ and a~system~$\Lambda$ of cotransition probabilities on the edges of~$\Gamma$.
An equipment allows one to define the probability of a~path leading from~$\varnothing$ to a~given vertex as the product of cotransition probabilities over all edges constituting the path. The most important special case of an equipment (i.\,e., of a~system~$\Lambda$ of cotransition probabilities), which is called the \textit{canonical}, or \textit{central}, equipment and is studied in combinatorics, representation theory, and algebraic situations, is as follows: $$ \lambda_v^u=\frac{\dim(u)}{\sum_{u'\colon u'\prec v} \dim(u')}\,, $$ where $\dim(u)$ is the number of paths from the initial vertex~$\varnothing$ to a~vertex~$u$ (in representation-theoretic terms, this is the dimension of the representation of the algebra~$A(\Gamma)$ corresponding to the vertex~$u$). \end{definition} One can easily see that for the central equipment, the measure on the set of paths leading from~$\varnothing$ to a~given vertex is uniform, and the cotransition probability to achieve a~vertex~$v$ from a~preceding vertex~$u$ is proportional to the fraction (among all the paths that lead from~$\varnothing$ to~$v$) of those paths that pass through~$u$. The canonicity of this system of cotransition probabilities lies in the fact that it is determined only by the graph itself. The corresponding Markov measures on the path space~$T(\Gamma)$ are called \textit{central measures}; up to now, the study of Bratteli diagrams has been restricted to these measures only. In terms of the theory of $C^*$-algebras, central measures are traces on the algebra~$A(\Gamma)$, and ergodic central measures are indecomposable traces. For more details on the case of central measures, see~\cite{77}, Sec.~7 below, and the extensive bibliography which can be found in the papers from our (incomplete) list of references. Measures on the path space of a~graph that agree with the canonical equipment are called central, and in the theory of stationary Markov chains they are called measures of maximal entropy. \begin{figure}[t!]
\centering \includegraphics{ver03.pdf} \caption{A~graph of paths and a~Markov chain: rotation by $90^\circ$} \label{fig3} \end{figure} \subsection{The Markov compactum of paths} \label{ssec3.4} The term ``cotransition probabilities'' is borrowed from the theory of Markov chains~\cite{9}: if we regard a~graph, rotated by~$90^\circ$ (see Fig.~\ref{fig3}), as the set of states of a~Markov chain starting from the state~$\varnothing$ at time $t=0$, and the numbers of levels as moments of time, then $\Lambda=\{\lambda_v^u\}$ should be viewed as the system of cotransition probabilities for the Markov chain: $$ \operatorname{Prob}\{x_{t}=u\mid x_{t+1}=v\}=\lambda_v^u. $$ Recall the definition of a~Markov compactum in the generality we need. \begin{definition} \label{def6} Consider a~sequence~$\{X_n\}_{n=1}^{\infty}$ of finite sets, and associate with it a~sequence of ``multi-edges,'' i.\,e., a~collection of matrices~$M_n$ of size ($|X_n|=:$) $d_n\times d_{n+1}$ ($:=|X_{n+1}|$) in which an element~$m_{u,v}$, $u \in X_n$, $v \in X_{n+1}$, is a~nonnegative integer equal to the number of edges joining~$u$ and~$v$; to each such edge we assign a~nonzero number. A~\textit{path (trajectory)} is a~sequence of adjacent edges. If all the multiplicities~$m_{u,v}$ are equal to zero or one, then a~path is a~sequence of adjacent vertices. The space of all paths (trajectories) endowed with the natural topology of an inverse spectrum of sets of finite paths is called a~\textit{Markov (topological) compactum}. A~Markov compactum is called stationary if the state spaces and the matrices do not depend on~$n$: $$ X_n\equiv X_1,\qquad M_n\equiv M_1,\quad n=1,2,\dots; $$ in this case, one can consider not only the one-sided, but also the two-sided Markov compactum. 
A~Markov compactum with an equipment (or with cotransitions) is a~Markov compactum in the sense of the above definition in which elements~$m_{u,v}$ of the matrix~$M_n$ are vectors whose coordinates are positive numbers not exceeding~$1$ corresponding to the probabilities of edges going from $u\in X_n$ to~$v\in X_{n+1}$, so that the sum of all probabilities along a~column (over all edges going into a~fixed vertex~$v$) is equal to~$1$. \end{definition} Sometimes, a~Markov compactum (either equipped or not) will be called a~Mar\-kov chain. We are going to consider Markov measures on a~(usually, nonstationary) Markov compactum. One can see from the definition that the path space of a~graded (multi)graph is a~Markov compactum and an equipment of the graph defined above is an equipment of the Markov compactum. In what follows, we do not distinguish between these two languages: that of Markov compacta and that of path spaces of graded (multi)graphs. The point is that these two (seemingly quite different) areas of mathematics, the theory of Markov processes and the theory of approximately finite-dimensional algebras (together with its combinatorial ``lining,'' the analysis of graded multigraphs), are in fact different versions of the same theory, and the multigraph corresponding to a~Markov chain is exactly a~Bratteli diagram. However, the settings of the problems are different, being algebraic in one case and probabilistic in the other one, and the traditions of drawing graphs are different too. It may seem that the difference between these two seemingly distinct theories can be removed by rotating the figure by~$90^\circ$. However, it is important to overcome this difference conceptually and to understand to what extent both areas, the theory of graded graphs and $\operatorname{AF}$-algebras on the one hand and the theory of Markov chains on the other hand, can be enriched by exchanging their dissimilar problems and intrinsic methods.
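To make Definition~\ref{def6} concrete, here is a minimal computational sketch (an illustration only; the function names are ours, and we take the Pascal graph as the example): the incidence matrices $M_n$ are built explicitly, and the numbers $\dim(v)$ of paths from the root appear as entries of their product.

```python
import numpy as np

# The Pascal graph as a (nonstationary) Markov compactum: X_n = {(n, k) : 0 <= k <= n},
# with one edge from (n, k) to (n+1, k) and one to (n+1, k+1).
def M(n):
    # incidence matrix of size |X_n| x |X_{n+1}| = (n+1) x (n+2)
    out = np.zeros((n + 1, n + 2), dtype=int)
    for k in range(n + 1):
        out[k, k] = 1        # edge (n, k) -> (n+1, k)
        out[k, k + 1] = 1    # edge (n, k) -> (n+1, k+1)
    return out

# dim(v) = number of paths from the root to v: a row vector times the product M_0 M_1 ...
dims = np.array([1])
for n in range(5):
    dims = dims @ M(n)

assert dims.tolist() == [1, 5, 10, 10, 5, 1]  # the binomial coefficients C(5, k)
```

Already in this toy example one sees that asymptotic questions about path spaces reduce to the behavior of products of the matrices $M_n$.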
For example, long ago I raised the perhaps paradoxical problems about the $K$-functor of a~Markov chain and ergodic $K$-theory, or about Shannon-type theorems for $\operatorname{AF}$-algebras (see~\cite{87}). Note that the analysis of asymptotic properties of the sequence of matrices~$M_n$ introduced above is also of great independent interest from the technical point of view. One may say that the whole theory of Markov compacta of paths in graded graphs we consider here is a~part of the asymptotic theory of infinite products of Markov matrices, which is important in itself and also related to the theory of invariant measures, phase transitions, etc. \subsection{A~cocycle of an equipped graph} \label{ssec3.5} The notion of equipment is related to the notion of cocycle on the path space of a~graph. Consider a~function~$c(\,\cdot\,{,}\,\cdot\,)$ on the set of pairs of cofinal (i.\,e., eventually coinciding) paths on the graph with values in the set of positive real numbers. Assume that $$ c(t,s)c(s,t)=1,\quad c(t,s)=c(t,z)c(z,s),\quad c(t,t)=1 $$ for all pairwise cofinal paths~$t$,~$s$,~$z$. Sometimes, instead of ``$2$-cocycle'' one says simply ``cocycle on an equivalence relation''; the definition makes sense for an arbitrary equivalence relation. A~cocycle is said to be trivial, or cohomologous to~$1$, if there exists a~positive function~$g$ on paths such that $$ c(t,s)=\frac{g(t)}{g(s)}\,. $$ A~simple example: for a~measurable partition of a~space, the function \begin{equation} \label{eq3.1} c(t,s)=\frac{\mu^C(t)}{\mu^C(s)} \end{equation} (the ratio of the conditional measures of points lying in the same element of the partition) is a~trivial cocycle. \begin{lemma} \label{lem1} For every locally finite filtration in a~Lebesgue space with continuous measure there is a~well-defined cocycle on the tail equivalence relation.
\end{lemma} \begin{proof} Consider two paths~$s$,~$t$ lying in the same element of the partition~$\xi_n$, and define~$c(t,s)$ to be the ratio of their conditional measures (see~\eqref{eq3.1}). As follows from the definition, conditional measures have the transitivity property, i.\,e., the ratio does not change if we consider the conditional measures of the partition~$\xi_m$, $m>n$. The lemma is proved. \end{proof} Assume that we have an equipment of a~graded graph~$\Gamma$ (or a~Markov compactum); the cocycle corresponding to the equipment is defined as follows: for two paths $$ t=(t_1,t_2,\dots,t_n,\dots),\quad s=(s_1,s_2,\dots,s_n,\dots), $$ with $t_i=s_i$ for $i\ge n+1$, we have $$ c(t,s)=\prod_{i=1}^{n-1} \frac{\lambda_{t_{i+1}}^{t_i}}{\lambda_{s_{i+1}}^{s_i}}\,. $$ Such a~cocycle will be called a~\textit{Markov} cocycle. Of course, there exist non-Markov cocycles. \textit{Every Borel probability measure defined on the path space of a~graph (the space of trajectories of a~Markov chain) determines a~system of cotransition probabilities; namely, it is the system of conditional measures of the measurable partitions~$\xi_n$}. One says that a~measure \textit{agrees} with a~given system~$\Lambda$ of cotransition probabilities (with a~given cocycle) if the collection of the corresponding cotransition probabilities for all vertices coincides with~$\Lambda$. \begin{lemma} \label{lem2} If the tail filtration on the path space of an equipped graph is ergodic with respect to a~given measure (or simply the measure is ergodic), then the cocycle constructed from the equipment is nontrivial. \end{lemma} Indeed, the triviality of the cocycle would mean the existence of measurable functions constant on tail equivalence classes, which contradicts the ergodicity. The canonical equipment gives rise to the cocycle identically equal to~$1$, and measures that agree with this cocycle are central, or invariant, measures. 
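As a sanity check on these definitions, the following sketch (illustrative helper names of our own; the Pascal graph with its central equipment) verifies the cocycle identities and the fact just stated: the canonical equipment yields the cocycle identically equal to~$1$.

```python
from fractions import Fraction
from math import comb

def lam(n, k, j):
    # central cotransition probability from the vertex (n, k) of the Pascal graph
    # to its predecessor (n-1, j): dim(u)/dim(v), since C(n-1,k-1) + C(n-1,k) = C(n,k)
    return Fraction(comb(n - 1, j), comb(n, k))

def cocycle(t, s):
    # Markov cocycle c(t, s) for cofinal paths given by their second coordinates k_0, ..., k_N
    c = Fraction(1)
    for i in range(len(t) - 1):
        c *= lam(i + 1, t[i + 1], t[i]) / lam(i + 1, s[i + 1], s[i])
    return c

t = (0, 1, 1, 2, 2)    # two cofinal paths ending at the vertex (4, 2)
s = (0, 0, 1, 1, 2)
assert sum(lam(4, 2, j) for j in (1, 2)) == 1   # cotransitions at a vertex sum to 1
assert cocycle(t, s) * cocycle(s, t) == 1       # c(t,s) c(s,t) = 1
assert cocycle(t, s) == 1                       # the central cocycle is identically 1
```

The last assertion is just the telescoping of the products $\prod_i \dim(t_i)/\dim(t_{i+1})$ along two paths with the same endpoints.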
The Markov property of central measures on an arbitrary graded graph follows automatically from their definition, as was observed in old papers by the author and S.\,V.~Kerov~\cite{86},~\cite{30}. Recall that a~system of cotransition probabilities does not in general allow one to uniquely determine a~system of transition probabilities $$ \operatorname{Prob} \{x_{t+1}=v\mid x_t=u\}; $$ in other words, this system does not in general uniquely determine the Markov chain. On the contrary, the transition probabilities uniquely determine the Markov chain if an initial distribution is given. An analog of the initial distribution for cotransition probabilities is the final distribution, i.\,e., a~measure on the boundary (see below), which is the collection of all ergodic measures with given cotransitions. A~measure on the path space of a~graph will be called \textit{ergodic} if the tail $\sigma$-algebra (i.\,e., the intersection of all $\sigma$-algebras of the tail filtration) is trivial $\operatorname{mod} 0$. \textit{One of our purposes is to enumerate all ergodic Markov measures with~a given system of cotransition probabilities. The list of such measures will be called the absolute boundary, or the absolute, of an equipped Markov compactum, or the absolute of a~graded multigraph. This is a~topological boundary, and, as we will see, the Choquet boundary of a~certain simplex (a~projective limit of finite-dimensional simplices).} \subsection{Examples of graded graphs and Markov compacta} \label{ssec3.6} The list of graded graphs and multigraphs includes a~series of classical examples: the Pascal graph (already known to ancient Chinese mathematicians), which is the lattice~${\mathbb Z}^2_+$ graded along the diagonal (see Fig.~\ref{fig4}), and its multidimensional generalizations~${\mathbb Z}^d_+$, $d>2$, the Euler graph, the Young graph (see Fig.~\ref{fig5}), and, more generally, Hasse diagrams of lattices, primarily distributive lattices. 
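The first of these examples already illustrates the notions of central measure and absolute: as is well known, on the Pascal graph, whose paths are $0$--$1$ sequences, every Bernoulli measure is central, and the ergodic central measures are exactly the Bernoulli measures. The sketch below (our own helper names; the parameter $p$ is arbitrary) checks that the cotransition probabilities of a Bernoulli measure coincide with the canonical equipment $\lambda_v^u=\dim(u)/\dim(v)$, independently of~$p$.

```python
from fractions import Fraction
from math import comb

p = Fraction(1, 3)   # any Bernoulli parameter in (0, 1) works here
q = 1 - p

def mass(n, k):
    # Bernoulli measure of the set of paths through the Pascal vertex (n, k)
    return comb(n, k) * p**k * q**(n - k)

def cotransition(n, k, j):
    # Prob{ x_{n-1} = (n-1, j) | x_n = (n, k) } under the Bernoulli measure
    step = p if k == j + 1 else q   # probability of the edge (n-1, j) -> (n, k)
    return mass(n - 1, j) * step / mass(n, k)

# the cotransitions equal the canonical equipment C(n-1, j)/C(n, k), whatever p is
for n in range(1, 7):
    for k in range(n + 1):
        for j in (k - 1, k):
            if 0 <= j <= n - 1:
                assert cotransition(n, k, j) == Fraction(comb(n - 1, j), comb(n, k))
```

Thus the one-parameter family of Bernoulli measures all share the same (canonical) cotransitions: a simple instance of the fact that cotransition probabilities do not determine the measure.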
Among the more recent examples, we can mention the graph of unordered pairs, or the tower of dyadic measures (\cite{69},~\cite{82}, see Sec.~\ref{ssec7.3} below), the graph of ordered pairs~\cite{91}, etc., which appeared in connection with the theory of filtrations and the adic realization of ergodic automorphisms~\cite{67},~\cite{37},~\cite{89}. \begin{figure}[h!] \centering \includegraphics{ver04.pdf} \caption{The Pascal graph} \label{fig4} \end{figure} \begin{figure}[h!] \centering \includegraphics{ver05.pdf} \caption{The Young graph} \label{fig5} \end{figure} A~graph is said to be of dyadic type if the number of paths from the initial vertex to every vertex of the $n$th level is equal to~$2^n$ for all $n=1,2,\dots$\,. This class includes the graphs of ordered and unordered pairs, the minimal dyadic graph, and the minimal polyadic (Glimm) graph (see Fig.~\ref{fig6}). We emphasize once more that the path space in any graph can be regarded as the space of trajectories of a~topological (in general, nonstationary) Markov chain. This space (of paths or trajectories), regarded as a~Borel space, supports the tail filtration. The graph (chain) can be endowed with an equipment. And if we have a~Borel measure (not necessarily Markov) on the space of paths (trajectories), then we obtain a~\textit{locally finite} filtration on the resulting measure space. The local finiteness of this filtration follows from the local finiteness of the graph. \begin{figure}[h!] \centering \includegraphics[scale=0.9]{ver06} \caption{The Glimm graph (beads)} \label{fig6} \end{figure} \section{The Markov realization of locally finite filtrations} \label{sec4} We have approached one of the main theorems of the paper, which connects the theory of filtrations with the combinatorics of Markov chains and graded graphs. 
\subsection{The theorem on a~Markov realization} \label{ssec4.1} \begin{theorem} \label{th5} Every locally finite filtration is isomorphic to the tail filtration of a~Markov chain, or to the tail filtration of the path space of a~graded graph.\footnote{Here we more often associate the theory of filtrations with Markov chains rather than with graded graphs, because the term ``Markov property'' and the language of Markov chains are better known and more widely used than the equivalent language of graded graphs. One of the purposes of this survey is to popularize the equivalence of these languages and to report the new things brought into the field by graded graphs.} \end{theorem} \begin{proof} Let $$ \tau=\{{\mathfrak A}_n\}_{n=0}^{\infty}\simeq \{\xi_n\}_{n=0}^{\infty} $$ be a~locally finite filtration in a~Lebesgue space $(X,{\mathfrak A}_0,\mu)$ with continuous measure~$\mu$. Starting from~$\tau$, we will construct a~Markov chain with finite state spaces whose tail filtration is isomorphic to~$\tau$. The finite partitions introduced below are labeled, i.\,e., their elements are assigned labels (for example, positive integers). The (ordered) product of two or more labeled partitions is again a~partition of the same type, whose elements are labeled by the ordered tuples of labels of the factors. In what follows, it is convenient to assume that for finite partitions $\eta \succ \eta'$, the set of labels of the larger partition~$\eta$ includes the set of labels of the smaller partition~$\eta'$. Denote by $\phi=\{\xi_n\}_n$ the sequence of measurable partitions corresponding to a~filtration of $\sigma$-algebras~$\mathfrak F$. Choose a~basis of the $\sigma$-algebra~${\mathfrak A}_0$, i.\,e., an arbitrary increasing sequence of finite labeled partitions~$\{\eta_n\}_{n=1}^{\infty}$ that tends to the partition into singletons of the space~$(X,\mu)$. We start an inductive construction with the base case, namely, with defining the first state space of the future Markov chain.
For this, we first choose an arbitrary partition \textit{complementary to~$\xi_1$}, i.\,e., a~labeled measurable partition~$\zeta_1$ such that almost every element of~$\zeta_1$ intersects almost every element of~$\xi_1$ in at most one point (this can be done, since the elements of~$\xi_1$ are finite, and hence~$\zeta_1$ can be chosen finite).\footnote{The choice of a~partition~$\zeta_1$ complementary to a~given partition whose elements are finite or countable is a~standard procedure developed in~\cite{45}: it proceeds by successively constructing sets of maximal measure for which the intersection with almost all elements of the given partition consists of at most one point. This is the main step in the classification of measurable partitions.} Consider the product of partitions $\sigma_1=\eta_1\vee\zeta_1$; it is a~finite partition. We declare the set of its elements, i.\,e., the quotient space $Y_1=X/\sigma_1$ regarded as a~measure space, to be the set of labeled states of the Markov chain at time $t=1$. The induction step is the construction of the set~$Y_n$ of states at time $t=n$, assuming that we have already constructed a~finite Markov chain of length~$n-1$ as the quotient of the space $(X,\mu)$ by the finite partition $\bigvee\limits_{i=1}^{n-1} \sigma_i$. To construct the partition~$\sigma_n$, we consider the quotient space $X/{\xi_{n-1}}$ and define on this space a~finite labeled partition~$\zeta_n$ complementary (in the sense described above) to the partition $\xi_n/\xi_{n-1}$ in the space $X/\xi_{n-1}$. Note that if the intersection of an element of the partition~$\zeta_n$ with an element~$C$ of the partition $\xi_n/\xi_{n-1}$ is nonempty, then this intersection consists of a~single point, and hence every point from~$C$ obtains a~uniquely defined label, an element $D \in \zeta_n$. The same is true if (as below) we replace the partition~$\zeta_n$ with a~finite subpartition.
Denote by~$\pi_n$ the canonical projection to the quotient space: $$ \pi_n\colon X\to X/\xi_n. $$ Define a~finite partition~$\sigma_n$ of the space~$X/\xi_{n-1}$ as a~subpartition of~$\zeta_n$ ($\sigma_n\succ \zeta_n$) by the following rule: $(*)$ by definition, two points~$y_1$ and~$y_2$ of an element~$D$ of the partition~$\zeta_n$ lie in the same element of~$\sigma_n$ if and only if the elements $C_1=\pi_n^{-1}(y_1)$ and $C_2=\pi_n^{-1}(y_2)$ endowed with the conditional measures of~$\xi_{n-1}$ are isomorphic as finite measured trees with points labeled by the elements of the partitions~$\sigma_i$, $i=1,2,\dots,n-1$; the partition~$\eta_n$ is obtained by restricting each of the finite partitions~$\sigma_i$,~$\eta_i$, $i=1,\dots,n-1$, to the elements~$C_1$ and~$C_2$. Note that, as explained above, an isomorphism between two elements~$C_1$ and~$C_2$, if it exists, is unique. Now we can define the $n$th state space~$Y_n$ as the quotient $\{X/\xi_1\}/\sigma_n$, and the path space of the $n$-step Markov chain, as the quotient $\{X/\xi_1\}/\!\bigvee\limits_{i=1}^n \sigma_i$; denote these spaces by~$({\mathscr M}_n,\nu_n)$. The Markov property immediately follows from the construction: an element of the partition contains points with equal conditional measures. As also follows from the construction, these spaces constitute, in a~natural sense, an inverse spectrum of spaces, i.\,e., $$ ({\mathscr M}_n,\nu_n)\leftarrow ({\mathscr M}_{n+1},\nu_{n+1}), $$ and the projection preserves the Markov structure (a~new space is being added), as well as the measure. Thus we can consider the inverse limit of these spaces, which is already the space of infinite trajectories of a~one-sided Markov chain; denote this space, endowed with the limiting measure, by $({\mathscr M}_{\infty},\nu_{\infty})$. 
Thus we have constructed a~homomorphism of the Lebesgue space with filtration $(X,\mu,\mathfrak F)$ to the space $({\mathscr M}_{\infty},\nu_{\infty},\mathfrak T)$, where $\mathfrak T$ is the tail filtration of the Markov chain. By construction, $\sigma_n \succ \eta_n$ for all~$n$, therefore, $$ \bigvee_n \sigma_n \succ \bigvee_n\eta_n=\epsilon, $$ i.\,e., the limiting homomorphism $X \to \mathscr M$ is an isomorphism of the measure spaces $(X,\mu)$ and $({\mathscr M},\nu)$, where $\nu$ is the image of the measure (the quotient measure). Denote the $n$th coordinate partition of the space~$\mathscr M$, i.\,e., the partition into the classes of trajectories that coincide from the $(n+1)$th position, by~$\theta_n$. The tail filtration~$\mathfrak T$ gives rise to the sequence of partitions~$\{\theta_n\}_n$. Let us show that this isomorphism sends the filtration~$\mathfrak F$ to~$\mathfrak T$. For this, it suffices to observe that our construction (by the same reason as above) sends the space~$X/\xi_1$ isomorphically to the quotient of $({\mathscr M},\nu)$ by the partition into the classes with respect to the first coordinate, and, more generally, the space $X/\xi_n$, to the analogous quotient with respect to the $n$th coordinate. Hence the bases of the corresponding partitions are sent to each other isomorphically, and, together with our choice of the partitions~$\zeta_n$ (complementary to~$\xi_n$), which are sent to the partitions into the classes of sequences that coincide from the~$n$th coordinate, this yields a~desired isomorphism of filtrations. Theorem~\ref{th5} is proved. \end{proof} \subsection{Remarks on Markov models} \label{ssec4.2} 1) Recall that in measure theory there are several theorems on Markov realizations of objects similar to the theorem proved above. 
For example: every transformation with invariant or quasi-invariant measure has an adic realization on the path space of a~graded graph with Markov measure.\footnote{An adic transformation~\cite{67},~\cite{68} (a~generalized odometer) is defined for graphs in which for each vertex there is a~linear order on the edges going to this vertex; using these orders, one can define in a~natural way a~lexicographic order on the sets of cofinal paths; then the adic automorphism sends a~path to the next path with respect to this order if it is defined, which is the case on a~set of full measure.} The theory of adic transformations and adic realizations of automorphisms has become a~new source of examples in ergodic theory. Even the first example suggested by the author, that of the Pascal automorphism, still intrigues researchers; it is not yet known whether its spectrum is continuous, and many other properties are also unknown (see~\cite{75},~\cite{80},~\cite{41},~\cite{24}). 2) Another example is the theory of tame (hyperfinite) partitions (see~\cite{63}). Every discrete tame partition (in the category of spaces with quasi-invariant measure) is isomorphic to the tail partition on the space of trajectories of a~stationary Markov chain with finite state spaces and a~measure quasi-invariant with respect to the shift. 3) Let us give a~corollary of Theorem~\ref{th5}. \begin{corollary} \label{cor2} For an arbitrary locally finite filtration, the cocycle corresponding to the tail equivalence relation (see the definitions in Sec.~\ref{ssec3.5}) is isomorphic to a~cocycle of a~Markov chain with finite state spaces. In particular, every hyperfinite (tame) partition with countable elements in a~Le\-bes\-gue space is isomorphic to the tail partition of a~Markov chain.
In other words, every hyperfinite equivalence relation is isomorphic to the tail equivalence relation of a~Mar\-kov chain; this is also a~corollary of Theorem~\ref{th5} on the Markov property of a~locally finite filtration; this fact has been known for a long time (see~\cite{60}). \end{corollary} 4) Of course, the model constructed in Theorem~\ref{th5} is by no means unique: there are many graded graphs whose tail filtrations are isomorphic. In spite of this, the language of graded graphs and their tail filtrations on spaces of paths, or on spaces of trajectories of Markov chains, is extremely convenient. Note, however, that when passing from the original filtration (even a~Markov one) to a~new realization, we can lose some additional (noninvariant) properties, e.\,g., the stationarity of the approximation: the original Markov chain could be defined as a~stationary one, but the approximation constructed in the proof of Theorem~\ref{th5} is rarely stationary. As we will see from examples in what follows, this loss is compensated by the simplicity of the analysis of the constructed approximation. The proof of Theorem~\ref{th5} contains a~method of constructing a~graph in whose path space the given filtration can be realized. We obtain a~class of rather difficult problems related to constructing graded equipped graphs with tail filtrations on path spaces isomorphic to a~given filtration. Let us consider an important example. 5) Consider a~\textit{dyadic filtration}, for which all partitions $$ \xi_1,\xi_2/\xi_1,\dots,\xi_n/\xi_{n-1},\dots $$ have two-point elements with the measures $(1/2,1/2)$. Our construction proves that every such filtration is isomorphic to the tail filtration of a~dyadic graded graph endowed with a~central measure on its path space; in this graph, all vertices of all levels except the initial vertex have two predecessors with equal cotransitions.
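On the simplest dyadic graph, the Glimm graph of Fig.~\ref{fig6}, the adic transformation mentioned in remark 1) above takes a completely explicit form: paths are $0$--$1$ sequences of edge labels, and passing to the next path in the lexicographic order on a cofinality class is the classical dyadic odometer. A sketch (the function name is ours; coordinates are ordered starting from the root):

```python
def adic_successor(path):
    # adic transformation on the path space of the Glimm graph: the next path
    # in the lexicographic order on the cofinality class of a given path;
    # for 0-1 edge labels this is "add 1 with carry", i.e. the dyadic odometer
    path = list(path)
    for i, edge in enumerate(path):
        if edge == 0:
            path[i] = 1
            return path
        path[i] = 0   # carry over: 1 -> 0, move to the next coordinate
    # the all-ones path has no successor; such paths form a null set
    raise ValueError("successor undefined on the all-ones path")

assert adic_successor([0, 0, 1]) == [1, 0, 1]
assert adic_successor([1, 1, 0, 1]) == [0, 0, 1, 1]
```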
Thus the problem of metric classification of dyadic filtrations is reduced to the essentially combinatorial problem of classification of dyadic graded graphs (more exactly, path spaces of such graphs) endowed with central measures. Rephrasing results obtained in the 1970s--1990s, as well as recent facts and theorems, in these, essentially combinatorial, terms allows one to take a~fresh look at the problem. The standardness criterion for dyadic filtrations can be rephrased as a~criterion saying when a~dyadic graph endowed with a~central measure can be brought to the simplest Glimm graph (see Fig.~\ref{fig6}). Specific Markov models of filtrations will be considered in more detail in Sec.~\ref{sec6} devoted to nonstandard filtrations. \section{Standardness, criteria, finite isomorphism} \label{sec5} \subsection{The minimal model of a~filtration} \label{ssec5.1} In this section, with every infinite locally finite filtration $\tau=\{\xi_k\}_{k=1}^\infty$ of an arbitrary Lebesgue space~$(X,\mu)$ with continuous measure~$\mu$ we associate its canonical realization as the tail filtration of an equipped graded multigraph, or, equivalently, of a~Markov chain. \textit{In contrast to the result of Theorem~\ref{th5}, the obtained filtration will not, in general, be isomorphic to the original filtration, but only finitely isomorphic.} As we will see, this construction leads to a~special class of equipped graded multigraphs, which we call minimal. The model itself will be called the \textit{canonical minimal model} of a~filtration.\footnote{Another natural term for minimal filtrations is ``finitely determined filtrations,'' since they are completely determined by their finite invariants (the same term ``finitely determined'' should have been used for standard filtrations, see the next section~\ref{ssec5.2}).
\label{fnt7}} Note that a~finite isomorphism does not preserve even the ergodicity of a~filtration (this is obvious from examples of dyadic filtrations), so we add the ergodicity requirement whenever we compare a~finite isomorphism and a~true one. \textit{Given an ergodic filtration $\tau=\{\xi_k\}_{k=1}^\infty$ on a~Lebesgue space~$(X,\mu)$, we will construct an equipped graded multigraph~$\Gamma(\tau)$ and a~measure~$\nu$ on the path space~$T(\Gamma(\tau))$ that agrees with the equipment.} The construction, as in Theorem~\ref{th5}, is inductive: we successively build a~multi\-graph (by levels), an equipment of this multigraph, and measures on its levels that will determine a~measure on the whole path space. We start from the vertex~$\varnothing$ of the zero level and associate with it the whole space, more exactly, the quotient of~$(X,\mu)$ by the trivial partition. Consider the first partition~$\xi_1$ and the finite (by the local finiteness of the filtration) partition~$\delta_1$ of the quotient space $X/\xi_1$ into the sets $A_1,A_2,\dots,A_{k_1}$ satisfying the following property: any two points $C,D \in X/\xi_1$, regarded as elements of~$\xi_1$, have isomorphic conditional measures if and only if they lie in the same element of~$\delta_1$. With the elements of the partition~$\delta_1$ of the space $X/{\xi_1}$, i.\,e., with the sets~$A_i$, $i=1,2,\dots,k_1$, we associate the vertices of the first level of the graph~$\Gamma$. In other words, the vertices of the first level~$\Gamma_1$ correspond to the types of conditional measures of the partition~$\xi_1$. The number of edges going from every vertex to the vertex~$\varnothing$ is equal to the number of points in the corresponding element of the partition~$\xi_1$, and the cotransition probabilities are exactly the conditional measures of points in this element. Finally, the measures of the vertices themselves are equal to the measures~$\mu_{\xi_1}$ of the sets~$A_i$, $i=1,\ldots,k_1$.
The process continues in exactly the same way. It is easiest to say that the construction of the vertices, edges, and cotransition probabilities of the level~$\Gamma_{m+1}$, provided that we already have the previous level~$\Gamma_m$, proceeds as above with the original filtration replaced by the filtration $\{\xi_{m+s}/\xi_m\}_{s=1}^{\infty}$. The sequence of partitions $\delta_1,\delta_2,\dots$ of the spaces $X/\xi_1, X/\xi_2,\dots$, respectively, corresponds to the partition of the vertices of the graph~$\Gamma(\tau)$ into levels, and the elements of each partition~$\delta_n$ correspond to the vertices of the $n$th level. Thus, from the invariants of the filtration~$\tau$, we have constructed a~multigraph~$\Gamma(\tau)$ with an equipment and a~Markov measure on cylinder sets that agrees with this equipment and thus defines a~Markov measure~$\mu_{\Gamma(\tau)}$ on the space~$T(\Gamma_{\tau})$ of infinite paths in~$\Gamma(\tau)$. The obtained equipped locally finite graded multigraph~$\Gamma(\tau)$ (we will call this multigraph \textit{minimal}, and the corresponding tail filtration~$\overline\tau$, the \textit{minimal model of the original filtration~$\tau$}) has the following characteristic property, which obviously follows from the construction. \begin{proposition}[{\rm(theorem-definition of a~minimal graph)}] \label{pr3} An equipped multigraph is a~minimal model of a~locally finite filtration on a~Lebesgue space with continuous measure if and only if for any two distinct vertices~$u$,~$v$ of an arbitrary level, the trees, with roots in $u$ and $v$, of paths leading from $\varnothing$ to these vertices are not isomorphic as measured trees. \end{proposition} The filtration~$\tau$ is finitely isomorphic to~$\overline\tau$ (this is obvious from the construction), but, in general, not isomorphic to it. Clearly, the above construction can be applied to the tail filtration of an arbitrary equipped multigraph to obtain a~new multigraph with an equipment and a~measure. 
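Proposition~\ref{pr3} is easy to test algorithmically: encode the measured tree of paths to a vertex as a canonical nested tuple and compare vertices of one level. The sketch below (illustrative helper names of our own) does this for the Pascal graph with the central equipment and detects that the symmetric vertices $(n,k)$ and $(n,n-k)$ carry isomorphic measured trees, so that graph is not minimal.

```python
from fractions import Fraction
from math import comb

def lam(n, k, j):
    # central cotransition probability from (n, k) down to its predecessor (n-1, j)
    return Fraction(comb(n - 1, j), comb(n, k))

def tree(n, k):
    # canonical form of the measured tree of paths from the root to (n, k):
    # the sorted tuple of (cotransition probability, canonical form of subtree)
    if n == 0:
        return ()
    preds = [j for j in (k - 1, k) if 0 <= j <= n - 1]
    return tuple(sorted((lam(n, k, j), tree(n - 1, j)) for j in preds))

# symmetric vertices have isomorphic measured trees: the Pascal graph is not minimal
assert tree(5, 2) == tree(5, 3)
# while non-symmetric vertices of one level are distinguished by their trees
assert tree(5, 1) != tree(5, 2)
```

Gluing together the vertices with equal canonical forms is exactly the passage to the minimal model described above.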
If the equipment of the original graph is canonical (i.\,e., the filtration is semihomogeneous), then its standard model will also be a~semihomogeneous filtration, since the image of a~canonical equipment is, obviously, also canonical. Thus our construction defines, in particular, an operation on graded graphs, by associating with an arbitrary multigraph a~\textit{minimal multigraph}. Obviously, the minimal model of a~minimal multigraph is the multigraph itself. In other words, this operation is a~projection in the space of all graded multigraphs to the space of minimal multigraphs. We emphasize that in the basis method of defining a~filtration, the basis of the algebra of all measurable sets in the path space of the constructed multigraph~$\Gamma(\tau)$ is the algebra of cylinder sets, i.\,e., sets of paths determined by conditions on finite segments of paths. But when we map the tail filtration of one multigraph to the tail filtration of another one, the inverse image of the $\sigma$-algebra of all measurable sets of the second multigraph does not in general coincide with the $\sigma$-algebra of all measurable sets of the first one in the space of the original filtration. In other words, we obtain homomorphisms, and not isomorphisms, of the spaces $X/\xi_n$, in contrast to Theorem~\ref{th5} from the previous section, where the construction was aimed at obtaining an isomorphism. Namely, in the proof of Theorem~\ref{th5}, we had to subdivide the spaces $X/\xi_n$ and introduce additional vertices in graphs corresponding to the same conditional measures on the path space of the tree of the original graph. In the new construction, we identify these vertices and glue them together. \begin{example} \label{ex1} The standard model of every ergodic dyadic filtration is the tail filtration of the Glimm multigraph of beads (see Fig.~\ref{fig6}); this graph is minimal, and its path space is the dyadic compactum~$\{0,1\}^{\infty}$ with the ordinary tail equivalence relation. 
Note that in this example there is a~unique central measure, the Bernoulli measure with probabilities $(1/2,1/2)$. For brevity, in what follows, filtrations finitely isomorphic to Bernoulli ones will be called finitely Bernoulli. \end{example} \begin{figure}[b!] \centering \includegraphics{ver07.pdf} \caption{A~minimal graph: the half of the Pascal graph} \label{fig7} \end{figure} \begin{example} \label{ex2} The Pascal graph is not minimal, since symmetric vertices in the graph have isomorphic trees (see Fig.~\ref{fig7}, showing the quotient graph). Moreover, if we introduce the ergodic central measure on the path space (this is the Lebesgue measure on paths regarded as sequences of zeros and ones), then the set of paths passing through one of the first two vertices does not satisfy the conditions of Theorem~\ref{th5}. Therefore, the filtration on the path space endowed with this measure is not standard. A~simple example of an inhomogeneous nonstandard Markov chain is given at the end of Sec.~\ref{ssec5.5}. An example of two measured trees that are isomorphic as trees but nonisomorphic as measured trees is shown in Fig.~\ref{fig8}. \end{example} \begin{figure}[t!] \centering \includegraphics{ver08.pdf} \caption{Nonisomorphism of words with respect to the group of symmetries of a~tree} \label{fig8} \end{figure} Let us summarize the main conclusions. \begin{proposition} \label{pr4} 1) Every locally finite filtration~$\tau$ is finitely isomorphic to its minimal model~$\overline\tau$, i.\,e., the tail filtration on a~minimal multigraph~$\Gamma(\tau)$, but in general is not isomorphic to it. 2) The minimal model, regarded as an equipped graph with a~measure on the path space, is a~complete finite isomorphism invariant of filtrations, i.\,e., two filtrations are finitely isomorphic if and only if their minimal models coincide. 3) The minimal model of an $\{r_n\}_n$-adic and, more generally, finitely Ber\-noul\-li filtration is a~Bernoulli filtration.
Thus an example of a~minimal ergodic filtration is provided by any Bernoulli filtration. \end{proposition} \begin{proof} Indeed, every finite fragment of the filtration has been isomorphically and measure-preservingly mapped to a~finite fragment of the graph. \textit{And since we have used only conditional measures of all partitions and measures of sets with a~fixed type of conditional measures, finitely isomorphic filtrations lead to the same standard models}. Conversely, if two filtrations $\tau=\{\xi_n\}$ and $\tau'=\{\xi'_n\}$ are not finitely isomorphic, then for some~$n$ the partitions~$\xi_n$ and~$\xi'_n$ are not isomorphic, and hence their invariants contained in the minimal model will differ. It is clear from the above that for every Markov chain with finite state spaces there exists a~minimal Markov chain generating (as the tail filtration) a~filtration finitely isomorphic to the given one. The proposition is proved. \end{proof} We see that the \textit{problem of finite classification of locally finite filtrations is equivalent to the problem of constructing the space of minimal equipped graded graphs with a~measure on the path space that agrees with the equipment, i.\,e., the space of complete invariants of such filtrations} (for more on the space of filtrations, see Sec.~\ref{ssec5.6} below). Unfortunately, passing to minimal models and minimal filtrations does not simplify the study of filtrations, since every filtration can be arbitrarily closely, in a~natural sense, approximated by minimal ones. However, minimality brings us nearer to the fundamental notion of standardness. \subsection{Minimality of graphs, standardness, and the general standardness criterion} \label{ssec5.2} The following important definition will be often used in what follows. 
\begin{definition} \label{def7} A~locally finite ergodic filtration is called standard if it is isomorphic to a~minimal filtration.\footnote{See footnote\,\raisebox{1pt}{{\scriptsize{\ref{fnt7}}}} about the term ``finitely determined filtration.''} \end{definition} For example, a~$p$-adic filtration is standard if it is isomorphic to a~Bernoulli filtration; in other words, in the class of filtrations finitely isomorphic to Ber\-noul\-li ones, only the Bernoulli filtrations themselves and those isomorphic to them are standard. The difficulty is that a~filtration isomorphic to a~minimal one (i.\,e., a~standard filtration) is not necessarily minimal, in other words, is not necessarily the tail filtration of a~minimal multigraph (or a~minimal Markov chain). Thus the \textit{minimality property for filtrations is not invariant under isomorphism, and the ``invariant hull'' of this property is exactly ``standardness.''} Hence the problem arises of how to describe standard filtrations in invariant (intrinsic) terms, i.\,e., how to check that a~given filtration is isomorphic to a~standard one. In the next theorem we give an invariant characterization of a~standard filtration (the ``standardness criterion''). Let $\tau=\{\xi_n\}_n$ be an ergodic locally finite filtration, and let $\eta_n$ be the finite partition of the space~$X/\xi_n$ into the maximal classes of elements $C\in \xi_n$ on which the restrictions of the finite filtration~$\{\xi_k\}_{k=1}^{n-1}$ are isomorphic (equivalently, the corresponding measured trees are isomorphic). Fix an arbitrary isomorphism of these elements with some typical element~$\overline C$ regarded as a~measured tree. Then, obviously, every element~$D_i$, $i=1,\dots,r_n$, of the partition~$\eta_n$ is a~direct product: $$ D_i\simeq \overline C \times (D_i/\xi_n), \qquad D_i \in \eta_n. $$ Thus we have defined an independent complement~$\xi^-_{n,i}$ to the restriction of the partition~$\xi_n$ to~$D_i$; this is a~finite partition of~$D_i$. 
\begin{theorem} \label{th6} A~locally finite ergodic filtration $\tau=\{\xi_n\}_n$ of a~Lebesgue space $(X,\mu)$ with continuous measure is standard if and only if, in the above notation, for every $\varepsilon >0$ and every measurable set~$A$ there exists a~number~$n$ and a~set~$A'$ with $\mu(A \bigtriangleup A')<\varepsilon$ such that $$ A'=\bigcup_{i=1}^{r_n} A'_i $$ and~$A'_i$ is measurable with respect to the independent complement~$\xi^-_{n,i}$. \end{theorem} The condition of the theorem means that every measurable subset in~$X$, for sufficiently large~$n$ and up to a modification on a~set of small measure, breaks into a~union of sets measurable with respect to the maximal possible independent complement to the restrictions of the partition~$\xi_n$. An important fact here is the maximality of the restrictions of~$\xi_n$ for which such a~complement exists. However, the choice of these independent complements is subject to additional considerations. For filtrations finitely isomorphic to Bernoulli ones (for example, for dyadic filtrations), the partitions~$\xi_n$ themselves have independent complements, so for such filtrations the criterion consists in verifying the possibility of constructing coherent independent complements constituting a~basis of the space. In this sense, the condition of Theorem~\ref{th6} simply reduces the situation to the case of a~homogeneous filtration, where it is \textit{tautological}. \begin{proof}[Proof of Theorem~\ref{th6}] The ``if'' part is equivalent to the assertion that the partitions into cylinders in the path space of the graph from the definition of a~standard model (see Definition~\ref{def7}) constitute a~basis of the $\sigma$-algebra of measurable sets. The ``only if'' part follows from the fact that the basis of cylinders is exactly the basis of independent complements to the (maximal) restrictions of the partitions of the filtration mentioned above. The theorem is proved.
\end{proof} The condition of Theorem~\ref{th6} shows that the standardness of a~filtration is in a~sense the property of its decomposability into Bernoulli components. For finitely Bernoulli filtrations, this is exactly the Bernoulli property, i.\,e., independence. Hence the notion of standardness is a~generalization of the notion of independence. In the next section we will formulate a~concrete standardness criterion, which uses and generalizes the criterion for finitely Bernoulli filtrations suggested by the author~\cite{59} in the 1970s. It follows from the conditions of the theorem that a~possible obstacle to standardness is the existence of a~special group of symmetries preserving the measure in the path space of the graph. \subsection{The combinatorial standardness criterion} \label{ssec5.3} The general standardness criterion given above (Theorem~\ref{th6}) requires checking the measurability of sets with respect to a~system of independent complements, but does not show how one can do this. In this section we give a~more constructive, in fact combinatorial, method of verifying standardness. \subsubsection{The combinatorial criterion in terms of partitions or metrics} \label{sssec5.3.1} We start with the standardness criterion for finitely Bernoulli filtrations; as we will see, the general case will be obtained by ``mixing'' them. The criterion to be stated differs from the standardness criterion suggested by the author in the 1970s for dyadic sequences only by a~more universal statement. Note that in the form given below this criterion holds also for filtrations with continuous conditional measures. 
Let $\tau=\{\xi_n\}$ be a~locally finite filtration of a~Lebesgue space $(X,\mu)$ with continuous measure that is finitely isomorphic to a~Bernoulli filtration, i.\,e., for every~$n$ the quotient partition $\xi_n/\xi_{n-1}$, $n=1,2,\dots$, is isomorphic to a~partition all of whose elements are finite and all of whose conditional measures coincide and are equal to the distribution determined by a~finite probability vector~$p^{(n)}$, $n=1,2,\dots$, possibly different for different~$n$. The leading special case is the dyadic one: $$ p^{(n)}\equiv (1/2,1/2)\quad\text{for all~$n$}. $$ In our case, an element~$C$ of the partition~$\xi_n$, regarded as a~finite set endowed with the conditional probability measure, has the structure of a~direct product of finite spaces endowed with the measure $\prod\limits_{k=1}^n p^{(k)}$, but it is more convenient to represent it as a~homogeneous tree of rank~$n$ with the product measure on leaves. Assume that on the space $(X,\mu)$ we have a~measurable bounded function~$f$ with real values, or a~metric, or a~semimetric~$\rho$; then on the space $X/\xi_n$ we can introduce a~distance $d^n_f(\,\cdot\,{,}\,\cdot\,)$ ($d^n_{\rho}$), which should be interpreted as the distance between elements~$C_1$,~$C_2$ of the partition~$\xi_n$ regarded as measured trees. This distance is a~version of the Kantorovich metric for probability measures in a~metric space, but in our case, there is an additional condition. Namely, we consider so-called \textit{couplings~$\psi$}, i.\,e., measures on the direct product $C_1\times C_2$ with the given marginal projections~$\mu^{C_1}$ and~$\mu^{C_2}$ equal to the conditional measures. The additional condition is that these couplings are not simply measures, but also couplings of trees, i.\,e., $\psi$ should be concentrated on some tree, as a~subset in $C_1\times C_2$, whose projections to both coordinates are~$C_1$ and~$C_2$ as trees.
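The transportation-plan minimum just described can be tried out numerically. The sketch below (an illustration only, not part of the construction) computes the plain Kantorovich distance between the uniform measures on two equal-size finite sets with cost $|f(x)-f(y)|$; with uniform marginals the couplings are, after rescaling, exactly the doubly stochastic matrices, so by the Birkhoff--von Neumann theorem the linear cost attains its minimum at a permutation matrix, and for tiny sets one may simply enumerate permutations. The sketch ignores the additional requirement that $\psi$ be a coupling of trees, so it only bounds the distance $d^n_f$ from below.

```python
from itertools import permutations

def kantorovich_uniform(f1, f2):
    """Plain Kantorovich distance between the uniform measures on two
    equal-size finite sets, with transportation cost |f(x) - f(y)|.

    With uniform marginals the transportation polytope is a rescaling of
    the doubly stochastic matrices, so by Birkhoff--von Neumann the
    linear cost is minimized at a permutation matrix; for tiny sets we
    can afford to enumerate all permutations.
    """
    assert len(f1) == len(f2), "uniform marginals of equal size assumed"
    n = len(f1)
    return min(
        sum(abs(a - b) for a, b in zip(f1, perm)) / n
        for perm in permutations(f2)
    )
```

For instance, the uniform measures on the $f$-values $(0,1)$ and $(1,2)$ are at distance $1$, while two copies of the same value set are at distance $0$.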
An example of a~coupling of trees is the graph of a~measure-preserving isomorphism of one tree onto another one. The measures~$\psi(\,\cdot\,{,}\,\cdot\,)$ are called \textit{couplings}, or \textit{transportation plans}; the set of all couplings will be denoted by~$\Psi_{C_1,C_2}$.\footnote{The metric or semimetric~$d^n_{\rho}$ is exactly the Kantorovich metric on the set of probability measures of the (semi)metric space $(Y,\rho)$; in our case, we take the semimetric $\rho(x,y)=|f(x)-f(y)|$ on $Y=C_1\cup C_2$.} Then the distance~$d^n_f$ ($d^n_{\rho}$) is given by the formula $$ d^n_f(C_1,C_2)=\min_{\psi \in \Psi_{C_1,C_2}} \sum_{x\in C_1,y\in C_2} |f(x)-f(y)|\psi(x,y) $$ (in case of metrics, one should replace $|f(x)-f(y)|$ with~$\rho(x,y)$). \begin{theorem} \label{th7} An ergodic filtration finitely isomorphic to a~Bernoulli one is standard, i.\,e., isomorphic to a~Bernoulli filtration, if and only if the following condition holds: either -- for every measurable function~$f$ on the space $(X,\mu)$, or -- for every admissible metric~$\rho(\,\cdot\,{,}\,\cdot\,)$ on the space $(X,\mu)$, or -- for every cut semimetric\footnote{That is, a~semimetric $\rho(x,y)$ equal to~1 if~$x$,~$y$ lie in different elements of a~finite partition of the space $(X,\mu)$, and to~$0$ if~$x$,~$y$ lie in the same element.} \noindent the following equality holds: $$ \lim_{n\to \infty}\,\int_{X/\xi_n \times X/\xi_n} d^n_f(C_1,C_2)\,d\mu_{\xi_n}\,d\mu_{\xi_n}=0. $$ \end{theorem} For what follows, it is convenient to give an equivalent $\varepsilon$-form of the theorem condition: \begin{align*} &\forall\,\varepsilon>0,\quad \exists N,\quad \forall\,n>N,\quad \exists D_n\subset X/{\xi_n}\colon \\ &\qquad\mu_{\xi_n}(D_n)>1-\varepsilon\quad\text{and}\quad \sup_{C_1,C_2 \in D_n} d^n_f(C_1,C_2)<\varepsilon. 
\end{align*} The theorem condition is especially clear for dyadic filtrations and cut semimetrics with respect to partitions into~$k$ subsets: in this case, a~substantial simplification is due to the fact that we may take couplings to be the graphs of isomorphisms of trees; i.\,e., we may consider the Hamming metric on the set of vertices of the cube~$\mathbf{k}^{2^n}$, and then (instead of couplings~$\psi$) take the distance between the orbits of the action of the group of automorphisms of the dyadic tree on this cube. In other words, \textit{the standardness criterion reduces to the fact that the space of orbits of the action of the group of automorphisms of the tree on the cube~$\mathbf{k}^{2^n}$ collapses in this metric to a~single point} (see~\cite{59},~\cite{69}). The proof of the criterion for homogeneous, i.\,e., $\{r_n\}$-adic, filtrations exactly coincides with the proof for dyadic ($r_n\equiv 2$) filtrations and was suggested in~\cite{65},~\cite{69}. Later, several other expositions appeared, see, e.\,g.,~\cite{7},~\cite{13},~\cite{14}. The scheme of the proof remains exactly the same also for finitely Bernoulli filtrations, as well as for continuous filtrations (in which all conditional measures of all partitions $\xi_n/\xi_{n-1}$, $n=1,2,\dots$, are continuous). We will explain only the main idea of the proof of Theorem~\ref{th7}; for details, see~\cite{69}. First of all, the ``only if'' part follows from the fact that, by definition, for a~Bernoulli filtration there exists a~sequence of independent partitions~$\{\eta_n\}_n$ constituting a~basis of the space such that $$ \xi_n=\bigvee_{k=n}^{\infty}\eta_k,\qquad n=0,1,2,\dots\,. $$ Namely, for the finite partitions $\bigvee_{k=1}^n \eta_k$, the theorem condition follows from their independence from the corresponding partitions~$\xi_n$. On the other hand,~$\{\eta_n\}_n$ is a~basis, and hence the condition is satisfied for any finite partitions.
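The orbit-space formulation above can be made concrete for small $n$. The sketch below (a toy illustration) generates the orbit of a vertex of the cube $\{0,1\}^{2^n}$ under the group of automorphisms of the rooted dyadic tree of depth $n$, acting on words of length $2^n$ by recursively permuting the two halves, and evaluates the normalized Hamming distance between two orbits.

```python
def tree_orbit(word):
    """Orbit of a 0-1 word of length 2**n under the automorphism group
    of the rooted dyadic tree of depth n, acting on the leaves.

    An automorphism may independently swap the two subtrees at every
    internal node, so the orbit is built recursively: combine the
    orbits of the two halves of the word in both orders.
    """
    if len(word) == 1:
        return {word}
    half = len(word) // 2
    left, right = tree_orbit(word[:half]), tree_orbit(word[half:])
    return ({a + b for a in left for b in right}
            | {b + a for a in left for b in right})

def orbit_hamming(u, v):
    """Normalized Hamming distance between the orbits of u and v."""
    best = min(sum(a != b for a, b in zip(w, v)) for w in tree_orbit(u))
    return best / len(u)
```

For example, the words $0110$ and $1001$ lie in one orbit (distance $0$), whereas $0001$ and $1110$ remain at distance $1/2$; standardness requires such distances to become small for most pairs of vertices as $n$ grows.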
The proof of the ``if'' part is more laborious: one should prove that the theorem condition implies the existence of a~sequence of independent partitions~$\{\eta_n\}_n$ associated with~$\{\xi_n\}$ as described above that is a~basis of the measure space. This can be done again by induction, and the main lemma is that the theorem condition can be extended to the quotient filtrations $\{\xi_{n+k}/\xi_k\}_{n=0}^\infty$, $k=1,2,\dots$, hence one can construct a~basis~$\{\eta_n\}_n$ by successively passing to quotient spaces and approximating some basis chosen in advance. This scheme was first used in the proof of the lacunary isomorphism theorem for dyadic sequences~\cite{58}. \subsubsection{Comments on the criterion} \label{sssec5.3.2} The statement of Theorem~\ref{th7} involves the notion of an admissible metric (see~\cite{71}). Recall that an admissible metric in a~Lebesgue space is a~metric defined on a~set of full measure as a~measurable function of two variables such that the $\sigma$-algebra generated by all balls of positive radius in this metric generates the whole $\sigma$-algebra of measurable sets (see~\cite{92}). If a~function of two variables satisfies the conditions of a~metric (nonnegativity, symmetry, triangle inequality) as conditions $\operatorname{mod} 0$ on classes of metrics, as well as the above condition on the $\sigma$-algebra of balls, then the $\operatorname{mod} 0$ class of this function contains an admissible metric (see~\cite{61}). \begin{theorem} \label{th8} Let $\rho$ be an arbitrary admissible metric on a~Lebesgue space with continuous measure.
An ergodic filtration finitely isomorphic to a~Bernoulli one is standard, i.\,e., isomorphic to a~Bernoulli filtration, if and only if the following condition holds: for every $\varepsilon>0$ there exists a~positive integer~$N$ such that for every $n>N$ and some set $$ A_n \subset X/\xi_n, \qquad \mu_{\xi_n}(A_n)>1-\varepsilon, $$ the Kantorovich distance between the conditional measures on arbitrary elements $C_1,C_2\subset A_n$ of the partition~$\xi_n$ in the space~$(X,\rho)$ regarded as a~metric space does not exceed~$\varepsilon$. Whether or not this condition is satisfied does not depend on the choice of~$\rho$. \end{theorem} Note that if in the condition of the last theorem we take $\rho$ to be the cut semimetric corresponding to a~partition, then we obtain the condition of the theorem for functions with finitely many values. Hence the conclusion of the theorem for cut semimetrics follows from the conclusion for functions. On the other hand, every semimetric is the limit of sums of cut semimetrics, hence the conclusion of the theorem for functions implies the conclusion for a~dense family of semimetrics. Finally, it is not difficult to check that the fact that the elements of the partition endowed with the conditional measures approach one another in the introduced metric remains true when passing to the limit along a sequence of metrics.\footnote{More exactly, the Kantorovich metric on the space of probability measures of a~given Borel space is continuous with respect to the weak topology in the space of admissible metrics (regarded as functions of two variables). The same considerations imply that the conclusion of the theorem is independent of the choice of a~metric.} Note, by the way, that the idea of varying metrics and the notion of an admissible metric for a~fixed measure in a~space was suggested by the author in the 1980s and has been repeatedly used, for example, in the definition of scaled entropy (see~\cite{71},~\cite{82}).
The conclusion of the theorem can be equivalently stated as the convergence of distances to zero in measure, etc. \subsection{The standardness criterion for an arbitrary filtration} \label{ssec5.4} How should one modify the condition of Theorem~\ref{th6} to obtain a~standardness criterion for an arbitrary locally finite filtration? In fact, the condition for the general case reduces to a~mixture of conditions on each finitely Bernoulli component of the filtration. We will need the finite partitions~$\delta_n$ on the spaces $(X/\xi_n, \mu_{\xi_n})$ introduced when defining minimality in Sec.~\ref{ssec5.1}. Recall that two elements~$C_1$,~$C_2$ of the partition~$\xi_n$ lie in the same element of~$\delta_n$ if the finite filtration~$\{\xi_k\}_{k=1}^{n-1}$ induces on~$C_1$,~$C_2$ isomorphic filtrations, or, equivalently, if these elements~$C_1$,~$C_2$ are isomorphic as trees endowed with the conditional measures. It is essential that the partition~$\delta_n$ is determined only by the conditional measures. For homogeneous filtrations, this is the trivial partition of the space $X/\xi_n$, and for minimal graphs, this is the partition into singletons at each level. Then the statement in the $\varepsilon$-language is as follows. \begin{theorem} \label{th9} Let $\tau=\{\xi_n\}_{n=0}^{\infty}$ be an arbitrary locally finite filtration in a~Lebesgue space $(X,\mu)$ with continuous measure.
The filtration~$\tau$ is isomorphic to a~standard filtration if and only if the following condition holds: for every $\varepsilon >0$ there exists~$N$ such that for every $n>N$ the partition of~$X/\xi_n$ into the types of elements of~$\xi_n$ isomorphic with respect to the previous fragment of the filtration, $$ X/\xi_n=\bigcup_{i=1}^{k_n} A^n_i, \qquad \{A^n_i\}_i=\delta_n, $$ has the following property: \begin{align*} &\text{there exists~$D$}, \ D\subset X/{\xi_n},\ \mu_{\xi_n}(D)>1-\varepsilon, \\ &\qquad\text{such that} \ \sup_{C_1,C_2 \in A^n_i\cap D} d_f(C_1,C_2)<\varepsilon, \end{align*} in other words, inside each element of the partition~$\delta_n$, the distances between most elements of the partition~$\xi_n$ are small with respect to the metric~$d_f$ or~$d_{\rho}$. \end{theorem} It should be clear from this statement in what sense standardness is a~generalization of independence. In fact, it differs from Theorem~\ref{th8} (i.\,e., the theorem for finitely Bernoulli filtrations) only in that the pairwise distances should be small only for \textit{most} pairs of isomorphic elements, and not for all such pairs. Thus the proof of the ``if'' part, which is the main part of the theorem, does not differ from the previous case. However, checking the condition is by no means easy. It makes sense to state the ``only if'' part of the criterion as a~separate proposition. \begin{proposition} \label{pr5} 1) The filtrations satisfying the standardness criterion form an invariant class, i.\,e., a~filtration isomorphic to a~filtration satisfying the standardness criterion also satisfies this criterion. \nopagebreak 2) A~minimal filtration satisfies the standardness criterion. \end{proposition} \begin{proof} The first claim directly follows from the statement of the criterion. The proof of the second claim breaks into two cases. In the first (obvious) case, assume that a~minimal multigraph is a~graph, i.\,e., has no multiple edges. 
Then the partition into the types of elements of the partitions~$\xi_n$ coincides with the partition into cylinder sets, and the $\sigma$-algebra generated by all complements coincides with the whole $\sigma$-algebra. In the second case, the partition into types reduces to Bernoulli components, for which the ``only if'' part of the criterion is proved. The proposition is proved. \end{proof} The standardness criterion is of interest in the case of graphs essentially different from minimal ones. For canonical equipments, i.\,e., central measures, the criterion can be somewhat simplified, and in this simplified form it was stated in our previous papers; but for filtrations that are not semihomogeneous, the simplified statement is not equivalent to the general one given above, and it is in this sense that the term ``standardness'' is used. We will return to this question in connection with the so-called internal metric (see~\cite{82},~\cite{79},~\cite{78},~\cite{81}, and Sec.~\ref{sec7}). \subsection{The martingale form of the standardness criterion and intermediate conditions} \label{ssec5.5} It is natural to ask how, given a~Markov, or even arbitrary, one-sided random process, one can determine whether or not its tail filtration is standard. The problem is to state the standardness criterion in terms of the process. The most natural form of the answer to this question, as well as to other questions related to the interpretation of weakenings of the independence condition in the theory of random processes, takes the form of a~strengthening of the martingale convergence theorems. The classical theorem for a~random process~$\{x_n\}_{n \leqslant 0}$ in the form we need reads as follows.
\textit{If the tail filtration of the process (i.\,e., the intersection of the past $\sigma$-algebras) is trivial, $$ \bigcap_n{\mathfrak A}_n=\mathfrak N, $$ then for every real functional~$F$ in~$k+1$ coordinates of this process, the conditional distributions of~$F$ given a~fixed past (starting from $-n-1<-k$) converge almost everywhere to the unconditional distribution of this functional: \begin{align*} &\lim_n \operatorname{Prob}(F(x_0,x_{-1},\dots,x_{-k})\geqslant t\mid x_{-n-1},\dots)\\ &\qquad= \operatorname{Prob}(F(x_0,x_{-1},\dots,x_{-k})\geqslant t),\qquad t \in \mathbb R; \end{align*} in other words, as $n$ tends to infinity, the conditional measures on~$k+1$ coordinates of the process weakly converge almost everywhere (or in measure) to the corresponding unconditional measures.} Here it is essential that~$k$ is fixed and $n\to \infty$. Can~$k$ be allowed to go to infinity along with~$n$? Literally, this does not make sense, since the limit of the conditional measures for growing~$k$ is not defined, but one may speak about the \textit{conditional measures approaching one another as~$n$ goes to infinity.} For this, one should define some metric in the space of conditional measures for a~given~$k$ and require, for instance, that the expectation of the distances between the conditional measures with respect to different conditions converge to zero. Here the choice of a~metric plays a~crucial role, and the classes of processes satisfying such requirements depend substantially on this choice. But in any case, the validity of conditions of this type shows that the process is in a~sense similar to a~Bernoulli process. Let us state the standardness criterion in terms of convergence of a~sequence of conditional measures (i.\,e., measures on spaces of measured trees); it can be regarded as a~``martingale'' definition of standard random processes.
\begin{theorem}[{\rm(Martingale standardness criterion)}] \label{th10} Consider an arbitrary (not necessarily Markov) random process~$\{x_n\}_{n<0}$ with values in~$[0,1]$ (or in an arbitrary Borel space). Consider its tail filtration~${\mathfrak A}_n$, $n=0,1,2,\dots$, where~${\mathfrak A}_n$ is the $\sigma$-algebra generated by the random variables~$x_k$, $k\leqslant -n$, $n=0,1,2,\dots$\,. Assume that the filtration is ergodic, i.\,e., the intersection of the $\sigma$-algebras satisfies the zero--one law: $$ \bigcap_{n=0}^{\infty}{\mathfrak A}_n=\mathfrak N. $$ Take either -- an arbitrary cylinder function~$f$ on the space of trajectories of the process, or -- an arbitrary metric~$\rho$ on the space of trajectories of the process. Fix two trajectories~$C_1$,~$C_2$ of the process for time from~$-\infty$ to~$-n$ with isomorphic conditional measures~$\mu^{C_1}$,~$\mu^{C_2}$ on the finite segments $$ (x_0,x_{-1},x_{-2},\dots,x_{-n}|C_1)\quad\text{and}\quad (x_0,x_{-1},x_{-2},\dots,x_{-n}|C_2), $$ i.\,e., two points of the quotient space $X/\xi_n$ lying in the same element of the partition~$\delta_n$ (see the standardness criterion, Theorems~\ref{th7} and~\ref{th8}), and consider the trees of trajectories corresponding to the elements~$C_1$ and~$C_2$ endowed with the conditional measures on the leaves. Then the filtration is standard if and only if the following condition is satisfied: $$ \lim_{n \to \infty}\,\int_{X/\xi_n}\,\int_{X/\xi_n} d_f(\mu^{C_1},\mu^{C_2})\,d\mu_{\xi_n}\,d\mu_{\xi_n}=0 $$ (in the case of a~metric, one should replace~$d_f$ with~$d_{\rho}$) for all measurable functions~$f$ on the space of trajectories of the process (it suffices to check this only for cylinder functions with finitely many values), or for all admissible metrics (actually, it suffices to check this only for one metric). 
\end{theorem} This statement is a~verbatim reproduction of the standardness criterion, but it leads to an important conclusion: \textit{between the ordinary martingale convergence theorem, which holds for all processes determining an ergodic filtration (i.\,e., regular processes), on the one hand, and the standardness criterion, whose applicability is narrower, on the other hand, there is a~natural spectrum of intermediate conditions on processes.} They are determined by the relation between the moment~$n<0$ that fixes a~condition in the past and the moment~$k(n)>n$ up to which we compare the conditional measures on trees of rank~$|k(n)|$. If~$k(n)$ does not depend on~$n$ and $n\to \infty$, this is the ordinary martingale convergence theorem, and if $k(n)=n$, this is the standardness criterion. Hence standardness is the greatest possible strengthening of the zero--one law. The author knows nothing about the intermediate cases, which can be called ``higher zero--one laws,'' but one can expect that processes for which the conditions hold for different intermediate growth rates of the sequence~$k(n)$ have different properties. The most interesting part of the theory is the study of filtrations that are finitely Bernoulli but not isomorphic to Bernoulli filtrations. Deviations from the conditions of the standardness criterion can be measured by various invariants, for instance, the entropy of filtrations. We will return to this problem in Sec.~\ref{sec6}. The simplest example of an inhomogeneous finitely Bernoulli nonstandard filtration is the Markov chain with two states and the transition matrix $$ \begin{pmatrix} p & q \\ q & p \end{pmatrix},\qquad p \ne q,\quad p+q=1,\quad p,q>0 $$ (see~\cite{82}); the cylinder set $\{x=\{x_n\}:x_1=0\}$ does not satisfy the standardness criterion. This filtration is a~double cover of the Bernoulli filtration with probabilities~$(p,q)$.
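A small numerical illustration of this example (with the hypothetical choice $p=0.8$, $q=0.2$): the powers of the transition matrix converge to the matrix with both rows equal to $(1/2,1/2)$ at the rate $|p-q|^n$, so the chain is mixing and the zero--one law holds; the nonstandardness asserted above is invisible at this level, since it comes from the symmetry interchanging the two states rather than from any failure of ergodicity.

```python
def mat_mul(a, b):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_pow(m, n):
    """n-th power of a 2x2 matrix by repeated multiplication."""
    r = [[1.0, 0.0], [0.0, 1.0]]
    for _ in range(n):
        r = mat_mul(r, m)
    return r

p, q = 0.8, 0.2                # hypothetical values with p != q
P = [[p, q], [q, p]]
Pn = mat_pow(P, 40)
# Rows of P^n approach (1/2, 1/2): the deviation equals (p - q)^n / 2,
# since the eigenvalues of P are 1 and p - q.  Mixing alone does not
# make the tail filtration standard.
```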
Conversely, the Fibonacci Markov chain with the transition matrix $$ \begin{pmatrix} \lambda^2 & \lambda \\ 1 & 0 \\ \end{pmatrix},\qquad \lambda^2+\lambda=1, $$ which is not locally Bernoulli, determines a~standard homogeneous tail filtration, since the partitions~$\delta_n$, $n=1,2,\dots$, are the partitions into singletons of each level of the graph. \subsection{The spaces of filtrations, Markov compacta, and grad\-ed multigraphs} \label{ssec5.6} The theorem on the Markov property of filtrations allows one to take the space of equipped graded multigraphs, or, equivalently, the space of Markov compacta with Markov measures, as the space of all locally finite filtrations in a~Lebesgue space. In other words, we give the following definition. \begin{definition} \label{def8} We identify the space of locally finite filtrations in a~continuous Lebesgue space with the space of equipped graded multigraphs (respectively, with the space of Markov chains with cotransition probabilities) endowed with a~fixed continuous measure that agrees with the equipment (respectively, with the cotransitions). \end{definition} With this definition, a~filtration is identified with the tail filtration of the path space of a~multigraph (respectively, with the space of trajectories of a~Mar\-kov chain). We emphasize that an important role in the convention to identify the collection of filtrations in measure spaces and the collection of equipped graded multigraphs is played by the measure on the path space that agrees with the equipment: the equipped graph alone does not determine, for instance, whether the filtration is ergodic, this is determined by the measure. For definiteness, we will speak about multigraphs; reproducing all statements in terms of Markov chains brings nothing new. As we have seen, an equipped multigraph can be defined by a~sequence of finite matrices whose nonzero entries are the probabilities of cotransitions of adjacent vertices. 
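A concrete constant sequence of such matrices is provided by the Fibonacci matrix from the example above. The short check below (purely illustrative, with $\lambda=(\sqrt{5}-1)/2$) verifies that the identity $\lambda^2+\lambda=1$ makes the matrix stochastic and that its powers converge to the stationary row $\pi=(\lambda,\lambda^2)$, the second eigenvalue being $-\lambda$.

```python
import math

lam = (math.sqrt(5) - 1) / 2   # the positive root of x**2 + x = 1
P = [[lam ** 2, lam], [1.0, 0.0]]

# The matrix is stochastic precisely because lam**2 + lam = 1.
row_sums = [sum(row) for row in P]

# Power iteration: rows of P**n converge to pi = (lam, lam**2), the
# stationary distribution (pi * P = pi); the second eigenvalue is -lam,
# so convergence is geometric with ratio lam < 1.
R = [row[:] for row in P]
for _ in range(60):
    R = [[sum(R[i][k] * P[k][j] for k in range(2)) for j in range(2)]
         for i in range(2)]
```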
Each such sequence determines an equipped graded multigraph~$\Gamma$. The path space of~$\Gamma$ will be denoted by~$T(\Gamma)$. In addition, given a~multigraph~$\Gamma$, we introduce the simplex~$\Sigma_{\Gamma}$ of all probability measures on the space~$T(\Gamma)$ that agree with the equipment. \textit{Denote by~$\mathfrak F$ the space of all pairs $(\Gamma,\Sigma_{\Gamma})$ where~$\Gamma$ runs over the set of all infinite equipped graded locally finite multigraphs and $\Sigma_{\Gamma}$ are the corresponding simplices of measures.} This space can also be viewed as the space of all locally finite filtrations of a~continuous Lebesgue space, and simultaneously as the space of all locally finite Markov compacta with Markov measures. We endow this space with the weak topology, i.\,e., the topology of an inverse spectrum. We may consider the following important subspaces of this space. 1. \textit{The subspace of ergodic multigraphs}, i.\,e., multigraphs for which there exists at least one continuous ergodic measure on the path space that agrees with the equipment. This class does not contain, for example, trees and other decomposable multigraphs. It is of the greatest interest for the needs of the theory of filtrations and ergodic theory. The combinatorial structure of such multigraphs remains to be completely described. In fact, this is a~question about hyperfinite equivalence relations on a~standard Borel space (the path space) with a~fixed cocycle of measures (cf.~\cite{6},~\cite{78},~\cite{52}). 2. \textit{The subspace of multigraphs with canonical equipments} (the tail filtration in this case is homogeneous, i.\,e., the conditional measures on the elements~$\xi_n$, $n=1,2,\dots$, are uniform).
The geometry of the projective limit of simplices and the limiting simplex of central measures ${\mathfrak F}_0$ are quite special; in the dyadic case, the limiting simplex is the simplex of unordered pairs, or the so-called ``tower of dyadic measures'' (see Fig.~\ref{fig11} in Sec.~\ref{ssec7.3}). In the language of Markov chains (in the stationary case), central measures are measures of maximal entropy. The canonical equipment is determined by the very structure of the graph, which is why this space is of special interest. 3. \textit{The space~${\mathfrak F}^{\rm st}$ of stationary filtrations} (i.\,e., filtrations invariant under the shift, which is the passage to the quotient by the first partition: $\tau\equiv \{\xi_n\}_{n=0}^{\infty}\simeq \{\xi_n/\xi_1\}_{n=1}^{\infty}\equiv \tau'$), or, equivalently, \textit{the space of stationary multigraphs} (i.\,e., multigraphs with constant cotransition matrices) with an additional symmetry, etc. The analysis of this space is of special importance for ergodic theory (of one automorphism). 4. In addition, in all cases one can give up considering measures on path spaces and pass to the Borel theory of graphs and filtrations. This means that we study only the space~$\mathscr G$, without fixing the second component in pairs $(\Gamma,\Sigma_{\Gamma})$, i.\,e., a~simplex of measures~$\Sigma_{\Gamma}$, and consider only cotransitions. (See Sec.~\ref{sec7}.) It is more important to analyze the space~${\mathfrak F}$ from the point of view of the problem of finite and general isomorphism. Let us select in~${\mathfrak F}$ filtrations corresponding to minimal multigraphs (minimal Markov chains) with an equipment and a~measure on the path space. By the previous analysis, this is a~subspace of~${\mathfrak F}$ consisting of the minimal locally finite filtrations.
By Theorem~\ref{sec5}, this is the \textit{space of complete invariants of filtrations up to finite isomorphism: no two different standard filtrations are finitely isomorphic, and any filtration is finitely isomorphic to a~standard one}. Our definitions immediately imply the following assertions. \begin{proposition} \label{pr6} 1) The space of all equipped graded multigraphs (respectively, the space of all Markov compacta with cotransitions) is fibered over the space of minimal equipped graphs (respectively, over the space of minimal Markov compacta with cotransitions). Thus the space of all filtrations is fibered over the space of minimal filtrations. The space of minimal equipped graphs is dense in the space of filtrations.\footnote{At the same time, in the space, say, of dyadic filtrations there is only one minimal ergodic filtration and only one minimal graph. In general, one can introduce a~more complicated topology in which the set of minimal filtrations is closed.} 2) The set of standard ergodic filtrations is dense in the space of all filtrations. \end{proposition} Basically, the space of all minimal multigraphs (Markov chains) is the space of all locally finite types of filtrations. Thus, in contrast to the problem of finite isomorphism, the problem of isomorphism of locally finite filtrations is wild: passing to the quotient of~${\mathfrak F}$ by the classes of isomorphic filtrations does not result in a~space with a~separable Borel structure. Moreover, the latter remains true even if we consider the problem of isomorphism of ergodic dyadic filtrations, i.\,e., in one (in fact, an arbitrary nontrivial) class of finite isomorphism. The situation is similar to many isomorphism problems in algebra and analysis, e.\,g., the problem of isomorphism of actions of infinite groups with invariant measure, or the problem of classification of irreducible unitary representations of the infinite symmetric group, etc. 
This question arises in the theory of dynamical systems, representation theory, asymptotic combinatorics, etc. The wildness of the classification problem follows from the fact that the classification includes a~typical wild problem; we will return to this in another paper. In what follows, we, on the one hand, propose to consider the isomorphism problem for some classes of filtrations close to standard ones and, on the other hand, suggest types of invariants for general filtrations substantially different from standard ones. \section{Nonstandardness: examples and a~sketch of a~general theory} \label{sec6} The most difficult problems of the theory of filtrations are related to nonstandard filtrations. But, as follows from the standardness criterion, nonstandardness for inhomogeneous filtrations can have different causes. Recall that a~minimal filtration is uniquely determined by its finite type. The standardness of a~filtration means that it does not differ much from its finite type; roughly speaking, its homogeneous parts must be standard, i.\,e., Bernoulli. Thus the nature of true nonstandardness shows itself already in homogeneous (and, moreover, even dyadic) filtrations. As we have already mentioned, the group of symmetries of the filtration itself and its quotients by the partitions~$\xi_n$ with arbitrarily large~$n$ must be infinite, otherwise the filtration cannot have a~complicated structure at infinity. In this section we continue constructing Markov chains and graphs for nonstandard dyadic filtrations. Actually, these chains with finite state spaces, i.\,e., path spaces of graded multigraphs, should be regarded as an approximation or, more exactly, a~simple model of continual, as well as more complicated, Markov chains. But perhaps the most instructive thing in these constructions is that the graphs obtained from these approximations are interesting in themselves and, possibly, of even more importance than the approximation. 
This fact should be emphasized, because these graded graphs, regarded as Bratteli diagrams, represent important $C^*$-algebras which seem never to have been studied in the literature. For example, the algebras generated by the graph of words for the Abelian group~${\mathbb Z}^d$ (see below) look mysterious. The method of proving nonstandardness contained in the criterion becomes clearer if one endows it with a~combinatorial meaning. This is exactly what we do below in several examples. In fact, the problem is reduced to the analysis of a~measure on the orbits of an action of the group of automorphisms of a~tree on some subsets of configurations. The most transparent case is the action of this group on the set of vertices of the cube~$2^{2^{n}}$ with the uniform measure or with an arbitrary product measure with equal factors. The standardness criterion requires that the measure concentrates (with respect to the Hamming metric) in a~neighborhood of one orbit, and we need advanced methods of proving this fact. Unfortunately, such clarity has not yet been achieved in the examples given below. \subsection{Graphs of random walks over orbits of groups, graphs of words} \label{ssec6.1} The construction of a~Markov approximation, whose existence follows from Theorem~\ref{th5}, as a~Markov chain with finite state spaces is a~rich source of new graded graphs. On the other hand, these constructions allow one to study filtrations given a~priori as Markov filtrations with continual state spaces. Specializing the proof of Theorem~\ref{th5}, we will give an explicit construction of a~Markov chain with finite state spaces and obtain a~``graphic'' interpretation of the filtration. We mean models of random walks over the orbits of an action of a~countable group in a~space with continuous invariant measure and the tail filtration of the corresponding Markov chain.
This subject is sometimes called the theory of \textit{random walks in random scenery} (RWRS), and it is a~further step in the theory of random walks on groups, which is currently the subject of hundreds of works. It should be noted that this relatively new field deals with problems of different nature: for instance, problems related to boundaries of walks, entropy, etc.\ (see, e.\,g., the circle of problems in~\cite{26}) are replaced with problems about the relation between the action of the group and properties of filtrations (see~\cite{77}). The question of when the corresponding filtrations (even dyadic ones) are standard is not sufficiently studied. It is only known that the filtration of the Markov process~$(T,T^{-1})$, that is, the random walk over the orbits of an automorphism~$T$, is not standard if the entropy is positive, $h(T)>0$ (see~\cite{22}). For automorphisms~$T$ with discrete spectrum, this filtration is presumably standard. For the rotation~$T_{\lambda}$ of the circle, this is proved by the author (unpublished) for well-approximable~$\lambda$, and by W.~Parry (unpublished) in full generality. But these are only first examples of random walks over orbits of actions; nonsimple walks (i.\,e., walks not only over generators) offer much more possibilities. One can hope that the machinery of graphs will provide new tools for investigating such problems. We begin the construction with the following general example. Let $G$ be a~group and $S$, with $|S|=s<\infty$, be a set of generators of~$G$; consider the set~$\mathbf{k}^{G}$ where $$ \mathbf{k}=\{0,1,\dots,k-1\}; $$ let $m$ be a~Bernoulli measure on~$\mathbf{k}^{G}$, i.\,e., the direct product $\prod_G p$, where $$ \{p_g\},\qquad g\in S,\quad \sum_g p_g=1, \quad p_g\geqslant 0, $$ is a~probability vector of dimension~$s$. Assume that there is a~free action of~$G$ on a~Lebesgue space $(X,\mu)$ with continuous invariant measure. 
Consider the (one-sid\-ed) Markov process~$(y_n)_{n\leqslant 0}$ with the state space~$X$ and probabilities of transitions (since we consider negative moments of time) equal to the probabilities of cotransitions: $$ \operatorname{Prob}(y|x)=\begin{cases} p_g& \text{if} \ y=T_gx, \ g\in S, \\ 0 & \text{otherwise}; \end{cases} $$ the measure~$\mu$ is an invariant initial measure of the Markov process; we also have the Markov measure~$M$ defined on the space~$\mathscr X$ of all trajectories of the process. Consider the tail filtration $\tau=\{\mathfrak A\}_{n=0}^{\infty}$ on the space $(\mathscr X,M)$ determined by this stationary Markov process. It is, obviously, locally finite (since~$S$ is finite), but the Markov process has a~continual state space. However, by Theorem~\ref{th5}, the corresponding filtration can be defined as the tail filtration in the path space of some graded graph, or as the tail filtration of a~Markov chain with \textit{finite state spaces}. Certainly, this Markov chain will no longer be stationary, for instance, since the cardinality of state spaces, i.\,e., the number of vertices on levels of the graph, grows. But, surely, the filtration itself remains stationary, only the approximation is not shift-invariant. Of course, this chain is not unique, and, as was already mentioned, in general there is no canonical choice of such a~chain. We will give only a~typical example of constructing such a~graph. Let $B_n$ be the set of all elements of the group~$G$ that can be written as words of length at most~$n$ in the generators~$S$; here $B_1=S$. Observe that $$ gB_{n-1}\subset B_n\quad\text{if } g \in S. $$ We introduce the following series of graded graphs $\Gamma_{G,S}\equiv \Gamma$. 
The set~$\Gamma_k$ of vertices of the $k$th level of the graph~$\Gamma$, $k\geqslant 1$, consists of all functions on the ball~$B_k$ with values in~$\mathbf{k}$; an edge joins a~vertex $v\in \Gamma_k$ with a~vertex $u\in \Gamma_{k-1}$ if~$u$, regarded as a~function on~$B_{k-1}$, is the restriction of~$v$, regarded as a~function on~$B_k$, to $B_{k-1} \subset B_k$. An equipment on the edge $(u,v)$ is defined in this case as the product of the numbers~$p_g$ corresponding to the product of generators completing the word~$u$ to the word~$v$. Thus we have constructed an equipped graded graph~$\Gamma_{G,S}$. We will call it the \textit{graph of words of the given group with generators $(G,S)$}. This construction admits useful generalizations, e.\,g., one can consider the graph of words corresponding to the sequence of balls~$B_{n_k}$, where $\{n_k\}$ is a~sequence of positive integers going to infinity. We define a~homomorphism of the space~$\mathscr X$ onto the path space~$T(\Gamma_{G,S})$ of the constructed graph as follows: a~trajectory of the process is a~sequence of configurations~$\{\phi_n\}_{n<0}$ on the group. Consider the map $$ \Phi\colon \{\phi_n\}_n \to \{\phi_n|B_n\}_n. $$ \begin{theorem} \label{th11} 1) The image of the space~$\mathscr X$ under the homomorphism~$\Phi$ is the path space of the graph of words~$\Gamma_{G,S}$. 2) The homomorphism~$\Phi$ agrees with the equipment, i.\,e., the parameters of the Markov chain (transition probabilities) coincide with the cotransition probabilities on the path space. Hence~$\Phi$ maps the measure~$M$ to a~measure on the path space that agrees with the equipment. 3) The homomorphism~$\Phi$ of the space~$\mathscr X$ to the path space~$T(\Gamma_{G,S})$ defines a~homomorphism of the space of configurations, i.\,e., trajectories of the Markov chain, onto the path space of the graph, and the filtration is homomorphically and measure-preservingly mapped onto the tail filtration of the path space. 
\end{theorem} \begin{proof} Follows immediately from our definition of the graph and its structure. \end{proof} The answer to the important question of \textit{when this homomorphism is an isomorphism} depends on the group, on the choice of a~sequence~$\{n_k\}_k$, and on the transition probabilities, more exactly, on whether the shifts of the balls in the random walk cover the whole group. In the general case, one should increase the size of the balls, which dictates the choice of the sequence~$\{n_k\}_k$. We will return to this question later, and now, in the upcoming sections, consider concrete examples. It is useful to observe that this construction can also be used for semigroups; for instance, one can take~$S$ to be a set of generators without inverses. The resulting graph corresponds to the filtration of the process constructed from the group, but it is simpler, and it sometimes suffices to analyze this simpler graph in order to study the process itself. Below we give examples of applying this scheme. \subsection{Walks over orbits of a~free group} \label{ssec6.2} Historically the first example of a~nonstandard dyadic filtration~\cite{59} involved the free group $G=F_2$ with two generators $(a,b)$ and the space $X=2^{F_2}$ endowed with a~Bernoulli measure (the infinite product of the measures $(1/2,1/2)$). We considered the random walk over the trajectories (orbits) of the Bernoulli action of the free semigroup with two generators, i.\,e., the Markov process with the transitions $y(g)\to y(ag)$ and $y(g)\to y(bg)$ having the probabilities $(1/2,1/2)$. Let us cite the paper~\cite{59}: ``\textit{The constructed Markov process has the following paradoxical property, which can be loosely stated as follows. Assume that the process of development of mathematics is the same as the constructed Markov process. 
If each year, starting from~$-\infty$, \textit{Abstract Journal of Mathematics}\footnote{Which, unfortunately, has not survived to modern times.} publishes a~volume containing all essentially new results (i.\,e., results independent of the previous ones) discovered in this year, then reading all these volumes does not allow one to fully recover the picture of mathematics for each given year. Of course, it is understood that in year~$-\infty$ there were no discoveries.}'' Let us add that the above is true for any way of writing the volumes, i.\,e., for any choice of an independent complement to the previous volumes. In other words, there are essential events (i.\,e., events of positive measure) that occurred in the given year and are not measurable with respect to any system of independent complements to the partitions of this dyadic filtration. The construction of the graph described above for the general case determines an approximation of this process by finite Markov chains. One of the corresponding multigraphs looks as follows: the number of vertices of level~$n$ is equal to~$2^{2^n}$, which corresponds to the set of all functions on all words of length~$n$ with values~$0$,~$1$. Every function-vertex~$v$ is joined by an edge with two function-vertices~$u$ and~$u'$ of the previous level if~$u$ (respectively,~$u'$) is the function on words of length~$n-1$ obtained from~$v$ by restricting it to the indices beginning with~$(0,*,*,\dots,*)$ (respectively,~$(1,*,*,\dots,*)$). But since functions are arbitrary, this means that the set of vertices of the $n$th level is the set of all pairs of vertices of the $(n-1)$th level.
In other words, the graph of words in this case is the \textit{graph of ordered pairs}~\cite{82},~\cite{91} (see Fig.~\ref{fig9}).\footnote{The possibility of using this graph in the study of random walks on free semigroups and the corresponding filtrations was not observed in~\cite{91}; this link will be considered in another paper.} From here it is not difficult to deduce that the corresponding filtration is nonstandard (see below) and to obtain a~number of other properties. \begin{landscape} \begin{figure}[h] \vspace*{2cm} \hspace*{-12cm} \centering \includegraphics[scale=0.9]{ver09} \vspace*{3mm} \caption{The graph of ordered pairs} \label{fig9} \end{figure} \end{landscape} Exactly the same numerical analysis was used in~\cite{59} to prove the nonstandardness of filtrations arising in the study of actions of locally finite groups, e.\,g., a~Bernoulli action of the group of roots of unity of orders of the form~$2^n$, or a~direct sum of groups of order~$2$. The main argument proving the nonstandardness in this case was that the orders of orbits of the groups of automorphisms of the dyadic tree in these examples were substantially smaller than the order sufficient for the measure to concentrate near one orbit. This argument was used by the author to introduce the notion of the entropy of a~dyadic filtration. After the author's paper~\cite{59} was published, where nonstandard dyadic filtrations and the standardness criterion appeared for the first time and the notion of entropy of dyadic filtrations was defined (see~\cite{69}), A.\,M.~Stepin observed that in examples related to actions of locally finite groups (like the infinite sum of groups of order~$2$), one can also use the entropy of the action as an invariant of a~trajectory dyadic sequence. In~\cite{64},~\cite{65},~\cite{69}, an estimate is given on the growth of the orders of the groups for which the entropy of the action coincides with the entropy of the filtration.
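The identification of this graph of words with the graph of ordered pairs can be checked directly on small levels. The following Python sketch is only an illustration (the encoding of a function-vertex as a bit tuple in lexicographic word order, and the names \texttt{level} and \texttt{predecessors}, are our choices, not from the text): restricting a level-$n$ function to the words beginning with~$0$ or with~$1$ amounts to taking the two halves of the tuple, so the map $v\mapsto(u,u')$ is a bijection onto ordered pairs of level-$(n-1)$ vertices.

```python
from itertools import product

def level(n):
    """Level-n vertices: 0/1-valued functions on the 2**n binary words
    of length n, encoded as bit tuples in lexicographic word order."""
    return list(product((0, 1), repeat=2 ** n))

def predecessors(v):
    """The two edges down from v: the restrictions of v to words beginning
    with 0 and with 1; in lexicographic order, the two halves of the tuple."""
    half = len(v) // 2
    return v[:half], v[half:]

n = 2
V = level(n)
assert len(V) == 2 ** (2 ** n)  # 2^{2^n} vertices on level n

# v -> (u, u') is a bijection onto ordered pairs of level-(n-1) vertices,
# i.e., this graph of words is exactly the graph of ordered pairs.
prev = level(n - 1)
assert {predecessors(v) for v in V} == set(product(prev, repeat=2))
```

The doubly exponential growth $2^{2^n}$ of the number of vertices checked here is the count that enters the entropy argument above.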
Further research~\cite{22} showed that this estimate is sharp. However, the entropy of filtrations in the sense of~\cite{62},~\cite{69} is of a~much more general nature. Let us also add that the tree of paths~$T_u$ with root at a~given vertex~$u$ of level~$n$ together with the sequence of zeros and ones on the leaves of the tree saying at which of the two vertices~$a$,~$b$ of the first level the given path ends, is exactly what in~\cite{59} was called the \textit{universal projection} of the cylinder set~$\xi_a$ or~$\xi_b$ of vertices of the first level to the initial fragment of the filtration, i.\,e., to the first $n$ partitions. The fact that the universal projection does not stabilize, i.\,e., that for different vertices~$u$ of level~$n$ the trees do not approach one another in measure, means, according to the standardness criterion, exactly that the filtration is nonstandard. To conclude this example, note that although (by the theorem on a~Mar\-kov realization) filtrations generated by different RWRS actions can be represented as filtrations of Markov chains with finitely many states, an explicit form of this realization is known only in a~small number of examples. This is also true for adic realizations of the corresponding groups and subgroups. Such a~realization is not even known for all Abelian groups, which is discussed in the next section. \subsection{Walks over orbits of free Abelian groups} \label{ssec6.3} Walks over orbits of infinite Abelian groups provide an example of a~filtration that is simpler in one respect and more complex in another. We will also consider a~model example.
If $G={\mathbb Z}^d$ and $S=(e_1,e_2,\dots,e_d)$, where $e_i$ are the unit coordinate vectors, then the corresponding graph looks as follows: the set of vertices of level~$n$ is the set of all functions on the integer simplex $$ \Sigma_d^n=\biggl\{\{r_1,r_2,\dots,r_d\}\colon \sum_i r_i=n,\ r_i\in {\mathbb Z}_+\biggr\} $$ with values~0 or~1, and an edge joins a~vertex-function~$u$ with the vertex-functions of the previous level that are the restrictions of~$u$ to the subsimplices obtained by removing one of the faces. Consider the case $d=1$, $G=\mathbb Z$, $S=\{-1,+1\}$; the corresponding graph is the \textit{graph of words for~$\mathbb Z$}. The vertices of the $n$th level are the words of length~$n$: $$ (\epsilon_1,\epsilon_2,\dots,\epsilon_{n-1},\epsilon_n),\qquad \epsilon_i \in \{0,1\}, $$ and the edges lead to the following two subwords: $$ (\epsilon_1,\epsilon_2,\dots,\epsilon_{n-1}),\quad (\epsilon_2,\dots,\epsilon_{n-1},\epsilon_n),\qquad n=2,3,\dots\,. $$ The graph of words for~$\mathbb Z$ is an Abelian analog of the graph of ordered pairs. This is a~dyadic (multi)graph, the $n$th level has~$2^n$ vertices, each having two predecessors and four successors (see Fig.~\ref{fig10}). The~natural central measure is the uniform measure on the vertices of each level. \begin{figure}[h!] \centering \includegraphics{ver10.pdf} \caption{The graph of binary words for the group $\mathbb{Z}$} \label{fig10} \end{figure} The adic shift on this graph is quite interesting; it somewhat resembles the $(T,T^{-1})$-transformation (where $T$ is a~Bernoulli automorphism). In the well-known paper~\cite{27}, it is proved that this transformation is not Bernoulli, and not even weakly Bernoulli. But here we are interested in the properties of the filtration; the relation of these properties to ergodic characteristics of transformations is not yet sufficiently studied.
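The combinatorics of the graph of binary words for~$\mathbb Z$ is easy to verify on small levels. A minimal Python sketch (the function names are ours, chosen for illustration); it checks the counts stated above: $2^n$ vertices on level~$n$, two predecessors and four successors per vertex, counted with multiplicity.

```python
from itertools import product

def level(n):
    """Level-n vertices of the graph of words for Z: binary words of length n."""
    return [''.join(w) for w in product('01', repeat=n)]

def predecessors(w):
    """Edges down lead to the two subwords: drop the last or the first letter.
    For constant words both subwords coincide, hence this is a multigraph."""
    return [w[:-1], w[1:]]

def successors(w):
    """Edges up: prepend or append a letter, four in total."""
    return [b + w for b in '01'] + [w + b for b in '01']

n = 3
V = level(n)
assert len(V) == 2 ** n                             # 2^n vertices on level n
assert all(len(predecessors(w)) == 2 for w in V)    # dyadic: two predecessors
assert all(len(successors(w)) == 4 for w in V)      # four successors
# the two descriptions of the edge set are consistent:
assert all(w in successors(u) for w in V for u in predecessors(w))
```

The double edge at constant words (e.g. both subwords of \texttt{000} equal \texttt{00}) is what makes this a multigraph rather than a simple graph.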
The nonstandardness of the filtration for $(T,T^{-1})$ was conjectured by the author and first proved in~\cite{22} (see also~\cite{84}). An extension of the results of~\cite{27} to the groups~${\mathbb Z}^d$ was obtained in~\cite{4}. The nonstandardness of the filtrations for all free Abelian groups with $d \geqslant 1$ and for nilpotent groups was proved in~\cite{84}, as well as the fact that the scaled entropy of these filtrations depends on~$d$; we will return to these graphs and their generalizations to other groups in subsequent papers. \subsection{Lacunarity} \label{ssec6.4} The theorem on lacunary isomorphism~\cite{58} says that every dyadic ergodic filtration~$\{\xi_n\}_n$ is lacunary standard, i.\,e., there exists an increasing subsequence~$\{n_k\}$ of positive integers such that the filtration~$\{\xi_{n_k}\}_k$ is a $\{2^{n_k-n_{k-1}}\equiv r_k\}$-adic standard filtration. The sequences~$\{n_k\}$ for which this is true constitute the so-called \textit{scale}, which is an invariant of the original filtration. The scale of a~standard dyadic sequence consists of all sequences and is called complete. Of course, together with every sequence, the scale contains all sequences cofinal with it (i.\,e., differing from it in finitely many positions); also, the scale is monotone, since a~subsequence of a~standard filtration is obviously standard. It seems that these two properties exhaust the conditions on a~scale. In~\cite{69},~\cite{40}, nonstandard dyadic sequences with various scales are constructed. For instance, $\{\xi_{2k}\}$ can be a~standard 4-adic filtration, while the filtration~$\{\xi_k\}_k$ is nonstandard, etc. By the same method one can prove a~general theorem on arbitrary inhomogeneous filtrations, not necessarily locally finite (see~\cite{82}). \begin{theorem} \label{th12} Every ergodic filtration is lacunary standard. \end{theorem} Since standardness can be violated along an arbitrary sequence, classification of filtrations can hardly be efficient.
However, the role of lacunary theorems is different: they measure the degree of closeness of a~filtration to a~standard one. In~\cite{64}, the notion of an automorphism with a complete (dyadic) scale was introduced; in terms of this paper, this is an automorphism for which there exists a~dyadic realization with a~standard tail filtration (with respect to the given measure).\footnote{In~\cite{64}, there was an attempt to apply the results related to dyadic filtrations (standardness, scale, etc.) to the study of ergodic properties of automorphisms. However, in contrast to such applications to actions of locally finite groups, for groups that are not locally finite the requirement of dyadicity or homogeneous periodicity in the spirit of Rokhlin's lemma complicates all constructions. As the further development of the subject showed, one should pass from homogeneous filtrations to semihomogeneous ones; it is in this way that adic realizations of group actions are constructed, and the notions of standardness and complete scale become more natural.} Later, A.\,B.~Katok~\cite{28} (see also~\cite{29}) introduced and studied the notion of a~standard automorphism as an automorphism that is monotonically equivalent to an automorphism with universal discrete spectrum. It would be interesting to find out whether these notions coincide. \subsection{Scaled and secondary entropy} \label{ssec6.5} Another measure of closeness to standardness is the scaled entropy of a~filtration (and its analog, the scaled entropy of a~group action). Recall that the condition of the standardness criterion is that the metric~$d^n_{\rho}$ on the space of measured trees converges in measure to a~degenerate metric. \begin{proposition} \label{pr7} If a~filtration is ergodic, then the sequence of metrics~$d^n_{\rho}$ either converges in measure to a~degenerate metric, or has no limit. 
\end{proposition} The reason for this simple but important fact is that the existence of a~limit that is not reduced to a~one-point space contradicts the ergodicity, since otherwise nontrivial limiting functionals would appear. Thus in the case of a~nonstandard filtration there arises a~sequence of metric spaces that has no limiting metric space, but one can track the $\epsilon$-entropy of the $n$th space as a~function of~$n$. It turns out that the asymptotics of this sequence for some normalization does not depend on the choice of a~metric, and thus its class is an invariant of the filtration. It is in this way that this invariant was defined in~\cite{65},~\cite{59} in the case of a~dyadic filtration with exponential (Kolmogorov) normalization. In~\cite{69},~\cite{84}, it was defined in the general case and, for reasons related to the stationary case, called secondary entropy. In the sense of this definition and under a correct normalization, the secondary entropy of the filtration corresponding to the transformation $(T,T^{-1})$, by a~theorem from~\cite{23}, equals the entropy~$h(T)$. In~\cite{91}, it was proved that the scaled entropy of the adic automorphism\footnote{The definition of this entropy was given in the author's papers~\cite{73},~\cite{74}, where its main properties were formulated; they were proved later by P.~B.~Zatitskiy~\cite{100},~\cite{101}.} for some measures on the graph of ordered pairs is equal to the entropy (scaling) sequence (with respect to the same measure) of the tail filtration. These theorems should be extended to arbitrary groups. \subsection{A project of classification of nonstandard filtrations} \label{ssec6.6} Kolmogorov's zero--one law (the triviality of the intersection of the $\sigma$-algebras of a~filtration) is the simplest condition on the behavior of a~filtration at infinity.
Standardness, or Bernoulli property, is, on the contrary, the strongest such condition: it guarantees that at infinity there is no complexity, since all conditional distributions converge in the strongest sense. Here, in contrast to the traditional way of measuring the closeness of a~process to a~process of independent random variables, this closeness is expressed not by the rate of convergence to zero of the correlation coefficients or other characteristics, but by the smallness of the difference (in one or another metric) between the tree structures of conditional measures. Thus one can speak about ``higher zero--one laws,'' meaning assertions that conditional distributions approach one another in some metric, which are intermediate between the ordinary zero--one law and standardness (Bernoulli property). As observed in the previous section, such a~scale of conditions apparently exists. Ornstein's very weak Bernoulli (VWB) condition is also a~law of this type, but, as shown by examples, it does not coincide with the standardness condition: in~\cite{16}, an example is given of a~non-Bernoulli process with a~standard ergodic dyadic filtration, and our examples show that a~Bernoulli automorphism can have a~Markov generator with respect to which the past is not standard; another example is given in~\cite{23}. However, these are only examples, and a~more complete analysis of the situation is yet to be carried out. Note also that conditions may depend on what metric on the space of measured trees we choose. Recall that the scheme of the standardness criterion involves considering functions on $X/\xi_n$ with values in the space of measured trees of rank~$n$, and the condition is that for sufficiently large~$n$ the values of the functions approach one another with respect to a~metric on the space of trees, i.\,e., the sequence of functions converges to a~constant in measure. In other words, the images of the measures on the spaces of trees converge to a~$\delta$-measure. 
Instead of functions on the quotient space $(X/\xi_n,\mu_{\xi})$, it is convenient to consider their liftings to the space $(X,\mu)$ itself, and then on $(X,\mu)$ we obtain semimetrics~$\rho_n$ in which the distance between points is the distance between their images, the values of the functions, i.\,e., the corresponding trees. The condition of the standardness criterion is that the sequence of these metrics collapses to a~point, and if this condition is satisfied, then two finitely isomorphic filtrations are isomorphic. If there is no standardness, then the functions do not converge to a~constant in measure, and this means that the sequence $(X,\mu,\rho_n)$, $n=1,2,\dots$, where $\rho_n$ is the sequence of semimetrics, does not converge and there is no limiting metric on $(X,\mu)$. What criteria can we use to establish an isomorphism of filtrations? Necessary conditions are the coincidence of the entropy, of the scale, etc.\ (see~\cite{64}). But can we state sufficient conditions? For this, one should understand what it means for two sequences of (semi)metric spaces without a~limiting space to be asymptotically isomorphic. The conjecture we formulate below is that sometimes finite isomorphism supplemented by a~similar asymptotic isomorphism implies isomorphism. This conjecture reduces the study of all invariants of nonstandard filtrations (like, e.\,g., entropy) to a~unified context. To state this conjecture more precisely, recall the following general idea related to asymptotic isomorphism suggested in~\cite{82}. Consider a~so-called metric triple, or, in another terminology, an admissible triple $\theta=(X,\mu,\rho)$, that is, a~space~$X$ endowed with a~measure~$\mu$ and a~metric~$\rho$ that agree with each other: the metric~$\rho$ is measurable as a~function of two variables on the space $(X,\mu)$, the space $(X,\rho)$ is separable as a~metric space, and~$\mu$ is a~Borel probability measure on this metric space $(X,\rho)$.
By the way, note that, in contrast to the traditional practice of fixing a~metric on a~space and considering various Borel measures on the resulting metric space, the author has repeatedly advocated the usefulness of another approach: fixing a~measure and varying metrics on the resulting measure space~\cite{71}. In~\cite{19},~\cite{71},~\cite{72}, a~theorem is proved on a~complete system of invariants of metric triples with respect to the group of measure-preserving isometries. \begin{definition} \label{def9} The matrix distribution~$D_{\theta}$ of a~metric triple $\theta=(X,\mu,\rho)$ is the measure in the space of infinite nonnegative real matrices defined as the image of the Bernoulli measure~$\mu^{\infty}$ on~$X^{\infty}$ under the map \begin{align*} M_{\theta}\colon (X^{\infty},\mu^{\infty}) &\to ({\mathbb M}_{\mathbb N }(\mathbb{R}_+),D_{\theta}), \\* \{x_n\}_n&\mapsto \{\rho(x_i,x_j)\}_{(i,j)}. \end{align*} \end{definition} \begin{theorem} \label{th13} Two metric triples $$ \theta=(X,\mu,\rho)\quad\text{and}\quad \theta'=(X',\mu',\rho') $$ with nondegenerate measures ($\mu(U)>0$, $\mu(U')>0$ if $U$,~$U'$ are nonempty open sets in~$X$,~$X'$) are isomorphic with respect to the above equivalence (i.\,e., there exists an invertible isometry $T\colon X\to X'$ preserving the measure: $T\mu=\mu'$) if and only if their matrix distributions coincide: $$ D_{\theta}=D_{\theta'}. $$ \end{theorem} In~\cite{70},~\cite{76},~\cite{85}, a~similar theorem is proved on the classification of arbitrary measurable functions of several variables via matrix distributions. \begin{definition} \label{def10} Assume that for a~sequence of admissible metric triples on the same measure space $(X,\rho_n,\mu)$ there exists a~weak limit $\lim_n D_{\theta_n}$ of matrix distributions (regarded as measures on the space of matrices~${\mathbb M}_{\mathbb N}(\mathbb{R}_+)$ with the ordinary weak topology). This limit will be called the virtual matrix distribution of the sequence of metric triples. 
It is an invariant with respect to a~sequence of isometries~$I_n$, $n=1,2,\dots$\,. \end{definition} Note that if the metric triples do not converge to any metric triple, then the limit of the matrix distributions, if it exists, is not the matrix distribution of any metric triple. On the other hand, the set of all matrix distributions of metric triples is not weakly closed, hence one can use virtual matrix distributions for the classification of some diverging sequences of metric spaces. Now we can state a~conjecture on isomorphism of filtrations. \begin{conjecture} \label{con1} Consider two finitely isomorphic ergodic filtrations~$\tau$,~$\tau'$ and assume that { \begin{enumerate} \renewcommand{\labelenumi}{{\rm\arabic{enumi})}} \renewcommand{\theenumi}{{\rm\arabic{enumi})}} \item\label{lab1} they both are nonstandard; \item\label{lab2} they both have virtual matrix distributions of the sequences of (semi)metric triples. \end{enumerate} } If these matrix distributions coincide, then the filtrations are isomorphic. \end{conjecture} The validity of this conjecture, along with the standardness criterion, would allow us to believe that the classification of nonstandard filtrations is complete in the above terms, under the assumption that condition~\ref{lab2} holds for an arbitrary nonstandard filtration. The answer would be that the virtual matrix distribution together with the finite invariants exhausts all invariants of ergodic locally finite filtrations. However, condition~\ref{lab2} (more exactly, the existence of a~virtual matrix distribution) can hardly always be satisfied. Hence the main problem is to find a~complete system of invariants for a~diverging sequence of metrics on a~measure space. For the dyadic case, it suffices to consider invariants of sequences of metrics arising from iterations of metrics on the graph of unordered pairs, which we discuss in the next section.
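To make Definition~\ref{def9} concrete, here is a~minimal numerical sketch (the function names are ours, chosen for illustration): for the metric triple $([0,1],\ \text{Lebesgue},\ |x-y|)$ it draws i.\,i.\,d.\ points and forms a~finite block $\{\rho(x_i,x_j)\}_{i,j\leqslant n}$ of the random matrix $M_\theta(\{x_n\})$; the matrix distribution $D_\theta$ is the law of such matrices.

```python
import random

def matrix_distribution_sample(sample_point, rho, n):
    """One n-by-n block of the random matrix M_theta: draw n i.i.d. points
    from mu and return the matrix of their pairwise distances."""
    xs = [sample_point() for _ in range(n)]
    return [[rho(x, y) for y in xs] for x in xs]

# Metric triple: X = [0,1], mu = Lebesgue measure, rho(x,y) = |x - y|.
random.seed(0)
M = matrix_distribution_sample(random.random, lambda x, y: abs(x - y), 5)

# Any realization is a symmetric matrix with zero diagonal.
assert all(M[i][i] == 0.0 and M[i][j] == M[j][i]
           for i in range(5) for j in range(5))
```

By Theorem~\ref{th13}, the law of these matrices determines the triple up to a~measure-preserving isometry; a~different triple, say $([0,2],\ \tfrac12\,\text{Lebesgue},\ |x-y|)$, would produce a~visibly different distribution of entries.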
But in the example of a~Markov chain with a~nonstandard filtration from Sec.~\ref{ssec6.2}, the virtual matrix distribution does exist. Namely, the sequence of matrix distributions of triples on the two-point space, $$ \rho_{n}(a,b)=0, \quad n\equiv 0 \ (\operatorname{mod} 2); \qquad \rho_{n}(a,b)=1, \quad n\equiv 1 \ (\operatorname{mod} 2), $$ converges in the sense of Ces\`aro to the virtual matrix distribution of this sequence, which is a~measure on the set of infinite symmetric matrices with zeros on the diagonal and with independent entries~$r_{i,j}$, $i>j$, taking the values~0 and~1 with equal probabilities. \section{Projective limits of simplices, invariant measures, and the absolute of a~graph} \label{sec7} This section is devoted to the theory of filtrations in Borel spaces: in this case, a~priori there is no measure, but only an equipment, i.\,e., conditional measures. The main problem is to describe all Borel probability measures that agree with the equipment. We describe the fourth language of the theory of locally finite filtrations: that of projective limits of finite-dimensional simplices. One may say that the category of projective limits of finite-dimensional simplices is equivalent to the category of locally finite filtrations in Borel spaces, or the category of multigraphs with equipments, or the category of Markov compacta with cotransition probabilities. Although this is an extremely natural relation, as far as the author knows, until recently the functorial equivalence of the theory of Borel and metric filtrations on the one hand and the convex geometry of projective limits of simplices on the other hand has hardly been discussed in the literature in this context. We use some definitions and facts from the recent paper~\cite{78} by the author (see also~\cite{36},~\cite{5},~\cite{98},~\cite{52}).
The analysis of the theory of locally finite filtrations in Borel spaces, as well as the theory of central and invariant measures on path spaces of multigraphs, or the theory of Markov measures with given cotransitions, shows that the geometric language of projective limits of finite-dimensional simplices is the most appropriate one for all these theories. Below we will describe a~correspondence between the problem of describing the measures that agree with an equipment and, in particular, of describing the central measures, and the geometry of projective limits of simplices (for more details, see~\cite{78}). \subsection{Projective limits of simplices, extreme points} \label{ssec7.1} First, we will show how, given a~pair $(\Gamma,\Lambda)$, i.\,e., an equipped graded graph, one can define, in a canonical way, a~projective limit of finite-dimensional simplices. Later we will see that there is also a~reverse transition from projective limits to equipped graphs. Denote by~$\Sigma_n$ the finite-dimensional simplex of formal convex combinations of vertices $v \in\Gamma_n$ of level~$n$. It is natural to view the simplex as the set of all probability measures on the set of vertices of~$\Gamma_n$. We define affine projections $$ \pi_{n,n-1}\colon \Sigma_n \to \Sigma_{n-1}; $$ it suffices to define them for every vertex $v \in \Gamma_n$. Obviously, these projections can be regarded as a~system of cotransition probabilities~$\Lambda$, and the images of vertices~$v$ are points of the previous simplex, i.\,e., probability vectors: \begin{equation} \label{eq7.1} \pi_{n,n-1}(\delta_v)=\sum_{u: u\prec v} \lambda_v^u \delta_u; \end{equation} this map can be extended by linearity to the whole simplex~$\Sigma_n$. The vertex~$\varnothing$ gives rise to the zero-dimensional simplex consisting of a~single point. Degeneracies are allowed (i.\,e., projections may glue together different vertices).
Now we define projections of simplices with arbitrary indices: \begin{gather*} \pi_{n,m}\colon \Sigma_n \to \Sigma_m,\qquad m<n; \\ \pi_{n,m}=\pi_{m+1,m}\circ\pi_{m+2,m+1}\circ\dots\circ\pi_{n,n-1},\qquad m<n,\quad m,n \in \mathbb N. \end{gather*} The collection of data $\{\Sigma_n, \pi_{n,m}\}$ allows one to define a~projective limit on the one hand, and to recover the graph on the other hand: the vertices of the graph~$\Gamma_n$ are the vertices of~$\Sigma_n$, and the edges (and then also the paths in the graph) are recovered from the nonzero coordinates of the vectors~$\pi_{n,n-1}$, $n \in \mathbb{N}$. Let ${\mathscr M}=\prod\limits_{n=0}^{\infty}\Sigma_n$ be the direct product of the simplices~$\Sigma_n$, $n\in \mathbb N$, endowed with the product topology. \begin{definition} \label{def11} The space of the projective limit of the family~$\{\Sigma_n\}_n$ of simplices with respect to the system of projections~$\{\pi_{n,m}\}$ is the following subset of the direct product~${\mathscr M}$: \begin{align*} \lim_{n\to\infty}(\Sigma_n,\pi_{n,m}) &\equiv \bigl\{\{x_n\}_n; \ \pi_{n,n-1}(x_n)=x_{n-1}, \ n=1,2,\dots\bigr\} \\ &\equiv (\Sigma_{\infty},\Lambda) \subset \prod_{n=0}^{\infty}\Sigma_n =\mathscr M. \end{align*} \end{definition} \begin{proposition} \label{pr8} The space of the projective limit~$\Sigma_{\infty}$ is always a~nonempty, convex, closed (and hence compact) subset of the direct product~$\mathscr M$, which is a~(possibly infinite-dimensional) Choquet simplex. \end{proposition} The affine structure of the direct product~$\mathscr M$ determines an affine structure on the limiting space; the fact that the set is nonempty and closed is obvious. It remains to check only that an arbitrary point of the limit has a~unique decomposition over its Choquet boundary (see below).
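A concrete instance of such a~projective limit may be helpful (this is an illustrative sketch, not taken from the text; the function names are ours): for the Pascal graph with the central equipment, formula~\eqref{eq7.1} gives $\pi_{n,n-1}(\delta_{(n,k)})=\frac{k}{n}\,\delta_{(n-1,k-1)}+\frac{n-k}{n}\,\delta_{(n-1,k)}$, and the binomial distributions with a~fixed parameter $t\in[0,1]$ form a~coherent sequence, i.\,e., a~point of the limiting simplex. The snippet checks this coherence numerically.

```python
from math import comb

def project(p):
    """pi_{n,n-1} for the Pascal graph with the central equipment:
    delta_{(n,k)} -> (k/n) delta_{(n-1,k-1)} + ((n-k)/n) delta_{(n-1,k)},
    extended by linearity to a distribution p on the level-n vertices."""
    n = len(p) - 1
    return [p[j] * (n - j) / n + p[j + 1] * (j + 1) / n for j in range(n)]

def binomial(n, t):
    return [comb(n, k) * t**k * (1 - t)**(n - k) for k in range(n + 1)]

# The binomial family is coherent: pi_{n,n-1}(Binom(n,t)) = Binom(n-1,t),
# so {Binom(n,t)}_n is a point of the limiting simplex for every t.
t, n = 0.3, 10
q = project(binomial(n, t))
assert all(abs(a - b) < 1e-12 for a, b in zip(q, binomial(n - 1, t)))
```

This coherence reflects the classical fact (de~Finetti) that the Bernoulli measures are exactly the extreme central measures for the Pascal graph.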
We distinguish between the space of the projective limit and the ``structure of the projective limit,'' meaning that not only the limiting space itself, i.\,e., some infinite-dimensional simplex, is important for us, but also the structure of the pre-limit simplices and their projections. In other words, we consider the category of projective limits of simplices in which an object is not a~limiting simplex, but a~sequence of finite-dimensional simplices. Let us show that the collection of data of a~projective limit of simplices allows one to recover the graph, the path space, and the system of cotransition probabilities, and that the projective limit corresponding to this system and constructed according to the above rule coincides with the original one. This will establish a~tautological relation between the two languages: the language of pairs (a~Bratteli diagram, a~system of cotransitions) on the one hand, and the language of projective limits of finite-dimensional simplices on the other hand. Indeed, assume that we have a~projective limit of finite-dimensional simplices~$\{\Sigma_n\}$, $n\in \mathbb N$, and a~coherent system of affine projections $$ \{\pi_{n,m}\},\qquad \pi_{n,m}\colon\Sigma_n \to \Sigma_m,\quad n\geqslant m,\quad n,m\in \mathbb N. $$ Take the vertices of~$\Sigma_n$ as the vertices of the $n$th level of the graph~$\Gamma$; a~vertex~$u$ of level~$n$ precedes a~vertex~$v$ of level~$n+1$ if the projection~$\pi_{n+1,n}$ sends~$v$ to a~point of the simplex~$\Sigma_n$ for which the barycentric coordinate with respect to~$u$ is positive. As a~system of cotransition probabilities, we take the system of vectors~$\{\lambda_v^u\}$ related to the projections~$\pi_{n+1,n}$ by formula~\eqref{eq7.1}.
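The reverse transition just described is easy to sketch in code (an illustrative fragment; the names are our own): given the matrix of a~projection $\pi_{n+1,n}$, whose columns are the images of the vertices of $\Sigma_{n+1}$ in barycentric coordinates, the edges of the graph are exactly the positions of the nonzero entries, and the entries themselves are the cotransition probabilities~$\lambda_v^u$.

```python
def edges_from_projection(P, eps=0.0):
    """Recover the edges between consecutive levels from an affine projection:
    u (row index, previous level) precedes v (column index, next level)
    iff the barycentric coordinate P[u][v] of pi(delta_v) at u is positive."""
    return sorted((u, v)
                  for v in range(len(P[0]))
                  for u in range(len(P))
                  if P[u][v] > eps)

# Projection of a 3-vertex simplex onto a 2-vertex simplex (columns sum to 1):
# pi(delta_0) = delta_0, pi(delta_1) = (1/2, 1/2), pi(delta_2) = delta_1.
P = [[1.0, 0.5, 0.0],
     [0.0, 0.5, 1.0]]
assert edges_from_projection(P) == [(0, 0), (0, 1), (1, 1), (1, 2)]
```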
In what follows, having a~projective limit of simplices, we will use the graph (of vertices of all simplices) canonically associated with it, the space of paths in this graph, etc., and in the same way will speak about projections of simplices canonically associated with an equipped graph. Consider an arbitrary projective limit of finite-dimensional simplices: $$ \Sigma_1 \leftarrow \Sigma_2 \leftarrow \dots\leftarrow \Sigma_n\leftarrow \Sigma_{n+1}\leftarrow \dots \leftarrow\Sigma_{\infty}\equiv \Sigma(\Gamma,\Lambda). $$ First of all, we define limiting projections $$ \pi_{\infty,m}\colon\Sigma_{\infty}\to \Sigma_m $$ as the limits $\lim_n \pi_{n,m}$ for each~$m$: obviously, the images~$\pi_{n,m}\Sigma_n$, regarded as subsets of the simplices~$\Sigma_m$, monotonically decrease as~$n$ grows, and their intersections are some sets denoted by $$ \Omega_m= \bigcap_{n:n>m} \pi_{n,m}\Sigma_n; $$ these are convex closed subsets of the finite-dimensional simplices~$\Sigma_m$, $m=1,2,\dots$, and the limiting projections send the limiting compactum~$\Sigma_{\infty}$ epimorphically onto these sets: $$ \pi_{\infty,m}\colon\Sigma_{\infty}\to \Omega_m. $$ It would be more economical to consider also the projective limit $$ \Omega_1\leftarrow \Omega_2 \leftarrow \dots\leftarrow \Omega_n \leftarrow \dots\leftarrow \Omega_{\infty}=\Sigma(\Gamma,\Lambda) $$ itself with the epimorphic projections~$\pi_{n,m}$ restricted to the subset~$\Omega_n$ and, by definition, with the same limiting space. However, to find the sets $\Omega_n$ explicitly is an interesting and difficult problem equivalent to the main problem of describing all invariant measures.\footnote{In particular, an explicit form of the compacta~$\Omega_n$ is known in very few cases, even among those where the central measures are known.
Even for the Pascal graph, we obtain interesting and rather complicated convex compacta; but, for example, for the Young graph, the author does not know the form of these compacta with the same degree of precision.} Every point of the limiting compactum, i.\,e., a~sequence $$ \{x_m\}\colon x_m \in \Sigma_m,\quad \pi_{m,m-1}x_m=x_{m-1}, $$ determines, for every~$m$, a~sequence of measures~$\{\nu_n^m\}_n$ on the simplex~$\Sigma_m$, namely, $$ \nu_n^m=\pi_{n,m}(\mu_n), $$ where the measure~$\mu_n$ is the (unique) decomposition of the point~$x_n$ into the extreme points of the simplex~$\Sigma_n$. Of course, the barycenter of each of the measures~$\nu_n^m$ in~$\Sigma_m$ is $x_m$, and this sequence of measures itself becomes coarser, in a~clear sense, and weakly converges in~$\Sigma_m$ as $n \to \infty$ to a~measure~$\nu_{x_m}$ supported by $\Omega_m \subset \Sigma_m$. Obviously, in this way we obtain all points of the limiting compactum, i.\,e., all measures with given cotransition probabilities. A point of an arbitrary convex compactum~$K$ is called \textit{extreme} if it cannot be represented as a~nontrivial convex combination of points of~$K$; the collection of extreme points is called the Choquet boundary of~$K$ and denoted by~$\operatorname{ex}(K)$. A~point is called \textit{almost extreme} if it lies in the closure~$\overline{\operatorname{ex}(K)}$ of the Choquet boundary. Recall that an affine compactum in which every point has a~unique decomposition into extreme points (over the Choquet boundary) is called a~Choquet simplex. Let us give general criteria of extremality and almost extremality for points of a~projective limit of simplices. \begin{proposition} \label{pr9} 1) A~point~$\{x_n\}$ of a~projective limit of simplices is extreme if and only if for every~$m$ the weak limit~$\nu_{x_m}$ of the measures~$\nu_n^m$ in the simplex~$\Sigma_m$ is the $\delta$-measure at the point~$x_m$: $$ \lim_n \nu_n^m \equiv \nu_{x_m}=\delta_{x_m}.
$$ 2) A~point $\{x_n\}$ is almost extreme if for every~$m$ and every neighborhood~$V(x_m)$ of the point $x_m \in \Sigma_m$ there exists an extreme point~$\{y_n\}$ of the limiting compactum for which $y_m \in V(x_m)$. 3) For every point~$\{x_n\}$ of the limiting compactum of simplices there exists a~unique decomposition into extreme points (Choquet decomposition), which is determined by the measures~$\nu_{x_m}$. \end{proposition} \begin{corollary} \label{cor3} The limiting compactum of a~projective limit of fi\-nite-di\-men\-sion\-al simplices is a~Choquet simplex (in general, infinite-dimensional). \end{corollary} In the literature (see~\cite{98}), the question was discussed why a~limit of simplices is a~simplex; even in the finite-dimensional case, this problem is not quite obvious; it was raised by A.\,N.~Kolmogorov. The most correct way to explain this is to refer to the result, formally much more general but in fact equivalent (see, e.\,g.,~\cite{52}), on the uniqueness of a~decomposition of measures with a~given cocycle into ergodic components, i.\,e., to use the probabilistic interpretation of simplices. It is easy to prove that the converse is also true: every separable Choquet simplex can be represented as a~projective limit of finite-dimensional simplices, but, of course, such a~representation is not unique. Slightly digressing from the main focus, note that the simplex of invariant measures with respect to an action of a~nonamenable group on a~compactum is separable, though its possible approximation is not generated by finite approximations of the action; thus there arises a~non-trajectory finite-dimensional approximation of the action, which, apparently, has not been considered in the literature. \begin{remark} \label{rem1} The first two claims of Proposition~\ref{pr9} can perhaps be extended to projective limits of arbitrary convex compacta.
\end{remark} Recall that the class of separable Choquet simplices contains \textit{Poulsen simplices}, for which the set of extreme points is dense; such a~simplex is unique up to an affine isomorphism and universal in the class of all affine separable simplices (see~\cite{72} and the references therein). \begin{proposition} \label{pr10} Consider a~projective limit of simplices with the following property: for every~$m$ the union of the projections $$ \bigcup_{n,t}\{\pi_{n,m}(t); \ t\in \operatorname{ex}(\Sigma_n),\ n=m,m+1,\dots\} $$ over all vertices of the simplices~$\Sigma_n$ and all $n>m$ is dense in the simplex~$\Sigma_m$. Then the limiting simplex is a~Poulsen simplex. \end{proposition} It is clear that this construction can be carried out by induction, and the extremality criterion obviously implies that in this case the extreme points are dense. With such arbitrariness in the construction, the uniqueness seems surprising. Nevertheless, verifying that a~given projective limit is a~Poulsen simplex is not so easy and is similar to problems related to filtrations. A simplex whose Choquet boundary is closed is called a~ \textit{Bauer simplex}. Between the classes of Bauer and Poulsen simplices, there are many intermediate types of simplices. In the literature on convex geometry and the theory of invariant measures, this has been repeatedly discussed. However, it seems that these and similar properties of infinite-dimensional simplices in application to projective limits and the theory of graded graphs and corresponding algebras have never been considered. Each of these properties has an interesting interpretation in the framework of these theories. The author believes that the following class of simplices (or even convex compacta) is useful for applications: an \textit{almost Bauer simplex} is a~simplex whose Choquet boundary is open in its closure. 
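The extremality criterion of Proposition~\ref{pr9} can be illustrated numerically (a sketch on the Pascal graph, with function names of our choosing): for the point $x_n=\mathrm{Binom}(n,t)$, the measure $\nu_n^m$ places weight $\mathrm{Binom}(n,t)(k)$ at the point $\pi_{n,m}(\delta_{(n,k)})\in\Sigma_m$, which is a~hypergeometric distribution; as $n\to\infty$ these points concentrate at~$x_m$, so $\nu_{x_m}=\delta_{x_m}$ and the point is extreme.

```python
from math import comb

def binomial(n, t):
    return [comb(n, k) * t**k * (1 - t)**(n - k) for k in range(n + 1)]

def hyp(n, m, k):
    """pi_{n,m}(delta_{(n,k)}) in the Pascal graph: hypergeometric law on level m."""
    return [comb(m, j) * comb(n - m, k - j) / comb(n, k)
            if 0 <= k - j <= n - m else 0.0
            for j in range(m + 1)]

def spread(n, m, t):
    """Mean total-variation distance of the points of nu_n^m from x_m,
    where x_n = Binom(n, t) is decomposed over the vertices of Sigma_n."""
    xm, xn = binomial(m, t), binomial(n, t)
    return sum(w * 0.5 * sum(abs(a - b) for a, b in zip(hyp(n, m, k), xm))
               for k, w in enumerate(xn))

# nu_n^m concentrates at x_m as n grows: the point {Binom(n,t)}_n is extreme.
s50, s500 = spread(50, 4, 0.3), spread(500, 4, 0.3)
assert s500 < s50
```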
Parallelism between the study of pairs $(\Gamma,\Lambda)$ (a graded graph and an equipment of this graph) on the one hand and of projective limits of simplices on the other hand means that the latter has a~probabilistic interpretation. It is useful to describe this interpretation directly. Recall that in the context of projective limits a~path is a~sequence~$\{t_n\}_n$ of vertices ${t_n \in \operatorname{ex}(\Sigma_n)}$ that agrees with the projections in the following way: the point $\pi_{n,n-1}t_n$ for all $n\in {\mathbb N}$ has a~nonzero barycentric coordinate with respect to the vertex~$t_{n-1}$. First of all, every point $x_{\infty} \in \Sigma_{\infty}$ of the limiting simplex is a~sequence of points of simplices that agrees with the projections: $$ \{x_n\}\colon \pi_{n,n-1}x_n=x_{n-1},\quad n \in \mathbb N. $$ As an element of a~simplex,~$x_n$ defines a~measure on the vertices of this simplex, and since all these measures agree with the projections,~$x_{\infty}$ defines a~measure~$\mu_x$ on the path space with fixed cotransition probabilities. Conversely, each such measure defines a~sequence of points that agrees with the projections. Thus the limiting simplex is the simplex of all measures on the path space with given cotransitions. If a~point $\mu\in \operatorname{ex}(\Sigma_{\infty})$ is extreme, this means that the measure~$\mu$ is ergodic, i.\,e., the tail $\sigma$-algebra is trivial with respect to the measure~$\mu$ on the path space. We end this section with the following conclusion. \textit{The theory of equipped graded multigraphs and the theory of Markov compacta with systems of cotransitions are identical to the theory of Choquet simplices regarded as projective limits of finite-dimensional simplices. All three theories can be viewed as theories of locally finite filtrations in a~fixed basis representation}.
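The correspondence between points of the limiting simplex and measures on the path space can also be sketched numerically (an illustration with the Pascal graph; the names and the example are ours): the measure of a~finite path $(t_0,\dots,t_n)$ equals $x_n(t_n)$ times the product of the cotransition probabilities along the path. For the coherent binomial family this reproduces the coin-tossing (Bernoulli) measure, which is central.

```python
from math import comb

def path_measure(path, x_n):
    """Measure of a finite path under the measure with level-n marginal x_n:
    mu(t_0,...,t_n) = x_n(t_n) * product of cotransition probabilities.
    A path in the Pascal graph is a 0/1 string; t_i = (i, number of ones so far);
    the central cotransition from (i,k) down is k/i (drop a one) or (i-k)/i."""
    k, prod = 0, 1.0
    for i, step in enumerate(path, start=1):
        k += step
        prod *= (k / i) if step == 1 else ((i - k) / i)
    return x_n[k] * prod

t, n = 0.3, 6
x_n = [comb(n, k) * t**k * (1 - t)**(n - k) for k in range(n + 1)]
# Every path with k ones gets measure t^k (1-t)^(n-k): the coin-tossing measure.
path = [1, 0, 1, 1, 0, 0]
assert abs(path_measure(path, x_n) - t**3 * (1 - t)**3) < 1e-12
```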
\subsection{The fundamental problem: description of the absolute} \label{ssec7.2} \begin{problem}[Main problem] Given a~projective limit of finite-dimensional simplices, find the Choquet boundary of the limiting simplex. It will be convenient to call this boundary (in the context of path spaces of graded graphs) the absolute of a~projective limit (of a~Markov chain, a graded graph, etc.). Sometimes the absolute consists of a~single point; in this case it is trivial, but this happens very rarely. By the above, this problem is equivalent to the following one: \begin{enumerate} \item[--] given a~Markov compactum with a~system of cotransitions, describe all ergodic measures that agree with these cotransitions, \end{enumerate} or \begin{enumerate} \item[--] given a~graded graph with an equipment, describe all ergodic measures on its path space that agree with this equipment, \end{enumerate} \noindent or, finally, \begin{enumerate} \item[--] given a~filtration on a~Borel space, describe all ergodic Borel measures that agree with the cocycle of this filtration (see Sec.~\ref{sec3}). \end{enumerate} \end{problem} Actually, as we have seen, all these problems are tautologically equivalent, but still they belong to different areas of mathematics. The formulation with Markov chains is essentially due to E.\,B.~Dynkin (see~\cite{11}).\footnote{In~\cite{11}, the terms ``exit boundary,'' ``entrance boundary,'' etc.\ were used. Note that the Martin boundary can also be easily defined in our context, see~\cite{78}.} For graphs with the canonical equipment, this is the problem of describing the central measures, or traces of $\operatorname{AF}$-algebras, or characters of locally finite groups; besides, this is also the problem of describing the invariant measures for actions of countable groups, provided that an adic approximation is chosen.
This is an enormous circle of problems which covers a~substantial part of asymptotic representation theory, combinatorics, algebra, and, of course, the theory of dynamical systems. In the case of filtrations, fixing some ergodic measure allows one to regard a~filtration as a~filtration of a~measure space (rather than of merely a~Borel space), with all the problems considered above, whose solution substantially depends on the choice of a~specific measure. But the absolute, in a sense, combines different measures with respect to which one can consider filtrations, equivalence relations, adic actions, etc. Thus the problem is not only to describe the absolute. Here is a~list of natural important additional questions (we state them in terms of graded graphs). The setting of the problem can be, for instance, as follows: -- On which graded graphs does every measure from the absolute determine a~standard tail filtration on the path space endowed with this measure (it is natural to call such graphs standard)? -- On which graphs does there exist a~measure from the absolute with a~standard tail filtration? -- On which graphs does the absolute contain no nontrivial measures with a~standard tail filtration? The same series of questions can be posed in connection with entropy and other invariants of filtrations. From the point of view of the metric theory of filtrations and the theory of Markov chains, these questions look especially unconventional, since we ask to what extent measures with the same conditional (cotransition) probabilities can be dissimilar. \subsection{Several examples. Relation to the theory of locally finite filtrations} \label{ssec7.3} We choose several examples of problems of describing the absolute. 1. \textsl{Absolutes of groups}. Consider a~countable finitely generated group~$G$ and choose a~symmetric system of generators: $S=S^{-1}$.
By the \textit{dynamical Cayley graph}, or the \textit{Markov compactum of the random walk}, we mean the graph in which the vertices of level~$n$ are all elements of~$G$ representable as words of length at most~$n$, and edges correspond to the right multiplication by generators. The equipment is canonical, i.\,e., the set of word representations of a~given element is endowed with the uniform measure. The problem of describing the absolute, i.\,e., the set of ergodic central measures, is a~considerable extension of the problem of describing the Poisson--Furstenberg boundary of a~random walk. While the Poisson--Furstenberg boundary reduces to a~single point for many groups, the absolute does so very rarely; it has a~structure of a~generalized fiber bundle over the Poisson--Furstenberg boundary. For the free group, the absolute is calculated in~\cite{90}. In this case, a~natural phase transition from ergodicity to nonergodicity is discovered. Presently, the absolute of nilpotent groups is under study. A~quite useful tool here is a~generalization of the ergodic method (see~\cite{82}), which reduces finding the central measures to establishing their additional symmetry properties. In this sense, describing the absolute can be viewed as a far-reaching generalization of de~Finetti's theorem on measures invariant with respect to groups of permutations. In all group examples considered so far, the filtrations turned out to be standard, i.\,e., the dynamical graph can be called standard. 2. On the contrary, graphs arising in the approximation theory of dynamical systems (i.\,e., graphs providing adic realizations of automorphisms with invariant measure) are often nonstandard. \begin{conjecture} \label{con2} A graph providing an adic realization of an automorphism with positive entropy is not standard. The corresponding limiting simplex is a~Poulsen simplex.
\end{conjecture} The graph of ordered pairs (see Sec.~\ref{sec4}) has many central measures with respect to which the tail filtration is not standard (see~\cite{91}). Another, more interesting, example of this type is the ``tower of measures'' graph. This is the graph of unordered pairs, which appeared in 1969 as a~tower of binary measures. The simplest way to introduce this graph is to define it as a~projective limit of simplices in which the set of vertices of each simplex is the set of all vertices and middle points of all edges of the previous simplex, the initial simplex being the interval $[0,1]$ (see Fig.~\ref{fig11}). The limiting simplex provides nice interpretations of invariants of sets with respect to nonstandard dyadic filtrations. Moreover, the definition of the tower of measures was motivated by an interpretation of the standardness criterion (more exactly, an interpretation of the values of the universal projection of a~set) for nonstandard situations. The construction of the tower of measures is related to constructions of universal spaces in the set-theoretic and algebraic topology. \begin{figure}[h!] \centering \includegraphics{ver11.pdf} \caption{The beginning of the tower of measures} \label{fig11} \end{figure} 3. An enormous supply of graded graphs, and hence of absolutes, filtrations and their invariants, is provided by the representation theory of $\operatorname{AF}$-algebras and locally finite groups. Each of these objects gives rise to its own Bratteli diagram, the canonical equipment determines a~projective limit of simplices and the limiting simplex of central measures, etc. These links have been studied (but so far without taking into account tail filtrations) both in Borel path spaces and in path spaces endowed with central measures. We only mention that the computation of the absolute, i.\,e., of the traces and characters, looks isolated in each specific case; up to now, there are no sufficiently general methods for solving this problem.
For example, one has not yet managed to prove the well-known conjecture of Kerov~\cite{30, 31, 32} on central measures in the Macdonald graph, though it differs little from problems that have already been solved. An attempt to separate solvable problems related to the absolute from ``intractable'' ones, i.\,e., those that have no reasonable solution, was made in~\cite{77},~\cite{79},~\cite{82}; it is also reproduced in part in this paper. This is the reason for bringing the notion of standardness of filtrations into these problems. In the papers mentioned above (in more detail, in~\cite{82}), the so-called internal metric on paths of a~graph is introduced, whose role is exactly to separate the two cases. For want of space, we postpone a~further analysis of these relations to future publications. \section{Historical commentary} \label{sec8} To conclude this survey, it is appropriate to add several remarks on the history of the problem and some papers, examples, and counterexamples useful in the study of filtrations. 1. In~1958, the famous A.\,N.~Kolmogorov's paper~\cite{34} was published which contained a~definition of entropy as an invariant of the metric theory of dynamical systems. In that paper, Kolmogorov defined entropy in the spirit of Shannon's concept of information transmission and rested on a~filtration (in the old terminology, the ``sequence of past $\sigma$-algebras'') that he had often considered earlier, acknowledging the importance of Rokhlin's theory of partitions used in his considerations. As one can deduce from recollections of various persons, in an earlier lecture course, given by A.\,N.\ in 1957 and, unfortunately, lost, he apparently proved that Bernoulli schemes with different entropies are nonisomorphic using a~combinatorial description of entropy, which, due to the subsequent definition suggested by Ya.\,G.~Sinai~\cite{53}, later became the main one.
All the more so since this definition was at once easily extended to arbitrary groups.\footnote{Here one should mention the master's thesis by D.\,Z.~Arov (Odessa University, 1957), mentioned by A.\,N.\ in his second paper~\cite{35} and published only recently (see~\cite{1}), which contains a~combinatorial definition of an invariant of an automorphism close to Sinai's definition. A~full comparison of Kolmogorov's entropy and Arov's entropy was carried out by B.\,M.~Gurevich~\cite{20}.} Kolmogorov's definition from the first paper was much closer to the traditional theory of stationary processes and Shannon's ideas. But a~small error forced A.\,N.\ to write the second paper (see~\cite{35}). Here we quote a~fragment and a~footnote from this second paper by A.\,N., in which the error contained in the first paper was corrected,\footnote{For some reason, the publishers of A.\,N.'s collected works decided not to publish this paper in full, but to combine a~small part of it with the first paper.} since this is directly related to difficulties of the theory of filtrations. ``V.\,A.~Rokhlin pointed out that the proof of Theorem~2 in my paper~(1) tacitly uses the following assumption. It follows from $$ {\mathfrak A}_1 \supseteq {\mathfrak A}_2 \supseteq \cdots \supseteq {\mathfrak A}_n \supseteq\dotsb,\qquad \bigcap_n {\mathfrak A}_n={\mathfrak N}, $$ that $$ \bigcap_n({\mathfrak B}\vee{\mathfrak A}_n)=\mathfrak B.\textrm{''} $$ ``By the courtesy of V.~A.~Rokhlin, I reproduce this example. Let ${\mathfrak G}^m$ be the additive group of numbers of the form~$\alpha m^{-\beta}$ ($m$~is a~positive integer, $\alpha$~is an integer, $\beta$~is a~nonnegative integer). Denote by~$U$ the automorphism of~${\mathfrak G}^6$ that consists in dividing by~$6$; by $M$, the group of characters of~${\mathfrak G}^6$; and by $T$, the automorphism of~$M$ conjugate to~$U$.
The subgroups~${\mathfrak G}^2$,~${\mathfrak G}^3$ of~${\mathfrak G}^6$ satisfy the following obvious relations: \begin{alignat*}{3} {\mathfrak G}^2&\subset U{\mathfrak G}^2, &\quad \bigvee_n U^n{\mathfrak G}^2&={\mathfrak G}^6,&\quad \bigcap_n U^n{\mathfrak G}^2&=0, \\* {\mathfrak G}^3&\subset U{\mathfrak G}^3, &\quad \bigvee_n U^n{\mathfrak G}^3&={\mathfrak G}^6,&\quad \bigcap_n U^n{\mathfrak G}^3&=0. \end{alignat*} Therefore, the corresponding subalgebras~${\mathfrak G}^2$,~${\mathfrak G}^3$ of the algebra~${\mathfrak G}^6$ of measurable sets of the space~$M$ satisfy the quasi-regularity conditions. At the same time,~${\mathfrak G}^2$ has index~3 in~$U{\mathfrak G}^2$ and~${\mathfrak G}^3$ has index~2 in~$U{\mathfrak G}^3$, so $$ MH(T{\mathfrak G}^2|{\mathfrak G}^2)=\lg 3, \qquad MH(T{\mathfrak G}^3|{\mathfrak G}^3)=\lg 2. $$ Here $$ \bigcap_n({\mathfrak G}^2\vee T^n{\mathfrak G}^3)\ne {\mathfrak G}^2,\qquad \bigcap_n({\mathfrak G}^3\vee T^n{\mathfrak G}^2)\ne {\mathfrak G}^3.\textrm{''} $$ This example shows the lack of continuity when passing from a~filtration to its supremum with a~given partition. It is interesting that this difficulty is absent in the theory of increasing sequences of partitions ($\sigma$-algebras). I mention this also because the theory of invariant partitions (in our terms, the theory of stationary filtrations) became an object of interest for Rokhlin, Sinai, and many subsequent authors (see~\cite{47}). In our view, this deep subject is not yet exhausted, and its relations with the theory of groups of unitary or Markov operators and with scattering theory should be studied systematically. On the other hand, as one can see from the survey, the applicability of the theory of filtrations is much wider than the stationary case. 2. The question of whether there exists an ergodic filtration finitely isomorphic, but not isomorphic to a~Bernoulli one appeared as early as the 1950s, e.\,g., in the work of M.~Rosenblatt (see~\cite{49}).
In slightly different terms, it was discussed in N.~Wiener's book \textit{Nonlinear Problems in Random Theory}~\cite{97} (chapters \textit{Coding} and \textit{Decoding}). Actually, \textit{Decoding} contains a~negative answer to this question along with tedious calculations. More exactly, it is claimed there that if a~stationary Gaussian random sequence~$\alpha_n$, $n<0$, has the property that for every~$n$, for the $\sigma$-algebra~${\mathfrak A}_n$ generated by the variables with indices~$<n$, there exist~$|n|$ Gaussian random variables independent of each other and independent of~${\mathfrak A}_n$ that together with~${\mathfrak A}_n$ generate the $\sigma$-algebra of all measurable sets, then the original stationary sequence is isomorphic to a~one-sided Bernoulli shift. However, this is not true, since there exist nonstandard (and hence nonisomorphic to Bernoulli ones) stationary filtrations. I must say that in my first paper~\cite{58} I also claimed (for dyadic filtrations) that nonstandard filtrations do not exist (without giving any arguments or calculations to support this claim), but I soon corrected this error in~\cite{59} and suggested the first series of examples of nonstandard dyadic sequences, as well as the standardness criterion. Thus classification of filtrations became a~meaningful problem. 3. Speaking about the prehistory of the theory of filtrations, one should begin with the remark that the problem of analyzing the structure of stationary one-sided filtrations was raised in the 1960s by V.\,A.~Rokhlin in the context of the study of endomorphisms (noninvertible measure-preserving transformations): in this field, many important results are due to him (see~\cite{48}). He attracted to it B.\,G.~Vinokurov and his Tashkent pupils (B.\,A.~Rubshtein, N.\,N.~Ganikhodzhaev, etc.), who published a~series of interesting papers on this subject (see~\cite{95},~\cite{94},~\cite{50},~\cite{51}).
On the other hand, a~direct necessity to study homogeneous (e.\,g., dyadic) filtrations appeared later in connection with the problem of trajectory isomorphism of automorphisms, also raised by Rokhlin independently of filtrations. It was successfully studied by R.\,M.~Belinskaya~\cite{2}. In fact, by that time, the positive answer to the problem of trajectory isomorphism of all ergodic transformations with invariant measure had already been obtained by H.~Dye~\cite{8}, a~fact that became known to the participants of Rokhlin's seminar on ergodic theory only later, when I proved the theorem on lacunary isomorphism, which immediately implied Dye's theorem. The techniques used in~\cite{8} were based on the theory of $W^*$-algebras, in contrast to an approximation (essentially combinatorial) proof in~\cite{58},~\cite{2}. Much later, the same result was proved for all actions of amenable groups with invariant measure~\cite{43},~\cite{3}. I was interested in trajectory theory in connection with the theory of factors even earlier, and some results for the more general case of quasi-invariant measures were obtained simultaneously with results of W.~Krieger and A.~Connes. For set-theoretic intersections of filtrations, I used the term ``locally measurable partitions,'' V.\,A.~Rokhlin suggested the term ``tame partitions,'' and the generally accepted term is ``hyperfinite partitions.'' 4. Similar problems have also been raised in probability theory itself, long ago and repeatedly. For instance, the question of what stationary processes can be obtained by encoding Bernoulli schemes was studied by M.~Rosenblatt, K.~It\^o~\cite{45}, and many others. The clearest formulation of the problem was developed in the framework of information theory and ergodic theory: what automorphisms or endomorphisms are factors of Bernoulli automorphisms?
For automorphisms, the problem was solved in the remarkable paper by D.~Ornstein~\cite{42}, who proved that a~necessary and sufficient condition is the condition incongruously called VWB (very weak Bernoulli), while it would be more appropriate to call it ``Ornstein's condition.'' For endomorphisms, it is not yet completely solved. Ornstein's condition resembles in nature the standardness condition, which appeared in another connection at approximately the same time~\cite{59},~\cite{66}. The similarity lies in the fact that both are requirements on the behavior of the collection of conditional measures of finite segments of a~process given a~fixed past starting from some time, namely, one requires that these conditional measures approach one another as this time goes to~$-\infty$. Obviously, any interpretation of a~weakening of the independence condition can be expressed exactly in these terms, but still this method differs from various conditions of weak dependence which existed earlier in traditional schemes of the theory of random processes (Kolmogorov's, Rosenblatt's, Ibragimov's conditions, etc.)\ and are usually stated as a~decrease of correlation coefficients, measure densities, etc. But in our approach, the leading role is played by the system of conditional measures and a~metric that controls the convergence of systems of conditional measures. In Ornstein's condition, this is the Kantorovich metric (rediscovered by him specifically in this case) constructed from the Hamming metric on trajectories. In the case of standardness, this is a~modified Kantorovich metric taking into account the (tree-like) structure of conditional measures.
The difference between them is substantial, but the problems and classes of processes are also different: Ornstein's condition concerns stationary processes on~$\mathbb Z$ (though it is stated for their restrictions to~${\mathbb Z}_+$) and deals with the properties of a~two-sided process, while standardness characterizes the properties of a~one-sided filtration.\footnote{Later, D.~Ornstein and B.~Weiss~\cite{43},~\cite{44}, D.~Rudolph, and others extended this theory from the group~$\mathbb Z$ to actions of arbitrary amenable groups with invariant measure, but this generalization has no parallel in the theory of filtrations in the form it is used for the group~$\mathbb Z$ (the existence of a~notion of ``past''). However, filtrations that arise as a~tool in trajectory approximation (see one of the examples of filtrations in Sec.~\ref{sec1}) are undoubtedly related also to Ornstein's condition for general groups.} It is worth mentioning that instead of the Hamming metric (on trajectories), other metrics have also been considered, and the Kantorovich metric on measures constructed from a~metric on trajectories can also be replaced with other ones: there is ample room for experiments and analysis of asymptotic properties of processes and filtrations depending on metrics. It seems that the most important idea is that of measuring the degree of ``being non-Bernoulli'' and ``being nonstandard'' with the help of what was called ``secondary entropy'' and higher zero--one laws. \section{Acknowledgements} \label{sec9} The author must remember and thank the people with whom he discussed the subject of this paper many years ago.
First of all, this is V.\,A.~Rokhlin, one of my teachers, who posed a~series of stimulating questions on similar subjects and who was very attentive and sympathetic to my ideas in this field (see~\cite{47},~\cite{48}); A.\,N.~Kolmogorov, who was interested in filtrations and communicated several of my papers on this subject to \textit{Proceedings of the USSR Academy of Sciences}; R.~Belinskaya, who asked me one of the first specific questions about dyadic filtrations; S.~Yuzvinsky, who took an active interest in the subject and wrote an important paper on applications of the scale of an automorphism~\cite{99}. In those years, I had useful fierce disputes with Ya.~Sinai, B.~Gurevich, V.~Alexeev, A.~Katok, S.~Yuzvinsky, A.~Stepin, V.~Vinokurov, B.~Rubshtein, L.~Abramov, M.~Gordin, I.~Ibragimov, and others, as well as with A.~Kirillov, who suggested a~meaningful comparison of this subject to some previous works. Later, after a~long pause, interest in the subject was revived abroad; M.~Yor's question about filtrations related to Brownian motion (see also~\cite{57}) was partially solved in~\cite{7}, where my standardness criterion was rediscovered. A~remarkable flurry of activity in the theory of filtrations was observed in 1997--1998 during the semester on ergodic theory at the Israel Institute for Advanced Studies (Jerusalem): H.~Furstenberg, B.~Weiss, J.~Feldman, D.~Rudolph, D.~Hoffman, D.~Heicklen, J.-P.~Thouvenot published a~number of papers and participated in discussions of this subject. In the 2000s, M.~\'Emery, who gave a Bourbaki~talk~\cite{12} on this and related subjects, his students W.~Schachermayer and others also worked in the theory of filtrations. In the 1990s, the author had many discussions of the theory of filtrations with J.~Feldman (see~\cite{15},~\cite{16}) and B.~Tsirelson (see, in particular,~\cite{17}).
Most recently, especially after the subject was combined with the theory of graded graphs, the theory of filtrations and the notion of standardness attracted much interest from S.~Laurent, T.~de~la~Rue, and others; they obtained many new results mentioned in the survey (see~\cite{38, 39, 40}, \cite{25}). Finally, I have repeatedly discussed the subject with my students A.~Gorbulsky, F.~Petrov, P.~Zatitskiy, and others, which, hopefully, will lead to the discovery of new facts and a~new understanding of this interesting and important subject. The author thanks all people mentioned above and those he may have forgotten to mention. Special thanks are due to A.\,A.~Lodkin for his help with formatting the bibliography, and to A.\,M.~Minabutdinov for the figures.
\newcommand{\Section}[1]{\section{#1} \setcounter{equation}{0} \setcounter{figure}{0} \setcounter{table}{0}} \setcounter{page}{1} \title{Alphabet Sizes of Auxiliary Variables\\ in Canonical Inner Bounds} \author{Soumya Jana\\ Department of Electrical and Computer Engineering\\ University of Illinois at Urbana-Champaign\\ Email: {\tt [email protected]}} \begin{document} \maketitle \thispagestyle{plain} \pagestyle{plain} \baselineskip=1.25\normalbaselineskip \renewcommand{\baselinestretch}{1.5} \begin{abstract} The alphabet size of auxiliary random variables in our canonical description is derived. Our analysis improves upon estimates known in special cases, and generalizes to an arbitrary multiterminal setup. The salient steps include decomposition of constituent rate polytopes into orthants, translation of a hyperplane until it becomes tangent to the achievable region at an extreme point, and derivation of minimum auxiliary alphabet sizes based on Caratheodory's theorem. \end{abstract} \Section{Introduction} \label{sec:intro} The central question in the Shannon theory of source coding is the characterization of achievable regions in information-theoretic terms. Historically, simple information-theoretic (so-called `single-letter') descriptions were shown to completely characterize the achievable regions of certain problems, such as Shannon's lossless and lossy coding problems \cite{ShanLL,Shannon}, the Slepian-Wolf problem \cite{SW}, the Wyner-Ahlswede-K\"{o}rner problem \cite{Wyner,AhlKor}, the Wyner-Ziv problem \cite{WZ}, and the Berger-Yeung problem \cite{BY}. Specifically, coincident inner and outer bounds have been found for the aforementioned problems. However, in certain other source coding problems, including the Berger-Tung and the partial side information problems \cite{BT,Upper}, coincident inner and outer bounds have not been found.
In this paper, we shall consider a general class of inner bounds, which we call {\em canonical}, and which may or may not be tight \cite{isit08}. For example, our bound coincides with known descriptions in the aforementioned solved problems, as well as with the Berger-Tung bound known for the Berger-Tung and the partial side information problems. Further, unlike earlier attempts at unification, such as by Csisz\'{a}r and K\"{o}rner \cite{CsisKor}, and Han and Kobayashi \cite{HanKob}, our canonical bound brings both lossless and lossy coding under the same framework. Moreover, our bound is tight for (hence solves) a large class of multiterminal problems \cite{itw07}, generalizing the longstanding single-helper problem \cite{CsisKor}. However, at present we shall not focus on conditions for tightness. Instead we shall analyze an aspect that has historically received very little attention. Note that our inner bounds involve certain auxiliary variables $\{Z_k\}$ with alphabets $\{{\cal Z}_k\}$ (the notation is made precise subsequently). Alphabet sizes $\{|{\cal Z}_k|\}$ play an important role in practical computation, and hence in the understanding of the inner bounds (see, e.g., \cite{GuEffros,GuJanaEffros}). The available results generally estimate $|{\cal Z}_k|\le|{\cal X}_k|+$constant, where ${\cal X}_k$ is the given alphabet of the source $X_k$ associated with the auxiliary variable $Z_k$, and the constant is one or greater. In this paper, we shall derive a tight bound $|{\cal Z}_k|\le|{\cal X}_k|$ on such alphabets, thereby facilitating computation. As alluded to earlier, in different contexts $|{\cal Z}_k|$ has been estimated within an additive constant of $|{\cal X}_k|$. For example, we know $|{\cal Z}_k|\le |{\cal X}_k|+2$ for the Wyner-Ahlswede-K\"{o}rner problem \cite{Wyner,AhlKor}, $|{\cal Z}_k|\le |{\cal X}_k|+1$ for the Wyner-Ziv problem \cite{WZ}, and $|{\cal Z}_k|\le |{\cal X}_k|+2$ for the Berger-Yeung problem \cite{BY}.
In those problems, there is only one auxiliary variable, and a rate-distortion orthant is varied to create the desired inner bound (which equals the achievable region). In contrast, the Berger-Tung region involves two auxiliary variables, and is created by varying a convex core region, which is more complicated than an orthant \cite{BT}. So far, there exists no rigorous analysis of the alphabet size in this case, but estimates vary between $|{\cal Z}_k|\le |{\cal X}_k|+1$ and $|{\cal Z}_k|\le |{\cal X}_k|+2$. In an earlier paper \cite{itw07}, we gave an estimate of $|{\cal Z}_k|\le |{\cal X}_k|+M$ for the general $M$-terminal single-helper problem, where the convex core region is a complicated polytope. In this backdrop, Gu and Effros estimated $|{\cal Z}_k|\le |{\cal X}_k|$ for the Wyner-Ahlswede-K\"{o}rner problem using a linear programming argument \cite{GuEffros}. Later in \cite{GuJanaEffros}, the same result was extended to the Wyner-Ziv problem, and to the partial side information problem \cite{Upper}. The above result was crucially dependent on the fact that the convex core region that sweeps out the overall inner bound is an orthant. In contrast, we shall prove the alphabet size bound $|{\cal Z}_k|\le |{\cal X}_k|$ for an arbitrary problem, where the core region is always a polytope. Specifically, we decompose the polytope into constituent orthants, and make an orthant-based argument. The above decomposition, apart from being central to the problem at hand, enhances the geometric understanding of source coding. The main difficulty here lies in identifying the extreme points exhaustively, thereby identifying the constituent orthants. We show that there are $M!$ such orthants for an $M$-source problem. In order to prove this result, we develop an intricate chain of information-theoretic results. Further, the orthant-based reasoning borrows an essential notion from a linear-programming-based argument.
In particular, we consider only extreme points, which are reached by translating any hyperplane, with its direction fixed, away from the origin towards the achievable region. Our final argument about the alphabet size follows the line of Wyner and Ziv based on a version of Caratheodory's theorem \cite{WZ}. \Section{Canonical Inner Bound} \label{sec:CIB} Consider a joint source distribution $p(x_{\{1,...,M\}},s,v)$ governing source variables $X_{\{1,...,M\}}$, decoder side information $S$, and target variable $V$ for lossy reconstruction/estimation. Also consider $L$ bounded distortion measures $d_l: {\cal V}\times \hat{\cal V}_l\rightarrow [0,d_{l\mbox{max}}]$ ($1\le l\le L$), each with a possibly distinct reconstruction alphabet $\hat{\cal V}_l$. In this setting, the canonical inner bound ${\cal A}_1^*$ is defined as follows. \begin{definition} \label{def:A*} Define ${\cal A}_1^*$ as the set of $(M+L)$-vectors $(R_{\{1,...,M\}},D_{\{1,...,L\}})$ satisfying the following conditions: \begin{enumerate} \item auxiliary random variables $Z_{\{1,...,M\}}$ (taking values in respective finite alphabets ${\cal Z}_{\{1,...,M\}}$) exist such that $Z_m=X_m$, $1\le m \le J$, and $(X_{\{1,...,M\}},S,V,Z_{\{J+1,...,M\}})$ follows the joint distribution \begin{equation} \label{eq:p...} p(x_{\{1,...,M\}},s,v)\prod_{k=J+1}^M q_k(z_k|x_k),\end{equation} for some test channels $\{q_k(z_k|x_k)\}_{k=J+1}^{M}$; \item {\em (rate conditions)} \begin{equation} \label{eq:R1} I(X_I;Z_I|Z_{I^c},S)\le \sum_{i\in I} R_i, \end{equation} where $I^c= \{1,2,...,M\}\setminus I$, and condition (\ref{eq:R1}) holds for all $I\subseteq\{1,...,M\}\setminus \emptyset$; \item {\em (distortion conditions)} mappings $\psi_l:{\cal X}_1\times... \times {\cal X}_{J}\times{\cal Z}_{J+1}\times...\times{\cal Z}_M \times {\cal S} \rightarrow \hat{\cal V}_l$, $1\le l\le L$, exist such that \begin{equation} \label{eq:dist} \mbox{E} d_l(V,\psi_l(X_{\{1,...,J\}},Z_{\{J+1,...,M\}},S))\le D_l.
\end{equation} \end{enumerate} \end{definition} \begin{lemma} \label{le:AlphSize} Every extreme point of ${\cal A}_1^*$ corresponds to some choice of auxiliary variables $Z_{\{J+1,...,M\}}$ with alphabet sizes $|{\cal Z}_k|\le |{\cal X}_k|$, $J+1\le k \le M$. \end{lemma} The main goal of this paper is to prove Lemma \ref{le:AlphSize}. The proof is difficult because ${\cal A}_1^*$ has a complicated geometry. First of all, consider specific auxiliary variables $Z_{\{J+1,...,M\}}$. Then choosing coordinate planes $y_i=R_i=0$, $1\le i \le M$, and $y_{M+l} = D_l=0$, $1\le l \le L$, note that distortion equations (\ref{eq:dist}) are all parallel to coordinate planes, and hence form an orthant, whose analysis is tractable. On the other hand, the rate equations (\ref{eq:R1}) are not all parallel to coordinate planes, leading to a complicated region. In this backdrop, in Sec.~\ref{sec:DARR} we consider the {\em distortion-extracted} rate region given by (\ref{eq:R1}), and find a decomposition into a finite number of orthants. Based on this decomposition, in Sec.~\ref{sec:decompose} we write ${\cal A}_1^*$ as a finite union of component regions that are formed by orthants. Finally, using such component regions, the extreme points in Lemma \ref{le:AlphSize} are characterized in Sec.~\ref{sec:linear} with the help of certain linear combination properties. \Section{Geometry of Distortion-Extracted Rate Region} \label{sec:DARR} We first consider the rate region formed by rate conditions (\ref{eq:R1}). More generally, consider random variables $(X_{\{1,...,M\}},S,Z_{\{1,...,M\}})$ following the joint distribution \begin{equation} \label{eq:p...!} p(x_{\{1,...,M\}},s)\prod_{k=1}^M q_k(z_k|x_k).\end{equation} In this section, we fix $p(x_{\{1,...,M\}},s)$ as well as all $q_k(z_k|x_k)$, $1\le k \le M$.
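As a computational aside (not part of the formal development), checking whether a rate vector satisfies the $2^M-1$ rate conditions (\ref{eq:R1}) amounts to testing one linear inequality per nonempty $I$. The sketch below is ours; the helper name and the conditional mutual information values for $M=2$ are hypothetical placeholders, not derived from any specific source distribution.

```python
import itertools

def in_rate_region(rates, cond_mi):
    """Check the 2^M - 1 rate constraints: for every nonempty
    I subset of {0,...,M-1}, require cond_mi(I) <= sum_{i in I} rates[i]."""
    M = len(rates)
    for r in range(1, M + 1):
        for I in itertools.combinations(range(M), r):
            if cond_mi(frozenset(I)) > sum(rates[i] for i in I) + 1e-12:
                return False
    return True

# Hypothetical values of I(X_I; Z_I | Z_{I^c}, S) for M = 2 (in bits,
# purely for illustration):
mi = {frozenset({0}): 0.3, frozenset({1}): 0.4, frozenset({0, 1}): 0.8}

print(in_rate_region([0.5, 0.5], mi.get))  # True: all three constraints hold
print(in_rate_region([0.3, 0.4], mi.get))  # False: 0.8 > 0.3 + 0.4
```

The second query fails only on the sum-rate constraint, illustrating why the region is not a plain orthant: the constraints couple the coordinates.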
Further, define ${\cal B}^*$ as the set of rate $M$-vectors $R_{\{1,...,M\}}$ satisfying \begin{equation} \label{eq:R1'} \label{eq:Rdef} I(X_I;Z_I|Z_{I^c},S)\le \sum_{i\in I} R_i, \end{equation} where condition (\ref{eq:Rdef}) holds for all $I\subseteq\{1,...,M\}\setminus \emptyset$. We call ${\cal B}^*$ the {\em distortion-extracted} rate region because it is delinked from distortion measures. Of course, we also do not impose the original restrictions $Z_m=X_m$, $1\le m \le J$. Next we find the extreme points of ${\cal B}^*$. In our analysis, we shall assume that there is no degeneracy, i.e., no extraneous Markov chain property, beyond those dictated by the form (\ref{eq:p...!}) of the joint distribution, holds. Note that the nondegeneracy requirement is mild, and is met if all random variables under consideration are statistically dependent. \subsection{Number of Extreme Points: Upper Bound} \begin{lemma} \label{le:unique} Suppose there exists a rate $M$-vector $R_{\{1,...,M\}}$ such that \begin{eqnarray} \label{eq:I} I\left(X_{I};Z_I|Z_{I^c},S\right) &=& \sum_{i\in I} R_i\\ \label{eq:I'} I\left(X_{I'};Z_{I'}|Z_{{I'}^c},S\right) &=& \sum_{i\in I'} R_i \end{eqnarray} simultaneously hold for some distinct sets $I,I'\subseteq \{1,...,M\}\setminus \emptyset$. Then either $I\subset I'$ or $I'\subset I$. \end{lemma} The proof is involved, and makes use of a series of new information-theoretic relations involving $(X_{\{1,...,M\}},S,Z_{\{1,...,M\}})$. It is given in Appendix \ref{sec:ProofUnique}. \begin{lemma} \label{le:num} ${\cal B}^*$ has at most $M!$ extreme points. \end{lemma} {\bf {\em Proof}:} At each extreme point of ${\cal B}^*$, $M$ of the $2^M-1$ constraints given by (\ref{eq:R1'}) are active. Therefore, in view of Lemma \ref{le:unique}, the number of extreme points of ${\cal B}^*$ is upper bounded by the number of possible ways one can have $$ I^{(1)}\subset I^{(2)} \subset ... \subset I^{(m)} \subset I^{(m+1)} \subset ...
\subset I^{(M-1)}\subset I^{(M)},$$ where $I^{(m)} \subseteq \{1,...,M\}$ with cardinality $|I^{(m)}|=m$, $1\le m \le M$. To begin with, the only choice is $I^{(M)} = \{1,...,M\}$. However, given any $I^{(m+1)}$ ($1\le m < M$), one can choose $I^{(m)}$ by discarding one of the $m+1$ elements of $I^{(m+1)}$. Hence one can choose the entire sequence of sets $\{I^{(m)}\}_{m=1}^{M}$ in $M\times (M-1) \times ...\times 2 = M!$ possible ways. Hence the result. \hfill$\Box$ \begin{remark} The above argument does not clarify whether all $M!$ points under consideration are distinct. Hence we can claim only an upper bound. \end{remark} \subsection{Number of Extreme Points: Lower Bound} \begin{lemma} \label{le:cornQ} The rate $M$-vector $R_{\{1,...,M\}}$ such that \begin{equation} \label{eq:cornQ} R_{i} = I(X_{i};Z_{i}|Z_{\{1,...,i-1\}},S), \quad 1\le i\le M, \end{equation} is an extreme point of ${\cal B}^*$. \end{lemma} \begin{remark} By Lemma \ref{le:last} and (\ref{eq:cornQ}), we have \begin{equation} \label{eq:lastL} I\left(X_{I};Z_{I}|Z_{I^c},S\right) \le \sum_{i\in I} I\left(X_i;Z_i|Z_{\{1,...,i-1\}},S\right) = \sum_{i\in I} R_i \end{equation} for all $I\subseteq \{1,...,M\}\setminus\emptyset$. Thus, by (\ref{eq:Rdef}), $R_{\{1,...,M\}}\in {\cal B}^*$. \end{remark} {\bf {\em Proof}:} It is enough to show that the given $R_{\{1,...,M\}}$ makes $M$ constraints, given in (\ref{eq:Rdef}), active. From (\ref{eq:cornQ}), we can write \begin{equation} \label{eq:cornQ1} \sum_{i=m}^{M} I(X_{i};Z_{i}|Z_{\{1,...,i-1\}},S) = \sum_{i=m}^{M} R_i \end{equation} for each $1\le m\le M$. Further, by Corollary \ref{cor:chain3}, (\ref{eq:cornQ1}) is the same as \begin{equation} \label{eq:cc3'} I\left(X_{\{m,...,M\}};Z_{\{m,...,M\}}| Z_{\{1,...,m-1\}},S\right)= \sum_{i=m}^{M} R_i, \quad 1\le m \le M, \end{equation} which makes $M$ constraints, given in (\ref{eq:Rdef}), active. This completes the proof.
\hfill$\Box$ Now the indices $\{1,...,M\}$ in (\ref{eq:cornQ}) can be permuted to obtain $M!$ extreme points. Importantly, these extreme points are all distinct due to the nondegeneracy assumption. \begin{corollary} \label{cor:lower} ${\cal B}^*$ has at least $M!$ extreme points. \end{corollary} \begin{remark} \label{rem:num} By Lemma \ref{le:num} and Corollary \ref{cor:lower}, ${\cal B}^*$ has exactly $M!$ extreme points, each of which takes the form (\ref{eq:cornQ}) except that the indices $\{1,...,M\}$ undergo suitable permutation. (As it is, (\ref{eq:cornQ}) corresponds to the identity permutation.) \end{remark} \Section{Decomposition of \boldmath{${\cal A}_1^*$}} \label{sec:decompose} Now we move on to the rate-distortion region ${\cal A}_1^*$. Specifically, consider the subset ${\cal A}_1^*(\{q_k\})$ of ${\cal A}_1^*$ defined by (\ref{eq:p...})--(\ref{eq:dist}) for given conditional distributions $q_k(z_k|x_k)$, $J+1\le k \le M$. Of course, ${\cal A}_1^* = \bigcup {\cal A}_1^*(\{q_k\})$, where the union is taken over all $\{q_k\}$. Note that, like ${\cal A}_1^*$, ${\cal A}_1^*(\{q_k\})$ is a subset of the $(M+L)$-dimensional real space. However, although ${\cal A}_1^*$ is not necessarily convex, each ${\cal A}_1^*(\{q_k\})$ is convex. Further, every extreme point of ${\cal A}_1^*$ is an extreme point of some ${\cal A}_1^*(\{q_k\})$. Finally, notice that the projection of ${\cal A}_1^*(\{q_k\})$ onto the space of $M$ rate coordinates is the same as ${\cal B}^*$ with the choice $Z_m=X_m$, $1\le m\le J$ (which does not violate our nondegeneracy assumption), whereas the projection onto the space of $L$ distortion coordinates is simply a suitable orthant.
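The count in Remark \ref{rem:num} can also be checked by brute force: maximal chains $I^{(1)}\subset\cdots\subset I^{(M)}$ are enumerated top-down, dropping one element at each step, and their number is $M\times(M-1)\times\cdots\times 2 = M!$. The sketch below is our own illustration; the function name is not from the paper.

```python
import math

def maximal_chains(M):
    """Enumerate all chains I^(1) subset ... subset I^(M) of subsets of
    {0,...,M-1} with |I^(m)| = m, built top-down from I^(M) = {0,...,M-1}
    by dropping one element at each step (m+1 choices at level m+1)."""
    def descend(current):
        if len(current) == 1:
            return [[current]]
        out = []
        for x in sorted(current):
            for tail in descend(current - {x}):
                out.append(tail + [current])
        return out
    return descend(frozenset(range(M)))

for M in (2, 3, 4):
    chains = maximal_chains(M)
    assert len(chains) == math.factorial(M)  # the M x (M-1) x ... x 2 count
    # each chain is strictly increasing, matching the active-constraint pattern
    assert all(c[m] < c[m + 1] for c in chains for m in range(M - 1))
print("number of maximal chains equals M! for M = 2, 3, 4")
```

Each chain corresponds to one ordering in which the sum-rate constraints become active, i.e., to one candidate extreme point of ${\cal B}^*$.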
Therefore, by Remark \ref{rem:num}, ${\cal A}_1^*(\{q_k\})$ possesses $M!$ extreme points, one of which, denoted $(R^0_{\{1,...,M\}}(\{q_k\}),D^0_{\{1,...,L\}}(\{q_k\}))$, is specified by (from (\ref{eq:cornQ}) and (\ref{eq:dist})) \begin{eqnarray} \label{eq:cornQ'} R^0_{i}(\{q_k\}) &=& I(X_{i};Z_{i}|Z_{\{1,...,i-1\}},S), \quad 1\le i\le M\\ \label{eq:dist'} D^0_l(\{q_k\}) &=& \min_{\psi_l} \mbox{E} d_l(V,\psi_l(Z_{\{1,...,M\}},S)), \quad 1\le l\le L, \end{eqnarray} where $Z_m=X_m$, $1\le m\le J$. In general, any extreme point $(R^\pi_{\{1,...,M\}}(\{q_k\}),D^\pi_{\{1,...,L\}}(\{q_k\}))$ is generated by a suitable permutation (bijection) $P^\pi: \{1,...,M\} \rightarrow \{1,...,M\}$, where $\pi$ takes $M!$ values, say, $\{0,...,M!-1\}$ (we set $P^0$ to be the identity permutation). In other words, in (\ref{eq:cornQ'}) and (\ref{eq:dist'}), each occurrence of index $i$ is replaced by $P^\pi(i)$. As regards dependence on $\pi$, vectors $R^\pi_{\{1,...,M\}}(\{q_k\})$ are all distinct (as mentioned earlier), whereas vectors $D^\pi_{\{1,...,L\}}(\{q_k\})$ are all identical. At this point, denote the orthant specified by $(R^\pi_{\{1,...,M\}}(\{q_k\}),D^\pi_{\{1,...,L\}}(\{q_k\}))$ as \begin{equation} \label{eq:Orth0} {\cal A}_{1;\pi}^*(\{q_k\}) = \{(R_{\{1,...,M\}},D_{\{1,...,L\}}):R^\pi_{i}(\{q_k\}) \le R_i,1\le i\le M; D^\pi_{l}(\{q_k\}) \le D_l,1\le l\le L\} \end{equation} for $0\le \pi \le M!-1$, and all possible $\{q_k\}$. Clearly, $${\cal A}_1^*(\{q_k\}) = \mbox{conv}\left(\bigcup_{\pi=0}^{M!-1} {\cal A}_{1;\pi}^*(\{q_k\})\right),$$ where $\mbox{conv}(\cdot)$ indicates `convex hull of'. Consequently, we have \begin{equation} \label{eq:Orth1} \mbox{conv}({\cal A}_1^*) = \mbox{conv}\left(\bigcup_{\{q_k\}} {\cal A}_1^*(\{q_k\})\right) = \mbox{conv}\left(\bigcup_{\{q_k\}} \bigcup_{\pi=0}^{M!-1} {\cal A}_{1;\pi}^*(\{q_k\})\right).
\end{equation} Now, interchanging the union operations in the last term in (\ref{eq:Orth1}), and defining \begin{equation} \label{eq:A1pi*} {\cal A}_{1;\pi}^*= \bigcup_{\{q_k\}} {\cal A}_{1;\pi}^*(\{q_k\}), \end{equation} we obtain \begin{equation} \label{eq:Orth2} \mbox{conv}({\cal A}_1^*) = \mbox{conv}\left( \bigcup_{\pi=0}^{M!-1} {\cal A}_{1;\pi}^*\right). \end{equation} In view of (\ref{eq:Orth2}), every extreme point of ${\cal A}_1^*$ is an extreme point of some ${\cal A}_{1;\pi}^*$. Consequently, in order to establish Lemma \ref{le:AlphSize}, it is enough to show the following. \begin{lemma} \label{le:AlphSizePI} Every extreme point of ${\cal A}_{1;\pi}^*$ ($0\le \pi \le M!-1$) corresponds to some choice of auxiliary variables $Z_{\{J+1,...,M\}}$ with alphabet sizes $|{\cal Z}_k|\le |{\cal X}_k|$, $J+1\le k \le M$. \end{lemma} The rest of the note is devoted to the proof of Lemma \ref{le:AlphSizePI}. In particular, we shall prove the result only for $\pi=0$. Our analysis extends to other values of $\pi$ in a straightforward manner. At present, consider the real $(M+L)$-space, and let $y_i=0$, $1\le i \le M+L$, be the coordinate planes. In this space, an $(M+L-1)$-hyperplane \begin{equation} \label{eq:HP} \sum_{i=1}^{M+L} a_i y_i = c \end{equation} is specified by the direction cosine vector $(a_1,...,a_{M+L})$ subject to $\sum_{i=1}^{M+L} a_i^2 = 1$, and the intercept $c$. At this point, identifying $y_i=R_i$, $1\le i \le M$, and $y_{M+l}=D_l$, $1\le l \le L$, note that ${\cal A}_{1;0}^*$ lies in the nonnegative orthant. Further, every extreme point of ${\cal A}_{1;0}^*$ has a tangent hyperplane of the form (\ref{eq:HP}), whose direction cosines and intercept are nonnegative ($a_i\ge 0$, $1\le i \le M+L$; $c\ge0$). Conversely, for any $(a_1,...,a_{M+L})$ with $a_i\ge 0$, $1\le i \le M+L$, there exists $c\ge 0$ such that the hyperplane (\ref{eq:HP}) is tangent to ${\cal A}_{1;0}^*$ at some extreme point. Hence we obtain the following result. 
\begin{corollary} \label{cor:exA10} The set of extreme points of ${\cal A}_{1;0}^*$ is given by \begin{equation} \label{eq:extreme..} \left\{\arg \min_{(R_{\{1,...,M\}},D_{\{1,...,L\}})\in{\cal A}_{1;0}^*} \left(\sum_{i=1}^M a_i R_i + \sum_{l=1}^L a_{M+l} D_l \right): \sum_{i=1}^{M+L} a_i^2 = 1; a_i \ge 0,1\le i \le M+L\right\}. \end{equation} \end{corollary} By (\ref{eq:Orth0}) and (\ref{eq:A1pi*}), every minimizer in (\ref{eq:extreme..}) is of the form $(R^0_{\{1,...,M\}}(\{q_k\}),D^0_{\{1,...,L\}}(\{q_k\}))$ for some $\{q_k\}$. Further, using $Z_m=X_m$, $1\le m\le J$, in (\ref{eq:cornQ'}), notice that $R^0_{\{1,...,J\}}(\{q_k\})$ does not depend on $(\{q_k\})$. Hence we set $a_1=...=a_J=0$ without loss of generality (and scale the remaining direction cosines appropriately) to obtain the following. \begin{corollary} \label{cor:exA10'} The set of extreme points of ${\cal A}_{1;0}^*$ is given by the set of rate-distortion vectors $(R^0_{\{1,...,M\}}(\{q_k\}),D^0_{\{1,...,L\}}(\{q_k\}))$ such that $\{q_k\}$ minimizes \begin{equation} \label{eq:exA10'} \sum_{i=J+1}^M a_i R^0_i(\{q_k\}) + \sum_{l=1}^L a_{M+l} D_l^0(\{q_k\}), \end{equation} and direction cosine vector $a_{\{J+1,...,M+L\}}$ varies through admissible values. \end{corollary} Note that Lemma \ref{le:AlphSizePI} follows for $\pi =0$ (corresponding to identity permutation $P^0$), if we lose no generality by restricting to minimizers $\{q_k\}$ of (\ref{eq:exA10'}) that satisfy $|{\cal Z}_k|\le |{\cal X}_k|$, $J+1\le k \le M$. We shall show that the last condition indeed holds as a consequence of certain linear combination properties. \Section{Linear Combination Properties} \label{sec:linear} \subsection{Change of Variables} For $J+1\le k\le M$, denote marginal distributions of $X_k$ and $Z_k$ by $p_k(x_k)$ and $p'_k(z_k)$, respectively, and conditional distribution of $X_k$ given $Z_k$ by $q'_k(x_k|z_k)$. Note that $p_k(x_k)$ is specified by marginalizing the source distribution $p(x_{\{1,...,M\}},s,v)$. 
Further, by Bayes' rule, we have $p_k(x_k) q_k(z_k|x_k) = p'_k(z_k)q'_k(x_k|z_k)$. Of course, one completely specifies both $p'_k$ and $q'_k$ by specifying $q_k$. At the same time, rather than varying $q_k$, we can equivalently vary the pair $(p'_k,q'_k)$ subject to the admissibility condition \begin{equation} \label{eq:pCON} p_k(x_k) = \sum_{z_k\in{\cal Z}_k} p'_k(z_k)q'_k(x_k|z_k). \end{equation} Apart from the above specific notation, we shall denote by `$r$' generic distributions. For example, $r(y,u|w)$ indicates the joint distribution of $(Y,U)$ conditioned on $W$. At this point, consider the identity permutation $P^0$ of $\{1,...,M\}$, and, correspondingly, the set ${\cal A}_{1;0}^*(\{p'_k,q'_k\})$. Here, we recall that variation of $\{q_k\}$, and variation of $\{p'_k,q'_k\}$ subject to (\ref{eq:pCON}) are equivalent, and, in a slight abuse of notation, denote by ${\cal A}_{1;0}^*(\{p'_k,q'_k\})$ the set function of $\{p'_k,q'_k\}$ equalling ${\cal A}_{1;0}^*(\{q_k\})$. Subsequently, we shall make analogous changes of variables without explicit mention. Using $Z_m=X_m$, $1\le m\le J$, in (\ref{eq:cornQ'}) and (\ref{eq:dist'}), we have \begin{eqnarray} \label{eq:cornQ''1} R^0_{i}(\{p'_k,q'_k\}) &=& H(X_{i}|X_{\{1,...,i-1\}},S), \quad 1\le i\le J\\ \label{eq:cornQ''2} R^0_{i}(\{p'_k,q'_k\}) &=& I(X_{i};Z_i|X_{\{1,...,J\}},Z_{\{J+1,...,i-1\}},S), \quad J+1\le i\le M\\ \label{eq:dist''} D^0_l(\{p'_k,q'_k\}) &=& \min_{\psi_l} \mbox{E} d_l(V,\psi_l(X_{\{1,...,J\}},Z_{\{J+1,...,M\}},S)), \quad 1\le l\le L. \end{eqnarray} As mentioned earlier, and by (\ref{eq:cornQ''1}), $R^0_{\{1,...,J\}}(\{p'_k,q'_k\})$ does not depend on $(\{p'_k,q'_k\})$. However, the remaining rate and distortion components, given by (\ref{eq:cornQ''2}) and (\ref{eq:dist''}), do exhibit dependence on $(\{p'_k,q'_k\})$. Next we isolate the dependence of each individual rate and distortion component on the individual pair $(p'_k,q'_k)$, while keeping the rest of the pairs fixed.
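The change of variables $(p_k,q_k)\leftrightarrow(p'_k,q'_k)$ described above can be illustrated numerically. The sketch below is ours, with an arbitrary binary example: $(p'_k,q'_k)$ is computed by Bayes' rule, after which the admissibility condition (\ref{eq:pCON}) recovers $p_k$ exactly.

```python
def change_of_variables(p, q):
    """p[x] = p_k(x); q[z][x] = q_k(z|x).  Returns (p_prime, q_prime) with
    p_prime[z] = sum_x p[x] q[z][x]  and
    q_prime[z][x] = p[x] q[z][x] / p_prime[z]  (Bayes' rule)."""
    X, Z = range(len(p)), range(len(q))
    p_prime = [sum(p[x] * q[z][x] for x in X) for z in Z]
    q_prime = [[p[x] * q[z][x] / p_prime[z] for x in X] for z in Z]
    return p_prime, q_prime

# arbitrary binary example (values are ours, purely for illustration)
p = [0.6, 0.4]                 # marginal p_k(x_k)
q = [[0.9, 0.2], [0.1, 0.8]]   # test channel q_k(z_k|x_k), rows indexed by z_k
pp, qp = change_of_variables(p, q)

# admissibility condition: p_k(x) = sum_z p'_k(z) q'_k(x|z)
recovered = [sum(pp[z] * qp[z][x] for z in range(len(pp)))
             for x in range(len(p))]
print(recovered)  # [0.6, 0.4] up to floating-point error
```

Varying $(p'_k,q'_k)$ subject to this reconstruction constraint is thus an exact reparametrization of varying the test channel $q_k$.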
We highlight the dependence on $(p'_k,q'_k)$ by dropping the rest of the pairs $\{(p'_\kappa,q'_\kappa)\}_{\kappa \ne k}$ from the argument. Specifically, we show that each $R_i^0(p'_k,q'_k)$ ($k\le i\le M$) and each $D_l^0(p'_k,q'_k)$ ($1\le l\le L$) is a linear combination of functionals of $q'_k(\cdot|z_k)$'s weighted by $p'_k(z_k)$'s. Here $q'_k(\cdot|z_k)$ denotes the probability vector $\{q'_k(x_k|z_k)\}_{x_k\in{\cal X}_k}$ for a given $z_k\in {\cal Z}_k$. \subsection{Rate Components} Consider $J+1\le k\le i\le M$. From (\ref{eq:cornQ''2}), we have \begin{eqnarray} \nonumber R^0_{i}(p'_k,q'_k) &=& I(X_{i};Z_i|X_{\{1,...,J\}},Z_{\{J+1,...,i-1\}},S)\\ \label{eq:IH1} &=& H(X_i|X_{\{1,...,J\}},Z_{\{J+1,...,i-1\}},S) - H(X_i|X_{\{1,...,J\}},Z_{\{J+1,...,i\}},S). \end{eqnarray} Further, denote by $\Delta_{{\cal X}_k}$ the $(|{\cal X}_k|-1)$-dimensional probability simplex, i.e., the set of probability vectors defined on ${\cal X}_k$. \begin{lemma} \label{le:RateLin1} If $J+1\le k \le i \le M$, then $$H(X_i|X_{\{1,...,J\}},Z_{\{J+1,...,i-1\}},S) = \sum_{z_k\in{\cal Z}_k} p'_k(z_k) \Phi^{(1)}_{ki}(q'_k(\cdot|z_k))$$ for some functional $\Phi^{(1)}_{ki}$ defined on $\Delta_{{\cal X}_k}$. \end{lemma} {\bf {\em Proof}:} First note that, if $i=k$, then the target entropy does not depend on $(p'_k,q'_k)$, and $\Phi^{(1)}_{ki}$ reduces to a trivial constant. A more interesting situation arises when $i > k$. In this case, verify that $k\in\{J+1,...,i-1\}$. Now write $U = (X_{\{1,...,J\}},Z_{\{J+1,...,i-1\}\setminus\{k\}},S)$, and verify that \begin{equation} \label{eq:Mlop0} Z_k \rightarrow X_k \rightarrow (U,X_i) \end{equation} forms a Markov chain. 
Hence we obtain \begin{eqnarray} \nonumber &&H(X_i|X_{\{1,...,J\}},Z_{\{J+1,...,i-1\}},S) ~~=~~ H(X_i|Z_k,U)\\ \nonumber &=& -\sum_{(x_i,z_k,u)} r(x_i,z_k,u)\log\frac{r(x_i,z_k,u)}{r(z_k,u)}\\ \label{eq:MLop1} &=& -\sum_{(x_i,z_k,u)} \sum_{x_k} p'_k(z_k)q'_k(x_k|z_k)r(x_i,u|x_k)\log\frac{\sum_{x_k} p'_k(z_k)q'_k(x_k|z_k)r(x_i,u|x_k)}{\sum_{x_k} p'_k(z_k)q'_k(x_k|z_k)r(u|x_k)}\\ \label{eq:MLop2} &=& -\sum_{z_k} p'_k(z_k) \sum_{(x_i,u)} \sum_{x_k} q'_k(x_k|z_k)r(x_i,u|x_k)\log\frac{\sum_{x_k} q'_k(x_k|z_k)r(x_i,u|x_k)}{\sum_{x_k} q'_k(x_k|z_k)r(u|x_k)}\\ \label{eq:MLop3} &=& \sum_{z_k} p'_k(z_k) \Phi^{(1)}_{ki}(q'_k(\cdot|z_k)). \end{eqnarray} Here (\ref{eq:MLop1}) follows by noting Markov chain (\ref{eq:Mlop0}), and writing \begin{eqnarray*} r(z_k,x_i,u) &=& \sum_{x_k} r(z_k, x_k,x_i,u)~=~ \sum_{x_k} p'_k(z_k)q'_k(x_k|z_k)r(x_i,u|x_k)\\ r(z_k,u) &=& \sum_{x_k} p'_k(z_k)q'_k(x_k|z_k)r(u|x_k). \end{eqnarray*} Further, (\ref{eq:MLop2}) follows by rearranging, and by canceling out $p'_k(z_k)$ from the numerator and denominator of the argument of `$\log$'. Finally, (\ref{eq:MLop3}) follows by defining the functional $$\Phi^{(1)}_{ki}(t) = -\sum_{(x_i,u)} \sum_{x_k} t(x_k) r(x_i,u|x_k)\log\frac{\sum_{x_k} t(x_k) r(x_i,u|x_k)}{\sum_{x_k} t(x_k) r(u|x_k)},$$ where $t=\{t(x_k):x_k\in {\cal X}_k\}$ is any probability vector on ${\cal X}_k$. \hfill $\Box$ Adopting a similar approach, we also obtain the following. \begin{lemma} \label{le:RateLin2} If $J+1\le k \le i \le M$, then $$H(X_i|X_{\{1,...,J\}},Z_{\{J+1,...,i\}},S) = \sum_{z_k\in{\cal Z}_k} p'_k(z_k) \Phi^{(2)}_{ki}(q'_k(\cdot|z_k))$$ for some functional $\Phi^{(2)}_{ki}$ defined on $\Delta_{{\cal X}_k}$. \end{lemma} Noting (\ref{eq:IH1}), combining Lemmas \ref{le:RateLin1} and \ref{le:RateLin2}, and writing $\Phi_{ki} = \Phi^{(1)}_{ki}-\Phi^{(2)}_{ki}$, we obtain the following corollary. 
\begin{corollary} \label{cor:RateLin} If $J+1\le k \le i \le M$, then $$R^0_i(p'_k,q'_k) = \sum_{z_k\in{\cal Z}_k} p'_k(z_k) \Phi_{ki}(q'_k(\cdot|z_k))$$ for some functional $\Phi_{ki}$ defined on $\Delta_{{\cal X}_k}$. \end{corollary} \subsection{Distortion Components} \begin{lemma} \label{le:DistLin} For $J+1\le k \le M$, and $1\le l\le L$, we have $$D^0_l(p'_k,q'_k) = \sum_{z_k\in{\cal Z}_k} p'_k(z_k) \Psi_{kl}(q'_k(\cdot|z_k))$$ for some functional $\Psi_{kl}$ defined on $\Delta_{{\cal X}_k}$. \end{lemma} {\bf {\em Proof}:} Write $U = (X_{\{1,...,J\}},Z_{\{J+1,...,M\}\setminus\{k\}},S)$, and verify that \begin{equation} \label{eq:Kab0} Z_k \rightarrow X_k \rightarrow (U,V) \end{equation} forms a Markov chain. Hence from (\ref{eq:dist''}), we obtain \begin{eqnarray} \nonumber D^0_l(p'_k,q'_k) &=& \min_{\psi_l} \mbox{E} d_l(V,\psi_l(U,Z_k))\\ \nonumber &=& \min_{\psi_l} \sum_{(v,u,z_k)} r(u,v,z_k) d_l(v,\psi_l(u,z_k))\\ \label{eq:Kab1} &=& \min_{\psi_l} \sum_{(v,u,z_k)} \sum_{x_k} p'_k(z_k) q'_k(x_k|z_k) r(u,v|x_k) d_l(v,\psi_l(u,z_k))\\ \label{eq:Kab2} &=& \sum_{z_k} p'_k(z_k) \sum_{u} \min_{\mbox{$\hat{v}$}_l} \left[ \sum_{(v,x_k)} q'_k(x_k|z_k) r(u,v|x_k) d_l(v,\mbox{$\hat{v}$}_l)\right]\\ \label{eq:Kab3} &=& \sum_{z_k} p'_k(z_k) \Psi_{kl}(q'_k(\cdot|z_k)). \end{eqnarray} Here (\ref{eq:Kab1}) follows by noting Markov chain (\ref{eq:Kab0}), and writing $$ r(u,v,z_k) = \sum_{x_k} r(u,v,x_k,z_k) = \sum_{x_k} p'_k(z_k)q'_k(x_k|z_k)r(u,v|x_k).$$ Further, (\ref{eq:Kab2}) follows by rearranging. Finally, (\ref{eq:Kab3}) follows by defining the functional $$\Psi_{kl}(t) = \sum_{u} \min_{\mbox{$\hat{v}$}_l} \left[ \sum_{(v,x_k)} t(x_k) r(u,v|x_k) d_l(v,\mbox{$\hat{v}$}_l)\right],$$ where $t=\{t(x_k):x_k\in {\cal X}_k\}$ is any probability vector on ${\cal X}_k$. \hfill $\Box$ \subsection{Minimization of Linear Combination} At this time, consider the setting of Corollary \ref{cor:exA10'}, i.e., $a_1=...=a_J=0$. 
\begin{lemma} \label{le:Linear} Pick any $J+1\le k\le M$, and fix admissible $a_{\{J+1,...,M+L\}}$ and $\{(p'_\kappa,q'_\kappa)\}_{\kappa\ne k}$ in an arbitrary manner. Then there exists a minimizer $(p'_k,q'_k)$ of the problem $$\min_{\mbox{\footnotesize $(p'_k,q'_k)$ subject to (\ref{eq:pCON})}} \sum_{i=J+1}^M a_i R_i^0(p'_k,q'_k) + \sum_{l=1}^L a_{M+l} D_l^0(p'_k,q'_k)$$ such that $p'_k(z_k)$ is defined on alphabet ${\cal Z}_k$ with size $|{\cal Z}_k|\le |{\cal X}_k|$ (and hence $q'_k(x_k|z_k)$ is specified by at most $|{\cal X}_k|$ probability vectors defined on ${\cal X}_k$). \end{lemma} {\bf {\em Proof}:} Given $a_{\{J+1,...,M+L\}}$ and $\{(p'_\kappa,q'_\kappa)\}_{\kappa\ne k}$, consider $$\omega = \sum_{i=J+1}^M a_i R_i^0(p'_k,q'_k) + \sum_{l=1}^L a_{M+l} D_l^0(p'_k,q'_k),$$ and denote by $\Omega$ the set of admissible values of $\omega$. Further, denote $\omega^* = \min_{\omega\in \Omega} \omega$. Now, by Corollary \ref{cor:RateLin} and Lemma \ref{le:DistLin}, we have \begin{equation} \label{eq:lin57} \omega = \sum_{z_k\in{\cal Z}_k} p'_k(z_k) \Theta(q'_k(\cdot|z_k)), \end{equation} where $\Theta(t) = \sum_{i=J+1}^M a_i \Phi_{ki}(t)+ \sum_{l=1}^L a_{M+l} \Psi_{kl}(t)$ is defined on $\Delta_{{\cal X}_k}$. Note that $\Theta$ is continuous and bounded, and the $(|{\cal X}_k|-1)$-dimensional probability simplex $\Delta_{{\cal X}_k}$ is compact. Now consider the mapping $t\rightarrow(t,\Theta(t))$, and denote by ${\cal S}$ the image of $\Delta_{{\cal X}_k}$ under this mapping. Of course, ${\cal S}$ is connected and compact, and ${\cal S}$ has dimensionality $|{\cal X}_k|$. Therefore, by the Fenchel-Eggleston strengthening of Caratheodory's theorem, any point in $\mbox{conv}({\cal S})$ is a convex combination of at most $|{\cal X}_k|$ points in ${\cal S}$. Further, in view of (\ref{eq:pCON}) and (\ref{eq:lin57}), any pair $(p_k,\omega)$ belongs to $\mbox{conv}({\cal S})$. 
In particular, the set $\Omega$ of admissible $\omega$, where the source distribution $p_k$ is fixed by the problem statement, is given by $$\Omega = \{\omega:(p_k,\omega)\in \mbox{conv}({\cal S})\}.$$ In other words, every admissible $\omega\in \Omega$, including $\omega^*$, can be expressed as in (\ref{eq:lin57}) with $|{\cal Z}_k|\le |{\cal X}_k|$. This completes the proof. \hfill $\Box$ \begin{corollary} \label{cor:Linear} For any admissible $a_{\{J+1,...,M+L\}}$, there exists a minimizer $\{p'_k,q'_k\}$ of the problem $$\min_{\mbox{\footnotesize $\{p'_k,q'_k\}$ subject to (\ref{eq:pCON})}} \sum_{i=J+1}^M a_i R_i^0(\{p'_k,q'_k\}) + \sum_{l=1}^L a_{M+l} D_l^0(\{p'_k,q'_k\})$$ such that each $p'_k(z_k)$ ($J+1\le k\le M$) is defined on an alphabet ${\cal Z}_k$ with size $|{\cal Z}_k|\le |{\cal X}_k|$ (and hence each $q'_k(x_k|z_k)$ is specified by at most $|{\cal X}_k|$ probability vectors defined on ${\cal X}_k$). \end{corollary} {\bf {\em Proof}:} We shall prove the result by contradiction. Suppose there exists admissible $a_{\{J+1,...,M+L\}}$ such that a minimizer $\{p'_k,q'_k\}$ with $|{\cal Z}_k|\le |{\cal X}_k|$, $J+1\le k\le M$, does not exist. Pick such $a_{\{J+1,...,M+L\}}$, and compute the minimum value $\phi$ of the objective function. By supposition, any corresponding minimizer $\{p'_k,q'_k\}$ has $|{\cal Z}_i| > |{\cal X}_i|$ for some $J+1\le i\le M$. We now undertake a procedure such that the minimum value does not increase at any stage. Specifically, choose $k=J+1$, and keep $\{(p'_\kappa,q'_\kappa)\}_{\kappa \ne k}$ fixed. By Lemma \ref{le:Linear}, the objective function is no greater than $\phi$ for some new choice $(p'_k,q'_k)$ with $|{\cal Z}_k|\le |{\cal X}_k|$. Update $(p'_k,q'_k)$ to this new choice. Next choose $k=J+2$, keep $\{(p'_\kappa,q'_\kappa)\}_{\kappa \ne k}$ fixed, and update $(p'_k,q'_k)$ (in view of Lemma \ref{le:Linear}) such that the objective function is no greater than $\phi$, yet $|{\cal Z}_k|\le |{\cal X}_k|$. 
Continue this procedure until $k=M$. Finally, we have a new $\{(p'_k,q'_k)\}$ with $|{\cal Z}_k|\le |{\cal X}_k|$, $J+1\le k\le M$, such that the corresponding objective function is no greater than $\phi$. This is a contradiction. \hfill$\Box$ {\bf {\em Proofs of Lemmas \ref{le:AlphSizePI} and \ref{le:AlphSize}}:} Note that $\{q_k\}$ is completely determined by $\{p'_k,q'_k\}$ by Bayes' rule $$q_k(z_k|x_k) = p'_k(z_k)q'_k(x_k|z_k)/p_k(x_k),$$ because $p_k(x_k)$ is specified by the problem statement. Therefore, by Corollary \ref{cor:Linear}, we lose no generality by restricting to minimizers $\{q_k\}$ of (\ref{eq:exA10'}) that satisfy $|{\cal Z}_k|\le |{\cal X}_k|$, $J+1\le k \le M$. Hence Lemma \ref{le:AlphSizePI} follows for $\pi =0$ (corresponding to the identity permutation $P^0$). Further, a similar analysis straightforwardly establishes Lemma \ref{le:AlphSizePI} for each $1\le \pi \le M!-1$. Finally, in view of (\ref{eq:Orth2}), Lemma \ref{le:AlphSize} follows. \hfill$\Box$
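Lemma \ref{le:Linear} invokes the Fenchel-Eggleston strengthening of Caratheodory's theorem, which, for a connected compact set in $\mathbb{R}^d$, sharpens the support bound from $d+1$ to $d$ points. The plain Caratheodory reduction itself is constructive. The following Python sketch (illustrative only; the helper \texttt{caratheodory\_reduce} and the use of NumPy are our own choices, not part of the development above) demonstrates the $d+1$ reduction in $\mathbb{R}^d$:

```python
import numpy as np

def caratheodory_reduce(points, weights, tol=1e-12):
    """Rewrite a convex combination of points in R^d as a convex
    combination of at most d+1 of them, representing the same point."""
    points = np.asarray(points, dtype=float)
    w = np.asarray(weights, dtype=float).copy()
    d = points.shape[1]
    while True:
        support = np.flatnonzero(w > tol)
        if len(support) <= d + 1:
            return w
        P = points[support]
        # Affine dependence: mu != 0 with mu @ P = 0 and sum(mu) = 0;
        # it exists because |support| > d + 1.
        A = np.vstack([P.T, np.ones(len(support))])
        mu = np.linalg.svd(A)[2][-1]      # null-space vector of A
        mu /= np.max(np.abs(mu))
        if np.max(mu) <= 0:
            mu = -mu
        pos = mu > tol
        # Largest step keeping all weights nonnegative; zeroes a weight.
        t = np.min(w[support][pos] / mu[pos])
        w[support] -= t * mu
        w[w < tol] = 0.0                  # clip numerical dust

# Example: a convex combination of 10 points in R^2 is rewritten
# using at most 3 of them, without moving the represented point.
rng = np.random.default_rng(0)
pts = rng.standard_normal((10, 2))
lam = rng.random(10)
lam /= lam.sum()
target = lam @ pts
lam2 = caratheodory_reduce(pts, lam)
assert np.count_nonzero(lam2) <= 3
assert np.allclose(lam2 @ pts, target)
```

Each pass eliminates at least one support point by moving along an affine dependence until some weight reaches zero; the represented point and the total weight are preserved throughout.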
\section{Introduction and standing assumption} Throughout this paper, \begin{empheq}[box = \mybluebox]{equation*} \text{$\mathcal{H}$ is a real Hilbert space} \end{empheq} with inner product $\innp{\cdot,\cdot}$ and induced norm $\|\cdot\|$. We denote by $\mathcal{P}(\mathcal{H})$ the set of all nonempty subsets of $\mathcal{H}$ containing \emph{finitely many} elements. Assume that \begin{empheq}[box=\mybluebox]{equation*} S=\{x_{1}, x_{2}, \ldots, x_{m}\} \in \mathcal{P}(\mathcal{H}). \end{empheq} \emph{The goal of this paper is to provide a systematic study of the circumcenter of $S$, i.e., of the (unique if it exists) point in the affine hull of $S$ that is equidistant from all points in $S$.} The classical case in trigonometry or Euclidean geometry arises when $m=3$ and $\mathcal{H}=\mathbb{R}^2$. Recent applications of the circumcenter concern the much more general case considered here. Indeed, our work is motivated by recent works of Behling, Bello Cruz, and Santos (see \cite{BCS2017} and \cite{BCS2018}) on accelerating the Douglas--Rachford algorithm by employing the circumcenter of intermediate iterates to solve certain best approximation problems. The paper is organized as follows. Various auxiliary results are collected in \cref{sec:AuxResults} to ease subsequent proofs. Based on the circumcenter, we introduce our main actor, the \emph{circumcenter operator}, in \cref{sec:DefinCircOper}. Explicit formulae for the circumcenter are provided in Sections \ref{sec:ClosFormuCircOper} and \ref{sec:SymmFormuCCS}, while \cref{sec:BasiPropCircOper} records some basic properties. In \cref{sec:LimiCircOperSeqSet}, we turn to the behaviour of the circumcenter when sequences of sets are considered. \cref{sec:CircThreePoints} deals with the case when the set contains three points, which yields particularly pleasing results. The importance of the circumcenter in the algorithmic work of Behling et al.\ is explained in \cref{sec:applications}. 
In the final \cref{sec:CircCrossProd}, we return to more classical roots of the circumcenter and discuss formulae involving cross products when $\mathcal{H}=\mathbb{R}^3$. The notation employed is standard and largely follows \cite{BC2017}. \section{Auxiliary results} \label{sec:AuxResults} In this section, we provide various results that will be useful in the sequel. \subsection{Affine sets} Recall that a nonempty subset $S$ of $\mathcal{H}$ is an \emph{affine subspace} of $\mathcal{H}$ if $(\forall \rho\in\mathbb{R})$ $\rho S + (1-\rho)S = S$; moreover, the smallest affine subspace containing $S$ is the \emph{affine hull} of $S$, denoted $\ensuremath{\operatorname{aff}} S$. \begin{fact} {\rm \cite[page 4]{R2015}} Let $S \subseteq \mathcal{H}$ be an affine subspace and let $a \in \mathcal{H}$. Then the \emph{translate} of $S$ by $a$, which is defined by \begin{align*} S + a = \{ x + a ~|~ x \in S \}, \end{align*} is another affine subspace. \end{fact} \begin{definition} An affine subspace $S$ is said to be \emph{parallel} to an affine subspace $ M $ if $S = M +a $ for some $ a \in \mathcal{H}$. \end{definition} \begin{fact} {\rm \cite[Theorem 1.2]{R2015}} \label{fac:AffinePointLinearSpace} Every affine subspace $S$ is parallel to a unique linear subspace $L$, which is given by \begin{align*} (\forall y \in S) \quad L = S - y = S - S . \end{align*} \end{fact} \begin{definition} \cite[page 4]{R2015} The \emph{dimension} of an affine subspace is defined to be the dimension of the linear subspace parallel to it. \end{definition} \begin{fact} {\rm \cite[page 7]{R2015}} \label{fac:AffSubsExpre} Let $x_{1}, \ldots, x_{m} \in \mathcal{H}$. Then \begin{align*} \ensuremath{\operatorname{aff}}\{x_{1}, \ldots, x_{m}\} =\Big\{\lambda_{1}x_{1}+\cdots +\lambda_{m}x_{m} ~\Big|~\lambda_{1},\ldots,\lambda_{m} \in \mathbb{R} ~\text{and}~\sum^{m}_{i=1} \lambda_{i}=1 \Big\}. \end{align*} \end{fact} Some algebraic calculations and \cref{fac:AffSubsExpre} yield the next result. 
\begin{lemma} \label{lem:AffineHull} Let $x_{1}, \ldots,x_{m} \in \mathcal{H}$. Then for every $i_{0} \in \{2, \ldots, m\}$, we have \begin{align*} \ensuremath{\operatorname{aff}}\{x_{1}, \ldots, x_{m}\} &~=x_{1} + \ensuremath{{\operatorname{span}}} \{x_{2}-x_{1}, \ldots, x_{m}-x_{1}\}\\ &~=x_{i_{0}}+\ensuremath{{\operatorname{span}}}\{x_{1}-x_{i_{0}},\ldots,x_{i_{0}-1}-x_{i_{0}}, x_{i_{0}+1}-x_{i_{0}},\ldots,x_{m}-x_{i_{0}}\}. \end{align*} \end{lemma} \begin{definition} {\rm \cite[page~6]{R2015}} Let $x_{0}, x_{1}, \ldots, x_{m} \in \mathcal{H}$. The $m+1$ vectors $x_{0}, x_{1}, \ldots, x_{m}$ are said to be affinely independent if $\ensuremath{\operatorname{aff}} \{x_{0}, x_{1}, \ldots, x_{m}\}$ is $m$-dimensional. \end{definition} \begin{fact} {\rm \cite[page 7]{R2015}} \label{fac:AffinIndeLineInd} Let $x_{1}, x_{2}, \ldots, x_{m} \in \mathcal{H}$. Then $x_{1}, x_{2}, \ldots,x_{m}$ are affinely independent if and only if $ x_{2}-x_{1}, \ldots, x_{m}-x_{1}$ are linearly independent. \end{fact} \begin{lemma} \label{lem:UniqExpreAffIndp} Let $x_{1}, \ldots, x_{m}$ be affinely independent vectors in $\mathcal{H}$. Let $p \in \ensuremath{\operatorname{aff}} \{x_{1}, \ldots, x_{m}\}$. Then there exists a unique vector $\begin{pmatrix} \alpha_{1}& \cdots& \alpha_{m}\end{pmatrix}^{\intercal} \in \mathbb{R}^{m}$ with $\sum^{m}_{i=1} \alpha_{i} =1$ such that \begin{align*} p= \alpha_{1} x_{1} + \cdots + \alpha_{m} x_{m}. \end{align*} \end{lemma} The following lemma will be useful later. \begin{lemma}\label{lem:AffineIndep:OpenSet} Let \begin{align*} \mathcal{O}=\Big\{ (x_{1}, \ldots, x_{m-1}, x_{m}) \in \mathcal{H}^{m} ~\Big|~ x_{1}, \ldots, x_{m-1}, x_{m}~\text{are affinely independent} \Big\}. \end{align*} Then $\mathcal{O}$ is open. 
\end{lemma} \begin{proof} Assume to the contrary that there exists $( x_{1}, \ldots, x_{m-1}, x_{m})\in \mathcal{O}$ such that for every $k \in \mathbb{N}\smallsetminus\{0\}$, there exists $( x^{(k)}_{1}, \ldots, x^{(k)}_{m-1}, x^{(k)}_{m}) \in B\Big(( x_{1}, \ldots, x_{m-1}, x_{m}); \frac{1}{k} \Big)$ such that $ x^{(k)}_{1}, \ldots, x^{(k)}_{m-1}, x^{(k)}_{m} $ are affinely dependent. By \cref{fac:AffinIndeLineInd}, for every $k$, there exists $b^{(k)}=(\beta^{(k)}_{1}, \beta^{(k)}_{2},\ldots,\beta^{(k)}_{m-1}) \in \mathbb{R}^{m-1} \smallsetminus \{0\}$ such that \begin{align}\label{eq:linindep:mitem} \beta^{(k)}_{1}(x^{(k)}_{2}-x^{(k)}_{1})+\cdots+\beta^{(k)}_{m-1}(x^{(k)}_{m}-x^{(k)}_{1})=0. \end{align} Without loss of generality, we assume \begin{align}\label{eq:lem:lineindp:1} (\forall k \in \mathbb{N}\smallsetminus\{0\} ) \quad \norm{b^{(k)}}^{2}=\sum^{m-1}_{i=1}(\beta^{(k)}_{i})^{2}= 1, \end{align} and, by the compactness of the unit sphere in $\mathbb{R}^{m-1}$ and after passing to a subsequence, that there exists $\bar{b}=(\beta_{1}, \ldots, \beta_{m-1}) \in \mathbb{R}^{m-1}$ such that \begin{align*} \lim_{k \rightarrow \infty} ( \beta^{(k)}_{1}, \ldots, \beta^{(k)}_{m-1} ) =\lim_{k \rightarrow \infty}b^{(k)} =\bar{b}=(\beta_{1}, \ldots, \beta_{m-1}). \end{align*} Letting $k$ go to infinity in \cref{eq:lem:lineindp:1}, we get \begin{align*} \norm{\bar{b}}^{2} = \beta^{2}_{1}+ \cdots+\beta^{2}_{m-1}=1, \end{align*} which yields that $(\beta_{1}, \ldots, \beta_{m-1})\neq 0$. Letting $k$ go to infinity in \cref{eq:linindep:mitem}, we obtain \begin{align*} \beta_{1}(x_{2}-x_{1})+\cdots+\beta_{m-1}(x_{m}-x_{1})=0, \end{align*} which means that $x_{2}-x_{1}, \ldots, x_{m}-x_{1}$ are linearly dependent. By \cref{fac:AffinIndeLineInd}, this contradicts the assumption that $x_{1}, \ldots, x_{m-1}, x_{m}$ are affinely independent. Hence $\mathcal{O}$ is indeed an open set. 
\end{proof} \begin{fact} {\rm \cite[Theorem~9.26]{D2010}} \label{fact:BestAppAffSubspace} Let $V$ be an affine subset of $\mathcal{H}$, say $V=M+v$, where $M$ is a linear subspace of $\mathcal{H}$ and $v\in V$. Let $x \in \mathcal{H}$ and $y_{0} \in \mathcal{H}$. Then the following statements are equivalent: \begin{enumerate} \item \label{fact:BestAppAffSubspace:i} $y_{0}=P_{V}(x)$. \item \label{fact:BestAppAffSubspace:ii} $x-y_{0} \in M^{\perp}$. \item \label{fact:BestAppAffSubspace:iii} $\innp{x-y_{0}, y-v}=0 ~~~~\mbox{for all}~ y \in V$. \end{enumerate} Moreover, \begin{align*} P_{V}(x+e)=P_{V}(x) ~~~~~~~~\mbox{for all} ~x \in \mathcal{H}, e \in M^{\perp}. \end{align*} \end{fact} \subsection{The Gram matrix} \begin{definition} \label{defn:GramMatrix} Let $ a_{1}, \ldots, a_{m} \in \mathcal{H}$. Then \begin{align*} G(a_{1}, \ldots, a_{m})= \begin{pmatrix} \norm{a_{1}}^{2} &\innp{a_{1},a_{2}} & \cdots & \innp{a_{1}, a_{m}} \\ \innp{a_{2},a_{1}} & \norm{a_{2}}^{2} & \cdots & \innp{a_{2},a_{m}} \\ \vdots & \vdots & ~~& \vdots \\ \innp{a_{m},a_{1}} & \innp{a_{m},a_{2}} & \cdots & \norm{a_{m}}^{2} \\ \end{pmatrix} \end{align*} is called the \emph{Gram matrix} of $a_{1}, \ldots, a_{m}$. \end{definition} \begin{fact} {\rm \cite[Theorem~6.5-1]{K1978}} \label{fact:Gram:inver} Let $ a_{1}, \ldots, a_{m} \in \mathcal{H}$. Then the Gram matrix $G(a_{1}, \ldots, a_{m})$ is invertible if and only if the vectors $a_{1}, \ldots, a_{m}$ are linearly independent. \end{fact} \begin{remark} \label{note:AffinIndpDetermNonZero} Let $x,y,z$ be affinely independent vectors in $\mathbb{R}^{3}$. Set $a=y-x$ and $b=z-x$. Then, by \cref{fac:AffinIndeLineInd} and \cref{fact:Gram:inver}, $\norm{a}^{2} \norm{b}^{2}-\innp{a,b}^{2} \neq 0$ and $\norm{a} \neq 0$, $\norm{b} \neq 0$. \end{remark} \begin{proposition} \label{prop:GramMatrSymm} Let $x_{1}, \ldots, x_{m} \in \mathcal{H}$. 
Then for every $k \in \{2, \ldots, m\}$, we have \begin{align*} \det \Big(G(x_{2}-x_{1}, \ldots, x_{m}-x_{1}) \Big) = \det \Big(G(x_{1}-x_{k}, \ldots,x_{k-1}-x_{k}, x_{k+1} -x_{k},\ldots, x_{m}-x_{k}) \Big) \end{align*} \end{proposition} \begin{proof} By \cref{defn:GramMatrix}, $G(x_{1}-x_{k}, \ldots,x_{k-1}-x_{k}, x_{k+1} -x_{k},\ldots, x_{m}-x_{k})$ is \begin{align}\label{eq:prop:Gram:x1k} \begin{pmatrix} \innp{x_{1}-x_{k},x_{1}-x_{k}}& \cdots & \innp{x_{1}-x_{k},x_{k-1}-x_{k}} & \innp{x_{1}-x_{k},x_{k+1}-x_{k}} &\cdots &\innp{x_{1}-x_{k},x_{m}-x_{k}} \\ \innp{x_{2}-x_{k},x_{1}-x_{k}}& \cdots & \innp{x_{2}-x_{k},x_{k-1}-x_{k}} & \innp{x_{2}-x_{k},x_{k+1}-x_{k}} &\cdots &\innp{x_{2}-x_{k},x_{m}-x_{k}} \\ \vdots & \cdots & \vdots & \vdots &\cdots &\vdots \\ \innp{x_{k-1}-x_{k},x_{1}-x_{k}}& \cdots & \innp{x_{k-1}-x_{k},x_{k-1}-x_{k}} & \innp{x_{k-1}-x_{k},x_{k+1}-x_{k}} &\cdots &\innp{x_{k-1}-x_{k},x_{m}-x_{k}} \\ \innp{x_{k+1}-x_{k},x_{1}-x_{k}}& \cdots & \innp{x_{k+1}-x_{k},x_{k-1}-x_{k}} & \innp{x_{k+1}-x_{k},x_{k+1}-x_{k}} &\cdots &\innp{x_{k+1}-x_{k},x_{m}-x_{k}} \\ \vdots & \cdots & \vdots & \vdots &\cdots &\vdots \\ \innp{x_{m}-x_{k},x_{1}-x_{k}}& \cdots & \innp{x_{m}-x_{k},x_{k-1}-x_{k}} & \innp{x_{m}-x_{k},x_{k+1}-x_{k}} &\cdots &\innp{x_{m}-x_{k},x_{m}-x_{k}} \\ \end{pmatrix}. \end{align} In \cref{eq:prop:Gram:x1k}, we perform the following elementary row and column operations: For every $i \in \{2,3,\ldots, m-1\}$, subtract the $1^{\text{st}}$ row from the $i^{\text{th}}$ row, and then subtract the $1^{\text{st}}$ column from the $i^\text{th}$ column. Then multiply $1^{\text{st}}$ row and $1^{\text{st}}$ column by $-1$, respectively. 
It follows that the determinant of \cref{eq:prop:Gram:x1k} equals the determinant of \begin{align}\label{eq:prop:Gram:xk1} \begin{pmatrix} \innp{x_{k}-x_{1},x_{k}-x_{1}}& \cdots & \innp{x_{k}-x_{1},x_{k-1}-x_{1}} & \innp{x_{k}-x_{1},x_{k+1}-x_{1}} &\cdots &\innp{x_{k}-x_{1},x_{m}-x_{1}} \\ \innp{x_{2}-x_{1},x_{k}-x_{1}}& \cdots & \innp{x_{2}-x_{1},x_{k-1}-x_{1}} & \innp{x_{2}-x_{1},x_{k+1}-x_{1}} &\cdots &\innp{x_{2}-x_{1},x_{m}-x_{1}} \\ \vdots & \cdots & \vdots & \vdots &\cdots &\vdots \\ \innp{x_{k-1}-x_{1},x_{k}-x_{1}}& \cdots & \innp{x_{k-1}-x_{1},x_{k-1}-x_{1}} & \innp{x_{k-1}-x_{1},x_{k+1}-x_{1}} &\cdots &\innp{x_{k-1}-x_{1},x_{m}-x_{1}} \\ \innp{x_{k+1}-x_{1},x_{k}-x_{1}}& \cdots & \innp{x_{k+1}-x_{1},x_{k-1}-x_{1}} & \innp{x_{k+1}-x_{1},x_{k+1}-x_{1}} &\cdots &\innp{x_{k+1}-x_{1},x_{m}-x_{1}} \\ \vdots & \cdots & \vdots & \vdots &\cdots &\vdots \\ \innp{x_{m}-x_{1},x_{k}-x_{1}}& \cdots & \innp{x_{m}-x_{1},x_{k-1}-x_{1}} & \innp{x_{m}-x_{1},x_{k+1}-x_{1}} &\cdots &\innp{x_{m}-x_{1},x_{m}-x_{1}} \\ \end{pmatrix}. \end{align} In \cref{eq:prop:Gram:xk1}, we interchange the $i^{\text{th}}$ row and the $(i+1)^{\text{th}}$ row successively for $i=1, \ldots,k-2$. In addition, we interchange the $j^{\text{th}}$ column and the $(j+1)^{\text{th}}$ column successively for $j=1, \ldots,k-2$. Then the resulting matrix is just $G(x_{2}-x_{1}, \ldots, x_{m}-x_{1})$. Because the number of interchanges we performed is even, the determinant is unchanged. Therefore, we obtain \begin{align*} \det \Big(G(x_{1}-x_{k}, \ldots,x_{k-1}-x_{k}, x_{k+1} -x_{k},\ldots, x_{m}-x_{k}) \Big) = \det \Big(G(x_{2}-x_{1}, \ldots, x_{m}-x_{1}) \Big) \end{align*} as claimed. \end{proof} \begin{fact} {\rm \cite[page 16]{T2008}} \label{fact:MatrixDeterInverse} Let $S = \{ A \in \mathbb{R}^{n\times n} ~|~A ~\text{is invertible}~ \}$. Then the mapping $S \to S : A \mapsto A^{-1}$ is continuous. 
\end{fact} \begin{fact}[Cramer's rule] {\rm \cite[page 476]{MC2000}} \label{fact:CramerRule} If $A \in \mathbb{R}^{n\times n}$ is invertible and $Ax=b$, then for every $i \in \{1, \ldots,n\}$, we have \begin{align*} x_{i} = \frac{\det(A_{i})}{\det(A)}, \end{align*} where $A_{i} =[ A_{*,1}|\cdots|A_{*,i-1}|b|A_{*,i+1}|\cdots|A_{*,n}]$. That is, $A_{i}$ is identical to $A$ except that column $A_{*,i}$ has been replaced by $b$. \end{fact} \begin{corollary} \label{cor:GramInver:Continu} Let $\{x_{1}, \ldots, x_{m} \} \subseteq \mathcal{H}$ with $x_{1}, \ldots, x_{m} $ being affinely independent. Let $\big( (x^{(k)}_{1}, \ldots,x^{(k)}_{m}) \big)_{k \in \mathbb{N}} \subseteq \mathcal{H}^{m}$ such that \begin{align*} \lim_{k \rightarrow \infty} (x^{(k)}_{1}, \ldots,x^{(k)}_{m}) = (x_{1}, \ldots, x_{m}). \end{align*} Then \begin{align*} G(x_{2}-x_{1}, \ldots, x_{m}-x_{1})^{-1}= \lim_{k \rightarrow \infty}G(x^{(k)}_{2}-x_{1}^{(k)}, \ldots,x^{(k)}_{m}-x_{1}^{(k)})^{-1}. \end{align*} \end{corollary} \begin{proof} By \cref{lem:AffineIndep:OpenSet}, we know that there exists $K \in \mathbb{N}$ such that \begin{align*} (\forall k \geq K) \quad x^{(k)}_{1}, \ldots, x^{(k)}_{m} ~\text{are affinely independent}. \end{align*} Using \cref{fac:AffinIndeLineInd}, we know that \begin{align*} x_{2}-x_{1}, \ldots, x_{m}-x_{1} ~\text{are linearly independent}, \end{align*} and \begin{align*} (\forall k \geq K) \quad x^{(k)}_{2}-x^{(k)}_{1}, \ldots, x^{(k)}_{m}-x^{(k)}_{1} ~\text{are linearly independent}. \end{align*} Hence \cref{fact:Gram:inver} tells us that $G(x_{2}-x_{1}, \ldots, x_{m}-x_{1})^{-1}$ and $(\forall k \geq K)$ $G(x^{(k)}_{2}-x_{1}^{(k)}, \ldots,x^{(k)}_{m}-x_{1}^{(k)})^{-1}$ exist. Therefore, the required result follows directly from \cref{fact:MatrixDeterInverse}. \end{proof} \section{The circumcenter} \label{sec:DefinCircOper} Before we are able to define the main actor in this paper, the circumcenter operator, we shall require a few more results. 
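As a numerical aside, the Gram-matrix facts above are easy to exercise. The following Python sketch (illustrative only; the \texttt{gram} helper built on NumPy is our own, not part of the paper) checks \cref{fact:Gram:inver} on an independent and a dependent family, and verifies Cramer's rule (\cref{fact:CramerRule}) against a direct linear solve:

```python
import numpy as np

def gram(vectors):
    """Gram matrix G(a_1,...,a_m) with entries <a_i, a_j>."""
    A = np.asarray(vectors, dtype=float)
    return A @ A.T

rng = np.random.default_rng(1)

# Linearly independent vectors give an invertible Gram matrix ...
a = rng.standard_normal((3, 5))        # 3 generic vectors in R^5
G = gram(a)
assert abs(np.linalg.det(G)) > 1e-9

# ... while a linearly dependent family gives a singular one.
b = np.vstack([a[:2], a[0] + a[1]])
assert abs(np.linalg.det(gram(b))) < 1e-9

# Cramer's rule reproduces the solution of G x = c column by column.
c = rng.standard_normal(3)
x = np.linalg.solve(G, c)
for i in range(3):
    Gi = G.copy()
    Gi[:, i] = c                       # replace column i of G by c
    assert np.isclose(np.linalg.det(Gi) / np.linalg.det(G), x[i])
```

This Gram-matrix machinery, combined with Cramer's rule, is exactly what drives the explicit circumcenter formulae of the next sections.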
\begin{proposition} \label{prop:NormEqInnNorm} Let $p,x,y \in \mathcal{H}$, and set $U=\ensuremath{\operatorname{aff}}\{x,y\}$. Then the following are equivalent: \begin{enumerate} \item \label{prop:NormEqInnNorm:Norm} $\norm{p-x} =\norm{p-y}$. \item \label{prop:NormEqInnNorm:NorInnp} $\innp{p-x,y-x} =\frac{1}{2} \norm{y-x}^{2}$. \item \label{prop:NormEqInnNorm:ProjeEqu}$P_{U}(p)=\frac{x+y}{2}$. \item \label{prop:NormEqInnNorm:ProjeBelo} $p \in \frac{x+y}{2} +(U-U)^{\perp}$. \end{enumerate} \end{proposition} \begin{proof} It is clear that \begin{align*} \norm{p-x} =\norm{p-y} &\Longleftrightarrow \norm{p-x}^{2} =\norm{(p-x)+(x-y)}^{2}\\ &\Longleftrightarrow \norm{p-x}^{2} =\norm{p-x}^{2} + 2\innp{p-x,x-y} +\norm{x-y}^{2}\\ &\Longleftrightarrow \innp{p-x,y-x} =\frac{1}{2} \norm{y-x}^{2}. \end{align*} Hence we get \cref{prop:NormEqInnNorm:Norm} $\Leftrightarrow$ \cref{prop:NormEqInnNorm:NorInnp}. Notice $\frac{x+y}{2} \in U$. Now \begin{align*} \frac{x+y}{2} = P_{U}(p) \Longleftrightarrow & (\forall u \in U) \quad \innp{p- \frac{x+y}{2}, u -x}=0 \quad (\text{by \cref{fact:BestAppAffSubspace:i}} \Leftrightarrow \text{\cref{fact:BestAppAffSubspace:iii} in \cref{fact:BestAppAffSubspace}})\\ \Longleftrightarrow & (\forall \alpha \in \mathbb{R}) \quad \innp{p- \frac{x+y}{2}, (x+ \alpha(y-x)) -x}=0 \quad (\text{by}~U=x +\ensuremath{{\operatorname{span}}}\{y-x\})\\ \Longleftrightarrow & \innp{p- \frac{x+y}{2}, y-x}=0 \\ \Longleftrightarrow & \innp{p-(x -\frac{x-y}{2}), y-x}=0 \\ \Longleftrightarrow & \innp{p-x, y-x} + \innp{\frac{x-y}{2},y-x}=0 \\ \Longleftrightarrow & \innp{p-x,y-x} =\frac{1}{2} \norm{y-x}^{2}, \end{align*} which imply that \cref{prop:NormEqInnNorm:ProjeEqu} $\Leftrightarrow$ \cref{prop:NormEqInnNorm:NorInnp}. 
On the other hand, by \cref{fact:BestAppAffSubspace:i} $\Leftrightarrow$ \cref{fact:BestAppAffSubspace:ii} in \cref{fact:BestAppAffSubspace} and by \cref{fac:AffinePointLinearSpace}, \begin{align*} \frac{x+y}{2} = P_{U}(p) \Longleftrightarrow & p- \frac{x+y}{2} \in (U-U)^{\perp}\\ \Longleftrightarrow & p \in \frac{x+y}{2} + (U-U)^{\perp}, \end{align*} which yield that \cref{prop:NormEqInnNorm:ProjeEqu} $\Leftrightarrow$ \cref{prop:NormEqInnNorm:ProjeBelo}. In conclusion, we obtain \cref{prop:NormEqInnNorm:Norm} $\Leftrightarrow$ \cref{prop:NormEqInnNorm:NorInnp} $\Leftrightarrow$ \cref{prop:NormEqInnNorm:ProjeEqu} $\Leftrightarrow$ \cref{prop:NormEqInnNorm:ProjeBelo}. \end{proof} \begin{corollary}\label{cor:LongNormEqInnNorm} Let $x_{1}, \ldots, x_{m}$ be in $\mathcal{H}$. Let $p \in \mathcal{H}$. Then \begin{align*} \norm{p-x_{1}}=\cdots =\norm{p-x_{m-1}}=\norm{p-x_{m}} \Longleftrightarrow \begin{cases} \innp{p-x_{1},x_{2}-x_{1}} = \frac{1}{2} \norm{x_{2}-x_{1}}^{2} \\ ~~~~~~~~~~\vdots \\ \innp{p-x_{1},x_{m-1}-x_{1}} = \frac{1}{2} \norm{x_{m-1}-x_{1}}^{2} \\ \innp{p-x_{1},x_{m}-x_{1}} = \frac{1}{2} \norm{x_{m}-x_{1}}^{2}. \end{cases} \end{align*} \end{corollary} \begin{proof} Set $I=\{2, \ldots, m-1,m\}$, and let $i \in I$. In \cref{prop:NormEqInnNorm}, substitute $x=x_{1}$ and $y =x_{i}$ and use \cref{prop:NormEqInnNorm:Norm} $\Leftrightarrow$ \cref{prop:NormEqInnNorm:NorInnp}. Then we get $\norm{p-x_{1}} =\norm{p-x_{i}} \Longleftrightarrow \innp{p-x_{1},x_{i}-x_{1}} = \frac{1}{2} \norm{x_{i}-x_{1}}^{2}$. Hence \begin{equation*} (\forall i \in I) \quad \norm{p-x_{1}} =\norm{p-x_{i}} \Longleftrightarrow \innp{p-x_{1},x_{i}-x_{1}} = \frac{1}{2} \norm{x_{i}-x_{1}}^{2}. \end{equation*} Therefore, the proof is complete. \end{proof} The next result plays an essential role in the definition of the circumcenter operator. 
\begin{proposition} \label{prop:unique:PExisUnique} Set $S=\{x_{1}, x_{2}, \ldots, x_{m}\}$, where $m \in \mathbb{N} \smallsetminus \{0\}$ and $x_{1}, x_{2}, \ldots, x_{m}$ are in $\mathcal{H}$. Then there is at most one point $p \in \mathcal{H}$ satisfying the following two conditions: \begin{enumerate} \item \label{prop:unique:PExisUnique:i} $p \in \ensuremath{\operatorname{aff}}(S)$, and \item \label{prop:unique:PExisUnique:ii} $\{ \norm{p-s}~|~s \in S\}$ is a singleton: $\norm{p-x_{1}} =\norm{p-x_{2}}=\cdots =\norm{p-x_{m}}$. \end{enumerate} \end{proposition} \begin{proof} Assume that both $p$ and $q$ satisfy conditions \cref{prop:unique:PExisUnique:i} and \cref{prop:unique:PExisUnique:ii}. By assumption and \cref{lem:AffineHull}, $p, q \in \ensuremath{\operatorname{aff}}(S)=\ensuremath{\operatorname{aff}} \{x_{1}, \ldots, x_{m} \} =x_{1} +\ensuremath{{\operatorname{span}}} \{ x_{2}-x_{1}, \ldots, x_{m}-x_{1} \} $. Thus $p-q \in \ensuremath{{\operatorname{span}}} \{ x_{2}-x_{1}, \ldots, x_{m}-x_{1} \} $, and so there exist $ \alpha_{1}, \ldots , \alpha_{m-1} \in \mathbb{R}$ such that $ p-q= \sum^{m-1}_{i=1} \alpha_{i}(x_{i+1}-x_{1})$. Using \cref{cor:LongNormEqInnNorm} and condition \cref{prop:unique:PExisUnique:ii}, which both $p$ and $q$ satisfy, we observe that for every $i \in I=\{2, \ldots,m\}$, we have \begin{align*} \innp{p-x_{1}, x_{i}-x_{1}}&=\frac{1}{2}\norm{x_{i}-x_{1}}^{2} \quad \text{and}\\ \innp{q-x_{1}, x_{i}-x_{1}}&=\frac{1}{2}\norm{x_{i}-x_{1}}^{2}. \end{align*} Subtracting the above equalities, we get \begin{align*} (\forall i \in I) \quad \innp{p-q, x_{i}-x_{1}}=0. \end{align*} Multiplying the equality for index $i+1$ by $\alpha_{i}$ and then summing up the resulting $m-1$ equalities, we get \begin{align*} 0= \Innp{p-q,\sum^{m-1}_{i=1} \alpha_{i}(x_{i+1}-x_{1})} = \innp{p-q,p-q}=\norm{p-q}^{2}. 
\end{align*} Hence $p=q$, which implies that if such point satisfying conditions \cref{prop:unique:PExisUnique:i} and \cref{prop:unique:PExisUnique:ii} exists, then it must be unique. \end{proof} We are now in a position to define the circumcenter operator. \begin{definition}[circumcenter] \label{defn:Circumcenter} The \emph{circumcenter operator} is \begin{empheq}[box=\mybluebox]{equation*} \CCO{} \colon \mathcal{P}(\mathcal{H}) \to \mathcal{H} \cup \{ \varnothing \} \colon S \mapsto \begin{cases} p, \quad ~\text{if}~p \in \ensuremath{\operatorname{aff}} (S)~\text{and}~\{\norm{p-s} ~|~s \in S \}~\text{is a singleton};\\ \varnothing, \quad~ \text{otherwise}. \end{cases} \end{empheq} The \emph{circumradius operator} is \begin{align*} \CRO{} \colon \mathcal{P}(\mathcal{H}) \to \mathbb{R}\colon S \mapsto \begin{cases} \norm{\CCO(S) -s }, &\text{if $\CCO(S) \in \mathcal{H}$ and $s\in S$};\\ +\infty, &\text{if $\CCO(S)=\varnothing$.} \end{cases} \end{align*} In particular, when $\CCO(S) \in \mathcal{H}$, that is, $\CCO(S) \neq \varnothing$, we say that the circumcenter of $S$ exists and we call $\CCO(S)$ the circumcenter of $S$ and $\CRO(S)$ the circumradius of $S$. \end{definition} Note that in the \cref{prop:unique:PExisUnique} above, we have already proved that for every $S \in \mathcal{P}(\mathcal{H})$, there is at most one point $p \in \ensuremath{\operatorname{aff}} (S) $ such that $\{\norm{p-s} ~|~s \in S \}$ is a singleton, so the notions are \emph{well defined}. Hence we obtain the following alternative expression of the circumcenter operator: \begin{remark} \label{rem:Circumcenter} Let $S \in \mathcal{P}(\mathcal{H})$. Then the $\CCO(S)$ is either $\varnothing$ or the \emph{unique} point $p \in \mathcal{H}$ such that \begin{enumerate} \item $p \in \ensuremath{\operatorname{aff}} (S)$ and, \item $\{\norm{p-s}~|~s \in S \}$ is a singleton. \end{enumerate} \end{remark} \begin{example} \label{exam:CircForTwoPoints} Let $x_1,x_2$ be in $\mathcal{H}$. 
Then \begin{align*} \CCO{\big(\{x_1,x_2\}\big)}=\frac{x_{1} + x_{2}}{2}. \end{align*} \end{example} \section{Explicit formulae for the circumcenter} \label{sec:ClosFormuCircOper} We continue to assume that \begin{empheq}[box=\mybluebox]{equation*} m \in \mathbb{N} \smallsetminus \{0\}, \quad x_1,\ldots,x_m \text{~are vectors in $\mathcal{H}$},\quad \text{and} \quad S = \{x_{1}, \ldots, x_{m}\}. \end{empheq} If $S$ is a singleton, say $S=\{x_{1}\}$, then, by \cref{defn:Circumcenter}, we clearly have $\CCO(S)=x_{1}$. So in this section, to deduce the formula of $\CCO(S)$, we always assume that \begin{empheq}[box=\mybluebox]{equation*} m \geq 2. \end{empheq} We are ready for an explicit formula for the circumcenter. \begin{theorem} \label{thm:unique:LinIndpPformula} Suppose that $x_{1}, \ldots, x_{m}$ are affinely independent. Then $\CCO(S) \in \mathcal{H}$, which means that $\CCO(S)$ is the unique point satisfying the following two conditions: \begin{enumerate} \item \label{prop:unique:i} $\CCO(S) \in \ensuremath{\operatorname{aff}} (S)$, and \item \label{prop:unique:ii} $\{ \norm{\CCO(S)-s}~|~s \in S \}$ is a singleton. \end{enumerate} Moreover, \begin{align*} \CCO(S)= x_{1}+\frac{1}{2}(x_{2}-x_{1},\ldots,x_{m}-x_{1}) G( x_{2}-x_{1},\ldots,x_{m}-x_{1})^{-1} \begin{pmatrix} \norm{x_{2}-x_{1}}^{2} \\ \vdots\\ \norm{x_{m}-x_{1}}^{2} \\ \end{pmatrix}, \end{align*} where $G( x_{2}-x_{1},\ldots,x_{m-1}-x_{1},x_{m}-x_{1})$ is the Gram matrix defined in \cref{defn:GramMatrix}: \begin{align*} &G( x_{2}-x_{1},\ldots, x_{m-1}-x_{1},x_{m}-x_{1})\\ =& \begin{pmatrix} \norm{x_{2}-x_{1}}^{2} &\innp{x_{2}-x_{1},x_{3}-x_{1}} & \cdots & \innp{x_{2}-x_{1}, x_{m}-x_{1}} \\ \vdots & \vdots & ~~& \vdots \\ \innp{x_{m-1}-x_{1},x_{2}-x_{1}} & \innp{x_{m-1}-x_{1}, x_{3}-x_{1}} & \cdots & \innp{x_{m-1}-x_{1},x_{m}-x_{1}} \\ \innp{x_{m}-x_{1},x_{2}-x_{1}} & \innp{x_{m}-x_{1},x_{3}-x_{1}} & \cdots & \norm{x_{m}-x_{1}}^{2} \\ \end{pmatrix}. 
\end{align*} \end{theorem} \begin{proof} By assumption and \cref{fac:AffinIndeLineInd}, we get that $x_{2}-x_{1}, \ldots, x_{m}-x_{1} $ are linearly independent. Then by \cref{fact:Gram:inver}, the Gram matrix $G(x_{2}-x_{1}, x_{3}-x_{1}, \ldots, x_{m}-x_{1})$ is invertible. Set \begin{align*} \begin{pmatrix} \alpha_{1} \\ \alpha_{2}\\ \vdots\\ \alpha_{m-1} \\ \end{pmatrix} = \frac{1}{2}G(x_{2}-x_{1}, x_{3}-x_{1}, \ldots, x_{m}-x_{1})^{-1} \begin{pmatrix} \norm{x_{2}-x_{1}}^{2} \\ \norm{x_{3}-x_{1}}^{2} \\ \vdots\\ \norm{x_{m}-x_{1}}^{2} \\ \end{pmatrix}, \end{align*} and \begin{align*} p = x_{1}+\alpha_{1}(x_{2}-x_{1})+\alpha_{2}(x_{3}-x_{1})+\cdots+\alpha_{m-1}(x_{m}-x_{1}). \end{align*} By the definition of $G(x_{2}-x_{1}, x_{3}-x_{1}, \ldots, x_{m}-x_{1})$ and by the definitions of $\begin{pmatrix} \alpha_{1} & \alpha_{2}& \cdots& \alpha_{m-1} \end{pmatrix}^{\intercal}$ and $p$, we obtain the equivalences \begin{align*} &G(x_{2}-x_{1}, x_{3}-x_{1}, \ldots, x_{m}-x_{1}) \begin{pmatrix} \alpha_{1} \\ \alpha_{2}\\ \vdots\\ \alpha_{m-1} \\ \end{pmatrix} = \frac{1}{2} \begin{pmatrix} \norm{x_{2}-x_{1}}^{2} \\ \norm{x_{3}-x_{1}}^{2} \\ \vdots\\ \norm{x_{m}-x_{1}}^{2} \\ \end{pmatrix} \\ \Longleftrightarrow & \begin{cases} \innp{ \alpha_{1}(x_{2}-x_{1})+ \cdots +\alpha_{m-1}(x_{m}-x_{1}), x_{2}-x_{1} } = \frac{1}{2} \norm{x_{2}-x_{1}}^{2} \\ ~~~~~~~~~~\vdots \\ \innp{\alpha_{1}(x_{2}-x_{1})+ \cdots +\alpha_{m-1}(x_{m}-x_{1}), x_{m}-x_{1} } = \frac{1}{2} \norm{x_{m}-x_{1}}^{2} \end{cases}\\ \Longleftrightarrow & \begin{cases} \innp{p-x_{1},x_{2}-x_{1}} = \frac{1}{2} \norm{x_{2}-x_{1}}^{2} \\ ~~~~~~~~~~\vdots \\ \innp{p-x_{1},x_{m}-x_{1}} = \frac{1}{2} \norm{x_{m}-x_{1}}^{2}. \end{cases} \end{align*} Hence by \cref{cor:LongNormEqInnNorm}, we know that $p$ satisfies condition \cref{prop:unique:ii}.
In addition, it is clear that $p=x_{1}+\alpha_{1}(x_{2}-x_{1})+\alpha_{2}(x_{3}-x_{1})+\cdots+\alpha_{m-1}(x_{m}-x_{1}) \in x_{1}+ \ensuremath{{\operatorname{span}}} \{x_{2} -x_{1}, \ldots, x_{m}-x_{1} \}= \ensuremath{\operatorname{aff}}(S)$, which is condition \cref{prop:unique:i}. Hence the point satisfying conditions \cref{prop:unique:i} and \cref{prop:unique:ii} exists. Moreover, by \cref{prop:unique:PExisUnique}, if the point exists, then it must be unique. \end{proof} \begin{lemma}\label{lem:unique:BasisPformula} Suppose that $\CCO(S) \in \mathcal{H}$, and let $K \subseteq S$ such that $\ensuremath{\operatorname{aff}}(K)=\ensuremath{\operatorname{aff}}(S)$. Then \begin{align*} \CCO(S)=\CCO(K). \end{align*} \end{lemma} \begin{proof} By assumption, $\CCO(S) \in \mathcal{H}$, that is: \begin{enumerate} \item \label{thm:unique:i} $\CCO(S) \in \ensuremath{\operatorname{aff}} (S)$, and \item \label{thm:unique:ii} $\{ \norm{\CCO(S)-s}~|~s \in S \}$ is a singleton. \end{enumerate} Because $K \subseteq S$, it follows from \cref{thm:unique:ii} that $\{ \norm{\CCO(S)-s}~|~s \in K \}$ is a singleton. Since $\ensuremath{\operatorname{aff}}(K)=\ensuremath{\operatorname{aff}}(S)$, by \cref{thm:unique:i}, the point $\CCO(S)$ satisfies \begin{enumerate}[label=(\Roman*)] \item $\CCO(S) \in \ensuremath{\operatorname{aff}} (K)$, and \item $\{ \norm{\CCO(S) - u}~|~ u \in K\}$ is a singleton. \end{enumerate} Replacing $S$ in \cref{prop:unique:PExisUnique} by $K$ and combining with \cref{defn:Circumcenter}, we know $\CCO(K) =\CCO(S)$. \end{proof} \begin{corollary} \label{cor:unique:BasisPformula} Suppose that $\CCO(S) \in \mathcal{H}$. Let $x_{i_{1}}, \ldots, x_{i_{t}}$ be elements of $S$ such that $x_{1}, x_{i_{1}}, \ldots, x_{i_{t}}$ are affinely independent, and set $K=\{x_{1}, x_{i_{1}}, \ldots, x_{i_{t}} \}$. Furthermore, assume that $ \ensuremath{\operatorname{aff}} (K) =\ensuremath{\operatorname{aff}}(S)$.
Then \begin{align*} \CCO(S)=\CCO(K) = x_{1}+\frac{1}{2}(x_{i_{1}}-x_{1},\ldots,x_{i_{t}}-x_{1}) G( x_{i_{1}}-x_{1},\ldots,x_{i_{t}}-x_{1})^{-1} \begin{pmatrix} \norm{x_{i_{1}}-x_{1}}^{2} \\ \vdots\\ \norm{x_{i_{t}}-x_{1}}^{2}\\ \end{pmatrix}. \end{align*} \end{corollary} \begin{proof} Since $x_{1}, x_{i_{1}}, \ldots, x_{i_{t}}$ are affinely independent, \cref{thm:unique:LinIndpPformula} implies that $\CCO(K) \neq \varnothing$ and \begin{align*} \CCO(K)= x_{1}+\frac{1}{2}(x_{i_{1}}-x_{1},\ldots,x_{i_{t}}-x_{1}) G( x_{i_{1}}-x_{1},\ldots,x_{i_{t}}-x_{1})^{-1} \begin{pmatrix} \norm{x_{i_{1}}-x_{1}}^{2} \\ \vdots\\ \norm{x_{i_{t}}-x_{1}}^{2}\\ \end{pmatrix}. \end{align*} Then the desired result follows from \cref{lem:unique:BasisPformula}. \end{proof} \begin{lemma}\label{lem:Basis:AffineHullEq} Let $x_{i_{1}}, \ldots, x_{i_{t}}$ be elements of $S$, and set $K =\{x_{1}, x_{i_{1}}, \ldots, x_{i_{t}} \}$. Then \begin{align*} & \ensuremath{\operatorname{aff}}(K) =\ensuremath{\operatorname{aff}}(S)~\mbox{ and}~ x_{1}, x_{i_{1}}, \ldots, x_{i_{t}}~\mbox{are affinely independent}\\ \Longleftrightarrow ~& x_{i_{1}}-x_{1}, \ldots, x_{i_{t}}-x_{1} ~\mbox{is a basis of}~ \ensuremath{{\operatorname{span}}}\{x_{2}-x_{1}, \ldots, x_{m}-x_{1} \}. \end{align*} \end{lemma} \begin{proof} Indeed, \begin{align*} &~~ x_{i_{1}}-x_{1}, \ldots, x_{i_{t}}-x_{1}~\mbox{is a basis of}~ \ensuremath{{\operatorname{span}}}\{x_{2}-x_{1}, \ldots, x_{m}-x_{1} \}\\ \Longleftrightarrow & \begin{cases} x_{i_{1}}-x_{1}, \ldots, x_{i_{t}}-x_{1} ~\mbox{are linearly independent, and}\\ \ensuremath{{\operatorname{span}}} \{x_{i_{1}}-x_{1}, \ldots, x_{i_{t}}-x_{1} \} = \ensuremath{{\operatorname{span}}}\{x_{2}-x_{1}, \ldots, x_{m}-x_{1} \} \end{cases}\\ \stackrel{\text{\cref{fac:AffinIndeLineInd}}}{\Longleftrightarrow} & \begin{cases}x_{1}, x_{i_{1}}, \ldots, x_{i_{t}}~\mbox{are affinely independent, and}\\ x_{1}+\ensuremath{{\operatorname{span}}} \{x_{i_{1}}-x_{1}, \ldots, x_{i_{t}}-x_{1} \} =
x_{1}+\ensuremath{{\operatorname{span}}}\{x_{2}-x_{1}, \ldots, x_{m}-x_{1} \} \end{cases}\\ \Longleftrightarrow & \begin{cases}x_{1}, x_{i_{1}}, \ldots, x_{i_{t}}~\mbox{are affinely independent, and}\\ \ensuremath{\operatorname{aff}}(K)= \ensuremath{\operatorname{aff}}(S), \end{cases} \end{align*} which completes the proof. \end{proof} \section{Additional formulae for the circumcenter} \label{sec:SymmFormuCCS} Upholding the assumptions of \cref{sec:ClosFormuCircOper}, we assume additionally that \begin{empheq}[box=\mybluebox]{equation*} x_{1}, \ldots, x_{m} ~\text{are affinely independent.} \end{empheq} By \cref{thm:unique:LinIndpPformula}, $\CCO(S) \in \mathcal{H}$. Let \begin{empheq}[box=\mybluebox]{equation*} k \in \{2, 3, \ldots, m\}~\text{be arbitrary but fixed}. \end{empheq} By \cref{thm:unique:LinIndpPformula} again, we know that \begin{subequations} \label{eq:FormuCCSAlpha} \begin{align} \CCO(S) &~= x_{1}+\alpha_{1}(x_{2}-x_{1})+\alpha_{2}(x_{3}-x_{1})+\cdots+\alpha_{m-1}(x_{m}-x_{1}) \label{eq:FormuSymmCCS:1:F}\\ &~= \big(1 - {\textstyle \sum^{m-1}_{i=1}} \alpha_{i}\big) x_{1} + \alpha_{1} x_{2} +\cdots+\alpha_{m-1} x_{m}, \label{eq:FormuSymmCCS:1} \end{align} \end{subequations} where \begin{align} \label{eq:FormCCS:Param:Alpha} \begin{pmatrix} \alpha_{1} \\ \alpha_{2}\\ \vdots\\ \alpha_{m-1} \\ \end{pmatrix} = \frac{1}{2}G(x_{2}-x_{1}, x_{3}-x_{1}, \ldots, x_{m}-x_{1})^{-1} \begin{pmatrix} \norm{x_{2}-x_{1}}^{2} \\ \norm{x_{3}-x_{1}}^{2} \\ \vdots\\ \norm{x_{m}-x_{1}}^{2} \\ \end{pmatrix}.
\end{align} By the symmetry of the positions of the points $x_{1}, \ldots, x_{k}, \ldots,x_{m}$ in $S$ in \cref{defn:Circumcenter} and by \cref{prop:unique:PExisUnique}, we also get \begin{subequations}\label{eq:FormuCCSBeta} \begin{align} \CCO(S) &~= x_{k} +\beta_{1}(x_{1}-x_{k})+\cdots+\beta_{k-1}(x_{k-1}-x_{k})+\beta_{k}(x_{k+1}-x_{k})+\cdots+\beta_{m-1}(x_{m}-x_{k}) \label{eq:FormuSymmCCS:k:F}\\ &~= \beta_{1} x_{1} +\cdots+\beta_{k-1}x_{k-1}+(1 - \sum^{m-1}_{i=1} \beta_{i}) x_{k} + \beta_{k}x_{k+1}+\cdots+\beta_{m-1} x_{m} , \label{eq:ForSyCCS:k} \end{align} \end{subequations} where \begin{align} \label{eq:FormCCS:Param:Beta} \begin{pmatrix} \beta_{1} \\ \beta_{2}\\ \vdots\\ \beta_{m-1} \\ \end{pmatrix} = \frac{1}{2}G(x_{1}-x_{k},\ldots,x_{k-1}-x_{k}, x_{k+1}-x_{k}, \ldots,x_{m}-x_{k})^{-1} \begin{pmatrix} \norm{x_{1}-x_{k}}^{2} \\ \vdots\\ \norm{x_{k-1}-x_{k}}^{2}\\ \norm{x_{k+1}-x_{k}}^{2}\\ \vdots\\ \norm{x_{m}-x_{k}}^{2} \\ \end{pmatrix}. \end{align} \begin{proposition} The following equalities hold: \begin{align} & \big(1 - \textstyle \sum^{m-1}_{i=1} \alpha_{i}\big) = \beta_{1}, ~~~(\text{coefficient of $x_{1}$}) \label{eq:SFCCS} \\ & \alpha_{k-1}=\big(1 - \textstyle \sum^{m-1}_{i=1} \beta_{i}\big), ~~~(\text{coefficient of $x_{k}$}) \label{eq:SymForCCS:k} \\ & (\forall i \in \{ 2, \ldots, k-1\}) \quad \alpha_{i-1} = \beta_{i} \quad \text{and} \quad (\forall j \in \{k, k+1, \ldots, m-1\}) \quad \alpha_{j} =\beta_{j}. \label{eq:SymForCCS:Remaining} \end{align} \end{proposition} \begin{proof} Recall that at the beginning of this section we assumed that $x_{1}, \ldots, x_{m}$ are affinely independent. Combining \cref{eq:FormuSymmCCS:1} and \cref{eq:ForSyCCS:k} with \cref{lem:UniqExpreAffIndp}, we get the required results. \end{proof} To simplify the statements, we use the following abbreviations.
\begin{align*} A = ~&G(x_{2}-x_{1}, \ldots, x_{k}-x_{1}, \ldots, x_{m}-x_{1}),\\ B = ~& G(x_{1}-x_{k}, \ldots, x_{k-1}-x_{k}, x_{k+1}-x_{k}, \ldots, x_{m}-x_{k}), \end{align*} and the determinant of matrix $A$ (by \cref{prop:GramMatrSymm}, it is also the determinant of matrix $B$) is denoted by: \begin{align*} \delta = \det(A). \end{align*} We denote the two column vectors $a$, $b$ respectively by: \begin{align*} &~a =\begin{pmatrix} \norm{x_{2}-x_{1}}^{2}& \cdots&\norm{x_{k}-x_{1}}^{2}&\cdots&\norm{x_{m}-x_{1}}^{2} \end{pmatrix}^{\intercal},\\ &~b =\begin{pmatrix} \norm{x_{1}-x_{k}}^{2}&\cdots&\norm{x_{k-1}-x_{k}}^{2}&\norm{x_{k+1}-x_{k}}^{2}&\cdots & \norm{x_{m}-x_{k}}^{2} \end{pmatrix}^{\intercal}. \end{align*} For every $M \in \mathbb{R}^{n \times n}$, and for every $j \in \{1, 2, \ldots, n\}$, \begin{empheq}[box=\mybluebox]{equation*} \text{we denote the $j^{\text{th}}$ column of the matrix $M$ as $M_{*,j}$}. \end{empheq} In turn, for every $i \in \{1, \ldots, m-1\}$, \begin{align*} A_{i} =[A_{*,1}|\cdots|A_{*,i-1}|a|A_{*,i+1}|\cdots|A_{*,m-1}], \end{align*} and \begin{align*} B_{i} =[B_{*,1}|\cdots|B_{*,i-1}|b|B_{*,i+1}|\cdots|B_{*,m-1}]. \end{align*} That is, $A_{i}$ is identical to $A$ except that column $A_{*,i}$ has been replaced by $a$ and $B_{i}$ is identical to $B$ except that column $B_{*,i}$ has been replaced by $b$. \begin{lemma} \label{lem:CCSFormulasAlBe} The following statements hold: \begin{enumerate} \item \label{lem:SymFormCCSFormulasAl} $\begin{pmatrix} \alpha_{1} \cdots \alpha_{m-1} \end{pmatrix}^{\intercal}$ defined in \cref{eq:FormCCS:Param:Alpha} is the unique solution of the nonsingular system $Ay=\frac{1}{2}a$ where $y$ is the unknown variable. In consequence, for every $i \in \{1, \ldots, m-1\}$, \begin{align*} \alpha_{i}=\frac{\det(A_{i})}{2\delta}. 
\end{align*} \item \label{lem:SymFormCCSFormulasBe} $\begin{pmatrix} \beta_{1} \cdots \beta_{m-1} \end{pmatrix}^{\intercal}$ defined in \cref{eq:FormCCS:Param:Beta} is the unique solution of the nonsingular system $By=\frac{1}{2}b$ where $y$ is the unknown variable. In consequence, for every $i \in \{1, \ldots, m-1\}$, \begin{align*} \beta_{i}=\frac{\det(B_{i})}{2\delta}. \end{align*} \end{enumerate} \end{lemma} \begin{proof} By assumption, $x_{1}, \ldots, x_{m}$ are affinely independent, and by \cref{prop:GramMatrSymm}, we know $\det(B)=\det(A)=\delta \neq 0$. \cref{lem:SymFormCCSFormulasAl}: By definition of $\begin{pmatrix} \alpha_{1} \cdots \alpha_{m-1} \end{pmatrix}^{\intercal}$, \begin{align*} \begin{pmatrix} \alpha_{1} \cdots \alpha_{m-1} \end{pmatrix}^{\intercal} = \frac{1}{2}A^{-1}a. \end{align*} Clearly, it is the unique solution of the nonsingular system $Ay=\frac{1}{2}a$. Hence the desired result follows directly from \cref{fact:CramerRule}, Cramer's rule. \cref{lem:SymFormCCSFormulasBe}: Using the same method as in the proof of \cref{lem:SymFormCCSFormulasAl}, we can prove \cref{lem:SymFormCCSFormulasBe}. \end{proof} Using \cref{thm:unique:LinIndpPformula}, \cref{lem:CCSFormulasAlBe} and the equalities \cref{eq:SFCCS}, \cref{eq:SymForCCS:k} and \cref{eq:SymForCCS:Remaining}, we readily obtain the following result. \begin{corollary}\label{cor:DifRepreCCS} Suppose that $x_{1}, \ldots, x_{m}$ are affinely independent. Then \begin{align*} \CCO(S) =\big(1- \textstyle \sum^{m-1}_{i=1} \alpha_{i}\big) x_{1} + \alpha_{1} x_{2} + \cdots+\alpha_{m-1} x_{m}, \end{align*} where $(\forall i \in \{1, \ldots, m-1\})$ $\alpha_{i}= \frac{1}{2\delta} \det(A_{i})$.
Moreover, \begin{align*} 1- \sum^{m-1}_{i=1} \alpha_{i} =\frac{1}{2\delta} \det(B_{1}),~~~\alpha_{k-1}= 1- \sum^{m-1}_{i=1} \frac{1}{2\delta} \det(B_{i}), \end{align*} \begin{align*} (\forall i \in \{2, \ldots, k-1\}) \quad \alpha_{i -1}=\frac{1}{2\delta} \det(B_{i}) \quad \text{and} \quad (\forall j \in \{k, k+1, \ldots, m-1\}) \quad \alpha_{j} =\frac{1}{2\delta} \det(B_{j}). \end{align*} \end{corollary} \section{Basic properties of the circumcenter} \label{sec:BasiPropCircOper} In this section we collect some fundamental properties of the circumcenter operator. Recall that \begin{empheq}[box=\mybluebox]{equation*} m \in \mathbb{N} \smallsetminus \{0\}, \quad x_1,\ldots,x_m \text{~are vectors in $\mathcal{H}$},\quad \text{and} \quad S = \{x_{1}, \ldots, x_{m}\}. \end{empheq} \begin{proposition}[scalar multiples] \label{prop:CircumHomoge} Let $\lambda \in \mathbb{R} \smallsetminus \{0\}$. Then $\CCO(\lambda S)=\lambda \CCO(S)$. \end{proposition} \begin{proof} Let $p \in \mathcal{H}$. By \cref{defn:Circumcenter}, \begin{align*} p = \CCO(S) & \Longleftrightarrow \begin{cases}p \in \ensuremath{\operatorname{aff}}(S)\\ \{\norm{p-s}~|~s \in S \}~\text{is a singleton}~ \end{cases}\\ & \Longleftrightarrow \begin{cases}\lambda p \in \ensuremath{\operatorname{aff}}(\lambda S)\\ \{\norm{\lambda p-\lambda s}~|~\lambda s \in \lambda S \}~\text{is a singleton}~ \end{cases}\\ & \Longleftrightarrow \lambda p = \CCO(\lambda S), \end{align*} and the result follows. \end{proof} The next example illustrates why we had to exclude the case $\lambda =0$ in \cref{prop:CircumHomoge}. \begin{example} Suppose that $\mathcal{H} =\mathbb{R}$ and that $S = \{0, -1, 1\}$. Then \begin{align*} \CCO(0\cdot S) = 0 \neq \varnothing = 0\cdot \CCO(S). \end{align*} \end{example} \begin{proposition}[translations] \label{prop:CircumSubaddi} Let $y \in \mathcal{H}$. Then $\CCO(S+y)=\CCO(S)+y$. \end{proposition} \begin{proof} Let $p \in \mathcal{H}$.
By \cref{lem:AffineHull}, \begin{align*} p \in \ensuremath{\operatorname{aff}} \{x_{1}, \ldots, x_{m}\} & \Longleftrightarrow (\exists~ \lambda_{1}, \ldots,\lambda_{m} \in \mathbb{R} ~\text{with}~\sum^{m}_{i=1} \lambda_{i}=1) \quad p=\sum^{m}_{i=1} \lambda_{i} x_{i}\\ & \Longleftrightarrow (\exists~ \lambda_{1}, \ldots,\lambda_{m} \in \mathbb{R} ~\text{with}~\sum^{m}_{i=1} \lambda_{i}=1) \quad p+y=\sum^{m}_{i=1} \lambda_{i} (x_{i}+y) \\ & \Longleftrightarrow p+y \in \ensuremath{\operatorname{aff}} \{x_{1}+y, \ldots, x_{m}+y\}, \end{align*} that is \begin{align} \label{eq:prop:CircumSubaddi} p \in \ensuremath{\operatorname{aff}} (S) \Longleftrightarrow p+y \in \ensuremath{\operatorname{aff}} (S +y). \end{align} By \cref{eq:prop:CircumSubaddi} and \cref{rem:Circumcenter}, we have \begin{align*} p = \CCO(S) \in \mathcal{H} & \Longleftrightarrow \begin{cases}p \in \ensuremath{\operatorname{aff}}(S)\\ \{\norm{p-s}~|~s \in S \}~\text{is a singleton}~ \end{cases}\\ & \Longleftrightarrow \begin{cases}p+y \in \ensuremath{\operatorname{aff}}(S+y)\\ \{\norm{(p+y)-(s+y)}~|~s+y \in S+y \}~\text{is a singleton}~ \end{cases}\\ & \Longleftrightarrow p+y = \CCO(S+y) \in \mathcal{H}. \end{align*} Moreover, because $\varnothing =\varnothing +y$, the proof is complete. \end{proof} \section{Circumcenters of sequences of sets} \label{sec:LimiCircOperSeqSet} We uphold the assumption that \begin{empheq}[box=\mybluebox]{equation*} m \in \mathbb{N} \smallsetminus \{0\}, \quad x_1,\ldots,x_m \text{~are vectors in $\mathcal{H}$},\quad \text{and} \quad S = \{x_{1}, \ldots, x_{m}\}. \end{empheq} In this section, we explore the convergence of the circumcenter operator over a sequence of sets. \begin{theorem} \label{thm:CCO:LimitCont} Suppose that $\CCO(S) \in \mathcal{H}$. 
Then the following hold: \begin{enumerate} \item \label{prop:CCO:LimitCont:Linear} Set $t=\dim \Big( \ensuremath{{\operatorname{span}}}\{x_{2}-x_{1}, \ldots, x_{m}-x_{1}\} \Big)$, and let $\widetilde{S}=\{x_{1}, x_{i_{1}}, \ldots, x_{i_{t}}\} \subseteq S$ be such that $x_{i_{1}} -x_{1}, \ldots, x_{i_{t}}-x_{1}$ is a basis of $\ensuremath{{\operatorname{span}}}\{x_{2}-x_{1}, \ldots, x_{m}-x_{1}\}$. Furthermore, let $\Big( (x^{(k)}_{1}, x^{(k)}_{i_{1}}, \ldots, x^{(k)}_{i_{t}}) \Big)_{k \geq 1}$ $\subseteq$ $\mathcal{H}^{t+1}$ with $\lim_{k\rightarrow \infty}( x^{(k)}_{1}, x^{(k)}_{i_{1}}, \ldots, x^{(k)}_{i_{t}})=(x_{1}, x_{i_{1}}, \ldots, x_{i_{t}})$, and set $(\forall k \geq 1)$ $\widetilde{S}^{(k)} = \{x^{(k)}_{1}, x^{(k)}_{i_{1}}, \ldots, x^{(k)}_{i_{t}}\}$. Then there exists $N \in \mathbb{N}$ such that for every $k \geq N$, $\CCO(\widetilde{S}^{(k)}) \in \mathcal{H}$ and \begin{align*} \lim_{k \rightarrow \infty} \CCO(\widetilde{S}^{(k)})= \CCO(\widetilde{S})=\CCO(S). \end{align*} \item \label{prop:CCO:LimitCont:LinearIndep} Suppose that $ x_{1}, \ldots, x_{m-1}, x_{m}$ are affinely independent, and let $ \Big( (x^{(k)}_{1}, \ldots, x^{(k)}_{m-1}, x^{(k)}_{m}) \Big)_{k \geq 1}$ $\subseteq$ $\mathcal{H}^{m} $ satisfy $\lim_{k\rightarrow \infty}( x^{(k)}_{1}, \ldots,x^{(k)}_{m-1}, x^{(k)}_{m})=(x_{1}, \ldots, x_{m-1},x_{m})$. Set $(\forall k \geq 1)$ $S^{(k)}=\{x^{(k)}_{1}, \ldots, x^{(k)}_{m-1}, x^{(k)}_{m}\}$. Then \begin{align*} \lim_{k \rightarrow \infty} \CCO( S^{(k)} )= \CCO( S ). \end{align*} \end{enumerate} \end{theorem} \begin{proof} \cref{prop:CCO:LimitCont:Linear}: Let $l$ be the cardinality of the set $S$. Assume first that $l=1$. Then $t=0$, and $\widetilde{S}=\{x_{1}\}$. Let $ (x^{(k)}_{1} )_{k \geq 1} \subseteq \mathcal{H}$ satisfy $\lim_{k \rightarrow \infty} x^{(k)}_{1}=x_{1}$. By \cref{defn:Circumcenter}, we know $\CCO(\{ x^{(k)}_{1} \}) =x^{(k)}_{1}$ and $\CCO(\{ x_{1}\})=x_{1}$.
Hence \begin{align*} \lim_{k \rightarrow \infty} \CCO(\widetilde{S}^{(k)})= \lim_{k \rightarrow \infty} x^{(k)}_{1}=x_{1}=\CCO(S). \end{align*} Now assume that $l \geq 2$. By \cref{cor:unique:BasisPformula} and \cref{lem:Basis:AffineHullEq}, we obtain \begin{align} \label{eq:prop:CCO:LimitCont:1} \CCO(S)=\CCO(\widetilde{S})=x_{1}+\frac{1}{2}(x_{i_{1}}-x_{1},\ldots,x_{i_{t}}-x_{1}) G( x_{i_{1}}-x_{1},\ldots,x_{i_{t}}-x_{1})^{-1} \begin{pmatrix} \norm{x_{i_{1}}-x_{1}}^{2} \\ \vdots\\ \norm{x_{i_{t}}-x_{1}}^{2}\\ \end{pmatrix}. \end{align} Using the assumptions and \cref{lem:AffineIndep:OpenSet}, we know that there exists $N \in \mathbb{N}$ such that \begin{align*} (\forall k \geq N) \quad x^{(k)}_{1}, x^{(k)}_{i_{1}}, \ldots, x^{(k)}_{i_{t}}~\mbox{are affinely independent.} \end{align*} By \cref{thm:unique:LinIndpPformula}, we know that $(\forall k \geq N)$ $\CCO(\widetilde{S}^{(k)}) \in \mathcal{H}$. Moreover, for every $k \geq N$, \begin{align} \label{eq:prop:CCO:LimitCont:1k} \CCO(\widetilde{S}^{(k)})=x^{(k)}_{1}+ \frac{1}{2}(x^{(k)}_{i_{1}}-x_{1}^{(k)} ,\ldots,x^{(k)}_{i_{t}}-x_{1}^{(k)}) G( x^{(k)}_{i_{1}}-x_{1}^{(k)} ,\ldots,x^{(k)}_{i_{t}}-x_{1}^{(k)})^{-1} \begin{pmatrix} \norm{x^{(k)}_{i_{1}}-x_{1}^{(k)}}^{2} \\ \vdots\\ \norm{x^{(k)}_{i_{t}}-x_{1}^{(k)}}^{2} \\ \end{pmatrix}. \end{align} Comparing \cref{eq:prop:CCO:LimitCont:1} with \cref{eq:prop:CCO:LimitCont:1k} and using \cref{cor:GramInver:Continu}, we obtain \begin{align*} \lim_{k \rightarrow \infty} \CCO(\widetilde{S}^{(k)})= \CCO(\widetilde{S})=\CCO(S). \end{align*} \cref{prop:CCO:LimitCont:LinearIndep}: Let $ x_{1}, \ldots, x_{m-1}, x_{m} \in \mathcal{H}$ be affinely independent. Then $t = m-1$ and $\widetilde{S} =S$. Substitute $\widetilde{S}$ and $\widetilde{S}^{(k)}$ in \cref{prop:CCO:LimitCont:Linear} by $S$ and $S^{(k)}$, respectively. Then we obtain \begin{align*} \lim_{k \rightarrow \infty} \CCO( S^{(k)} )= \CCO( S ) \end{align*} and the proof is complete.
\end{proof} \begin{corollary} \label{cor:Psi:Contin} The mapping \begin{align*} \Psi\colon \mathcal{H}^{m} \rightarrow \mathcal{H} \cup \{ \varnothing \} \colon (x_{1}, \ldots, x_{m}) \mapsto \CCO(\{x_{1}, \ldots, x_{m}\}) \end{align*} is continuous at every point $(x_{1}, \ldots, x_{m})\in \mathcal{H}^{m}$ where $x_{1}, \ldots, x_{m}$ are affinely independent. \end{corollary} \begin{proof} This follows directly from \cref{thm:CCO:LimitCont}\cref{prop:CCO:LimitCont:LinearIndep}. \end{proof} Let us record the doubleton case explicitly. \begin{proposition} \label{prop:CircumOperaContin2Points} Suppose that $m=2$. Let $\big( ( x^{(k)}_{1},x^{(k)}_{2}) \big)_{k \geq 1} \subseteq \mathcal{H}^{2}$ satisfy $\lim_{k \rightarrow \infty}( x^{(k)}_{1},x^{(k)}_{2})=(x_{1},x_{2})$. Then \begin{align*} \lim_{k \rightarrow \infty} \CCO(\{ x^{(k)}_{1},x^{(k)}_{2}\})=\CCO(\{x_{1},x_{2}\}). \end{align*} \end{proposition} \begin{proof} Indeed, we deduce from \cref{exam:CircForTwoPoints} that \begin{align*} \lim_{k \rightarrow \infty} \CCO( \{ x^{(k)}_{1},x^{(k)}_{2}\})=\lim_{k \rightarrow \infty} \frac{x^{(k)}_{1}+x^{(k)}_{2} }{2} =\frac{x_{1}+x_{2}}{2}=\CCO( \{x_{1},x_{2}\}) \end{align*} and the result follows. \end{proof} The following example illustrates that the assumption that ``$m=2$'' in \cref{prop:CircumOperaContin2Points} cannot be replaced by ``the cardinality of $S$ is 2''. \begin{example} \label{exam:CounterExampleContinuity:empty } Suppose that $\mathcal{H}=\mathbb{R}^{2}$, that $m=3$, and that $S=\{x_{1}, x_{2},x_{3}\}$ with $x_{1}=(-1,0)$, $x_{2}=x_{3}=(1,0)$. Then there exists $( (x^{(k)}_{1}, x^{(k)}_{2},x^{(k)}_{3}) )_{k \geq 1} \subseteq \mathcal{H}^{3}$ such that \begin{align*} \lim_{k \rightarrow \infty} \CCO( \{ x^{(k)}_{1}, x^{(k)}_{2},x^{(k)}_{3}\}) \neq \CCO(S). \end{align*} \end{example} \begin{proof} For every $k \geq 1$, let $(x^{(k)}_{1}, x^{(k)}_{2},x^{(k)}_{3}) = \Big( (-1,0), (1,0), (1+\frac{1}{k},0) \Big) \in \mathcal{H}^{3}$.
Then by \cref{defn:Circumcenter}, we know that $(\forall k \geq 1)$ $\CCO(\{ x^{(k)}_{1}, x^{(k)}_{2},x^{(k)}_{3}\})= \varnothing$, since no point in $\mathbb{R}^{2}$ is equidistant from all three points. On the other hand, by \cref{defn:Circumcenter} again, we know $\CCO(S)=(0,0) \in \mathcal{H}$. Hence $\lim_{k \rightarrow \infty} \CCO( \{ x^{(k)}_{1}, x^{(k)}_{2},x^{(k)}_{3}\}) = \varnothing \neq (0,0) = \CCO(S)$. \end{proof} The following question now naturally arises: \begin{question} \label{quest:CCS:Contin} Suppose that $\CCO( \{x_{1},x_{2},x_{3}\}) \in \mathcal{H}$, and let $\big( (x^{(k)}_{1}, x^{(k)}_{2},x^{(k)}_{3}) \big)_{k \geq 1} \subseteq \mathcal{H}^{3}$ be such that $\lim_{k \rightarrow \infty} (x^{(k)}_{1}, x^{(k)}_{2},x^{(k)}_{3})=(x_{1},x_{2},x_{3})$. Is it true that the implication \begin{equation*} (\forall k \geq 1) \quad \CCO(\{x^{(k)}_{1}, x^{(k)}_{2},x^{(k)}_{3}\}) \in \mathcal{H} \Longrightarrow \lim_{k \rightarrow \infty} \CCO(\{x^{(k)}_{1}, x^{(k)}_{2},x^{(k)}_{3}\}) = \CCO( \{x_{1},x_{2},x_{3}\}) \end{equation*} holds? \end{question} When $x_{1},x_{2}, x_{3}$ are affinely independent, then \cref{thm:CCO:LimitCont}\cref{prop:CCO:LimitCont:LinearIndep} gives us an affirmative answer. However, the answer is negative if $x_{1},x_{2}, x_{3}$ are not assumed to be affinely independent. \begin{example} \label{exam:CounterExampleContinuity } Suppose that $\mathcal{H}=\mathbb{R}^{2}$ and $S=\{x_{1}, x_{2},x_{3}\}$ with $x_{1}=(-2,0)$, $x_{2}=x_{3}=(2,0)$.
Then there exists a sequence $\big( (x^{(k)}_{1}, x^{(k)}_{2},x^{(k)}_{3}) \big)_{k \geq 1} \subseteq \mathcal{H}^{3}$ such that \begin{enumerate} \item \label{exam:CounExamCont:condi} $\lim_{k \rightarrow \infty} (x^{(k)}_{1}, x^{(k)}_{2},x^{(k)}_{3})=(x_{1},x_{2},x_{3})$, \item \label{exam:CounExamCont:i} $(\forall k \geq 1) \quad \CCO(\{x^{(k)}_{1}, x^{(k)}_{2},x^{(k)}_{3}\}) \in \mathbb{R}^{2}$, and \item \label{exam:CounExamCont:ii} $\lim_{k \rightarrow \infty} \CCO( \{ x^{(k)}_{1}, x^{(k)}_{2},x^{(k)}_{3}\}) \neq \CCO(S)$. \end{enumerate} \end{example} \begin{proof} By \cref{defn:Circumcenter}, we know that $\CCO(S)=(0,0)\in \mathcal{H}$. Set \begin{align*} (\forall k \geq 1) \quad S^{(k)}=\{x^{(k)}_{1}, x^{(k)}_{2},x^{(k)}_{3}\}=\Big\{(-2,0), (2,0), \big(2-\tfrac{1}{k},\tfrac{1}{4k}\big) \Big\}. \end{align*} \cref{exam:CounExamCont:condi}: In this case, \begin{align*} \lim_{k \rightarrow \infty} (x^{(k)}_{1}, x^{(k)}_{2},x^{(k)}_{3}) =&\lim_{k \rightarrow \infty} \Big( (-2,0), (2,0), \big(2-\tfrac{1}{k},\tfrac{1}{4k}\big) \Big)\\ =&\big( (-2,0),(2,0),(2,0) \big) \\ =&(x_{1},x_{2},x_{3}). \end{align*} \cref{exam:CounExamCont:i}: It is clear that for every $k \geq 1$, the vectors $(-2,0), (2,0), (2-\frac{1}{k},\frac{1}{4k}) $ are not collinear; that is, $(-2,0), (2,0), (2-\frac{1}{k},\frac{1}{4k})$ are affinely independent. By \cref{thm:unique:LinIndpPformula}, we see that \begin{align*} (\forall k \geq 1) \quad \CCO(\{x^{(k)}_{1}, x^{(k)}_{2},x^{(k)}_{3}\}) \in \mathbb{R}^{2}. \end{align*} \cref{exam:CounExamCont:ii}: Let $k \geq 1$. By definition of $\CCO(S^{(k)})$ and \cref{exam:CounExamCont:i}, we deduce that $\CCO(S^{(k)}) =(p^{(k)}_{1}, p^{(k)}_{2}) \in \mathbb{R}^{2}$ and that \begin{align*} \norm{\CCO(S^{(k)}) -x^{(k)}_{1} }= \norm{\CCO(S^{(k)}) -x^{(k)}_{2} }=\norm{\CCO(S^{(k)}) -x^{(k)}_{3} }.
\end{align*} Because $ \CCO(S^{(k)})$ must be in the intersection of the perpendicular bisector of $ x^{(k)}_{1}=(-2,0), x^{(k)}_{2}=(2,0)$ and the perpendicular bisector of $ x^{(k)}_{2}=(2,0), x^{(k)}_{3}=(2-\frac{1}{k},\frac{1}{4k})$, we obtain \begin{align*} p^{(k)}_{1}=0 \quad \text{and} \quad p^{(k)}_{2}=4\big( p^{(k)}_{1} - \frac{2+2-\frac{1}{k}}{2}\big)+\tfrac{1}{8k}; \end{align*} thus, \begin{align} \label{eq:exam:CounterExampleContinuity:SkFormula} \CCO(S^{(k)})=(p^{(k)}_{1}, p^{(k)}_{2}) = \big(0, -8+\tfrac{2}{k}+\tfrac{1}{8k}\big). \end{align} (Alternatively, we can use the formula in \cref{thm:unique:LinIndpPformula} to get \cref{eq:exam:CounterExampleContinuity:SkFormula}). Therefore, \begin{align*} \lim_{k \rightarrow \infty} \CCO(S^{(k)})= \lim_{k \rightarrow \infty} \big(0, -8+\tfrac{2}{k} +\tfrac{1}{8k}\big)=(0,-8) \neq (0,0)=\CCO(S), \end{align*} and the proof is complete. As the figure below shows, $(\forall k \geq 1)$ $x^{(k)}_{3}=(2-\frac{1}{k},\frac{1}{4k})$ converges to $x_{3}=(2,0)$ along the purple line $L=\{(x,y) \in \mathbb{R}^{2}~|~y=-\frac{1}{4}(x-2) \}$. In fact, $\CCO(S^{(k)})$ is just the intersection point between the two lines $M_{1}$ and $M_{2}$, where $M_{1}$ is the perpendicular bisector between the points $x_{1}$ and $x_{2}$, and $M_{2}$ is the perpendicular bisector between the points $x^{(k)}_{3}$ and $x_{2}$. \begin{figure}[H] \label{CountExampContinu.png} \begin{center}\includegraphics[scale=0.6]{CountExampContinu.png} \end{center} \caption{Continuity of circumcenter operator may fail even when $(\forall k \geq 1)$ $\CCO(S^{(k)}) \in \mathcal{H}$.} \end{figure} \end{proof} \section{The circumcenter of three points} \label{sec:CircThreePoints} In this section, we study the circumcenter of a set containing three points. We will give a characterization of the existence of the circumcenter of three pairwise distinct points. In addition, we shall provide asymmetric and symmetric formulae.
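As a numerical sanity check before we turn to the three-point theory, the explicit Gram-matrix formula of \cref{thm:unique:LinIndpPformula} can be evaluated directly. The following sketch (Python with NumPy; the helper name \texttt{circumcenter} is ours, not part of the text) solves the linear system $G\alpha = \tfrac{1}{2}\big(\norm{x_{i}-x_{1}}^{2}\big)_{i=2}^{m}$ and verifies the defining equidistance property on a concrete triangle.

```python
import numpy as np

def circumcenter(points):
    """Circumcenter via the Gram-matrix formula (illustrative sketch).

    points: affinely independent vectors x_1, ..., x_m in R^n.
    Returns the unique p in aff{x_1, ..., x_m} equidistant from every x_i.
    """
    x1 = np.asarray(points[0], dtype=float)
    # Columns of D are the differences x_2 - x_1, ..., x_m - x_1.
    D = np.stack([np.asarray(x, dtype=float) - x1 for x in points[1:]], axis=1)
    G = D.T @ D                        # Gram matrix G(x_2 - x_1, ..., x_m - x_1)
    rhs = 0.5 * np.sum(D * D, axis=0)  # (1/2) ||x_i - x_1||^2 for i = 2, ..., m
    return x1 + D @ np.linalg.solve(G, rhs)

# Right triangle (0,0), (2,0), (0,2): the circumcenter is the midpoint
# (1,1) of the hypotenuse, equidistant from all three vertices.
vertices = [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0)]
p = circumcenter(vertices)
radii = [float(np.linalg.norm(p - np.asarray(v))) for v in vertices]
```

On this triangle the routine returns $(1,1)$ with common radius $\sqrt{2}$, consistent with \cref{thm:clform:three}: three points admit a circumcenter exactly when they are affinely independent.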
\begin{theorem}\label{thm:clform:three} Suppose that $S=\{x,y,z\} \in \mathcal{P}(\mathcal{H})$ and that $l=3$ is the cardinality of $S$. Then $x, y, z$ are affinely independent if and only if $\CCO(S) \in \mathcal{H}$. \end{theorem} \begin{proof} If $x, y, z$ are affinely independent, then $\CCO(S) \in \mathcal{H}$ by \cref{thm:unique:LinIndpPformula}. To prove the converse implication, suppose that $\CCO(S) \in \mathcal{H}$, i.e., \begin{enumerate} \item $\CCO(S) \in \ensuremath{\operatorname{aff}} \{x,y,z\}$, and \item $\norm{\CCO(S)-x}=\norm{\CCO(S)-y}=\norm{\CCO(S)-z}$. \end{enumerate} We argue by contradiction and thus assume that the elements of $S$ are affinely dependent: \begin{align*} \dim( \ensuremath{{\operatorname{span}}}\{S-x\}) = \dim ( \ensuremath{{\operatorname{span}}} \{y-x, z-x\}) \leq 1. \end{align*} Note that $y-x \neq 0$ and $z-x \neq 0$. Set \begin{align*} U = x + \ensuremath{{\operatorname{span}}} \{y-x, z-x\} = x +\ensuremath{{\operatorname{span}}} \{y-x\} = x+\ensuremath{{\operatorname{span}}} \{z -x\}. \end{align*} Combining with \cref{lem:AffineHull}, we get \begin{align} \label{eq:thm:three:URepre} U =\ensuremath{\operatorname{aff}} \{x, y,z\} = \ensuremath{\operatorname{aff}}\{x,y\} =\ensuremath{\operatorname{aff}}\{x,z\}. \end{align} By definition of $\CCO(S)$, we have \begin{align} \label{eq:thm:clform:three:p1} \CCO(S) \in \ensuremath{\operatorname{aff}} \{x,y\} \stackrel{\cref{eq:thm:three:URepre}}{=} U \quad \mbox{and} \quad \norm{\CCO(S)-x}=\norm{\CCO(S)-y}, \end{align} and \begin{align} \label{eq:thm:clform:three:p2} \CCO(S) \in \ensuremath{\operatorname{aff}} \{x,z\} \stackrel{\cref{eq:thm:three:URepre}}{=} U \quad \mbox{and} \quad \norm{\CCO(S)-x}=\norm{\CCO(S)-z}. \end{align} Now using \cref{prop:NormEqInnNorm:Norm} $\Leftrightarrow$ \cref{prop:NormEqInnNorm:ProjeEqu} in \cref{prop:NormEqInnNorm} and using \cref{eq:thm:clform:three:p1}, we get \begin{align*} \CCO(S)=P_{U}\Big(\CCO(S) \Big)=\frac{x+y}{2}.
\end{align*} Similarly, using \cref{prop:NormEqInnNorm:Norm} $\Leftrightarrow$ \cref{prop:NormEqInnNorm:ProjeEqu} in \cref{prop:NormEqInnNorm} and using \cref{eq:thm:clform:three:p2}, we can also get \begin{align*} \CCO(S)=P_{U}\Big(\CCO(S) \Big)=\frac{x+z}{2}. \end{align*} Therefore, \begin{align*} \frac{x+y}{2} =\CCO(S)=\frac{x+z}{2} \Longrightarrow y=z, \end{align*} which contradicts the assumption that $l=3$. The proof is complete. \end{proof} In contrast, when the cardinality of $S$ is $4$, then \begin{empheq}[box=\mybluebox]{equation*} \CCO(S) \in \mathcal{H} \not \Rightarrow ~\text{elements of $S$ are affinely independent} \end{empheq} as the following example demonstrates. Thus the characterization of the existence of the circumcenter in \cref{thm:clform:three} is generally not true when we consider $l\geq 4$ pairwise distinct points. \begin{example} \label{exam:AffinDepExist} Suppose that $\mathcal{H}=\mathbb{R}^{2}$, that $m=4$, and $S = \{x_{1},x_{2}, x_{3},x_{4}\}$, where $x_{1}=(0,0)$, $x_{2}=(4,0)$, $x_{3}=(0,4)$, and $x_{4}=(4,4)$ (see \cref{CircFourAffineDep.png}). Then $x_{1},x_{2}, x_{3},x_{4}$ are pairwise distinct and affinely dependent, yet $\CCO(S) = (2,2)$. \end{example} \begin{figure}[H] \begin{center}\includegraphics[scale=0.8]{Count_ExistChara.png} \end{center} \caption{Circumcenter of the four affinely dependent points from \cref{exam:AffinDepExist}.} \label{CircFourAffineDep.png} \end{figure} In \cref{prop:unique:PExisUnique} and \cref{thm:unique:LinIndpPformula} above, where we presented formulae for $\CCO(S)$, we gave special importance to the first point $x_{1}$ in $S$. We now provide some longer yet symmetric formulae for $\CCO(S)$. \begin{remark}\label{rem:SymFormCircu} Suppose that $S=\{x,y,z\}$ and that $l=3$ is the cardinality of $S$.
Assume furthermore that $\CCO(S) \in \mathcal{H}$, i.e., there is a unique point $\CCO(S)$ satisfying \begin{enumerate} \item $\CCO(S) \in \ensuremath{\operatorname{aff}} \{x,y,z\}$ and \item $\norm{\CCO(S)-x}=\norm{\CCO(S)-y}=\norm{\CCO(S)-z}$. \end{enumerate} By \cref{thm:clform:three}, the vectors $x, y, z$ must be affinely independent. From \cref{thm:unique:LinIndpPformula} we obtain \begin{align*} \CCO(S) = &~ x+\frac{1}{2} (y-x,z-x)\begin{pmatrix} \norm{y-x}^{2} & \innp{y-x, z-x} \\ \innp{z-x,y-x} & \norm{z-x}^{2} \\ \end{pmatrix} ^{-1} \begin{pmatrix} \norm{y-x}^{2} \\ \norm{z-x}^{2} \\ \end{pmatrix} \\ = &~ x+\frac{(\norm{y-x}^{2} \norm{z-x}^{2} -\norm{z-x}^{2}\innp{y-x, z-x})(y-x)}{2(\norm{y-x}^{2}\norm{z-x}^{2}- \innp{y-x, z-x}^{2})} \\ &~ +\frac{ (\norm{y-x}^{2} \norm{z-x}^{2} -\norm{y-x}^{2}\innp{y-x, z-x})(z-x) }{2(\norm{y-x}^{2}\norm{z-x}^{2}- \innp{y-x, z-x}^{2})} \\ = &~ \frac{1}{K_{1}}\Big( \norm{y-z}^{2} \innp{x-z,x-y}x+ \norm{x-z}^{2} \innp{y-z,y-x}y+ \norm{x-y}^{2} \innp{z-x,z-y}z \Big), \end{align*} where $K_{1}=2(\norm{y-x}^{2}\norm{z-x}^{2}- \innp{y-x, z-x}^{2})$.\\ Similarly, \begin{align*} \CCO(S)=\frac{1}{K_{2}}\Big( \norm{y-z}^{2} \innp{x-z,x-y}x+\norm{x-z}^{2} \innp{y-z,y-x}y+ \norm{x-y}^{2} \innp{z-x,z-y}z \Big), \end{align*} where $K_{2}=2(\norm{x-y}^{2}\norm{z-y}^{2}- \innp{x-y, z-y}^{2})$ and \begin{align*} \CCO(S)=\frac{1}{K_{3}}\Big( \norm{y-z}^{2} \innp{x-z,x-y}x+ \norm{x-z}^{2} \innp{y-z,y-x}y+ \norm{x-y}^{2} \innp{z-x,z-y}z \Big), \end{align*} where $K_{3}=2(\norm{x-z}^{2}\norm{y-z}^{2}- \innp{x-z, y-z}^{2})$.
In view of \cref{prop:unique:PExisUnique} (the uniqueness of the circumcenter), we now average the three formulae from above to obtain the following symmetric formula for $\CCO(S)$: \begin{align*} \CCO(S)= K\Big( \norm{y-z}^{2} \innp{x-z,x-y}x+ \norm{x-z}^{2} \innp{y-z,y-x}y+\norm{x-y}^{2} \innp{z-x,z-y}z \Big), \end{align*} where $K=\frac{1}{6} \Big( \frac{1}{\norm{y-x}^{2}\norm{z-x}^{2}- \innp{y-x, z-x}^{2}}+\frac{1}{\norm{x-y}^{2}\norm{z-y}^{2}- \innp{x-y, z-y}^{2}}+\frac{1}{\norm{x-z}^{2}\norm{y-z}^{2}- \innp{x-z, y-z}^{2}} \Big)$. In fact, \cref{prop:GramMatrSymm} yields $K_{1}=K_{2}=K_{3}$. \end{remark} We now summarize the discussion so far in the following two pleasing main results. \begin{theorem}[nonsymmetric formula for the circumcenter] \label{thm:SymForm} Suppose that $S=\{x,y,z\}$ and denote the cardinality of $S$ by $l$. Then exactly one of the following cases occurs: \begin{enumerate} \item \label{thm:SymForm:1} $l=1$ and $\CCO(S) =x$. \item \label{thm:SymForm:2} $l=2$, say $S=\{u,v\}$, where $u,v \in S$ and $u \neq v$, and $\CCO(S)=\frac{u+v}{2}$. \item \label{thm:SymForm:3} $l=3$ and exactly one of the following two cases occurs: \begin{enumerate} \item \label{thm:SymForm:3:a} $x, y, z$ are affinely independent; equivalently, $\norm{y-x}\norm{z-x} > |\innp{y-x,z-x}|$, and \begin{align*} \CCO(S) = \frac{\norm{y-z}^{2} \innp{x-z,x-y}x+ \norm{x-z}^{2} \innp{y-z,y-x}y+ \norm{x-y}^{2} \innp{z-x,z-y}z }{2(\norm{y-x}^{2}\norm{z-x}^{2}- \innp{y-x, z-x}^{2})}. \end{align*} \item \label{thm:SymForm:3:b} $x, y, z$ are affinely dependent; equivalently, $\norm{y-x}\norm{z-x} = |\innp{y-x,z-x}|$, and $\CCO(S) = \varnothing $. \end{enumerate} \end{enumerate} \end{theorem} \begin{theorem}[symmetric formula of the circumcenter] \label{thm:Formular:Circum:Allcases} Suppose that $S=\{x,y,z\}$ and denote the cardinality of $S$ by $l$.
Then exactly one of the following cases occurs: \begin{enumerate} \item \label{thm:Formular:Circum:Allcases:i} $l=1$ and $\CCO(S)=x=y=z = \frac{x +y +z}{3}$. \item \label{thm:Formular:Circum:Allcases:ii} $l=2$ and $\CCO(S)= \frac{\norm{x-y}\frac{x+y}{2}+\norm{x-z}\frac{x+z}{2}+\norm{y-z}\frac{y+z}{2}}{\norm{x-y} +\norm{x-z}+\norm{y-z}}$. \item \label{thm:Formular:Circum:Allcases:iii} $l=3$, set $K=\frac{1}{6} \big( \frac{1}{\norm{y-x}^{2}\norm{z-x}^{2}- \innp{y-x, z-x}^{2}}+\frac{1}{\norm{x-y}^{2}\norm{z-y}^{2}- \innp{x-y, z-y}^{2}}+\frac{1}{\norm{x-z}^{2}\norm{y-z}^{2}- \innp{x-z, y-z}^{2}} \big)$, and exactly one of the following two cases occurs: \begin{enumerate} \item \label{thm:Formular:Circum:Allcases:iii:a} $K\in\left]0,+\infty\right[$ and \begin{align*} \CCO(S) & = K\Big( \norm{y-z}^{2} \innp{x-z,x-y}x+ \norm{x-z}^{2} \innp{y-z,y-x}y+\norm{x-y}^{2} \innp{z-x,z-y}z \Big). \end{align*} \item \label{thm:Formular:Circum:Allcases:iii:b} $K$ is not defined (because of a zero denominator) and $\CCO(S) = \varnothing $. \end{enumerate} \end{enumerate} \end{theorem} \section{Applications of the circumcenter} \label{sec:applications} In this section, we discuss applications of the circumcenter in optimization. Let $z \in \mathcal{H}$, and let $U_1,\ldots,U_m$ be closed subspaces of $\mathcal{H}$. The corresponding best approximation problem is to \begin{empheq}[box=\mybluebox]{equation} \label{eq:BestAppro} \mbox{Find}~\bar{u} \in \cap^{m}_{i=1} U_{i} ~\mbox{such that}~ \norm{z-\bar{u}}=\min_{u \in \cap^{m}_{i=1} U_{i}} \norm{z-u}. \end{empheq} Clearly, the solution of \cref{eq:BestAppro} is just $P_{\cap^{m}_{i=1} U_{i}}z$. Now assume that $\mathcal{H} = \mathbb{R}^{n}$, and let $U$ and $V$ be linear subspaces of $\mathcal{H}$, i.e., we focus on $m=2$ subspaces. Set \begin{align*} \mathcal{S}\colon \mathbb{R}^n\to\mathcal{P}(\mathbb{R}^n)\colon x\mapsto \{x, R_{U}x, R_{V}R_{U}x\}.
\end{align*} Behling, Bello Cruz, and Santos introduced and studied in \cite{BCS2017} an algorithm to accelerate the Douglas--Rachford method, which they termed the \emph{Circumcentered-Douglas-Rachford method (C-DRM)}. Given a current point $x \in \mathbb{R}^{n}$, the next iterate of the C-DRM is the circumcenter of the triangle with vertices $x$, $R_{U}x$ and $R_{V}R_{U}x$. Hence, given the initial point $x \in \mathbb{R}^{n}$, the C-DRM generates the sequence $(x^{(k)})_{k \in \mathbb{N}}$ via \begin{align} \label{eq:rem:CDRM} x^{(0)}=x, \quad \text{and} \quad (\forall k \in \mathbb{N}) \quad x^{(k+1)} =\CCO(\mathcal{S}(x^{(k)})). \end{align} Behling et al.'s \cite[Lemma~2]{BCS2017} guarantees that for every $x \in \mathbb{R}^{n}$, the circumcenter $\CCO(\mathcal{S}(x))$ is the projection of any point $w \in U \cap V$ onto the affine subspace $\ensuremath{\operatorname{aff}} \{x, R_{U}x, R_{V}R_{U}x\}$. Here, the existence of the circumcenter of $\mathcal{S}(x)$ turns out to be a necessary condition for the nonemptiness of $U \cap V$. In fact, $\CCO(\mathcal{S}(x)) = P_{\ensuremath{\operatorname{aff}} (\mathcal{S}(x))} (P_{U\cap V}x)$, which means that $\CCO(\mathcal{S}(x))$ is the closest point to $P_{U\cap V}x$ among the points in the affine subspace $\ensuremath{\operatorname{aff}} (\mathcal{S}(x))$. In \cite[Theorem~1]{BCS2017}, the authors proved that if $x$ in \cref{eq:rem:CDRM} is replaced by $P_{U}z$, $P_{V}z$ or $P_{U+V}z$, where $z \in \mathbb{R}^{n}$, then the C-DRM sequence defined in \cref{eq:rem:CDRM} converges linearly to $P_{U\cap V}z$. Moreover, their rate of convergence is at least as good as the cosine of the Friedrichs angle between $U$ and $V$, $c_{F} \in \left[0,1\right[$, which happens to be the sharp rate for the original DRM; see \cite[Theorem~4.1]{BCNPW2014} for details. In \cite[Section~3.1]{BCS2017}, the authors elaborate on how to compute the circumcenter of $\mathcal{S}(x)$ in $\mathbb{R}^{n}$.
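To make this computation concrete, the following short Python sketch evaluates the closed formula for the circumcenter of three points and performs one C-DRM step for a pair of lines through the origin in $\mathbb{R}^{2}$. It is our own illustration, not the code of \cite{BCS2017}; the helper names \texttt{circumcenter} and \texttt{reflect} are ours.

```python
import math

def circumcenter(x, y, z, tol=1e-12):
    # CCO(S) = x + alpha*(y - x) + beta*(z - x), where (alpha, beta) solves the
    # 2x2 Gram system; returns None when x, y, z are affinely dependent.
    dot = lambda a, b: sum(ai * bi for ai, bi in zip(a, b))
    u = [yi - xi for yi, xi in zip(y, x)]
    v = [zi - xi for zi, xi in zip(z, x)]
    uu, vv, uv = dot(u, u), dot(v, v), dot(u, v)
    det = uu * vv - uv * uv  # Gram determinant; zero iff affine dependence
    if abs(det) <= tol * max(uu * vv, 1.0):
        return None
    alpha = (uu * vv - uv * vv) / (2.0 * det)
    beta = (uu * vv - uv * uu) / (2.0 * det)
    return [xi + alpha * ui + beta * vi for xi, ui, vi in zip(x, u, v)]

def reflect(x, d):
    # Reflection of x across the line spanned by the unit vector d: 2<x,d>d - x.
    c = 2.0 * sum(xi * di for xi, di in zip(x, d))
    return [c * di - xi for di, xi in zip(d, x)]

# One C-DRM step for U = span{(1,0)} and V = span{(1,1)} in R^2.
d_U = [1.0, 0.0]
d_V = [1.0 / math.sqrt(2.0), 1.0 / math.sqrt(2.0)]
x0 = [1.0, 2.0]
x1 = circumcenter(x0, reflect(x0, d_U), reflect(reflect(x0, d_U), d_V))
```

Since reflections across lines through the origin preserve norms, the three points $x$, $R_{U}x$ and $R_{V}R_{U}x$ are equidistant from every point of $U \cap V$; here $U \cap V = \{0\}$ and the single step above already returns the origin, in accordance with \cite[Lemma~2]{BCS2017}.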
They used the fact that the projection of $\CCO(\mathcal{S}(x))$ onto each of the vectors $R_{U}x -x$ and $R_{V}R_{U}x -x$ has its endpoint at the midpoint of the line segment from $x$ to $R_{U}x$ and from $x$ to $R_{V}R_{U}x$, respectively. They exhibited a $2 \times 2$ linear system of equations to calculate $\CCO(\mathcal{S}(x))$ and a parametric expression for $\CCO(\mathcal{S}(x))$. Their expression for $\CCO(\mathcal{S}(x))$ can be deduced from our \cref{rem:SymFormCircu}. Actually, for every $x \in \mathbb{R}^{n}$, using \cref{thm:SymForm}\cref{thm:SymForm:3:a}, we can easily obtain a closed formula for $\CCO(\mathcal{S}(x))$ allowing us to efficiently calculate the C-DRM sequence. In \cite[Corollary~3]{BCS2017}, Behling et al.\ proved that their linear convergence results are applicable to affine subspaces with nonempty intersection using the Friedrichs angle of suitable linear subspaces parallel to the original affine subspaces. Returning to \cref{eq:BestAppro}, we now set \begin{align*} \widehat{\mathcal{S}} \colon \mathbb{R}^n\to\mathcal{P}(\mathbb{R}^n)\colon x\mapsto \big\{x, R_{U_{1}}x, R_{U_{2}}R_{U_{1}}x, \ldots, R_{U_{m}}\cdots R_{U_{2}}R_{U_{1}}x\big\}. \end{align*} In order to minimize the inherent zig-zag behaviour of sequences generated by various reflection and projection methods, Behling et al.\ generalized the C-DRM in \cite{BCS2018} to the so-called \emph{Circumcentered-Reflection Method (CRM)}. Using our notation, it turns out that the underlying CRM operator $C\colon\mathbb{R}^n\to\mathbb{R}^n$ is nothing but the composition $\CCO\,\circ\, \widehat{\mathcal{S}}$. Hence Behling et al.'s CRM sequence is just \begin{align} \label{eq:CRM:Sequ} x^{(0)}=x, \quad \text{and} \quad (\forall k \in \mathbb{N}) \quad x^{(k+1)} =\CCO(\widehat{\mathcal{S}}(x^{(k)})). \end{align} In \cite[Lemma~3.1]{BCS2018}, they show that $C$ is well defined.
Moreover, they also obtain \begin{align*} (\forall w \in \cap^{m}_{i=1} U_{i} ) \quad \CCO(\widehat{\mathcal{S}}(x)) = P_{\ensuremath{\operatorname{aff}} (\widehat{\mathcal{S}}(x))} (w). \end{align*} In particular, $\CCO(\widehat{\mathcal{S}}(x)) = P_{\ensuremath{\operatorname{aff}} (\widehat{\mathcal{S}}(x))}(P_{\cap^{m}_{i=1} U_{i}}x)$, which means that the circumcenter of the set $\widehat{\mathcal{S}}(x)$ is the point in $\ensuremath{\operatorname{aff}} ( \widehat{\mathcal{S}}(x) )$ that is closest to $P_{\cap^{m}_{i=1} U_{i}}x$. Behling et al.'s central convergence result (see \cite[Theorem~3.3]{BCS2018}) states that the CRM sequence \cref{eq:CRM:Sequ} converges linearly to $P_{\cap^{m}_{i=1} U_{i}}x$. For the actual computation of the circumcenter of the set $\widehat{\mathcal{S}}(x)$, both \cite{BCS2017} and \cite{BCS2018} only contain passing references to the fact that the computation ``requires the resolution of a suitable $m \times m$ linear system of equations.'' Concluding this section, let us point out that the explicit formula presented in \cref{cor:unique:BasisPformula} may be used; after finding a maximal linearly independent subset of $\widehat{\mathcal{S}}(x)-x$ (using Matlab, say) one can directly use the formula in \cref{cor:unique:BasisPformula} to calculate the circumcenter. \section{The circumcenter in $\mathbb{R}^3$ and the crossproduct}\label{sec:CircCrossProd} We conclude this paper by expressing the circumcenter and circumradius in $\mathbb{R}^3$ by using the cross product. We start by reviewing some properties of the cross product. \begin{definition}[cross product] \cite[page 483]{A1967} \label{def:CrosPro} Let $x=(x_{1}, x_{2}, x_{3})$ and $y=(y_{1}, y_{2},y_{3})$ be two vectors in $\mathbb{R}^{3}$. The \emph{cross product} $x \times y$ (in that order) is \begin{align*} x \times y=(x_{2}y_{3}-x_{3}y_{2}, x_{3}y_{1}-x_{1}y_{3}, x_{1}y_{2}-x_{2}y_{1}).
\end{align*} \end{definition} \begin{fact} {\rm \cite[Theorem~13.12]{A1967} and \cite[Theorem~17.12]{C1969}} \label{lem:CrosProdProper} Let $x, y, z$ be in $\mathbb{R}^{3}$. Then the following hold: \begin{enumerate} \item \label{lem:CrosProdProper:Binliea} The cross product defined in \cref{def:CrosPro} is a bilinear function, that is, for every $\alpha, \beta \in \mathbb{R}$, \begin{align*} (\alpha x +\beta y) \times z =\alpha (x \times z) +\beta (y \times z) \quad \text{and} \quad x \times (\alpha y +\beta z)= \alpha (x \times y) +\beta (x \times z). \end{align*} \item \label{lem:CrosProdProper:PerpenSpanSpace} $x \times y \in (\ensuremath{{\operatorname{span}}} \{x,y\})^{\perp}$, that is \begin{align*} (\forall \alpha \in \mathbb{R}) \quad (\forall \beta \in \mathbb{R}) \quad \innp{x \times y , \alpha x +\beta y} =0. \end{align*} \item \label{fact:CroProProp:TripCroPro} We have \begin{align*} (x \times y) \times z = \innp{x,z}y -\innp{y,z}x \quad \text{and} \quad x \times (y \times z)= \innp{x,z} y -\innp{x,y}z. \end{align*} \item \label{lem:CrosProdProper:Lagrang} {\rm \textbf{(Lagrange's identity)}} $\norm{x \times y}^{2}=\norm{x}^{2}\norm{y}^{2}-\innp{x,y}^{2}$. \end{enumerate} \end{fact} \begin{definition} \cite[page 458]{A1967} \label{def:Angle} Let $x$ and $y$ be two nonzero vectors in $\mathbb{R}^{n}$, where $n \geq 1$. Then the \emph{angle} $\theta$ between $x$ and $y$ is defined by \begin{align*} \theta =\arccos \frac{\innp{x,y}}{\norm{x}\norm{y}}, \end{align*} where $\arccos \colon [-1,1] \to [0,\pi]$. \end{definition} \begin{remark} If $x$ and $y$ are two nonzero vectors in $\mathbb{R}^{n}$, where $n \geq 1$, then \begin{align*} \innp{x,y}=\norm{x}\norm{y}\cos \theta, \end{align*} where $\theta$ is the angle between $x$ and $y$. \end{remark} \begin{fact} {\rm \cite[page~485]{A1967}} \label{fact:CroProProp:crosproGeome} Let $x$ and $y$ be two nonzero vectors in $\mathbb{R}^{3}$, and let $\theta$ be the angle between $x$ and $y$. 
Then \begin{align*} \norm{x \times y}=\norm{x}\norm{y}\sin \theta =~\text{the area of the parallelogram determined by $x$ and $y$}. \end{align*} \end{fact} Now we are ready for the expression of the circumcenter and circumradius by cross product. \begin{theorem} \label{thm:CrosCircum} Suppose that $\mathcal{H}=\mathbb{R}^{3}$, that $x,y,z$ are affinely independent, and that $S=\{x,y,z \} $. Set $a=y-x$, and $b=z-x$ and let the angle between $a$ and $b$, defined in \cref{def:Angle}, be $\theta$. Then \begin{enumerate} \item \label{thm:CrosCircum:i} $\CCO(S)=x+\frac{(\norm{a}^{2}b-\norm{b}^{2}a )\times (a \times b)}{2 \norm{a \times b}^{2}}$. \item {\rm\cite[1.54]{C1969}} \label{thm:CrosCircum:ii} $ \CRO(S)=\frac{\norm{a}\norm{b}\norm{a-b}}{2 \norm{a \times b}} =\frac{\norm{a-b}}{2 \sin \theta}$. \end{enumerate} \end{theorem} \begin{proof} \cref{thm:CrosCircum:i}: Using the formula of circumcenter in \cref{thm:unique:LinIndpPformula}, we have \begin{align*} \CCO(S)&=x+\frac{1}{2} \begin{pmatrix} y-x &z-x \end{pmatrix}\begin{pmatrix} \norm{y-x}^{2} & \innp{y-x, z-x} \\ \innp{z-x,y-x} & \norm{z-x}^{2} \\ \end{pmatrix} ^{-1} \begin{pmatrix} \norm{y-x}^{2} \\ \norm{z-x}^{2} \\ \end{pmatrix} \\ &=x+\frac{1}{2} \begin{pmatrix} a& b \end{pmatrix} \begin{pmatrix} \norm{a}^{2} & \innp{a,b} \\ \innp{b,a} & \norm{b}^{2} \\ \end{pmatrix} ^{-1} \begin{pmatrix} \norm{a}^{2} \\ \norm{b}^{2} \\ \end{pmatrix} \\ &=x+ \frac{1}{2( \norm{a}^{2} \norm{b}^{2}-\innp{a,b}^{2})} \begin{pmatrix} a& b \end{pmatrix} \begin{pmatrix} \norm{b}^{2} & -\innp{a,b} \\ -\innp{b,a} & \norm{a}^{2} \\ \end{pmatrix} \begin{pmatrix} \norm{a}^{2} \\ \norm{b}^{2} \\ \end{pmatrix} \\ &=x+ \frac{1}{2( \norm{a}^{2} \norm{b}^{2}-\innp{a,b}^{2})} \begin{pmatrix} a& b \end{pmatrix} \begin{pmatrix} \norm{a}^{2} \norm{b}^{2} - \norm{b}^{2} \innp{a,b} \\ \norm{a}^{2} \norm{b}^{2} - \norm{a}^{2}\innp{a,b} \\ \end{pmatrix} \\ &=x+ \frac{ ( \norm{a}^{2} \norm{b}^{2} - \norm{b}^{2} \innp{a,b} )a + ( \norm{a}^{2} 
\norm{b}^{2} - \norm{a}^{2}\innp{a,b} )b }{2( \norm{a}^{2} \norm{b}^{2}-\innp{a,b}^{2})}\\ &=x+ \frac{ \innp{\norm{a}^{2}b-\norm{b}^{2}a, b }a - \innp{\norm{a}^{2}b-\norm{b}^{2}a,a}b }{2( \norm{a}^{2} \norm{b}^{2}-\innp{a,b}^{2})}. \end{align*} Using the \cref{lem:CrosProdProper} \cref{fact:CroProProp:TripCroPro} and \cref{lem:CrosProdProper:Lagrang}, we get \begin{align*} \CCO(S)=x+\frac{(\norm{a}^{2}b-\norm{b}^{2}a )\times (a \times b)}{2 \norm{a \times b}^{2}}. \end{align*} \cref{thm:CrosCircum:ii}: By \cref{defn:Circumcenter}, we have \begin{align} \label{eq:rem:CircuRadius:r} \CRO(S)=\norm{\CCO(S)-x}=\Norm{ \frac{(\norm{a}^{2}b-\norm{b}^{2}a )\times (a \times b)}{2 \norm{a \times b}^{2}} }. \end{align} Using \cref{lem:CrosProdProper}\cref{lem:CrosProdProper:Lagrang} and \cref{lem:CrosProdProper}\cref{lem:CrosProdProper:PerpenSpanSpace}, we obtain \begin{align} \label{eq:radius:Numer} \Norm{(\norm{a}^{2}b-\norm{b}^{2}a )\times (a \times b)} & = \Big( \Norm{ \norm{a}^{2}b-\norm{b}^{2}a }^{2} \norm{a \times b}^{2} - \innp{\norm{a}^{2}b-\norm{b}^{2}a, a \times b}^{2} \Big)^{\frac{1}{2}}\nonumber\\ & = \Norm{\norm{a}^{2}b-\norm{b}^{2}a} \norm{a \times b}. \end{align} In addition, by \cref{note:AffinIndpDetermNonZero}, since $\norm{a} \neq 0$, $\norm{b} \neq 0$, thus \begin{align} \label{eq:radius:u} \Norm{ \norm{a}^{2}b-\norm{b}^{2}a} = \norm{a} \norm{b} \NNorm{ \frac{\norm{a}}{\norm{b}} b- \frac{\norm{b}}{\norm{a}} a}. \end{align} Now \begin{align} \label{eq:redius:ThirdPart} \NNorm{ \frac{\norm{a}}{\norm{b}} b- \frac{\norm{b}}{\norm{a}} a }^{2} & = \NNorm{\frac{\norm{a}}{\norm{b}} b }^{2} - 2 \IInnp{\frac{\norm{a}}{\norm{b}} b, \frac{\norm{b}}{\norm{a}} a } + \NNorm{\frac{\norm{b}}{\norm{a}} a}^{2}\nonumber\\ & = \norm{a}^{2} - 2 \innp{b,a} + \norm{b}^{2}\nonumber\\ & = \norm{a-b}^{2}. 
\end{align} Upon combining \cref{eq:radius:Numer}, \cref{eq:radius:u} and \cref{eq:redius:ThirdPart}, we obtain \begin{align*} \Norm{(\norm{a}^{2}b-\norm{b}^{2}a )\times (a \times b)} =\norm{a}\norm{b}\norm{a-b}\norm{a \times b}. \end{align*} Hence \cref{eq:rem:CircuRadius:r} yields \begin{align*} \CRO(S) & = \frac{1}{2 \norm{a \times b}^{2}} \Norm{(\norm{a}^{2}b-\norm{b}^{2}a )\times (a \times b)} \\ & = \frac{1}{2 \norm{a \times b}^{2}} \norm{a}\norm{b}\norm{a-b}\norm{a \times b}\\ & = \frac{\norm{a}\norm{b}\norm{a-b}}{2 \norm{a \times b}} .\\ \end{align*} By \cref{fact:CroProProp:crosproGeome}, we know $\norm{a \times b}= \norm{a}\norm{b} \sin \theta$. Thus, we obtain \begin{align*} \CRO(S)=\frac{\norm{a}\norm{b}\norm{a-b}}{2 \norm{a \times b}} =\frac{\norm{a-b}}{2 \sin \theta} \end{align*} and the proof is complete. \end{proof} \begin{fact} {\rm \cite[Theorem I]{M1983}} \label{fact:CroProDef37} Suppose that $n \geq 3$, and a cross product is defined which assigns to any two vectors $v, w \in \mathbb{R}^{n}$ a vector $v \times w \in \mathbb{R}^{n}$ such that the following three properties hold: \begin{enumerate} \item \label{fact:CroProDef37:bilinear} $v \times w$ is a bilinear function of $v$ and $w$. \item \label{fact:CroProDef37:perp} The vector $v \times w$ is perpendicular to both $v$ and $w$. \item \label{fact:CroProDef37:norm} $ \norm{v \times w}^{2}=\norm{v}^{2}\norm{w}^{2} -\innp{v,w}^{2}$. \end{enumerate} Then $n=3$ or $7$. \end{fact} \begin{remark} In view of \cref{fact:CroProDef37} and our proof of \cref{thm:CrosCircum}, we cannot generalize the latter result to a general Hilbert space $\mathcal{H}$ --- unless the dimension of $\mathcal{H}$ is either 3 or 7. 
\end{remark} \section*{Acknowledgments} HHB and XW were partially supported by NSERC Discovery Grants. \bibliographystyle{abbrv}
\section{Introduction} \vskip 2mm In this article, we are mainly concerned with the linear programming problem with small noisy data as follows: \begin{align} & \min_{x \in \Re^{n}} c^{T}x, \hskip 2mm \text{subject to} \; Ax = b, \; x \ge 0, \label{LPND} \end{align} where $c$ and $x$ are vectors in $\Re^{n}$, $b$ is a vector in $\Re^{m}$, and $A$ is an $m \times n$ matrix. For the problem \eqref{LPND}, there are many efficient methods to solve it such as the simplex methods \cite{Pan2010,ZYL2013}, the interior-point methods \cite{FMW2007,Gonzaga1992,NW1999,Wright1997,YTM1994,Zhang1998} and the continuous methods \cite{AM1991,CL2011,Monteiro1991,Liao2014}. Those methods all assume that the constraints of problem \eqref{LPND} are consistent, i.e. $\operatorname{rank}(A,\, b) = \operatorname{rank}(A)$. For the consistent system of redundant constraints, references \cite{AA1995,Andersen1995,MS2003} provided a few preprocessing strategies which are widely used in both academic and commercial linear programming solvers. \vskip 2mm However, for a real-world problem, since it may include redundant constraints and measurement errors, the matrix $A$ may be rank-deficient and the right-hand-side vector $b$ may contain small noise. Consequently, these may lead to an inconsistent system of constraints \cite{AZ2008,CI2012,LLS2020}. On the other hand, the constraints of the original real-world problem are intrinsically consistent. Therefore, we consider the least-squares approximation of the inconsistent constraints in the linear programming problem based on the QR decomposition with column pivoting. Then, according to the first-order KKT conditions of the linear programming problem, we convert the processed problem into an equivalent system of nonlinear equations with nonnegative constraints.
Based on the system of nonlinear equations with nonnegative constraints, we consider a special continuous Newton flow with nonnegative constraints, which has a nonnegative steady-state solution for any nonnegative initial point. Finally, we consider a primal-dual path-following method and an adaptive trust-region updating strategy to follow the trajectory of the continuous Newton flow. Thus, we obtain an optimal solution of the original linear programming problem. \vskip 2mm The rest of this article is organized as follows. In the next section, we consider the primal-dual path-following method and the adaptive trust-region updating strategy for the linear programming problem. In section 3, we analyze the global convergence of the new method when the initial point is strictly primal-dual feasible. In section 4, for rank-deficient problems with or without small noise, we compare the new method with two other popular interior-point methods, i.e. the traditional path-following method (pathfollow.m on p. 210 of \cite{FMW2007}) and the predictor-corrector algorithm (the built-in subroutine linprog.m of the MATLAB environment \cite{MATLAB,Mehrotra1992,Zhang1998}). Numerical results show that the new method is more robust than the other two methods for rank-deficient problems with small noisy data. Finally, some discussions are given in section 5. $\|\cdot\|$ denotes the Euclidean vector norm or its induced matrix norm throughout the paper. \vskip 2mm \section{Primal-dual path-following methods and the trust-region updating strategy} \subsection{The continuous Newton flow} \label{SUBSECNF} For the linear programming problem \eqref{LPND}, it is well known that $x^{\ast}$ is an optimal solution if and only if it satisfies the following Karush-Kuhn-Tucker conditions (pp.
396-397, \cite{NW1999}): \begin{align} Ax - b = 0, \; A^{T}y + s - c = 0, \; XSe = 0, \; (x, \, s) \ge 0, \label{KKTLP} \end{align} where \begin{align} X = diag(x), \; S = diag(s), \; \text{and}\; e = (1, \, \ldots, \, 1)^{T}. \label{DIAGXS} \end{align} For convenience, we rewrite the optimality condition \eqref{KKTLP} as the following nonlinear system of equations with nonnegative constraints: \begin{align} F(z) = \begin{bmatrix} Ax - b \\ A^{T}y + s - c \\ XS e \end{bmatrix} = 0, \; (x, \, s) \ge 0, \; \text{and} \; z = (x, \, y, \, s). \label{NLEQNCON} \end{align} \vskip 2mm It is not difficult to know that the Jacobian matrix $J(z)$ of $F(z)$ has the following form: \begin{align} J(z) = \begin{bmatrix} A & 0 & 0 \\ 0 & A^{T} & I \\ S & 0 & X \end{bmatrix}. \label{JZMATR} \end{align} From the third block $XSe = 0$ of equation \eqref{NLEQNCON}, we know that $x_{i} = 0$ or $s_{i} = 0 \, (i = 1:n) $. Thus, the Jacobian matrix $J(z)$ of equation \eqref{JZMATR} may be singular, which leads to numerical difficulties near the solution of the nonlinear system \eqref{NLEQNCON} for the Newton's method or its variants. In order to overcome this difficulty, we consider its perturbed system \cite{AG2003,Tanabe1988} as follows: \begin{align} F_{\mu}(z) = F(z) - \begin{bmatrix} 0 \\ 0 \\ \mu e \end{bmatrix} = 0, \; (x, \, s) > 0, \; \mu > 0 \; \text{and} \; z = (x, \, y, \, s). \label{PNLEX} \end{align} The solution $z(\mu)$ of the perturbed system \eqref{PNLEX} defines the primal-dual central path, and $z(\mu)$ approximates the solution $z^{\ast}$ of the nonlinear system \eqref{NLEQNCON} when $\mu$ tends to zero \cite{FMW2007,NW1999,Wright1997,Ye1997}. \vskip 2mm We define the strictly feasible region $\mathbb{F}^{0}$ of the problem \eqref{LPND} as \begin{align} \mathbb{F}^{0} = \left\{(x, \, y, \, s)| Ax = b, \; A^{T}y + s = c, \; (x, \, s) > 0 \right\}. 
\label{SPDFR} \end{align} Then, when there is a strictly feasible interior point $(\bar{x}, \, \bar{y}, \, \bar{s}) \in \mathbb{F}^{0}$ and the rank of matrix $A$ is full, the perturbed system \eqref{PNLEX} has a unique solution (Theorem 2.8, p. 39, \cite{Wright1997}). The existence of its solution can be derived from the implicit function theorem \cite{Doedel2007} and the uniqueness of its solution can be proved by considering the strict convexity of the following penalty problem and the KKT conditions of its optimal solution \cite{FM1990}: \begin{align} \min c^{T}x - \mu \sum_{i=1}^{n} \log (x_i) \; \text{subject to} \; Ax = b, \label{LOGPENFUN} \end{align} where $\mu$ is a positive parameter. \vskip 2mm According to the duality theorem of linear programming (Theorem 13.1, pp. 368-369, \cite{NW1999}), for any primal-dual feasible solution $(x, \, y, \, s)$, we have \begin{align} c^{T}x \ge b^{T}y^{\ast} = c^{T}x^{\ast} \ge b^{T}y, \label{DUATH} \end{align} where the triple $(x^{\ast}, \, y^{\ast}, \, s^{\ast})$ is a primal-dual optimal solution. Moreover, when the positive number $\mu$ is small, the solution $z^{\ast}(\mu)$ of the perturbed system \eqref{PNLEX} is an approximate solution of the nonlinear system \eqref{NLEQNCON}. Consequently, from the duality theorem \eqref{DUATH}, we know that $x^{\ast}(\mu)$ is an approximation of the optimal solution of the original linear programming problem \eqref{LPND}. It can be proved as follows. Since $z^{\ast}(\mu)$ is primal-dual feasible, from inequality \eqref{DUATH}, we have \begin{align} c^{T}x^{\ast}(\mu) \ge b^{T}y^{\ast} = c^{T}x^{\ast} \ge b^{T}y^{\ast}(\mu) \label{WDUATH} \end{align} and \begin{align} 0 \le (x^{\ast}(\mu))^{T}s^{\ast}(\mu) = c^{T}x^{\ast}(\mu) - b^{T}y^{\ast}(\mu). \label{DUALGAP} \end{align} From equations \eqref{WDUATH}-\eqref{DUALGAP}, we obtain \begin{align} |c^{T}x^{\ast}(\mu) - c^{T}x^{\ast}| \le c^{T}x^{\ast}(\mu) - b^{T}y^{\ast}(\mu) = (x^{\ast}(\mu))^{T} s^{\ast}(\mu) = n \mu.
\end{align} Therefore, $x^{\ast}(\mu)$ is an approximation of the optimal solution of the original linear programming problem \eqref{LPND}. \vskip 2mm If the damped Newton method is applied to the perturbed system \eqref{PNLEX} \cite{DS2009,NW1999}, we have \begin{align} z_{k+1} = z_{k} - \alpha_{k} J(z_{k})^{-1} F_{\mu}(z_{k}), \label{NEWTON} \end{align} where $J(z_{k})$ is the Jacobian matrix of $F_{\mu}(z)$. We regard $z_{k+1} = z(t_{k} + \alpha_{k}), \; z_{k} = z(t_{k})$ and let $\alpha_{k} \to 0$, then we obtain the continuous Newton flow with the constraints \cite{AS2015,Branin1972,Davidenko1953,Tanabe1979,LXL2020} of the perturbed system \eqref{PNLEX} as follows : \begin{align} \frac{dz(t)}{dt} = - J(z)^{-1}F_{\mu}(z), \hskip 2mm z = (x, \, y, \, s) \; \text{and} \; (x, \, s) > 0. \label{NEWTONFLOW} \end{align} Actually, if we apply an iteration with the explicit Euler method \cite{SGT2003,YFL1987} for the continuous Newton flow \eqref{NEWTONFLOW}, we obtain the damped Newton method \eqref{NEWTON}. \vskip 2mm Since the Jacobian matrix $J(z) = F'_{\mu}(z)$ may be singular, we reformulate the continuous Newton flow \eqref{NEWTONFLOW} as the following general formula \cite{Branin1972,Tanabe1979}: \begin{align} -J(z)\frac{dz(t)}{dt} = F_{\mu}(z), \hskip 2mm z = (x, \, y, \, s) \; \text{and} \; (x, \, s) > 0. \label{DAEFLOW} \end{align} The continuous Newton flow \eqref{DAEFLOW} has some nice properties. We state one of them as the following property \ref{PRODAEFLOW} \cite{Branin1972,LXL2020,Tanabe1979}. \vskip 2mm \begin{property} (Branin \cite{Branin1972} and Tanabe \cite{Tanabe1979}) \label{PRODAEFLOW} Assume that $z(t)$ is the solution of the continuous Newton flow \eqref{DAEFLOW}, then $f(z(t)) = \|F_{\mu}(z)\|^{2}$ converges to zero when $t \to \infty$. That is to say, for every limit point $z^{\ast}$ of $z(t)$, it is also a solution of the perturbed system \eqref{PNLEX}. 
Furthermore, every element $F_{\mu}^{i}(z)$ of $F_{\mu}(z)$ has the same convergence rate $exp(-t)$ and $z(t)$ can not converge to the solution $z^{\ast}$ of the perturbed system \eqref{PNLEX} on the finite interval when the initial point $z_{0}$ is not a solution of the perturbed system \eqref{PNLEX}. \end{property} \proof Assume that $z(t)$ is the solution of the continuous Newton flow \eqref{DAEFLOW}, then we have \begin{align} \frac{d}{dt} \left(exp(t)F_{\mu}(z)\right) = exp(t) J(z) \frac{dz(t)}{dt} + exp(t) F_{\mu}(z) = 0. \nonumber \end{align} Consequently, we obtain \begin{align} F_{\mu}(z(t)) = F_{\mu}(z_0)exp(-t). \label{FUNPAR} \end{align} From equation \eqref{FUNPAR}, it is not difficult to know that every element $F_{\mu}^{i}(z)$ of $F_{\mu}(z)$ converges to zero with the linear convergence rate $exp(-t)$ when $t \to \infty$. Thus, if the solution $z(t)$ of the continuous Newton flow \eqref{DAEFLOW} belongs to a compact set, it has a limit point $z^{\ast}$ when $t \to \infty$, and this limit point $z^{\ast}$ is also a solution of the perturbed system \eqref{PNLEX}. \vskip 2mm If we assume that the solution $z(t)$ of the continuous Newton flow \eqref{DAEFLOW} converges to the solution $z^{\ast}$ of the perturbed system \eqref{PNLEX} on the finite interval $(0, \, T]$, from equation \eqref{FUNPAR}, we have \begin{align} F_{\mu}(z^{\ast}) = F_{\mu}(z_{0}) exp(-T). \label{FLIMT} \end{align} Since $z^{\ast}$ is a solution of the perturbed system \eqref{PNLEX}, we have $F_{\mu}(z^{\ast}) = 0$. By substituting it into equation \eqref{FLIMT}, we obtain \begin{align} F_{\mu}(z_{0}) = 0. \nonumber \end{align} Thus, it contradicts the assumption that $z_{0}$ is not a solution of the perturbed system \eqref{PNLEX}. Consequently, the solution $z(t)$ of the continuous Newton flow \eqref{DAEFLOW} can not converge to the solution $z^{\ast}$ of the perturbed system \eqref{PNLEX} on the finite interval. 
\qed \vskip 2mm \begin{remark} \blue{The inverse $J(z)^{-1}$ of the Jacobian matrix $J(z)$ can be regarded as the preconditioner of $F_{\mu}(z)$ such that every element $z^{i}(t)$ of $z(t)$ has roughly the same convergence rate and it mitigates the stiffness of the ODE \eqref{DAEFLOW} \cite{LXL2020}. This property is very useful since it enables us to adopt an explicit ODE method to follow the trajectory of the Newton flow \eqref{DAEFLOW} efficiently}. \end{remark} \vskip 2mm \subsection{The primal-dual path-following method} \label{SPPFM} \vskip 2mm From property \ref{PRODAEFLOW}, we know that the continuous Newton flow \eqref{DAEFLOW} has the nice global convergence property. However, when the Jacobian matrix $J(z)$ is singular or nearly singular, the ODE \eqref{DAEFLOW} is a system of differential-algebraic equations \cite{AP1998,BCP1996,HW1996} and its trajectory cannot be efficiently followed by a general ODE method such as the backward differentiation formulas (the built-in subroutine ode15s.m of the MATLAB environment \cite{MATLAB,SGT2003}). Thus, we need to construct a special method to solve this problem. Furthermore, we expect that the new method has global convergence like the homotopy continuation methods \cite{AG2003,OR2000} and a fast convergence rate like the traditional optimization methods. In order to achieve these two aims, we consider the continuation Newton method and the trust-region updating strategy for problem \eqref{DAEFLOW}. \vskip 2mm We apply the implicit Euler method to the continuous Newton flow \eqref{DAEFLOW} \cite{AP1998,BCP1996}, then we obtain \begin{align} J(z_{k+1}) \frac{z_{k+1}-z_{k}}{\Delta t_k} = -F_{\mu_{k+1}}(z_{k+1}). \label{IMEDAE} \end{align} Since the system \eqref{IMEDAE} is nonlinear and cannot be solved directly, we seek an explicit approximation formula for it.
To avoid solving the nonlinear system of equations, we replace $J(z_{k+1})$ with $J(z_{k})$ and substitute $F_{\mu_{k+1}}(z_{k+1})$ with its linear approximation $F_{\mu_{k+1}}(z_{k})+J(z_{k})(z_{k+1}-z_{k})$ into equation \eqref{IMEDAE}. Then, we obtain a variant of the damped Newton method: \begin{align} z_{k+1} = z_{k} - \frac{\Delta t_k}{1+\Delta t_k} J(z_k)^{-1}F_{\mu_{k+1}}(z_k). \label{SMEDAE} \end{align} \vskip 2mm \begin{remark} If we let $\alpha_{k} = \Delta t_k/(1+\Delta t_k)$ in equation \eqref{NEWTON}, we obtain the method \eqref{SMEDAE}. However, from the viewpoint of the ODE method, they are different. The damped Newton method \eqref{NEWTON} is derived from the explicit Euler method applied to the continuous Newton flow \eqref{DAEFLOW}. Its time-stepping size $\alpha_k$ is restricted by the numerical stability \cite{HW1996,SGT2003,YFL1987}. That is to say, for the linear test equation $dx/dt = - \lambda x$, its time-stepping size $\alpha_{k}$ is restricted by the stable region $|1-\lambda \alpha_{k}| \le 1$. Therefore, a large time-stepping size $\alpha_{k}$ cannot be adopted in the steady-state phase. The method \eqref{SMEDAE} is derived from \blue{the implicit Euler method applied to the continuous Newton flow \eqref{DAEFLOW} and the linear approximation of $F_{\mu_{k+1}}(z_{k+1})$}, and its time-stepping size $\Delta t_k$ is not restricted by the numerical stability for the linear test equation. Therefore, a large time-stepping size $\Delta t_{k}$ can be adopted in the steady-state phase, and the method \eqref{SMEDAE} mimics the Newton method. Consequently, it has a fast convergence rate near the solution $z^{\ast}$ of the nonlinear system \eqref{NLEQNCON}.
Most importantly, the new damping factor $\alpha_{k} = \Delta t_{k}/(\Delta t_{k} + 1)$ is well suited to the trust-region updating strategy for adaptively adjusting the time-stepping size $\Delta t_{k}$, so that the continuation method \eqref{SMEDAE} accurately tracks the trajectory of the continuous Newton flow \eqref{DAEFLOW} in the transient-state phase and achieves a fast convergence rate in the steady-state phase. \end{remark} \vskip 2mm We set the parameter $\mu_{k}$ as the average of the residual sum: \begin{align} \mu_{k} = \frac{\|Ax_{k} - b\|_{1} + \|A^{T}y_{k} + s_{k} - c\|_{1} + x_{k}^{T}s_{k}}{n}. \label{UK1DEF} \end{align} This selection of $\mu_{k}$ differs slightly from the traditional selection $\mu_{k} = x_{k}^{T}s_{k}/n$ \cite{FMW2007,NW1999,Wright1997}. According to our numerical experiments, it improves the robustness of the path-following method. In equation \eqref{SMEDAE}, $\mu_{k+1}$ is approximated by $\sigma_{k} \mu_{k}$, where the penalty coefficient $\sigma_{k}$ is simply selected as follows: \begin{align} \sigma_{k} = \begin{cases} 0.05, \; \text{when} \; \mu_{k} > 0.05, \\ \mu_{k}, \; \text{when} \; \mu_{k} \le 0.05. \end{cases} \label{SIGMA} \end{align} Thus, from equations \eqref{SMEDAE}-\eqref{SIGMA}, we obtain the following iteration scheme: \begin{align} \begin{bmatrix} A & 0 & 0\\ 0 & A^{T} & I \\ S_{k} & 0 & X_{k} \end{bmatrix} \begin{bmatrix} \Delta x_{k} \\ \Delta y_{k} \\ \Delta s_{k} \end{bmatrix} = - \begin{bmatrix} Ax_{k} -b \\ A^{T}y_{k} + s_{k} - c \\ X_{k}S_{k}e - \sigma_{k}\mu_{k}e \end{bmatrix} = - F_{\sigma_{k} \mu_{k}}(z_k) \label{DELTXYSK} \end{align} and \begin{align} \left(x_{k+1}, \, y_{k+1}, \, s_{k+1} \right) = \left(x_{k}, \, y_{k}, \, s_{k}\right) + \frac{\Delta t_{k}}{1 + \Delta t_{k}} \left(\Delta x_{k}, \, \Delta y_{k}, \, \Delta s_{k}\right), \label{XYSK1} \end{align} where $F_{\mu}(z)$ is defined by equation \eqref{PNLEX}.
\vskip 2mm When matrix $A$ has full row rank, the linear system \eqref{DELTXYSK} can be solved via the following three subsystems: \begin{align} AX_{k}S_{k}^{-1}A^{T} \Delta y_{k} & = - \left(r_{p}^{k} + AS_{k}^{-1}\left(X_{k}r_{d}^{k} - r_{c}^{k} \right)\right), \label{DELTAYK} \\ \Delta s_{k} & = - r_{d}^{k} - A^{T} \Delta y_{k}, \label{DELTASK} \\ \Delta x_{k} & = -S_{k}^{-1}\left(X_{k}S_{k}e + X_{k}\Delta s_{k} - \sigma_{k} \mu_{k}e\right), \label{DELTAXK} \end{align} where the primal residual $r_{p}^{k}$, the dual residual $r_{d}^{k}$ and the complementarity residual $r_{c}^{k}$ are respectively defined by \begin{align} r_{p}^{k} & = Ax_{k} - b, \label{PRESK} \\ r_{d}^{k} & = A^{T}y_{k} + s_{k} - c, \label{DRESK} \\ r_{c}^{k} & = S_{k}X_{k}e - \sigma_{k} \mu_{k}e. \label{DRESXK} \end{align} \vskip 2mm The matrix $AX_{k}S_{k}^{-1}A^{T}$ becomes very ill-conditioned when $(x_{k}, \, y_{k}, \, s_{k})$ is close to the solution $(x^{\ast}, \, y^{\ast}, \, s^{\ast})$ of the nonlinear system \eqref{NLEQNCON}. Thus, the Cholesky factorization method may fail to solve the linear system \eqref{DELTAYK} for large-scale problems. Therefore, we use the QR decomposition (pp. 247-248, \cite{GV2013}) to solve it as follows: \begin{align} & D_{k}A^{T} = Q_{k}R_{k}, \; D_{k} = \text{diag}(\text{sqrt}(x_{k}./s_{k})), \nonumber \\ & R_{k}^{T} \Delta y_{k}^{m} = - \left(r_{p}^{k} + AS_{k}^{-1}\left(X_{k}r_{d}^{k} - r_{c}^{k} \right)\right), \nonumber \\ & R_{k} \Delta y_{k} = \Delta y_{k}^{m}, \label{DELTAYKQR} \end{align} where $Q_{k} \in \Re^{n \times m}$ satisfies $Q_{k}^{T}Q_{k} = I$ and $R_{k} \in \Re^{m \times m}$ is an upper triangular matrix with full rank. \vskip 2mm \subsection{The trust-region updating strategy} \vskip 2mm Another issue is how to adaptively adjust the time-stepping size $\Delta t_k$ at every iteration.
There is a popular way to control the time-stepping size based on the trust-region updating strategy \cite{CGT2000,Deuflhard2004,Higham1999,Luo2010,Luo2012,LXL2020,Yuan2015}. Its main idea is that the time-stepping size $\Delta t_{k+1}$ will be enlarged when the linear model $F_{\sigma_{k} \mu_{k}}(z_k) + J(z_k)\Delta z_{k}$ approximates $F_{\sigma_{k} \mu_{k}}(z_{k}+\Delta z_{k})$ well, and $\Delta t_{k+1}$ will be reduced when the approximation is poor. We enlarge or reduce the time-stepping size $\Delta t_k$ at every iteration according to the following ratio: \begin{align} \rho_k = \frac{\|F_{\sigma_{k} \mu_{k}}(z_{k})\|-\|F_{\sigma_{k} \mu_{k}}(z_{k}+\Delta z_{k})\|} {\|F_{\sigma_{k} \mu_{k}}(z_{k})\| - \|F_{\sigma_{k} \mu_{k}}(z_{k})+J(z_{k})\Delta z_{k}\|}. \label{RHOK} \end{align} A particular adjustment strategy is given as follows: \begin{align} \Delta t_{k+1} = \begin{cases} 2 \Delta t_k, \; &{\text{if} \; 0 \leq \left|1- \rho_k \right| \le \eta_1 \; \text{and} \; (x_{k+1}, \, s_{k+1}) > 0,} \\ \Delta t_k, \; &{\text{else if} \; \eta_1 < \left|1 - \rho_k \right| \le \eta_2 \; \text{and} \; (x_{k+1}, \, s_{k+1}) > 0, }\\ \frac{1}{2} \Delta t_k, \; &{\text{otherwise}, } \end{cases} \label{TSK1} \end{align} where the constants are selected as $\eta_1 = 0.25, \; \eta_2 = 0.75$, according to our numerical experiments. When $\rho_{k} \ge \eta_{a}$ and $(x_{k+1}, \, s_{k+1}) > 0$, we accept the trial step and set \begin{align} (x_{k+1}, \, y_{k+1}, \, s_{k+1}) = (x_{k}, \, y_{k}, \, s_{k}) + \frac{\Delta t_{k}}{1+\Delta t_{k}} (\Delta x_{k}, \, \Delta y_{k}, \, \Delta s_{k}), \label{ACCPXK1} \end{align} where $\eta_{a}$ is a small positive number such as $\eta_{a} = 1.0\times 10^{-6}$. Otherwise, we discard the trial step and set \begin{align} (x_{k+1}, \, y_{k+1}, \, s_{k+1}) = (x_{k}, \, y_{k}, \, s_{k}).
\label{NOACXK1} \end{align} \vskip 2mm \begin{remark} This new time-stepping size selection based on the trust-region updating strategy has some advantages over the traditional line search strategy. If we use the line search strategy and the damped Newton method \eqref{NEWTON} to track the trajectory $z(t)$ of the continuous Newton flow \eqref{DAEFLOW}, then, in order to achieve a fast convergence rate in the steady-state phase, the time-stepping size $\alpha_{k}$ of the damped Newton method is tried from 1 and repeatedly halved at every iteration. Since the linear model $F_{\sigma_{k} \mu_{k}}(z_{k}) + J(z_{k})\Delta z_{k}$ may not approximate $F_{\sigma_{k}\mu_{k}}(z_{k}+\Delta z_{k})$ well in the transient-state phase, the time-stepping size $\alpha_{k}$ will be small. Consequently, the line search strategy wastes unnecessary trial steps in the transient-state phase. The selection of the time-stepping size $\Delta t_{k}$ based on the trust-region updating strategy \eqref{RHOK}-\eqref{TSK1} overcomes this shortcoming. \end{remark} \subsection{The treatment of rank-deficient problems} \vskip 2mm For a real-world problem, the rank of matrix $A$ may be deficient, and the constraints may even be inconsistent when the right-hand-side vector $b$ is contaminated by small noise \cite{LLS2020}, although the constraints of the original problem are intrinsically consistent. For a rank-deficient problem with consistent constraints, there are some efficient pre-solving methods to eliminate the redundant constraints in references \cite{AA1995,Andersen1995,MS2003}. Here, in order to handle an inconsistent system of constraints, we consider the following least-squares approximation problem: \begin{align} \min_{x \in \Re^{n}} \; \|Ax - b\|^{2}. \label{LLSP} \end{align} By solving problem \eqref{LLSP}, we obtain a consistent system of constraints. \vskip 2mm Firstly, we use the QR factorization with column pivoting (pp.
276-278, \cite{GV2013}) to factor $A$ into a product of an orthogonal matrix $Q \in \Re^{m \times m}$ and an upper triangular matrix $R \in \Re^{m \times n}$: \begin{align} AP = QR = \begin{bmatrix} Q_{1} | Q_{2} \end{bmatrix} \begin{bmatrix} R_{1} \\ 0 \end{bmatrix} = Q_{1}R_{1}, \label{ATQR} \end{align} where $r = \text{rank}(A), \; Q_{1} = Q(1:m, \, 1:r), \; R_{1} = R(1:r, \, 1:n)$, and $P \in \Re^{n \times n}$ is a permutation matrix. Then, from equation \eqref{ATQR}, we know that problem \eqref{LLSP} is equivalent to the following problem: \begin{align} \min_{x \in \Re^{n}} \; \left\| R_{1}P^{T}x - Q_{1}^{T}b\right\|^{2}. \label{BAPROB} \end{align} By solving problem \eqref{BAPROB}, we obtain its solution $x$ as follows: \begin{align} R_{1}\tilde{x} = b_{r}, \; x = P\tilde{x}, \label{APPCON} \end{align} where $b_{r} = Q_{1}^{T}b$. \vskip 2mm Therefore, when the constraints of problem \eqref{LPND} are consistent, problem \eqref{LPND} is equivalent to the following linear programming problem: \begin{align} \min_{x \in \Re^{n}} \; c_{r}^{T} \tilde{x} \; \text{subject to} \; R_{1} \tilde{x} = b_{r}, \; \tilde{x} \ge 0, \label{LOLPND} \end{align} where $b_r = Q_{1}^{T}b$ and $c_{r} = P^{T}c$. When the constraints of problem \eqref{LPND} are inconsistent, the constraints of problem \eqref{LOLPND} are the least-squares approximation of the constraints of problem \eqref{LPND}. Consequently, in subsection \ref{SPPFM}, we replace matrix $A$, vector $b$ and vector $c$ with matrix $R_{1}$, vector $b_{r}$ and vector $c_{r}$, respectively; then the primal-dual path-following method can handle rank-deficient problems.
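A minimal sketch of this reduction is given below, using SciPy's column-pivoted QR (\texttt{scipy.linalg.qr} with \texttt{pivoting=True}). The function names, the numerical-rank tolerance, and the interface are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.linalg import qr

def reduce_lp(A, b, c, tol=1e-10):
    """Reduce min c^T x s.t. Ax = b, x >= 0 with a (possibly) rank-deficient
    A to a full-row-rank problem min c_r^T xt s.t. R1 xt = b_r, xt >= 0,
    via the column-pivoted QR factorization A P = Q R."""
    Q, R, piv = qr(A, pivoting=True)        # A[:, piv] = Q @ R
    diag = np.abs(np.diag(R))
    r = int(np.sum(diag > tol * diag[0]))   # numerical rank from pivoted R
    Q1, R1 = Q[:, :r], R[:r, :]
    b_r = Q1.T @ b                          # least-squares consistent rhs
    c_r = c[piv]                            # c_r = P^T c
    return R1, b_r, c_r, piv, Q1

def recover(xt, yt, st, piv, Q1):
    """Recover (x, y, s) of the original problem from the reduced solution:
    x = P xt, y = Q1 yt, s = P st."""
    x = np.empty_like(xt); s = np.empty_like(st)
    x[piv] = xt                             # apply the permutation P
    s[piv] = st
    return x, Q1 @ yt, s
```

When $b$ lies in the range of $A$ (consistent constraints), any $\tilde{x}$ with $R_{1}\tilde{x} = b_{r}$ recovers an $x$ with $Ax = b$ exactly; otherwise $R_{1}\tilde{x} = b_{r}$ is the least-squares surrogate.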
\vskip 2mm From the reduced linear programming problem \eqref{LOLPND}, we obtain its KKT conditions as follows: \begin{align} R_{1}\tilde{x} - b_{r} & = 0, \label{LOPFCON} \\ R_{1}^{T} \tilde{y} + \tilde{s} - c_{r} & = 0, \label{LODFCON} \\ \tilde{X}\tilde{S} e & = 0, \label{LOCCON} \\ (\tilde{x}, \, \tilde{s}) & \ge 0, \label{LONNCON} \end{align} where $\tilde{X} = \text{diag}(\tilde{x}), \; \tilde{S} = \text{diag}(\tilde{s})$, $b_{r} = Q_{1}^{T}b$ and $c_{r} = P^{T}c$. Thus, from the QR decomposition \eqref{ATQR} of matrix $A$ and equation \eqref{KKTLP}, we can recover the solution $(x, \, y, \, s)$ of equation \eqref{KKTLP} as follows: \begin{align} x = P\tilde{x}, \; y = Q_{1}\tilde{y}, \; s = P\tilde{s}, \label{RSOL} \end{align} where $(\tilde{x}, \, \tilde{y}, \, \tilde{s})$ is a solution of equations \eqref{LOPFCON}-\eqref{LONNCON}. \vskip 2mm \begin{remark} \blue{The preprocessing strategies for redundancy elimination in references \cite{AA1995,Andersen1995,MS2003} deal with empty rows and columns, row or column singletons, duplicate rows or columns, forcing and dominated constraints, and finding linear dependency based on the Gaussian elimination. Some of those techniques, such as those for empty rows and columns, row or column singletons, and duplicate rows or columns, can also be applied to an inconsistent system. However, the preprocessing strategies in references \cite{AA1995,Andersen1995,MS2003} cannot transform an inconsistent system into a consistent system. Thus, in order to handle an inconsistent system, we can replace the QR decomposition with the Gaussian elimination method after preprocessing strategies such as those for empty rows or columns, row or column singletons, and duplicate rows or columns.
Here, for simplicity, we directly use the QR decomposition as the preprocessing strategy and do not use the preprocessing strategies in references \cite{AA1995,Andersen1995,MS2003}.} \end{remark} According to the above discussions, we give the detailed description of the primal-dual path-following method with the trust-region updating strategy for the linear programming problem \eqref{LPND} in Algorithm \ref{PNMTRLP}. \vskip 2mm \begin{algorithm} \renewcommand{\algorithmicrequire}{\textbf{Input:}} \renewcommand{\algorithmicensure}{\textbf{Output:}} \caption{Primal-dual path-following method with the trust-region updating strategy for linear programming (the PFMTRLP method)} \label{PNMTRLP} \begin{algorithmic}[1] \REQUIRE ~~\\ matrix $A \in \Re^{m \times n}$, vectors $b \in \Re^{m}$ and $c \in \Re^{n}$ for the problem: $\min_{x \in \Re^{n}} c^{T}x \; \text{subject to} \; Ax = b, \; x \ge 0$. \ENSURE ~~ \\ the primal-dual optimal solution: $(solx, \, soly, \, sols)$. \\ the maximum error of the KKT conditions: $KKTError = \max \{\|A*solx -b\|_{\infty}, \, \|A^{T}*soly + sols -c \|_{\infty}, \, \|solx.*sols\|_{\infty}\}$. \STATE Initialize parameters: $\eta_{a} = 10^{-6}, \; \eta_1 = 0.25, \; \eta_2 = 0.75, \; \epsilon = 10^{-6}, \; \Delta t_0 = 0.9, \; \text{maxit} = 100$. \STATE Factor matrix $A$ by using the QR decomposition \eqref{ATQR}. Set $b_{r} = Q_{1}^{T}b$ and $c_{r} = P^{T}c$. \STATE Set $\text{bigM} = \max(\max(\text{abs}(R_{1})))$; $\text{bigM} = \max(\|b_{r}\|_{\infty}, \, \|c_{r}\|_{\infty}, \, \text{bigM})$. \STATE Initialize $x_0$ = bigMfac*bigM*ones(n, 1), $s_0 = x_0$, $y_0$ = zeros(r, 1). \STATE flag\_success\_trialstep = 1, $\text{itc} = 0, \; k = 0$. \WHILE {(itc $<$ maxit)} \IF{(flag\_success\_trialstep == 1)} \STATE Set itc = itc + 1. \STATE Compute $F_{k}^{1} = R_{1}x_k - b_{r}$, $F_{k}^{2} = R_{1}^{T}y_k + s_k - c_{r}$, $F_{k}^{3} = X_{k}s_k$, $\mu_{k} = (\|F_{k}^{1}\|_{1} + \|F_{k}^{2}\|_{1} + \|F_{k}^{3}\|_{1})/n$.
\STATE Compute $\text{Resk} = \|[F_{k}^{1}; F_{k}^{2}; F_{k}^{3}]\|_{\infty}$. \IF{($\text{Resk} < \epsilon$)} \STATE break; \ENDIF \STATE Set $\sigma_k = \min(0.05, \, \mu_k)$. \STATE Compute $F_{k}^{3} = F_{k}^{3} - \sigma_{k}\mu_{k}*\text{ones}(n,1)$. Set $F_k = [F_{k}^{1}; F_{k}^{2}; F_{k}^{3}]$. \\ \STATE By using the QR decomposition to solve \eqref{DELTAYKQR}, we obtain $\Delta y_{k}$. Compute $\Delta s_{k} = - F_{k}^{2}- R_{1}^{T}\Delta y_{k}$, and $\Delta x_{k} = -S_{k}^{-1}\left(F_{k}^{3} + X_{k}\Delta s_{k} \right). $ \ENDIF \STATE Set $(x_{k+1}, \, y_{k+1}, \, s_{k+1}) = (x_{k}, \, y_{k}, \, s_{k}) + \frac{\Delta t_{k}}{1 + \Delta t_{k}} (\Delta x_{k}, \, \Delta y_{k}, \, \Delta s_{k})$. \STATE Compute $F_{k+1}^{1} = R_{1}x_{k+1} - b_{r}$, $F_{k+1}^{2} = R_{1}^{T}y_{k+1} + s_{k+1} - c_{r}$, $F_{k+1}^{3} = X_{k+1}s_{k+1} - \sigma_{k}\mu_{k}*\text{ones}(n,1)$, $F_{k+1} = [F_{k+1}^{1};F_{k+1}^{2};F_{k+1}^{3}]$. \STATE Compute $LinAppF_{k+1} = [F_{k+1}^{1}; F_{k+1}^{2};(F_{k}^{3} + X_{k}(s_{k+1}-s_{k}) + S_{k}(x_{k+1}-x_{k}))]$. \STATE Compute the ratio $\rho_{k} = (\|F_{k}\|-\|F_{k+1}\|)/(\|F_{k}\|-\|LinAppF_{k+1}\|)$. \IF{($(|\rho_{k} - 1| \le \eta_{1}) \;\&\&\; ((x_{k+1}, \, s_{k+1}) > 0)$)} \STATE Set $\Delta t_{k+1} = 2\Delta t_{k}$; \ELSIF{($(\eta_{1} < |\rho_{k} - 1| \le \eta_{2}) \;\&\&\; ((x_{k+1}, \, s_{k+1}) > 0)$)} \STATE Set $\Delta t_{k+1} = \Delta t_{k}$; \ELSE \STATE Set $\Delta t_{k+1} = 0.5\Delta t_{k}$; \ENDIF \IF{($(\rho_{k} \ge \eta_{a}) \;\&\&\; ((x_{k+1}, \, s_{k+1}) > 0)$)} \STATE Accept the trial point $(x_{k+1}, \, y_{k+1}, \, s_{k+1})$. Set flag\_success\_trialstep = 1. \ELSE \STATE Set $(x_{k+1}, \, y_{k+1}, \, s_{k+1}) = (x_{k}, \, y_{k}, \, s_{k})$, flag\_success\_trialstep = 0. \ENDIF \STATE Set $k \leftarrow k+1$. \ENDWHILE \STATE Return $(solx, \, soly, \, sols) = \left(Px_{k}, \, Q_{1}y_{k}, \, Ps_{k}\right)$, $KKTError = \|F_{k}\|_{\infty}$.
\end{algorithmic} \end{algorithm} \vskip 2mm \section{Algorithm analysis} \vskip 2mm We define the one-sided neighborhood $\mathbb{N}_{-\infty}(\gamma)$ as \begin{align} \mathbb{N}_{-\infty}(\gamma) = \{(x, \, y, \, s) \in \mathbb{F}^{0} | XSe \ge \gamma \mu e \}, \label{OSINFN} \end{align} where $X = \text{diag}(x), \; S = \text{diag}(s), \; e = \text{ones}(n,\,1), \; \mu = x^{T}s/n$ and $\gamma$ is a small positive constant such as $\gamma = 10^{-3}$. In order to simplify the convergence analysis of Algorithm \ref{PNMTRLP}, we assume that (i) the initial point $(x_{0}, \, s_{0})$ is strictly primal-dual feasible, and (ii) the time-stepping size $\Delta t_{k}$ is selected such that $(x_{k+1}, \, y_{k+1}, \, s_{k+1}) \in \mathbb{N}_{-\infty}(\gamma)$. Without loss of generality, we assume that matrix $A\in \Re^{m\times n}$ has full row rank. \vskip 2mm \begin{lemma} \label{LEMPROC} Assume $(x_{k}, \, y_{k}, \, s_{k}) \in \mathbb{N}_{-\infty}(\gamma)$. Then, there exists a sufficiently small positive number $\delta_{k}$ such that $(x_{k}(\alpha), \, y_{k}(\alpha), \, s_{k}(\alpha)) \in \mathbb{N}_{-\infty}(\gamma)$ holds when $0 < \alpha \le \delta_{k}$, where $(x_{k}(\alpha), \, y_{k}(\alpha), \, s_{k}(\alpha))$ is defined by \begin{align} (x_{k}(\alpha), \, y_{k}(\alpha), \, s_{k}(\alpha)) = (x_{k}, \, y_{k}, \, s_{k}) + \alpha (\Delta x_{k}, \, \Delta y_{k}, \, \Delta s_{k}), \label{LINST} \end{align} and $(\Delta x_{k}, \, \Delta y_{k}, \, \Delta s_{k})$ is the solution of the linear system \eqref{DELTXYSK}. \end{lemma} \vskip 2mm \proof Since $(x_{k}, \, y_{k}, \, s_{k})$ is a primal-dual feasible point, from equation \eqref{DELTXYSK}, we obtain \begin{align} A \Delta x_{k} = 0, \; A^{T} \Delta y_{k} + \Delta s_{k} = 0.
\label{NULLSP} \end{align} Consequently, from equations \eqref{LINST}-\eqref{NULLSP}, we have \begin{align} \Delta x_{k}^{T} \Delta s_{k} = - \Delta x_{k}^{T}A^{T} \Delta y_{k} = (A \Delta x_{k})^{T} \Delta y_{k} = 0, \label{DELSXSUMZ} \\ Ax_{k}(\alpha) = b, \; A^{T}y_{k}(\alpha) + s_{k}(\alpha) = c. \label{PDFR} \end{align} From equation \eqref{LINST}, we have \begin{align} & X_{k}(\alpha)S_{k}(\alpha) = (X_{k} + \alpha \Delta X_{k})(S_{k} + \alpha \Delta S_{k}) \nonumber \\ & \quad = X_{k}S_{k} + \alpha (X_{k}\Delta S_{k} + S_{k}\Delta X_{k}) + \alpha^{2} \Delta X_{k} \Delta S_{k}. \label{XKSKPRO} \end{align} By using equation \eqref{DELTAXK}, which gives $X_{k}\Delta s_{k} + S_{k}\Delta x_{k} = \sigma_{k}\mu_{k}e - X_{k}S_{k}e$, in equation \eqref{XKSKPRO}, we obtain \begin{align} & X_{k}(\alpha)S_{k}(\alpha)e = X_{k}S_{k}e + \alpha (\sigma_{k}\mu_{k}e - X_{k}S_{k}e) + \alpha^{2} \Delta X_{k} \Delta S_{k}e \nonumber \\ & \quad = (1 - \alpha)X_{k}S_{k}e + \alpha \sigma_{k}\mu_{k}e + \alpha^{2} \Delta X_{k} \Delta S_{k}e. \label{XKSKVEC} \end{align} From equations \eqref{DELSXSUMZ}-\eqref{XKSKVEC}, we have \begin{align} \mu_{k}(\alpha) = \frac{1}{n} e^{T}X_{k}(\alpha)S_{k}(\alpha)e = (1 - (1-\sigma_{k})\alpha) \mu_{k}. \label{MUALPHA} \end{align} We denote \begin{align} \beta_{max}^{k} = \max_{1 \le i \le n} \{|\Delta x_{k}^{i}|, \, |\Delta s_{k}^{i}|\}. \label{BETACON} \end{align} Then, from equation \eqref{XKSKVEC}, we have \begin{align} X_{k}(\alpha)S_{k}(\alpha)e \ge \left((1 - \alpha)\gamma \mu_{k} + \alpha \sigma_{k}\mu_{k} - \alpha^{2} (\beta_{max}^{k})^{2}\right)e. \label{XKSKGEL} \end{align} From equations \eqref{MUALPHA} and \eqref{XKSKGEL}, we know that the proximity condition \begin{align} X_{k}(\alpha)S_{k}(\alpha) e \ge \gamma \mu_{k}(\alpha) e \nonumber \end{align} holds, provided that \begin{align} (1 - \alpha)\gamma \mu_{k} + \alpha \sigma_{k}\mu_{k} - \alpha^{2}(\beta_{max}^{k})^{2} \ge \gamma (1- \alpha + \alpha \sigma_{k})\mu_{k}.
\nonumber \end{align} By reformulating the above expression, we obtain \begin{align} \alpha (1-\gamma) \sigma_{k} \mu_{k} \ge \alpha^{2} (\beta_{max}^{k})^{2}. \label{ALPKGE} \end{align} We choose \begin{align} \delta_{k} = \frac{(1-\gamma)\sigma_{k}\mu_{k}}{(\beta_{max}^{k})^{2}}. \label{ALPKLE} \end{align} Then, inequality \eqref{ALPKGE} is true when $0< \alpha \le \delta_{k}$. \qed \vskip 2mm In the following Lemma \ref{LEMXSBOUN}, we bound $(x_{k}, \, s_{k})$ from below and above. \vskip 2mm \begin{lemma} \label{LEMXSBOUN} Assume that $(x_{0}, \, y_{0}, \, s_{0}) > 0$ is a primal-dual feasible point and the iterates $(x_{k}, \, y_{k}, \, s_{k})$ $(k = 0, \, 1, \, \ldots)$ generated by Algorithm \ref{PNMTRLP} satisfy the proximity condition \eqref{OSINFN}. Furthermore, if there exists a constant $C_{\mu}$ such that \begin{align} \mu_{k} \ge C_{\mu} > 0 \label{MUKGEC} \end{align} holds for all $k = 0, \, 1, \, 2, \, \ldots$, then there exist two positive constants $C_{min}$ and $C_{max}$ such that \begin{align} 0 < C_{min} \le \min_{1\le i \le n} \{x_{k}^{i}, \; s_{k}^{i}\} \le \max_{1 \le i \le n} \{x_{k}^{i},\; s_{k}^{i} \} \le C_{max} \label{BOUNDXS} \end{align} holds for all $k = 0, \, 1, \, 2, \, \ldots$. \end{lemma} \vskip 2mm \proof Since $(x_{k}, \, y_{k}, \, s_{k})$ is generated by Algorithm \ref{PNMTRLP} and is a primal-dual feasible point, from equation \eqref{MUALPHA}, we have \begin{align} \mu_{k+1} = \frac{x_{k+1}^{T}s_{k+1}}{n} = (1 - (1-\sigma_{k})\alpha_{k}) \mu_{k} \le \mu_{k}, \; k = 0, \, 1, \, \ldots. \nonumber \end{align} Consequently, we obtain \begin{align} \mu_{k+1} = \prod_{i = 0}^{k} (1 - (1-\sigma_{i})\alpha_{i}) \mu_{0} \le \mu_{0}, \; k = 0, \, 1, \, \ldots. \label{UKUPBOUN} \end{align} \vskip 2mm From equation \eqref{PDFR}, we have \begin{align} A (x_{k} - x_{0}) = 0, \; A^{T}(y_{k} - y_{0}) + (s_{k} - s_{0}) = 0.
\end{align} Consequently, we obtain \begin{align} (x_{k} - x_{0})^{T}(s_{k} - s_{0}) = - (x_{k} - x_{0})^{T} A^{T}(y_{k} - y_{0}) = 0. \nonumber \end{align} By rearranging this expression and using the property \eqref{UKUPBOUN}, we obtain \begin{align} x_{k}^{T}s_{0} + s_{k}^{T}x_{0} = x_{k}^{T}s_{k} + x_{0}^{T}s_{0} \le n (\mu_{k} + \mu_{0}) \le 2n \mu_{0}. \nonumber \end{align} Consequently, we obtain \begin{align} x_{k}^{i} \le \frac{2n \mu_{0}} {\min_{1 \le j \le n} \{s_{0}^{j}\}}\; \text{and} \; s_{k}^{i} \le \frac{2n \mu_{0}}{\min_{1 \le j \le n} \{x_{0}^{j}\}}, \; 1 \le i \le n, \; k = 0,\, 1, \, \ldots. \nonumber \end{align} Therefore, if we select \begin{align} C_{max} = \max \left\{\frac{2n \mu_{0}}{\min_{1 \le j \le n} \{s_{0}^{j}\}}, \; \frac{2n \mu_{0}}{\min_{1 \le j \le n} \{x_{0}^{j}\}}\right\}, \nonumber \end{align} we obtain \begin{align} \max_{1 \le i \le n} \{x_{k}^{i}, s_{k}^{i}\} \le C_{max}, \; k = 0, \, 1, \, \ldots. \label{XSKUPBOUN} \end{align} \vskip 2mm On the other hand, from the assumption \eqref{MUKGEC} and the proximity condition \eqref{OSINFN}, we have \begin{align} x_{k}^{i}s_{k}^{i} \ge \gamma \mu_{k} \ge \gamma C_{\mu}, \; 1 \le i \le n, \; k = 0, \, 1, \, \ldots. \nonumber \end{align} By combining it with the estimation \eqref{XSKUPBOUN} of $(x_{k}, \, s_{k})$, we obtain \begin{align} x_{k}^{i} \ge \frac{\gamma C_{\mu}}{\max_{1 \le j \le n}\{s_{k}^{j}\}} \ge \frac{\gamma C_{\mu}}{C_{max}}, \; \text{and}\; s_{k}^{i} \ge \frac{\gamma C_{\mu}}{\max_{1 \le j \le n} \{x_{k}^{j}\}} \ge \frac{\gamma C_{\mu}}{C_{max}}, \; k = 0, \, 1, \, \ldots. \label{XSLBOUD} \end{align} We select $C_{min} = \gamma C_{\mu}/C_{max}$. Then, from equation \eqref{XSLBOUD}, we obtain \begin{align} \min_{1 \le i \le n} \{x_{k}^{i}, \, s_{k}^{i}\} \ge C_{min}, \; k = 0, \, 1, \, 2, \, \ldots. 
\nonumber \end{align} \qed \vskip 2mm \begin{lemma} \label{LEMDELSXUB} Assume that $(x_{0}, \, y_{0}, \, s_{0}) > 0$ is a primal-dual feasible point and the iterates $(x_{k}, \, y_{k}, \, s_{k})$ \, $(k = 0, \, 1, \, \ldots)$ generated by Algorithm \ref{PNMTRLP} satisfy the proximity condition \eqref{OSINFN}. Furthermore, if the assumption \eqref{MUKGEC} holds for all $k = 0, \, 1, \, \ldots$, then there exist two positive constants $C_{\Delta x}$ and $C_{\Delta s}$ such that \begin{align} \|\Delta s_{k}\| \le C_{\Delta s} \; \text{and} \; \|\Delta x_{k}\| \le C_{\Delta x} \label{DELXSUB} \end{align} hold for all $k = 0, \, 1, \, \ldots$. \end{lemma} \proof Factorize matrix $A$ with the singular value decomposition (pp. 76-80, \cite{GV2013}): \begin{align} A = U\Sigma V^{T}, \; \Sigma = \begin{bmatrix} \Sigma_{r} & 0 \\ 0 & 0 \end{bmatrix}, \; \Sigma_{r} = \text{diag}(\lambda_{1}, \, \lambda_{2}, \, \ldots, \, \lambda_{r}) \succ 0, \label{SVDA} \end{align} where $U \in \Re^{m \times m}$ and $V \in \Re^{n \times n}$ are orthogonal matrices, and the rank of matrix $A$ equals $r$. Then, from the bounded estimation \eqref{BOUNDXS} of $(x_{k}, \, s_{k})$, we have \begin{align} z^{T} AX_{k}S^{-1}_{k}A^{T}z \ge \frac{C_{min}}{C_{max}} \|A^{T}z\|^2 \ge \frac{C_{min}\lambda_{min}^{2}}{C_{max}} \|z\|^{2}, \; k = 0, \, 1, \, \ldots, \; \forall z \in \Re^{m}, \label{AXSINLB} \end{align} and \begin{align} z^{T} AX_{k}S_{k}^{-1}A^{T}z \le \frac{C_{max}}{C_{min}} \|A^{T}z\|^2 \le \frac{C_{max}\lambda_{max}^{2}}{C_{min}} \|z\|^{2}, \; k = 0, \, 1, \ldots, \; \forall z \in \Re^{m}, \label{AXSINUB} \end{align} where $\lambda_{min}$ and $\lambda_{max}$ are the smallest and largest singular values of matrix $A$, respectively.
\vskip 2mm From equations \eqref{DELTAYK}, \eqref{BOUNDXS} and \eqref{AXSINLB}-\eqref{AXSINUB}, we obtain \begin{align} & \frac{C_{min}\lambda_{min}^{2}}{C_{max}} \|\Delta y_{k} \|^{2} \le \Delta y_{k}^{T} \left(AX_{k}S_{k}^{-1}A^{T}\right) \Delta y_{k} = \Delta y_{k}^{T} AS_{k}^{-1}(X_{k}S_{k}e - \sigma_{k}\mu_{k}e) \nonumber \\ & \quad \le \|\Delta y_{k}\| \|A\| \left\|S_{k}^{-1}\right\| \left\|X_{k}S_{k}e - \sigma_{k}\mu_{k}e\right\| \le \|\Delta y_{k}\| \frac{\lambda_{max}}{C_{min}} (\|X_{k}S_{k}e\| + n \sigma_{k}\mu_{k}) \nonumber \\ & \quad \le \|\Delta y_{k}\| \frac{\lambda_{max}}{C_{min}} (\|X_{k}S_{k}e\|_{1} + n \sigma_{k}\mu_{k}) = \|\Delta y_{k}\| \frac{\lambda_{max}}{C_{min}} (1+\sigma_{k})n \mu_{k}. \nonumber \end{align} That is to say, we obtain \begin{align} \|\Delta y_{k}\| \le \frac{C_{max}\lambda_{max}}{C_{min}^{2}\lambda_{min}^{2}}(1+\sigma_{k}) n\mu_{k} \le \frac{C_{max}\lambda_{max}}{C_{min}^{2}\lambda_{min}^{2}}2n\mu_{k} \le \frac{C_{max}\lambda_{max}}{C_{min}^{2}\lambda_{min}^{2}}2n\mu_{0}. \label{DELYKUB} \end{align} The second inequality of equation \eqref{DELYKUB} can be inferred by $\sigma_{k} \le 1$ from equation \eqref{SIGMA}. The last inequality of equation \eqref{DELYKUB} can be inferred by $\mu_{k} \le \mu_{0}$ from equation \eqref{UKUPBOUN}. Therefore, from equation \eqref{DELTASK} and equation \eqref{DELYKUB}, we have \begin{align} \|\Delta s_{k}\| = \|-A^{T}\Delta y_{k}\| \le \|A^{T}\| \|\Delta y_{k} \| \le \frac{C_{max}\lambda^{2}_{max}}{C_{min}^{2}\lambda_{min}^{2}}2n\mu_{0}. \label{DELSKUPBD} \end{align} We set $C_{\Delta s} = (C_{max}\lambda^{2}_{max}2n\mu_{0})/(C_{min}^{2}\lambda_{min}^{2})$. Thus, from equation \eqref{DELSKUPBD}, we prove the first part of equation \eqref{DELXSUB}. 
\vskip 2mm From equations \eqref{DELTAXK}, \eqref{BOUNDXS} and the first part of equation \eqref{DELXSUB}, we have \begin{align} & \|\Delta x_{k}\| = \left\|-S_{k}^{-1}(X_{k}S_{k}e + X_{k}\Delta s_{k} - \sigma_{k} \mu_{k}e)\right\| \le \left\|S_{k}^{-1}\right\| \|X_{k}S_{k}e + X_{k}\Delta s_{k} - \sigma_{k} \mu_{k}e\| \nonumber \\ & \quad \le \frac{1}{C_{min}} \left(\|X_{k}S_{k}e\|+\|X_{k}\Delta s_{k}\|+\|\sigma_{k}\mu_{k}e \|\right) \nonumber \\ & \quad \le \frac{1}{C_{min}}\left(\|X_{k}S_{k}e\|_{1} + \|X_{k}\| \|\Delta s_{k}\|+ n \sigma_{k}\mu_{k} \right) \nonumber \\ & \quad \le \frac{1}{C_{min}} \left(n \mu_{k} + C_{max} C_{\Delta s} + n \sigma_{k} \mu_{k}\right) \le \frac{1}{C_{min}} \left(2n\mu_{0} + C_{max} C_{\Delta s}\right). \label{DELXKUPBD} \end{align} The last inequality of equation \eqref{DELXKUPBD} follows from $\sigma_{k} \le 1$ (see equation \eqref{SIGMA}) and $\mu_{k} \le \mu_{0}$ (see equation \eqref{UKUPBOUN}). We set $C_{\Delta x} = \left(2n\mu_{0} + C_{max} C_{\Delta s}\right)/C_{min}$. Thus, from equation \eqref{DELXKUPBD}, we also prove the second part of equation \eqref{DELXSUB}. \qed \vskip 2mm \begin{lemma} \label{LEMDELTLB} Assume that $(x_{0}, \, y_{0}, \, s_{0}) > 0$ is a primal-dual feasible point and the iterates $(x_{k}, \, y_{k}, \, s_{k})$ \, $(k = 0, \, 1, \, \ldots)$ generated by Algorithm \ref{PNMTRLP} satisfy the proximity condition \eqref{OSINFN}. Furthermore, if the assumption \eqref{MUKGEC} holds for all $k = 0, \, 1, \, \ldots$, then there exists a positive constant $C_{\Delta t}$ such that \begin{align} \Delta t_{k} \ge C_{\Delta t} > 0 \label{DELTLB} \end{align} holds for all $k = 0, \, 1, \, 2, \ldots$.
\end{lemma} \vskip 2mm \proof From equations \eqref{RHOK}, \eqref{DELTXYSK}-\eqref{XYSK1}, we have \begin{align} |\rho_k - 1| & = \left|\frac{\|F_{\sigma_{k} \mu_{k}}(z_{k})\| -\|F_{\sigma_{k} \mu_{k}}(z_{k}+\Delta z_{k})\|}{\|F_{\sigma_{k} \mu_{k}}(z_{k})\| - \|F_{\sigma_{k} \mu_{k}}(z_{k})+J(z_{k}) \Delta z_{k}\|} - 1\right| \nonumber \\ & = \left|\frac{\|F_{\sigma_{k} \mu_{k}}(z_{k})\|-\|F_{\sigma_{k} \mu_{k}}(z_{k}+\Delta z_{k})\|} {\|F_{\sigma_{k} \mu_{k}}(z_{k})\| - \|F_{\sigma_{k} \mu_{k}}(z_{k}) - (\Delta t_{k})/(1+\Delta t_{k})F_{\sigma_{k}\mu_{k}}(z_{k})\|} - 1\right| \nonumber \\ & = \frac{ \left|\|F_{\sigma_{k}\mu_{k}}(z_{k})\| - (1+\Delta t_{k})\|F_{\sigma_{k}\mu_{k}}(z_{k}+\Delta z_{k})\|\right|} {\Delta t_{k} \|F_{\sigma_{k}\mu_{k}}(z_{k})\|} \nonumber \\ & \le \frac{ \left\|(1+\Delta t_{k})\left(F_{\sigma_{k}\mu_{k}}(z_{k}+\Delta z_{k}) - F_{\sigma_{k}\mu_{k}}(z_{k})\right) + \Delta t_{k} F_{\sigma_{k}\mu_{k}}(z_{k})\right\|} {\Delta t_{k} \|F_{\sigma_{k}\mu_{k}}(z_{k})\|} \nonumber \\ & = \frac{\Delta t_{k}}{1+\Delta t_{k}} \frac{\|\Delta X_{k}\Delta S_{k}e\|} {\|X_{k}s_{k} - \sigma_{k}\mu_{k}e\|}. \label{RHOMINUS1} \end{align} The last equality of equation \eqref{RHOMINUS1} can be inferred from \begin{align} & F_{\sigma_{k}\mu_{k}}(z_{k}+\Delta z_{k}) - F_{\sigma_{k}\mu_{k}}(z_{k}) = J(z_{k})\Delta z_{k} + \left(\frac{\Delta t_{k}}{1+\Delta t_{k}}\right)^{2}\Delta X_{k}\Delta S_{k}e \nonumber \\ & \quad = - \frac{\Delta t_{k}}{1+\Delta t_{k}} F_{\sigma_{k}\mu_{k}}(z_{k}) + \left(\frac{\Delta t_{k}}{1+\Delta t_{k}}\right)^{2}\Delta X_{k}\Delta S_{k}e \nonumber. \end{align} \vskip 2mm On the other hand, from the property $\|a\| \ge a_{i} \, (i = 1, \, 2, \ldots, n)$, we have \begin{align} \|X_{k}s_{k} - \sigma_{k}\mu_{k}e\| \ge x_{k}^{i}s_{k}^{i} - \sigma_{k}\mu_{k}, \; i = 1, \, 2, \, \ldots, n. 
\nonumber \end{align} Summing this inequality over its $n$ components and using $\mu_{k} = x_{k}^{T}s_{k}/n$, we obtain \begin{align} \|X_{k}s_{k} - \sigma_{k}\mu_{k}e\| \ge (1-\sigma_{k}) \mu_{k} \ge (1 - 0.05) C_{\mu} = 0.95C_{\mu}. \label{XSLBD} \end{align} The second inequality of equation \eqref{XSLBD} follows from $\sigma_{k} \le 0.05$ (see equation \eqref{SIGMA}) and the assumption $\mu_{k} \ge C_{\mu}$ (see equation \eqref{MUKGEC}). \vskip 2mm Thus, from the bounded estimation \eqref{DELXSUB} of $(\Delta x_{k}, \, \Delta s_{k})$ and equations \eqref{RHOMINUS1}-\eqref{XSLBD}, we obtain \begin{align} |\rho_{k} - 1| & \le \frac{\Delta t_{k}}{1+\Delta t_{k}} \frac{\|\Delta X_{k} \Delta S_{k}e\|} {0.95C_{\mu}} \le \frac{\Delta t_{k}}{1+\Delta t_{k}} \frac{\|\Delta X_{k}\| \|\Delta s_{k}\|} {0.95C_{\mu}} \nonumber \\ & \le \frac{\Delta t_{k}}{1+\Delta t_{k}} \frac{\|\Delta x_{k}\| \|\Delta s_{k}\|} {0.95C_{\mu}} \le \frac{\Delta t_{k}}{1+\Delta t_{k}} \frac{C_{\Delta x} C_{\Delta s}} {0.95C_{\mu}} \le \eta_{1}, \nonumber \end{align} provided that \begin{align} \Delta t_{k} \le \frac{0.95 C_{\mu}\eta_{1}} {C_{\Delta x}C_{\Delta s}}. \label{DLETKLECON} \end{align} Thus, whenever $\Delta t_{k}$ satisfies equation \eqref{DLETKLECON}, we have $|\rho_{k} - 1| \le \eta_{1}$ and, according to the trust-region updating formula \eqref{TSK1}, $\Delta t_{k+1}$ will be enlarged. Therefore, we prove the result \eqref{DELTLB} if we set \begin{align} C_{\Delta t} = \frac{0.95 C_{\mu}\eta_{1}} {2C_{\Delta x}C_{\Delta s}}. \nonumber \end{align} \qed \vskip 2mm According to the above discussions, we give the global convergence analysis of Algorithm \ref{PNMTRLP}. \begin{theorem} Assume that $(x_{0}, \, y_{0}, \, s_{0}) > 0$ is a primal-dual feasible point and the iterates $(x_{k}, \, y_{k}, \, s_{k})$ \, $(k = 0, \, 1, \, \ldots)$ generated by Algorithm \ref{PNMTRLP} satisfy the proximity condition \eqref{OSINFN}.
Then, we have \begin{align} \lim_{k \to \infty} \mu_{k} = 0, \label{MUKTOZ} \end{align} where $\mu_{k} = x_{k}^{T}s_{k}/n$. \end{theorem} \proof Assume, to the contrary, that there exists a positive constant $C_{\mu}$ such that \begin{align} \mu_{k} \ge C_{\mu} \label{UKGELB} \end{align} holds for all $k = 0, \, 1, \, \ldots$. Then, according to the result of Lemma \ref{LEMDELTLB}, there exists a positive constant $C_{\Delta t}$ such that $\Delta t_{k} \ge C_{\Delta t}$ holds for all $k = 0, \, 1, \ldots$. Therefore, from equation \eqref{MUALPHA}, we have \begin{align} \mu_{k+1} & = \mu_{k}(\alpha_{k}) = (1 - (1 - \sigma_{k})\alpha_{k}) \mu_{k} \le \left(1 - (1-\sigma_{k}) \frac{C_{\Delta t}}{1+C_{\Delta t}}\right) \mu_{k} \nonumber \\ & \le \left(1 - 0.95 \frac{C_{\Delta t}}{1+C_{\Delta t}}\right) \mu_{k} \le \left(1 - 0.95 \frac{C_{\Delta t}}{1+C_{\Delta t}}\right)^{k+1} \mu_{0}. \label{UK1LE} \end{align} The second inequality of equation \eqref{UK1LE} follows from $\sigma_{k} \le 0.05$ (see equation \eqref{SIGMA}). Thus, we have $\mu_{k} \to 0$, which contradicts the assumption \eqref{UKGELB}. Consequently, we obtain $\liminf_{k \to \infty} \mu_{k} = 0$. Since $\mu_{k}$ is monotonically decreasing, it follows that $\lim_{ k \to \infty} \mu_{k} = 0$. Furthermore, we obtain $\lim_{k \to \infty} \|X_{k}s_{k}\| = 0$ from $\|X_{k}s_{k}\| \le \|X_{k}s_{k}\|_{1} = n \mu_{k}$ and $(x_{k}, s_{k}) > 0$. \qed \vskip 2mm \begin{remark} Our analysis framework for Algorithm \ref{PNMTRLP} is the same as that of the classical primal-dual interior-point method (pp. 411-413, \cite{NW1999}). However, the result of Lemma \ref{LEMDELTLB} is new. \end{remark} \section{Numerical experiments} \vskip 2mm In this section, we test Algorithm \ref{PNMTRLP} (the PFMTRLP method) on some linear programming problems with full-rank or rank-deficient matrices, and compare it with the traditional path-following method (pathfollow.m, p.
210, \cite{FMW2007}) and the state-of-the-art predictor-corrector algorithm (the built-in subroutine linprog.m of the MATLAB environment \cite{MATLAB,Mehrotra1992,Zhang1998}). \vskip 2mm The error tolerances of all three methods are set to $\epsilon = 10^{-6}$. We use the maximum absolute error (KKTError) of the KKT condition \eqref{KKTLP} and the primal-dual gap $x^{T}s$ to measure the error between the numerical optimal solution and the theoretical optimal solution. \vskip 2mm \subsection{The problem with full rank} For the standard linear programming problem with full rank, a sparse matrix $A$ of given density 0.2 is randomly generated and we choose feasible $x, \, y, \, s$ at random, with $x$ and $s$ each about half-full. The dimension of matrix $A$ varies from $10\times100$ to $300\times3000$. One implementation is given in Algorithm \ref{FULLMATPRO} (p. 210, \cite{FMW2007}). According to Algorithm \ref{FULLMATPRO}, we randomly generate 30 standard linear programming problems with full-rank matrices. \vskip 2mm \begin{algorithm} \renewcommand{\algorithmicrequire}{\textbf{Input:}} \renewcommand{\algorithmicensure}{\textbf{Output:}} \caption{The standard linear programming problem with full rank} \label{FULLMATPRO} \begin{algorithmic}[1] \REQUIRE ~~\\ the number of equality constraints: $m$; \\ the number of unknown variables: $n$. \ENSURE ~~ \\ matrix $A$ and vectors $b \in \Re^{m}$ and $c \in \Re^{n}$. % \STATE density=0.2; \STATE A = sprandn(m,n,density); \% Generate a sparse matrix of given density. \STATE xfeas = [rand(n/2,1); zeros(n-(n/2),1)]; \STATE sfeas = [zeros(n/2,1); rand(n-(n/2),1)]; \STATE xfeas = xfeas(randperm(n)); \STATE sfeas = sfeas(randperm(n)); \STATE yfeas = (rand(m,1)-0.5)*4;\\ \% Choose b and c to make this (x,y,s) feasible.
\STATE b = A*xfeas; \STATE c = A'*yfeas + sfeas; \end{algorithmic} \end{algorithm} \vskip 2mm For those 30 test problems, we compare Algorithm \ref{PNMTRLP} (the PFMTRLP method), Mehrotra's predictor-corrector algorithm (the subroutine linprog.m of the MATLAB environment), and the path-following method (the subroutine pathfollow.m). The numerical results are arranged in Table \ref{TABFULLRANK} and illustrated in Figure \ref{FIGFULLRANK}. The left sub-figure of Figure \ref{FIGFULLRANK} shows the number of iterations and the right sub-figure shows the consumed CPU time. From Table \ref{TABFULLRANK}, we find that PFMTRLP and linprog.m can solve all test problems, and their \emph{KKTErrors} are small. However, pathfollow.m does not perform well on some higher-dimensional problems, such as Examples $7, \, 11, \, 13, \, 23, \, 24, \, 25, \, 26$ and $28$, since their solutions do not satisfy the KKT conditions. From Figure \ref{FIGFULLRANK}, we also find that linprog.m performs the best, requiring fewer than 20 iterations. The number of iterations of PFMTRLP is around 20, and the number of iterations of pathfollow.m often reaches the maximum (i.e. $200$ iterations). Therefore, PFMTRLP is also an efficient and robust path-following method for the linear programming problem with full rank. \begin{table}[htbp] \centering \fontsize{7}{7}\selectfont \caption{Numerical results of problems with full rank matrices.} \label{TABFULLRANK} \begin{tabular}{|l|l|l|l|l|l|l|} \hline \multirow{2}{*}{Problem ($m \times n$, $r$)} & \multicolumn{2}{c|}{PFMTRLP} & \multicolumn{2}{c|}{linprog} & \multicolumn{2}{c|}{pathfollow} \cr \cline{2-7} & KKTError & Gap & KKTError & Gap & KKTError & Gap \cr\hline Exam. 1 ($10\times100$, 10) & 3.77E-06 & 3.61E-05 & 1.98E-07 & 1.18E-08 & 1.03E-07 & 6.27E-06 \cr\hline Exam. 2 ($20\times200$, 20) & 2.07E-06 & 1.46E-04 & 3.20E-10 & 8.83E-14 & 6.62E-08 & 1.04E-05 \cr\hline Exam.
3 ($30\times300$, 30) & 9.68E-06 & 1.28E-03 & 1.07E-11 & 3.10E-10 & 1.83E-09 & 4.48E-07 \cr\hline Exam. 4 ($40\times400$, 40) & 3.16E-06 & 5.67E-04 & 6.55E-08 & 1.15E-06 & 1.34E-07 & 4.42E-05 \cr\hline Exam. 5 ($50\times500$, 50) & 1.14E-05 & 2.47E-03 & 5.12E-07 & 3.79E-05 & 1.25E-08 & 4.55E-06 \cr\hline Exam. 6 ($60\times600$, 60) & 1.42E-06 & 2.26E-04 & 6.83E-09 & 2.90E-07 & 3.95E-09 & 1.79E-06 \cr\hline Exam. 7 ($70\times700$, 70) & 1.05E-04 & 1.27E-02 & 4.21E-07 & 3.71E-05 & \red{4.61E+04} & 4.48E-04 \cr\hline Exam. 8 ($80\times800$, 80) & 7.67E-06 & 2.36E-03 & 4.40E-09 & 5.09E-07 & 4.03E-07 & 2.64E-04 \cr\hline Exam. 9 ($90\times900$, 90) & 7.67E-06 & 2.43E-03 & 1.14E-09 & 1.08E-07 & 2.62E-08 & 1.64E-05 \cr\hline Exam. 10 ($100\times1000$, 100) & 1.93E-05 & 7.86E-03 & 2.34E-12 & 3.67E-11 & 9.09E-09 & 6.65E-06 \cr\hline Exam. 11 ($110\times1100$, 110)& 9.00E-05 & 3.54E-02 & 6.36E-08 & 1.13E-05 & \red{8.52E+04} & 9.33E-04 \cr\hline Exam. 12 ($120\times1200$, 120) & 1.46E-05 & 5.08E-03 & 3.84E-08 & 1.18E-05 & 1.08E-09 & 4.78E-07 \cr\hline Exam. 13 ($130\times1300$, 130) & 8.44E-06 & 3.78E-03 & 1.37E-10 & 2.67E-08 & \red{1.23E+05} & 2.17E-03 \cr\hline Exam. 14 ($140\times1400$, 140) & 3.33E-06 & 1.55E-03 & 3.53E-07 & 3.78E-05 & 8.73E-07 & 8.96E-04 \cr\hline Exam. 15 ($150\times1500$, 150) & 7.23E-05 & 2.71E-02 & 3.59E-07 & 4.16E-05 & 1.25E-06 & 1.40E-03 \cr\hline Exam. 16 ($160\times1600$, 160) & 1.19E-05 & 6.79E-03 & 3.58E-08 & 1.22E-05 & 3.07E-07 & 4.06E-04 \cr\hline Exam. 17 ($170\times1700$, 170) & 4.45E-05 & 2.25E-02 & 6.33E-11 & 1.37E-08 & 3.53E-09 & 4.74E-06 \cr\hline Exam. 18 ($180\times1800$, 180) & 1.22E-04 & 6.32E-02 & 8.85E-07 & 3.28E-04 & 1.88E-08 & 2.37E-05 \cr\hline Exam. 19 ($190\times1900$, 190) & 6.51E-05 & 5.81E-02 & 8.86E-08 & 6.25E-06 & 2.06E-07 & 3.24E-04 \cr\hline Exam. 20 ($200\times2000$, 200) & 2.49E-05 & 1.61E-02 & 7.56E-07 & 4.67E-04 & 1.23E-06 & 1.91E-03 \cr\hline Exam.
21 ($210\times2100$, 210) & 1.42E-04 & 7.20E-02 & 3.50E-13 & 7.61E-11 & 1.33E-06 & 1.92E-03 \cr\hline Exam. 22 ($220\times2200$, 220) & 1.59E-05 & 1.04E-02 & 1.64E-07 & 7.08E-05 & 2.72E-08 & 4.33E-05 \cr\hline Exam. 23 ($230\times2300$, 230) & 3.64E-04 & 2.28E-01 & 8.65E-14 & 2.62E-11 & \red{2.75E+05} & 1.48E-02 \cr\hline Exam. 24 ($240\times2400$, 240)& 9.80E-05 & 8.36E-02 & 6.76E-07 & 2.19E-04 & \red{2.39E+05} & 2.19E-02 \cr\hline Exam. 25 ($250\times2500$, 250) & 3.06E-04 & 1.80E-01 & 8.40E-08 & 3.59E-05 & \red{2.13E+05} & 8.76E-02 \cr\hline Exam. 26 ($260\times2600$, 260) & 1.21E-05 & 8.87E-03 & 7.98E-09 & 2.15E-07 & \red{5.59E+05} & 3.12E-01 \cr\hline Exam. 27 ($270\times2700$, 270)& 1.15E-04 & 1.44E-01 & 4.79E-07 & 1.33E-04 & 2.06E-08 & 4.20E-05 \cr\hline Exam. 28 ($280\times2800$, 280) & 3.33E-05 & 4.27E-02 & 2.58E-13 & 1.13E-10 & \red{4.94E+05} & 1.87E-02 \cr\hline Exam. 29 ($290\times2900$, 290) & 4.42E-05 & 4.38E-02 & 3.31E-07 & 1.04E-04 & 3.81E-08 & 8.19E-05 \cr\hline Exam. 30 ($300\times3000$, 300) & 7.84E-05 & 4.21E-02 & 1.82E-08 & 1.24E-05 & 5.38E-08 & 1.30E-04 \cr\hline \end{tabular}% \end{table}% \vskip 2mm \begin{figure}[htbp] \centering \begin{minipage}[t]{0.49\linewidth} \centering \subfigure[The number of iterations]{ \includegraphics[width=1\textwidth, height=0.25\textheight]{IterationPerExam.pdf} } \end{minipage} \begin{minipage}[t]{0.49\linewidth} \centering \subfigure[The computational time]{ \includegraphics[width=1\textwidth,height=0.25\textheight]{TimeConsumed.pdf} } \end{minipage} \caption{The number of iterations and the computational time.} % \label{FIGFULLRANK} \end{figure} \vskip 2mm \subsection{The rank-deficient problem with noisy data} \vskip 2mm For a real-world problem, the rank of matrix $A$ in problem \eqref{LPND} may be deficient, and the constraints may even be inconsistent when the right-hand-side vector $b$ contains small noise. However, the constraints of the original problem are intrinsically consistent.
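The conclusions of this paper mention a preprocessing method based on the QR decomposition with column pivoting. Removing linearly dependent rows this way can be sketched with SciPy; the toy matrix and the rank tolerance below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np
from scipy.linalg import qr

def remove_dependent_rows(A, b, tol=1e-10):
    """Drop linearly dependent rows of the equality constraints A x = b.

    QR with column pivoting is applied to A^T; the first `rank` pivot
    columns of A^T index a maximal linearly independent set of rows of A.
    """
    _, R, piv = qr(A.T, mode='economic', pivoting=True)
    diag = np.abs(np.diag(R))
    rank = int(np.sum(diag > tol * diag[0]))
    keep = np.sort(piv[:rank])
    return A[keep, :], b[keep], rank

# A 3x4 system whose third row is the sum of the first two rows.
A = np.array([[1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [1., 1., 1., 1.]])
b = np.array([1., 2., 3.])  # consistent: b_3 = b_1 + b_2
A_r, b_r, rank = remove_dependent_rows(A, b)
print(rank, A_r.shape)  # 2 (2, 4)
```

The reduced system has full row rank, so a standard path-following method can be applied to it directly.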
To evaluate how PFMTRLP handles such problems, we select some rank-deficient problems from the NETLIB collection \cite{NETLIB} as test problems and compare PFMTRLP with linprog.m on these problems with and without small noise in the data. \vskip 2mm The numerical results for the problems without noise are arranged in Table \ref{NETLIBORIGINAL}. Then, we set $b = b + \text{rand}(m,\, 1)*\epsilon$ for those test problems, where $\epsilon = 10^{-5}$. The numerical results for the problems with small noise are arranged in Table \ref{NETLIBNOISE}. From Tables \ref{NETLIBORIGINAL} and \ref{NETLIBNOISE}, we find that PFMTRLP can solve all those problems, with or without the small noise. However, from Table \ref{NETLIBNOISE}, we find that linprog.m cannot solve some problems with the small noise, since linprog.m outputs $NaN$ for those problems. Furthermore, for some problems that linprog.m nominally solves, the KKT errors or the primal-dual gaps are large; we conclude that linprog.m also fails on those problems. Therefore, from Table \ref{NETLIBNOISE}, we find that PFMTRLP is more robust than linprog.m for the rank-deficient problem with small noisy data.
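The noise injection described above can be reproduced on a toy rank-deficient system (a hypothetical matrix, with NumPy in place of MATLAB); it shows that the perturbed constraints become inconsistent, but only at the level of $\epsilon$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy rank-deficient system: the third row of A is the sum of the
# first two, and b is constructed so that Ax = b is consistent.
A = np.array([[1., 0., 1.],
              [0., 1., 1.],
              [1., 1., 2.]])
x_true = np.array([1., 2., 3.])
b = A @ x_true

# Perturb the right-hand side as in the experiments: b <- b + rand(m,1)*eps.
eps = 1e-5
b_noisy = b + rng.random(A.shape[0]) * eps

# The perturbed system is (generically) inconsistent: the least-squares
# residual is nonzero, but only of the order of eps.
x_ls, _, rank, _ = np.linalg.lstsq(A, b_noisy, rcond=None)
residual = np.linalg.norm(A @ x_ls - b_noisy)
print(rank)  # 2: A is rank-deficient
```

A solver that insists on exact feasibility of all $m$ constraints can therefore fail, while a method that first reduces the constraints to an independent subset remains applicable.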
\vskip 2mm \begin{table}[htbp] \newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}} \scriptsize \centering \fontsize{7}{7}\selectfont \caption{Numerical results of some rank-deficient problems from NETLIB.} \label{NETLIBORIGINAL} \begin{tabular}{|l|l|l|l|l|l|l|l|l|} \hline \multirow{2}{*}{Problem ($m \times n$, $r$)} & \multicolumn{4}{c|}{PFMTRLP} & \multicolumn{4}{c|}{linprog} \cr \cline{2-9} & KKTError & Gap & Iter & CPU & KKTError & Gap & Iter & CPU \cr\hline \tabincell{c}{lp\_brandy \\ ($220 \times 303$, 193)} & 2.37E-03 & 3.57E-02 & 38 & 0.19 & 2.44E-08 & 1.14E-08 & 16 & 0.09 \cr\hline \tabincell{c}{lp\_bore3d \\ ($233 \times 334$, 231)} & 3.00E-08 & 3.32E-06 & 43 & 0.22 & 5.67E-10 & \red{1.37E+03} & 17 & 0.24 \cr\hline \tabincell{c}{lp\_wood1p \\ ($244 \times 2595$, 243)} & 5.09E-05 & 3.09E-05 & 100 & 5.28 & 1.80E-12 & 3.16E-11 & 20 & 0.26 \cr\hline \tabincell{c}{lp\_scorpion \\ ($388 \times 466$, 358)} & 6.16E-07 & 8.65E-05 & 37 & 0.64 & 2.12E-13 & 5.08E-08 & 14 & 0.05 \cr\hline \tabincell{c}{lp\_ship04s \\ ($402 \times 1506$, 360)} & 7.06E-07 & 2.71E-03 & 36 & 1.83 & 2.39E-07 & 2.24E-04 & 13 & 0.03 \cr\hline \tabincell{c}{lp\_ship04l \\ ($402 \times 2166$, 360)} & 1.79E-07 & 1.00E-03 & 36 & 2.48 & 1.04E-11 & 1.02E-04 & 12 & 0.08 \cr\hline \tabincell{c}{lp\_degen2 \\ ($444 \times 757$, 442)} & 8.90E-07 & 4.45E-05 & 35 & 1.23 & 2.79E-13 & 9.43E-10 & 14 & 0.40 \cr\hline \tabincell{c}{lp\_bnl1 \\ ($643 \times 1586$, 642)} & 7.40E-07 & 6.50E-04 & 97 & 11.33 & 3.36E-09 & 3.16E-06 & 26 & 0.08 \cr\hline \tabincell{c}{lp\_ship08s \\ ($778 \times 2467$, 712)} & 4.34E-07 & 3.29E-03 & 41 & 12.72 & 2.12E-11 & 1.98E-07 & 14 & 0.04 \cr\hline \tabincell{c}{lp\_qap8 \\ ($912 \times 1632$, 742)} & 6.84E-07 & 1.82E-04 & 22 & 4.78 & 2.67E-11 & 6.48E-07 & 9 & 0.49 \cr\hline \tabincell{c}{lp\_25fv47 \\ ($821 \times 1876$, 820)} & 5.74E-07 & 9.20E-04 & 80 & 14.67 & 3.35E-10 & 6.10E-11 & 25 & 0.24 \cr\hline \tabincell{c}{lp\_ship08l \\ ($778 \times 4363$, 712)} & 
9.95E-07 & 9.87E-03 & 40 & 18.84 & 1.07E-09 & 4.28E-03 & 15 & 0.15 \cr\hline \tabincell{c}{lp\_ship12l \\ ($1151 \times 5533$, 1041)} & 9.22E-07 & 4.21E-03 & 47 & 37.06 & 4.08E-10 & 1.15E-02 & 15 & 0.06 \cr\hline \tabincell{c}{lp\_ship12s \\ ($1151 \times 2869$, 1042)} & 3.01E-07 & 9.36E-04 & 47 & 26.11 & 1.75E-11 & 1.50E-05 & 16 & 0.05 \cr\hline \tabincell{c}{lp\_degen3 \\ ($1503 \times 2604$, 1501)} & 4.79E-07 & 1.01E-05 & 48 & 27.61 & 5.90E-10 & 3.55E-08 & 21 & 0.52 \cr\hline \tabincell{c}{lp\_qap12 \\ ($3192 \times 8856$, 2794)} & 7.00E-07 & 6.86E-04 & 25 & 228.75 & 1.01E-05 & 2.95E-04 & 85 & 218.06 \cr\hline \tabincell{c}{lp\_cre\_c \\ ($3068 \times 6411$, 2981)} & 7.97E-07 & 4.79E-01 & 90 & 516.67 & 5.15E-10 & 4.50E-02 & 27 & 0.21 \cr\hline \tabincell{c}{lp\_cre\_a \\ ($3516 \times 7248$, 3423)} & 7.86E-07 & 7.02E-01 & 83 & 707.98 & 7.26E-09 & 7.10E-02 & 28 & 0.24 \cr\hline \end{tabular}% \end{table}% \begin{table}[htbp] \newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}} \scriptsize \centering \fontsize{7}{7}\selectfont \caption{Numerical results of rank-deficient problems with noise $\epsilon = 10^{-5}$.} \label{NETLIBNOISE} \begin{tabular}{|l|l|l|l|l|l|l|l|l|} \hline \multirow{2}{*}{Problem ($m \times n$, $r$)} & \multicolumn{4}{c|}{PFMTRLP} & \multicolumn{4}{c|}{linprog} \cr \cline{2-9} & KKTError & Gap & Iter & CPU & KKTError & Gap & Iter & CPU \cr\hline \tabincell{c}{lp\_brandy \\ ($220 \times 303$, 193)} & 4.21E-02 & 2.63E+00 & 28 & 0.06 & \red{Failed} & Failed & 0 & 0.01 \cr\hline \tabincell{c}{lp\_bore3d \\ ($233 \times 334$, 231)} & 1.84E-01 & 1.69E+01 & 42 & 0.19 & \red{Failed} & Failed & 0 & 0.01 \cr\hline \tabincell{c}{lp\_scorpion \\ ($388 \times 466$, 358)} & 5.03E-02 & 1.05E+01 & 18 & 0.36 & \red{Failed} & Failed & 0 & 0.01 \cr\hline \tabincell{c}{lp\_degen2 \\ ($444 \times 757$, 442)} & 1.41E-03 & 9.63E-02 & 28 & 1.22 & 2.92E-04 & \red{2.12E+10} & 52 & 0.37 \cr\hline \tabincell{c}{lp\_ship04s \\ ($402 \times 1506$, 360)} & 
3.91E-02 & \red{2.93E+02} & 23 & 1.56 & \red{Failed} & Failed & 0 & 0.00 \cr\hline \tabincell{c}{lp\_bnl1 \\ ($643 \times 1586$, 642)} & 3.49E-03 & 2.68E+00 & 61 & 7.70 & \red{Failed} & Failed & 0 & 0.01 \cr\hline \tabincell{c}{lp\_qap8 \\ ($912 \times 1632$, 742)} & 8.08E-07 & 2.19E-04 & 22 & 4.58 & 6.24E-04 & \red{6.93E+02} & 6 & 0.35 \cr\hline \tabincell{c}{lp\_25fv47 \\ ($821 \times 1876$, 820)} & 6.34E-07 & 1.01E-03 & 80 & 14.41 & \red{Failed} & Failed & 0 & 0.01 \cr\hline \tabincell{c}{lp\_ship04l \\ ($402 \times 2166$, 360)} & 9.44E-03 & 7.35E+01 & 25 & 2.03 & \red{Failed} & Failed & 0 & 0.00 \cr\hline \tabincell{c}{lp\_ship08s \\ ($778 \times 2467$, 712)} & 1.69E-02 & \red{1.72E+02} & 24 & 8.31 & \red{Failed} & Failed & 0 & 0.00 \cr\hline \tabincell{c}{lp\_wood1p \\ ($244 \times 2595$, 243)} & 8.91E-04 & 6.99E-04 & 60 & 3.16 & 2.41E-04 & \red{4.64E+07} & 33 & 0.62 \cr\hline \tabincell{c}{lp\_degen3 \\ ($1503 \times 2604$, 1501)} & 9.56E-04 & 2.92E-02 & 37 & 24.06 & 6.84E-04 & \red{1.89E+05} & 77 & 2.53 \cr\hline \tabincell{c}{lp\_ship12s \\ ($1151 \times 2869$, 1042)} & 1.36E-02 & 6.26E+01 & 31 & 18.39 & \red{Failed} & Failed & 0 & 0.03 \cr\hline \tabincell{c}{lp\_ship08l \\ ($778 \times 4363$, 712)} & 1.36E-02 & \red{1.55E+02} & 24 & 12.80 & \red{Failed} & Failed & 0 & 0.00 \cr\hline \tabincell{c}{lp\_ship12l \\ ($1151 \times 5533$, 1041)} & 1.18E-02 & 5.88E+01 & 31 & 29.44 & \red{Failed} & Failed & 0 & 0.00 \cr\hline \tabincell{c}{lp\_cre\_c \\ ($3068 \times 6411$, 2981)} & 5.81E-02 & \red{3.84E+04} & 43 & 277.59 & \red{Failed} & Failed & 0 & 0.01 \cr\hline \tabincell{c}{lp\_cre\_a \\ ($3516 \times 7248$, 3423)} & 7.37E-02 & \red{6.73E+04} & 37 & 350.22 & \red{Failed} & Failed & 0 & 0.00 \cr\hline \tabincell{c}{lp\_qap12 \\ ($3192 \times 8856$, 2794)} & 7.01E-07 & 6.86E-04 & 25 & 240.84 & 2.07E-02 & \red{1.40E+03} & 10 & 24.80 \cr\hline \end{tabular}% \end{table}% \section{Conclusions} \vskip 2mm For the rank-deficient linear programming problem, we give 
a preprocessing method based on the QR decomposition with column pivoting. Then, we combine the primal-dual path-following method with the trust-region updating strategy for the postprocessed problem. Finally, we prove the global convergence of the new method when the initial point is strictly primal-dual feasible. According to our numerical experiments, the new method (PFMTRLP) is more robust than path-following methods such as pathfollow.m (p. 210, \cite{FMW2007}) and linprog.m \cite{MATLAB,Mehrotra1992,Zhang1998} for the rank-deficient problem with small noisy data. Therefore, PFMTRLP is worth exploring further as a primal-dual path-following method with a new adaptive step-size selection based on the trust-region updating strategy. The computational efficiency of PFMTRLP still has room for improvement. \vskip 2mm \section*{Acknowledgments} This work was supported in part by Grant 61876199 from the National Natural Science Foundation of China, Grant YBWL2011085 from Huawei Technologies Co., Ltd., and Grant YJCB2011003HI from the Innovation Research Program of Huawei Technologies Co., Ltd. The first author is grateful to Professor Li-zhi Liao for introducing him to interior-point methods when he visited Hong Kong Baptist University in July 2012. The authors are grateful to two anonymous referees for their comments and suggestions, which greatly improved the presentation of this paper. \vskip 2mm
\chapter{Search for unstable sterile neutrinos} \label{ch:analysis} \section{Data selection} This analysis uses a large set of well-reconstructed, upwards-going, high-energy muon data events. This dataset is assembled using the event selection described briefly in this section. More details on the event selection can be found in~\cite{Axani:2020zdv, Aartsen:2020fwb, Aartsen:2015rwa, Weaver:2015bja}. The event selection is a set of criteria designed to select signal events with high efficiency and to reject background events, yielding a high purity. The signal events are upwards-going muons. These are known to come from the charged-current interactions of atmospheric and astrophysical muon neutrinos and antineutrinos. The dominant background is atmospheric muons from cosmic ray air showers. IceCube detects these at a rate of about 3 kHz, while the rate of signal events is about 1~mHz~\cite{Aartsen:2015nss}. Simulation is used to tune the background rejection of the event selection. The software \texttt{CORSIKA} (COsmic Ray SImulations for KAscade) is used to simulate muon production in cosmic ray air showers in the atmosphere~\cite{heck1998corsika, heck2000extensive}. \texttt{PROPOSAL} is then used to propagate the muons through the firn and ice to the detector~\cite{KOEHNE20132070}. The simulated cosmic ray muons are then weighted according to the Hillas-Gaisser H3a cosmic ray model~\cite{Gaisser:2013bla}. Other backgrounds include neutral-current neutrino events and charged-current electron and tau neutrino events. These events are associated with cascades, which have a very different morphology from the signal track-like events, allowing them to be readily rejected. Furthermore, the flux of atmospheric electron and tau neutrinos is subdominant to that of muon neutrinos. The electron and tau neutrino backgrounds are also simulated.
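The reweighting step, in which events generated under one spectrum are assigned weights so that they follow a target flux model, can be sketched generically. The power laws below are illustrative stand-ins for the actual generation spectrum and the H3a model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Draw energies from the generation spectrum, a power law E^-2 between
# 1e2 and 1e6 GeV (illustrative values, not the production settings).
gamma_gen = 2.0
e_min, e_max = 1e2, 1e6
lo, hi = e_min ** (1.0 - gamma_gen), e_max ** (1.0 - gamma_gen)
E = (lo + rng.random(100_000) * (hi - lo)) ** (1.0 / (1.0 - gamma_gen))

def generation_pdf(E):
    """Normalized density of the E^-2 generation spectrum."""
    return E ** -gamma_gen * (1.0 - gamma_gen) / (hi - lo)

def target_flux(E):
    """Stand-in for a cosmic-ray flux model, here a steeper E^-2.7 law."""
    return E ** -2.7

# Per-event weight: target flux divided by the generation density.
w = target_flux(E) / generation_pdf(E)
```

Because the target spectrum is steeper than the generation spectrum, high-energy events receive smaller weights, which is why simulation is often generated on a harder spectrum than the physical flux.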
The event selection criteria are the union of two sets of criteria, or filters, described further in~\cite{Axani:2020zdv, Aartsen:2020fwb, Aartsen:2015rwa, Weaver:2015bja}. All events are required to: \begin{itemize} \item Pass the IceCube muon filter, which identifies track-like events. \item Have at least 15 DOMs triggered, with at least 6 triggered on direct light. Direct light is photons which have not significantly scattered, as determined by timing. \item Have a reconstructed track length, based on direct light, of at least 200~m. \item Have a relatively smooth distribution of direct light along the track. \item Have a reconstructed energy between 500~GeV and 9976~TeV. \item Originate from or below the horizon: $\cos(\theta_{\rm{zenith}}) \leq 0$. \end{itemize} The event selection has a 99.9\% purity as determined from simulation. The expected composition of the event selection is given in Table~\ref{tab:event_composition}. The energy and cosine zenith distributions of these events are shown in Fig.~\ref{fig:mc_event_selection_distributions}. \begin{table}[h] \begin{center} \begin{tabular}{l c} \hline \hline Event Type & Expected Count \\ \hline Conventional atmospheric $\nu_\mu$ & $315,214 \pm 561$ \\ Astrophysical $\nu_\mu$ & $2,350 \pm 48$\\ Prompt atmospheric $\nu_\mu$ & $481 \pm 22$\\ All $\nu_\tau$ & $23\pm5$ \\ All $\nu_e$ & $1\pm1 $\\ Atmospheric $\mu$ & $18\pm4 $\\ \hline \end{tabular} \end{center} \caption[Expected contributions to the event selection]{Expected partial contributions to the event selection.
Table recreated from~\cite{Axani:2020zdv, Aartsen:2020fwb}.} \label{tab:event_composition} \end{table} \begin{figure} \centering \includegraphics[width=0.75\linewidth]{figs_analysis/MuEX_energy_mg.pdf} \includegraphics[width=0.75\linewidth]{figs_analysis/CosMuEX_Zenith_mg.pdf} \caption[Event selection energy and angular distribution from simulation]{(Top) Reconstructed energy distribution and (bottom) reconstructed cosine zenith distribution of the constituents of the event selection, as determined from simulation. Figures from~\cite{Axani:2020zdv, Aartsen:2020fwb}.} \label{fig:mc_event_selection_distributions} \end{figure} \section{Systematic uncertainties} \subsection{Nuisance parameters} The treatment of systematic uncertainties in this analysis is almost identical to that of the eight-year traditional 3+1 search, and more detailed descriptions can be found in~\cite{Axani:2020zdv, Aartsen:2020fwb, Aartsen:2015rwa}. Eighteen systematic uncertainties are incorporated into the analysis with nuisance parameters. These uncertainties are grouped into two broad categories: those relating to the total neutrino flux and those relating to detection. A table of these nuisance parameters, their prior central values, their prior widths, and the boundaries on them is given in Table~\ref{table:Priors}. As described in \Cref{chap:flux}, ten nuisance parameters are used to describe the uncertainties on the atmospheric neutrino flux. One is the conventional atmospheric flux normalization: $\Phi_{\textrm{Conv.}}$. Another is the uncertainty on the spectral index of the atmospheric flux: $\Delta \gamma_{\textrm{atm}}$. This shifts both the conventional and prompt atmospheric fluxes. There are six uncertainties in the production of charged kaons in cosmic ray air showers. These are the six Barr parameters WM, WP, YM, YP, ZM, and ZP, previously discussed in \Cref{chap:flux}.
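In analyses of this kind, the effect of such hadronic-production parameters on the predicted flux is often applied as a first-order expansion around the nominal flux using precomputed gradients. A schematic sketch with hypothetical bins and gradient values (not the analysis' actual gradients):

```python
import numpy as np

# Nominal per-bin flux and precomputed flux gradients
# (d flux / d parameter) for two hypothetical Barr parameters.
nominal_flux = np.array([10.0, 8.0, 5.0, 2.0])
gradients = {
    "WM": np.array([0.2, 0.1, 0.05, 0.01]),
    "YP": np.array([-0.1, 0.3, 0.2, 0.05]),
}

def varied_flux(params):
    """First-order expansion of the flux in the nuisance parameters."""
    flux = nominal_flux.copy()
    for name, value in params.items():
        flux += value * gradients[name]
    return flux

print(varied_flux({"WM": 1.0, "YP": 0.0}))  # nominal shifted by one unit of WM
```

This linearized treatment lets the fit vary many flux parameters cheaply, without re-running the full flux calculation at every step.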
Another is the uncertainty in the atmospheric density, which has been shown to affect the neutrino flux~\cite{Gaisser:2013lrk}. This effect is calculated using the reported systematic uncertainty of measurements from the AIRS satellite~\cite{AIRS}. Lastly, there is an uncertainty on the cross section of kaons interacting with atmospheric nuclei, largely nitrogen and oxygen nuclei: $\sigma_{\textrm{KA}}$. This causes an uncertainty in the energy of the kaons when they decay. In the parameterization here, $1\sigma$ represents 7.5\% of the nominal cross section. Two nuisance parameters are used to describe the uncertainties on the astrophysical neutrino flux. They are the astrophysical flux normalization and the uncertainty on the astrophysical spectral index. These are correlated and are derived from previous IceCube measurements~\cite{Abbasi:2020jmh, aartsen2014observation, Schneider:2019ayi, Stachurska:2019}. The two final uncertainties that affect the flux of neutrinos at the detector are the neutrino-nucleon and antineutrino-nucleon cross sections. These cross sections enter the analysis twice: during propagation of the neutrino flux across the Earth and during interactions near the detector. The latter had previously been studied and deemed negligible~\cite{Jones:2015bya,delgado2015new}. The effects of the cross section uncertainties on the flux incident at the detector are included here. Four nuisance parameters describe the uncertainties in the detection of neutrinos and antineutrinos. One is the effective efficiency of detecting photons, which is referred to as the DOM efficiency. The DOM efficiency reflects properties of the DOM itself, such as the photocathode, collection, and wavelength efficiencies, as well as global effects that prevent detection of photons. These include shadowing from cables and some properties of the bulk and hole ice. The effect is determined from simulation sets where photons are down-sampled.
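A common way to emulate a lower effective efficiency from an existing simulation set is binomial thinning of the detected photon counts. A schematic NumPy sketch with illustrative numbers (not IceCube's simulation code):

```python
import numpy as np

rng = np.random.default_rng(42)

def downsample_photons(n_photons, nominal_eff, target_eff):
    """Thin simulated photon counts, generated at a high nominal
    efficiency, down to a lower effective DOM efficiency.

    Each detected photon is kept independently with probability
    target_eff / nominal_eff.
    """
    keep_prob = target_eff / nominal_eff
    assert keep_prob <= 1.0, "can only down-sample, not up-sample"
    return rng.binomial(n_photons, keep_prob)

# Hypothetical per-DOM photon counts generated at efficiency 1.0,
# thinned to an effective efficiency of 0.97.
counts = rng.poisson(lam=50.0, size=100_000)
thinned = downsample_photons(counts, nominal_eff=1.0, target_eff=0.97)
```

Generating once at high efficiency and thinning afterwards is what makes it cheap to build the simulation sets at several efficiency values.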
Another detector uncertainty is due to the effect of the hole ice, represented by the ``forward hole ice'' parameter. The refrozen ice in the drilled boreholes has different optical properties from the bulk of the ice in the detector. In particular, there is additional scattering near the DOMs. As a result, the angular acceptance of the DOMs is different from laboratory measurements. The ``forward hole ice'' parameter reflects uncertainty in the zenith-dependence of the angular acceptance of DOMs. Finally, this analysis includes two nuisance parameters related to the bulk ice, that is, the undisturbed, natural ice between and beyond the drilled boreholes. This analysis uses the ``SnowStorm'' method to calculate the effect of the many highly correlated parameter uncertainties in the ice model, shown in Fig.~\ref{fig:ice_coefficients}~\cite{Aartsen:2019jcj}. The Fourier transform of the depth-dependent coefficients is analyzed. The analysis-level effect of the most significant modes is encoded into two correlated effective gradients. These are referred to as ``Ice Gradient 0'' and ``Ice Gradient 1''. \begin{table}[h!] \begin{center} \begin{tabular}{ l c c c } \hline \hline \textbf{Parameter} & Central & Prior (Constraint) & Boundary \\ [0.5ex] \hline \hline \multicolumn{4}{c}{\textbf{Detector parameters}}\\ \hline DOM efficiency & 0.97 & 0.97 $\pm$ 0.10 & [0.94, 1.03] \\ \hline Bulk Ice Gradient 0 & 0.0 & 0 $\pm$ 1.0* & NA \\ \hline Bulk Ice Gradient 1 & 0.0 & 0 $\pm$ 1.0* & NA \\ \hline Forward Hole Ice & -1.0 & -1.0 $\pm$ 10.0 & [-5, 3] \\ \hline \multicolumn{4}{c}{\textbf{Conventional flux parameters}}\\ \hline Normalization ($\Phi_{\mathrm{conv.}}$) & 1.0 & 1.0 $\pm$ 0.4 & NA \\ \hline Spectral shift ($\Delta\gamma_{\mathrm{conv.}}$) & 0.00 & 0.00 $\pm$ 0.03 & NA \\ \hline Atm.
Density & 0.0 & 0.0 $\pm$ 1.0 & NA \\ \hline Barr WM & 0.0 & 0.0 $\pm$ 0.40 & [-0.5, 0.5] \\ \hline Barr WP & 0.0 & 0.0 $\pm$ 0.40 & [-0.5, 0.5] \\ \hline Barr YM & 0.0 & 0.0 $\pm$ 0.30 & [-0.5, 0.5] \\ \hline Barr YP & 0.0 & 0.0 $\pm$ 0.30 & [-0.5, 0.5] \\ \hline Barr ZM & 0.0 & 0.0 $\pm$ 0.12 & [-0.25, 0.5] \\ \hline Barr ZP & 0.0 & 0.0 $\pm$ 0.12 & [-0.2, 0.5] \\ \hline \multicolumn{4}{c}{\textbf{Astrophysical flux parameters}}\\ \hline Normalization ($\Phi_{\mathrm{astro.}}$) & 0.787 & 0.787 $\pm$ 0.36* & NA \\ \hline Spectral shift ($\Delta\gamma_{\mathrm{astro.}}$) & 0.0 & 0.0 $\pm$ 0.36* & NA \\ \hline \multicolumn{4}{c}{\textbf{Cross sections}}\\ \hline Cross section $\sigma_{\nu_\mu}$ & 1.00 & 1.00 $\pm$ 0.03 & [0.5, 1.5] \\ \hline Cross section $\sigma_{\overline{\nu}_\mu}$ & 1.000 & 1.000 $\pm$ 0.075 & [0.5, 1.5] \\ \hline Kaon energy loss $\sigma_{KA}$ & 0.0 & 0.0 $\pm$ 1.0 & NA \\ \hline \hline \end{tabular} \caption[Nuisance parameter central values, priors and boundaries]{Table of nuisance parameters, their central values, their priors and boundaries. Correlation between two nuisance parameters is indicated with *. Table reproduced from~\cite{Aartsen:2020fwb}.} \label{table:Priors} \end{center} \end{table} The shapes of the eighteen nuisance parameter systematics are shown in Figs.~\ref{fig:systematics_firsthalf} and~\ref{fig:systematics_secondhalf}. These figures show the fractional change that a positive $1\sigma$ variation of each of the nuisance parameters causes, \begin{equation} \frac{(\textrm{Nominal}+1\sigma \textrm{ variation}) - \textrm{Nominal}}{\textrm{Nominal}}, \end{equation} assuming the null hypothesis, that is, only three neutrinos. The shapes of these systematic effects have been shown previously in~\cite{Axani:2020zdv, Aartsen:2020fwb}.
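The plotted quantity is the displayed formula evaluated bin by bin; a minimal sketch with hypothetical bin counts (not the analysis binning):

```python
import numpy as np

def fractional_change(varied, nominal):
    """Per-bin fractional change: (nominal + 1 sigma variation - nominal) / nominal."""
    return (varied - nominal) / nominal

# Hypothetical expected event counts per (energy, zenith) bin.
nominal = np.array([[100., 250.],
                    [400.,  80.]])
varied = np.array([[103., 245.],
                   [400.,  84.]])
change = fractional_change(varied, nominal)
print(change)  # [[ 0.03 -0.02] [ 0.    0.05]]
```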
One notable difference is that in~\cite{Axani:2020zdv, Aartsen:2020fwb} the overall normalization effects had been removed from all shapes except for the atmospheric and astrophysical flux normalizations. Two other differences relate to the Barr gradient and astrophysical fluxes, and are noted in~\Cref{sec:minorsysimprovements}. \begin{figure} \centering \includegraphics[clip, trim = 0cm 0cm 14.8cm 0cm, width=\linewidth]{figs_analysis/thesis_systematics_psigma_first_half.pdf} \caption[Effect of $1\sigma$ change in nuisance parameters]{Percent difference between the expected event distribution with one nuisance parameter value increased by $1\sigma$ and the nominal expectation, shown for half the nuisance parameters. The expectations assume no sterile neutrino.} \label{fig:systematics_firsthalf} \end{figure} \begin{figure} \centering \includegraphics[clip, trim = 0cm 0cm 14.8cm 0cm, width=\linewidth]{figs_analysis/thesis_systematics_psigma_second_half.pdf} \caption[Effect of $1\sigma$ change in nuisance parameters]{Percent difference between the expected event distribution with one nuisance parameter value increased by $1\sigma$ and the nominal expectation, shown for the other half of the nuisance parameters. The expectations assume no sterile neutrino.} \label{fig:systematics_secondhalf} \end{figure} \clearpage \subsection{Minor systematic improvements from traditional 3+1 search} \label{sec:minorsysimprovements} Three minor improvements were made after the eight-year traditional 3+1 search: \begin{itemize} \item This analysis assumes a livetime of 7.634 years instead of 7.6. No additional data was added; the assumed value of the livetime was changed to be more precise. \item The Barr gradients are calculated using atmospheric data from the AIRS satellite, instead of using a model of the atmosphere (US Standard)~\cite{AIRS, atmosphere1976national}. This makes the Barr gradient calculations consistent with that of the conventional atmospheric flux.
\item The astrophysical and prompt fluxes are calculated with a more accurate model of the Earth's density profile. This model has 3 km of ice below the surface of the Earth, instead of 30 km of ice. This change makes the calculation of these fluxes consistent with that of the conventional atmospheric flux. \end{itemize} These changes were made for consistency and accuracy, and are not expected to significantly affect the result. \subsection{The Antarctic bedrock} \label{sec:bedrock} The Antarctic bedrock is located approximately 362~m below the bottom of the detector. Approximately 20\% of all the muons in the event selection originate in the bedrock. However, this fraction has a zenith dependence, peaking at the smallest zenith angles, as well as an energy dependence, which peaks at lower energies, as shown in Fig.~\ref{fig:bedrock_fraction}. Two uncertainties associated with the bedrock are its position and its density. \begin{figure}[htb] \centering \includegraphics[width=4in]{figs_analysis/thesis_bedrock_fraction.pdf} \caption[Fraction of events that originate in the bedrock]{Fraction of events per bin that originate in the bedrock, rather than in the ice, as determined from simulation.} \label{fig:bedrock_fraction} \end{figure} \begin{figure} \centering \includegraphics[width=0.8\linewidth]{figs_analysis/histo-2D_Muon_Zenith_Energy.pdf} \includegraphics[width=0.8\linewidth]{figs_analysis/histo-2D_Muon_Zenith_Energy_uncertainty.pdf} \caption[Effect of 50 m change in bedrock depth]{(Top) The ratio of muon events per bin for a geometry with the bedrock raised 50~m to that for the nominal geometry. (Bottom) The statistical uncertainty in the simulation sets plotted above. Figures courtesy of David Vannerom.} \label{fig:bedrock_position} \end{figure} Several measurements of the position of the bedrock have been made. The PolarGap campaign has made LIDAR and RADAR measurements over the South Pole~\cite{2016AGUFM.C52A..01F, forsberg_2017}.
Deviations of about 45~m in the bedrock depth across the detector are indicated by the data~\cite{koskinen_2017}. Simulation studies were performed to study the effect of an uncertainty on the bedrock position. A simulation set was made where the position of the bedrock was raised 50~m with respect to the nominal position. The muon rate was found to be consistent with that of the nominal geometry, up to the level of the statistical uncertainty, as shown in Fig.~\ref{fig:bedrock_position}. The uncertainty in the position of the bedrock is thus deemed negligible for this work. The uncertainty on the density of the bedrock beneath the Antarctic ice sheet is about 7\%~\cite{hinze_2003,doi:https://doi.org/10.1002/9783527626625.ch4,tc-12-2741-2018}. An increase in density could cause two counteracting effects. A higher bedrock density would provide more targets for neutrino interactions, increasing the number of muons that are produced. However, a higher bedrock density would also provide more targets for muon interactions, causing more muons to range out before reaching the detector, or causing them to reach the detector with a lower energy. The detector's efficiency drops rapidly with energy below about 1~TeV. A precise understanding of the overall bedrock effect requires dedicated simulation studies with low statistical uncertainty. Preliminary results suggest the effect of the bedrock density uncertainty is small: less than 2\%. This is smaller than or comparable to the expected statistical uncertainty, shown in Fig.~\ref{fig:expected_stat_uncertainty}. The simulation studies are ongoing and could not be fully completed and understood in time for this thesis. Therefore, the result presented in this thesis does not include this potential systematic effect. Future work will incorporate the effect of the bedrock density, if necessary, once it is understood. However, at present we believe the bedrock systematic uncertainty will be negligible. \begin{figure}[tb!]
\centering \includegraphics[width=4in]{figs_analysis/fractional_stat_uncertainty_poisson.pdf} \caption[Expected fractional statistical uncertainty]{Expected fractional statistical uncertainty per bin, assuming only three neutrinos. The asymmetrical Poisson errors are calculated using the Garwood method and averaged~\cite{10.2307/2333958, revstat2012Patil, https://doi.org/10.1002/bimj.4710350716}.} \label{fig:expected_stat_uncertainty} \end{figure} \section{Oscillograms and expected event distributions} \label{sec:oscillgorams_and_expected_event_distributions} This section presents the expected event distributions assuming a sterile neutrino with parameters $\Delta m^2_{41} = 1.0 \: \textrm{eV}^2$, $\sin^2 2\theta_{24} = 0.1$, and three values for the decay-mediating coupling, $g^2 = 0,~2\pi,\textrm{ and }4\pi$. The oscillograms in Figs.~\labelcref{fig:oscillograms_anu_analysis,fig:oscillograms_nu_analysis} show the disappearance probabilities at the detector in true quantities. Figure~\ref{fig:oscillograms_anu_analysis} shows the disappearance probability of muon antineutrinos, and Fig.~\ref{fig:oscillograms_nu_analysis} shows the disappearance probability of muon neutrinos. In both these figures, the leftmost panel corresponds to a standard 3+1 model, or no $\nu_4$ neutrino decay, while the middle two panels correspond to non-zero values of the coupling, which causes $\nu_4$ decay. The rightmost panel corresponds to the three-neutrino model. The features of the oscillograms are explained in~\Cref{sec:decay_prd}.IV.B. \begin{figure}[htb!] \centering \includegraphics[clip, trim = 0cm 0cm 0cm 1cm, width=\linewidth]{figs_analysis/atm_antinu_oscillograms.pdf} \caption[Oscillograms for antineutrinos]{Muon antineutrino disappearance as a function of true energy and angle.
The left three panels show the prediction for a 3+1 sterile neutrino model with $\Delta m^2_{41} = 1.0 \: \textrm{eV}^2$ and $\sin^2 2\theta_{24} = 0.1$ and three values of the decay-mediating coupling. The leftmost panel corresponds to a traditional 3+1 model. The rightmost panel corresponds to the scenario where there are only three neutrinos.} \label{fig:oscillograms_anu_analysis} \end{figure} \begin{figure}[htb!] \centering \includegraphics[clip, trim = 0cm 0cm 0cm 1cm, width=\linewidth]{figs_analysis/atm_nu_oscillograms.pdf} \caption[Oscillograms for neutrinos]{Muon neutrino disappearance as a function of true energy and angle. The left three panels show the prediction for a 3+1 sterile neutrino model with $\Delta m^2_{41} = 1.0 \: \textrm{eV}^2$ and $\sin^2 2\theta_{24} = 0.1$ and three values of the decay-mediating coupling. The leftmost panel corresponds to a traditional 3+1 model. The rightmost panel corresponds to the scenario where there are only three neutrinos.} \label{fig:oscillograms_nu_analysis} \end{figure} \clearpage \begin{figure}[ht!] \centering \includegraphics[clip, trim = 0cm 0cm 0cm 1.5cm, width=\linewidth]{figs_analysis/fancy_event_dist_percent_diff_null.pdf} \caption[Expected event distributions compared to three-neutrino model]{Expected event distributions, shown as a percent difference from the three-neutrino model, for sterile neutrino parameters $\Delta m^2_{41} = 1.0 \: \textrm{eV}^2$, $\sin^2 2\theta_{24} = 0.1$, and three values of the decay-mediating coupling, $g^2 = 0,~2\pi,\textrm{ and }4\pi$.} \label{fig:event_dist_diff_3nu} \end{figure} \begin{figure}[hb!] 
\centering \includegraphics[clip, trim = 0cm 0cm 0cm 1.5cm, width=0.8\linewidth]{figs_analysis/fancy_event_dist_percent_diff_inf.pdf} \caption[Expected event distributions compared to traditional 3+1 model]{Expected event distributions, shown as a percent difference from the traditional 3+1 model, for sterile neutrino parameters $\Delta m^2_{41} = 1.0 \: \textrm{eV}^2$, $\sin^2 2\theta_{24} = 0.1$, and two values of the decay-mediating coupling, $g^2 = 2\pi \textrm{ and } 4\pi$.} \label{fig:event_dist_diff_nodecay} \end{figure} The expected event distributions for these same parameters are shown in Fig.~\ref{fig:event_dist_diff_3nu} as a percent difference from the three-neutrino scenario. For these fixed values of $\Delta m^2_{41}$ and $\sin^2 2\theta_{24}$, the traditional 3+1 model predicts a maximum 9\% per-bin deficit, which increases with non-zero $g^2$ up to 14\% and shifts in position to the minimum cosine zenith value. A comparison of the expected event distributions for zero and non-zero values of $g^2$ at fixed $\Delta m^2_{41}$ and $\sin^2 2\theta_{24}$ is shown in Fig.~\ref{fig:event_dist_diff_nodecay}. Non-zero $g^2$ causes a deficit of events at the smallest cosine zenith angles, up to about 7\% per bin. \clearpage \section{Likelihood function} The Poisson likelihood function for a single bin is \begin{equation} \mathcal{L}(\vec{\theta}| k) = \frac{\lambda (\vec{\theta})^k e^{- \lambda(\vec{\theta})}}{k!} \end{equation} where $\lambda(\vec{\theta})$ is the predicted bin count assuming hypothesis $\vec{\theta}$ and $k$ is the experimental bin count. However, finite Monte Carlo statistics do not allow for exact determination of the expected bin count, $\lambda(\vec{\theta})$.
Therefore, following~\cite{Arguelles:2019izp}, we use an effective likelihood that accounts for finite Monte Carlo statistics, \begin{equation} \mathcal{L}_{\rm{Eff}}(\vec{\theta}| k) = \bigg( \frac{\mu}{\sigma^2} \bigg)^{\frac{\mu^2}{\sigma^2}+1} \Gamma \bigg( k + \frac{\mu^2}{\sigma^2}+1 \bigg) \bigg[ k! \bigg(\frac{\mu}{\sigma^2}+1 \bigg)^{k + \frac{\mu^2}{\sigma^2}+1} \Gamma \bigg( \frac{\mu^2}{\sigma^2}+1 \bigg) \bigg]^{-1} \label{eq:effective_likelihood} \end{equation} where $ \mu $ and $ \sigma^2 $ depend on $ \vec{\theta} $ and the Monte Carlo weights. The parameters $\mu$ and $\sigma^2$ are given by \begin{equation} \begin{aligned} \mu &\equiv \sum_{i=1}^m w_i \\ \sigma^2 &\equiv \sum_{i=1}^m w_i^2, \end{aligned} \end{equation} where $m$ is the number of Monte Carlo events in the bin, and $w_i$ is the weight of the $i^{\rm th}$ event, assuming $\vec{\theta}$. To account for systematic uncertainties, nuisance parameters with Gaussian priors on their values are used. The statistical likelihood from Eq.~\ref{eq:effective_likelihood} is multiplied by: \begin{equation} \mathcal{L}(\{\theta_\eta\})= \prod_\eta \frac{1}{\sqrt{2 \pi \sigma_\eta^2}} e^{\frac{-(\theta_\eta - \Theta_\eta)^2}{2\sigma_\eta^2}} \end{equation} where $\theta_\eta$, $\Theta_\eta$, and $\sigma_\eta$ are the value, prior central value, and prior width of nuisance parameter $\eta$. The prior central values and widths are given in Table~\ref{table:Priors}. \section{Parameter scan} The analysis is performed over a three-dimensional scan of the physics parameters: $\Delta m_{41}^2$, $\sin^2 2 \theta_{24}$, and $g^2$. The parameters $\Delta m_{41}^2$ and $\sin^2 2 \theta_{24}$ are sampled log-uniformly in the ranges 0.01 -- 47 eV$^2$ and 0.01 -- 1, respectively, with ten samples per decade for each parameter. These choices are driven by the region where the anomalies discussed earlier appear.
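The per-bin effective likelihood of Eq.~\ref{eq:effective_likelihood} and the Gaussian prior factors must be evaluated at every point in the scan. A minimal log-space sketch in Python (the weights passed in are illustrative, not analysis values):

```python
import math

def log_effective_likelihood(k, weights):
    """Log of the effective likelihood of Eq. (eq:effective_likelihood)
    for an observed count k, given the Monte Carlo weights in the bin."""
    mu = sum(weights)                     # sum of weights
    sigma2 = sum(w * w for w in weights)  # sum of squared weights
    alpha = mu * mu / sigma2 + 1.0        # exponent mu^2/sigma^2 + 1
    beta = mu / sigma2                    # base mu/sigma^2
    # Term-by-term log of the closed-form expression
    return (alpha * math.log(beta)
            + math.lgamma(k + alpha)
            - math.lgamma(k + 1.0)                    # log k!
            - (k + alpha) * math.log(beta + 1.0)
            - math.lgamma(alpha))

def log_gaussian_prior(value, center, width):
    """Log of one factor in the Gaussian prior product over nuisance parameters."""
    return (-0.5 * math.log(2.0 * math.pi * width ** 2)
            - (value - center) ** 2 / (2.0 * width ** 2))
```

In the limit of many equally weighted Monte Carlo events, this expression reduces to the ordinary Poisson likelihood, which provides a convenient sanity check.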
The parameter $g^2$ is sampled in steps of $\pi/2$ in the range 0 -- 4$\pi$, where $g^2 = 0$ corresponds to infinite $\nu_4$ lifetime. The upper limit on scanned values of $g^2$ is chosen to preserve unitarity and perturbativity, which are assumed by the calculations~\cite{Peskin:1995ev}. \section{Frequentist analysis and Asimov sensitivity} The frequentist analysis assumes Wilks' theorem~\cite{Wilks_1938}. This allows confidence intervals to be drawn based on the logarithm of the profile likelihood ratio. That is, \begin{equation} \Delta \textrm{LLH}(\vec{\theta}) = \ln(\mathcal{L}(\hat{\vec{\theta}})) - \ln(\mathcal{L}(\vec{\theta})), \end{equation} where $\hat{\vec{\theta}}$ is the set of physics and nuisance parameters that maximizes $\mathcal{L}(\vec{\theta})$, i.e. the best fit. The test statistic, TS, is \begin{equation} \textrm{TS}(\vec{\theta}) = - 2 \Delta \textrm{LLH}(\vec{\theta}). \label{eq:teststatistic} \end{equation} Wilks' theorem states that, assuming certain criteria are met, in a likelihood space with $n$ degrees of freedom, this test statistic has a $\chi^2$-distribution with $n$ degrees of freedom, $\chi^2_n$. The confidence level $C$ contour runs through all points $\vec{\theta}$ where \begin{equation} \int_0^{\textrm{TS}(\vec{\theta})} dx \, \chi^2_n (x) = C. \label{eq:wilks_CL} \end{equation} The values of the test statistic that satisfy Eq.~\ref{eq:wilks_CL} for several confidence levels, and for one and three degrees of freedom (DOF), are given in Table~\ref{tab:chi2}. If the assumptions underlying Wilks' theorem are strongly violated, the contours drawn using this technique may have improper coverage. Coverage of the contours will be checked at a later date using the Feldman-Cousins method, which guarantees proper coverage but is computationally intensive~\cite{Feldman:1997qc}.
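The critical test-statistic values are obtained by inverting the $\chi^2_n$ CDF in Eq.~\ref{eq:wilks_CL}; a quick SciPy check reproduces the entries of Table~\ref{tab:chi2} to within rounding:

```python
from scipy.stats import chi2

# Invert the chi^2_n CDF at each confidence level C to get the
# critical test-statistic value TS_crit satisfying Eq. (eq:wilks_CL)
confidence_levels = [0.6827, 0.90, 0.95, 0.99]
for n_dof in (1, 3):
    criticals = [chi2.ppf(C, df=n_dof) for C in confidence_levels]
    print(n_dof, "DOF:", [round(c, 2) for c in criticals])
```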
\begin{table}[h] \begin{center} \begin{tabular}{c | c c} \hline \hline Confidence Level & 1 DOF & 3 DOF \\ \hline 68.27\% & 1.00 & 3.53 \\ 90\% & 2.71 & 6.25 \\ 95\% & 3.84 & 7.82 \\ 99\% & 6.63 & 11.34 \\ \hline \hline \end{tabular} \end{center} \caption[Critical values of the $\chi^2$ test statistic]{Critical values of the $\chi^2$ test statistic for one and three degrees of freedom. Values from~\cite{Zyla:2020zbs}.} \label{tab:chi2} \end{table} The sensitivity is approximated by the Asimov sensitivity~\cite{Cowan:2010js}. The Asimov sensitivity is calculated over a scan of the three parameters, assuming three degrees of freedom and Wilks' theorem, and is shown in Fig.~\ref{fig:asimov_sensitivity}. Increased sensitivity is observed for larger values of $g^2$, particularly around $\Delta m^2_{41} \approx 1~\textrm{eV}^2$. An exact sensitivity would be calculated from an ensemble of simulated pseudoexperiments, using the Feldman-Cousins method~\cite{Feldman:1997qc}. This is computationally expensive and will be completed later. \begin{figure}[h!] \centering \includegraphics[width=\linewidth]{figs_analysis/asimov_sensitivity_2_panel_co_dec12fluxes.pdf} \caption[Asimov sensitivity]{Asimov sensitivity.} \label{fig:asimov_sensitivity} \end{figure} \section{Bayesian analysis} A Bayesian analysis is performed in addition to the frequentist analysis. The Bayesian evidence is computed for every sterile neutrino parameter point in the scan, as well as for the three-neutrino model. The evidence is defined as: \begin{equation} \mathcal{E}(\vec{\Theta}) = \int d \vec{\eta} \: \mathcal{L}(\vec{\Theta}, \vec{\eta}) \: \Pi (\vec{\eta}) \end{equation} where $\Pi(\vec{\eta})$ are the Gaussian priors on the nuisance parameters. The integral is calculated with the MultiNest algorithm~\cite{Feroz:2013hea}. The evidence for each sterile neutrino model is compared to that of the three-neutrino model.
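Conceptually, the evidence is the prior-weighted average of the likelihood over the nuisance parameters. A toy Monte Carlo sketch of this integral (the actual computation uses MultiNest; the likelihood passed in here is a stand-in, not the analysis likelihood):

```python
import numpy as np

def log_evidence_prior_mc(log_likelihood, prior_centers, prior_widths,
                          n_samples=200_000, seed=0):
    """Toy estimate of log E = log ∫ dη L(η) Π(η): draw nuisance
    vectors η from the Gaussian priors Π(η) and average L over them."""
    rng = np.random.default_rng(seed)
    centers = np.asarray(prior_centers, dtype=float)
    widths = np.asarray(prior_widths, dtype=float)
    etas = rng.normal(centers, widths, size=(n_samples, centers.size))
    log_l = np.array([log_likelihood(eta) for eta in etas])
    # log-sum-exp trick for numerical stability
    m = log_l.max()
    return m + np.log(np.exp(log_l - m).mean())
```

For a one-dimensional Gaussian likelihood with a Gaussian prior, the evidence is analytic, which makes this sketch easy to validate; nested sampling replaces the naive prior draws with a scheme that remains efficient when the likelihood is sharply peaked relative to the prior.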
The ratio of these evidences is the Bayes factor: \begin{equation} B_{ij} = \frac{\mathcal{E}_i}{\mathcal{E}_j} \end{equation} The Bayes factor is interpreted with the Jeffreys scale, which quantifies the strength of evidence preferring one model, $i$, over the other, $j$. For negative values of $\log_{10} B_{ij}$, model $j$ is preferred over model $i$. \begin{table}[h] \begin{center} \begin{tabular}{c | c} \hline \hline $\log_{10}B_{ij}$ & Strength of evidence\\ \hline $0-0.5$ & Weak \\ $0.5-1$ & Substantial \\ $1-1.5$ & Strong \\ $1.5-2$ & Very strong \\ $>2$ & Decisive \\ \hline \hline \end{tabular} \end{center} \caption[Strength of evidence determined from the Bayes factor.]{Strength of evidence determined from the Bayes factor. Adapted from~\cite{Jeffreys:1939xee}.} \label{tab:jeffreysscale} \end{table} \chapter{Specific contributions} All of the work in this thesis was done in collaboration with others. My specific contributions included: \begin{itemize} \item Studying and implementing systematic uncertainties and performing the analysis in~\cite{Moss:2017pur}. \item Calculating the IceCube likelihood to combine with short-baseline fit results and analyzing the results in~\cite{Moulai:2019gpi}. \item Studying the atmospheric neutrino flux uncertainties and implementing the Barr scheme in~\cite{Aartsen:2020iky, Aartsen:2020fwb}. \item Performing the analysis for the IceCube search discussed in this thesis. \item Building portions of, developing calibration methods for, and testing new gases in a time projection chamber discussed in~\Cref{ch:MITPC}. \end{itemize} \chapter{MITPC} \label{ch:MITPC} For completeness, this appendix describes work performed early in my doctoral studies on developing a novel neutron detector: the MIT/UMichigan Time Projection Chamber (MITPC).
A helium and carbon tetrafluoride gaseous time projection chamber (TPC) with a CCD camera was developed and deployed at the Double Chooz experiment to measure the fast neutron background~\cite{Hourlier:2016dqe}. It was brought to Fermi National Accelerator Laboratory to measure the beam-induced neutron background on the Booster Neutrino Beamline; it ran in the SciBooNE enclosure. It was necessary to demonstrate that the detector could contain and reconstruct neutron-induced nuclear recoils at higher energies. This was done by replacing the neutron target, helium, with neon. \addcontentsline{toc}{subsection}{\textit{Publication: Demonstrating a directional detector based on neon for characterizing high energy neutrons}} \includepdf[pages={-}]{apb/Hexley_2015_JInst.pdf} \chapter{Supplementary figures} \label{ch:app_results} \subsection{Data and systematic pulls} \label{sec:app_results_pulls} Figure~\ref{fig:pulls_null} shows the data and systematic pulls, as defined in~\Cref{chap:results}, for the frequentist fit assuming three neutrinos. Figure~\ref{fig:pulls_nodecay} shows the data and systematic pulls for the best fit of the subset of points corresponding to $g^2 = 0$, or the traditional 3+1 model. \begin{figure}[h] \centering \includegraphics[width=4.in]{figs_results/pulls_null_2panel.pdf} \caption[Systematic and data pulls assuming three neutrinos]{Systematic and data pulls for the fit assuming three neutrinos. The top panel shows the systematic pulls for each of the nuisance parameters. The large, main panel shows the data pulls for each bin.} \label{fig:pulls_null} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=4.in]{figs_results/pulls_nodecay_2panel.pdf} \caption[Systematic and data pulls for the best traditional 3+1 point]{Systematic and data pulls for the best fit of the subset of points associated with the traditional $3+1$ model, i.e. $g^2 = 0$. The top panel shows the systematic pulls for each of the nuisance parameters. 
The large, main panel shows the data pulls for each bin.} \label{fig:pulls_nodecay} \end{figure} \FloatBarrier \subsection{Fit systematic values} \label{sec:app_results_nuisance_values} Figures~\labelcref{fig:nuis_DE,fig:nuis_I0,fig:nuis_I1,fig:nuis_HI,fig:nuis_norm,fig:nuis_gamma,fig:nuis_atm,fig:nuis_WM,fig:nuis_WP,fig:nuis_YM,fig:nuis_YP,fig:nuis_ZM,fig:nuis_ZP,fig:nuis_AN,fig:nuis_ADG,fig:nuis_nuxs,fig:nuis_anuxs,fig:nuis_KL} show the fit values of each of the systematic parameters across the entire scanned physics space for the frequentist analysis. \begin{figure}[h] \centering \includegraphics[width=\textwidth]{figs_results/nuis/DE.pdf} \caption[The fit values of the DOM efficiency]{The fit values of the DOM efficiency nuisance parameter from the frequentist analysis across the entire physics space. The color scale limits correspond to $\pm 1 \sigma$. ``NBS" means that no bedrock systematic uncertainty was included in this analysis.} \label{fig:nuis_DE} \end{figure} \begin{figure}[h] \centering \includegraphics[width=\textwidth]{figs_results/nuis/I0.pdf} \caption[The fit values of Ice Gradient 0]{The fit values of the Ice Gradient 0 nuisance parameter from the frequentist analysis across the entire physics space. The color scale limits correspond to $\pm 2 \sigma$. ``NBS" means that no bedrock systematic uncertainty was included in this analysis.} \label{fig:nuis_I0} \end{figure} \begin{figure}[h] \centering \includegraphics[width=\textwidth]{figs_results/nuis/I1.pdf} \caption[The fit values of Ice Gradient 1]{The fit values of the Ice Gradient 1 nuisance parameter from the frequentist analysis across the entire physics space. The color scale limits correspond to $\pm 3 \sigma$. 
``NBS" means that no bedrock systematic uncertainty was included in this analysis.} \label{fig:nuis_I1} \end{figure} \begin{figure}[h] \centering \includegraphics[width=\textwidth]{figs_results/nuis/HI.pdf} \caption[The fit values of Hole Ice]{The fit values of the forward hole ice nuisance parameter from the frequentist analysis across the entire physics space. The color scale limits correspond to $\pm 1 \sigma$. ``NBS" means that no bedrock systematic uncertainty was included in this analysis.} \label{fig:nuis_HI} \end{figure} \begin{figure}[h] \centering \includegraphics[width=\textwidth]{figs_results/nuis/norm.pdf} \caption[The fit values of conventional normalization]{The fit values of the conventional atmospheric normalization nuisance parameter from the frequentist analysis across the entire physics space. The color scale limits correspond to $\pm 2 \sigma$. ``NBS" means that no bedrock systematic uncertainty was included in this analysis.} \label{fig:nuis_norm} \end{figure} \begin{figure}[h] \centering \includegraphics[width=\textwidth]{figs_results/nuis/gamma.pdf} \caption[The fit values of the cosmic ray spectral tilt]{The fit values of the cosmic ray spectral tilt nuisance parameter from the frequentist analysis across the entire physics space. The color scale limits correspond to $\pm 3\sigma$. ``NBS" means that no bedrock systematic uncertainty was included in this analysis.} \label{fig:nuis_gamma} \end{figure} \begin{figure}[h] \centering \includegraphics[width=\textwidth]{figs_results/nuis/atm.pdf} \caption[The fit values of the atmospheric density shift]{The fit values of the atmospheric density nuisance parameter from the frequentist analysis across the entire physics space. The color scale limits correspond to $\pm 2\sigma$.
``NBS" means that no bedrock systematic uncertainty was included in this analysis.} \label{fig:nuis_atm} \end{figure} \begin{figure}[h] \centering \includegraphics[width=\textwidth]{figs_results/nuis/WM.pdf} \caption[The fit values of Barr WM]{The fit values of the Barr WM nuisance parameter from the frequentist analysis across the entire physics space. The color scale limits correspond to $\pm 2 \sigma$. ``NBS" means that no bedrock systematic uncertainty was included in this analysis.} \label{fig:nuis_WM} \end{figure} \begin{figure}[h] \centering \includegraphics[width=\textwidth]{figs_results/nuis/WP.pdf} \caption[The fit values of Barr WP]{The fit values of the Barr WP nuisance parameter from the frequentist analysis across the entire physics space. The color scale limits correspond to $\pm 2 \sigma$. ``NBS" means that no bedrock systematic uncertainty was included in this analysis.} \label{fig:nuis_WP} \end{figure} \begin{figure}[h] \centering \includegraphics[width=\textwidth]{figs_results/nuis/YM.pdf} \caption[The fit values of Barr YM]{The fit values of the Barr YM nuisance parameter from the frequentist analysis across the entire physics space. The color scale limits correspond to $\pm 2 \sigma$. ``NBS" means that no bedrock systematic uncertainty was included in this analysis.} \label{fig:nuis_YM} \end{figure} \begin{figure}[h] \centering \includegraphics[width=\textwidth]{figs_results/nuis/YP.pdf} \caption[The fit values of Barr YP]{The fit values of the Barr YP nuisance parameter from the frequentist analysis across the entire physics space. The color scale limits correspond to $\pm 2 \sigma$. ``NBS" means that no bedrock systematic uncertainty was included in this analysis.} \label{fig:nuis_YP} \end{figure} \begin{figure}[h] \centering \includegraphics[width=\textwidth]{figs_results/nuis/ZM.pdf} \caption[The fit values of Barr ZM]{The fit values of the Barr ZM nuisance parameter from the frequentist analysis across the entire physics space. 
The color scale limits correspond to $\pm 2 \sigma$. ``NBS" means that no bedrock systematic uncertainty was included in this analysis.} \label{fig:nuis_ZM} \end{figure} \begin{figure}[h] \centering \includegraphics[width=\textwidth]{figs_results/nuis/ZP.pdf} \caption[The fit values of Barr ZP]{The fit values of the Barr ZP nuisance parameter from the frequentist analysis across the entire physics space. The color scale limits correspond to $\pm 2 \sigma$. ``NBS" means that no bedrock systematic uncertainty was included in this analysis.} \label{fig:nuis_ZP} \end{figure} \begin{figure}[h] \centering \includegraphics[width=\textwidth]{figs_results/nuis/AN.pdf} \caption[The fit values of the astrophysical normalization]{The fit values of the astrophysical normalization nuisance parameter from the frequentist analysis across the entire physics space. The color scale limits correspond to $\pm 2 \sigma$. ``NBS" means that no bedrock systematic uncertainty was included in this analysis.} \label{fig:nuis_AN} \end{figure} \begin{figure}[h] \centering \includegraphics[width=\textwidth]{figs_results/nuis/ADG.pdf} \caption[The fit values of the astrophysical spectral tilt]{The fit values of the astrophysical spectral tilt nuisance parameter from the frequentist analysis across the entire physics space. The color scale limits correspond to $\pm 2 \sigma$. ``NBS" means that no bedrock systematic uncertainty was included in this analysis.} \label{fig:nuis_ADG} \end{figure} \begin{figure}[h] \centering \includegraphics[width=\textwidth]{figs_results/nuis/nuxs.pdf} \caption[The fit values of the neutrino cross section]{The fit values of the neutrino cross section nuisance parameter from the frequentist analysis across the entire physics space. The color scale limits correspond to $\pm 1 \sigma$. 
``NBS" means that no bedrock systematic uncertainty was included in this analysis.} \label{fig:nuis_nuxs} \end{figure} \begin{figure}[h] \centering \includegraphics[width=\textwidth]{figs_results/nuis/anuxs.pdf} \caption[The fit values of the antineutrino cross section]{The fit values of the antineutrino cross section nuisance parameter from the frequentist analysis across the entire physics space. The color scale limits correspond to $\pm 1 \sigma$. ``NBS" means that no bedrock systematic uncertainty was included in this analysis.} \label{fig:nuis_anuxs} \end{figure} \begin{figure}[h] \centering \includegraphics[width=\textwidth]{figs_results/nuis/KL.pdf} \caption[The fit values of the kaon-nucleus cross section]{The fit values of the kaon-nucleus cross section nuisance parameter from the frequentist analysis across the entire physics space. The color scale limits correspond to $\pm 1 \sigma$. ``NBS" means that no bedrock systematic uncertainty was included in this analysis.} \label{fig:nuis_KL} \end{figure} \chapter{Future prospects and conclusions} An eight-year search for light, unstable sterile neutrinos in the IceCube detector has found an anomalous result. The null hypothesis is rejected with a $p$-value of 2.8\%, which is equivalent to $2.2\sigma$. The frequentist analysis finds a best-fit point of $\Delta m^2_{41}$ = 6.7 eV$^2$, $\sin^2 2\theta_{24}$ = 0.33, and $g^2 = 2.5 \pi$. This corresponds to a $\nu_4$ lifetime of $\tau_4 / m_4 = 6 \times 10^{-16}$~s/eV. The traditional 3+1 model is rejected with a $p$-value of 4.9\%. The Bayesian analysis finds a best model with the same $\Delta m^2_{41}$ and $\sin^2 2\theta_{24}$ values as the frequentist best-fit, but the $g^2$ value is $1.5 \pi$. This model is preferred to the three-neutrino model by a factor of 41. The model associated with the frequentist best-fit is also a very good model, and is preferred to the three-neutrino model by a factor of 37. 
This search scanned very coarsely in $g^2$, and somewhat coarsely in $\Delta m_{41}^2$ and $\sin^2 2\theta_{24}$. A finer scan is likely to find a better best fit. The frequentist analysis has assumed Wilks' theorem holds. The confidence interval limits should be verified using the Feldman-Cousins method. If the effective number of degrees of freedom is less than three, the frequentist result will become more significant. A precise sensitivity based on pseudoexperiments should also be calculated. The effect of the systematic uncertainty of the bedrock density should be finalized. If it is an important systematic effect, the best fit point and best model may change. The systematic treatment of the conventional atmospheric neutrino flux can be improved. The cosmic ray flux models used in the development of this thesis do not reflect the wealth of recent cosmic ray data~\cite{Alfaro:2017cwx, Atkin:2017vhi, An:2019wcw, IceCube:2020yct, Adriani:2019aft}. An upgraded cosmic ray flux treatment that incorporates recent data, such as a global fit, would be an important improvement~\cite{Evans:2016obt}. Similarly, the hadronic interaction uncertainties represented by the Barr parameters could be upgraded to reflect more recent measurements~\cite{Gaisser_PPPWNT2020,Aduszkiewicz:2017sei}. The hadronic interaction model used in this work, Sibyll 2.3c, has been upgraded to Sibyll 2.3d~\cite{Engel:2019dsg}. Future searches for light, sterile neutrinos in IceCube should extend to higher energies and consider non-zero values of $\theta_{14}$ and $\theta_{34}$. They can look in an additional oscillation channel by using an independent dataset of cascade events, which correspond to electromagnetic and hadronic showers. One could simultaneously search for a disappearance of track events and an appearance of cascade events due to non-zero $\theta_{34}$. This sterile result is the first reported anomaly in neutrino disappearance experiments.
It is expected to have a significant impact on the sterile neutrino landscape. Both this result and the recently published, standard $3+1$ model result should be incorporated into global fits. These results should reduce tension between appearance and disappearance experiments, but the extent of this remains unknown. Lastly, accelerator-based neutrino experiments, such as MicroBooNE, which have completely different systematic uncertainties than IceCube, can perform follow-up searches for muon neutrino disappearance due to unstable sterile neutrinos. \section*{Acknowledgments} First and foremost, I would like to thank my advisor, Janet Conrad. After I had spent several years away from academia, Janet gave me the opportunity to return, which had felt unlikely, if not impossible. She offered me a position in her research group and guided me into graduate school. Throughout graduate school, she believed in me. She shared interesting research projects with me and always had ideas for solving problems. I may not have gotten a graduate degree without her. This thesis was made possible by collaborative work done by many people. In the IceCube Collaboration, I especially would like to thank Carlos Arg{\"u}elles, Spencer Axani, Alex Diaz, Ben Jones, Grant Parker, Austin Schneider, Ben Smithers, David Vannerom, Blake Watson, and Philip Weigel for their efforts and assistance. I thank Jason Koskinen and Klaus Helbing for their review of the analysis, Anna Pollman, Juanan Aguilar, and Ignacio Taboada for their help in getting the box opened, and Tom Gaisser for useful discussion. For the phenomenology work in this thesis, I extend my thanks to my collaborators Zander Moss, Gabriel Collin, and Mike Shaevitz. I also thank my DCTPC/MITPC collaborators: Josh Spitz, Allie Hexley, Anna Jungbluth, Adrien Hourlier, and Jaime Dawson.
I would not have made it through graduate school without the friends I made at MIT, including Christina Ignarra, John Hardin, Alex Leder, Cheko Cantu, Axel Schmidt, and Lauren Yates, to name a few. Similarly, I am grateful for the friends I made at WIPAC, which include Logan Wille, Josh Wood, Ali Kheirandish, Sarah Mancina, Matt Meehan, Jeff Lazar, Ibrahim Safa, and Donglian Xu. I would like to acknowledge Cathy Modica, Sydney Miller, Nicole Laraia, Kim Krieger, and Tina Chorlton for their administrative assistance. I thank the rest of my thesis committee: Joe Formaggio and Claude Canizares. Lastly, I am grateful for my parents, Ellie Golestani and Javad Moulai, and my sister, Maryam Moulai, for their love and support. \chapter{The IceCube neutrino detector} \label{chap:icecube} \section{The IceCube Neutrino Observatory} \label{sec:icno} The IceCube Neutrino Observatory (ICNO) is located at the Amundsen-Scott South Pole Station in Antarctica~\cite{Aartsen:2016nxy}. The ICNO is depicted in Fig.~\ref{fig:ICNO}, with the Eiffel Tower shown for scale. The ICNO has two detector components. One component is the in-ice array, IceCube, which uses a cubic kilometer of clear, Antarctic ice to detect high-energy neutrinos via the emission of Cherenkov radiation. IceCube is located deep in the ice and is described in section~\ref{sec:inice}. The second component, IceTop, is a surface array that detects cosmic ray air showers and is only mentioned for completeness. As depicted in Fig.~\ref{fig:ICNO}, the in-ice array has a more-densely instrumented sub-array, named DeepCore, which detects neutrinos at lower energies than the main array~\cite{Collaboration:2011ym}. The IceCube Laboratory, housing a farm of computer servers, is at the surface~\cite{Aartsen:2016nxy}. Data collected in the array are sent to the IceCube Laboratory, where they are processed and filtered. Interesting data are sent to the north over satellite. 
\begin{figure}[h] \centering \includegraphics[width=\textwidth]{figs_detector/IceCubeArray_slim.png} \caption[Schematic illustration of IceCube Neutrino Observatory]{Schematic illustration of the IceCube Neutrino Observatory. Credit: IceCube Collaboration.} \label{fig:ICNO} \end{figure} \section{IceCube in-ice array} \label{sec:inice} IceCube instruments a cubic kilometer of ice with optical sensors~\cite{Aartsen:2016nxy}. This large volume is desirable because the atmospheric and astrophysical neutrino fluxes fall steeply with energy. The South Pole ice cap is about 3~km deep, and IceCube instruments depths between 1450~m and 2450~m below the surface. The optical sensors, called Digital Optical Modules (DOMs), are described in~\Cref{sec:dom}. Altogether, 5160 DOMs hang on 86 bundled cables called strings. Of the 86 strings, 78 are spaced 125~m apart horizontally in a hexagonal array. On these strings, the 60 DOMs per string are separated by 17~m vertically. There are 8 additional strings that are part of DeepCore. These DeepCore-specific strings are positioned closer together horizontally, and below a depth of 1750~m, their DOMs are positioned closer together vertically~\cite{Collaboration:2011ym}. \begin{figure}[t!] \centering \includegraphics[width=0.75\textwidth]{figs_detector/DOM.png} \caption[IceCube digital optical module]{Schematic illustration of an IceCube digital optical module (DOM). Credit: IceCube Collaboration.} \label{fig:dom} \end{figure} \section{Digital Optical Module} \label{sec:dom} The DOM is the basic unit of both detection and data acquisition (DAQ) in IceCube. DOMs are capable of bidirectional communication with adjacent DOMs and the central DAQ at the IceCube Laboratory, and of executing calibration tasks. A schematic of a digital optical module (DOM) is shown in Fig.~\ref{fig:dom}. Each DOM houses a 10'' Hamamatsu photo-multiplier tube (PMT).
The regular DOMs use Hamamatsu R7081-02 PMTs while some DOMs in DeepCore use a higher quantum efficiency version, the Hamamatsu R7081-02MOD~\cite{Abbasi_2010}. The PMT is positioned downwards-facing in a glass pressure sphere. Approximately 1~cm of clear silicone gel between the PMT window and the glass housing both mechanically supports the PMT and optically couples it to the glass. The glass housing is made of low-radioactivity borosilicate glass and can withstand 250~bar~\cite{Aartsen:2016nxy}. A mu-metal grid surrounds the bulb of the PMT to shield it from the ambient South Pole magnetic field, which is 550~mG at an angle of 17$^{\circ}$ from vertical. DOMs have several circuit boards. The Main Board digitizes PMT signals, communicates with adjacent DOMs and the central DAQ, and powers and directs all DOM subsystems as needed. The LED Flasher Board produces light within the array for the purposes of calibration~\cite{Aartsen:2013rt,Aartsen:2016nxy}. The standard Flasher Board has 12 light-emitting diodes, with six pointing out horizontally and six tilted up at 51.6$^{\circ}$. The Flasher Board is used to calibrate the position of DOMs in the ice and to measure the optical properties of the ice. The remaining circuit boards produce the high voltage for the PMT, set and monitor the PMT high voltage, and delay PMT signals. \section{The ice} \label{sec:ice} The deep glacial ice at the South Pole is a good medium for detection of Cherenkov radiation because it is exceptionally clear: the effective scattering length and absorption length are of order 10~m and 100~m, respectively, for light at 400~nm. At depths below 1450~m, there are no bubbles naturally present~\cite{Lundberg:2007mf}. There are, however, impurities such as dust. The concentration of dust varies throughout the ice in a tilted, depth-dependent manner~\cite{Ackermann:2006pva}.
Most notable is the presence of an approximately 100~m thick dust layer at a depth of about 2000~m; in this dust layer, scattering and absorption of light are much stronger~\cite{Aartsen:2013rt}. The temperature and pressure also vary in the ice, modifying photon propagation. In addition to being inhomogeneous, the optical properties of the ice have been found to be anisotropic~\cite{Aartsen:2013ola}. Light birefringence in ice polycrystals has been proposed as an explanation of the anisotropy~\cite{Chirkin:2019vyq}. The optical properties thus require calibration and modeling~\cite{Aartsen:2016nxy, Lundberg:2007mf, Aartsen:2013ola}. The dust concentration and tilt of the ice have been measured with laser dust loggers. After six IceCube boreholes were water drilled for the installation of strings, laser dust loggers were deployed to measure the amount of dust. Optical profiles of the boreholes were produced, with a resolution of approximately 2~mm~\cite{doi:10.1029/2009JD013741}. A tilt map of the ice was produced from comparison of measurements as a function of depth for the different boreholes. The optical properties of the glacial ice can be described with a six-parameter ice model and a table of depth-dependent coefficients~\cite{Ackermann:2006pva, Aartsen:2013rt}. The scattering component of the model makes use of Mie scattering theory, which describes the scattering of photons off targets of general sizes and predicts the scattering angle distribution for any wavelength~\cite{Lundberg:2007mf, doi:10.1002/andp.19083300302}. The depth-dependent coefficients correspond to scattering and absorption lengths. The absorption coefficient is the inverse of the average distance a photon travels before absorption. The scattering coefficient is the inverse of the average distance between consecutive scatters; an effective scattering coefficient is more useful for modeling the ice, as most scatters are in the forward direction.
The coefficients are estimated for the wavelength 400~nm. The wavelength-dependence of these coefficients is modelled. The coefficients can be considered average values for 10~m thick layers. Calibration data, using light from the LEDs in the DOMs, are fit to this model to determine these coefficients~\cite{Aartsen:2013rt}. There is excellent agreement between the effective scattering coefficient extracted from flasher data and the average dust log~\cite{Aartsen:2013rt}. The values of the effective scattering coefficient and the dust absorption coefficient at 400~nm for the ice model used in this work are shown in Fig.~\ref{fig:ice_coefficients}. \begin{figure}[htb!] \centering \includegraphics[width=\textwidth]{figs_detector/spice32.pdf} \caption[Ice model scattering and absorption coefficients]{Effective scattering and dust absorption coefficients at 400~nm from the Spice3.2 ice model.} \label{fig:ice_coefficients} \end{figure} The ``hole ice'' is the refrozen ice in the boreholes that had been hot-water drilled for installation of the detector~\cite{Aartsen:2013rt, Aartsen:2016nxy}. The hole ice is about 60~cm in diameter and froze naturally. It has worse optical properties than the undisturbed glacial ice. The hole ice has two components. The outer component, at radii of about 8--30~cm, is clear, but has an effective scattering length of about 50~cm, greatly diminished compared to that of the glacial ice. The inner component, about 16~cm in diameter, is called the bubble column. It is hypothesized that air introduced during drilling is pushed to the center of the borehole during refreezing. The bubble column has an even shorter scattering length. Two video cameras deployed on one of the strings recorded the freezing process and confirm the presence of the bubble column, which appears opaque; fits to calibration flasher data also support its presence.
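To make the coefficient definitions above concrete, the short sketch below converts coefficients into the corresponding lengths and evaluates a photon survival fraction. The numerical values are round-number assumptions for illustration only, not entries from the Spice3.2 tables.

```python
import math

def survival_fraction(distance_m, absorption_length_m):
    """Fraction of photons not yet absorbed after traveling a given distance."""
    return math.exp(-distance_m / absorption_length_m)

# Illustrative order-of-magnitude values near 400 nm (not Spice3.2 numbers).
b_eff = 1 / 25.0    # effective scattering coefficient [1/m]
a_dust = 1 / 100.0  # dust absorption coefficient [1/m]

# The lengths are the inverses of the coefficients, as defined in the text.
eff_scattering_length = 1 / b_eff  # average distance between effective scatters
absorption_length = 1 / a_dust     # average distance before absorption

print(eff_scattering_length)                        # ~25 m
print(survival_fraction(125.0, absorption_length))  # survival over one string spacing
```

With these assumed values, a photon crossing one 125~m string spacing survives absorption with probability $e^{-1.25} \approx 0.29$, which illustrates why the inter-string spacing is comparable to the absorption length.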
\begin{figure}[!tb] \centering \includegraphics[width=0.75\textwidth]{figs_detector/domfig3-DOMDataFlow} \caption[Data flow from PMT to central DAQ]{Data flow from PMT to central DAQ at the surface. Figure from~\cite{Aartsen:2016nxy}.} \label{fig:dom_dataflow} \end{figure} \section{Data acquisition} \label{sec:dom_daq} Data flow from the PMT to the central DAQ at the IceCube Laboratory is depicted schematically in Fig.~\ref{fig:dom_dataflow}. The PMT amplifies a single photoelectron with a gain of about $10^7$~\cite{Abbasi_2010}. The output signal of the PMT is AC-coupled with a toroidal transformer~\cite{Aartsen:2016nxy}. This output is then routed to a discriminator, which is set to fire when the voltage corresponds to 0.25 single photoelectrons (SPE). This is called a ``hit'' and begins a ``launch''. The time is captured. Four digitizers, one continuously sampling fast Analog-to-Digital Converter (fADC) and three custom Analog Transient Waveform Digitizers (ATWD), are used to capture the PMT signal. The signal is routed through a delay board before reaching the three ATWDs. The three ATWDs have low, medium, and high gain, to capture a wide range of PMT signals. Using dedicated wiring, the DOM communicates with the nearest and next-to-nearest DOMs up and down on the string, to determine whether there is a local coincidence of hits. This determines how much information is transmitted to the surface. In the case of a local coincidence, the hit is called a Hard Local Coincidence (HLC) hit, and a timestamp, charge summary and waveform data are sent. In the case of no local coincidence, the hit is called a Soft Local Coincidence (SLC) hit, and only a timestamp and charge summary are sent. At the IceCube Laboratory, a filter searches for events likely to correspond to muons~\cite{Aartsen:2016nxy}. Due to their morphology, these events are called tracks. All up-going tracks are selected. Data are sent to the north over satellite.
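The HLC/SLC decision described above can be sketched as a simple classifier. This is a minimal illustration only: the coincidence window of 1000~ns is an assumed value, and the actual readout logic is defined in the DAQ reference cited above.

```python
def classify_hit(hit_time_ns, neighbor_hit_times_ns, window_ns=1000.0):
    """Classify a DOM launch as HLC or SLC.

    A hit is HLC if any of the nearest or next-to-nearest DOMs on the same
    string also launched within the coincidence window; otherwise it is SLC.
    The 1000 ns window here is an illustrative assumption.
    """
    for t in neighbor_hit_times_ns:
        if abs(t - hit_time_ns) <= window_ns:
            return "HLC"  # timestamp, charge summary, and waveform data sent
    return "SLC"          # only timestamp and charge summary sent

print(classify_hit(5000.0, [5600.0, 12000.0]))  # HLC: a neighbor fired within the window
print(classify_hit(5000.0, [12000.0]))          # SLC: no coincident neighbor
```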
\clearpage \section{Description of neutrino interaction} At energies above 100~GeV, neutrinos interact via Deep Inelastic Scattering (DIS), where a neutrino scatters off one quark in a nucleon~\cite{Formaggio:2013kya}. The products of this interaction are a hadronic shower and an outgoing lepton. In muon neutrino charged-current interactions, the outgoing lepton is a muon. The leading-order Feynman diagram for charged-current muon neutrino DIS is shown in Fig.~\ref{fig:dis}. The cross sections for charged-current interactions above 100~GeV are given in Fig.~\ref{fig:xs}, for both neutrinos, in light green, and antineutrinos, in dark green. The fraction of neutrino energy, $E_\nu$, that goes into the hadronic shower is the inelasticity, $y$. The muon energy, $E_\mu$, is then $(1-y)E_\nu$. For energies in the range 100~GeV -- 20~TeV, the mean inelasticity, $\langle y \rangle$, is approximately 0.4 -- 0.5~\cite{Gandhi:1995tf, Aartsen:2018vez}. The mean inelasticity as a function of neutrino energy is shown in Fig.~\ref{fig:inelasticity}. The curves are mean inelasticity predictions for neutrinos (blue), antineutrinos (green), and the flux-averaged combination of neutrinos and antineutrinos (red). The black crosses show measurements from IceCube. \begin{figure}[!htb] \centering \includegraphics[width=0.75\textwidth]{figs_detector/feynman-dis} \caption[Feynman diagram for deep inelastic scattering]{Feynman diagram for the first-order contribution to muon-neutrino, charged-current DIS. Figure from~\cite{Conrad:1997ne}.} \label{fig:dis} \end{figure} \begin{figure}[htb!] \centering \includegraphics[clip, trim =0cm 0cm 10cm 0cm , width=0.75\textwidth]{figs_detector/fig13} \caption[Charged-current cross sections]{Neutrino and antineutrino charged-current cross sections. Figure from~\cite{CooperSarkar:2011pa}.} \label{fig:xs} \end{figure} \begin{figure}[htb!]
\centering \includegraphics[width=0.75\textwidth]{figs_detector/split_fit_inel} \caption[Mean inelasticity versus neutrino energy]{Mean inelasticity as a function of neutrino energy. Figure from~\cite{Aartsen:2018vez}.} \label{fig:inelasticity} \end{figure} \clearpage \begin{figure}[h] \centering \includegraphics[width=4in]{figs_detector/thesis_zdist.pdf} \caption[Depth distribution of interactions from simulation]{Distribution of neutrino interaction depth relative to IceCube, from simulation. The vertical line at -862~m denotes the transition between bedrock and ice. The vertical line at -500~m denotes the bottom of the detector.} \label{fig:bedrock_position} \end{figure} In the events relevant for this thesis, the neutrino interaction occurs either in the bedrock or in the ice below IceCube. About 20\% of the interactions occur in the bedrock, as shown in Fig.~\ref{fig:bedrock_position}. The hadronic shower is contained outside the detector. The muon enters the detector and is observed. The muon, traveling faster than the speed of light in ice, will radiate Cherenkov light, which may be detected by DOMs. The muon will lose energy as it traverses the ice. Ionization, pair production, bremsstrahlung and photonuclear interactions all contribute to muon energy losses~\cite{Groom:2001kq}. The average stopping power of each process as a function of muon energy is shown in Fig.~\ref{fig:PROPOSAL} (top). Below approximately 2~TeV, ionization dominates. Above this energy, the other, radiative processes dominate. Secondary particles produced in these interactions include delta electrons, bremsstrahlung photons, excited nuclei and pairs of electrons~\cite{KOEHNE20132070}. The energy spectra for these secondary particles, for the case of a 10~TeV muon traveling through rock, are given in Fig.~\ref{fig:PROPOSAL} (bottom). Charged secondaries of sufficiently high energy will themselves produce Cherenkov radiation, which may be detected by DOMs.
These processes are stochastic, resulting in bursts of energy loss along the muon's trajectory. The muon is likely to exit the detector, as muons in the energy range 500~GeV -- 10~TeV can travel through kilometers of ice. The median range of muons is shown in Fig.~\ref{fig:jvs_muon_range}. Event displays for two muons are shown in Figs.~\ref{fig:event_display0} and \ref{fig:event_display1}. Muons and antimuons have opposite electric charges. Other experiments may distinguish them by imposing a magnetic field and observing the direction in which their trajectories curve. IceCube has no such magnetic field, and therefore no ability to distinguish between muons and antimuons. This means that IceCube cannot distinguish between muon neutrinos, which would be tagged by the presence of a muon, and muon antineutrinos, which would be tagged by the presence of an antimuon. \begin{figure} \centering \includegraphics[width=0.75\textwidth]{figs_detector/jvs_muon_range.png} \caption[Muon range in ice]{Muon range in ice as a function of initial muon energy, calculated with \texttt{PROPOSAL}~\cite{KOEHNE20132070}. The solid, red curve shows the median muon range. The dotted blue curve shows the range at which 50\% of muons will have kinetic energy larger than 1~TeV. Figure from \cite{vanSanten:2014kqa}.} \label{fig:jvs_muon_range} \end{figure} \begin{figure}[htb!] \centering \includegraphics[width=0.6\textwidth]{figs_detector/PROPOSAL_muon_E_loss.png} \includegraphics[width=0.6\textwidth]{figs_detector/PROPOSAL_secondaries.png} \caption[Muon energy losses in ice and secondary spectra]{(Top) Energy losses for muons in ice. (Bottom) Energy spectra of secondary particles for a 10~TeV muon traveling through rock. All calculations performed with \texttt{PROPOSAL}~\cite{KOEHNE20132070}. Figures from \cite{KOEHNE20132070}.
} \label{fig:PROPOSAL} \end{figure} \begin{figure}[h] \centering \includegraphics[clip, trim = 5cm 0cm 5cm 0, width=0.9\textwidth]{figs_detector/MEOWS7.png} \caption[Event display for a 5.2 TeV muon]{Event display for an up-going muon. The thin, grey, vertical lines correspond to the 86 IceCube strings. The small grey dots on the strings are the DOMs that did not see light. The DOMs that observed light are shown as colored spheres. The color of the sphere indicates timing, where red DOMs saw light at the earliest times and blue DOMs at the latest times. The larger the size of the sphere, the more light was observed. The reconstructed muon trajectory is shown as a red line. The reconstructed muon energy is 5.2~TeV. Figure courtesy of Sarah Mancina.} \label{fig:event_display0} \end{figure} \begin{figure}[h] \centering \includegraphics[clip, trim = 5cm 0cm 5cm 0, width=0.9\textwidth]{figs_detector/MEOWS2.png} \caption[Event display for a 7.0 TeV muon]{Event display for an up-going muon. The thin, grey, vertical lines correspond to the 86 IceCube strings. The small grey dots on the strings are the DOMs that did not see light. The DOMs that observed light are shown as colored spheres. The color of the sphere indicates timing, where red DOMs saw light at the earliest times and blue DOMs at the latest times. The larger the size of the sphere, the more light was observed. The reconstructed muon trajectory is shown as a red line. The reconstructed muon energy is 7.0~TeV. Figure courtesy of Sarah Mancina.} \label{fig:event_display1} \end{figure} \section{Muon reconstruction} The analysis is performed using reconstructed muon properties: the reconstructed muon energy -- also referred to as the muon energy proxy -- and the reconstructed cosine of the muon's zenith angle, $\cos \theta_{\rm{z}}$, where the zenith angle is the angle with respect to the North Pole-South Pole axis.
Events from the horizon have $\cos \theta_{\rm{z}} = 0$, while events coming from the North Pole have $\cos \theta_{\rm{z}} = -1$. The direction of the muon is reconstructed based on the arrival time of the first photon to reach each DOM, as well as the charge of all pulses from each DOM~\cite{Weaver:2015bja, Axani:2020zdv}. The reconstruction algorithm considers the shape of the Cherenkov cone, as well as scattering and absorption of photons within the ice. The resolution of the angular reconstruction is 0.005 -- 0.015 in $\cos \theta_{\rm{z}}$. The angular reconstruction is described further in~\cite{Axani:2020zdv}. The energy resolution is much worse than the angular resolution. It is large enough that it is described in terms of the $\log_{10}$ of the energy: $\sigma_{\log_{10}(E_\mu)} \sim 0.4$. This is due to the stochastic nature of the muon's energy loss and the fact that the muon is not contained within the detector. Both of these are evident in Figs.~\ref{fig:event_display0} and \ref{fig:event_display1}. The muon energy is related to its energy loss per unit length, which is proportional to the light produced and thus the light observed by the detector. The muon energy reconstruction algorithm used here, \texttt{MuEx}, uses an analytical expression for the observed light, taking into consideration the depth-dependent optical properties of the ice~\cite{Weaver:2015bja, Aartsen:2013vja}. The relationship between neutrino energy and reconstructed muon energy is shown in Figs.~\ref{fig:muex_vs_nu_true} and \ref{fig:nu_true_vs_muex}.
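The meaning of a resolution quoted in $\log_{10}$ of the energy can be illustrated with a toy smearing model. The Gaussian-in-$\log_{10}$ shape below is an assumption for illustration only, not the actual detector response used in the analysis.

```python
import math
import random

def smear_muon_energy(e_mu_gev, sigma_log10=0.4, rng=random):
    """Toy energy-proxy model: Gaussian smearing in log10 of the energy."""
    log10_e = math.log10(e_mu_gev) + rng.gauss(0.0, sigma_log10)
    return 10.0 ** log10_e

# Smear a 1 TeV muon many times with sigma_log10(E) = 0.4.
rng = random.Random(42)
samples = [smear_muon_energy(1000.0, rng=rng) for _ in range(10000)]

# One sigma in log10 space corresponds to a multiplicative factor of ~2.5,
# so a 1 TeV muon is frequently reconstructed between ~0.4 TeV and ~2.5 TeV.
spread = 10.0 ** 0.4
```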
\begin{figure}[h] \centering \includegraphics[width=0.9\textwidth]{figs_detector/reco_muon_vs_neutrino_energy.pdf} \caption[Reconstructed muon energy versus true neutrino energy]{The probability of obtaining a given reconstructed muon energy as a function of true neutrino energy.} \label{fig:muex_vs_nu_true} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.9\textwidth]{figs_detector/neutrino_energy_vs_muon_reco} \caption[True neutrino energy versus reconstructed muon energy]{The probability distribution of neutrino energy as a function of reconstructed muon energy. This probability distribution assumes a neutrino flux proportional to $E^{-2}$, where $E$ is the neutrino energy.} \label{fig:nu_true_vs_muex} \end{figure} \chapter{Atmospheric neutrino fluxes} \label{chap:flux} \section{The coupled cascade equations} Atmospheric neutrinos comprise the vast majority of neutrinos observed by the IceCube detector. They arise from the interactions of primary cosmic rays with nuclei in the Earth's atmosphere~\cite{gaisser2016cosmic}. These collisions produce hadronic showers. Secondary particles decay to atmospheric neutrinos. A cosmic ray air shower is depicted schematically in Fig.~\ref{fig:cosmic_ray_air_shower}. The evolution of particle populations during the hadronic cascade is described by the coupled cascade equations~\cite{gaisser2016cosmic}. These are coupled, differential transport equations that describe the propagation, interaction, and decay of particles. The coupled cascade equation for the flux, $\Phi_{E_i}^h$, of particle $h$ in discrete energy bin $E_i$ and at atmospheric slant depth, $X$, is given by Eq.~\ref{eq:cascade}, \begin{figure}[H] \centering \includegraphics[width=0.75\textwidth]{figs_flux/cosmic_ray.jpg} \caption[Schematic illustration of a cosmic ray air shower]{Schematic illustration of a cosmic ray air shower. 
Figure from CERN~\cite{CERN_cosmicrays}.} \label{fig:cosmic_ray_air_shower} \end{figure} \begin{equation} \begin{split} \frac{d \Phi_{E_i}^h}{d X} = &- \frac{\Phi_{E_i}^h}{\lambda^h_{\textrm{int},\:E_i}} \\ &- \frac{\Phi_{E_i}^h}{\lambda^h_{\textrm{dec},\:E_i}(X)} \\ &+ \sum_{E_k \geq E_i} \sum_l \frac{c_{l(E_k) \rightarrow h(E_i)}}{\lambda^l_{\textrm{int},\:E_k}} \Phi^l_{E_k} \\ &+ \sum_{E_k \geq E_i} \sum_l \frac{d_{l(E_k) \rightarrow h(E_i)}}{\lambda^l_{\textrm{dec},\:E_k}(X)} \Phi^l_{E_k}. \end{split} \label{eq:cascade} \end{equation} The first term accounts for the loss of particle $h$ due to interactions, with interaction length \begin{equation} \lambda^h_{\textrm{int},\:E_i} = \frac{m_{\textrm{air}}}{\sigma^{\textrm{inel}}_{h-\textrm{air}}}, \end{equation} where $m_{\textrm{air}}$ is the mean mass of nuclei in the air, and $\sigma^{\textrm{inel}}_{h-\textrm{air}}$ is the inelastic cross section between particle $h$ and an air nucleus. The second term accounts for loss of particle $h$ due to decay, with decay length \begin{equation} \lambda^h_{\textrm{dec},\:E_i}(X) = \frac{c \tau_h E_i \rho_{\textrm{air}}(X)}{m_h}, \end{equation} where $\tau_h$ and $m_h$ are the lifetime and mass of particle $h$, and $\rho_{\textrm{air}}(X)$ is the density of the air at slant depth $X$. The third term accounts for gain of particle $h$ due to interactions of other particle species; the coefficients representing this, $c_{l(E_k) \rightarrow h(E_i)}$, are determined from hadronic interaction models. The fourth term accounts for gain of particle $h$ due to the decay of other particle species. The atmospheric slant depth, $X$, is given by \begin{equation} \label{eq:slantdepth} X(h_0) = \int_0^{h_0} dl \, \rho_{\rm{air}}(h_{\rm{atm}}(l)), \end{equation} where $\rho_{\rm{air}}$ is the mass density of the air as a function of the atmospheric height, $h_{\rm{atm}}$, and the density is integrated along the path, $l$, of the cascade.
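The decay-length formula above can be evaluated for a concrete case. The sketch below uses the standard charged-pion values $c\tau = 780.45$~cm and $m = 0.13957$~GeV; the air density of $10^{-4}$~g/cm$^3$ is an assumed, illustrative high-altitude value.

```python
def decay_length_slant(c_tau_cm, mass_gev, energy_gev, rho_air_g_cm3):
    """Decay length in slant-depth units [g/cm^2]: lambda = c*tau*(E/m)*rho."""
    return c_tau_cm * (energy_gev / mass_gev) * rho_air_g_cm3

# Charged pion: c*tau = 780.45 cm, m = 0.13957 GeV (standard values).
# rho_air = 1e-4 g/cm^3 is an assumed high-altitude density for illustration.
lam_dec = decay_length_slant(780.45, 0.13957, 100.0, 1e-4)
print(round(lam_dec, 1))  # ~55.9 g/cm^2 for a 100 GeV pion
```

Because $\lambda_{\textrm{dec}}$ grows linearly with energy, higher-energy mesons are increasingly likely to interact before decaying, which steepens the conventional neutrino spectrum relative to the parent cosmic ray spectrum.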
In this work, the atmospheric neutrino flux is calculated with the program \texttt{Matrix Cascade Equations} (\texttt{MCEq}). \texttt{MCEq} casts the coupled cascade equations as a matrix equation, which it solves numerically~\cite{Fedynitch:2015zma, fedynitch2012influence}. Over fifty baryons, mesons, and leptons are considered in the calculations. The calculation requires three inputs; these inputs are informed by a wide range of collider and fixed target experiments, as we will discuss below. One input is the primary cosmic ray model, which describes the composition and spectra of the incoming particles. Another is the hadronic interaction model, which describes the cross sections of interactions in the cascade. The last is the description of the temperature or density profile of the atmosphere. The combined muon neutrino and antineutrino spectrum, and partial contributions from the various parent particles, are shown in Fig.~\ref{fig:mceq_anatoli_fluxes}. These spectra were calculated with \texttt{MCEq} and assume Thunman-Ingelman-Gondolo~\cite{Gondolo:1995fq} as the primary cosmic ray flux model and SIBYLL-2.3 RC1~\cite{Engel:2015} as the hadronic interaction model. The atmospheric muon neutrino flux is composed of the \textit{conventional} component and the \textit{prompt} component, shown in Fig.~\ref{fig:mceq_anatoli_fluxes}. The conventional flux originates in the decays of charged pions, charged kaons, and muons. Muon neutrinos come from the decays: \begin{equation} \begin{split} \begin{aligned} \pi^+ &\rightarrow \mu^+ + \nu_\mu \\ K^+ &\rightarrow \mu^+ + \nu_\mu \\ \mu^- &\rightarrow e^- + \bar{\nu}_e + \nu_\mu \end{aligned} \end{split} \label{eq:atm_flux_decays} \end{equation} while muon antineutrinos come from the decays of the oppositely charged particles. Above around 100~GeV, neutrinos from kaon decays dominate over those from either pion or muon decays.
The prompt flux comes from the decays of heavier particles, such as charmed mesons, although it has yet to be observed. It is expected to be dominant at energies above about $10^5$~GeV. \begin{figure} \centering \includegraphics[clip, trim = 13cm 8.5cm 0cm 0cm, width=0.8\textwidth]{figs_flux/anatoli_detailed_flux.pdf} \caption[Energy spectra of atmospheric muon neutrinos and their various constituents]{Energy spectrum of atmospheric muon neutrinos and antineutrinos, as well as the partial contributions to the total flux, as calculated with \texttt{MCEq}. The spectrum is scaled by the energy cubed. Figure from \cite{Fedynitch:2015zma}.} \label{fig:mceq_anatoli_fluxes} \end{figure} \section{Cosmic ray and hadronic interaction models} There is a range of available models that describe the flux and composition of primary cosmic rays. The various models make different assumptions about the sources of cosmic rays and also are informed by different datasets. The models considered in this work are PolyGonato~\cite{Hoerandel:2002yg}, Hillas-Gaisser 2012 H3a~\cite{Gaisser:2012zz}, and Zatsepin-Sokolskaya/PAMELA~\cite{Zatsepin:2006, Adriani69}. These models account for cosmic ray hydrogen, helium, and heavier nuclei. The Hillas-Gaisser 2012 H3a model, which is the one used in this work, includes five groups of nuclei: hydrogen, helium, carbon-nitrogen-oxygen, magnesium-aluminium-silicon, and iron. The all-particle cosmic ray energy spectra predicted by these models are shown by the curves in Fig.~\ref{fig:fedynithc_cr_models}. Data, from various ground-based arrays, are shown as points~\cite{takeda2003energy, nagano1984energy, Arqueros:1999uq, bird1994cosmic, Grigorov:1970xu, Glasmacher:1999id, Abbasi:2007sv, asakimori_93, asakimori_91, Apel:2011mi, ANTONI20051, danilova_77, Fomin_91, Tsunesada:2011mp, Amenomori:2008aa, Amenomori:1995nt}. The abrupt change in the all-particle spectrum at about 10$^{6.5}$~GeV is called the ``knee'' of the cosmic ray spectrum.
One can see that the data sets have a large spread in the knee region, and this will need to be addressed as a systematic uncertainty. \begin{figure}[H] \centering \includegraphics[width=0.85\textwidth]{figs_flux/primary_models_v2.pdf} \caption[All-particle cosmic ray spectrum]{All-particle cosmic ray spectrum. Model predictions are shown as solid or dashed curves. Data from ground-based arrays are shown as points~\cite{takeda2003energy, nagano1984energy, Arqueros:1999uq, bird1994cosmic, Grigorov:1970xu, Glasmacher:1999id, Abbasi:2007sv, asakimori_93, asakimori_91, Apel:2011mi, ANTONI20051, danilova_77, Fomin_91, Tsunesada:2011mp, Amenomori:2008aa, Amenomori:1995nt}. Figure from \cite{fedynitch2012influence}.} \label{fig:fedynithc_cr_models} \end{figure} A selection of models are available to describe the hadronic interactions in cosmic ray air showers. The models considered in this work are SIBYLL-2.3c~\cite{Riehn:2017mfm}, SIBYLL-2.3~\cite{Riehn:2015oba}, EPOS-LHC~\cite{Pierog:2013ria}, and QGSJET-II-04~\cite{Ostapchenko:2010vb}. These models are tuned to data from accelerator experiments. Data from the Large Hadron Collider, which has run at center-of-mass energies around 10~TeV, can constrain the interactions of cosmic ray particles at the knee~\cite{dEnterria:2011twh}. This is because of the Lorentz transformation between these two reference frames: detectors at the Large Hadron Collider operate in the center-of-mass frame, while the atmosphere, which is impinged by cosmic rays, is effectively a fixed target. The center-of-mass energy of cosmic ray interactions is \begin{equation} \sqrt{s} \simeq \sqrt{2 E m}, \end{equation} where $E$ is the energy of the cosmic ray, $m$ is the mass of the target particle, and the mass of the cosmic ray particle is negligible compared to its energy. The hadronic interaction models do not come with uncertainties and there is significant discrepancy between the models~\cite{Pierog_2019}.
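The fixed-target relation above can be checked with a one-line calculation. The proton target mass is a standard value; the choice of a knee-energy primary is illustrative.

```python
import math

def sqrt_s_fixed_target(e_cosmic_gev, m_target_gev=0.938):
    """Center-of-mass energy for a cosmic ray on an (effectively) fixed target."""
    return math.sqrt(2.0 * e_cosmic_gev * m_target_gev)

# A proton at the knee (~10**6.5 GeV) striking a stationary nucleon:
e_knee = 10 ** 6.5
print(round(sqrt_s_fixed_target(e_knee) / 1000.0, 1))  # ~2.4 TeV, within LHC reach
```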
Three of the properties that vary between the models are the inelastic cross sections, the shower multiplicity, and the interaction elasticity. This leads to different predictions of the atmospheric neutrino and antineutrino fluxes. In particular, the ratio of muon neutrinos to muon antineutrinos varies substantially. This is an important systematic uncertainty for the work in this thesis, as the matter resonance due to the presence of a sterile neutrino is expected to appear in the muon antineutrino flux alone. \section{Barr scheme for hadronic interaction uncertainties} The Barr scheme is a method for assigning hadronic interaction uncertainties for the purpose of calculating conventional atmospheric neutrino fluxes~\cite{Robbins:820971, Barr:2006it}. Uncertainties are assigned based on measurements of the production of charged pions and kaons from projectiles impinging on targets, \begin{equation} \begin{split} \begin{aligned} p N &\rightarrow \pi^\pm X \\ p N &\rightarrow K^\pm X. \\ \end{aligned} \end{split} \end{equation} The projectile, $p$, is most often a proton. The target, $N$, is a light element, for example, beryllium, carbon or aluminium. The other products, $X$, are unconstrained. Uncertainties are given as a function of $E_i$, the incident particle's energy, and $x_{\textrm{lab}}$, the fraction of the incident particle's energy that is carried by the secondary meson. Uncertainties are assigned based on the consistency between datasets, as well as the amount of extrapolation that is necessary between different nuclear targets, energies, and values of the transverse momentum. The assigned uncertainties for incident particle energies above 30~GeV are given in Table~\ref{tab:barr_uncertainties}.
\begin{table} \centering \begin{tabular}{c c c c c} \hline Label & Incident energy, $E_i$ & Meson & $x_{\textrm{lab}}$ & Uncertainty \\ \hline \hline G & >30 GeV & $\pi^{\pm}$ & $0-0.1$ & 30\% \\ H & >30 GeV & $\pi^{\pm}$ & $0.1-1$ & 15\% \\ I & >500 GeV & $\pi^{\pm}$ & $0.1-1$ & 12.2\% $\times \log_{10}(E_i/500~\textrm{GeV})$\\ \hline W & >30 GeV & $K^{\pm}$ & $0-0.1$ & 40\% \\ Y & >30 GeV & $K^{\pm}$ & $0.1-1$ & 30\% \\ Z & >500 GeV & $K^{\pm}$ & $0.1-1$ & 12.2\% $\times \log_{10}(E_i/500~\textrm{GeV})$\\ \hline \end{tabular} \caption[Estimated uncertainties in the production of charged kaons and pions]{Estimated uncertainties in the production of charged kaons and pions. Values from \cite{Barr:2006it}.} \label{tab:barr_uncertainties} \end{table} Since the conventional flux in the energy range relevant for this thesis predominantly originates from kaon decays, the kaon production uncertainties are more important than the pion production uncertainties. The G, H, and I uncertainties from Table~\ref{tab:barr_uncertainties} are therefore not explicitly incorporated into the analysis. The uncertainties on the production of oppositely charged kaons are considered separately. Six \textit{Barr parameters} -- WP, WM, YP, YM, ZP, ZM -- refer to the uncertainties in Table~\ref{tab:barr_uncertainties}, where the suffix P refers to the production of positively charged kaons, and the suffix M to that of negatively charged kaons. The propagation of these uncertainties to the muon neutrino and antineutrino fluxes is calculated with \texttt{MCEq} as in~\cite{Fedynitch:2017M4, Yanez:20197d}. For each of the six Barr parameters, $\mathcal{B}_i$, muon neutrino and antineutrino flux derivatives are calculated: \begin{equation} \frac{\partial \Phi}{\partial \mathcal{B}_i} = \frac{\Phi(\mathcal{B}_i = \delta) - \Phi(\mathcal{B}_i = -\delta)}{2 \delta}.
\end{equation} For example, $\Phi(\textrm{YP} = \delta)$ is the flux where $K^+$ production in the region corresponding to Y has been increased by the relative amount $\delta$ compared to the nominal model. If one assumes the nominal flux prediction, $\Phi_0$, underestimates charged meson production by $\mathcal{B}_i$ for each of the respective regions in this parameterization, a corrected flux prediction is \begin{equation} \Phi(\mathcal{B}_1,\mathcal{B}_2,...) = \Phi_0 + \sum_i \mathcal{B}_i \frac{\partial \Phi}{\partial \mathcal{B}_i}. \end{equation} \section{Atmospheric flux uncertainty treatment} The various combinations of cosmic ray and hadronic interaction models yield discrete predictions for the atmospheric muon neutrino and antineutrino fluxes, as well as their ratio. It is unknown which is the closest to the truth, and how close it is. A set of eight physically-motivated systematic uncertainties allows for a continuous range of flux predictions that cover the discrete ones. The normalization and composition of cosmic rays above the knee have a large uncertainty~\cite{fedynitch2012influence}. This leads to a large uncertainty in the normalization of the atmospheric neutrino flux. Here it is taken to be 40\%. Additionally, there is an uncertainty in the spectral index of the cosmic ray flux, leading to a corresponding uncertainty in the neutrino flux. In this work, the effect of the uncertainty in the spectral index, $\Delta \gamma$, on the neutrino flux is parameterized as \begin{equation} \Phi(E, \: \Delta \gamma) = \Phi (E) \bigg(\frac{E}{2.2~\textrm{TeV}} \bigg)^{-\Delta \gamma}. \end{equation} The uncertainty on $\Delta \gamma$ is taken to be 0.03. As discussed previously, the six Barr parameters, WP, WM, YP, YM, ZP, and ZM, represent the uncertainty in the production of charged kaons. In this work, the central prediction of the neutrino flux is that from Hillas-Gaisser 2012 H3a as the cosmic ray model, and SIBYLL-2.3c as the hadronic interaction model.
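A minimal numerical sketch of the two parameterizations above -- the central-difference Barr derivative with its first-order flux correction, and the spectral-index tilt -- using made-up flux values for illustration (not \texttt{MCEq} output):

```python
def flux_derivative(flux_plus, flux_minus, delta):
    """Central difference of the flux with respect to one Barr parameter."""
    return (flux_plus - flux_minus) / (2.0 * delta)

def corrected_flux(phi0, barr_values, derivatives):
    """First-order corrected flux: phi0 + sum_i B_i * dPhi/dB_i."""
    return phi0 + sum(b * d for b, d in zip(barr_values, derivatives))

def tilt_flux(phi, energy_gev, delta_gamma, pivot_gev=2200.0):
    """Spectral-index uncertainty: Phi * (E / 2.2 TeV)**(-delta_gamma)."""
    return phi * (energy_gev / pivot_gev) ** (-delta_gamma)

# Illustrative numbers only: fluxes shifted by +/- delta in one Barr parameter.
d_phi = flux_derivative(flux_plus=1.05, flux_minus=0.95, delta=0.1)
phi = corrected_flux(phi0=1.0, barr_values=[0.2], derivatives=[d_phi])
print(round(phi, 6))  # 1.1

# A positive delta_gamma softens the flux above the 2.2 TeV pivot.
print(tilt_flux(phi, energy_gev=22000.0, delta_gamma=0.03) < phi)  # True
```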
Figs.~\labelcref{fig:barr_envelope_edist,fig:barr_envelope_edist_90GeV,fig:barr_envelope_edist_900GeV,fig:barr_envelope_edist_9TeV,fig:barr_envelope_edist_90TeV,fig:barr_nubar_nu_true} show that the uncertainty on the flux prediction using this central model and considering this set of eight systematic uncertainties spans the predictions from the combinations of three cosmic ray models and four hadronic interaction models. In these plots, the curves represent the discrete flux predictions, the color indicates the hadronic interaction model, and the linestyle indicates the cosmic ray model. The grey band shows the $1\sigma$ range of predictions based on the central model and the eight systematic uncertainties considered here. These uncertainties are considered uncorrelated, and their effects are added in quadrature. Figure~\ref{fig:barr_envelope_edist} shows that these eight systematic uncertainties cover the discrete, predicted energy distributions of the muon neutrino and antineutrino fluxes. The distributions shown are averaged over the zenith angles. Figures~\labelcref{fig:barr_envelope_edist_90GeV,fig:barr_envelope_edist_900GeV,fig:barr_envelope_edist_9TeV,fig:barr_envelope_edist_90TeV} show these uncertainties cover the discrete, predicted cosine zenith distributions, at the energies 90~GeV, 900~GeV, 9~TeV and 90~TeV. Finally, Fig.~\ref{fig:barr_nubar_nu_true} shows that the discrete predictions of the muon antineutrino to neutrino ratio, as a function of energy, are spanned by these systematic uncertainties. A benefit of this uncertainty parameterization is that the discrepancy in the atmospheric fluxes from the various models can be included in an analysis in a continuous manner. It is not necessary to perform the sterile search analysis multiple times, assuming a different, discrete model each time, as was done in the one-year search~\cite{Jones:2015bya,Collin:2018juc}.
Instead one can incorporate these eight systematic uncertainties as continuous nuisance parameters that reflect the underlying physical uncertainties. Furthermore, the posterior distributions of these nuisance parameters are informative of the respective hadronic processes. In addition to the eight atmospheric flux systematic uncertainties discussed here, the analysis in this thesis includes two more, mentioned here for completeness. One is the atmospheric density. The other is the cross section of kaons interacting with atmospheric nuclei. These will be discussed further in \Cref{ch:analysis} and are not included in the envelopes shown here. \hfill \begin{figure}[H] \centering \includegraphics[clip, trim = 9.3cm 1.3cm 0.5cm 3.5cm, width=0.4\textwidth]{figs_flux/plot_just_for_legend.pdf} \includegraphics[width=0.6\textwidth]{figs_flux/Envelope_barr_zenith_averaged_dist_units__.pdf} \caption[Energy spectrum band from Barr scheme]{The spectrum of atmospheric muon (top) neutrinos and (bottom) antineutrinos. The fluxes are scaled by energy cubed. The different colors correspond to different hadronic interaction models. The different linestyles correspond to different primary cosmic ray models. The grey band represents the $1\sigma$ uncertainty around the central model when considering eight relevant nuisance parameters. The fluxes shown here are averaged over the cosine zenith angles.} \label{fig:barr_envelope_edist} \end{figure} \begin{figure}[H] \centering \includegraphics[clip, trim = 9.3cm 1.3cm 0.5cm 3.5cm, width=0.4\textwidth]{figs_flux/plot_just_for_legend.pdf} \includegraphics[width=0.6\textwidth]{figs_flux/zenith_dist_0.pdf} \caption[Angular distribution spanned by Barr scheme at 90~GeV]{The angular distribution of atmospheric muon (top) neutrinos and (bottom) antineutrinos at 90~GeV. The different colors correspond to different hadronic interaction models. The different linestyles correspond to different primary cosmic ray models. 
The grey band represents the $1\sigma$ uncertainty around the central model when considering eight relevant nuisance parameters.} \label{fig:barr_envelope_edist_90GeV} \end{figure} \begin{figure}[H] \centering \includegraphics[clip, trim = 9.3cm 1.3cm 0.5cm 3.5cm, width=0.4\textwidth]{figs_flux/plot_just_for_legend.pdf} \includegraphics[width=0.6\textwidth]{figs_flux/zenith_dist_8.pdf} \caption[Angular distribution spanned by Barr scheme at 900~GeV]{The angular distribution of atmospheric muon (top) neutrinos and (bottom) antineutrinos at 900~GeV. The different colors correspond to different hadronic interaction models. The different linestyles correspond to different primary cosmic ray models. The grey band represents the $1\sigma$ uncertainty around the central model when considering eight relevant nuisance parameters.} \label{fig:barr_envelope_edist_900GeV} \end{figure} \begin{figure}[H] \centering \includegraphics[clip, trim = 9.3cm 1.3cm 0.5cm 3.5cm, width=0.4\textwidth]{figs_flux/plot_just_for_legend.pdf} \includegraphics[width=0.6\textwidth]{figs_flux/zenith_dist_16.pdf} \caption[Angular distribution spanned by Barr scheme at 9~TeV]{The angular distribution of atmospheric muon (top) neutrinos and (bottom) antineutrinos at 9~TeV. The different colors correspond to different hadronic interaction models. The different linestyles correspond to different primary cosmic ray models. The grey band represents the $1\sigma$ uncertainty around the central model when considering eight relevant nuisance parameters.
} \label{fig:barr_envelope_edist_9TeV} \end{figure} \begin{figure}[H] \centering \includegraphics[clip, trim = 9.3cm 1.3cm 0.5cm 3.5cm, width=0.4\textwidth]{figs_flux/plot_just_for_legend.pdf} \includegraphics[width=0.6\textwidth]{figs_flux/zenith_dist_24.pdf} \caption[Angular distribution spanned by Barr scheme at 90~TeV]{The angular distribution of atmospheric muon (top) neutrinos and (bottom) antineutrinos at 90~TeV. The different colors correspond to different hadronic interaction models. The different linestyles correspond to different primary cosmic ray models. The grey band represents the $1\sigma$ uncertainty around the central model when considering eight relevant nuisance parameters.} \label{fig:barr_envelope_edist_90TeV} \end{figure} \begin{figure}[H] \centering \includegraphics[clip, trim = 9.3cm 1.3cm 0.5cm 3.5cm, width=0.4\textwidth]{figs_flux/plot_just_for_legend.pdf} \includegraphics[width=0.75\textwidth]{figs_flux/N0bug_dg001_10_5_nubar_nu_ratio_nolegend.pdf} \caption[Ratio of antineutrinos to neutrinos spanned by Barr scheme]{Ratio of atmospheric muon antineutrinos to neutrinos, as a function of energy. The different colors correspond to different hadronic interaction models. The different linestyles correspond to different primary cosmic ray models. The grey band represents the $1\sigma$ uncertainty around the central model when considering eight relevant nuisance parameters. The fluxes shown here are averaged over the cosine of the zenith angles.} \label{fig:barr_nubar_nu_true} \end{figure} \section{Neutrino flux at detector} For the work in this thesis, nine components of the total neutrino flux are calculated. Seven relate to the conventional atmospheric flux: the central model and the six Barr fluxes.
The atmospheric density profile assumed in the calculation of these fluxes is based on data from the AIRS satellite~\cite{AIRS}. The prompt component of the atmospheric flux is included based on the prediction from the BERSS model~\cite{Bhattacharya:2015jpa}. The final component is the astrophysical flux. Astrophysical neutrinos were discovered by IceCube~\cite{aartsen2014observation}; their origin remains unknown. The model of the astrophysical flux used in this work is based on previous measurements from IceCube~\cite{Abbasi:2020jmh, aartsen2016observation, Schneider:2019ayi, Stachurska:2019}. The astrophysical flux is assumed to be isotropic and to follow a single power law, \begin{equation} \frac{dN_{\nu_\mu,\bar{\nu}_\mu}}{dE} = N_{\textrm{astro}} \bigg( \frac{E}{100~\textrm{TeV}} \bigg)^{-2.5}, \end{equation} where the normalization is \begin{equation} N_{\textrm{astro}} = 0.787 \times 10^{-18}~\textrm{GeV}^{-1}~\textrm{sr}^{-1}~\textrm{s}^{-1}~\textrm{cm}^{-2}. \end{equation} This assumes the astrophysical flux is equally composed of the electron, muon and tau flavors, and is equally composed of neutrinos and antineutrinos. The uncertainties on the normalization and spectral index are significant and correlated, and are discussed further in \Cref{ch:analysis}. The prediction of the neutrino flux incident on the Earth is converted into a prediction of the flux at the IceCube detector. This calculation is done with the neutrino evolution code \texttt{nuSQuIDS}~\cite{ARGUELLESDELGADO2015569}. This software accounts for processes that affect the propagation of neutrinos through the Earth. One process is neutrino oscillations between four flavors, including the effects due to matter. Another significant process is charged and neutral current interactions in the Earth, which lead to neutrino absorption at high energies and long baselines.
A subdominant process is tau regeneration, which is the production of neutrinos from charged current interactions of tau neutrinos and subsequent tau decay~\cite{PhysRevLett.81.4305}. The Earth is assumed to be spherically symmetric. The density profile of the Earth is assumed to follow the Preliminary Reference Earth Model (PREM)~\cite{DZIEWONSKI1981297}. Lastly, neutrino decay affects the propagation of neutrinos. This effect is handled by~\texttt{nuSQUIDSDecay}, a class specialization of \texttt{nuSQuIDS}~\cite{Moss:2017pur}. Neutrinos are assumed to be Dirac. The coupling that mediates the decay is assumed to be purely scalar. The fourth mass state may decay to a massless scalar and another neutrino. A right-handed neutrino would decay to a left-handed neutrino, while a right-handed antineutrino would decay to a left-handed antineutrino. These daughter neutrinos, as well as the daughter scalar particle, are not detectable. The value of the coupling is converted to a lifetime with Eq.~\ref{eq:coupling_mass_lifetime}. The \texttt{nuSQUIDSDecay} partial rate constructor is used, where the chirality-violating process rate is the inverse of the lifetime. \chapter{Introduction} \section{Neutrino oscillations in vacuum} In the Standard Model of particle physics, there exist three neutrino flavor states: electron neutrino ($\nu_e$), muon neutrino ($\nu_\mu$), and tau neutrino ($\nu_\tau$)~\cite{Griffiths:2008zz, Thomson:2013zua, Giunti:2007ry, Zyla:2020zbs}. The flavor of a neutrino can only be determined in charged current interactions, which involve the exchange of a $W^\pm$ boson, and when the charged lepton that is produced can be identified. In these interactions, an incoming electron neutrino will produce an outgoing electron ($e^-$); a muon neutrino, a muon ($\mu^-$); and a tau neutrino, a tau ($\tau^-$), as shown in Fig.~\ref{fig:nu_cc_flavors}. In the charged current interactions of antineutrinos, the corresponding positively charged lepton will be produced.
\begin{figure} \centering \includegraphics[width=\textwidth]{figs_intro/nu_cc_flavors} \caption[Feynman vertices for charged current neutrino interactions]{Feynman vertices for charged current neutrino interactions. Figure produced with TikZ-Feynman~\cite{Ellis:2016jkw}.} \label{fig:nu_cc_flavors} \end{figure} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{figs_intro/NO_schematic_3nu} \caption[Schematic illustration of the Normal Ordering]{Schematic of the Normal Ordering, with approximate neutrino flavor contributions to each neutrino mass state.} \label{fig:3nu} \end{figure} In the massive neutrino Standard Model, there exist three neutrino mass states ($\nu_1$, $\nu_2$, and $\nu_3$) that are distinct from the flavor states. The flavor and mass states mix via a unitary mixing matrix, the Pontecorvo-Maki-Nakagawa-Sakata (PMNS) matrix, $U$: \begin{equation} \label{eq:nu_mixing} \begin{pmatrix} \nu_e \\ \nu_\mu \\ \nu_\tau \end{pmatrix} = \begin{pmatrix} U_{e1} & U_{e2} & U_{e3} \\ U_{\mu 1} & U_{\mu 2} & U_{\mu 3} \\ U_{\tau1} & U_{\tau2} & U_{\tau3} \\ \end{pmatrix} \begin{pmatrix} \nu_1 \\ \nu_2 \\ \nu_3 \end{pmatrix}. \end{equation} This is equivalent to \begin{equation} | \nu_\alpha \rangle = \sum_{i=1}^3 U_{\alpha i}^* |\nu_i \rangle, \end{equation} which follows the notation in \cite{Giunti:2007ry,Zyla:2020zbs}. This is illustrated schematically in Fig.~\ref{fig:3nu}. The mass states, $|\nu_i\rangle$, are eigenstates of the Hamiltonian. Assuming that the neutrino is a plane wave, the time evolution of the mass states is \begin{equation} |\nu_i(t) \rangle = e^{-i E_i t} | \nu_i \rangle, \end{equation} and therefore, the time evolution of the flavor states is \begin{equation} | \nu_\alpha (t) \rangle = \sum_{i=1}^3 U_{\alpha i}^* e^{-i E_i t} | \nu_i \rangle, \end{equation} where $E_i$ is the energy of the $i^{th}$ mass state and $t$ is time. The neutrino mass states have three distinct masses, $m_i$. 
The mass-squared splittings are \begin{equation} \Delta m^2_{ij} \equiv m_i^2 - m_j^2. \end{equation} Neutrinos are highly relativistic particles, which means \begin{equation} \begin{aligned} E_i &= \sqrt{p^2 + m_i^2} \simeq p + \frac{m_i^2}{2E_i} \\ t &= L. \end{aligned} \end{equation} Here it is assumed that all mass states have the same momentum, $p$, and $L$ is the baseline the neutrino has traveled, using natural units. All this together gives rise to the phenomenon of neutrino oscillation, where a neutrino born in one flavor state, $\nu_\alpha$, has an oscillatory probability of transforming into another state $\nu_\beta$, as it travels through space. This oscillation probability is given by \begin{equation} P(\nu_\alpha \rightarrow \nu_\beta) = |\langle \nu_\beta | \nu_\alpha (t) \rangle |^2 \simeq \sum\limits_{k,j=1}^{3} U_{\alpha k}^* U_{\beta k} U_{\alpha j} U_{\beta j}^* \exp \bigg( \frac{-i \Delta m^2_{kj} L} {2E} \bigg) \end{equation} where $E$ is the energy of the neutrino. This formula is applicable if one can assume that the difference in energies of the various mass eigenstates can be neglected. A review of neutrino oscillations that explores the assumptions made here can be found in~\cite{Akhmedov:2019iyt}. The oscillation of 1 GeV neutrinos born in the muon flavor is shown in Fig.~\ref{fig:std_oscillations}. The observation of neutrino oscillation is the reason neutrinos are known to have mass, since a nonzero $\Delta m^2$ requires at least one mass state with nonzero mass~\cite{Esteban:2020cvm, Zyla:2020zbs, deSalas:2020pgw, Capozzi:2018ubv}. \begin{figure}[htb] \centering \includegraphics[width=0.75\textwidth]{figs_intro/std_osc_baseline_1GeV} \caption[Neutrino oscillation probabilities for 1 GeV neutrinos]{Neutrino oscillation probabilities for 1 GeV neutrinos born in the muon flavor, shown as a function of distance from the point of creation, or baseline.
The oscillation probability of $\nu_\mu \rightarrow \nu_e$ is shown in yellow. The survival probability of $\nu_\mu \rightarrow \nu_\mu$ is shown in green. The oscillation probability of $\nu_\mu \rightarrow \nu_\tau$ is shown in blue. Probabilities were calculated with \texttt{nuSQuIDS}~\cite{ARGUELLESDELGADO2015569, nusquids}.} \label{fig:std_oscillations} \end{figure} Like any unitary matrix, the PMNS matrix may be decomposed as the product of rotation matrices. A convenient representation is: \begin{equation} \label{eq:pmns_rotation} U = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos \theta_{23} & \sin \theta_{23} \\ 0 & -\sin \theta_{23} & \cos \theta_{23} \\ \end{pmatrix} \begin{pmatrix} \cos \theta_{13} & 0 & \sin \theta_{13} e^{i\delta_{\rm{CP}}} \\ 0 & 1 & 0 \\ -\sin \theta_{13}e^{-i \delta_{\rm{CP}}} & 0 & \cos \theta_{13} \\ \end{pmatrix} \begin{pmatrix} \cos \theta_{12} & \sin \theta_{12} & 0\\ -\sin \theta_{12} & \cos \theta_{12} & 0\\ 0 & 0 & 1 \end{pmatrix}. \end{equation} In Eq.~\ref{eq:pmns_rotation}, it is assumed that neutrinos are Dirac and not Majorana, which is equivalent to assuming that neutrinos are not their own antiparticles. If neutrinos are Majorana, the representation of the PMNS matrix in Eq.~\ref{eq:pmns_rotation} would need to be modified by a fourth factor, a complex matrix, at the far right-hand-side. The Majorana nature of neutrinos is largely irrelevant to this thesis. The phase angle $\delta_{\rm{CP}}$ in Eq.~\ref{eq:pmns_rotation} represents violation of charge conjugation parity (CP) symmetry. A value of $\delta_{\rm{CP}}$ that is neither 0$^\circ$ nor 180$^\circ$ would mean that neutrinos and antineutrinos have different oscillation probabilities. The global best-fit value of $\delta_{\rm{CP}}$ is consistent with 180$^\circ$~\cite{Esteban:2020cvm, deSalas:2020pgw}. The work in this thesis is largely insensitive to CP violation. 
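The decomposition above and the resulting vacuum oscillation probability can be sketched numerically. The following is an illustrative Python sketch, not analysis code: the mixing angles and mass splittings are rough global-fit-scale values, and $\delta_{\rm{CP}}$ is set to zero as assumed in this work.

```python
import numpy as np

def pmns(theta12, theta13, theta23, delta_cp=0.0):
    """PMNS matrix as the product of three rotations (Eq. pmns_rotation)."""
    s12, c12 = np.sin(theta12), np.cos(theta12)
    s13, c13 = np.sin(theta13), np.cos(theta13)
    s23, c23 = np.sin(theta23), np.cos(theta23)
    e = np.exp(1j * delta_cp)
    r23 = np.array([[1, 0, 0], [0, c23, s23], [0, -s23, c23]], dtype=complex)
    r13 = np.array([[c13, 0, s13 * e], [0, 1, 0], [-s13 * np.conj(e), 0, c13]])
    r12 = np.array([[c12, s12, 0], [-s12, c12, 0], [0, 0, 1]], dtype=complex)
    return r23 @ r13 @ r12

def osc_prob(alpha, beta, E_GeV, L_km):
    """Vacuum P(nu_alpha -> nu_beta); flavor indices 0, 1, 2 = e, mu, tau.

    Amplitude: sum_i U*_{alpha i} exp(-i dm2_{i1} L / 2E) U_{beta i}, with
    dm2 in eV^2, L in km, E in GeV (unit-conversion factor 1.267).
    """
    # Illustrative mixing angles and splittings, not the fitted values.
    U = pmns(np.radians(33.4), np.radians(8.6), np.radians(49.0))
    dm2 = np.array([0.0, 7.4e-5, 2.5e-3])   # splittings relative to nu_1 [eV^2]
    phases = np.exp(-2j * 1.267 * dm2 * L_km / E_GeV)
    return abs(np.sum(np.conj(U[alpha]) * phases * U[beta])) ** 2
```

At $L = 0$ the flavor content is unchanged, and unitarity of $U$ guarantees that the probabilities summed over final flavors equal one at any baseline.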
For simplicity, $\delta_{\rm{CP}}$ is assumed to be 0 in this work; equivalently, neutrinos and antineutrinos are assumed to have identical oscillation probabilities. Another unknown is the neutrino mass ordering, that is, the order of $m_1$, $m_2$, and $m_3$. Solar neutrino experiments have determined that $m_2$ is greater than $m_1$, with $\Delta m_{21}^2 \approx 7 \times 10^{-5}$~eV$^2$~\cite{Zyla:2020zbs}. Atmospheric neutrino experiments have determined that $|\Delta m^2_{31}|$ is much greater than $|\Delta m^2_{21}|$, with $|\Delta m^2_{31}| \approx 2 \times 10^{-3}$~eV$^2$. The Normal Ordering refers to the scenario where $m_3$ is larger than $m_1$, as depicted in Fig.~\ref{fig:3nu}, and the Inverted Ordering is the alternative. The work in this thesis is largely insensitive to the ordering of the masses for the three known neutrinos. The Normal Ordering is assumed. In the approximation of two neutrino flavors and two neutrino mass states, there is only one rotation angle, $\theta$, and one mass-squared splitting, $\Delta m^2$. In this scenario, the probability for a $\nu_\alpha$ to oscillate to $\nu_\beta$ is given by Eq.~\ref{eq:twoflavorosc}, and the survival probability for a $\nu_\alpha$ to remain a $\nu_\alpha$ is given by Eq.~\ref{eq:twoflavorsurv}. The amplitude of oscillation is set by the mixing angle, $\theta$, and the frequency is set by $\Delta m^2$: \begin{equation} \label{eq:twoflavorosc} P(\nu_\alpha \rightarrow \nu_\beta) =\sin^2 (2 \theta) \sin^2 \bigg( \frac{\Delta m^2 L}{4 E} \bigg) \end{equation} \begin{equation} \label{eq:twoflavorsurv} P(\nu_\alpha \rightarrow \nu_\alpha) = 1 - \sin^2 (2 \theta) \sin^2 \bigg( \frac{\Delta m^2 L}{4 E} \bigg) \end{equation} \section{Anomalies} Some long-standing neutrino experimental results do not fit within the three-neutrino oscillation framework~\cite{Conrad:2013mka}. Two of the most significant of these are the Liquid Scintillator Neutrino Experiment (LSND) and MiniBooNE.
While inconsistent with the massive neutrino Standard Model, these results are consistent with each other, motivating searches for new physics, including the work in this thesis. These experiments are reviewed below and a more detailed review can be found in~\cite{Conrad:2013mka}. LSND ran at the Los Alamos National Laboratory LAMPF/LANSCE accelerator from 1993 to 1998~\cite{Athanassopoulos:1996ds,Aguilar:2001ty}. A 798 MeV proton beam struck a target, producing charged pions ($\pi^\pm$) and muons that came to rest in the beam dump. Stopped negatively charged pions would undergo nuclear capture, while stopped positively charged pions would undergo the decay chain \begin{equation} \begin{aligned} \pi^+ \rightarrow \nu_\mu \: &\mu^+ \\ &\mu^+ \rightarrow \bar{\nu}_\mu \: \nu_e \: e^+, \end{aligned} \end{equation} where the muon also comes to rest before it decays. This produced a beam containing muon antineutrinos, muon neutrinos and electron neutrinos, with electron antineutrino contamination at the level of approximately $10^{-4}$ of the muon antineutrino flux in the relevant energy range~\cite{BURMAN1990621,BURMAN1996416,Conrad:2013mka}. The detector was a tank filled with mineral oil doped with scintillator and was instrumented with photomultiplier tubes. The center of the detector was 30~m away from the beam stop. In the detector, electron antineutrinos would interact via inverse beta decay, \begin{equation} \bar{\nu}_e \: p \rightarrow e^+ \: n, \end{equation} while electron neutrinos would interact by scattering off carbon, \begin{equation} \nu_e + {}^{12}\mathrm{C} \rightarrow {}^{12}\mathrm{N}_{\mathrm{ground\:state}} + e^-. \end{equation} These two interactions were distinguishable, as the electron antineutrino signal had two features, annihilation of the positron ($e^+$) and a 2.2 MeV gamma from neutron capture, while the electron neutrino signal had one, the electron. LSND observed an excess of electron antineutrino events~\cite{Aguilar:2001ty}.
The LSND result is shown in Fig.~\ref{fig:lsnd}. Data are shown as black points. The purple and red stacked histograms are the expected background contributions. The beam-off backgrounds have been subtracted, which can explain the negative data point. The significance of the LSND anomaly is 3.8$\sigma$. \begin{figure}[h] \centering \includegraphics[width=0.5\textwidth]{figs_intro/marjon_purple_lsnd_crop.png} \caption[LSND anomaly]{Distribution of LSND beam-on events as a function of baseline/energy. Figure adapted from \cite{Aguilar:2001ty}.} \label{fig:lsnd} \end{figure} The MiniBooNE experiment was designed to probe the LSND anomaly~\cite{Aguilar-Arevalo:2013pmq, Aguilar-Arevalo:2020nvw}. MiniBooNE ran at Fermi National Accelerator Laboratory from 2002 to 2019. MiniBooNE used the Booster Neutrino Beamline, created from 8~GeV protons hitting a target. Charged mesons, namely pions and kaons, were produced, and the polarity of the magnetic focusing horn would select for either positively or negatively charged mesons. These mesons would decay in flight to produce a muon neutrino or muon antineutrino beam. The MiniBooNE detector sat 541~m downstream of the target and was composed of a large tank filled with mineral oil and instrumented with photomultiplier tubes. MiniBooNE observed an excess of electron-like events in both the muon neutrino beam and muon antineutrino beam, shown in Fig.~\ref{fig:miniboone_anomaly}. The data are shown as black points and the expected backgrounds are shown as stacked histograms. At lower energies, the data sit above the background expectation. This is known as the MiniBooNE anomaly, or MiniBooNE low-energy excess. The significance of the MiniBooNE anomaly is 4.8$\sigma$.
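The size of the effect implied by an oscillation interpretation can be sketched with the two-flavor formula of Eq.~\ref{eq:twoflavorosc}. The numbers below are illustrative only, chosen to be of LSND-like scale, and are not the published fit values.

```python
import numpy as np

def two_flavor_prob(sin2_2theta, dm2_ev2, L_km, E_GeV):
    """Two-flavor appearance probability (Eq. twoflavorosc):
    P = sin^2(2 theta) * sin^2(1.267 * dm2 [eV^2] * L [km] / E [GeV])."""
    return sin2_2theta * np.sin(1.267 * dm2_ev2 * L_km / E_GeV) ** 2

# Illustrative numbers: an LSND-like setup with L ~ 30 m and E ~ 40 MeV,
# probed with dm2 ~ 1 eV^2, develops an O(1) oscillation phase.
p_lsnd_like = two_flavor_prob(sin2_2theta=3e-3, dm2_ev2=1.0,
                              L_km=0.030, E_GeV=0.040)
```

The key point is that $L/E \sim 1$~m/MeV paired with $\Delta m^2 \sim 1$~eV$^2$ gives a phase of order one, which is why the anomalies point to a mass-squared splitting far above the two known ones.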
If the appearance of electron-like events in MiniBooNE and LSND is interpreted as a $\nu_\mu \rightarrow \nu_e$ or $\bar{\nu}_\mu \rightarrow \bar{\nu}_e$ oscillation within a two-flavor model, then using Eq.~\ref{eq:twoflavorosc} the data point to a value of $\Delta m^2$ larger than $10^{-2}$~eV$^2$. This cannot be accommodated within the three-neutrino framework, because the two known $|\Delta m_{ij}^2|$ do not add to a large enough value. Curiously, while the LSND and MiniBooNE experiments had significant systematic differences, such as a factor of ten in energy and different types of beams (decay-at-rest vs. decay-in-flight), the allowed regions of [$\Delta m^2$, $\sin^2 2 \theta$] in the oscillation interpretations of LSND and MiniBooNE are highly compatible~\cite{Aguilar-Arevalo:2020nvw}. The combined LSND and MiniBooNE excesses have a significance of $6.1\sigma$. \begin{figure} \centering \includegraphics[width=0.7\textwidth]{figs_intro/miniboone_histNU-stacked_wbf} \includegraphics[clip, trim = 0cm 12.2cm 0cm 0cm, width=0.65\textwidth]{figs_intro/mb-fig1.pdf} \includegraphics[clip, trim = 0cm 0cm 0cm 21.9cm, width=0.65\textwidth]{figs_intro/mb-fig1.pdf} \caption[MiniBooNE anomaly]{(Top) MiniBooNE anomaly in neutrino mode. Figure from \cite{Aguilar-Arevalo:2020nvw}. (Bottom) MiniBooNE anomaly in antineutrino mode. Figure adapted from \cite{Aguilar-Arevalo:2013pmq}.} \label{fig:miniboone_anomaly} \end{figure} \clearpage \begin{figure} \centering \includegraphics[width=0.6\textwidth]{figs_intro/dayabay.png} \caption[Reactor Antineutrino Anomaly]{Ratio of reactor antineutrino flux measurement to prediction as a function of distance from the reactor core for Daya Bay and prior experiments. Figure from~\cite{Adey:2018qct}.} \label{fig:daya_bay} \end{figure} While LSND and MiniBooNE both observed anomalies in muon (anti)neutrino beams, anomalous results have also been observed in electron (anti)neutrino beams.
These include the reactor anomalies and the gallium anomaly. The reactor antineutrino anomaly is a deficit in the flux of electron antineutrinos emerging from nuclear reactors as compared to theoretical predictions~\cite{Mention_PhysRevD.83.073006, Mueller:2011nm, Huber:2011wv, Giunti:2019qlt}. This anomaly has been observed in a number of experiments at various baselines from the reactor cores. A recent precision measurement from the experiment Daya Bay, as well as measurements from prior experiments, is shown in Fig.~\ref{fig:daya_bay}~\cite{Adey:2018qct}. The measurements are normalized to theoretical predictions, accounting for the effects of oscillations. The world average measurement sits significantly below the prediction. Ratios of measurements at different baselines from reactors have revealed small signals, at the level of $1-2\sigma$~\cite{Atif:2020glb, Diaz:2019fwt}. Finally, several experiments have observed a ``5~MeV bump,'' an unexpected feature in the antineutrino spectra with a magnitude of about 10\%~\cite{Huber:2016xis}. Lastly, the gallium anomaly is a deficit of electron neutrinos observed in the electron-capture decays of $^{37}$Ar and $^{51}$Cr by the GALLEX and SAGE experiments~\cite{Anselmann:1994ar, Hampel:1997fc, Kaether:2010ag, Abdurashitov:1996dp, Abdurashitov:1998ne, Abdurashitov:2005tb, Abdurashitov:2009tn}. These were gallium-based experiments designed to measure the solar neutrino flux, which used $^{37}$Ar and $^{51}$Cr, both electron-neutrino-emitting radioactive sources, for calibration. These experiments observed a $2.3 \sigma$ deficit in electron neutrinos from these radioactive sources~\cite{Kostensalo:2019vmv}. \section{Traditional 3+1 model} One model that has been put forth to account for the anomalies previously described is a 3+1 sterile neutrino model~\cite{Diaz:2019fwt, Kopp:2013vaa, Abazajian:2012ys}.
A sterile neutrino, $\nu_s$, is a hypothetical flavor of neutrino which does not interact via the weak nuclear force. A 3+1 model is schematically depicted in Fig.~\ref{fig:31schematic}, for the case of the normal ordering $(m_3 > m_2)$. In a 3+1 model, a fourth, heavier mass state, $\nu_4$, is introduced. This mass state is mostly composed of the sterile flavor, $\nu_s$. The three lighter mass states include a small sterile component. If the fourth mass state is heavy enough, the three lighter mass states are effectively degenerate in comparison. \begin{figure} \centering \includegraphics[width=0.6\textwidth]{figs_intro/marjon_31_schematic.pdf} \caption[Schematic illustration of a 3+1 sterile neutrino model]{Schematic illustration of a 3+1 sterile neutrino model, assuming the normal neutrino ordering. The various components of the mass states, as well as the differences in the masses, are not to scale. This figure is for illustrative purposes only.} \label{fig:31schematic} \end{figure} In a 3+1 sterile neutrino model, the three-flavor mixing matrix, that is, the PMNS matrix, given in Eq.~\ref{eq:nu_mixing}, is expanded by one column, to account for the fourth mass state, and one row, to account for the sterile flavor. This expanded, four-flavor mixing matrix is shown in Eq.~\ref{eq:nu_mixing_31}: \begin{equation} \label{eq:nu_mixing_31} \begin{pmatrix} \nu_e \\ \nu_\mu \\ \nu_\tau \\ \nu_s \end{pmatrix} = \begin{pmatrix} U_{e1} & U_{e2} & U_{e3} & U_{e4}\\ U_{\mu 1} & U_{\mu 2} & U_{\mu 3} & U_{\mu 4} \\ U_{\tau1} & U_{\tau2} & U_{\tau3} & U_{\tau 4}\\ U_{s 1} & U_{s2} & U_{s3} & U_{s4} \\ \end{pmatrix} \begin{pmatrix} \nu_1 \\ \nu_2 \\ \nu_3 \\ \nu_4 \end{pmatrix} \end{equation} The representation of the mixing matrix as the product of rotation matrices, given for the three-flavor case in Eq.~\ref{eq:pmns_rotation}, requires equivalent modification.
Following \cite{Barry:2011wb}, the four-flavor mixing matrix can be written as \begin{equation} \label{eq:mixing_matrix_rotation_matrices_4_flavors} U = R_{34} \tilde{R}_{24} \tilde{R}_{14} R_{23} \tilde{R}_{13} R_{12} P, \end{equation} where $R_{ij}$ and $\tilde{R}_{ij}$ are, respectively, real and complex rotation matrices in $ij$ space. For example, \begin{equation} R_{34} \equiv \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & \cos \theta_{34} & \sin \theta_{34} \\ 0 & 0 & - \sin \theta_{34} & \cos \theta_{34} \end{pmatrix} \textrm{~and~} \tilde{R}_{14} \equiv \begin{pmatrix} \cos \theta_{14} & 0 & 0 & \sin \theta_{14} e^{-i\delta_{14}} \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ - \sin \theta_{14} e^{i \delta_{14}}& 0 & 0 & \cos \theta_{14} \end{pmatrix}. \end{equation} The factor $P$ is the diagonal matrix: \begin{equation} P = \textrm{diag}(1, e^{i \alpha /2}, e^{i(\beta/2 + \delta_{13})}, e^{i(\gamma/2+\delta_{14})}), \end{equation} where $\alpha$, $\beta$ and $\gamma$ are the Majorana phases. These phases are only nonzero if neutrinos are Majorana particles, i.e.\ they are their own antiparticles. In a 3+1 model, neutrinos oscillate between all four flavors. This could account for electron neutrinos appearing in a muon neutrino beam, as seen by LSND and MiniBooNE, as well as a disappearance of electron neutrinos from an electron neutrino beam, as seen in the reactor and gallium experiments. The oscillation amplitudes for oscillation channels relevant to this thesis, and assuming a two-flavor approximation, are given in Table~\ref{tab:osc_amplitudes}. These amplitudes are parameterized in three ways: in terms of the mixing matrix elements, $U_{\alpha i}$, which are the physical parameters; in terms of effective mixing angles, $\theta_{\alpha \beta}$; and in terms of rotation angles, $\theta_{ij}$. In this thesis, plots and discussion related to global fits will use oscillation amplitudes given in terms of the effective angles.
Plots and discussion related only to the experiment IceCube will use oscillation amplitudes given in terms of rotation angles. \begin{table}[h] \begin{tabular}{c|c|c|c} Channel & \begin{tabular}{c} Mixing \\ matrix \\ elements \end{tabular} & \begin{tabular}{c} Effective \\ mixing \\ angles \end{tabular} & \begin{tabular}{c} Rotation \\ angles \end{tabular} \\ \hline $\nu_e \rightarrow \nu_e$ & $4(1 - |U_{e4}|^2)|U_{e4}|^2 $& $\sin^2 2\theta_{ee}$& $\sin^2 2\theta_{14}$ \\ $\nu_\mu \rightarrow \nu_\mu$ & $4(1 - |U_{\mu4}|^2)|U_{\mu4}|^2 $& $\sin^2 2\theta_{\mu\mu}$& $4\cos^2 \theta_{14}\sin^2 \theta_{24}(1 - \cos^2 \theta_{14}\sin^2 \theta_{24})$ \\ $\nu_\mu \rightarrow \nu_e$ & $4|U_{e4}|^2|U_{\mu 4}|^2 $& $\sin^2 2\theta_{\mu e}$& $\sin^2 2\theta_{14}\sin^2 \theta_{24}$ \\ \end{tabular} \caption[Oscillation amplitudes for 3+1 model]{Oscillation amplitudes for three neutrino oscillation channels given in terms of the mixing matrix elements, the effective mixing angles, and rotation angles. Note that $\sin^2 2\theta_{24} = \sin^2 2\theta_{\mu\mu}$ for $\theta_{14} = 0$.} \label{tab:osc_amplitudes} \end{table} \section{Tension in the 3+1 fits} The oscillation amplitudes for the three channels shown in Table~\ref{tab:osc_amplitudes}, which are $\nu_e \rightarrow \nu_e$, $\nu_\mu \rightarrow \nu_\mu$ and $\nu_\mu \rightarrow \nu_e$, depend on only two mixing matrix elements: $U_{e 4}$ and $U_{\mu4}$. This means that these two parameters are over-constrained by experimental data spanning the three distinct channels. Global fits spanning these channels have been performed by several groups, and they find similar allowed regions, with $\Delta m_{41}^2$ around 1 eV$^2$~\cite{Diaz:2019fwt, Dentler:2018sju, Gariazzo:2017fdh}. The most recent global fit finds a $5\sigma$ improvement over the null hypothesis, i.e. only three neutrinos~\cite{Diaz:2019fwt}.
Nevertheless, all three fitting groups find a similar problem in the fits: inconsistency between subgroups of the data. One natural way to split up the data is into ``appearance'' and ``disappearance'' experiments. Appearance experiments are those in the $\nu_\mu \rightarrow \nu_e$ channel, while disappearance experiments are those in the $\nu_e \rightarrow \nu_e$ and $\nu_\mu \rightarrow \nu_\mu$ channels. One method of characterizing consistency between subgroups of data is the \textit{parameter goodness-of-fit} (PG)~\cite{Maltoni:2002xd,Maltoni:2003cu}. One performs three separate fits: one on the appearance experiments, one on the disappearance experiments, and one on all the datasets. An effective $\chi^2$ is defined~\cite{Diaz:2019fwt}: \begin{equation} \chi^2_{\textrm{PG}} = \chi^2_{\textrm{global}} - (\chi^2_{\textrm{appearance}} + \chi^2_{\textrm{disappearance}}). \end{equation} The effective number of degrees of freedom is: \begin{equation} N_{\textrm{PG}} = (N_{\textrm{appearance}} + N_{\textrm{disappearance}}) - N_{\textrm{global}}. \end{equation} The $\chi^2_{\textrm{PG}}$ is then interpreted as a $\chi^2$ with $N_{\textrm{PG}}$ degrees of freedom and a probability ($p$-value) is calculated. A recent fit to short-baseline experiments found the $p$-value for the PG test to be $4\times10^{-6}$, which corresponds to a significance of 4.5$\sigma$~\cite{Diaz:2019fwt}. This means that the appearance datasets and the disappearance datasets are highly inconsistent. This is referred to as tension in the fits. This motivates the consideration of models beyond a traditional 3+1 sterile neutrino model that could relieve this tension. \chapter{Phenomenology of unstable sterile neutrinos in IceCube} \section{Unstable sterile neutrino model in IceCube} \label{sec:prd_decay} In the Standard Model of particle physics, stable particles are those that are protected by a fundamental symmetry. Others decay. There is no fundamental symmetry that protects neutrinos.
In the massive neutrino Standard Model, the heavier two of the three mass states can decay radiatively. Two processes for a heavier mass state, $\nu_i$, to decay to a lighter mass state, $\nu_j$, are \begin{equation} \begin{split} \begin{aligned} \nu_i \rightarrow \nu_j + \gamma \hspace{2.5cm}&\textrm{~with lifetime~} \tau \simeq 10^{36} (m_i/\textrm{eV})^{-5} ~\textrm{years} \\ \nu_i \rightarrow \nu_j + \gamma +\gamma \hspace{2.5cm}&\textrm{~with lifetime~} \tau \simeq 10^{67} (m_i/\textrm{eV})^{-9} ~\textrm{years}, \end{aligned} \end{split} \end{equation} where $\gamma$ is a photon~\cite{Diaz:2019fwt, PhysRevD.25.766, PhysRevD.28.1664}. These decays are slow; for the neutrinos of the massive neutrino Standard Model, the lifetimes are many times longer than the age of the Universe. If there is a fourth mass state, it may decay. The decay of a heavy, fourth mass state, in the range 1~keV - 1~MeV, was previously considered to explain the LSND anomaly~\cite{PalomaresRuiz:2005vf, PalomaresRuiz:2006ue}. In the following publication~\cite{Moss:2017pur}, a model involving oscillations and decay for eV-scale neutrinos was developed and applied to the IceCube detector. This used the open dataset from the one-year high-energy sterile neutrino search~\cite{TheIceCube:2016oqi}. \label{sec:decay_prd} \addcontentsline{toc}{subsection}{\textit{Publication: Exploring a nonminimal sterile neutrino model involving decay at IceCube}} \includepdf[pages={-}]{figs_pheno/PhysRevD97055017.pdf} \section{Incorporating IceCube data into global fits} Following~\cite{Moss:2017pur}, global fits were performed on short-baseline data using both a traditional $3+1$ model and the invisible decay version of the unstable sterile neutrino model, referred to as $3+1+\textrm{decay}$~\cite{Diaz:2019fwt}. These fits used data from fourteen different neutrino experiments, excluding IceCube. Invisible decay refers to having no detectable daughter particles.
It was found that the best-fit point of the unstable sterile neutrino model is preferred to that of the traditional $3+1$ model. The significance of the improvement is $2.6\sigma$. Moreover, the observed tension between appearance and disappearance experiments is reduced, but not resolved. In the following publication, the one-year dataset from IceCube was incorporated into the global fit results~\cite{Moulai:2019gpi}. \label{sec:fits_prd} \addcontentsline{toc}{subsection}{\textit{Publication: Combining sterile neutrino fits to short-baseline data with IceCube data}} \includepdf[pages={-}]{figs_pheno/PhysRevD101055020.pdf} \includepdf[pages={-}]{figs_pheno/supplementary_material.pdf} \section{More on the phenomenology of unstable sterile neutrinos in IceCube} The results of combining the one-year IceCube data set with the global fit results from short-baseline measurements motivated further study of the unstable sterile neutrino model in IceCube. Decay of the fourth mass state causes two effects. One effect is a dampening of oscillation. The second effect is an overall disappearance of these particles. Transition probabilities as a function of baseline for muon antineutrinos traversing the diameter of the Earth are shown in Fig.~\ref{fig:osc_decay_baseline}. Transition probabilities as a function of energy for muon antineutrinos of energies 1~TeV and 2.3~TeV are shown in Fig.~\ref{fig:osc_decay_escan_antinu} for muon antineutrinos and Fig.~\ref{fig:osc_decay_escan_nu} for muon neutrinos. These probabilities are calculated with \texttt{nuSQUIDSDecay}\cite{Moss:2017pur}. In all of these figures, the additional mass-squared splitting and mixing amplitude are $\Delta m^2_{41} = 1.0~\textrm{eV}^2$ and $\sin^2 2\theta_{24} = 0.1$. Furthermore, all these figures show transition probabilities for four values of the square of the coupling that mediates the decay, $g^2$. 
The relationship between this coupling, $g$, the mass of the fourth neutrino mass state, $m_4$, and its lifetime, $\tau$, is~\cite{Moss:2017pur} \begin{equation} {\tau} = \frac{16 \pi}{g^2 m_4}, \label{eq:coupling_mass_lifetime} \end{equation} where it has been approximated that the lightest mass state, $\nu_1$, is massless. The traditional $3+1$ model corresponds to $g^2 = 0$. Decay of the fourth mass state causes a dampening of the oscillation between the muon and sterile flavors, shown in Fig.~\ref{fig:osc_decay_baseline}. In the top panel, the scenario with $g^2 = 0$ causes oscillations that appear in the true neutrino cosine zenith distribution, shown in the oscillograms in the publication in~\Cref{sec:decay_prd} and later in~\Cref{sec:oscillgorams_and_expected_event_distributions}. This effect disappears very quickly with non-zero decay-mediating coupling, $g^2$. While IceCube has good angular resolution, the angular binning and the energy resolution wash out these fine features in the reconstructed event distribution, as will be shown in~\Cref{sec:oscillgorams_and_expected_event_distributions}. Decay of the fourth mass state shifts the disappearance maximum to a longer baseline, shown in Fig.~\ref{fig:osc_decay_baseline}. As $g^2$ goes from zero to $4\pi$, there is a crossover in disappearance probabilities at fixed baseline. The shift in the position of the disappearance maximum moves the resonance in cosine zenith. This will be observable in the expected event distributions. Decay of the fourth mass state widens the resonance seen in Fig.~\ref{fig:osc_decay_escan_antinu} (top). Below 200~GeV, oscillation between the muon and tau flavors becomes significant. This appears as disappearance of the muon flavor in Figs.~\ref{fig:osc_decay_escan_antinu} and~\ref{fig:osc_decay_escan_nu}, without an associated increase of the sterile flavor.
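Eq.~(\ref{eq:coupling_mass_lifetime}) is written in natural units; the short sketch below (the constant and example values are illustrative) converts the rest-frame lifetime to seconds for parameter values of the size considered here:

```python
import math

HBAR_EV_S = 6.582e-16  # hbar in eV*s, converts a lifetime in 1/eV to seconds

def tau_seconds(g2, m4_eV):
    """Rest-frame lifetime tau = 16*pi / (g^2 * m4), converted to seconds.

    As in the text, the lightest mass state is approximated as massless.
    """
    return (16.0 * math.pi / (g2 * m4_eV)) * HBAR_EV_S

# For g^2 = 2.5*pi and m4 = 1 eV: tau = 6.4/eV, i.e. about 4.2e-15 s.
tau = tau_seconds(2.5 * math.pi, 1.0)
```

Note that this is the rest-frame lifetime; a state boosted to TeV energies with an eV-scale mass lives longer in the lab frame by the Lorentz factor $E/m_4$.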
At the highest energies, the Earth becomes opaque to neutrinos: a muon neutrino is likely to interact before crossing the diameter of the Earth. This causes the dramatic disappearance of muon flavor without associated increase in the sterile flavor. \begin{figure}[h!] \centering \includegraphics[width=\textwidth]{figs_pheno/osc_decay_track_1TeV.pdf} \includegraphics[width=\textwidth]{figs_pheno/osc_decay_track_2300GeV.pdf} \caption[Transition probabilities versus baseline along Earth's diameter for 3+1+decay]{Transition probabilities of a 1~TeV (top) and 2.3~TeV (bottom) muon antineutrino traversing the diameter of the Earth for an unstable sterile neutrino model with invisible decay and parameters $\Delta m^2_{41} = 1.0 \: \textrm{eV}^2$ and $\sin^2 2\theta_{24} = 0.1$. The four different colors correspond to four values of the decay-mediating coupling, $g^2$, where $g^2 = 0$ corresponds to a traditional 3+1 model, i.e. no neutrino decay. The solid curves correspond to muon antineutrino survival probabilities. The dashed curves correspond to the probability of antineutrino oscillating into the sterile flavor.} \label{fig:osc_decay_baseline} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=\textwidth]{figs_pheno/probability_escan_g2_0-4pi_stacked.pdf} \caption[Transition probabilities versus energy for antineutrinos traversing the Earth for 3+1+decay]{Probabilities, shown as a function of energy, of muon antineutrinos traversing the diameter of the Earth to (top) survive in the muon flavor and (bottom) transition to the sterile flavor, assuming an unstable sterile neutrino model with invisible decay and parameters $\Delta m^2_{41} = 1.0 \: \textrm{eV}^2$ and $\sin^2 2\theta_{24} = 0.1$. The four different colors correspond to four values of the decay-mediating coupling, $g^2$, where $g^2 = 0$ corresponds to a traditional 3+1 model, i.e. no neutrino decay. 
At the highest energies, the survival probability of muon antineutrinos goes to zero because of the likelihood of interactions within the Earth.} \label{fig:osc_decay_escan_antinu} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=\textwidth]{figs_pheno/probability_escan_nu_g2_0-4pi_stacked.pdf} \caption[Transition probabilities versus energy for neutrinos traversing the Earth for 3+1+decay]{Probabilities, shown as a function of energy, of muon neutrinos traversing the diameter of the Earth to (top) survive in the muon flavor and (bottom) transition to the sterile flavor, assuming an unstable sterile neutrino model with invisible decay and parameters $\Delta m^2_{41} = 1.0 \: \textrm{eV}^2$ and $\sin^2 2\theta_{24} = 0.1$. The four different colors correspond to four values of the decay-mediating coupling, $g^2$, where $g^2 = 0$ corresponds to a traditional 3+1 model, i.e. no neutrino decay. At the highest energies, the survival probability of muon neutrinos goes to zero because of the likelihood of interactions within the Earth.} \label{fig:osc_decay_escan_nu} \end{figure} \chapter{Results} \label{chap:results} Good agreement between data and the Monte Carlo description of the best-fit point is found. The projected distributions of reconstructed muon energy and reconstructed cosine zenith angle for both the data and the best-fit expectation are shown in Fig.~\ref{fig:1ddist}. The best-fit distribution accounts for the best-fit sterile parameters and the best-fit systematic values. Pearson $\chi^2$ tests assuming three degrees of freedom yield p-values of 46\% and 43\% for the energy proxy and cosine zenith distributions, respectively. 
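The goodness-of-fit numbers just quoted are survival-function evaluations of a $\chi^2$ distribution. A minimal sketch (the $\chi^2$ value of 2.58 is inferred here from the quoted 46\%, not taken from the analysis, and the one-sided convention for the significance is an assumption):

```python
from scipy.stats import chi2, norm

def pearson_pvalue(chi2_value, ndof=3):
    """p-value of a Pearson chi^2 statistic with ndof degrees of freedom."""
    return chi2.sf(chi2_value, ndof)

# A chi^2 of about 2.58 with three degrees of freedom gives p ~ 46%,
# the size of the energy-proxy p-value quoted above.
p = pearson_pvalue(2.58)

# A p-value can also be quoted as a one-sided Gaussian significance,
# e.g. a 2.8% p-value corresponds to about 1.9 sigma:
sigma_equivalent = norm.isf(0.028)
```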
\begin{figure}[h] \centering \includegraphics[width=\textwidth]{figs_results/100percent_dist_1d_ratio.pdf} \caption[Projected energy and angular distribution of the data]{Projected reconstructed energy and cosine zenith distributions of the data compared to the best-fit expectation, including the best-fit systematics.} \label{fig:1ddist} \end{figure} \clearpage \begin{figure}[ht] \centering \includegraphics[width=4.in]{figs_results/data_distribution.pdf} \caption[Data distribution]{The binned, two-dimensional distribution of the eight-year dataset. The same dataset is used in~\cite{Axani:2020zdv, Aartsen:2020fwb} and this plot is a reproduction of a plot from those references.} \label{fig:data} \end{figure} The eight-year dataset contains 305,735 reconstructed events. The two-dimensional distribution of the data is shown in Fig.~\ref{fig:data}. The best-fit point occurs at $\Delta m^2_{41}$ = 6.6 eV$^2$, $\sin^2 2\theta_{24}$ = 0.33, and $g^2 = 2.5 \pi$. The three-neutrino hypothesis is rejected with a $p$-value of 2.8\%, assuming three degrees of freedom. The best-fit physics and nuisance parameters and their uncertainties are given in Table~\ref{table:results}. The reported $1\sigma$ ranges come from marginalizing over all other parameters. The result of the frequentist analysis is shown in Figs.~\ref{fig:freq_result} and \ref{fig:freq_result_3panels}. In Fig.~\ref{fig:freq_result}, each of the nine panels corresponds to a fixed value of $g^2$, and the solid, dashed, and dotted white curves represent the 90\%, 95\% and 99\% C.L. regions, respectively. In Fig.~\ref{fig:freq_result_3panels}, the three panels correspond to the three confidence levels, and the different colored curves correspond to different $g^2$ values. In both figures, the best fit point is marked with a star. \begin{figure}[h] \centering \includegraphics[width=\textwidth]{figs_results/frequentist_9panels.pdf} \caption[Result of the frequentist analysis]{Result of the frequentist analysis. 
``NBS'' means that this analysis did not include a bedrock systematic uncertainty. See \Cref{sec:bedrock}.} \label{fig:freq_result} \end{figure} \begin{figure}[h] \centering \includegraphics[width=\textwidth]{figs_results/frequentist_3panels.pdf} \caption[Result of the frequentist analysis]{Result of the frequentist analysis. ``NBS'' means that this analysis did not include a bedrock systematic uncertainty. See \Cref{sec:bedrock}.} \label{fig:freq_result_3panels} \end{figure} \begin{table}[h!] \begin{center} \begin{tabular}{ l c } \hline \hline \textbf{Parameter} & Best fit $\pm 1 \sigma$ \\ [0.5ex] \hline \hline \multicolumn{2}{c}{\textbf{Physics Parameters}}\\ \hline $\Delta m_{41}^2$ & $6.7^{+3.9}_{-2.5}$ eV$^{2}$\\ \hline $\sin^2 2 \theta_{24}$ & $0.33^{+0.20}_{-0.17}$\\ \hline $g^2$ & $2.5 \pi \pm 1.5 \pi$\\ \hline \multicolumn{2}{c}{\textbf{Detector parameters}}\\ \hline DOM efficiency & $0.9653 \pm 0.0004$ \\ \hline Bulk Ice Gradient 0 & $0.008 \pm 0.015$\\ \hline Bulk Ice Gradient 1 & $0.35 \pm 0.09$\\ \hline Forward Hole Ice & $-3.5 \pm 0.06$ \\ \hline \multicolumn{2}{c}{\textbf{Conventional flux parameters}}\\ \hline Normalization ($\Phi_{\mathrm{conv.}}$) & $1.4 \pm 0.1$ \\ \hline Spectral shift ($\Delta\gamma_{\mathrm{conv.}}$) & $0.071 \pm 0.001$ \\ \hline Atm. 
Density & $0.005 \pm 0.035$ \\ \hline Barr WM & $-0.03 \pm 0.04$ \\ \hline Barr WP & $-0.06 \pm 0.02$ \\ \hline Barr YM & $0.08 \pm 0.05$ \\ \hline Barr YP & $-0.21 \pm 0.05$ \\ \hline Barr ZM & $-0.012 \pm 0.004$ \\ \hline Barr ZP & $-0.031 \pm 0.002$ \\ \hline \multicolumn{2}{c}{\textbf{Astrophysical flux parameters}}\\ \hline Normalization ($\Phi_{\mathrm{astro.}}$) & $1.1 \pm 0.05$ \\ \hline Spectral shift ($\Delta\gamma_{\mathrm{astro.}}$) & $0.39 \pm 0.03$ \\ \hline \multicolumn{2}{c}{\textbf{Cross sections}}\\ \hline Cross section $\sigma_{\nu_\mu}$ & $0.998 \pm 0.002$ \\ \hline Cross section $\sigma_{\overline{\nu}_\mu}$ & $0.998 \pm 0.003$ \\ \hline Kaon energy loss $\sigma_{KA}$ & $-0.38 \pm 0.04$ \\ \hline \hline \end{tabular} \caption[Best-fit physics and nuisance parameter values.]{Best-fit physics and nuisance parameter values. The $1\sigma$ ranges are determined by marginalizing over all other parameters.} \label{table:results} \end{center} \end{table} \clearpage \begin{figure}[ht] \centering \includegraphics[width=4.in]{figs_results/best_fit_osc.pdf} \caption[Oscillogram for best-fit point]{Oscillogram for the best-fit point. The lines at 10~TeV and 30~TeV indicate the true neutrino energy below which about 90\% and 99\% of the events are expected to be, respectively.} \label{fig:best_fit_osc} \end{figure} The oscillogram corresponding to the best-fit point is shown in Fig.~\ref{fig:best_fit_osc}. The horizontal line at 10~TeV indicates the true neutrino energy below which about 90\% of the events are expected to be, based on simulation and assuming the prior values of the nuisance parameters. The horizontal line at 30~TeV indicates the true energy below which about 99\% of the events are expected to be. The expected distribution of the best-fit point and data are each compared to a reference model in Fig.~\ref{fig:quilt}.
The reference model is the expectation associated with the three-neutrino hypothesis and the systematic values from the best-fit point. These comparisons are shape only; overall normalization effects have been removed. Both plots show a deficit of events in the upper left corner, a relative excess of events in the upper right corner, and a modest deficit of events in the bottom left corner. Low statistics at the highest energies yield large fluctuations in the data plot. \begin{figure}[htbp] \centering \includegraphics[width=4.in]{figs_results/quilt_bestfit_null_shape_percent.pdf} \includegraphics[width=4.in]{figs_results/quilt_data_null_shape_percent.pdf} \caption[Comparison of best-fit expectation and data to reference model]{(Top) The shape-only percent difference between the best fit expectation and a reference model. (Bottom) The shape-only percent difference between the data and a reference model. The reference model is the expectation associated with the three-neutrino model and the best-fit systematics. Overall normalization effects have been removed.} \label{fig:quilt} \end{figure} \clearpage \begin{figure}[h] \centering \includegraphics[width=4.in]{figs_results/pulls_bf_2panel.pdf} \caption[Systematic and data pulls at best-fit point]{Systematic and data pulls at the best-fit point. The top panel shows the systematic pulls for each of the nuisance parameters. The main panel shows the data pulls.} \label{fig:pulls_bf} \end{figure} Figure~\ref{fig:pulls_bf} shows the data and systematic pulls for the best-fit point. The data pull for the $i^{\rm{th}}$ bin is defined as \begin{equation} \rm{Pull}_{\it{i}} \equiv \frac{Data_{\it{i}} - Expectation_{\it{i}}}{\sigma_{Expectation_{\it{i}}}}, \end{equation} where $\sigma_{\rm{Expectation}_i}$ is the asymmetrical Poisson error of the expectation. The Poisson error is calculated using the Garwood method~\cite{10.2307/2333958, revstat2012Patil, https://doi.org/10.1002/bimj.4710350716}.
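The Garwood construction builds an exact central Poisson confidence interval from $\chi^2$ quantiles. The sketch below illustrates the construction and the resulting asymmetric pull (this is an illustration, not necessarily the analysis's exact implementation):

```python
from scipy.stats import chi2

def garwood_interval(n, cl=0.6827):
    """Exact central confidence interval for a Poisson mean, given count n.

    Lower edge: 0.5*chi2_ppf(alpha/2, 2n); upper: 0.5*chi2_ppf(1-alpha/2, 2n+2).
    """
    alpha = 1.0 - cl
    lo = 0.5 * chi2.ppf(alpha / 2.0, 2.0 * n) if n > 0 else 0.0
    hi = 0.5 * chi2.ppf(1.0 - alpha / 2.0, 2.0 * n + 2.0)
    return lo, hi

def data_pull(data, expectation):
    """(Data - Expectation) / sigma, with sigma taken from the side of the
    asymmetric interval on which the data fall."""
    lo, hi = garwood_interval(expectation)
    sigma = hi - expectation if data >= expectation else expectation - lo
    return (data - expectation) / sigma
```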
The systematic pull for the $i^{\rm{th}}$ nuisance parameter is defined as \begin{equation} \rm{Pull_{\it{i}}} \equiv \frac{Fit_{\it{i}} - Prior\: Center_{\it{i}}}{Prior\: Width_{\it{i}}}. \end{equation} All systematics pull less than about 1$\sigma$, with the exception of the cosmic ray spectral shift, which pulls $2.4\sigma$. The data pulls appear roughly uniformly distributed. The data and systematic pulls for the fit to the three-neutrino scenario and the best-fit of the subset of points corresponding to the traditional $3+1$ model are given in~\Cref{sec:app_results_pulls}. The fit values of the nuisance parameters for each point in the scan are given in~\Cref{sec:app_results_nuisance_values}. \begin{figure}[ht] \centering \includegraphics[width=\textwidth]{figs_results/bayes_9panels.pdf} \caption[Result of the Bayesian analysis]{Result of the Bayesian analysis. ``NBS'' means the analysis did not include a bedrock systematic uncertainty. See \Cref{sec:bedrock}.} \label{fig:bayes_result} \end{figure} The result of the Bayesian analysis is shown in Fig.~\ref{fig:bayes_result}. The best model has the parameters $\Delta m^2_{41}$ = 6.6 eV$^2$, $\sin^2 2\theta_{24}$ = 0.33, and $g^2 = 1.5 \pi$. The Bayes factor is defined as the ratio of the evidence integral for a particular sterile neutrino model to the evidence integral for the three-neutrino model. The Bayes factor for the best model is 0.025, which corresponds to very strong evidence in favor of the best model. The model corresponding to the frequentist best-fit point is similarly favored over the three-neutrino model; the Bayes factor is 0.027. The posterior distributions of each of the nuisance parameters for the best model are shown in Fig.~\ref{fig:decay_posterior_alone}. The correlation between all the nuisance parameters is shown in Fig.~\ref{fig:correlation}.
\begin{figure}[ht] \centering \includegraphics[height=8.5in]{figs_results/decay_posterior_reshape.pdf} \caption{Posterior distributions of the nuisance parameters for the best model.} \label{fig:decay_posterior_alone} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=\textwidth]{figs_results/correlation.pdf} \caption{Correlation between the nuisance parameters.} \label{fig:correlation} \end{figure} \clearpage The constraining power of the frequentist and Bayesian analyses on the value of $g^2$ is compared in Fig.~\ref{fig:g2_threenu}. The black curve shows the $-2\Delta$LLH value for each scanned value of $g^2$, profiling over $\Delta m_{41}^2$ and $\sin^2 2 \theta_{24}$. This is plotted on the left y-axis. The red curve shows the profiled value of the $\log_{10}$(Bayes factor) for each value of $g^2$, and is plotted on the right y-axis. The two y-axes have a common scale. The value of $-2\Delta$LLH for the three-neutrino model is 9.06, the top of the figure, while on the right hand side, $\log_{10}$(Bayes factor)$=0$ corresponds to the three-neutrino model. The value of $g^2 = 0$ is rejected with a $p$-value of 4.9\%. \begin{figure}[ht] \centering \includegraphics[width=4in]{figs_results/g2_threenu.pdf} \caption[$-2\Delta$LLH and $\log_{10}$(Bayes factor) versus $g^2$]{Profiled -$2\Delta$LLH, plotted on the left y-axis, and $\log_{10}$(Bayes factor), plotted on the right y-axis, versus $g^2$. ``NBS'' means that this analysis did not include a bedrock systematic uncertainty.} \label{fig:g2_threenu} \end{figure} \clearpage \section{Discussion} As in the traditional 3+1 eight-year search, the only nuisance parameter that pulls greater than 2$\sigma$ is the cosmic ray spectral index shift, which pulls $2.4\sigma$. For the three-neutrino fit, this parameter pulls $2.2\sigma$. For the best fit of the subset of points with $g^2=0$, this parameter pulls $2.4\sigma$. An upgraded systematic treatment for the cosmic ray flux is warranted.
In particular, recent data show a break in the cosmic ray spectral index that is not accounted for in the cosmic ray model used here~\cite{An:2019wcw,Alfaro:2017cwx}. Nevertheless, the effect of the cosmic ray spectral index shift is purely in reconstructed energy, while the signal shape is two-dimensional. In this analysis, the best fit of the subset of sampled points with $g^2 = 0$ is $\Delta m_{41}^2 = 4.2~\textrm{eV}^2$ and $\sin^2 2\theta_{24} = 0.11$, which is consistent with the result from the traditional 3+1 eight-year search. Figure~\ref{fig:delta_pull} shows the differences in pulls between the best-fit point and, respectively, the three-neutrino fit and the best no-decay fit point. The red bins are where the best-fit expectation decreases the pulls. The blue bins are where the best-fit expectation increases the pulls. In comparison to both the three-neutrino fit and the traditional 3+1 best-fit, the overall best fit here reduces data pulls near the horizon at the higher energies, and for straight up-going events, especially at the lower energies. Statistical uncertainty for straight up-going events at the highest energies is large, so while the expected effect of the signal is large there, as shown in Fig.~\ref{fig:quilt} (top), data pulls are expected to be small. \begin{figure}[htb] \centering \includegraphics[width=3.7in]{figs_results/delta_pulls_bf_3nu_2panel.pdf} \includegraphics[width=3.7in]{figs_results/delta_pulls_bf_meows_2panel.pdf} \caption[Change in pulls by best-fit point]{(Top) Difference in the absolute value of pulls between the best-fit point and the three neutrino model.
(Bottom) Difference in the absolute value of pulls between the best-fit point and the best-fit of the subset of points corresponding to no decay.} \label{fig:delta_pull} \end{figure} \clearpage \begin{figure}[ht] \centering \includegraphics[width=3in]{figs_results/frequentist_otherpoint_30} \caption[The $g^2=3\pi$ panel from the frequentist result.]{The $g^2=3\pi$ panel from the frequentist result. The best-fit point from combining global fits to short baseline data with one year of IceCube data is marked by an `x'. ``NBS'' means that this analysis did not include a bedrock systematic uncertainty. See \Cref{sec:bedrock}.} \label{fig:otherpoint} \end{figure} The best-fit point from combining global fits to short baseline data with one year of IceCube data, discussed in \Cref{sec:fits_prd}, is $\Delta m_{41}^2 = 1.35~\textrm{eV}^2$, $\sin^2 2\theta_{24} = 0.05$ and $g^2=3.06\pi$. Rounding 3.06 to 3, the point is marked by an `x' on the frequentist result panel for $g^2 = 3\pi$ in Fig.~\ref{fig:otherpoint}. This point sits just outside the 95\% C.L. allowed region. \chapter{Status of 3+1 sterile neutrinos in IceCube} \section{Neutrino oscillations in matter} The experiment described in this thesis, IceCube, is uniquely capable of searching for a signature of sterile neutrinos that involves matter effects in oscillations. Neutrinos traversing matter experience different oscillation probabilities than those traversing vacuum~\cite{wolfenstein1978neutrino, mikheyev1986resonant, liu1998parametric, akhmedov2000parametric}. This is because the different neutrino flavors undergo different interactions with matter, therefore experiencing different potentials, modifying the Hamiltonian. In fact, oscillations in matter may have a larger amplitude than oscillations in vacuum. 
Following~\cite{Akhmedov:1999uz}, the Schr\"{o}dinger equation for a two-flavor approximation in vacuum can be written as \begin{equation} i \frac{d}{dt} \begin{pmatrix}\nu_\mu \\ \nu_s \end{pmatrix} = \begin{pmatrix} -\frac{\Delta m^2}{4E} \cos 2 \theta_0 & \frac{\Delta m^2}{4E} \sin 2 \theta_0 \\ \frac{\Delta m^2}{4E} \sin 2 \theta_0 & \frac{\Delta m^2}{4E} \cos 2 \theta_0 \end{pmatrix} \begin{pmatrix}\nu_\mu \\ \nu_s \end{pmatrix}, \end{equation} where $\sin^2 2\theta_0$ is the oscillation amplitude in vacuum. In the case of oscillations in matter, the $\nu_\mu$ state experiences a potential due to neutral current (NC) interactions off quarks, while the $\nu_s$ state does not. This potential is \begin{equation} V_{\textrm{NC}} = \mp \frac{\sqrt{2}}{2} G_F N_n, \end{equation} where $-(+)$ corresponds to $\nu_\mu$($\bar{\nu}_\mu$), $G_F$ is the Fermi constant and $N_n$ is the neutron number density. Adding this potential to the effective Hamiltonian, \begin{equation} i \frac{d}{dt} \begin{pmatrix}\nu_\mu \\ \nu_s \end{pmatrix} = \begin{pmatrix} -\frac{\Delta m^2}{4E} \cos 2 \theta_0 \mp \frac{\sqrt{2}}{2} G_F N_n & \frac{\Delta m^2}{4E} \sin 2 \theta_0 \\ \frac{\Delta m^2}{4E} \sin 2 \theta_0 & \frac{\Delta m^2}{4E} \cos 2 \theta_0 \end{pmatrix} \begin{pmatrix}\nu_\mu \\ \nu_s \end{pmatrix}, \end{equation} and assuming constant matter density, one can diagonalize the effective Hamiltonian to find the oscillation amplitude in matter: \begin{equation} \label{eq:msw_amp} \sin^2 2 \theta_{\rm{matter}} = \frac{\big(\frac{\Delta m^2}{2E}\big)^2 \sin^2 2 \theta_0}{ \big(\frac{\Delta m^2}{2E} \cos 2 \theta_0 \pm \frac{\sqrt{2}}{2} G_F N_n \big)^2 + \big(\frac{\Delta m^2}{2E}\big)^2 \sin^2 2 \theta_0}. \end{equation} The eigenstates in matter are a linear combination of the flavor eigenstates.
A critical energy exists where regardless of the magnitude of the oscillation amplitude in vacuum, the oscillation amplitude in matter becomes unity either for neutrinos or antineutrinos. This critical energy is \begin{equation} E_{\rm{critical}} = \mp \frac{\Delta m^2 \cos 2\theta_0}{\sqrt{2} G_F N_n}. \end{equation} If $\Delta m^2$ is positive and the mixing angle is relatively small, that is, $|\theta_0|< \frac{\pi}{4}$, the resonance occurs for antineutrinos rather than neutrinos. For the following values, \begin{equation} \begin{split} \begin{aligned} \Delta m_{41}^2 &\approx 1~\textrm{eV}^2 \\ \cos 2\theta_0 &\approx 1 \\ \rho_{\textrm{Earth}} &\approx 6~\textrm{g/cm}^3, \end{aligned} \end{split} \end{equation} where $\rho_{\textrm{Earth}}$ is the average density of the Earth, and with the approximation that half of the Earth's mass is from neutrons, one finds the critical energy to be about 3~TeV. This suggests that for sterile parameter values consistent with global fit findings, a resonant transition would occur for antineutrinos traversing the Earth at TeV energies~\cite{Nunokawa:2003ep, Choubey:2007ji, Razzaque:2011ab}. This derivation made two simplifying assumptions: constant density and two-flavor oscillations. Reality is more complicated. Oscillation probabilities for neutrinos traversing the diameter of the Earth, accounting for both the radially varying Earth density and a full four-neutrino oscillation framework, are calculated with \texttt{nuSQuIDS} and shown in the Figs.~\ref{fig:matter_oscillation_prob_vs_baseline} and \ref{fig:matter_oscillation_prob_vs_energy}. In these calculations, the sterile parameters are \begin{equation} \begin{split} \begin{aligned} \Delta m^2 &= 1~\textrm{eV}^2 \\ \sin^2 2 \theta_{24} &= 0.1, \\ \end{aligned} \end{split} \end{equation} and all other sterile mixing angles and CP-violating angles are zero. The neutrinos are born in the muon flavor. 
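Before turning to the full \texttt{nuSQuIDS} calculation, the two-flavor expressions above can be evaluated numerically. The sketch below (constants rounded, names illustrative) implements Eq.~(\ref{eq:msw_amp}) with the resonant sign of the potential, together with the critical-energy estimate; with these round inputs the resonance lands at the TeV scale, consistent with the estimate in the text:

```python
import math

G_F = 1.166e-23           # Fermi constant in eV^-2 (natural units)
HBARC_EV_CM = 1.97327e-5  # hbar*c in eV*cm, converts cm^-3 to eV^3

# Matter potential for rho = 6 g/cm^3 with half the mass in neutrons:
m_neutron_g = 1.675e-24
N_n_eV3 = (0.5 * 6.0 / m_neutron_g) * HBARC_EV_CM ** 3
V = (math.sqrt(2.0) / 2.0) * G_F * N_n_eV3  # (sqrt(2)/2) G_F N_n, in eV

def matter_amplitude(E_eV, dm2=1.0, sin2_2theta0=0.1, V_eV=V):
    """Eq. (msw_amp) for the resonant (antineutrino) sign of the potential."""
    k = dm2 / (2.0 * E_eV)
    cos2t = math.sqrt(1.0 - sin2_2theta0)
    return (k ** 2 * sin2_2theta0) / ((k * cos2t - V_eV) ** 2
                                      + k ** 2 * sin2_2theta0)

# Critical energy for dm2 = 1 eV^2 and cos(2 theta0) ~ 1:
E_crit = 1.0 / (2.0 * V)  # = dm2 / (sqrt(2) G_F N_n), in eV; TeV scale
```

Near $E_{\rm{critical}}$ the amplitude approaches unity even though the vacuum amplitude is only 0.1, which is the resonant enhancement at the heart of the IceCube search.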
In Fig.~\ref{fig:matter_oscillation_prob_vs_baseline}, the oscillation probabilities for neutrinos and antineutrinos are shown as a function of baseline across the diameter of the Earth, for the energy 2.3~TeV. At this energy, the probability of oscillating from the muon flavor into either the electron or tau flavors is minuscule. The left panel corresponds to neutrinos; only small oscillations from the muon to sterile flavor are observed, shown in magenta. The right panel corresponds to antineutrinos; a resonant oscillation into the sterile flavor is observed. The magnitude of this conversion is much greater than 0.1, which is the value of $\sin^2 2\theta_{\mu\mu}$ and would be the amplitude of the oscillation in vacuum. \begin{figure}[ht] \centering \includegraphics[width=\textwidth]{figs_status/baseline_for_seminar_bf_2.pdf} \caption[Traditional 3+1 oscillation probabilities versus baseline along Earth's diameter]{Oscillation probabilities as a function of baseline for a muon neutrino (left) and a muon antineutrino (right) traversing the diameter of the Earth for $\Delta m_{41}^2 = 1~\textrm{eV}^2$, $\sin^2 2 \theta_{24} = 0.1$, and 2.3~TeV of energy. Probabilities are calculated with \texttt{nuSQuIDS}~\cite{nusquids}.} \label{fig:matter_oscillation_prob_vs_baseline} \end{figure} \begin{figure}[hb] \centering \includegraphics[width=0.75\textwidth]{figs_status/prob_vs_energy.pdf} \caption[Traditional 3+1 oscillation probabilities versus energy for neutrinos traversing the Earth]{Oscillation probabilities as a function of energy for muon neutrinos (thin lines) and muon antineutrinos (thick lines) after having traversed the diameter of the Earth, for $\Delta m_{41}^2 = 1~\textrm{eV}^2$ and $\sin^2 2 \theta_{24} = 0.1$. The probability of remaining in the muon flavor is shown in green.
The probability of oscillating into the sterile flavor is shown as magenta.} \label{fig:matter_oscillation_prob_vs_energy} \end{figure} Figure~\ref{fig:matter_oscillation_prob_vs_energy} shows the same phenomenon, over the energy range between 100~GeV and 100~TeV. This figure only shows the final oscillation probabilities across the diameter of the Earth. The thick magenta line shows the probability of a muon antineutrino oscillating into the sterile flavor, and the peak of the resonance is found at 2.3~TeV, corresponding to the scenario shown in Fig.~\ref{fig:matter_oscillation_prob_vs_baseline}. The thick green line shows the probability of such a muon antineutrino remaining in the muon flavor. The thin lines show the corresponding probabilities for neutrinos, rather than antineutrinos. At the lowest energies, oscillation into the electron and tau flavors is non-negligible, which accounts for the shown probabilities not adding up to unity. \section{Brief description of IceCube} IceCube is a neutrino detector that can observe TeV neutrinos originating from across the Earth, allowing for a unique search for sterile neutrinos that makes use of the matter effect parametric resonance discussed previously. A brief description of IceCube is given here to facilitate understanding of the rest of this chapter and the following one. A longer description is given in Chapter~\ref{chap:icecube}. IceCube is a gigaton ice-Cherenkov neutrino detector located deep in the ice at the South Pole. Over 5000 optical sensors are deployed in the ice to observe light produced in neutrino interactions. IceCube observes atmospheric and astrophysical neutrinos coming from all directions, although the work in this thesis uses upward-going neutrinos, that is, those originating from below the horizon. IceCube detects neutrinos with energies above 100~GeV, and as high as several PeV. 
The atmospheric and astrophysical neutrino fluxes fall steeply with energy, while the efficiency of the detector increases with energy. This results in a peak of events near one TeV~\cite{Aartsen:2015rwa}, which is shown in Fig.~\ref{fig:event_dist_atmo_astro}. \begin{figure}[h] \centering \includegraphics[width=0.65\textwidth]{figs_status/event_dist.pdf} \caption[Predicted atmospheric and astrophysical event distribution]{Predicted event distribution of muons associated with atmospheric and astrophysical neutrinos. This prediction corresponds to a livetime of 7.6 years. These events originate below the horizon.} \label{fig:event_dist_atmo_astro} \end{figure} \section{Eight-year search results} A one-year search for an eV-scale sterile neutrino at TeV energies was performed in 2015 in IceCube and found no evidence for one~\cite{Jones:2015bya, delgado2015new, TheIceCube:2016oqi}. A subsequent eight-year search, with a fifteen-fold increase in statistics, was performed~\cite{Axani:2020zdv, Aartsen:2020iky, Aartsen:2020fwb}. This search made use of the atmospheric neutrino flux systematic treatment described in Chapter~\ref{chap:flux}. Both a frequentist analysis and a Bayesian analysis were performed. The result of the frequentist analysis is shown in Fig.~\ref{fig:meows_freq} and the result of the Bayesian analysis is shown in Fig.~\ref{fig:meows_bayes}. The frequentist analysis found a best-fit point at $\Delta m^2_{41} = 4.5~\textrm{eV}^2$ and $\sin^2 2\theta_{24} = 0.10$, and found that the null hypothesis was rejected with a $p$-value of 8\%. The Bayesian analysis found a best-model location with approximately the same sterile parameter values, and found that model to be strongly preferred, by a factor of about 10, to the null hypothesis, that is, no sterile neutrino.
\begin{figure}[h] \centering \includegraphics[width=0.65\textwidth]{figs_status/Bayes_result_dm2_2.pdf} \caption[Frequentist, traditional 3+1, eight-year result]{Frequentist result from an eight-year search for sterile neutrinos in IceCube. The best-fit point is marked with a star. The confidence level contours are found assuming Wilks' theorem, and are shown at 90\% and 99\% in dashed and solid blue curves, respectively. The median sensitivity at 99\% confidence level is found from pseudoexperiments, and is shown as a thin dashed white curve. The $1\sigma$ and $2\sigma$ ranges of the sensitivity are shown in green and yellow bands, respectively. Results from other experiments are shown as black curves, and they reject the region to their right. Figure from~\cite{Aartsen:2020iky}.} \label{fig:meows_freq} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.75\textwidth]{figs_status/Bayes_result_dm2_1-2.pdf} \caption[Bayesian, traditional 3+1, eight-year result]{Bayesian result from an eight-year search for sterile neutrinos in IceCube. The logarithm of the Bayes factor~\cite{Jeffreys:1939xee} for a particular sterile model relative to the null hypothesis is shown. The best-model point is marked with a star. Figure from~\cite{Aartsen:2020iky}.} \label{fig:meows_bayes} \end{figure} \section{Jumping off point for the work in this thesis} The result of the eight-year search for sterile neutrinos in IceCube is intriguing. The best-fit point and the best-model point are in the region of interest from global fits. This result is likely to modify the global fit results. Nevertheless, the result is not very statistically significant and tension is likely to remain in the global fits. This motivates an alternative model that could resolve the tension. The rest of this thesis describes such a sterile neutrino model and a search for it in the IceCube experiment.
\section{Introduction} A graph $G=(V,E)$ is a {\em pairwise compatibility graph} (PCG) if there exists a tree $T$, an edge-weight function $w$ that assigns positive values to the edges of $T$ and two non-negative real numbers $d_{min}$ and $d_{max}$, with $d_{min} \leq d_{max}$, such that each vertex $u \in V$ is uniquely associated to a leaf $l_u$ of $T$ and there is an edge $(u,v) \in E$ if and only if $d_{min} \leq d_{T,w} (l_u, l_v) \leq d_{max}$ where $d_{T,w} (l_u, l_v)$ is the sum of the weights of the edges on the unique path from $l_u$ to $l_v$ in $T$. In such a case, we say that $G$ is a PCG of $T$ for $d_{min}$ and $d_{max}$; in symbols, $G=PCG(T, w, d_{min}, d_{max})$. It is clear that if a tree $T$, an edge-weight function $w$ and two values $d_{min}$ and $d_{max}$ are given, the construction of a PCG is a trivial problem. We focus on the reverse of this problem, i.e., given a graph $G$ we must find a tree $T$, an edge-weight function $w$ and suitable values, $d_{min}$ and $d_{max}$, such that $G=PCG(T,w,d_{min}, d_{max})$. Such a problem is called the {\em pairwise compatibility tree construction problem}. The concept of pairwise compatibility was introduced in \cite{Kal03} in a computational biology context, where the weight function $w$ has positive values, as it represents a non-null distance. Several specific graph classes are known to be pairwise compatibility graphs, e.g., cliques and disjoint unions of cliques \cite{B}, chordless cycles and single chord cycles \cite{YHR09}, some particular subclasses of bipartite graphs \cite{YBR10}, and some particular subclasses of split matrogenic graphs \cite{CPS12}. Furthermore, a lot of work has been done concerning particular subclasses of PCGs, such as leaf power graphs \cite{B} and exact leaf power graphs \cite{BLR10}, and lately a new subclass has been introduced, namely the min-leaf power graphs \cite{CPS12}.
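The forward direction mentioned above — building $PCG(T,w,d_{min},d_{max})$ from a given weighted tree — is indeed trivial. A minimal sketch (the helper names are hypothetical, not from this note):

```python
import itertools

def pcg_from_tree(leaves, weighted_edges, d_min, d_max):
    """Return the edge set of PCG(T, w, d_min, d_max): vertices are the
    leaves of T, and {u, v} is an edge iff the weighted path distance
    between the corresponding leaves lies in [d_min, d_max].

    `weighted_edges` maps tree edges (a, b) to positive weights."""
    adjacency = {}
    for (a, b), weight in weighted_edges.items():
        adjacency.setdefault(a, []).append((b, weight))
        adjacency.setdefault(b, []).append((a, weight))

    def leaf_distance(src, dst):
        # Paths in a tree are unique, so a simple DFS suffices.
        stack = [(src, None, 0)]
        while stack:
            node, parent, dist = stack.pop()
            if node == dst:
                return dist
            for nxt, weight in adjacency[node]:
                if nxt != parent:
                    stack.append((nxt, node, dist + weight))
        raise ValueError("leaves are not connected in the tree")

    return {frozenset(pair)
            for pair in itertools.combinations(leaves, 2)
            if d_min <= leaf_distance(*pair) <= d_max}
```

For instance, on the star with centre `c` and leaf edges of weights 1, 1 and 2, the thresholds $d_{min}=d_{max}=3$ select exactly the two leaf pairs at distance 3. The hard direction, constructing a suitable tree from a given graph, is the subject of this note.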
Initially, the authors of \cite{Kal03} conjectured that every graph is a PCG, but this conjecture has been refuted in \cite{YBR10}, where a particular bipartite graph with 15 nodes has been proved not to be a PCG. This latter result gave rise to the present research, as it is natural to ask for the smallest graph that is not a PCG. \medskip A {\em caterpillar} $\Gamma_n$ is an $n$-leaf tree for which any leaf is at distance exactly one from a central path called the {\em spine}. A {\em centipede} is an $n$-leaf caterpillar in which the edges incident to the leaves produce a perfect matching. Deleting from an $n$-leaf centipede the degree two vertices and merging the two edges incident to each of these vertices into a unique edge results in a new caterpillar that we will call a {\em reduced centipede} and denote by $\Pi_n$ (as an example, $\Pi_5$ is depicted at the top left of Fig. \ref{fig.5nodes}). Caterpillars are interesting trees in the context of PCGs, as in most of the cases the pairwise compatibility tree construction problem admits as solution a tree that is in fact a caterpillar. For this reason, we focus on this special kind of tree. In this note, we prove that all the graphs with at most seven vertices are PCGs. More precisely, we demonstrate the following results: \begin{itemize} \item If $G=PCG(\Gamma_n,w,d_{min}, d_{max})$, then there always exist a new edge-weight function $w'$ and a new value $d'_{max}$ such that $G=PCG(\Pi_n,w',d_{min}, d'_{max})$ also holds. \item It is well known that graphs with five or fewer vertices are all PCGs, and the witness trees -- not all caterpillars -- are shown in \cite{P02}. For each of these graphs we prove that it is a PCG of a reduced centipede, providing accordingly an edge-weight function $w$ and the two values $d_{min}$ and $d_{max}$. \item All the graphs with six and seven vertices, except for the wheel $W_7$ (i.e.
the graph formed by connecting a single vertex to all vertices of a cycle of length six -- see Figure \ref{fig.wheel}.a), are PCGs of a reduced centipede and, for each of them, we provide the edge-weight function $w$ and the two values $d_{min}$ and $d_{max}$ such that it is $PCG(\Pi_n, w, d_{min}, d_{max})$, $n=6, 7$. \item As for the wheel $W_7$, it is known \cite{CFS} that $W_7$ is not a PCG of the reduced centipede $\Pi_7$ (and hence it is not a PCG of a caterpillar). We show that $W_7$ is a PCG of a tree that is not a caterpillar. \end{itemize} \section{Preliminaries} In this section we list some results that will turn out to be useful in the rest of the paper. Let $T$ be a tree such that there exist an edge-weight function $w$ and two non-negative values $d_{min}$ and $d_{max}$ such that $G=PCG(T,w,d_{min}, d_{max})$. Observe that if $T$ has at least $4$ vertices and contains a vertex $v$ of degree $2$, then we can construct a new tree $T'$ in which $v$ is eliminated, the two edges $(x,v)$ and $(v,y)$ incident to $v$ are merged into a unique edge $(x,y)$, and a new function $w'$ is defined from $w$ by only modifying the weight of the new edge, which is set equal to the sum of the weights of the old edges: $w'((x,y))=w((x,v))+w((v,y))$. It is easy to see that $G=PCG(T',w',d_{min}, d_{max})$. For this reason, from now on, we will assume that all the trees we handle do not contain vertices of degree two. \begin{proposition} \cite{CMPS} \label{prop.integer} Let $G=PCG(T,w, d_{min},d_{max})$, where $d_{min}, d_{max}$ and the weight $w(e)$ of each edge $e$ of $T$ are positive real numbers. Then it is possible to choose $\hat{w},\hat{d}_{min},\hat{d}_{max}$ such that for any $e$, the quantities $\hat{d}_{min},\hat{d}_{max}$ and $\hat{w}(e)$ are natural numbers and $G=PCG(T,\hat{w},\hat{d}_{min},\hat{d}_{max})$. \end{proposition} We prove here the following useful lemma: \begin{lemma} \label{lemma.1} Let $G=PCG(T,w, d_{min},d_{max})$.
It is possible to choose $\hat{w},\hat{d}_{min},\hat{d}_{max}$ such that $\min \hat{w}(e)=1$, where the minimum is computed over all the edges of $T$, and $G=PCG(T,\hat{w},\hat{d}_{min},\hat{d}_{max})$. \end{lemma} \begin{proof} According to Proposition \ref{prop.integer}, we can assume that the edge weights $w$ and the two values $d_{min},d_{max}$ are integers. Let $e_1, \ldots , e_n$ be the edges of $T$ incident to the leaves. Without loss of generality, we can assume $w(e_1)=\min_i w(e_i)$. We define $\hat{w}$ as follows: $\hat{w}(e_1)=1$ and for each $i=2, \ldots , n$ define $\hat{w}(e_i)=w(e_i)-w(e_1)+1$. Clearly, the function $\hat{w}$ is well defined as all its values are positive. As the weight of any edge incident to a leaf has been decreased by exactly $w(e_1)-1$ and the rest of the weights remain unchanged, for any two leaves $l_i,l_j$ it holds that $d_{T, \hat{w}}(l_i,l_j)=d_{T, w}(l_i,l_j)-2w(e_1)+2$. Let $\hat{d}_{min}=\mbox{max}\{d_{min}-2w(e_1)+2, 0 \}$ and $\hat{d}_{max}=d_{max}-2w(e_1)+2$. It is easy to see that $G=PCG(T,\hat{w},\hat{d}_{min},\hat{d}_{max})$; indeed, if $\hat{d}_{min}=0$, it means that no path weight was below $d_{min}$ with respect to $w$. \qed \end{proof} The previous results imply that it is not restrictive to assume that the weights and $d_{min}$ and $d_{max}$ are integers and that the smallest weight is $1$. Thus, in the rest of the paper we will use these assumptions. \section{PCGs of Caterpillars} In this section we prove that we can dispense with the different kinds of caterpillar structures and restrict attention to reduced centipedes only. \begin{theorem} \label{th.caterpillar} Let $G$ be an $n$-vertex graph, and let $\Gamma_n$ and $\Pi_n$ be an $n$-leaf caterpillar without degree 2 vertices and an $n$-leaf reduced centipede, respectively. Let $G=PCG(\Gamma_n,w,d_{min}, d_{max})$. It is possible to choose $w'$ and $d'_{max}$ such that $G=PCG(\Pi_n,w',d_{min}, d'_{max})$.
\end{theorem} \begin{proof} In order not to overburden the exposition, let $\Gamma = \Gamma_n$ and $\Pi = \Pi_n$. If $\Gamma$ is a reduced centipede, the claim is trivially proved, so assume it is not. We divide the proof into two steps. First, we define a non-negative edge-weight function $w''$, proving that $\Gamma$ weighted by $w$ and $\Pi$ weighted by $w''$ generate the same PCG $G$ for the same values $d_{min}$ and $d_{max}$. Then we modify $w''$ into a positive weight function $w'$ and introduce a new value $d'_{max}$, proving that $G$ is also $PCG(\Pi,w',d_{min}, d'_{max})$. Draw $\Gamma$ so that: i) the spine lies on a horizontal line, ii) all the leaves lie on a parallel line and iii) the edges between the spine and the leaves are represented as non-crossing line segments; number the leaves and the vertices of the spine from left to right $l_1, \ldots , l_n$ and $s_1, \ldots, s_k$, $k <n$, respectively. By drawing the reduced centipede $\Pi$ in a similar way, we number the leaves and the vertices of the spine from left to right by $m_1, \ldots , m_n$ and $t_2, \ldots, t_{n-1}$. We define the edge-weight function $w''$ as follows: \begin{itemize} \item let $p(l_i)$ be the unique vertex adjacent to $l_i$ in $\Gamma$; for each $1 < i < n$, define $w''((m_i, t_i))=w((l_i, p(l_i)))$; \item define $w''((m_1, t_2))=w((l_1, p(l_1)))$ and $w''((m_n, t_{n-1}))=w((l_n, p(l_n)))$; \item for each $2\leq i \leq n-2$, define $w''((t_i, t_{i+1}))=0$ if $p(l_i)=p(l_{i+1})$ in $\Gamma$; \item for each $2\leq i \leq n-2$, define $w''((t_i, t_{i+1}))=w((p(l_i), p(l_{i+1})))$ if $p(l_i) \neq p(l_{i+1})$ in $\Gamma$. \end{itemize} Observe that $w''$ is well defined, as $\Gamma$ has no degree 2 vertices.
It is quite easy to convince oneself that for each pair of leaves $l_i$ and $l_j$ in $\Gamma$, $d_{\Gamma, w}(l_i,l_j)$ is exactly the same as $d_{\Pi, w''}(m_i, m_j)$ and that $d_{min}$ and $d_{max}$ remain unchanged, so $G=PCG(\Pi, w'', d_{min}, d_{max})$. \medskip It remains to show that we can reassign the edge-weights of $\Pi$ in a way that any edge gets a positive weight and $\Pi$ is the pairwise compatibility tree of $G$. To this purpose, we denote by $E(H)$ the edge set of any graph $H$, and we introduce the following two quantities: $$ L=\min_{(u,v)\not\in E(G)}\left\{| d_{min}-d_{\Pi,w''}(l_{u},l_{v}) |,| d_{max}-d_{\Pi,w''}(l_{u},l_{v}) |\right\}, \qquad N=|\left\{e: e \in E(\Pi), w''(e)=0 \right\}|, $$ $L$ is the smallest distance between the quantities $d_{min},d_{max}$ and the weighted distances on the tree of the paths corresponding to non-edges of $G$; $N$ is the number of edges of $\Pi$ of weight $0$. Observe that, unless $G$ coincides with the clique $K_n$ (which is trivially a PCG of the reduced centipede), there always exists a pair of leaves such that their distance on $\Pi$ falls outside the interval $[d_{min}, d_{max}]$, and hence $L>0$. Furthermore, as the weight of any edge incident to a leaf in $\Pi$ is strictly greater than $0$, and in view of the hypothesis that the caterpillar $\Gamma$ is not a reduced centipede, it holds that $1 \leq N \leq n-3$ (the bound $n-3$ is reached when $\Gamma$ is a star). So, the value $\epsilon=\frac{L}{N+1}$ is well defined. Now define a new weight function $w'$ on $\Pi$ by assigning the weight $\epsilon$ to any edge of weight $0$. More formally, $w'(e)=w''(e)$ if $w''(e) \neq 0$ and $w'(e)=\epsilon$ otherwise. In this way the distance between any two leaves in $\Pi$ increases by at most $\epsilon N < L$. Set the new value $d'_{max}=d_{max}+\epsilon N$.
The following three observations conclude the proof: \begin{itemize} \item any distance between leaves in $\Pi$ that was strictly smaller than $d_{min}$ with respect to the weight function $w''$ remains so after this transformation, in view of the definition of $L$ and the fact that $\epsilon N < L$; \item any distance that was strictly greater than $d_{max}$ with respect to the weight function $w''$ is strictly greater than $d'_{max}$ due to the definition of $L$; \item any distance that was in the interval $[d_{min}, d_{max}]$ with respect to the weight function $w''$ is now in the interval $[d_{min}, d'_{max}]$. \qed \end{itemize} \end{proof} Observe that the previous statement suggests that we need not consider all kinds of caterpillars, but may restrict to reduced centipedes only. In the next section we exploit this result. \begin{figure}[!ht] \centering \includegraphics[width = \textwidth]{5nodes.eps} \caption{All the non-isomorphic connected cyclic graphs with 5 vertices, with their representation as PCGs of the reduced centipede (top left).} \label{fig.5nodes} \end{figure} \section{Graphs on at most seven vertices} In this section we show that all graphs with at most seven vertices, except for the wheel $W_7$, are PCGs of a reduced centipede. Analogously to what we did in the proof of Theorem \ref{th.caterpillar}, name the leaves of $\Pi_n$ from left to right $l_1, \ldots , l_n$ and the vertices of the spine from left to right $s_2, \ldots, s_{n-1}$. As, for any $n$, there exists a unique unlabeled reduced centipede with $n$ leaves, $\Pi_n$, in the following we consider its edges as ordered in the following way: $e_1= (l_1, s_2)$; $e_i=(l_i, s_i)$ for each $2 \leq i \leq n-1$; $e_n=(l_n, s_{n-1})$; finally, $e_{n+i-1}=(s_i, s_{i+1})$ for each $2 \leq i \leq n-2$. Now, the edge-weight function $w$ can be expressed as a vector $\vec{w}$ of length $2n-3$, whose component $w_i$ is a positive integer representing the weight assigned to edge $e_i$.
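The edge ordering just described is mechanical, and can be generated programmatically; a small sketch (the string labels `l`/`s` simply mirror the notation above):

```python
def reduced_centipede_edges(n):
    """Ordered edge list of the reduced centipede Pi_n: e_1 = (l_1, s_2),
    e_i = (l_i, s_i) for 2 <= i <= n-1, e_n = (l_n, s_{n-1}), then the
    spine edges e_{n+i-1} = (s_i, s_{i+1}) for 2 <= i <= n-2; the list
    has 2n-3 edges in total."""
    edges = [("l1", "s2")]
    edges += [(f"l{i}", f"s{i}") for i in range(2, n)]
    edges.append((f"l{n}", f"s{n - 1}"))
    edges += [(f"s{i}", f"s{i + 1}") for i in range(2, n - 1)]
    return edges
```

A weight vector $\vec{w}$ of length $2n-3$ then pairs component $w_i$ with the $i$-th edge of this list, which is the representation used in the tables referenced below.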
In Figure \ref{fig.5nodes} all the 18 connected non-isomorphic cyclic graphs with 5 vertices are depicted, together with the vector $\vec{w}$ and the values of $d_{min}$ and $d_{max}$ that witness that all of them are PCGs of $\Pi_5$. Observe that there are 21 connected non-isomorphic graphs on 5 vertices; we have omitted the 3 graphs that are trees, which are trivially PCGs. We recall that it was already proved in \cite{P02} that all the graphs with at most five vertices are PCGs, but the trees provided there were all different and not all caterpillars. \medskip As for graphs with 6 and 7 vertices, except for the wheel $W_7$, we get a similar result. For the sake of brevity we do not depict all these graphs (there are 112 connected non-isomorphic graphs with 6 vertices and 853 with 7 vertices), but the values of $\vec{w}$, $d_{min}$ and $d_{max}$ we obtained with the help of an enumerative C program can be found at the web page \url{https://sites.google.com/site/pcg6and7vertices/} .\\ Thus, we obtain the following result: \begin{lemma} \label{lemma:567} All graphs with at most 7 vertices except for the wheel $W_7$ are PCGs of a reduced centipede. \end{lemma} \begin{lemma} \label{lemma:wheel} The graph $W_7$ is a PCG. \end{lemma} \begin{proof} Consider the edge-weighted tree $T$ depicted in Figure \ref{fig.wheel}.b and the two values $d_{min}=5$ and $d_{max}=7$. It is immediate to see that $W_7=PCG(T, w, d_{min}, d_{max})$. \qed \end{proof} \begin{figure}[!ht] \centering \includegraphics[width = 0.7\textwidth]{wheel2.eps} \caption{(a.) The wheel $W_7$ and (b.) the edge-weighted tree $T$ such that $W_7=PCG(T, w, 5, 7)$.} \label{fig.wheel} \end{figure} This result is in agreement with the negative result in \cite{CFS}, stating that it is not possible to find any edge-weight function $w$ and any two values $d_{min}$ and $d_{max}$ such that $W_7=PCG(\Pi_7, w, d_{min}, d_{max})$.
\medskip From Lemmas \ref{lemma:567} and \ref{lemma:wheel}, the main result of this note immediately follows: \begin{theorem} All graphs with at most 7 vertices are PCGs. \end{theorem}
\section{Introduction \label{sec:introduction}} LHD inward-shifted configurations are unstable to resistive MHD pressure-gradient-driven modes \cite{1,2}, because the magnetic hill is located near the magnetic axis and the modes are not stabilized by the magnetic well or the magnetic shear \cite{3}. Previous linear MHD stability studies pointed out that pressure-gradient-driven low $n$ modes are unstable \cite{4,5} and limit the LHD operational efficiency by slightly increasing the energy transport out of the system \cite{6}. A stabilizing mechanism that avoids the excitation of low $n$ interchange modes for $ \beta_{0} < 1 \% $ exists: the pressure profile is flattened around the rational surfaces \cite{7,8} where the mode growth saturates. The pressure evolves to a staircase-like profile and the modes undergo periodic excitations and relaxations. These studies were extended to higher $ \beta_{0}$ values and the stabilizing mechanism was confirmed there too \cite{9}, but it was also noted that the plasma can be disruptive if the interaction between modes with different helicities is strong \cite{10}. In LHD operations with inward-shifted configurations, in pellet-fuelled plasmas with peaked pressure profiles and intense NBI heating \cite{11} (with and without a large net toroidal current \cite{12}), periodic relaxation events similar to sawtooth phenomena are triggered when the pressure gradient increases \cite{13,14}. Several types of sawtooth-like activity were observed, but the most frequent is related to the modes $n/m = 1/3$ and $1/2$. During the $1/3$ sawtooth-like activity, there are sharp oscillations of the soft X-ray emissivity and the two-dimensional structure of the soft X-ray radiation shows three magnetic islands near the magnetic axis. The pressure profile is flattened and the mode saturates while the heat flux from the core to the edge is enhanced. In the $m = 2$ case, the size of the deformation is too small to be distinguished by the soft X-ray camera.
It is driven around $\rlap{-} \iota = 1/2$ at $0.2 < \rho < 0.5$. The $1/3$ sawtooth-like activity was simulated in previous studies \cite{15,16}. Two different events were observed: non-resonant $1/3$ sawtooth-like events (the mode $1/3$ is outside the plasma; there is no $1/3$ rational surface) and resonant $1/3$ sawtooth-like events (the mode $1/3$ is inside the plasma; there is a $1/3$ rational surface close to the magnetic axis). The effect of non-resonant $1/3$ sawtooth-like events on the plasma performance is small and the equilibria do not suffer a large distortion after their excitation, only a slight increase of the $1/2$ instability in the middle plasma, $0.4<\rho<0.6$, before the pressure profile is flattened and the mode saturates. Therefore this activity lies in the so-called soft MHD limit. During the resonant $1/3$ sawtooth-like events the plasma core region, $\rho < 0.25$, shows a collapse behaviour, or hard MHD limit relaxation, because the $1/3$ magnetic islands overlap with the magnetic islands of other dominant modes, such as the $3/8$, $3/7$ and $2/5$, and a stochastic region appears between the magnetic axis and $\rho = 0.4$. The equilibria show large changes in the inner plasma region, $\rho < 0.4 $, leading to a loss of the device's efficiency in confining the plasma. The soft MHD activity in advanced LHD operation scenarios is not considered very restrictive for the device performance \cite{17,18}, but if the hard limit is exceeded, strong MHD activity could severely limit the device operation; thus it is important to predict the effect of the MHD activity at the hard MHD limit. The hard MHD limit of the $1/2$ sawtooth-like activity is studied here; such events are designated internal disruptions.
In previous studies in other stellarator devices, such as Heliotron-E \cite{19} and CHS \cite{20}, internal disruptions were observed before or after the sawtooth-like activity, and it was stated that the large changes in the plasma equilibria after these events can be a collapse behaviour or hard MHD limit. In the present research, we simulate an internal disruption, or a $1/2$ hard MHD limit event. The effects of internal disruptions on the LHD device performance are qualitatively clarified. Avoiding the adverse effects of internal disruptions is an important task for the future advanced operation scenarios in LHD, because such scenarios can meet the conditions that drive these events, which limit the device's efficiency in confining the plasma. The simulation is made using the FAR3D code \cite{21, 22, 23}. This code solves the reduced non-linear resistive MHD equations to follow the system evolution under the effect of a perturbation of the equilibrium. The equilibria were calculated with the VMEC code \cite{24} using the electron density and temperature profiles reconstructed from Thomson scattering and electron cyclotron emission data after the last pellet injection, for an LHD configuration without net toroidal current before a sawtooth-like activity \cite{13}. This paper is organized as follows. The model equations, numerical scheme and equilibrium properties are described in section \ref{sec:model}. The simulation results are presented in section \ref{sec:simulation}. Finally, the conclusions of this paper are presented in section \ref{sec:conclusions}. \section{Equations and numerical scheme \label{sec:model}} For high-aspect-ratio configurations with moderate $\beta$-values (of the order of the inverse aspect ratio), we can apply the method employed in Ref.~\cite{25} for the derivation of the reduced set of equations without averaging in the toroidal angle. In this way, we get a reduced set of equations using the exact three-dimensional equilibrium.
In this formulation, we can add linear helical couplings between mode components, which were not included in the formulation developed in Ref.~\cite{25}. The main assumptions for the derivation of the set of reduced equations are high aspect ratio, medium $\beta$ (of the order of the inverse aspect ratio $\varepsilon=a/R_0$), small variation of the fields, and small resistivity. With these assumptions, we can write the velocity and perturbation of the magnetic field as \begin{equation} \mathbf{v} = \sqrt{g} R_0 \nabla \zeta \times \nabla \Phi, \quad\quad\quad \mathbf{B} = R_0 \nabla \zeta \times \nabla \psi, \end{equation} where $\zeta$ is the toroidal angle, $\Phi$ is a stream function proportional to the electrostatic potential, and $\psi$ is the perturbation of the poloidal flux. The equations, in dimensionless form, are \begin{equation} \frac{{\partial \psi }}{{\partial t}} = \nabla _\parallel \Phi + \frac{\eta}{S} J^\zeta \end{equation} \begin{eqnarray} \frac{{\partial U}}{{\partial t}} = - {\mathbf{v}} \cdot \nabla U + \frac{{\beta _0 }}{{2\varepsilon ^2 }}\left( {\frac{1}{\rho }\frac{{\partial \sqrt g }}{{\partial \theta }}\frac{{\partial p}}{{\partial \rho }} - \frac{{\partial \sqrt g }}{{\partial \rho }}\frac{1}{\rho }\frac{{\partial p}}{{\partial \theta }}} \right) \nonumber\\ + \nabla _\parallel J^\zeta + \mu \nabla _ \bot ^2U \end{eqnarray} \begin{equation} \label{peq} \frac{{\partial p}}{{\partial t}} = - {\mathbf{v}} \cdot \nabla p + D \nabla _ \bot ^2p + Q \end{equation} Here, $U = \sqrt g \left[{ \nabla \times \left( {\rho _m \sqrt g {\bf{v}}} \right) }\right]^\zeta$, where $\rho_m$ is the mass density. All lengths are normalized to a generalized minor radius $a$; the resistivity to $\eta_0$ (its value at the magnetic axis); the time to the poloidal Alfv\' en time $\tau_{hp} = R_0 (\mu_0 \rho_m)^{1/2} / B_0$; the magnetic field to $B_0$ (the averaged value at the magnetic axis); and the pressure to its equilibrium value at the magnetic axis.
The Lundquist number $S$ is the ratio of the resistive time $\tau_R = a^2 \mu_0 / \eta_0$ to the poloidal Alfv\' en time. Each equation has a perpendicular dissipation term, with the characteristic coefficients $D$ (the collisional cross-field transport) and $\mu$ (the collisional viscosity for the perpendicular flow). A source term $Q$ is added to equation (\ref{peq}) to balance the energy losses. Equilibrium flux coordinates $(\rho, \theta, \zeta)$ are used. Here, $\rho$ is a generalized radial coordinate proportional to the square root of the toroidal flux function, and normalized to one at the edge. The flux coordinates used in the code are those described by Boozer \cite{26}, and $\sqrt g$ is the Jacobian of the coordinates transformation. All functions have equilibrium and perturbation components, e.g. $ A = A_{eq} + \tilde{A} $. The operator $ \nabla_{||} $ denotes derivation in the direction parallel to the magnetic field, and is defined as \begin{equation*} \nabla_{||} = \frac{\partial}{\partial\zeta} + \rlap{-} \iota\frac{\partial}{\partial\theta} - \frac{1}{\rho}\frac{\partial\tilde{\psi}}{\partial\theta}\frac{\partial}{\partial\rho} + \frac{\partial\tilde{\psi}}{\partial\rho}\frac{1}{\rho}\frac{\partial}{\partial\theta}, \end{equation*} where $\rlap{-} \iota$ is the rotational transform. The FAR3D code uses finite differences in the radial direction and Fourier expansions in the two angular variables. The numerical scheme is semi-implicit in the linear terms. The nonlinear version uses a method with two semi-steps to ensure $(\Delta t)^2$ accuracy. \subsection{Equilibrium properties} A free-boundary version of the VMEC equilibrium code \cite{24} was used to provide the input equilibrium. The equilibrium is calculated from experimental data measured before a sawtooth-like event \cite{13}. The electron density and temperature profiles were reconstructed from Thomson scattering data and electron cyclotron emission.
The plasma is a high-density plasma produced by sequentially injected hydrogen pellets and strongly heated by 3 NBIs after the last pellet injection. The vacuum magnetic axis is inward-shifted ($R_{{\rm{axis}}} = 3.6$ m), the magnetic field at the magnetic axis is $2.75$ T, the inverse aspect ratio $\varepsilon$ is $0.16$, and $\beta_0$ is $1.34 \%$. The equilibrium pressure profile and rotational transform are plotted in figure~\ref{FIG:1}. \begin{figure}[h] \centering \includegraphics{Eqiota} \caption{Pressure profile and rotational transform in the equilibrium.} \label{FIG:1} \end{figure} \subsection{Calculation parameters} The calculations have been done with a uniform radial grid of 500 points. Up to 515 Fourier components have been included in the calculations. The maximum dynamic mode $n$ value is 30, while the equilibrium components have $n=0$ and $0 \le m \le 5$. The Lundquist number is $S=10^5$, and the coefficients of the dissipative terms are $\mu=7.5 \times 10^{-6}$ and $D=1.25 \times 10^{-5}$. They are normalized to $a^2/\tau_{hp}$. The Lundquist number ($S$) in the simulation is $2$-$3$ orders of magnitude lower than the experimental value. For $S = 10^{5}$ the plasma resistivity in the simulation is larger than in the experiment. The $S$ value is small for computational reasons, and the consequence is that the events in the simulation will be stronger than the activity observed in the experiment, but the driver is the same, a resistive MHD mode \cite{27}. To reach a smooth saturation, the $\beta$ value is increased gradually in the simulation. The starting $\beta$ is half of the experimental value ($\beta_{0} = 1.48 \%$). In this work we study an internal disruption driven at $\beta_{0} = 1.184 \%$. The source term $Q$ added to equation (\ref{peq}) is a Gaussian centered near the magnetic axis. This energy input is dynamically fitted, in such a way that the value of the volume integral of the pressure is kept almost constant during the evolution.
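The normalization of $S$ described above follows directly from the two time scales defined in the text; a small sketch evaluating it (the sample inputs in the test are illustrative, not the experimental values):

```python
import math

MU0 = 4.0 * math.pi * 1e-7  # vacuum permeability [H/m]

def lundquist_number(a, r0, b0, rho_m, eta0):
    """S = tau_R / tau_hp, with tau_R = a^2 mu0 / eta0 the resistive time
    and tau_hp = R0 sqrt(mu0 rho_m) / B0 the poloidal Alfven time, as
    defined in the text. All inputs in SI units."""
    tau_r = a ** 2 * MU0 / eta0
    tau_hp = r0 * math.sqrt(MU0 * rho_m) / b0
    return tau_r / tau_hp
```

Since $S\propto 1/\eta_0$, running at a reduced $S$ amounts to raising the effective resistivity by the same factor, which is why the simulated events are expected to be stronger than the observed activity while keeping the same resistive driver.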
The internal disruption is driven when the source term is increased above the constant-volume-integral limit. \section{Simulation results \label{sec:simulation}} For each $\beta$-value, fluctuations nonlinearly evolve to a saturated state. The energy at saturation increases as the $\beta$-value rises, and there are strong oscillations in the steady state from $\beta_{0} \approx 1 \%$. There are some overshoots when $\beta$ is changed (transitions from one period to the next), but the evolution is smooth most of the time. In all the calculations, we assume that the resistive time $\tau_R$ is 1 second. The disruptive process begins at $t = 0.61$ s, when the energy source input increases above the system requirement to keep the energy balance. The evolution of the normalized full kinetic and magnetic energy is shown in graph~\ref{FIG:2}(a), and the system energy losses (proportional to the volume integral of the pressure) in graph~\ref{FIG:2}(b). The disruptive process has three main phases: the pre-disruptive, disruptive and post-disruptive phases. There are five main events, i.e. large peaks in the KE and ME profiles, especially during the disruptive phase. After the main events in the pre- and post-disruptive phases at $t = 0.6150$ and $t = 0.6695$ s, the system shows a transition and the plasma equilibrium properties change. \begin{figure}[h] \centering \includegraphics{Ener} \caption{Evolution of the normalized full kinetic (K.E.) and magnetic (M.E.) energy (a). Pre-disruptive phase between the blue and the orange lines, disruptive phase between the orange and brown line (with three internal disruptions: events I, II and III) and the post-disruptive phase between the brown and blue line. The system energy losses are shown in the graph (b).} \label{FIG:2} \end{figure} The profiles of the energy losses show the effect of soft and hard MHD events on the system performance.
If the graph tendency changes abruptly the system reaches a hard MHD limit and a collapse is driven, as can be observed at $t = 0.615$ , $0.623$, $0.638$ and $0.659$ s. If the slope slightly changes a soft MHD limit is reached and non resonant $1/3$ sawtooth or $1/2$ sawtooth events are driven, where the system suffers small energy losses and the plasma equilibria swiftly changes. The driver of each main event is studied following the dominant modes ME and KE evolution (graph~\ref{FIG:3}). In the pre-disruptive phase, from $ t = 0.6$ s, there is a fast increase of the $1/2$ mode ME. A main event is driven at $t = 0.615$ s (orange arrow) when the $1/2$ mode ME increases one order and the $2/3$ mode ME reaches a local maximum. The mode $1/2$ KE increases 2 orders while the mode $2/3$ KE reaches a local maximum before a fast 2 orders drop. The $1/3$ mode ME reaches a local maximum too. During the disruptive phase, from $t = 0.615$ to $0.67$ s, there are three main peaks in the mode $1/2$ energy at $t = 0.623$, $0.638$ and $0.6594$ s (red arrows), followed by energy peaks of the mode $2/3$ and $1/3$. At the post-disruptive phase from $t = 0.67$ to $0.68$ s, at $t = 0.67$ s (brown arrow) a main event is driven when the $1/2$ and $1/3$ ME decreases while $2/3$ increases with a $2/5$ local maximum. The energy evolution of the dominant modes after the disruptive process is similar to the evolution before the pre-disruptive phase. In summary, the most important modes are the $1/2$, $2/3$ and $1/3$, but the effect of other modes like the $2/5$, $3/7$ and $3/8$ are relevant when they reach a local energy maximum, because the associated magnetic islands can overlap with other dominant mode magnetic islands and link different stochastic plasma regions, increasing the system energy losses. \begin{figure}[h] \centering \includegraphics{DomEner} \caption{Magnetic (a) and kinetic (b) energy evolution of the dominant modes. 
Main events are denoted with arrows.} \label{FIG:3} \end{figure} Another tool to study the effect of the MHD instabilities on the device performance is a simulation model of the line-integrated intensity (with measurement chords similar to those of the soft X-ray camera)\cite{13}. A drop in the chord intensity indicates where the instability is driven and how strong the relaxation is. The line-integrated intensity is roughly proportional to the square of the pressure integrated along a measurement chord, expressed as $ I = \int p^2 \, dl $ where $ dl = \sqrt{dR^2 + dZ^2} $, with $R$ the major radius and $Z$ the height in real LHD coordinates. No plasma poloidal rotation is considered as a first approximation, because the poloidal rotation profile depends on the operational characteristics \cite{28,29}. If the poloidal rotation is added and the plasma rotates as a rigid body, the rotation effect increases or reduces the drops of the line-integrated intensity. The intensity is reconstructed at several minor-radius positions between the plasma core and the periphery, figure~\ref{FIG:4}. The chords are indexed from $1$ to $19$, from the outer torus periphery to the inner torus periphery. The chords $1 - 4$ and $16 - 19$ show the intensity in the plasma periphery, $5 - 7$ and $13 - 15$ in the middle plasma and $8 - 12$ in the inner plasma. The most important events in each disruption phase are labelled with the letters A to E. \begin{figure}[h] \centering \includegraphics{SXR} \caption{Profiles of the line-integrated intensity in the outer torus (a) and inner torus (b). The periods when the most important events are driven in each disruption phase are denoted with the letters A to E.} \label{FIG:4} \end{figure} The most important chord-intensity oscillations are observed in the inner torus, but the patterns are similar in both plasma regions. Each main event is studied separately. Event A: two instabilities are driven in the middle plasma at $t = 0.6120$ and $t= 0.6145$ s.
Both instabilities propagate from the middle plasma region, with a flattening around chord $13$, to the plasma core in $1$ ms (drop in chords $9$ - $11$) and to the plasma periphery in $2$ ms (peak in chords $14$ - $17$). During the second instability the drop in chord $15$ is large because the instability is driven closer to the periphery. Event B: the instability is also driven in the middle plasma region, between chords $12$ - $14$. The intensity in the plasma core, chords $9$ - $11$, shows a large drop from $t = 0.6230$ s, while the chord $16$ intensity slightly increases. The instability reaches the plasma core in as little as $0.5$ ms, but takes around $2$ ms to reach the plasma periphery, chord $18$. The large flattening in the middle-plasma chords and the sharp intensity drop in the inner plasma indicate that the instabilities in the middle plasma and the core are linked. Events C and D: similar patterns to event B. The instability weakens after each internal disruption. Event E: the instability is driven again in the middle plasma region; chords $13$ - $14$ are nearly flat. The intensity drops in the inner plasma $0.5$ ms later, chords $9$ - $12$, and the intensity increases in the periphery after $0.5$ ms, chord $15$, and after $2.5$ ms in chords $16$ - $17$. The instability is weaker than an internal disruption and the instabilities in the middle plasma and the core are not linked. A soft MHD event is driven if the dominant modes are destabilized after a pressure gradient limit is exceeded, but a hard MHD event is observed if the magnetic islands of the dominant modes overlap and a collapse event begins. A flattening in the pressure profile is connected with the presence of magnetic islands. The averaged pressure gradient between the magnetic axis and the outer core boundary is shown in figure~\ref{FIG:5}. A drop in the averaged pressure gradients suggests that there are pressure profile flattenings in these plasma regions.
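As a minimal numerical sketch of the chord diagnostic defined above, $I = \int p^2\, dl$, the following assumes an illustrative Gaussian pressure profile and straight vertical chords in the $(R, Z)$ plane; the profile shape, chord positions and widths are invented for illustration and are not the simulation's actual equilibrium:

```python
import numpy as np

def pressure(R, Z, R0=3.6, a=0.6):
    # Illustrative Gaussian pressure profile centred on R0 = 3.6 m
    # (the vacuum magnetic axis); NOT the simulation's equilibrium.
    return np.exp(-((R - R0) ** 2 + Z ** 2) / a ** 2)

def chord_intensity(R1, Z1, R2, Z2, n=2000):
    # I = int p^2 dl along a straight chord in the (R, Z) plane,
    # approximated with a simple Riemann sum.
    t = np.linspace(0.0, 1.0, n)
    R = R1 + t * (R2 - R1)
    Z = Z1 + t * (Z2 - Z1)
    dl = np.hypot(R2 - R1, Z2 - Z1) / (n - 1)
    return float(np.sum(pressure(R, Z) ** 2) * dl)

# A central vertical chord sees more plasma than an edge chord, so a
# core relaxation appears as a drop in the central-chord intensity
# while the peripheral chords can transiently peak.
I_core = chord_intensity(3.6, -1.0, 3.6, 1.0)
I_edge = chord_intensity(4.1, -1.0, 4.1, 1.0)
assert I_core > I_edge > 0.0
```

Because $I$ weights the pressure quadratically, a local flattening near a chord's tangency region produces the sharp drops discussed for events A to E.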
\begin{figure}[h] \centering \includegraphics{Pgrad} \caption{Averaged pressure gradient between the magnetic axis and the outer core boundary (red line) and the middle plasma (blue line). The most important events are denoted with the letters A to E.} \label{FIG:5} \end{figure} During the main events, the pressure gradient decreases between $0.3 < \rho < 0.6$ while the pressure drops in the plasma core. After the main relaxations, the pressure gradient in the middle and inner plasma increases with the pressure value on the magnetic axis; therefore the flattening of the pressure profile is reduced until an MHD stability limit is reached. If a soft MHD limit is exceeded, the pressure gradient profiles only show slight changes in their slope, but at a hard MHD limit the pressure gradient and the pressure value in the plasma core drop fast, because a collapse event is driven, as in the main events A to E. Before the disruptive phase, the pressure gradient in the plasma core is larger than the pressure gradient in the middle plasma, but during the disruptive and post-disruptive phases the pressure gradients in the plasma core are smaller than in the middle plasma. This result indicates that there is a pressure profile flattening in the inner plasma region throughout the disruptive process, which first appears during the pre-disruptive phase. The events B, C and D are driven when a pressure gradient limit is exceeded between the magnetic axis and the middle plasma. Before the onset of these events the pressure gradient is nearly constant, close to the hard MHD limit. The instability remains active between the middle and inner plasma regions, and the growth of the pressure gradients and of the pressure value in the plasma core is bounded by the hard MHD limit. The hard MHD limit for the equilibria after the last main event E, at the beginning of the post-disruptive phase, is less restrictive and there is no large instability in the middle plasma region.
The full disruptive process ends when the pressure gradient in the plasma core is higher than the pressure gradient in the middle plasma because, as we will discuss in the next sections, a large pressure gradient in the plasma core prevents the instabilities in the middle plasma from reaching the inner plasma region. The aim of the next section is to study the equilibrium properties during each disruptive phase and to predict the plasma properties when a hard MHD limit is exceeded and a collapse event is driven. The following diagnostics characterize the evolution. The instantaneous rotational transform profile gives information on the instantaneous position of the rational surfaces and of the resonant modes; its expression is \begin{equation} \label{iota} \rlap{-} \iota (\rho)+ \tilde{\rlap{-} \iota}(\rho) = \rlap{-} \iota+ \frac{1}{\rho}\frac{\partial\tilde{\psi}}{\partial\rho} \end{equation} The averaged pressure profiles show the pressure profile flattenings driven by unstable modes near the rational surfaces. Its expression is $\left\langle p \right\rangle = p_{\rm{eq}}(\rho) + \tilde{p}_{00} (\rho)$, where the angular brackets indicate an average over a flux surface and $\tilde{p}_{00}$ is the $(n=0, m=0)$ Fourier component of the pressure perturbation. The two-dimensional contour plots of the pressure profile are useful to see the plasma regions with large gradients and the shape of the flux surfaces. The pressure is written in terms of the Fourier expansion $p = p_{\rm{eq}}(\rho) + \sum_{n,m} \tilde{p}_{n,m}(\rho)\cos(m\theta + n\zeta)$. The Poincar\'e plots of the magnetic field structure visualize the topology of the (instantaneous) magnetic field and the plasma regions with magnetic islands. If the magnetic islands overlap, stochastic regions appear and the magnetic field lines cover a volume of the torus \cite{30}.
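The averaged-pressure diagnostic defined above can be illustrated with a minimal sketch: a single $(n,m)$ Fourier component flattens a poloidal cut locally, while the flux-surface average removes every $m \neq 0$ term and leaves $\left\langle p \right\rangle = p_{\rm eq} + \tilde{p}_{00}$. The equilibrium profile and the perturbation amplitude below are arbitrary illustrative choices, not simulation values:

```python
import numpy as np

def p_eq(rho):
    # Illustrative equilibrium pressure profile (arbitrary units).
    return (1.0 - rho ** 2) ** 2

def p_pert(rho, amp=0.05):
    # Amplitude of a single (n, m) = (1, 2) perturbation, localized
    # around the iota = 1/2 surface at rho ~ 0.5; purely illustrative.
    return amp * np.exp(-((rho - 0.5) / 0.1) ** 2)

def pressure(rho, theta, zeta, m=2, n=1):
    # Truncated Fourier expansion p = p_eq + p~ cos(m*theta + n*zeta).
    return p_eq(rho) + p_pert(rho) * np.cos(m * theta + n * zeta)

# Flux-surface average over theta: the cos(m*theta + n*zeta) factor
# averages to zero for m != 0, so <p> recovers the (0,0) component,
# while a cut at fixed theta still shows the local deformation.
theta = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
avg = float(pressure(0.5, theta, 0.0).mean())
assert abs(avg - p_eq(0.5)) < 1e-9
```

In the simulation it is the driven $\tilde{p}_{00}$ component, not the average of a single helical harmonic, that produces the flattenings seen in the averaged profiles; the sketch only shows why the $m \neq 0$ harmonics disappear from $\left\langle p \right\rangle$.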
There are two different plots: the first one with the dominant modes only, to see the size of the largest magnetic islands, and the second one with all the modes, to observe the stochastic regions. The stochastic regions are associated with different rational surfaces that lie very close to each other but do not always overlap. \subsection{Pre-disruptive phase} Two instabilities are driven between $t = 0.6139$-$0.6145$ s and $t = 0.6163$-$0.6169$ s. In these time periods there are two local maxima of the magnetic and kinetic energy of the dominant modes (see figure~\ref{FIG:3}). The deformations of the instantaneous rotational transform and the flattening of the averaged pressure profile during these instabilities are shown in figures~\ref{FIG:6} and \ref{FIG:7}. \begin{figure}[h] \centering \includegraphics{Iota1a} \caption{Averaged pressure profile (a) and instantaneous rotational transform (b) for the first instability driven during the pre-disruptive phase. The locations of the most important rational surfaces are indicated.} \label{FIG:6} \end{figure} \begin{figure}[h] \centering \includegraphics{Iota1b} \caption{Averaged pressure profile (a) and instantaneous rotational transform (b) for the second instability driven during the pre-disruptive phase. The locations of the most important rational surfaces are indicated.} \label{FIG:7} \end{figure} At $t = 0.6101$ s the main flattenings of the pressure profile are driven by the modes $1/2$ and $2/3$, around $\rho = 0.5$ and $0.67$. At $t = 0.6139$ s a new profile flattening appears around $\rho = 0.3$, driven by the modes $2/5$ and $3/7$, while the iota profile shows large deformations near the magnetic axis and falls below the value $\rlap{-} \iota = 1/3$ around $\rho = 0.05$. Before the onset of the second instability, at $t = 0.6161$ s, the flattening driven by the mode $2/5$ decreases while the flattening driven by the modes $3/7$ and $3/8$ remains, and the $1/2$ flattening increases.
The iota profile is distorted near the magnetic axis and again falls below the value $\rlap{-} \iota = 1/3$. The unstable modes disturb the shape of the flux surfaces, figures~\ref{FIG:8} and \ref{FIG:9}. In the middle plasma region the flux surfaces are more deformed at $t = 0.6131$ and $0.6157$ s than at $t = 0.6101$ s. If the instability is large enough, the flux surfaces are torn (red circles) and small amounts of plasma are expelled to the periphery, $t = 0.6137$ and $0.6165$ s. The pressure value in the plasma core drops and the pressure profile is flattened in the inner plasma region. A large pressure value in the core prevents the onset of strong instabilities in the inner plasma, mainly driven by the destabilizing effect of the $1/2$ mode. In the pre-disruptive phase the pressure in the core decreases and the instability in the middle plasma region distorts the inner-plasma flux surfaces. These results indicate that the MHD limit in the inner plasma is linked to a minimum pressure value in the core, while in the middle plasma the MHD limit is related to a maximum value of the pressure gradient. An explanation of this effect is the deepening of the magnetic well in the plasma core and the growth of the inner plasma region enclosed by the magnetic well. In LHD inward configurations the magnetic well is limited to the inner plasma, but in outward configurations the magnetic well can cover the entire minor radius. The key parameter is the location of the vacuum magnetic axis $R_{ax}$. In the simulation $R_{ax} = 3.6$ m and the magnetic well is limited to the inner plasma core. Along the LHD discharge the magnetic axis drifts outward as the beta value increases, so the plasma region inside the magnetic well grows.
In the simulation, the increase of the pressure on the magnetic axis and of the pressure gradient in the plasma core shows a similar effect: the plasma region inside the magnetic well grows because the beta value is higher and the instantaneous position of the magnetic axis has drifted outward. The evolution of the magnetic well along the simulation is a key factor to understand why a large pressure gradient in the plasma core stabilizes the modes in the inner plasma region. Previous studies showed that the interchange modes are unstable below a $\beta$ value in the LHD inward configuration around $\rho = 0.5$, and that configurations with broad pressure profiles are more unstable than configurations with peaked pressure profiles \cite{31,32}. \begin{figure}[h] \centering \includegraphics{Prp1a} \caption{Poloidal section of the pressure for the first event in the pre-disruptive phase.} \label{FIG:8} \end{figure} \begin{figure}[h] \centering \includegraphics{Prp1b} \caption{Poloidal section of the pressure for the second event in the pre-disruptive phase.} \label{FIG:9} \end{figure} In the simulation, the instability in the middle plasma region disturbs the inner plasma and an $m = 3$ instability appears near the plasma core, $t = 0.6143$ and $0.6165$ s, but the instabilities are not linked. The plasma relaxation is stronger if the correlation between both instabilities is large, because the instability in the middle plasma can then easily reach the inner plasma region and the destabilizing effect of the $1/2$ mode propagates throughout the whole plasma. This condition is not reached in the pre-disruptive phase. \begin{figure}[h] \centering \includegraphics{Stc1a} \caption{Magnetic islands of the dominant modes (a) and stochastic regions (b) in the pre-disruptive phase.
First instability.} \label{FIG:10} \end{figure} \begin{figure}[h] \centering \includegraphics{Stc1b} \caption{Magnetic islands of the dominant modes (a) and stochastic regions (b) in the pre-disruptive phase. Second instability.} \label{FIG:11} \end{figure} The sizes of the magnetic islands (top) and stochastic regions (bottom) formed by the dominant modes, figures~\ref{FIG:10} and \ref{FIG:11}, show the correlation between the instabilities in the middle and inner plasma regions. At $t = 0.6105$ s there is no overlap between the magnetic islands of the dominant modes and the confinement surfaces are well defined (in the bottom panels, the particles are confined in the closed magnetic surfaces in the inner plasma and in the regions with the same colour between the inner plasma and the periphery). The largest stochastic region is related to the $1/2$ magnetic islands in the middle plasma, and the island size increases with the pressure gradient. If the $1/2$ island size is large enough to overlap with the magnetic islands of other dominant modes, the hard MHD limit is exceeded and a collapse event begins. A large stochastic region strongly deforms and tears the flux surfaces in the middle plasma. The instability in the middle plasma induces strong deformations in the inner plasma, and $1/3$ magnetic islands appear in the plasma core at $t = 0.6143$ and $t = 0.6167$ s. The stochastic region in the middle plasma reaches the inner plasma but is not linked with the stochastic region of the plasma core, because the confinement surfaces are deformed but not torn. At $t = 0.6201$ s the size of the magnetic islands decreases, the stochastic region in the inner plasma disappears after a magnetic reconnection and the magnetic surfaces are recovered, but a large stochastic region remains in the middle plasma. After the second event the system enters the disruptive phase.
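The island-overlap condition invoked throughout this section can be quantified, in a standard way, by the Chirikov criterion: two neighbouring island chains overlap, and the region between their rational surfaces becomes stochastic, roughly when the sum of their half-widths exceeds their radial separation. A minimal sketch, with surface radii and island widths invented for illustration (not taken from the simulation):

```python
def chirikov(rho1, w1, rho2, w2):
    # Chirikov overlap parameter for two island chains centred at radii
    # rho1 and rho2 with full radial widths w1 and w2:
    #   s = (w1/2 + w2/2) / |rho2 - rho1|
    # s > 1 indicates overlap and the onset of stochasticity between
    # the two rational surfaces.
    return 0.5 * (w1 + w2) / abs(rho2 - rho1)

# Illustrative radii for the 2/5 and 1/2 rational surfaces, with island
# widths before and during an event (arbitrary numbers).
s_quiet = chirikov(0.35, 0.04, 0.50, 0.05)   # small islands: no overlap
s_event = chirikov(0.35, 0.14, 0.50, 0.18)   # grown islands: overlap
assert s_quiet < 1.0 < s_event
```

This is why the local energy maxima of the $2/5$, $3/7$ and $3/8$ modes matter: widths grow with the pressure gradient, and once neighbouring chains satisfy $s > 1$ the stochastic regions link and the hard MHD limit is exceeded.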
In the disruptive phase the $1/2$ instability in the middle plasma is stronger than in the pre-disruptive phase and its destabilizing effect on the inner plasma region is larger; thus the hard MHD limit is easily reached when the pressure gradient builds up, and a collapse event can be driven. \subsection{Disruptive phase} The pressure and iota profile evolutions during the internal disruptions I, II and III share similar patterns, figure~\ref{FIG:12}. The plots of events II and III are omitted. \begin{figure}[h] \centering \includegraphics{Iota2} \caption{Averaged pressure profile (a) and instantaneous rotational transform (b) for the internal disruption I.} \label{FIG:12} \end{figure} \begin{figure}[h] \centering \includegraphics{Prp2a} \caption{Poloidal section of the pressure for the internal disruption I. The plasma regions where the flux surfaces are torn are shown by the red circles.} \label{FIG:13} \end{figure} During the internal disruptions there is a strong deformation of the pressure profile in the middle plasma region driven by the mode $1/2$, with a profile inversion at $t = 0.6241$, $0.6387$ and $0.6599$ s. After the onset of the instability in the middle plasma, a new flattening appears in the inner plasma around the rational surfaces $2/5$, $3/7$ and $3/8$. The iota profile is deformed in the plasma core and falls below $\rlap{-} \iota = 1/3$ at $t = 0.6259$, $t = 0.6413$ and $0.6619$ s; three magnetic islands appear near the magnetic axis. The iota and pressure profile deformations are larger than in the pre-disruptive phase and the effects of the instability on the plasma equilibria are stronger, as can be seen in the flux surface shape during the internal disruptions, figure~\ref{FIG:13} and figure~\ref{FIG:14}. \begin{figure}[h] \centering \includegraphics{Prp2b} \caption{Poloidal section of the pressure for the internal disruptions II and III.
The plasma regions where the flux surfaces are torn are shown by the red circles.} \label{FIG:14} \end{figure} In event I the flux surfaces in the middle plasma are perturbed by the destabilizing effect of the $1/2$ mode, $t = 0.6225$ s, until the instability is strong enough to tear the flux surfaces (red circles) and an amount of plasma is expelled to the periphery at $t = 0.6233$ s. Then the instability reaches the inner plasma, where the flux surfaces are strongly deformed. The flux surfaces break down in the plasma core, $t = 0.6259$ s, and three islands appear near the magnetic axis. After the internal disruption the flux surfaces are recovered in the inner plasma, $t = 0.6285$ s, but the $1/2$ instability remains active and the pressure profile flattening in the middle plasma persists. The hard MHD limit for the pressure gradient can easily be exceeded when the pressure gradient builds up again, because the $1/2$ instability will have a large destabilizing effect on the inner plasma region, and another collapse event is driven. The internal disruption I is the strongest relaxation: it shows the largest flux surface tearing in the middle plasma and a magnetic surface breakdown in the plasma core. \begin{figure}[h] \centering \includegraphics{Stc2a} \caption{Magnetic islands of the dominant modes and stochastic regions during the internal disruption I.} \label{FIG:15} \end{figure} \begin{figure}[h] \centering \includegraphics{Stc2b} \caption{Magnetic islands of the dominant modes and stochastic regions during the internal disruption II.} \label{FIG:16} \end{figure} The magnetic islands and stochastic regions in the plasma for the internal disruption I, figure~\ref{FIG:15}, and the internal disruptions II and III, figure~\ref{FIG:16} and figure~\ref{FIG:17}, show the evolution of the instability.
Before the onset of the internal disruptions, $t = 0.6201$, $t = 0.6320$ and $t = 0.6510$ s, the magnetic islands of the dominant modes are not large enough to overlap; therefore there are no linked stochastic regions and the confinement magnetic surfaces are well defined. The internal disruptions are driven when the size of the magnetic islands increases and the magnetic islands of the dominant modes overlap. Large stochastic regions appear in the middle plasma and tear the flux surfaces at $t = 0.6246$, $0.6395$ and $0.6599$ s. The stochastic region in the middle plasma during the internal disruption I is the largest, which is why the flux surface tearing is also the strongest. The instability reaches the plasma core and the $1/3$ islands appear near the magnetic axis, $t = 0.6259$, $t = 0.6413$ and $t = 0.6619$ s. The pressure gradient limit in the inner plasma is related to the size of the $1/3$ magnetic islands and their overlap with the magnetic islands of other dominant modes such as the $3/8$, $3/7$ and $2/5$. The internal disruption is driven if the island overlap is large enough to break down the confinement surfaces in the plasma core and a stochastic region links the middle plasma and the core. After the collapse event the shapes of the flux surfaces and confinement surfaces are recovered; the magnetic island sizes decrease, the stochastic regions shrink and a magnetic reconnection takes place. \begin{figure}[h] \centering \includegraphics{Stc2c} \caption{Magnetic islands of the dominant modes and stochastic regions during the internal disruption III.} \label{FIG:17} \end{figure} After the internal disruption I, the unstable modes in the inner plasma saturate and the reconnection takes place. In the middle plasma the mode $1/2$ is not fully saturated and the instability remains active, but the $1/2$ magnetic island size decreases and the $1/2$ destabilizing effect on the inner plasma is smaller.
As soon as the pressure gradient builds up, the $1/2$ mode quickly destabilizes the inner-plasma modes again, driving a large overlap between the magnetic islands; therefore the hard MHD limit of the pressure gradient is easily exceeded and the internal disruptions II and III are driven. In summary, the sustained stochasticity in the middle plasma region degrades the quality of the flux surfaces in the inner plasma region, driving the mode destabilization. If this effect is large enough, an internal disruption can be driven. \subsection{Post-disruptive phase} The disruptive phase ends at $t = 0.6695$ s, when a main event with different patterns from an internal disruption is driven. The deformation of the pressure and iota profiles is smaller than in the disruptive phase, figure~\ref{FIG:18}, but the main flattening of the pressure profile remains in the middle plasma region, driven by the mode $1/2$. There is a small profile inversion at $t = 0.6695$ s in the middle plasma region, as well as a flattening in the inner plasma around $\rho = 0.3$ driven by the modes $2/5$, $3/7$ and $3/8$. At $t = 0.6709$ s the profile deformation decreases in the middle plasma and increases in the inner plasma. At $t = 0.6751$ s the profile flattening decreases in the inner plasma but increases near the periphery through the effect of the mode $2/3$. The iota profile does not suffer any large distortion and never falls below $\rlap{-} \iota = 1/3$. The instability effect on the equilibria is weaker than in the other main events and the flux surface perturbation is small, figure~\ref{FIG:19}. The flux surfaces between $t = 0.6647$ and $0.6675$ s show only small deformations. The pressure gradient increases in the plasma core until $t = 0.6695$ s, figure~\ref{FIG:5}, when an instability appears in the middle plasma region and the flux surfaces begin to be deformed but are not torn.
The instability reaches the inner plasma region around $t = 0.6709$ s, but the perturbation in the middle plasma is not large enough to induce strong deformations in the inner plasma. The $1/3$ magnetic islands are not observed and the pressure does not show a large drop in the plasma core. At $t = 0.6751$ s the flux surfaces are recovered in the inner and middle plasma and the pressure value near the core keeps increasing. \begin{figure}[h] \centering \includegraphics{Iota3} \caption{Averaged pressure profile (a) and instantaneous rotational transform (b) for the post-disruptive phase.} \label{FIG:18} \end{figure} \begin{figure}[h] \centering \includegraphics{Prp3} \caption{Poloidal section of the pressure for the post-disruptive phase.} \label{FIG:19} \end{figure} The magnetic topology explains why the flux surface deformation is smaller during the post-disruptive main event, figure~\ref{FIG:20}. The magnetic island sizes of the dominant modes at $t = 0.6670$ s are small and there is no overlap between them. At $t = 0.6709$ s the island overlap increases in the inner plasma, but the stochastic region does not reach the plasma core and the $1/3$ island is not observed near the magnetic axis. At $t = 0.6750$ s the dominant-mode islands do not overlap across the plasma, the stochastic regions are small and the confinement surfaces are well defined even in the middle plasma region. These results indicate that a pressure limit is exceeded in the middle plasma region but the instability is weak and does not reach the plasma core, because the magnetic island sizes of the dominant modes are not large enough to overlap and create long stochastic regions. In summary, this main event shows a transition from a hard MHD regime to another in which only soft MHD events are driven.
In the post-disruptive phase the equilibrium properties change: the pressure gradient limit in the middle plasma is less restrictive and the pressure in the plasma core is large enough to avoid the onset of hard MHD relaxations, figure~\ref{FIG:3} and figure~\ref{FIG:5}. \begin{figure}[h] \centering \includegraphics{Stc3} \caption{Magnetic islands of the dominant modes and stochastic regions during the post-disruptive phase.} \label{FIG:20} \end{figure} \section{Conclusions and discussion \label{sec:conclusions}} The aim of the present research is to simulate an internal disruption event as a hard MHD limit of the $1/2$ sawtooth-like activity. Using this example, the concept of a hard MHD limit is studied and defined as a pressure gradient limit beyond which the LHD plasma can suffer a collapse driven by the overlapping of the magnetic islands of the dominant modes. The soft and hard MHD limits in the inward LHD configurations decrease the device performance in advanced operation scenarios. The present research indicates that a soft MHD relaxation, a $1/2$ sawtooth-like event, can evolve into an internal disruption at the hard MHD limit. If an internal disruption is driven, the LHD performance is reduced dramatically. The disruptive process can be divided into three main stages: the pre-disruptive, the disruptive and the post-disruptive phases. In the pre-disruptive phase the pressure gradient exceeds the hard MHD limit, while the pressure gradient in the inner plasma quickly falls below its value in the middle plasma. The destabilizing effect of the mode $1/2$ drives an instability and the pressure profile is flattened in the middle plasma. The instability grows and the flux surface deformation increases in the middle plasma. The magnetic islands of the dominant modes $1/2$, $2/5$, $3/8$ and $3/7$ overlap and a stochastic region appears between the inner and middle plasma regions.
The flux surfaces in the middle plasma suffer small tearing effects and amounts of plasma are expelled to the periphery. The instability in the middle plasma region reaches the inner plasma and the iota profile is deformed close to the magnetic axis, dropping below the value $\rlap{-} \iota = 1/3$. The mode $1/3$ is located inside the core and three magnetic islands appear near the magnetic axis, but the stochastic region in the middle plasma is not linked with the three islands in the plasma core. The pre-disruptive phase ends after the first main relaxation, which changes the MHD stability properties of the equilibria. At the beginning of the disruptive phase a magnetic reconnection takes place in the inner plasma and the plasma periphery, where the flux surface shapes and the magnetic surfaces are recovered, but the instability remains in the middle plasma as two large $1/2$ magnetic islands. During the disruptive phase three internal disruptions are driven when the pressure gradient exceeds the new hard MHD limit, lower than the limit in the pre-disruptive phase. The internal disruption shares several characteristics with the main event in the pre-disruptive phase, but now the instability is stronger and the stochastic regions in the middle plasma and the plasma core are linked. The flux surface deformation in the middle plasma drives large tearing processes and the amount of plasma expelled increases. The disruptive phase ends when the equilibrium stability properties change after the onset of another main event with different patterns from an internal disruption. The instability in the middle plasma region is weaker, the flux surface deformation decreases and the tearing process is not observed. The magnetic island sizes of the dominant modes are not large enough to create wide stochastic regions.
The instability reaches the inner plasma region but it is too weak to drive a large iota profile deformation, and the mode $1/3$ remains outside the plasma, so the magnetic surfaces and the flux surface shapes are recovered sooner. The MHD stability limit in the post-disruptive phase is less restrictive and the pressure gradient in the middle plasma increases; it does not reach a hard MHD limit and the pressure in the plasma core is not bounded. The post-disruptive phase ends when the averaged pressure gradient in the inner plasma is higher than its value in the middle plasma region. At that point the pressure profile flattenings in the inner and middle plasma regions have disappeared. The pressure in the plasma core is large enough to prevent an instability in the middle plasma region from driving a strong deformation of the flux surfaces in the inner plasma. The most unstable mode is the $2/3$, near the plasma periphery where the pressure profile is flattened. The $2/3$ mode M.E. and K.E. increase and exceed the $1/2$ energy, which drops by one order of magnitude. The simulation Lundquist number is $2$ - $3$ orders of magnitude lower than the real value; therefore the instabilities are larger than in the experiment. The pressure gradient limit for MHD stability is more restrictive in the simulation than in the experiment, which is why internal disruptions are not observed in LHD operation, only the soft limit defined by the $1/2$ sawtooth-like activity; nevertheless, the flux surface tearing can explain the drop in LHD efficiency during this activity. In advanced LHD operation scenarios, if a large instability is driven in the middle plasma region and the equilibrium shows a severe flattening of the pressure profile in the inner plasma region with a strong deformation of the iota profile near the magnetic axis, the mode $1/3$ can enter the plasma core. If the instability is large enough to link the stochastic regions in the middle plasma and the plasma core, an internal disruption can be driven.
The internal disruptions can be avoided if the Lundquist number is high and the resistive pressure-driven modes are stable or marginally unstable. Another option is to prevent the instability in the middle plasma region from reaching the plasma core, which happens when the pressure profiles in the middle plasma and the plasma core are not linked, the iota profile does not suffer a large deformation near the magnetic axis and the $1/3$ magnetic islands are not driven, or when the pressure in the plasma core is large enough to preserve the flux surface shape in the inner plasma region. Under these conditions the LHD operation stays at the soft MHD limit and only $1/2$ sawtooth-like activity is driven. \begin{acknowledgments} The authors are very grateful to L. Garcia for letting us use the FAR3D code and for his collaboration in developing the diagnostics used in the present manuscript. \end{acknowledgments}
\section{Introduction} \par The success of the standard model[1] of quarks and leptons is impressive. Perhaps the top quark, whose mass was constrained to be in a narrow range by experiments at LEP, has been discovered at Fermilab[2]. The weak coupling of the top quark is predicted to be almost purely left-handed on the basis of the $\rho$-parameter analysis[3] and also by an analysis of the $b\rightarrow s\gamma$ decay[4]. At the same time, the abrupt end of the proliferation of quarks and leptons (in particular, light neutrinos) at the 3rd generation is rather mysterious. One of the reasons why generations with heavier masses are prohibited may be the dynamical stability of Weinberg-Salam theory with its chiral (left-handed) weak couplings. Some time ago, we performed an analysis of this problem of heavy fermions[5], although at that time the top quark was widely believed to be not as heavy as the present-day experiments indicate. We here resume this analysis. In the standard model, all the masses of gauge bosons and fermions are generated by the Higgs mechanism. For this reason, the successful physics of 3 generations of light fermions below 1 TeV may be called the ``Higgs world''. In the following, we discard the dynamics of the Higgs mechanism for a moment, and we regard the known leptons and quarks together with the W and Z bosons as all massless. By this extreme simplification of the picture, we can think better about the global aspects of the fermion spectrum. We then examine whether heavier quarks and leptons, with masses in the TeV region for example, can be accommodated in the standard model. We first recapitulate the essence of the arguments in Ref.[5].
The first observation is a consequence of the coupling of the W boson to the fermion doublet $\psi_{k}$, $k=1,2$, generically defined by \begin{equation} {\cal L}=(1/2)g\bar{\psi}_{k}(T^{a})_{kl}\gamma^{\mu}(a+b\gamma_{5})\psi_{l}W_{\mu}^{a} \end{equation} The longitudinal coupling of $W_{\mu}^{a}$, which is related to the Higgs mechanism, is studied by replacing $W_{\mu}^{a}\rightarrow (2/gv)\partial_{\mu}S^{a}(x)$ in (1), with $S^{a}(x)$ the unphysical Higgs scalar. We then obtain \begin{equation} {\cal L}=a(\frac{m_{k}-m_{l}}v)\overline{\psi}_{k}(T^{a})_{kl}\psi_{l}S^{a} +b(\frac{m_{k}+m_{l}}v)\overline{\psi}_{k}(T^{a})_{kl}\gamma_{5}\psi_{l}S^{a} \end{equation} by using the equations of motion. The typical mass scale of the Higgs world is \begin{equation} v=247\ {\rm GeV} \end{equation} and thus \begin{equation} b(m_{k}+m_{l})\gg v \ \ {\rm or}\ \ a|m_{k}-m_{l}|\gg v \end{equation} induces a strongly interacting sector into the standard model. We here regard the situation in (4) as unnatural. In other words, the standard model with chiral couplings does not accommodate fermions with masses much larger than the Higgs scale (3). Another ingredient of the arguments in Ref.[5] is provided by the one-loop effective potential[6] \begin{equation} V(\varphi)=-\frac{1}{2}M^{2}\varphi^{2} + f\varphi^{4} +(64\pi^{2})^{-1}{\rm Tr}\{3\mu_{\varphi}^{4}\ln\mu_{\varphi}^{2} + M_{\varphi}^{4}\ln M_{\varphi}^{2} - 4m_{\varphi}^{4}\ln m_{\varphi}^{2}\} \end{equation} where $\mu_{\varphi}$, $M_{\varphi}$ and $m_{\varphi}$ are, respectively, the zeroth-order vector, scalar and spinor mass matrices for a scalar-field vacuum expectation value $\varphi$.
When one has fermions with mass much larger than (3), the stability of the potential (5) suggests that \begin{equation} m_{H} \geq \sqrt{2}m_{F} \gg M_{W} \end{equation} Namely, the Higgs mass is forced to become heavy by the appearance of heavy fermions, and the heavy (physical) Higgs in turn induces a strongly interacting sector into Weinberg-Salam theory[7]. We again regard this situation as unnatural. It is possible to make the analysis in (6) more precise, but the semi-quantitative analysis (6) is sufficient for our present purpose. The heavier fermions in the standard scheme, if they should exist without spoiling dynamical stability in a perturbative sense, should thus have almost purely vector-like couplings (i.e., $b \simeq 0$ in (4)), and the masses of the fermion doublet should be almost degenerate (i.e., $a|m_{k}-m_{l}| \leq v$ in (4)). This latter constraint is also desirable to ensure that the $\rho$-parameter does not receive unacceptably large radiative corrections[2]. If heavier quarks and leptons satisfy the above conditions, they have no sizable couplings to the Higgs sector. In other words, their masses primarily come from dynamics different from the Higgs mechanism of the standard model, and consequently the stability argument on the basis of the effective potential (5) becomes irrelevant for these heavier fermions. Also, the breaking mechanism of SU(2) (local as well as custodial) in the standard model is concluded to be a phenomenon typical of the energy scale of $v$. \section{A Vector-like Extension of the Standard Model} \par The present note is mainly motivated by an analysis in Ref.[4] and by the appearance of a stimulating scheme which satisfies the main features of the constraints discussed above. This scheme was introduced under the name of generalized Pauli-Villars regularization[8].
We regard the fermion sector of the generalized Pauli-Villars regularization as a model of a realistic fermion spectrum; the unphysical bosonic Dirac fields appearing in the regularization are of course excluded. A crucial property of the Pauli-Villars regularization is that heavier fermions, when their masses become large, decouple from the world of light chiral fermions. In the considerations so far\footnote{ A somewhat related scheme was also considered by K. Inoue from different considerations[9].}, the masses of fermions other than the conventional leptons and quarks are taken to be of the order of the grand unification mass scale or the Planck mass. In this respect, our physical motivation is completely different. To be specific, we consider an $SU(2){\times}U(1)$ gauge theory written in an abbreviated notation \begin{equation} {\cal L}_{L}=\overline{\psi}i\gamma^{\mu}D_{\mu}\psi - \overline{\psi}_{R}M\psi_{L} - \overline{\psi}_{L}M^{\dagger}\psi_{R} \end{equation} with \begin{equation} \not{\!\! D}=\gamma^{\mu}(\partial_{\mu} - igT^{a}W_{\mu}^{a} - i(1/2)g^{\prime}Y_{L}B_{\mu}) \end{equation} and $Y_{L}=1/3$ for quarks and $Y_{L}=-1$ for leptons. The field $\psi$ in (7) is a column vector consisting of an infinite number of $SU(2)$ doublets, and the infinite-dimensional {\it non-hermitian} mass matrix $M$ satisfies the index condition \begin{equation} \dim\ker(M^{\dagger}M) = 3,\ \ \dim\ker(M M^{\dagger})=0 \end{equation} In the explicit ``diagonalized'' expression of $M$, \begin{eqnarray} M&=&\left(\begin{array}{ccccccc} 0&0&0&m_{1}&0 &0 &..\\ 0&0&0&0 &m_{2}&0 &..\\ 0&0&0&0 &0 &m_{3}&..\\ .&.&.&. &. &. &.. \end{array}\right)\nonumber\\ M^{\dagger}M&=&\left(\begin{array}{cccccc} 0&&&&& \\ &0&&&0& \\ &&0&&& \\ &&&m_{1}^{2}&& \\ &0&&&m_{2}^{2}& \\ &&&&&..
\end{array}\right)\nonumber\\ M M^{\dagger}&=&\left(\begin{array}{cccccc} m_{1}^{2}&&&&& \\ &m_{2}^{2}&&0&& \\ &&m_{3}^{2}&&& \\ &0&&..&& \\ &&&&..& \end{array}\right) \end{eqnarray} the fermion $\psi$ is written as \begin{equation} \psi_{L}=(1-\gamma_{5})/2\left( \begin{array}{c} \psi_{1}\\ \psi_{2}\\ \psi_{3}\\ \psi_{4}\\. \end{array} \right), \ \ \psi_{R}=(1+\gamma_{5})/2\left( \begin{array}{c} \psi_{4}\\ \psi_{5}\\ \psi_{6}\\.\\. \end{array} \right) \end{equation} We thus have 3 massless left-handed $SU(2)$ doublets $\psi_{1},\psi_{2}, \psi_{3}$, and an infinite series of vector-like massive $SU(2)$ doublets $\psi_{4}, \psi_{5},\dots$ with masses $m_{1},m_{2},\dots$, as is seen in the Lagrangian\footnote{ One may introduce \underline{constant} complete orthonormal sets $\{ u_{n} \}$ and $\{ v_{n}\}$ defined by \\ $M^{\dagger}Mu_{n} = 0$ for $n=-2, -1, 0$,\\ $M^{\dagger}Mu_{n} = m_{n}^{2}u_{n},\ M M^{\dagger}v_{n} = m_{n}^{2}v_{n}$ for $n = 1, 2, ...$\\ by assuming the index condition (9). One then has $Mu_{n} = m_{n}v_{n}$ for $m_{n}{\neq} 0$ by choosing the phase of $v_{n}$, and $Mu_{n} = 0$ for $m_{n}=0$. When one expands\\ $\psi_{L}= \sum_{n=-2}^{\infty} \psi_{n+3}^{L}u_{n}, \psi_{R}= \sum_{n=1}^{\infty} \psi_{n+3}^{R}v_{n}$\\ one recovers the mass matrix (10) and the relation (12).} \begin{eqnarray} {\cal L}_{L}&=&\bar{\psi}_{1}i\not{\!\! D}(\frac{1-\gamma_{5}}{2})\psi_{1} +\bar{\psi}_{2}i\not{\!\! D}(\frac{1-\gamma_{5}}{2})\psi_{2} \nonumber\\ & &+\bar{\psi}_{3}i\not{\!\! D}(\frac{1-\gamma_{5}}{2})\psi_{3} \nonumber\\ & &+\bar{\psi}_{4}(i\not{\!\! D} -m_{1})\psi_{4} +\bar{\psi}_{5}(i\not{\!\! D} -m_{2})\psi_{5} + ...
\end{eqnarray} An infinite number of right-handed fermions in a doublet notation are also introduced by (again in an abbreviated notation) \begin{equation} {\cal L}_{R}=\overline{\phi}i\gamma^{\mu}(\partial_{\mu}-i(1/2)g^{\prime} Y_{R}B_{\mu})\phi - \overline{\phi}_{L}M^{\prime}\phi_{R} -\overline{\phi}_{R}(M^{\prime})^{\dagger}\phi_{L} \end{equation} where \begin{equation} Y_{R}=\left(\begin{array}{cc} 4/3&0\\ 0&-2/3 \end{array}\right) \end{equation} for quarks and \begin{equation} Y_{R}=\left(\begin{array}{cc} 0&0\\ 0&-2 \end{array}\right) \end{equation} for leptons, and the mass matrix $M^{\prime}$ satisfies the index condition (9), but in general it may have mass eigenvalues different from those in (10). After the diagonalization of $M^{\prime}$, $\phi$ is written as \begin{equation} \phi_{L}=(1-\gamma_{5})/2\left( \begin{array}{c} \phi_{4}\\ \phi_{5}\\ \phi_{6}\\ .\\ . \end{array} \right), \ \ \phi_{R}=(1+\gamma_{5})/2\left( \begin{array}{c} \phi_{1}\\ \phi_{2}\\ \phi_{3}\\ \phi_{4}\\ . \end{array} \right) \end{equation} Here, $\phi_{1}$, $\phi_{2}$ and $\phi_{3}$ are right-handed and massless, while $\phi_{4}, \phi_{5},\dots$ have masses $m_{1}^{\prime}, m_{2}^{\prime},\dots$: \begin{eqnarray} {\cal L}_{R}&=&\bar{\phi}_{1}i\not{\!\! D}(\frac{1+\gamma_{5}}{2})\phi_{1} +\bar{\phi}_{2}i\not{\!\! D}(\frac{1+\gamma_{5}}{2})\phi_{2} \nonumber\\ & &+\bar{\phi}_{3}i\not{\!\! D}(\frac{1+\gamma_{5}}{2})\phi_{3} \nonumber\\ & &+\bar{\phi}_{4}(i\not{\!\! D} -m_{1}^{\prime})\phi_{4} +\bar{\phi}_{5}(i\not{\!\! D} -m_{2}^{\prime})\phi_{5} + ... \end{eqnarray} with \begin{equation} \not{\!\! D}= \gamma^{\mu}(\partial_{\mu}-i(1/2)g^{\prime} Y_{R}B_{\mu}) \end{equation} The present model is vector-like and manifestly anomaly-free before the breakdown of parity (9); after the breakdown of parity, the model still stays anomaly-free provided that both $M$ and $M^{\prime}$ satisfy the index condition (9).
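The index condition (9) can be illustrated with a finite truncation of the mass matrix (10). The following sketch (a hypothetical $k\times(k+3)$ truncation, not part of the original infinite-dimensional scheme) checks that $\dim\ker(M^{\dagger}M)=3$ and $\dim\ker(MM^{\dagger})=0$, i.e., that exactly three massless left-handed doublets survive.

```python
# Finite truncation of the index condition (9): a k x (k+3) mass matrix M
# shaped as in eq. (10), with zeros in the first three columns and the heavy
# masses m_1, ..., m_k on the shifted diagonal. This truncation is an
# illustrative assumption, not part of the (infinite-dimensional) original.
def index_data(masses):
    """Return (dim ker M^dag M, dim ker M M^dag) for the truncated M."""
    # For this M, both products are diagonal, so the kernels can be read off:
    # M^dag M = diag(0, 0, 0, m_1^2, ..., m_k^2),  M M^dag = diag(m_1^2, ..., m_k^2).
    mdm = [0.0] * 3 + [m * m for m in masses]
    mmd = [m * m for m in masses]
    return (sum(1 for x in mdm if x == 0.0),
            sum(1 for x in mmd if x == 0.0))

print(index_data([1.0, 2.0, 3.0]))  # -> (3, 0): three massless chiral doublets
```

The three zero modes correspond to the massless left-handed generations in (11); the absence of zero modes of $MM^{\dagger}$ reflects the fact that every right-handed doublet is paired with a massive partner.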
In this scheme, the anomaly is caused by the left-right asymmetry, in particular in the sector of (infinitely) heavy fermions; in this sense, the parity breaking (9) may be termed ``hard breaking''. Unlike conventional vector-like models with a finite number of components[10], the present scheme avoids the appearance of a strongly interacting right-handed sector despite the presence of heavy fermions. A truncation of the present scheme to a finite number of heavy fermions (for example, to only one heavy doublet in $\psi$) is still consistent, although it is then no longer called vector-like. The massless fermion sector in the above scheme reproduces the same set of fermions as in the standard model. However, the heavier fermions have distinct features. For example, the heavier fermion doublets with the smallest masses are described by \begin{eqnarray} {\cal L}&=&\overline{\psi}_{4}i\gamma^{\mu}(\partial_{\mu}-igT^{a}W_{\mu}^{a} -i(1/2)g^{\prime}Y_{L}B_{\mu})\psi_{4}-m_{1}\overline{\psi}_{4} \psi_{4}\nonumber\\ & &+\overline{\phi}_{4}i\gamma^{\mu}(\partial_{\mu} -i(1/2)g^{\prime}Y_{R}B_{\mu})\phi_{4} -m_{1}^{\prime}\overline{\phi}_{4}\phi_{4} \end{eqnarray} The spectrum of fermions is thus {\it doubled} to become vector-like in the sector of heavy fermions and, at the same time, the masses of $\psi$ and $\phi$ become non-degenerate, i.e., $m_{1}{\neq}m_{1}^{\prime}$. As a result, the fermion number anomaly[11] is generated only by the first 3 generations of light fermions; the violation of baryon number is not enhanced by the presence of heavier fermions. The masses of the heavy doublet components in $\psi$ are degenerate in the present zeroth-order approximation. The masses of the heavy doublets in $\phi$ have nothing to do with custodial SU(2) in the zeroth-order approximation, but they are taken to be degenerate for simplicity.
In the present scheme we distinguish two classes of chiral symmetry breaking: one which is related to the breaking of gauge symmetry (the Higgs mechanism), and the other which is related to the mass of heavier fermions but not related to the breaking of gauge symmetry. The transition from one class of chiral symmetry breaking to the other, which is also accompanied by the transition from chiral to vector-like gauge couplings, is assumed to take place at a mass scale of the order of $v$ in (3). In any case, if the $SU(2){\times}U(1)$ gauge symmetry should be universally valid regardless of the magnitude of the fermion masses, just like electromagnetism and gravity, the coupling of heavier fermions is required to become vector-like: heavy gauge bosons can naturally couple to light fermions, but the other way around imposes a stringent constraint on the chirality of fermions. We are here interested in the possible onset of heavier fermions at the order of a few TeV, although these vector-like components are often assumed to acquire masses of the order of the grand unification scale (Georgi's survival hypothesis[12]). \section{Light Fermion Masses and the Higgs Mechanism} \par As for the mass generation of the first 3 generations of quarks and leptons and also the custodial $SU(2)$ breaking of heavier fermions, one may introduce a Yukawa interaction for quarks, for example, in an abbreviated notation \begin{eqnarray} {\cal L}_{Y}&=&\bar{\psi}_{L}G_{u}\varphi\phi_{R}^{(u)} + \bar{\psi}_{L}G_{d}\varphi^{c}\phi_{R}^{(d)}\nonumber\\ &+&\bar{\psi}_{R}G_{u}^{\prime}\varphi\phi_{L}^{(u)} + \bar{\psi}_{R}G_{d}^{\prime}\varphi^{c}\phi_{L}^{(d)} + h.c. \end{eqnarray} where $\varphi(x)$ is the conventional Higgs doublet (and $\varphi(x)^{c}$ is its conjugate), and $G_{u}, G_{d}, G_{u}^{\prime}$ and $G_{d}^{\prime}$ are infinite-dimensional coupling matrices acting on $\psi$ and $\phi$.
Corresponding to the presence of only one $W$-boson, we here assume the existence of only one Higgs doublet. The fields $\bar{\psi}_{L}$ and $\bar{\psi}_{R}$ in (20) stand for the doublets in (7), and $\phi_{R}^{(u)}$ (or $\phi_{L}^{(u)}$) and $\phi_{R}^{(d)}$ (or $\phi_{L}^{(d)}$), respectively, stand for the upper and lower components of the doublets $\phi$ in (13). If one retains only the first two terms and their conjugates in (20), and if only the massless components of $\psi_{L}$ and $\phi_{R}$ in (11) and (16) are considered, (20) reduces to the Higgs coupling of the standard model. We postulate that the coupling matrices $G$ (which generically include $G^{\prime}$ hereafter) are such that the interaction (20) is perturbatively well controllable, namely, the typical element of the coupling matrices $G$ is bounded by the gauge coupling $g$, \begin{equation} |G|{\leq}g \end{equation} In this way the masses of the first 3 generations of light fermions are generated from (20) below the mass scale in (3). For the heavier fermions, the interaction (20) introduces the breaking of custodial $SU(2)$ and also fermion mixing. After the conventional $SU(2)$ breaking, \begin{equation} {\langle}\varphi{\rangle}=v/\sqrt{2} \end{equation} one may diagonalize the mass matrix in (20) together with the mass matrices in (7) and (13). This introduces a generalization of the ordinary fermion mixing matrix [1]. If one assumes a generic situation, \begin{eqnarray} & & m_{i}, m_{j}^{\prime} {\gg} gv, \ \ \ |m_{i}-m_{j}^{\prime}| {\gg} gv\nonumber \\ & & |m_{i}-m_{j}| {\gg} gv, \ \ \ |m_{i}^{\prime}-m_{j}^{\prime}| {\gg} gv \end{eqnarray} for any combination of (renormalized) heavy fermion masses $m_{i}$ and $m_{j}^{\prime}$, the masses of heavier fermions are little modified by the Higgs coupling.
The state mixing between light fermions and heavier fermions (and also the mixing among heavier fermions) introduced by the interaction (20) is characterized by a dimensionless parameter \begin{equation} \varepsilon = (1/2)|G|v/m_{i}{\leq}(1/2)gv/m_{i} =M_{W}/m_{i} \end{equation} As a fiducial value of the onset of the heavy fermion mass, we here choose \begin{equation} m_{i} {\sim} {\rm a\ few\ TeV} \end{equation} and thus the dimensionless parameter $\varepsilon$ in (24) is bounded as \begin{equation} \varepsilon {\leq} m_{W}/m_{i} {\simeq} 1/50 \end{equation} In practical calculations, it is convenient to diagonalize the light fermion masses in addition to (10) and its analogue for $M^{\prime}$, but to leave the mixing of light and heavy fermions non-diagonalized, instead of diagonalizing all the masses. In this case the effects of heavy fermions on the processes of light fermions are estimated in the power expansion in $G$. The mass term after diagonalizing the light quarks in the up-quark sector, for example, is given by \begin{eqnarray} (\bar{\psi}_{L},\bar{\Psi}_{L},\bar{\Phi}_{L}){\cal M}\left( \begin{array}{c} \phi_{R}\\ \Psi_{R}\\ \Phi_{R} \end{array} \right) + h.c. \end{eqnarray} where the mass matrix ${\cal M}$ is defined by \begin{eqnarray} {\cal M}&=& \left(\begin{array}{ccccccccc} m_{u}&0 &0 & & & & &&\\ 0 &m_{c}&0 & &\Large{0}& &&\tilde{G}v/\sqrt{2}&\\ 0 &0 &m_{t}& & & & &&\\ & & &m_{1}&0 &0& &&\\ &\tilde{G}v/\sqrt{2}& &0 &m_{2}&0 & &Gv/\sqrt{2}&\\ & & &0 &0 &.. & & &\\ & & & & & &m_{1}^{\prime}&0 &0\\ &\Large{0} & & &Gv/\sqrt{2}& &0 &m_{2}^{\prime}&0\\ & & & & & &0 &0 &.. \end{array}\right)\nonumber \end{eqnarray} To avoid introducing further notational conventions, we here use the fields $\psi_{L}$ and $\phi_{R}$ in (27) for the first 3 light (i.e., massless in the zeroth-order approximation) fermion components of $\psi$ and $\phi$ in (11) and (16), respectively; $\Psi$ and $\Phi$ stand for the remaining heavy quark components of $\psi$ and $\phi$ in (11) and (16).
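The size of the mixing parameter in (24)-(26) is easy to tabulate. The sketch below assumes the reference value $M_{W}\simeq 80.4$ GeV (not quoted in the text) together with the fiducial ``few TeV'' scale of (25), and reproduces the bound $\varepsilon\lesssim 1/50$.

```python
# Hedged numeric sketch of eqs. (24)-(26): eps = (1/2)|G|v/m_i <= M_W/m_i.
# M_W is an assumed reference value; the m_i values illustrate the fiducial
# "few TeV" onset scale of eq. (25).
M_W = 80.4  # GeV

def mixing_parameter(m_i):
    """Upper bound on the light-heavy mixing parameter for heavy mass m_i (GeV)."""
    return M_W / m_i

for m_i in (2000.0, 4000.0, 8000.0):
    eps = mixing_parameter(m_i)
    print(f"m_i = {m_i:6.0f} GeV  ->  eps <= {eps:.4f}  (~ 1/{1.0 / eps:.0f})")
```

For $m_i = 4$ TeV this gives $\varepsilon\le 0.0201$, i.e., the $1/50$ quoted in (26).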
The coupling matrix $\tilde{G}$ differs from the original $G$ by the unitary transformation of the light quarks performed in the process of diagonalizing the light quark masses, but the order of magnitude of $\tilde{G}$ is still the same as that of $G$. The physical Higgs $H(x)$ coupling in the unitary gauge is given by the replacement \begin{equation} v \rightarrow v + H(x) \end{equation} in the above mass matrix (27); this is also true for the light quark masses $m_{u}, m_{c},$ and $m_{t}$, if one writes, for example, $m_{u} = (g/2)(m_{u}/m_{W})v$ and $v \rightarrow v + H(x)$. If one sets $G = 0$ and $\tilde{G} = 0$ in the mass matrix (27), the light and heavy quark sectors become completely disconnected, not only in the Higgs coupling but also in the gauge coupling, except for the renormalization effects due to heavy quark loop diagrams. This means that the \underline{direct} effects of heavy fermions on the processes involving only light quarks and leptons can be calculated as a power series in $\tilde{G}$ and $G$, provided that these effects of heavy fermions are small. We show that these effects are in fact of controllable magnitude if the condition $|G| \leq g$ in (21) is satisfied and the mass spectrum of heavy fermions starts at a few TeV. See eq.(26) for the dimensionless parameter $\varepsilon$. We would like to examine some of the physical implications of the present scheme\footnote{We here assume that the mass spectrum of heavier fermions is rather sparsely distributed. We thus estimate the effects of the lightest heavier fermions on physical processes involving ordinary fermions in the standard model.}. The influence of the SU(2) symmetry breaking, which induces the mass splitting of fermion doublets, is small on the heavier fermions, and the heavier fermions are relatively stable against weak decay despite their large masses.
Heavier fermions are expected to decay mainly into the Higgs particle and light fermions, with a natural decay width \begin{equation} \Gamma_{i}{\sim}|G|^{2}m_{i}. \end{equation} In this note, we take the Higgs mass at the ``natural'' value \begin{equation} m_{H}\sim v \end{equation} with $v$ in (3). The mass spectrum of light fermions is influenced by heavier fermions through the mixing in (20). There are basically two kinds of diagrams, shown in Fig.1, which contribute to the light quark masses. Fig.1-a gives rise to a mass correction \begin{equation} \sim (|G|v/m_{i})^{2}|G|v \simeq \varepsilon^{2}|G|v \end{equation} and Fig.1-b gives \begin{equation} \sim (|G|v/m_{i})^{2}m_{l} \simeq \varepsilon^{2}m_{l} \end{equation} with $m_{l}$ the light quark mass. The first contribution (31) is dominant for all the light fermions except for the top quark. Numerically, (31) gives \begin{equation} \varepsilon^{2}(|G|v) \leq \varepsilon^{2}m_{W} \leq 40\ {\rm MeV} \end{equation} if one uses $\varepsilon \leq 1/50$ in (26). The second contribution (32) is dominant for the top quark, but numerically \begin{equation} \sim \varepsilon^{2}m_{t} \leq 4\times 10^{-4}m_{t} \end{equation} and the correction is negligible. We find it encouraging that the most natural choice $|G| \sim g$ already gives a sensible result (33), although a certain fine-tuning is required to account for the actual masses of the electron and the up and down quarks. In this sense, the present model may be said to be natural. It may also be interesting to envision a kind of see-saw picture, namely, the generation of very light fermion (i.e., electron and up and down quark) masses primarily from a mixing with heavy fermions. The typical mass scale of light fermions may then be chosen at around the center of gravity of the light fermion mass spectrum, namely, at $\sim 10\ {\rm GeV}$ as is indicated in (40) below. If one adopts this picture, the light fermion masses appearing in (29) do not represent physical masses.
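The numerical estimates (33) and (34) can be rechecked directly. In the sketch below the particle masses are assumed reference values (the text itself quotes only the bounds), and $\varepsilon = 1/50$ is the bound of (26).

```python
# Hedged check of the light-fermion mass corrections (31)-(34).
M_W = 80.4e3      # MeV, assumed reference value for the W mass
M_T = 175.0e3     # MeV, illustrative top mass
EPS = 1.0 / 50.0  # the bound of eq. (26)

light_correction = EPS ** 2 * M_W  # eq. (33): dominant for all but the top
top_correction = EPS ** 2 * M_T    # eq. (34): dominant for the top quark

print(f"eps^2 m_W = {light_correction:.1f} MeV  (below the ~40 MeV bound)")
print(f"eps^2 m_t = {top_correction:.1f} MeV  = {EPS ** 2:.0e} x m_t")
```

With these inputs, $\varepsilon^{2}m_{W}\simeq 32$ MeV and $\varepsilon^{2}=4\times 10^{-4}$, consistent with (33) and (34).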
The physical mass spectrum itself is, however, an input in the present scheme. A characteristic feature of the present extension of the standard model is that the leptonic as well as the quark flavor is generally violated; this breaking is caused by the mixing of light and heavy fermions in (20). The diagonalization of the mass matrix does not diagonalize the Higgs coupling in general, unlike in the standard model, and the physical Higgs particle at the tree level also mediates flavor changing processes, although its contribution is not necessarily a dominant one; in the limit of large heavy fermion masses $m_{i} \rightarrow \infty$, this flavor changing coupling vanishes. We first estimate the Higgs and heavy fermion contributions to the processes which are GIM suppressed in the standard model. The example we analyze is the $K^{0}-\bar{K^{0}}$ mixing, whose dominant contribution is given by the Feynman diagram in Fig. 2-a. This diagram gives rise to an amplitude of the order of\footnote{ The flavor changing processes induced by the state mixing such as in Fig. 1 without a Higgs particle exchange should be discarded, since such processes do not take place if one diagonalizes the entire mass matrix exactly.} \begin{eqnarray} &\sim& (1/4\pi)(|G|^{4}/m_{i}^{2}) \bar{s_{L}}\gamma^{\mu}d_{L} \bar{s_{L}}\gamma_{\mu}d_{L} \nonumber\\ &\simeq& (m_{W}/m_{i})^{2}(|G|/g)^{4}\alpha G_{F} \bar{s_{L}}\gamma^{\mu}d_{L} \bar{s_{L}}\gamma_{\mu}d_{L} \end{eqnarray} The tree-level Higgs exchange in Fig.2-b gives an amplitude of the order of \begin{eqnarray} &\sim& \varepsilon^{4}(|G|/v)^{2} \bar{s_{R}}d_{L} \bar{s_{R}}d_{L} \nonumber\\ &\simeq& \varepsilon^{4}(|G|/g)^{2}\alpha G_{F} \bar{s_{R}}d_{L} \bar{s_{R}}d_{L} \end{eqnarray} which is smaller than the box diagram contribution (35) if one remembers (21) and (26). Here we used the Higgs mass in (30); $G_{F}$ is the Fermi constant and $\alpha$ is the fine-structure constant.
There are diagrams other than those in Fig.2, but their contributions are about the same as those of the diagrams in Fig.2. The leptonic flavor changing processes such as $K^{0}_{L} \rightarrow e\bar{\mu}$ are also induced by the mixing with heavy fermions, as was noted above. The amplitude for $K^{0}_{L} \rightarrow e\bar{\mu}$ is estimated on the basis of diagrams analogous to those in Fig. 2, and it is given by \begin{equation} \sim (m_{W}/m_{i})^{2}(|G|/g)^{4}\alpha G_{F} \bar{s_{L}}\gamma^{\mu}d_{L} \bar{e_{L}}\gamma_{\mu}\mu_{L} \end{equation} This amplitude for $K^{0}_{L} \rightarrow e\bar{\mu}$ is about the same as that for the $K^{0}-\bar{K^{0}}$ mixing in (35). The process $b \rightarrow s\gamma$ is known to give a stringent constraint on vector-like schemes in general[4]. In the present extension of the standard model, the main part of the extra contributions to $b \rightarrow s\gamma$ induced by the mixing with heavy fermions comes from the diagrams shown in Fig.3. The amplitude in Fig.3-a is estimated at \begin{eqnarray} &\sim& \varepsilon e|G|^{2}(1/m_{i})\bar{s}\sigma^{\mu\nu}F_{\mu\nu} (\frac{1+\gamma_{5}}{2})b\nonumber\\ &=& \varepsilon(m_{W}^{2}/m_{i}m_{b})e|G|^{2}(m_{b}/m_{W}^{2}) \bar{s}\sigma^{\mu\nu}F_{\mu\nu} (\frac{1+\gamma_{5}}{2})b\nonumber\\ &\simeq& \varepsilon (m_{W}^{2}/m_{i}m_{b})(|G|/g)^{2}eG_{F}m_{b} \bar{s}\sigma^{\mu\nu}F_{\mu\nu} (\frac{1+\gamma_{5}}{2})b \end{eqnarray} up to a numerical coefficient such as $1/16\pi^{2}$. This amplitude is of about the same order as the standard model prediction \begin{equation} \sim V_{tb}V_{ts}^\ast eG_{F}m_{b}\bar{s}\sigma^{\mu\nu}F_{\mu\nu} (\frac{1+\gamma_{5}}{2})b \end{equation} if one uses the upper bound $\varepsilon = 1/50$ in (26). The contribution of Fig.3-b is obtained if one replaces $\varepsilon$ by $m_{b}/m_{i}$ in (38).
We thus see that the flavor changing radiative decay induced by the Higgs particle and heavy fermions does not spoil the agreement of the standard model with the recent CLEO experiment. See Ref.[4] and references therein. From the above analyses, in particular the mass correction in (33), which should be smaller than $\sim 1\ {\rm MeV}$, and the $K^{0}-\bar{K^{0}}$ mixing in (35), which requires $(m_{W}/m_{i})^{2}(|G|/g)^{4} \leq 10^{-8}$, we learn that the Higgs coupling between a light fermion (except possibly for the top quark) and a heavier fermion in the present model is constrained to be \begin{eqnarray} (1/2)|G|v/m_{W} \leq 1/10 \end{eqnarray} or \begin{eqnarray} |G|/g \leq 1/10, \nonumber\\ \varepsilon \leq 1/500 \end{eqnarray} if one chooses the onset of the heavy fermion mass as in (25). This (41) is the only fine-tuning in the present scheme\footnote{The (technical) problem as to the treatment of the quadratic divergence of the Higgs mass still remains.}. (The Higgs coupling among heavy fermions can be as large as in (21).) One can of course retain the ``natural'' bound (21) if one chooses the onset of the heavier fermion mass spectrum at values larger than in (25); this choice is, however, less interesting from the viewpoint of possible physics in the near future. The decay rate of $K_{L}^{0}\rightarrow e\bar{\mu}$ is then estimated to be of the order \begin{equation} \Gamma(K_{L}^{0}\rightarrow e\bar{\mu})\leq 10^{-8}\times \Gamma(K_{L}^{0}\rightarrow \mu\bar{\mu}) \end{equation} where $\Gamma(K_{L}^{0}\rightarrow \mu\bar{\mu})$ is given by the standard model. It is confirmed that $CP$ violation does not appear in the zeroth-order approximation without the Higgs coupling (see eq.(12)) and that it arises solely in the Higgs sector (20); the pattern of CP violation becomes more involved than in the standard model[1], and the CP phase is no longer limited to the W couplings. (The complex mass matrix in (7), when one diagonalizes it, may generally induce a CP phase into (20).)
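As an order-of-magnitude cross-check of the fine-tuning (40)-(41): with the assumed values $m_{i}=4$ TeV (the fiducial scale (25)) and $|G|/g=1/10$, the suppression factor appearing in the $K^{0}-\bar{K^{0}}$ amplitude (35) indeed comes out at a few times $10^{-8}$.

```python
# Hedged order-of-magnitude check of the K0-K0bar suppression factor of
# eq. (35), (m_W/m_i)^2 (|G|/g)^4, for the tuned coupling (41).
# M_W and m_i are assumed reference values, not quoted in the text.
M_W = 80.4      # GeV
m_i = 4000.0    # GeV, fiducial heavy mass scale of eq. (25)
G_over_g = 0.1  # the fine-tuned ratio of eq. (41)

suppression = (M_W / m_i) ** 2 * G_over_g ** 4
print(f"(m_W/m_i)^2 (|G|/g)^4 = {suppression:.2e}")  # a few times 1e-8
```

This shows that the choice (41) saturates the $\sim 10^{-8}$ level required by the $K^{0}-\bar{K^{0}}$ constraint only at the order-of-magnitude level, consistent with the semi-quantitative spirit of the estimates.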
When one incorporates QCD, the strong CP problem still appears in connection with the Higgs coupling (20). As for the neutrinos, the first three neutrinos will remain massless if one assumes the absence of right-handed components (i.e., if $\phi$ in (13) is a singlet)\footnote{ To be precise, the radiative correction by one-loop W-boson exchange induces a mixing of a massless neutrino $\nu$ with a massive neutrino $L$ of the order of $ \sim \alpha((|G|v)^{2}/m_{i})\bar{\nu_{L}}L_{R}$. If one truncates the number of massive neutrinos at any finite number, however, the neutrino $\nu_{L}$ stays massless.}. We, however, expect the appearance of vector-like heavy neutrinos above the TeV region. Cosmological implications of those massive neutrinos may be interesting. \section{Conclusion} \par Motivated by the works in Refs.[4] and [8], we discussed the possible existence of heavier quarks and leptons above 1 TeV. Our understanding of gauge symmetry and gauge interactions is substantial, but our understanding of the matter sector is rather meager. It may then be sensible to take $SU(2)\times U(1)$ as a guiding principle and analyze the possible existence of heavier fermions. In this way we can largely avoid arbitrariness in the extension of the standard model. The vector-like extension of the standard model examined in this note is natural in the sense that the validity of perturbation theory (21), combined with a fine-tuning of the Higgs coupling associated with light fermions as in (40) and a sensible choice of the heavy fermion mass scale (25), leads to consistent results as a first-order approximation. A more precise analysis, which incorporates the effects of many heavy fermions, will not alter the main features of our semi-quantitative analysis if the spectrum of heavier fermions is distributed rather sparsely.
A moral drawn from the analysis in this note is that flavor-changing processes (and CP-violating processes) provide a sensitive probe of new physics beyond the standard model. The present model as it stands is, however, a completely phenomenological one: the appearance of many fermions with vector-like couplings might be natural from some kind of composite picture of fermions, or, if the fermions are elementary, their masses might arise from a topological origin as is suggested by (9) or from some kind of space-time compactification. But the fundamental issue of the breaking mechanism of chiral and parity symmetries in (9) is not explained (see, however, Ref.[9]), and a picture of the unification of interactions is missing. The breaking of the asymptotic freedom of QCD by heavy fermions becomes appreciable only at the mass scale of these heavy fermions, due to the decoupling phenomenon. At the next generation of accelerators, we will be able to see whether a drastic extension of the standard model such as a SUSY generalization is realized, or whether more conventional schemes such as the one analyzed here are realized; or we may simply find a desert, as is suggested by some unification schemes. Clearly physics above the 1 TeV region is fascinating, and it awaits more imagination and insight. I thank A. Yamada for critical comments.
\section*{-1. Introduction} The contents of this paper are as follows. In Section 0, we sketch one part of the historic background: classical inequalities on determinants and permanents of positive semi-definite matrices. In Section~\ref{new}, we prove pfaffian and hafnian versions of these inequalities, and we formulate Conjecture~\ref{conj}, another hafnian inequality. In Section~\ref{prod}, we apply the hafnian inequality of Theorem~\ref{spec} to our main goal: improving the lower bound of R\'ev\'esz and Sarantopoulos on the norm of a product of linear functionals on a real Euclidean space (this subject is sometimes called the `real linear polarization constant' problem, its history is sketched at the end of the paper). This is achieved in Theorem~\ref{polar}. We point out that Conjecture~\ref{conj} would be sufficient to completely settle the real linear polarization constant problem. \section*{0. Old inequalities on determinants and permanents}\label{old} Recall that the determinant and the permanent of an $n\times n$ matrix $A=(a_{i,j})$ are defined by $$\det A=\sum_{\pi\in\mathfrak S_n}(-1)^\pi\prod_{i=1}^na_{i,\pi(i)},\qquad\qquad\mathrm {per}\; A=\sum_{\pi\in\mathfrak S_n}\prod_{i=1}^na_{i,\pi(i)},$$ where $\mathfrak S_n$ is the symmetric group on $n$ elements. Throughout this section, we assume that $A$ is a positive semi-definite Hermitian $n\times n$ matrix (we write $A\ge 0$). For such $A$, Hadamard proved that $$\det A\le\prod_{i=1}^na_{i,i},$$ with equality if and only if $A$ has a zero row or is a diagonal matrix. Fischer generalized this to $$\det A\le\det A'\cdot\det A''$$ for \begin{equation}\label{bl}A=\left(\begin{matrix} A'&B\\B^*&A''\end{matrix}\right)\ge 0,\end{equation} with equality if and only if $\det A'\cdot\det A''\cdot B=0$. 
Concerning the permanent of a positive semi-definite matrix, Marcus [Mar1, Mar2] proved that \begin{equation}\label{M}\mathrm {per}\; A\ge \prod_{i=1}^na_{i,i},\end{equation} with equality if and only if $A$ has a zero row or is a diagonal matrix. Lieb [L] generalized this to \begin{equation}\label{L} \mathrm {per}\; A\ge\mathrm {per}\; A'\cdot \mathrm {per}\; A''\end{equation} for $A$ as in \eqref{bl}, with equality if and only if $A$ has a zero row or $B=0$. Moreover, he proved that in the polynomial $P(\lambda)$ of degree $n'$ (=size of $A'$) defined by $$P(\lambda)=\mathrm {per}\;\left(\begin{matrix} \lambda A'&B\\B^*&A''\end{matrix}\right)=\sum_{t=0}^{n'}c_t\lambda ^t,$$ all coefficients $c_t$ are real and non-negative. This is indeed a stronger theorem since it implies $$\mathrm {per}\; A=P(1)=\sum_{t=0}^{n'} c_t\ge c_{n'}=\mathrm {per}\; A'\cdot\mathrm {per}\; A''.$$ \DJ okovi\'c [D, Mi] gave a simple proof of Lieb's inequalities, and showed also that if $A'$ and $A''$ are positive definite then $c_{n'-t}=0$ if and only if all subpermanents of $B$ of order $t$ vanish. Lieb [L] also states an analogous (and analogously provable) theorem for determinants: for $A$ as in \eqref{bl}, let $$D(\lambda)=\det\left(\begin{matrix} \lambda A'&B\\B^*&A''\end{matrix}\right)=\sum_{t=0}^{n'}d_t\lambda ^t.$$ If $\det A'\cdot\det A''=0$, then $D(\lambda)=0$. If $A'$ and $A''$ are positive definite, then $(-1)^td_{n'-t}$ is positive for $t\le\mathrm {rk}\; B$ and is zero for $t>\mathrm {rk}\; B$. \bigskip \bf Remark. \rm In all of Lieb's inequalities mentioned above, the condition that the matrix $A$ is positive semi-definite can be replaced by the weaker condition that the diagonal blocks $A'$ and $A''$ are positive semi-definite. The proof goes through virtually unchanged. Alternatively, this stronger form of the inequalities can be easily deduced from the seemingly weaker form above. 
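The determinant and permanent inequalities above are easy to spot-check numerically. The brute-force expansions below (feasible only for small $n$) verify Hadamard's bound $\det A\le\prod_{i}a_{i,i}$ and Marcus's bound $\mathrm{per}\,A\ge\prod_{i}a_{i,i}$ on a random positive semi-definite $A=XX^{T}$; the random construction is illustrative and is not taken from the text.

```python
import itertools
import math
import random

def sign(p):
    """Sign of a permutation (given as a tuple), via its inversion count."""
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

def det(A):
    """Determinant by the Leibniz expansion over all permutations."""
    n = len(A)
    return sum(sign(p) * math.prod(A[i][p[i]] for i in range(n))
               for p in itertools.permutations(range(n)))

def per(A):
    """Permanent: the same expansion without the signs."""
    n = len(A)
    return sum(math.prod(A[i][p[i]] for i in range(n))
               for p in itertools.permutations(range(n)))

random.seed(0)
n = 4
X = [[random.gauss(0, 1) for _ in range(n)] for _ in range(n)]
# A = X X^T is positive semi-definite (almost surely positive definite).
A = [[sum(X[i][k] * X[j][k] for k in range(n)) for j in range(n)] for i in range(n)]
diag = math.prod(A[i][i] for i in range(n))
print(det(A) <= diag + 1e-9, per(A) >= diag - 1e-9)  # Hadamard and Marcus
```

Since $A$ is almost surely positive definite and non-diagonal, both inequalities are in fact strict here, matching the equality conditions of Hadamard and Marcus.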
\section{New inequalities on pfaffians and hafnians}\label{new} For an $n\times n$ matrix $A=(a_{i,j})$ and subsets $S$, $T$ of $N:=\{1,\dots, n\}$, we write $A_{S,T}:=(a_{i,j})_{i\in S, j\in T}.$ If $|T|=2t$ is even, we write $$(-1)^T:=(-1)^{t+\sum_{j\in T}j}.$$ \subsection{Pfaffians} As far as the applications in Section~\ref{prod} are concerned, this subsection may be skipped. Recall that the pfaffian of a $2n\times 2n$ antisymmetric matrix $C=(c_{i,j})$ is defined by $$\mathrm {pf}\; C=\frac 1{n!2^n}\sum_{\pi\in\mathfrak S_{2n}}(-1)^\pi c_{\pi(1),\pi(2)}\cdots c_{\pi(2n-1),\pi(2n)}.$$ We have $\left(\mathrm {pf}\; C\right)^2=\det C$. For antisymmetric $A$ and symmetric $B$, both of size $n\times n$, we consider the polynomial $$(-1)^{\lfloor n/2\rfloor} \mathrm {pf}\; \left(\begin{matrix} -\lambda A & B\\ -B &A \end{matrix}\right)=\sum_{t=0}^{\lfloor n/2\rfloor} p_t\lambda ^t.$$ \begin{theorem}\label{pf} Let $A$ and $B$ be real $n\times n$ matrices with $A$ antisymmetric and $B$ symmetric. If $B$ is positive semi-definite, then $p_t\ge 0$ for all $t$. If $B$ is positive definite, then $p_t>0 $ for $t\le (\mathrm {rk}\; A)/2$ and $p_t=0 $ for $t> (\mathrm {rk}\; A)/2$. 
\end{theorem} \begin{proof} If $B=(b_{i,j})$ is positive semi-definite, then there exist vectors $x_1$, \dots, $x_n$ in a real Euclidean space $V$ such that $(x_i,x_j)=b_{i,j}.$ Recall that in the exterior tensor algebra $\bigwedge V$ a positive definite inner product (and the corresponding Euclidean norm) is defined by $$\left(\bigwedge v_i,\;\bigwedge w_j\right):=\det ((v_i,w_j)).$$ We have \begin{align*} p_t=\sum_{|S|=2t}\sum_{|T|=2t}(-1)^S(-1)^T\mathrm {pf}\; A_{S,S}\cdot \mathrm {pf}\; A_{T,T}\cdot\det B_{N\setminus S,N\setminus T }= \\ =\sum_{|S|=2t}\sum_{|T|=2t}\left((-1)^S\mathrm {pf}\; A_{S,S}\cdot \bigwedge_{i\not\in S}x_i ,\; (-1)^T\mathrm {pf}\; A_{T,T}\cdot \bigwedge_{j\not\in T}x_j\right)=\\ =\left|\sum_{|S|=2t}(-1)^S\mathrm {pf}\; A_{S,S}\cdot\bigwedge_{i\not\in S}x_i\right|^2\ge 0.\end{align*} Assume that $B$ is positive definite. Then the vectors $x_i$ are linearly independent. It follows that the tensors $\bigwedge_{i\not\in S}x_i$ are also linearly independent as $S$ runs over the subsets of $N$. Thus $p_t=0$ if and only if $\mathrm {pf}\; A_{S,S}=0$ for all $|S|=2t$, i.e., if and only if $2t>\mathrm {rk}\; A$. \end{proof} \begin{theorem}\label{pfspec} Let $A$ and $B$ be real $n\times n$ matrices with $A$ antisymmetric and $B$ symmetric. Let $\lambda\ge 0$. If $B$ is positive semi-definite, then $$(-1)^{\lfloor n/2\rfloor} \mathrm {pf}\; \left(\begin{matrix} -\lambda A & B\\ -B &A \end{matrix}\right)\ge\det B.$$ If $B$ is positive definite, then equality occurs if and only if $\lambda A=0$. \end{theorem} \begin{proof} The left hand side is $$p_0+p_1\lambda+\dots+p_{\lfloor n/2\rfloor} \lambda^{\lfloor n/2\rfloor}.$$ The right hand side is $p_0$. \end{proof} I am grateful to the anonymous referee of this paper for the idea of the following alternative proof of Theorems~\ref{pf} and \ref{pfspec}. We may assume $B>0$, since every positive semi-definite matrix is a limit of positive definite ones. 
The matrix $B^{-1/2}AB^{-1/2}$ being real and antisymmetric, there exists a unitary matrix $U$ such that $D:=U^{-1}B^{-1/2}AB^{-1/2}U$ is diagonal with purely imaginary eigenvalues $a_1\sqrt{-1}$, \dots, $a_n\sqrt{-1}$. The real multiset $\{a_1,\dots, a_n\}$ is invariant under $a\leftrightarrow -a$. We have \begin{align*} \left(\sum p_t\lambda ^t\right)^2 =\det \left(\begin{matrix} -\lambda A & B\\ -B &A \end{matrix}\right) =\det \left(\begin{matrix} -\lambda \sqrt BUDU^{-1} \sqrt B & B\\ -B &\sqrt BUDU^{-1} \sqrt B \end{matrix}\right)=\\ =\det\left(\left(\begin{matrix} \sqrt BU & 0\\ 0 &\sqrt BU \end{matrix}\right)\left(\begin{matrix} -\lambda D & \bf 1 \\ -\bf 1 &D \end{matrix}\right)\left(\begin{matrix}U^{-1} \sqrt B & 0\\ 0 &U^{-1} \sqrt B \end{matrix}\right)\right)=\\ =\det\sqrt B^4\cdot\prod_{i=1}^n\det\left(\begin{matrix} -\lambda a_i\sqrt{-1} & 1\\ -1 &a_i\sqrt {-1} \end{matrix}\right)=\det B^2\cdot\prod_{i=1}^n(1+a_i^2\lambda).\end{align*} Extracting square roots, and choosing the sign in accordance with $p_0=+\det B$, we get $$\sum p_t\lambda ^t=(-1)^{\lfloor n/2\rfloor} \mathrm {pf}\; \left(\begin{matrix} -\lambda A & B\\ -B &A \end{matrix}\right)=\det B\cdot\prod_{a_i>0}(1+a_i^2\lambda),$$ whence both theorems immediately follow, since $\det B > 0$. \subsection{Hafnians} Recall that the hafnian of a $2n\times 2n$ symmetric matrix $C=(c_{i,j})$ is defined by $$\mathrm {haf}\; C=\frac 1{n!2^n}\sum_{\pi\in\mathfrak S_{2n}} c_{\pi(1),\pi(2)}\cdots c_{\pi(2n-1),\pi(2n)}.$$ For symmetric $A$ and $B$, both of size $n\times n$, we consider the polynomial $$\mathrm {haf}\; \left(\begin{matrix} \lambda A & B\\ B &A \end{matrix}\right)=\sum_{t=0}^{\lfloor n/2\rfloor} h_t\lambda ^t.$$ \begin{theorem}\label{haf} Let $A$ and $B$ be symmetric real $n\times n$ matrices. If $B$ is positive semi-definite, then $h_t\ge 0$ for all $t$. If $B$ is positive definite, then $h_t=0$ if and only if all $2t\times 2t$ subhafnians of $A$ vanish. 
\end{theorem} \begin{proof} If $B=(b_{i,j})$ is positive semi-definite, then there exist vectors $x_1$, \dots, $x_n$ in a real Euclidean space $V$ such that $(x_i,x_j)=b_{i,j}.$ Recall [Mar1, Mar2, MN, Mi] that in the symmetric tensor algebra $S V$ a positive definite inner product (and the corresponding Euclidean norm) is defined by $$\left(\prod v_i,\prod w_j\right):=\mathrm {per}\; ((v_i,w_j)).$$ We have \begin{align*}h_t=\sum_{|S|=2t}\sum_{|T|=2t}\mathrm {haf}\; A_{S,S}\cdot \mathrm {haf}\; A_{T,T}\cdot\mathrm {per}\; B_{N\setminus S,N\setminus T }= \\ =\left|\sum_{|S|=2t}\mathrm {haf}\; A_{S,S}\cdot\prod_{i\not\in S}x_i\right|^2\ge 0.\end{align*} Assume that $B$ is positive definite. Then the vectors $x_i$ are linearly independent. It follows that the tensors $\prod_{i\not\in S}x_i$ are also linearly independent as $S$ runs over the subsets of $N$. Thus $h_t=0$ if and only if $\mathrm {haf}\; A_{S,S}=0$ for all $|S|=2t$. \end{proof} \begin{theorem}\label{spec} Let $A$ and $B$ be symmetric real $n\times n$ matrices. Let $\lambda\ge 0$. If $B$ is positive semi-definite, then $$\mathrm {haf}\;\left(\begin{matrix} \lambda A&B\\ B&A\end{matrix}\right)\ge\mathrm {per}\; B.$$ If $B$ is positive definite, then equality occurs if and only if $A$ is a diagonal matrix or $\lambda=0$. \end{theorem} \begin{proof} The left hand side is $$h_0+h_1\lambda+\dots+h_{\lfloor n/2\rfloor} \lambda ^{\lfloor n/2\rfloor}.$$ The right hand side is $h_0$. \end{proof} Setting $A=B$ and $\lambda =1$, and combining with Marcus's inequality \eqref{M}, we arrive at case $p=1$ of \begin{conj}\label{conj} If $A=(a_{i,j})$ is a positive semi-definite symmetric real $n\times n$ matrix, then the hafnian of the $2pn\times 2pn$ matrix consisting of $2p\times 2p$ blocks $A$ is at least ${(2p-1)!!}^n\prod a_{i,i}^p,$ with equality if and only if $A$ has a zero row or is a diagonal matrix. 
\end{conj} \section{Products of real linear functionals}\label{prod} In this section, we apply Theorem~\ref{spec} to products of jointly normal random variables and then to products of real linear functionals, which was the main motivation for this work. The ideas in this section are analogous to those that Arias-de-Reyna [A] used in the complex case. Let $\xi_1$, \dots, $\xi_d$ denote independent random variables with standard Gaussian distribution, i.e., with joint density function $(2\pi)^{-d/2}\exp({-|\xi|^2/2})$, where $|\xi|^2=\sum\xi_k^2.$ We write $Ef(\xi)$ for the expectation of a function $f=f(\xi)=f(\xi_1,\dots, \xi_d)$. Recall that $$ E\xi_k^{2p}=(2p-1)!!=(2p-1)(2p-3)\cdots 3\cdot 1$$ for $k=1,\dots, d$ (easy inductive proof via integration by parts), and thus $$E\prod_{k=1}^d \xi_k^{2p_k}=\prod_{k=1}^d (2p_k-1)!!.$$ On $\mathbb R^d$, we write $(\cdot,\cdot)$ for the standard Euclidean inner product. We recall the well-known [B2, G, S, Z] \bigskip\noindent {\bf Wick formula.}\; \it Let $x_1$, \dots, $x_n$ be vectors in $\mathbb R^d$ with Gram matrix $A=((x_i,x_j)).$ Then \begin{equation}\label{Sigma} E\prod_{i=1}^n (x_i,\xi)=\mathrm {haf}\; A. \end{equation} \rm (For odd $n$, we define $\mathrm {haf}\; A=0$.) \begin{proof} Both sides are multilinear in the $x_i$, so we may assume that each $x_i$ is an element of the standard orthonormal basis $e_1$, \dots, $e_d$. If there is an $e_k$ that occurs an odd number of times among the $x_i$, then both sides are zero. If each $e_k$ occurs $2p_k$ times, then the left hand side is $E\prod_{k=1}^d \xi_k^{2p_k}$, and the right hand side is $\prod_{k=1}^d (2p_k-1)!!$, which are equal. \end{proof} The following theorems are easy corollaries of Theorem~\ref{spec} together with the Wick formula~\eqref{Sigma} and Marcus's theorem \eqref{M}. 
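Before stating them, Theorem~\ref{spec} itself can be checked numerically. The sketch below (illustrative only; the test matrices are our choices) computes hafnians by recursively pairing the first index. For $\lambda=0$ the block hafnian collapses exactly to $\mathrm{per}\,B$, the coefficient $h_0$, and for $\lambda>0$ the inequality of Theorem~\ref{spec} holds with a strict margin since $A$ is not diagonal.

```python
from itertools import permutations
from math import prod

def haf(C):
    """Hafnian of an even-sized symmetric matrix: pair index 0 with each j."""
    n = len(C)
    if n == 0:
        return 1.0
    total = 0.0
    for j in range(1, n):
        rest = [r for r in range(1, n) if r != j]
        total += C[0][j] * haf([[C[a][b] for b in rest] for a in rest])
    return total

def per(B):
    n = len(B)
    return sum(prod(B[i][p[i]] for i in range(n)) for p in permutations(range(n)))

def block(lam, A, B):
    """Assemble the 2n x 2n matrix ((lam*A, B), (B, A))."""
    n = len(A)
    top = [[lam * a for a in A[i]] + B[i] for i in range(n)]
    bot = [B[i] + A[i] for i in range(n)]
    return top + bot

X = [[1.0, 2.0, 0.0], [0.5, -1.0, 1.0], [2.0, 1.0, 1.0]]
B = [[sum(u * v for u, v in zip(x, y)) for y in X] for x in X]   # positive definite
A = [[0.0, 1.0, 2.0], [1.0, 3.0, -1.0], [2.0, -1.0, 1.0]]       # any symmetric matrix

assert abs(haf(block(0.0, A, B)) - per(B)) < 1e-6   # lambda = 0 gives h_0 = per B
assert haf(block(0.7, A, B)) >= per(B)              # Theorem spec
```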
\begin{theorem}\label{mom} If $X_1$, \dots, $X_n$ are jointly normal random variables with zero expectation, then $$E\left( X_1^2\cdots X_n^2\right)\ge EX_1^2\cdots EX_n^2.$$ Equality holds if and only if they are independent or at least one of them is almost surely zero. \end{theorem} \begin{proof} The variables can be written as $X_i=(x_i,\xi)$ with $\xi$ of standard normal distribution and the $x_i$ constant vectors with a positive semi-definite Gram matrix $A=(a_{i,j})=((x_i,x_j))$. Then \begin{align*}E\prod _{i=1}^n X_i^2=E\prod _{i=1}^n (x_i,\xi)^2=\\ =\mathrm {haf}\;\left(\begin{matrix} A&A\\ A&A\end{matrix}\right)\ge\mathrm {per}\; A\ge\prod_{i=1}^na_{i,i}=\\ =\prod_{i=1}^nE(x_i,\xi)^2=\prod_{i=1}^nEX_i^2,\end{align*} with equality if and only if $A$ is a diagonal matrix or has a zero row, i.e., the $x_i$ are pairwise orthogonal or at least one of them is zero. \end{proof} The generalization of Theorem~\ref{mom} to an arbitrary even exponent $2p$ is equivalent to Conjecture~\ref{conj}. \newcommand{\ii}{\'{\i}} \begin{theorem}\label{average} For any $x_1,\dots, x_n\in\mathbb R^d$, $|x_i|=1$, the average of $\prod (x_i,\xi)^2$ on the unit sphere $\{\xi\in\mathbb R^d\;:\; |\xi|=1\}$ is at least $$\frac{\Gamma (d/2)}{2^n\Gamma (d/2+n)}=\frac {(d-2)!!}{(d+2n-2)!!}=\frac 1{d(d+2)(d+4)\dots (d+2n-2)},$$ with equality if and only if the vectors $x_i$ are pairwise orthogonal. \end{theorem} \begin{proof} The average on the unit sphere is the constant in the theorem times the expectation w.r.t.\ the standard Gaussian measure (see e.g.\ [B1]). By Theorem~\ref{mom}, the latter expectation is minimal if and only if the $x_i$ are pairwise orthogonal, in which case it is 1. \end{proof} \begin{theorem}\label{polar} For real linear functionals $f_i$ on a real Euclidean space, $$||f_1\cdots f_n||\ge \frac{||f_1||\cdots ||f_n||}{\sqrt{n(n+2)(n+4)\cdots (3n-2)}}.$$ \end{theorem} Here $||\cdot ||$ means supremum of the absolute value on the unit sphere.
In the infinite-dimensional case, functionals with infinite norm may be allowed. Then the convention $0\cdot\infty=0$ should be used on the right hand side. \begin{proof} We may assume that the space is $\mathbb R^d$ with $d\le n$, and the functionals are given by $f_i(\xi)=(x_i,\xi)$ with $||f_i||=|x_i|=1.$ Then $||f_1\cdots f_n||^2$ is at least the average of $\prod f_i^2(\xi)=\prod (x_i,\xi)^2$ on the unit sphere, which by Theorem~\ref{average} and $d\le n$ is at least $1/(n(n+2)(n+4)\cdots (3n-2)).$ \end{proof} It is an unsolved problem, raised by Ben\ii tez, Sarantopoulos and Tonge [BST] (1998), whether Theorem~\ref{polar} is true with $n^n$ under the square root sign in the denominator on the right hand side. This is called the `real linear polarization constant' problem. In the complex case, the affirmative answer was proved by Arias-de-Reyna [A] in 1998, based on the complex analog of the Wick formula [A, B2, G] and on Lieb's inequality \eqref{L}.\footnote{The referee of the present paper called my attention to the fact that Arias-de-Reyna used only the special case of \eqref{L} where the matrix $A'$ is of rank 1. This is much simpler than \eqref{L} in general; it can be proved essentially by the argument Marcus used in [Mar1, Mar2] to prove the even more special case $n'=1$, which still implies inequality~\eqref{M}.} Keith Ball [Ball] gave another proof of the affirmative answer in the complex case by solving the complex plank problem. In the real case, the affirmative answer for $n\le 5$ was proved by Pappas and R\'ev\'esz [PR] in 2004. For general $n$, the best estimate known before the present paper was that of R\'ev\'esz and Sarantopoulos [RS] (2004), based on results of [MST], with $(2n)^{n}/4$ under the square root sign. See [Mat1, Mat2, MM, R] for accounts on this and related questions.
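The two denominators can also be compared numerically. The sketch below (ours, illustrative) evaluates the new denominator $n(n+2)\cdots(3n-2)$ in exact integer arithmetic, checks it against the closed-form upper bound $(3\sqrt3\,n/e)^n$, and confirms that it drops below the earlier $(2n)^n/4$ once $n$ is moderately large.

```python
from math import prod, sqrt, e

def new_denominator(n):
    """n(n+2)(n+4)...(3n-2), in exact integer arithmetic."""
    return prod(range(n, 3 * n - 1, 2))

def old_denominator(n):
    """(2n)^n / 4, the earlier Revesz-Sarantopoulos constant."""
    return (2 * n) ** n / 4

# the closed-form upper bound (3*sqrt(3)*n/e)^n holds for every n
for n in range(1, 15):
    assert new_denominator(n) < (3 * sqrt(3) * n / e) ** n

# for moderately large n the new denominator is smaller than the old one
assert new_denominator(50) < old_denominator(50)
```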
Note that \begin{align*} {n(n+2)(n+4)\cdots (3n-2)}=\\ =\exp\left(\log n+\log (n+2)+\log (n+4)+\dots+\log (3n-2)\right)<\\ <\exp\left({\frac12\int_n^{3n}\log u\cdot {\rm d}u}\right)=\\ =\exp\left( [u(\log u -1)]_n^{3n}/2\right)=\exp ((3n\log {3n}-3n -n\log n+n)/2)=\\ =\exp {\frac {n(2\log n+3\log 3 -2)}2} =\left(\frac{3\sqrt 3}en\right)^n,\end{align*} and $3\sqrt 3/e<3\cdot 1.8/2.7=2$, so Theorem~\ref{polar} is an improvement. Note also that the statement with $n^n$ under the square root sign would follow from Conjecture~\ref{conj}. \section*{Acknowledgements} I am grateful to P\'eter Major, M\'at\'e Matolcsi and Szil\'ard R\'ev\'esz for helpful discussions, and to the anonymous referee for useful comments. \section*{References} \noindent [A] J.\ Arias-de-Reyna, Gaussian variables, polynomials and permanents, Lin.\ Alg.\ Appl.\ 285 (1998), 107--114. \bigskip\noindent [Ball] K.\ M.\ Ball, The complex plank problem, Bull.\ London.\ Math.\ Soc.\ 33 (2001), 433--442. \bigskip\noindent [B1] A.\ Barvinok, Estimating $L^\infty$ norms by $L^{2k}$ norms for functions on orbits, Found.\ Comput.\ Math.\ 2 (2002), 393--412. \bigskip\noindent [B2] A.\ Barvinok, Integration and optimization of multivariate polynomials by restriction onto a random subspace, arXiv preprint: math.OC/0502298 \bigskip\noindent [BST] C.\ Ben\ii tez, Y.\ Sarantopoulos, A.\ Tonge, Lower bounds for norms of products of polynomials, Math.\ Proc.\ Camb.\ Phil.\ Soc.\ 124 (1998), 395--408. \bigskip\noindent [D] D.\ \v Z.\ \DJ okovi\'c, Simple proof of a theorem on permanents, Glasgow Math.\ J.\ 10 (1969), 52--54. \bigskip\noindent [G] L.\ Gurvits, Classical complexity and quantum entanglement, J.\ Comput.\ System Sci.\ 69 (2004), no. 3, 448--484. \bigskip\noindent [L] E.\ H.\ Lieb, Proofs of some conjectures on permanents, J.\ Math.\ Mech.\ 16 (1966), 127--134. \bigskip\noindent [Mar1] M.\ Marcus, The permanent analogue of the Hadamard determinant theorem, Bull.\ Amer.\ Math.\ Soc.\ 69 (1963), 494--496. 
\bigskip\noindent [Mar2] M.\ Marcus, The Hadamard theorem for permanents, Proc.\ Amer.\ Math.\ Soc.\ 15 (1964), 967--973. \bigskip\noindent [MN] M.\ Marcus, M.\ Newman, The permanent function as an inner product, Bull.\ Amer.\ Math.\ Soc.\ 67 (1961), 223--224. \bigskip\noindent [Mat1] M.\ Matolcsi, A geometric estimate on the norm of product of functionals, Lin.\ Alg.\ Appl.\ 405 (2005), 304--310. \bigskip\noindent [Mat2] M.\ Matolcsi, The linear polarization constant of $\mathbb R^n$, Acta Math.\ Hungar.\ 108 (2005), no.\ 1-2, 129--136. \bigskip\noindent [MM] M.\ Matolcsi, G.\ A.\ Mu\~noz, On the real linear polarization constant problem, Math.\ Inequal.\ Appl.\ 9 (2006), no.\ 3, 485--494. \bigskip\noindent [Mi] H.\ Minc, Permanents, Encyclopedia of Mathematics and its Applications, Add\-is\-on-Wesley, 1978 \bigskip\noindent [MST] G.\ A.\ Mu\~noz, Y.\ Sarantopoulos, A.\ Tonge, Complexifications of real Banach spaces, polynomials and multilinear maps, Studia Math.\ 134 (1999), no.\ 1, 1--33. \bigskip\noindent [PR] A.\ Pappas, Sz.\ R\'ev\'esz, Linear polarization constants..., J.\ Math.\ Anal.\ Appl. 300 (2004), 129--146. \bigskip\noindent [R] Sz.\ Gy.\ R\'ev\'esz, Inequalities for multivariate polynomials, Annals of the Marie Curie Fellowships 4 (2006), {\tt http:/\!/www.mariecurie.org/annals/}, arXiv preprint: math.CA/0703387 \bigskip\noindent [RS] Sz.\ Gy.\ R\'ev\'esz, Y.\ Sarantopoulos, Plank problems, polarization and Chebyshev constants, J.\ Korean Math.\ Soc.\ 41 (2004) 157--174. \bigskip\noindent [S] B.\ Simon, The P$(\phi)_2$ Euclidean (Quantum) Field Theory, Princeton Series in Physics, Princeton University Press, 1974 \bigskip\noindent [Z] A.\ Zvonkin, Matrix integrals and map enumeration: an accessible introduction, Combinatorics and physics (Marseille, 1995), Math.\ Comput.\ Modelling 26 (1997), 281--304. \end{document}
\section{Introduction} Learning representations for visual correspondence is a long-standing problem in computer vision, which is closely related to many vision tasks such as video object tracking, keypoint tracking, and optical flow estimation. This task is challenging due to factors such as viewpoint changes, distractors, and background clutter. Correspondence estimation generally requires human annotations for model training. Collecting dense annotations, especially for large-scale datasets, requires costly human effort. To leverage the large volume of raw videos in the wild, recent advances focus on self-supervised correspondence learning by exploring the inherent relationships within the unlabeled videos. In \cite{TimeCycle}, the temporal cycle-consistency is utilized to self-supervise the feature representation learning. To be specific, the correct patch-level or pixel-wise associations between two successive frames should match bi-directionally in both forward and backward tracking trajectories. The bi-directional matching is realized via a frame-level affinity matrix, which represents the pixel pair-wise similarity between two frames. In \cite{colorizition,UVC}, this affinity is also utilized to achieve the content transformation between two frames for self-supervision. A straightforward transformation within videos is the color/RGB information. More specifically, the pixel colors in a target frame can be ``copied'' (or transformed) from the pixels in a reference frame. By minimizing the differences between the transformed and the true colors of the target frame, the backbone network is forced to learn robust feature embeddings for identifying correspondence across frames in a self-supervised manner. In spite of the impressive performance, existing unsupervised correspondence algorithms put all the emphasis on intra-video analysis.
Since the scene in a single video is generally stable and changes little, establishing correspondence within the same video is less challenging and inevitably limits the discrimination potential of the learned feature embeddings. In this work, we go beyond intra-video correspondence learning by further considering the inter-video embedding separation of different instance objects. Our method is largely inspired by the recent success of contrastive learning \cite{MoCo,SimCLR}, which aims at maximizing the agreement between different augmented versions of the same image via a contrastive loss \cite{contrastiveloss}. Nevertheless, there are two obvious gaps between contrastive learning and correspondence learning. First, classic contrastive learning relies on augmented still images, but how to adapt it to the video-level correspondence scenario is rarely explored. Second, their optimization goals are somewhat conflicting. Contrastive learning targets positive concentration and negative separation, ignoring the pixel-to-pixel relevance among the positive embeddings. In contrast, correspondence learning aims at identifying fine-grained matching. In this work, we aim to narrow the above domain gaps by absorbing the core contrastive ideas for correspondence estimation. To transfer contrastive learning from the image domain to the video domain, we leverage patch-level tracking to acquire matched image pairs in unlabeled videos. Consequently, our method captures the real target appearance changes that reside in the video sequences, without augmenting still images using empirical rules (\emph{e.g.}, scaling and rotation). Furthermore, we propose the inter-video transformation, which is consistent with correspondence learning in terms of the optimization goal while preserving the contrastive characteristic among different instance embeddings.
In our framework, similar to previous works \cite{colorizition,UVC}, the image pixels should match their counterpart pixels in the current video to satisfy the self-supervision. In addition, these pixels are forced to mismatch the pixels in other videos to reinforce the instance-level discrimination, which is formulated as the contrastive transformation across a batch of videos, as shown in Figure~\ref{fig:1}. By virtue of the intra-inter transformation consistency as well as the sparsity constraint for the inter-video affinity, our framework encourages contrastive embedding learning within the correspondence framework. In summary, the main contribution of this work lies in the contrastive framework for self-supervised correspondence learning. 1) By joint unsupervised tracking and contrastive transformation, our approach extends the classic contrastive idea to the temporal domain. 2) To bridge the domain gap between two diverse tasks, we propose the intra-inter transformation consistency, which differs from contrastive learning but absorbs its core motivation for correspondence tasks. 3) Last but not least, we verify the proposed approach in a series of correspondence-related tasks including video object segmentation, pose tracking, and object tracking. Our approach consistently outperforms previous state-of-the-art self-supervised approaches and is even comparable with some task-specific fully-supervised algorithms. \section{Related Work}\label{relation work} In this section, we briefly review the related methods including unsupervised representation learning, self-supervised correspondence learning, and contrastive learning. {\noindent \bf Unsupervised Representation Learning}. Learning representations from unlabeled images or videos has been widely studied.
Unsupervised approaches explore the inherent information inside images or videos as the supervisory signals from different perspectives, such as frame sorting \cite{lee2017unsupervised}, image content recovery \cite{contextencoder}, deep clustering \cite{deepclustering}, affinity diffusion \cite{AffinityDiffusion}, motion modeling \cite{WatchingObjectsMove,tung2017self-supervisedMotionCapture}, and bi-directional flow estimation \cite{UnFlow}. These methods learn an unsupervised feature extractor, which can be generalized to different tasks by further fine-tuning using a small set of labeled samples. In this work, we focus on a sub-area in the unsupervised family, \emph{i.e.}, learning features for fine-grained pixel matching without task-specific fine-tuning. Our framework shares partial insight with \cite{wang2015unsupervised}, which utilizes off-the-shelf visual trackers for data pre-processing. In contrast, we jointly track and spread feature embeddings in an end-to-end manner for complementary learning. Our method is also motivated by contrastive learning \cite{predictiveContrastive}, another popular framework in the unsupervised learning family. In the following, we discuss correspondence learning and contrastive learning in detail. {\noindent \bf Self-supervised Correspondence Learning}. Learning temporal correspondence is widely explored in the visual object tracking (VOT), video object segmentation (VOS), and flow estimation \cite{FlowNet} tasks. VOT aims to locate the target box in each frame based on the initial target box, while VOS propagates the initial target mask. To avoid expensive manual annotations, self-supervised approaches have attracted increasing attention. In \cite{colorizition}, based on the frame-wise affinity, the pixel colors from the reference frame are transferred to the target frame as self-supervisory signals.
Wang \emph{et al.} \cite{TimeCycle} conduct forward-backward tracking in unlabeled videos and leverage the inconsistency between the start and end points to optimize the feature representation. The UDT algorithm \cite{UDT} leverages a similar bi-directional tracking idea and incorporates the correlation filter for unsupervised tracker training. In \cite{TrackerSingleMovie}, an unsupervised tracker is trained via incremental learning using a single movie. Recently, Li \emph{et al.} \cite{UVC} combine the object-level and fine-grained correspondence in a coarse-to-fine fashion and show notable performance improvements. In \cite{SpacetimeCorrespondence}, space-time correspondence learning is formulated as a contrastive random walk and shows impressive results. Despite the success of the above methods, they put the main emphasis on intra-video self-supervision. Our approach takes a step further by simultaneously exploiting the intra-video and inter-video consistency to learn more discriminative feature embeddings. Therefore, previous intra-video based approaches can be regarded as one part of our framework. \begin{figure*}[t] \centering \includegraphics[width=17.7cm]{main.pdf} \caption{An overview of the proposed framework. Given a batch of videos, we first do patch-level tracking to generate image pairs. Then, intra- and inter-video transformations are conducted for each video in the mini-batch. Finally, in addition to the intra-video self-supervision, we introduce the intra-inter consistency and sparsity constraint to reinforce the embedding discrimination.}\label{fig:2} \end{figure*} {\noindent \bf Contrastive Learning}. Contrastive learning is a popular unsupervised learning paradigm, which aims to enlarge the embedding disagreements of different instances for representation learning \cite{predictiveContrastive,ye2019unsupervised,hjelm2019learning}.
Based on the contrastive framework, the recent SimCLR method \cite{SimCLR} significantly narrows the performance gap between supervised and unsupervised models. He \emph{et al.} \cite{MoCo} propose the MoCo algorithm to fully exploit the negative samples in the memory bank. Inspired by the recent success of contrastive learning, we also involve plentiful negative samples for discriminative feature learning. Compared with existing contrastive methods, one major difference is that our method jointly tracks and spreads feature embeddings in the video domain. Therefore, our method captures the appearance variations that unfold over time instead of manually augmenting still images. Besides, instead of using a standard contrastive loss \cite{contrastiveloss}, we incorporate the contrastive idea into the correspondence task by a conceptually simple yet effective contrastive transformation mechanism to narrow the domain gap. \section{Methodology}\label{method} An overview of our framework is shown in Figure~\ref{fig:2}. Given a batch of videos, we first crop the adjacent image patches via patch-level tracking, which ensures the image pairs have similar contents and facilitates the later transformations. For each image pair, we consider the intra-video bi-directional transformation. Furthermore, we introduce irrelevant images from other videos to conduct the inter-video transformation for contrastive embedding learning. The final training objectives include the intra-video self-supervision, intra-inter transformation consistency, and sparsity regularization for the batch-level affinity. \subsection{Revisiting Affinity-based Transformation} Given a pair of video frames, the pixel colors (\emph{e.g.}, RGB values) in one frame can be copied from the pixels of another frame. This is based on the assumption that the contents in two successive video frames are coherent.
The above frame reconstruction (pixel copy) operation can be expressed via a linear transformation with the affinity matrix $ {\bf A}_{r \to t}$, which describes the copy process from a reference frame to a target frame \cite{colorizition,liu2018switchable}. A general option for the similarity measurement in the affinity matrix is the dot product between feature embeddings. In this work, we follow previous works \cite{colorizition,TimeCycle,UVC} to construct the following affinity matrix: \begin{equation}\label{eq1} {\bf A}_{r \to t}(i,j) = \frac{\text{exp}\left({{\bf f}_t(i)}^{\top} {\bf f}_r(j)\right)}{\sum_{j} \text{exp}\left({{\bf f}_t(i)}^{\top} {\bf f}_r(j) \right) }, \end{equation} where $ {\bf f}_t \in \mathbb{R}^{C \times N_1} $ and $ {\bf f}_r \in \mathbb{R}^{C \times N_2} $ denote flattened feature maps with $ C $ channels of target and reference frames, respectively. With the spatial index $ i\in[1, N_1] $ and $ j\in[1, N_2] $, ${\bf A}_{r \to t} \in \mathbb{R}^{N_1 \times N_2}$ is normalized by the softmax over the spatial dimension of $ {\bf f}_r $. Leveraging the above affinity, we can freely transform various information from the reference frame to the target frame by $ \hat{\bf L}_t = {\bf A}_{r \to t} {\bf L}_r $, where $ {\bf L}_r $ can be any associated labels of the reference frame (\emph{e.g.}, semantic mask, pixel color, and pixel location). Since we naturally know the color information of the target frame, one free self-supervisory signal is color \cite{colorizition}. The goal of such an affinity-based transformation framework is to train a good feature extractor for affinity computation. \subsection{Contrastive Pair Generation} A vital step in contrastive frameworks is building positive image pairs via data augmentation. We remove this necessity by exploiting the temporal content consistency that resides in the videos.
To this end, for each video, we first utilize the patch-level tracking to acquire a pair of high-quality image patches with similar content. Based on the matched pairs, we then conduct the contrastive transformation. Given a randomly cropped patch in the reference frame, we aim to localize the best matched patch in the target frame, as shown in Figure~\ref{fig:2}. Similar to Eq.~\ref{eq1}, we compute a patch-to-frame affinity between the features of a random patch in the reference frame and the features of the whole target frame. Based on this affinity, in the target frame, we can identify the target pixels most similar to the reference pixels, and average these pixel coordinates as the tracked target center. We also estimate the patch scale variation following the UVC approach \cite{UVC}. Then we crop this patch and combine it with the reference patch to form an image pair. \subsection{Intra- and Inter-video Transformations} {\flushleft \bf Intra-video.} After obtaining a pair of matched feature maps via patch-level tracking, we compute their fine-grained affinity $ {\bf A}_{r \to t} $ according to Eq.~\ref{eq1}. Based on this intra-video affinity, we can easily transform the image contents from the reference patch to the target patch within a single video clip. {\flushleft \bf Inter-video.} The key to the success of the aforementioned affinity-based transformation lies in discriminating among plentiful pixel embeddings to achieve accurate label copy. Nevertheless, within a pair of small patch regions, the image contents are highly correlated and may even cover only a subregion of a large object, and thus struggle to contain diverse visual patterns. The scarcity of negative pixels from other instance objects heavily hinders the embedding learning. In the following, we improve the existing framework by introducing an additional inter-video transformation to achieve contrastive embedding learning.
The inter-video affinity is defined as follows: \begin{equation}\label{eq2} {\bf A}_{r^{\Sigma} \to t}(i,j) = \frac{\text{exp}\left({{\bf f}_t(i)}^{\top} {\bf f}_r^{\Sigma}(j)\right)}{\sum_{j} \text{exp}\left({{\bf f}_t(i)}^{\top} {\bf f}_r^{\Sigma}(j) \right) }, \end{equation} where ${\bf f}_r^{\Sigma}$ is the concatenation of the reference features from different videos in the spatial dimension, \emph{i.e.}, $ {\bf f}_r^{\Sigma} = \text{Concat}({\bf f}_r^{1}, \cdots, {\bf f}_r^{n}) $. For a mini-batch with $ n $ videos, the spatial index $ i\in[1, N_1] $ and $ j\in[1, nN_2] $. {\flushleft \bf Rationale Analysis.} Inter-video transformation is an extension of intra-video transformation. By decomposing the reference feature embeddings $ {\bf f}_r^{\Sigma} \in \mathbb{R}^{C \times nN_2}$ into positive and negative, $ {\bf f}_r^{\Sigma} $ can be expressed as $ {\bf f}_r^{\Sigma} = \text{Concat}({\bf f}_r^{+}, {\bf f}_r^{-}) $, where $ {\bf f}_r^{+} \in \mathbb{R}^{C \times N_2} $ denotes the only positive reference feature related to the target frame feature while $ {\bf f}_r^{-} \in \mathbb{R}^{C \times (n-1)N_2} $ is the concatenation of negative ones from unrelated videos in the mini-batch. As a result, the computed affinity $ {\bf A}_{r^{\Sigma} \to t} \in \mathbb{R}^{N_1 \times nN_2} $ can be regarded as an ensemble of multiple sub-affinities, as shown in Figure~\ref{fig:affinity}. Our goal is to build such a batch-level affinity for discriminative representation learning. To facilitate the later descriptions, we also divide the inter-video affinity $ {\bf A}_{r^{\Sigma} \to t}$ into a combination of positive and negative sub-affinities: \begin{equation}\label{eq3} {\bf A}_{r^{\Sigma} \to t} = \text{Concat}({\bf A}_{r^{+} \to t}, {\bf A}_{r^{-} \to t}), \end{equation} where $ {\bf A}_{r^{+} \to t} \in \mathbb{R}^{N_1 \times N_2}$ and $ {\bf A}_{r^{-} \to t} \in \mathbb{R}^{N_1 \times (n-1) N_2} $ are the positive and negative sub-affinities, respectively.
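The batch-level affinity and the split in Eq.~\ref{eq3} are straightforward to express in code. The NumPy sketch below is a minimal illustration (all shapes, names, and the toy features are our assumptions, not the paper's implementation): the reference features of the $n$ videos are concatenated along the spatial axis, the softmax is taken over all $nN_2$ reference pixels, and the result is sliced into positive and negative sub-affinities.

```python
import numpy as np

def softmax_rows(logits):
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    w = np.exp(logits)
    return w / w.sum(axis=1, keepdims=True)

def inter_video_affinity(f_t, f_refs):
    """f_t: (C, N1) target features; f_refs: n reference maps of shape (C, N2),
    the first one coming from the same video (the positive)."""
    f_cat = np.concatenate(f_refs, axis=1)            # (C, n*N2)
    A = softmax_rows(f_t.T @ f_cat)                   # (N1, n*N2)
    N2 = f_refs[0].shape[1]
    return A[:, :N2], A[:, N2:]                       # positive / negative split

# toy batch: the positive reference equals the target, the negatives do not
C, N = 4, 3
f_t = 10.0 * np.eye(C)[:, :N]                         # three strong orthogonal features
f_negs = [-f_t, 0.1 * np.ones((C, N))]
A_pos, A_neg = inter_video_affinity(f_t, [f_t] + f_negs)
```

In this toy example the target matches the positive reference exactly, so nearly all of each row's probability mass falls in $ {\bf A}_{r^{+} \to t} $; a well-trained encoder is likewise expected to push $ {\bf A}_{r^{-} \to t} $ toward zero.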
Ideally, sub-affinity $ {\bf A}_{r^{+} \to t} $ should be close to the intra-video affinity and $ {\bf A}_{r^{-} \to t} $ is expected to be a zero-like matrix. Nevertheless, with the inclusion of noisy reference features $ {\bf f}_r^{-} $, the positive sub-affinity $ {\bf A}_{r^{+} \to t} $ inevitably degenerates in comparison with the intra-video affinity $ {\bf A}_{r \to t} $, as shown in Figure~\ref{fig:affinity}. In the following, we present the intra-inter transformation consistency to encourage contrastive embedding learning within the correspondence learning task. \begin{figure}[t] \centering \includegraphics[width=7.8cm]{affinity.pdf} \caption{Comparison between intra-video affinity (top) and inter-video affinity (bottom). Best viewed zoomed in.}\label{fig:affinity} \end{figure} \subsection{Training Objectives}\label{training objectives} To achieve high-quality frame reconstruction, following \cite{UVC}, we pre-train an encoder and a decoder using still images on the COCO dataset \cite{COCO} to perform the feature-level transformation. The pre-trained encoder and decoder networks are frozen without further optimization in our framework. The goal is to train the backbone network for correspondence estimation (\emph{i.e.}, affinity computation). In the following, the encoded features of the reference image $ {\bf I}_r $ are denoted as $ {\bf E}_r = \text{Encoder}({\bf I}_r) $. {\flushleft \bf Intra-video Self-supervision.} Leveraging the intra-video affinity $ {\bf A}_{r \to t} $ as well as the encoded reference feature $ {\bf E}_r $, the transformed target image can be computed via $ \hat{\bf I}_{r \to t} = \text{Decoder}( {\bf A}_{r \to t} {\bf E}_r ) $. Ideally, the transformed target frame should be consistent with the original target frame. As a consequence, the intra-video self-supervisory loss is defined as follows: \begin{equation}\label{key} {\cal{L}}_{\text{self}} = \|\hat{\bf I}_{r \to t}- {\bf I}_t\|_1.
\end{equation} {\flushleft \bf Intra-inter Consistency.} Leveraging the inter-video affinity $ {\bf A}_{r^{\Sigma} \to t} $ and the encoded reference features $ {\bf E}_r^{\Sigma} $ from a batch of videos, \emph{i.e.}, $ {\bf E}_r^{\Sigma} = \text{Concat}({\bf E}_r^1, \cdots, {\bf E}_r^n) $, the corresponding transformed target image can be computed via $\hat{\bf I}_{r^{\Sigma} \to t} = \text{Decoder}( {\bf A}_{r^{\Sigma} \to t} {\bf E}_r^{\Sigma} ) $. This inter-video transformation is shown in Figure~\ref{fig:transformation}. The reference features from other videos are considered as negative embeddings. The learned inter-video affinity is expected to exclude unrelated embeddings for transformation fidelity. Therefore, the transformed images via intra-video affinity and inter-video affinity should be consistent: \begin{equation}\label{key} {\cal{L}}_{\text{intra-inter}} = \|\hat{\bf I}_{r \to t} - \hat{\bf I}_{r^{\Sigma} \to t}\|_1. \end{equation} The above loss encourages both positive feature invariance and negative embedding separation. {\flushleft \bf Sparsity Constraint.} To further enlarge the disagreements among different video features, we force the sub-affinity in the inter-video affinity $ {\bf A}_{r^{\Sigma} \to t} $ to be sparse via \begin{equation}\label{key} {\cal{L}}_{\text{sparse}} = \| {\bf A}_{r^- \to t}\|_1, \end{equation} where $ {\bf A}_{r^- \to t} $ is the negative sub-affinity in Eq.~\ref{eq3}. {\flushleft \bf Other Regularizations.} Following previous works \cite{UVC,TimeCycle}, we also utilize the cycle-consistency (bi-directional matching) between two frames, which equals forcing the affinity matrix to be orthogonal, \emph{i.e.}, $ {\bf A}^{-1}_{r \to t} = {\bf A}_{t \to r} $. Besides, the concentration regularization proposed in \cite{UVC} is also added. These two regularizations are combined and denoted as $ {\cal{L}}_{\text{others}} $. 
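A minimal sketch of how the above losses could be assembled, assuming the reconstructed frames and the negative sub-affinity are already computed; the synthetic inputs and mean-reduced $\ell_1$ norms are simplifying assumptions, and $ {\cal{L}}_{\text{others}} $ is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)
I_t = rng.random((32, 32, 3))                              # target frame
I_hat_intra = I_t + 0.01 * rng.standard_normal(I_t.shape)  # reconstructed via intra-video affinity
I_hat_inter = I_t + 0.02 * rng.standard_normal(I_t.shape)  # reconstructed via inter-video affinity
A_neg = 1e-3 * np.abs(rng.standard_normal((81, 243)))      # negative sub-affinity

L_self = np.abs(I_hat_intra - I_t).mean()                  # intra-video self-supervision
L_intra_inter = np.abs(I_hat_intra - I_hat_inter).mean()   # intra-inter consistency
L_sparse = np.abs(A_neg).mean()                            # sparsity on the negatives
```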
{\flushleft \bf Final Objective.} The final training objective is the combination of the above loss functions: \begin{equation}\label{key} {\cal{L}}_{\text{final}} = {\cal{L}}_{\text{self}} + {\cal{L}}_{\text{intra-inter}} + {\cal{L}}_{\text{sparse}} + {\cal{L}}_{\text{others}}. \end{equation} Our designed losses $ {\cal{L}}_{\text{intra-inter}} $ and $ {\cal{L}}_{\text{sparse}} $ are incorporated with the basic objective $ {\cal{L}}_{\text{self}} $ using equal weights. An overview of the training process is shown in Algorithm~1. \begin{figure}[t] \centering \includegraphics[width=8.5cm]{transformation.pdf} \caption{Illustration of the inter-video transformation.}\label{fig:transformation} \end{figure} \subsection{Online Inference}\label{online inference} After offline training, the pretrained backbone model is fixed and utilized to compute the affinity matrix for label transformation (\emph{e.g.}, of a segmentation mask) during the inference stage. Note that the contrastive transformation is merely utilized for offline training, and the inference process is similar to the intra-video transformation. To acquire more reliable correspondence, we further design a mutually correlated affinity to exclude noisy matching as follows: \begin{equation}\label{key} {\bf \widetilde{A}}_{r \to t}(i,j) = \frac{\text{exp}\left( {\bf w}(i,j) {{\bf f}_t(i)}^{\top} {\bf f}_r(j) \right)}{\sum_{j} \text{exp}\left({\bf w}(i,j) {{\bf f}_t(i)}^{\top} {\bf f}_r(j) \right) }, \end{equation} where $ {\bf w}(i,j) \in [0,1]$ is a mutual correlation weight between two frames. Ideally, we prefer one-to-one matching, \emph{i.e.}, each pixel in the reference frame should be highly correlated with only a single pixel in the target frame and vice versa.
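A hedged numpy sketch of this inference step, assuming a max-normalized form of the mutual correlation weight (clipped to $[0,1]$ for safety in this toy setting) together with random features and soft labels:

```python
import numpy as np

rng = np.random.default_rng(0)
C, N1, N2 = 16, 9, 9
f_t = rng.standard_normal((C, N1))
f_r = rng.standard_normal((C, N2))
L_r = rng.random((N2, 2))          # soft labels per reference pixel (e.g. 2 classes)

corr = f_t.T @ f_r                 # (N1, N2) raw correlations
# Mutual correlation weight: normalize by column-wise and row-wise maxima so that
# mutually best-matching pairs receive weights near 1.
w = (corr / corr.max(axis=0, keepdims=True)) * (corr / corr.max(axis=1, keepdims=True))
w = np.clip(w, 0.0, 1.0)

scores = w * corr                  # reweighted logits for the mutually correlated affinity
A_tilde = np.exp(scores - scores.max(axis=1, keepdims=True))
A_tilde /= A_tilde.sum(axis=1, keepdims=True)
L_t = A_tilde @ L_r                # propagated target label
```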
The mutual correlation weight is formulated by: \begin{equation}\label{key} {\bf w}(i,j) = \frac{{{\bf f}_t(i)}^{\top}{\bf f}_r(j)} {\max \limits_{i\in[1,N_1]} \left({{\bf f}_t(i)}^{\top} {\bf f}_r(j) \right)} \times \frac{{{\bf f}_t(i)}^{\top}{\bf f}_r(j)} {\max \limits_{j\in[1,N_2]} \left({{\bf f}_t(i)}^{\top} {\bf f}_r(j) \right)}. \end{equation} The weight $ {\bf w} $ can be regarded as the affinity normalization across both reference and target spatial dimensions. Given the above affinity between two frames, the target frame label $ \hat{\bf L}_t $ can be transformed via $ \hat{\bf L}_t = {\bf \widetilde{A}}_{r \to t} {\bf L}_r $. \section{Experiments}\label{experiment} We verify the effectiveness of our method on a variety of vision tasks including video object segmentation, visual object tracking, pose keypoint tracking, and human parts segmentation propagation\footnote{The source code and pretrained model will be available at \url{https://github.com/594422814/ContrastCorr}}. \begin{algorithm}[t] \label{code1} \small \caption{Offline Training Process} \LinesNumbered \KwIn{Unlabeled video sequences.} \KwOut{Trained weights for the backbone network.} \For{each mini-batch}{ Extract deep features of the video frames\; Patch-level tracking to obtain matched feature pairs\; \For{each video in the mini-batch}{ {\tt \scriptsize {// Intra- and Inter-video transformations}}\\ Compute intra-video affinity $ {\bf A}_{r \to t} $ (Eq.~\ref{eq1})\; Compute inter-video affinity $ {\bf A}_{r^{\Sigma} \to t} $ (Eq.~\ref{eq3})\; Conduct intra- and inter-video transformations\; {\tt \scriptsize {// Loss Computation}}\\ Compute intra-video self-supervision $ {\cal L}_{\text{self}} $\; Compute intra-inter consistency $ {\cal L}_{\text{intra-inter}} $\; Compute regularization terms $ {\cal L}_{\text{sparse}} $ and $ {\cal L}_{\text{others}}$\; } Back-propagate all the losses in this mini-batch\; } \end{algorithm} \subsection{Experimental Details} \label{sec:experiments} 
{\flushleft \bf Training Details.} In our method, the patch-level tracking and frame transformations share a ResNet-18 backbone network \cite{ResNet} with the first 4 blocks for feature extraction. The training dataset is TrackingNet \cite{2018trackingnet} with about 30k videos. Note that previous works \cite{TimeCycle, UVC} use the Kinetics dataset \cite{Kinetics}, which is much larger in scale than TrackingNet. Our framework randomly crops and tracks patches of 256$\times$256 pixels (\emph{i.e.}, patch-level tracking), and further yields a 32$\times$32 intra-video affinity (\emph{i.e.}, the network stride is 8). The batch size is 16. Therefore, each positive embedding contrasts with 15$\times$(32$\times$32$ \times $2) = 30720 negative embeddings. Since our method considers pixel-level features, a small batch size also involves abundant contrastive samples. We first train the intra-video transformation (warm-up stage) for the first 100 epochs and then train the whole framework in an end-to-end manner for another 100 epochs. The learning rate in both stages is $ 1\times10^{-4} $ and is reduced by half every 40 epochs. The training stage takes about one day on 4 Nvidia 1080Ti GPUs. {\noindent \bf Inference Details.} For a fair comparison, we use the same testing protocols as previous works \cite{TimeCycle, UVC} in all tasks. \subsection{Framework Effectiveness Study} \label{sec:ablation} In Table~\ref{table:ablation study}, we show ablative experiments of our method on the DAVIS-2017 validation dataset \cite{DAVIS2017}. The evaluation metrics are the Jaccard index $ \cal{J} $ and contour-based accuracy $ \cal{F} $. As shown in Table~\ref{table:ablation study}, without the intra-video guidance, inter-video transformation alone for self-supervision yields unsatisfactory results due to overwhelming noisy/negative samples. With only intra-video transformation, our framework is similar to the previous approach \cite{UVC}.
By jointly employing both of these two transformations under an intra-inter consistency constraint, our method obtains obvious performance improvements of 3.2\% in $ \cal{J} $ and 3.4\% in $\cal{F} $. The sparsity term of inter-video affinity encourages the embedding separation and further improves the results. In Figure \ref{fig:comparison}, we further visualize the comparison results of our method with and without contrastive transformation. As shown in the last row of Figure \ref{fig:comparison}, only intra-video self-supervision fails to effectively handle the challenging scenarios with distracting objects and partial occlusion. By involving the contrastive transformation, the learned feature embeddings exhibit superior discrimination capability for instance-level separation. \makeatletter \def\hlinew#1{% \noalign{\ifnum0=`}\fi\hrule \@height #1 \futurelet \reserved@a\@xhline} \makeatother \setlength{\tabcolsep}{2pt} \begin{table}[t] \scriptsize \begin{center} \begin{tabular*}{8.0 cm} {@{\extracolsep{\fill}}lcccc|cc} &Intra-video &Inter-video &Sparsity & Mutual & $ \cal{J} $(Mean) &$ \cal{F} $(Mean) \\ &Transformation &Transformation &Constraint & Correlation & & \\ &$ {\cal L}_{\text{self}} + {\cal L}_{\text{others}} $ &$ {\cal L}_{\text{intra-inter}} $ &$ {\cal L} _{\text{sparse}} $ & \\ \hlinew{1pt} &$\checkmark$ & & & &55.8 &60.3 \\ &$\checkmark$ &$\checkmark$ & &&59.0 &63.7\\ &$\checkmark$ &$\checkmark$ &$\checkmark$ &&59.2 &64.0 \\ &$\checkmark$ &$\checkmark$ &$\checkmark$ &$\checkmark$& {\bf 60.5} &{\bf 65.5} \\ \end{tabular*} \caption{Analysis of each component of our method on the DAVIS-2017 validation dataset.} \label{table:ablation study} \end{center} \end{table} \begin{figure}[t] \centering \includegraphics[width=8.3cm]{comparison.pdf} \caption{(a) Ground-truth results. (b) Results of the model with both intra- and inter-video transformations. 
(c) Results of the model without inter-video contrastive transformation, where the failures are highlighted by white circles. }\label{fig:comparison} \end{figure} \begin{figure*}[t] \centering \includegraphics[width=17.8cm]{examples.pdf} \caption{Experimental results of our method. (a) Video object segmentation on the DAVIS-2017. (b) Visual object tracking on the OTB-2015. (c) Pose keypoint tracking on the J-HMDB. (d) Parts segmentation propagation on the VIP.}\label{fig:examples} \end{figure*} \setlength{\tabcolsep}{2pt} \begin{table}[t] \scriptsize \begin{center} \begin{tabular*}{8.0 cm} {@{\extracolsep{\fill}}l|c|cc} Model & Supervised & $ \cal{J} $(Mean) &$ \cal{F} $(Mean) \\ \hlinew{1pt} Transitive Inv. \cite{Transitive} & &32.0 &26.8 \\ DeepCluster \cite{deepclustering} & &37.5 &33.2 \\ Video Colorization \cite{colorizition} & &34.6 &32.7 \\ Time-Cycle \cite{TimeCycle} & &41.9 &39.4 \\ CorrFlow \cite{CorrFlow} & &48.4 &52.2 \\ UVC (480p) \cite{UVC} & &56.3 &59.2 \\ UVC (560p) \cite{UVC} & &56.7 &60.7 \\ MAST \cite{MAST} & &{\bf 63.3} &{\bf 67.6} \\ {\bf ContrastCorr (Ours)} & &60.5 &65.5 \\ \hline ResNet-18 \cite{ResNet} &$\checkmark$ &49.4 &55.1 \\ OSVOS \cite{OSVOS} &$\checkmark$ &56.6 &63.9 \\ FEELVOS \cite{voigtlaender2019feelvos:} &$\checkmark$ &69.1 &74.0 \\ \end{tabular*} \caption{Evaluation on video object segmentation on the DAVIS-2017 validation dataset.
The evaluation metrics are region similarity $ \cal{J} $ and contour-based accuracy $ \cal{F} $ .} \label{table:DAVIS2017} \end{center} \end{table} \setlength{\tabcolsep}{2pt} \begin{table}[t] \scriptsize \begin{center} \begin{tabular*}{8.0 cm} {@{\extracolsep{\fill}}l|c|cc} Model & Supervised & DP@20pixel & AUC \\ \hlinew{1pt} KCF (HOG feature) \cite{KCF} & &69.6 &48.5 \\ UL-DCFNet \cite{TrackerSingleMovie} & &75.5 &58.4 \\ UDT \cite{UDT} & &76.0 &59.4 \\ UVC \cite{UVC} & &- &59.2 \\ LUDT \cite{LUDT} & &76.9 &60.2 \\ {\bf ContrastCorr (Ours)} & &{\bf 77.2} &{\bf 61.1} \\ \hline ResNet-18 + DCF \cite{ResNet} &$\checkmark$ &49.4 &55.6 \\ SiamFC \cite{SiamFc} &$\checkmark$ &77.1 &58.2 \\ DiMP-18 \cite{DiMP} &$\checkmark$ &87.1 &66.2 \\ \end{tabular*} \caption{Evaluation on video object tracking on the OTB-2015 dataset. The evaluation metrics are distance precision (DP) and area-under-curve (AUC) score of the success plot.} \label{table:OTB2015} \end{center} \end{table} \begin{table}[t] \scriptsize \begin{center} \begin{tabular*}{7.8 cm} {@{\extracolsep{\fill}}l|c|cc} Model & Supervised & [email protected] & [email protected] \\ \hlinew{1pt} SIFT Flow \cite{SIFTFlow} & &49.0 &68.6 \\ Transitive Inv. \cite{Transitive} & &43.9 &67.0 \\ DeepCluster \cite{deepclustering} & &43.2 &66.9 \\ Video Colorization \cite{colorizition} & &45.2 &69.6 \\ Time-Cycle \cite{TimeCycle} & &57.3 &78.1 \\ CorrFlow \cite{CorrFlow} & &58.5 &78.8 \\ UVC \cite{UVC} & &58.6 &79.8 \\ {\bf ContrastCorr (Ours)} & &{\bf 61.1} &{\bf 80.8} \\ \hline ResNet-18 \cite{ResNet} &$\checkmark$ &53.8 &74.6 \\ Thin-Slicing Network \cite{thin-slicing} &$\checkmark$ &68.7 &92.1 \\ \end{tabular*} \caption{Keypoints propagation on J-HMDB. 
The evaluation metric is PCK at different thresholds.} \label{table:JHMDB} \end{center} \end{table} \setlength{\tabcolsep}{2pt} \begin{table}[t] \scriptsize \begin{center} \begin{tabular*}{7.6 cm} {@{\extracolsep{\fill}}l|c|cc} Model &Supervised &mIoU & $ \text{AP}^{r}_{\text{vol}} $~ \\ \hlinew{1pt} SIFT Flow \cite{SIFTFlow} & &21.3 &10.5 \\ Transitive Inv. \cite{Transitive} & &19.4 &5.0 \\ DeepCluster \cite{deepclustering} & &21.8 &8.1 \\ Time-Cycle \cite{TimeCycle} & &28.9 &15.6 \\ UVC \cite{UVC} & &34.1 &17.7 \\ {\bf ContrastCorr (Ours)} & &{\bf 37.4} &{\bf 21.6} \\ \hline ResNet-18 \cite{ResNet} &$\checkmark$ &31.8 &12.6 \\ FGFA \cite{zhu2017flow-guided} &$\checkmark$ &37.5 &23.0 \\ ATEN \cite{VIP} &$\checkmark$ &37.9 &24.1 \\ \end{tabular*} \caption{Evaluation on propagating human part labels in the Video Instance-level Parsing (VIP) dataset. The evaluation metrics are semantic propagation with mIoU and part instance propagation in $ \text{AP}^{r}_{\text{vol}} $.} \label{table:VIP} \end{center} \end{table} \subsection{Comparison with State-of-the-art Methods} {\noindent \bf Video Object Segmentation on the DAVIS-2017.} DAVIS \cite{DAVIS2017} is a video object segmentation (VOS) benchmark. We evaluate our method on the DAVIS-2017 validation set using the Jaccard index $ \cal{J} $ (IoU) and contour-based accuracy $ \cal{F} $. Table~\ref{table:DAVIS2017} lists quantitative results. Our model performs favorably against the state-of-the-art self-supervised methods including Time-Cycle \cite{TimeCycle}, CorrFlow \cite{CorrFlow}, and UVC \cite{UVC}. Specifically, with the same experimental settings (\emph{e.g.}, frame input size and recurrent reference strategy), our model surpasses the recent top-performing UVC approach by 3.8\% in $ \cal{J} $ and 4.8\% in $ \cal{F} $. The recent MAST approach \cite{MAST} obtains impressive results by leveraging a memory mechanism, which can be added to our framework for further performance improvement.
From Figure~\ref{fig:examples} (first row), we can observe that our method is robust in handling distracting objects and partial occlusion. Compared with the fully-supervised ResNet-18 network trained on ImageNet with classification labels, our method exhibits much better performance. It is also worth noting that our method even surpasses recent fully-supervised methods such as OSVOS. {\noindent \bf Video Object Tracking on the OTB-2015.} OTB-2015 \cite{OTB-2015} is a visual tracking benchmark with 100 challenging videos. We evaluate our method on OTB-2015 under distance precision (DP) and area-under-curve (AUC) metrics. Our model learns robust feature representations for fine-grained matching, which can be combined with the correlation filter \cite{KCF,DSST} for robust tracking. Without online fine-tuning, we integrate our model into a classic tracking framework based on the correlation filter, \emph{i.e.,} DCFNet \cite{DCFNet}. The comparison results are shown in Table~\ref{table:OTB2015}. Note that UDT \cite{UDT} is a recently proposed unsupervised tracker trained with the correlation filter in an end-to-end manner. Without end-to-end optimization, our model is still robust enough to achieve superior performance in comparison with UDT. Our method also outperforms classic fully-supervised trackers such as SiamFC. As shown in Figure~\ref{fig:examples} (second row), our model handles motion blur, deformation, and similar distractors well. {\noindent \bf Pose Keypoint Propagation on the J-HMDB.} We evaluate our model on the pose keypoint propagation task on the validation set of J-HMDB \cite{JHMDB}. Pose keypoint tracking requires precise fine-grained matching, which is more challenging than the box-level or mask-level propagation in the VOT/VOS tasks. Given the initial frame with 15 annotated human keypoints, we propagate them through the successive frames.
The evaluation metric is the probability of correct keypoint (PCK), which measures the percentage of keypoints that lie within different distance thresholds of the ground truth. We show comparison results against the state-of-the-art methods in Table~\ref{table:JHMDB} and qualitative results in Figure~\ref{fig:examples} (third row). Our method outperforms all previous self-supervised methods such as Time-Cycle, CorrFlow, and UVC (Table~\ref{table:JHMDB}). Furthermore, our approach significantly outperforms pre-trained ResNet-18 with ImageNet supervision. {\noindent \bf Semantic and Instance Propagation on the VIP.} Finally, we evaluate our method on the Video Instance-level Parsing (VIP) dataset \cite{VIP}, which includes dense human parts segmentation masks on both the semantic and instance levels. We conduct two tasks in this benchmark: semantic propagation and human part propagation with instance identity. For the semantic mask propagation, we propagate the semantic segmentation maps of human parts (\emph{e.g.}, heads, arms, and legs) and evaluate performance via the mean IoU metric. For the part instance propagation task, we propagate the instance-level segmentation of human parts (\emph{e.g.}, different arms of different persons) and evaluate performance via the instance-level human parsing metric: mean Average Precision (AP). Table~\ref{table:VIP} shows that our method performs favorably against previous self-supervised methods. For example, our approach outperforms the previous best self-supervised method UVC by 3.3\% mIoU in semantic propagation and 3.9\% in human part propagation. Besides, our model notably surpasses the ResNet-18 model trained on ImageNet with classification labels. Finally, our method is comparable with the fully-supervised ATEN algorithm \cite{VIP} designed for this dataset. \section{Conclusion} \label{conclusion} In this work, we focus on correspondence learning using unlabeled videos.
Based on the well-studied intra-video self-supervision, we go one step further by introducing the inter-video transformation to achieve contrastive embedding learning. The proposed contrastive transformation encourages embedding discrimination while preserving the fine-grained matching characteristic among positive embeddings. Without task-specific fine-tuning, our unsupervised model shows satisfactory generalization on a variety of temporal correspondence tasks. Our approach consistently outperforms previous self-supervised methods and is even comparable with the recent fully-supervised algorithms. { {\flushleft \bf Acknowledgements.} The work of Wengang Zhou was supported in part by the National Natural Science Foundation of China under Contract 61822208, Contract U20A20183, and Contract 61632019; and in part by the Youth Innovation Promotion Association CAS under Grant 2018497. The work of Houqiang Li was supported by NSFC under Contract 61836011.} { \bibliographystyle{aaai}
\section{Introduction} Consider the space of solutions, $M$, to the equation $F(x)=0$ where $F:\mathbb R^n \rightarrow \mathbb R^m$ is a smooth function. To understand the geometry of $M$ one could start by computing the derivative $DF(x)$. Under suitable regularity conditions (e.g., $DF(x)$ is surjective for all solutions) $M$ is a smooth manifold of dimension $\alpha=\text{Nullity}(DF(x))$. This result is one of several closely related observations in differential geometry that I'll collectively refer to as the rank theorem (the standard version states that $F$ is equivalent to a projection of rank equal to $\text{Rank}(DF(x))$). If one further assumes that $M$ is compact, then the smoothness of $F$ implies that the $\alpha$-Hausdorff measure of $M$ is finite. These observations have a direct bearing on von Neumann algebras and free probability. I'll spell out this analogy by translating each of the quantities above into an appropriate operator algebra counterpart. $F$ will now be an $m$-tuple of $*$-polynomials in $n$ indeterminates and $X$ will denote an $n$-tuple of generators for a tracial von Neumann algebra $(M,\varphi)$ such that $F(X)=0$. Associated to $X$ is a free probability quantity defined in \cite{v2} called the (modified) microstates free entropy dimension $\delta_0(X)$. $\delta_0(X)$ is an analogue of Minkowski dimension and is defined through an asymptotic process involving $\epsilon$-entropy, i.e., the minimum number of $\epsilon$-balls required to cover a metric space. It will replace topological dimension in this analogy. Differentiation in operator algebras can be expressed through derivations and tensor products, and these devices can be used to construct a derivative $D^sF(X)$ which will be a matrix with entries in $M \otimes M^{op}$, another tracial von Neumann algebra. One can speak of the nullity and rank of $D^sF(X)$ by taking (unnormalized) traces of the projections onto the kernel or cokernel of $D^sF(X)$.
Thus, setting $\alpha = \text{Nullity}(D^sF(X))$ one might expect \begin{eqnarray*} \delta_0(X) \leq \alpha. \end{eqnarray*} Notice here that this is an inequality instead of an equality, as $X$ is not defined solely through the relation, but only required to satisfy it. To complete the analogy, just as there is a suitable replacement for dimension, so too is there a kind of Hausdorff entropy $\mathbb H^{\alpha}$ for $X$ and if one believes that $X$ is 'compact' in some suitable way, then \begin{eqnarray*} \mathbb H^{\alpha}(X) < \infty. \end{eqnarray*} This paper is concerned with proving the above von Neumann algebra counterparts. The analogy is simple. The proofs are not. Before outlining some of the difficulties involved, I'll discuss the free probability results, their applications to group von Neumann algebras, and the connections to $L^2$-invariants and combinatorial group theory. \subsection{Group von Neumann Algebras Results, $L^2$-invariants, and Combinatorial Group Theory} The dimension inequality in terms of the nullity has a slightly more general expression than discussed above. Suppose $F$, $X$ are as before, but that the condition $F(X)=0$ is removed. It will be shown (Section 4) that for a suitable notion of the derivative, denoted by $D^s$, \begin{eqnarray*} \delta_0(X) & \leq & \text{Nullity}(D^sF(X)) + \delta_0(F(X):X) \end{eqnarray*} where here $\delta_0(:)$ is a kind of relative free entropy dimension (\cite{v2}). If $F(X)=0$, then $\delta_0(F(X):X)=0$ and the above reduces to the free entropy dimension inequality stated above. The inequality is connected to a number of von Neumann algebra applications of free entropy dimension involving normalizers/commutators (\cite{v2}, \cite{gs}), eigenvalue results for maximal free entropy dimension tuples (\cite{msm}, \cite{ss}), and $L^2$-Betti number computations (\cite{cs}, \cite{dl}). The connections are discussed at length in the examples of Section 4. 
The argument for the Hausdorff entropy bound requires additional assumptions on $F$ and $X$ including the condition $F(X)=0$ (see Remark 6.10 about dropping this condition). It will also require that $D^sF(X) \in M_{2n}(M \otimes M^{op})$ has \textbf{geometric decay}, a property equivalent to $|D^sF(X)|$ having a nonzero Fuglede-Kadison-L{\"u}ck determinant. \cite{j2} introduced a notion of Hausdorff $\alpha$-entropy $\mathbb H^{\alpha}$ and covering $\alpha$-entropy $\mathbb K^{\alpha}$. In general $\mathbb H^{\alpha} \leq \mathbb K^{\alpha}$, although it is not known whether the inequality is strict. Using the covering entropy I will prove a slightly stronger form of the entropy inequality (Section 6), namely that \begin{eqnarray*} \mathbb H^{\alpha}(X) \leq \mathbb K^{\alpha}(X) <\infty, \end{eqnarray*} for $\alpha = \text{Nullity}(D^sF(X))$. When $X$ satisfies such an inequality, $X$ is said to be $\alpha$-bounded. When $\alpha=1$, $\mathbb K^{\alpha}$ has particular significance. Motivated by free probability applications to von Neumann algebras in \cite{v1}, \cite{g}, and \cite{gs}, and the fundamental work of Besicovitch (\cite{besicovitch1}, \cite{besicovitch2}; see also \cite{falconer}) on regular fractal sets of finite Hausdorff $1$-measure, I showed in \cite{j3} that if $X$ is a finite tuple of self-adjoint elements such that there exists an element $x \in X$ with finite free entropy and such that $\mathbb K^1(X) < \infty$, then $\delta_0$ is an invariant for the von Neumann algebra generated by $X$. To be clear, this means that any other finite tuple $Y$ of self-adjoint generators for the von Neumann algebra generated by $X$ satisfies the condition that $\mathbb K^1(Y) < \infty$ (and thus $\delta_0(Y) \leq 1$). A von Neumann algebra with such a generating set was called strongly $1$-bounded. Thus, by computations made in \cite{v1}, the free group factors cannot be generated by a strongly $1$-bounded von Neumann algebra.
Applications of this included showing that a union of two strongly $1$-bounded von Neumann algebras with a diffuse intersection generates a strongly $1$-bounded von Neumann algebra; also any finite generating set for the von Neumann algebra generated by a strongly $1$-bounded von Neumann algebra along with any subset of its normalizers generates a strongly $1$-bounded von Neumann algebra. This work also provided a way to unify and generalize the already established results of \cite{v2} and \cite{g}, which demonstrated that the free group factors have no Cartan subalgebra, and are prime, respectively. Note that the results on normalizers and commutants can be obtained and significantly strengthened under much more general conditions through a variety of solidity/rigidity techniques introduced in \cite{o1}, \cite{o2}, \cite{op}, \cite{ipp}, among others. \cite{va} is one place where the interested reader can read a more detailed account of this area. The geometric decay condition and nullity computation are unwieldy to verify/compute in a general von Neumann algebra context without further assumptions (aside from commutators, skew-commutators, normalizers, and skew-normalizers). In the discrete group case, however, one can access $L^2$-theory (L{\"u}ck's determinant property, Linnell's $L^2$-property) as well as combinatorial group theory to guarantee these spectral derivative conditions. Indeed, if $X$ is an $n$-tuple consisting of the canonical unitaries associated to a generating set of elements for a discrete group $\Gamma$, then $D^sF(X)$ will have geometric decay at $0$ whenever $\Gamma$ is sofic by a result of \cite{es}. Moreover, if $\Gamma$ is also left orderable (i.e., has a left invariant linear ordering) and $F$ has a nontrivial (nonidentity) $*$-monomial, then by a result of \cite{l}, $D^sF(X)$ will have nullity no greater than $n-1$. 
In particular when $n=2$ one has: \begin{proposition} Suppose $\Gamma$ is a left orderable, sofic group with $2$ generators and $\Gamma \neq \{0\}$. The following conditions are equivalent: \begin{enumerate}[(1)] \item $\Gamma \not\simeq \mathbb F_2$. \item $L(\Gamma) \not\simeq L(\mathbb F_2)$. \item $L(\Gamma)$ is strongly $1$-bounded. \item $\delta_0(X) = 1$ for any finite set of generators $X$ for $L(\Gamma)$. \end{enumerate} \end{proposition} \noindent One can actually show this if left orderability in the above is replaced with a weaker condition (Corollary 7.8). In essence, for such sofic group von Neumann algebras on two generators, the existence of a nontrivial relation collapses the microstate spaces of the generators into a kind of rectifiable curve of amenable von Neumann algebras (i.e., it's strongly $1$-bounded). Putting this together with results of \cite{h} (\cite{b}) on the local indicability (residual solvability) of torsion-free (positive) one-relator groups yields: \begin{corollary} Suppose $\Gamma$ is a torsion free, discrete group with a one relator presentation on $2$ generators. If $\Gamma$ is sofic and the relator is nontrivial, then $L(\Gamma)$ is strongly $1$-bounded. In particular, if the relator is nontrivial and positive, then $L(\Gamma)$ is strongly $1$-bounded. \end{corollary} The torsion-free condition can be replaced with an equivalent condition on the form of the relator, as stated in the abstract (see also Corollary 7.10). Note that the torsion-free condition is necessary as a free product of two finite cyclic groups demonstrates. Further von Neumann algebra results are presented in Sections 6 and 7. \subsection{Technical Overview} Recalling the proof of the Rank Theorem will give an idea of some of the challenges in establishing the analogy. From the outset there is a slight problem with comparing a microstates dimension with nullities in a von Neumann algebra context. 
Dimension and the derivative in the geometric context (and thus the microstates dimension) take place in a real setting, but when one takes traces in von Neumann algebras, these occur in a complex setting. Thus, a little care must be taken to extract the appropriate real dimension from the complex traces. One can break the derivative up into its self-adjoint and skew-adjoint parts, keep track of its action on these spaces, and define the corresponding derivation. A simple example will clarify the issues. Consider the tracial von Neumann algebra of $k \times k$ complex matrices $M=M_k(\mathbb C)$, the map $F:M \rightarrow M$ defined by $F(x) = x^*x$, and the level set $L = \{ x \in M: F(x) = I\}$. $L$ is just the Lie group of unitaries and its dimension is $k^2$, which is the same dimension as the space of the self-adjoint complex matrices. Normalizing by $k^2$, the dimension of $L$ 'is' $1$. This can easily be recovered in terms of the derivative and the von Neumann algebra. The derivative of $F$ at $I$ is $DF(I) = I_M + J$ where $I_M$ is the identity operator on $M$ and $J$ is the conjugation map on $M$. The kernel of $DF(I)$ should have dimension agreeing with the dimension of $L$, i.e., $1$. Unfortunately the appearance of the conjugation operator $J$ (which is real linear and not complex linear) makes this situation slightly annoying if one wants to use it in a complex setting. 
However, if one breaks up the action of $I_M$ and $J$ onto the self-adjoint and skew-adjoint portions of $M$, $M^{sa}$ and $M^{sk}$ (regarded as real subspaces), then in a matricially loose sense: \begin{eqnarray*} DF(I) & = & I_M +J \\ & = & \begin{bmatrix} I & 0 \\ 0 & I \\ \end{bmatrix} + \begin{bmatrix} I & 0 \\ 0 & -I \\ \end{bmatrix} \\ & = & \begin{bmatrix} 2I & 0 \\ 0 & 0 \\ \end{bmatrix}.\\ \end{eqnarray*} The nullity of $DF(I)$ in its form above as a $2 \times 2$ matrix with entries in $M_k(\mathbb C)$ should be the trace of the projection onto the orthogonal complement of the range, namely \begin{eqnarray*} \begin{bmatrix} 0 & 0 \\ 0 & I \\ \end{bmatrix}. \end{eqnarray*} The trace of the matrix above 'is' $1$ as expected (under the normalization, $1$ corresponds to $k^2$, which is exactly the dimension of $L$) and its range consists of the set of skew-adjoints, i.e., the tangent space of the unitaries at $I$. The decomposition described above can be generalized to the derivative of any $*$-polynomial. Notice that one could avoid these algebraic preliminaries by working exclusively in the self-adjoint context (as done initially in \cite{v1}, \cite{v2}). However, I will need to work in the unitary case as well, and creating one differential calculus which encompasses both cases seemed preferable. The differential calculus rules for the self-adjoint and unitary cases follow from the general setup. In particular, after a change of bases, the unitary calculus for $*$-monomials can be explicitly derived and this will have computational consequences in the group von Neumann algebra setting. These arguments take place in Section 3. Assuming the linear algebra notions of rank and nullity for derivatives are in order, the manifolds argument proceeds to show that the solution space of $F(x)=0$ is locally diffeomorphic to $\ker DF(x)$. Here, one claims (again under some regularity conditions) that after a change of bases, $DF(x)$ is a projection.
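The unitary-group example above can be checked numerically. The following is a minimal sketch (the identification of $M_k(\mathbb C)$ with $\mathbb R^{2k^2}$ and the helper names are my own choices for illustration, not part of the paper's formalism): it assembles the real Jacobian of $h \mapsto DF(I)(h) = h + h^*$ and verifies that its nullity is $k^2$, i.e., $1$ after normalizing by $k^2$.

```python
import numpy as np

# Real Jacobian of F(x) = x^* x at the identity: DF(I)(h) = h + h^*.
# Identify M_k(C) with R^{2k^2} (real parts, then imaginary parts) and
# verify that the nullity of DF(I) is k^2, i.e., 1 after normalizing by k^2.

k = 3
n = 2 * k * k                       # real dimension of M_k(C)

def vec_to_mat(v):
    return v[:k * k].reshape(k, k) + 1j * v[k * k:].reshape(k, k)

def mat_to_vec(h):
    return np.concatenate([h.real.ravel(), h.imag.ravel()])

DF_I = lambda h: h + h.conj().T     # derivative of x -> x^* x at x = I

# assemble DF(I) as a real n x n matrix, one basis vector at a time
J = np.column_stack([mat_to_vec(DF_I(vec_to_mat(e))) for e in np.eye(n)])

rank = np.linalg.matrix_rank(J)
nullity = n - rank
print(nullity == k * k, nullity / k ** 2)  # True 1.0
```

The kernel recovered here is exactly the real span of the skew-adjoint matrices, matching the tangent-space description above.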
Although microstates are finite dimensional matrices like $DF(x)$, they are asymptotic approximants to operators with singular values that can be more complex than a finite list of eigenvalues. For example, the spectrum may be the entire unit interval. This spectral complexity makes the differential geometry argument difficult to carry over. It led to the idea of \textbf{splitting the spectrum} by a continuous parameter $\alpha$. Spectral splitting occurs in the setting of microstates, and since the microstates are simply elements in Euclidean space, for simplicity I will describe spectral splitting in that context. Suppose $0 \in E \subset B \subset \mathbb R^d$ with $B$ a ball of small radius and $f: B \rightarrow \mathbb R^m$ is a $C^1$-function. For small $\beta >0$ define $Q_{\beta} = 1_{[0,\beta]}(|Df(0)|)$; this is the analogue of the kernel of $DF(x)$ in the differential geometry argument. $Q_{\beta}^{\bot}$ is the analogue of the nondegenerate projection which represents $DF(x)$ after a change of bases. Very roughly, by projecting onto $Q_{\beta}$ and $Q_{\beta}^{\bot}$, for any $\epsilon >0$ \begin{eqnarray*} K_{\epsilon}(E) & \leq & K_{\epsilon}(Q_{\beta}(E)) \cdot K_{\epsilon}(f(E)) \end{eqnarray*} where $K_{\epsilon}$ is the minimum number of elements in an $\epsilon$-cover of $E$ (the inequality is not true as stated; Lemma 4.2 is the exact statement). If one takes an appropriate limit as $\epsilon \rightarrow 0$, then \begin{eqnarray*} \dim(E) & \leq & \dim(Q_{\beta}(E)) + \dim(f(E)) \end{eqnarray*} which is very close to the initial free entropy dimension inequality ($\dim$ is a heuristic term that isn't rigorously defined here). The one slight problem is that $Q_{\beta}$ is a projection onto a subspace which might strictly contain the kernel. But since the above holds for any $\beta$, one can take a limit as $\beta \rightarrow 0$.
Then $Q_{\beta}$ converges to $Q_0$, the projection onto the kernel of $Df(0)$, and one arrives at the free entropy dimension inequality stated in the previous section. Notice that these estimates will only work in a small enough neighborhood $B$ because one needs uniform control of the derivative. Using these inequalities and some general covering estimates, and putting this into the microstates machinery, yields the stated free entropy dimension inequality. The rigorous arguments take place in Section 4. While a single spectral split yields a free entropy dimension bound in terms of the nullity of the derivative, it alone cannot establish the entropy estimate because of the lingering $\beta$-dimension. To see why, it's helpful to think of Hausdorff/covering entropy as a kind of natural measure associated to the dimension of the set. What happens when one takes the natural measure of a set and uses it to evaluate a set with a slightly larger dimension? For example, the Hausdorff 2-measure of a 3-dimensional solid is infinite regardless of the size of the 3-dimensional solid. This mismatch will occur for any $n$-dimensional manifold and $(n-1)$-dimensional Hausdorff measure. So too will it occur with the lingering $\beta$-dimension: trying to show that $\mathbb K^{\alpha}(X)$ is finite for $\alpha = \text{Nullity}(D^sF(X))$ on a space which has dimension $\alpha$ plus a subspace of dimension dependent on $\beta$ and $X$ looks hopeless. The solution I'll present quantifies the relationship between the $\epsilon$-entropy and the excess dimension (Theorem 6.8). Basically, any $\epsilon$-neighborhood of the microstate space looks like a subspace of dimension slightly larger than the right dimension. The difference is quantified by $\beta$ and $D^sF(X)$. The smaller one makes the $\epsilon$-neighborhood, the closer its entropy is to that of an $\epsilon$-ball in a subspace of normalized dimension $\alpha =\text{Nullity}(D^sF(X))$.
Thus, as $\epsilon \rightarrow 0$ the exponential growth rate of the local entropy decreases to $\alpha$, the 'right' rate. On the other hand, as $\epsilon$ shrinks, the number of balls required to cover the microstate space increases; thus, as $\epsilon \rightarrow 0$ the global entropy increases. Quantifying the rate of the local entropy decrease and global entropy increase in terms of $\epsilon$ is crucial. It is here that the geometric decay assumption, i.e., the determinant class condition (in the sense of L{\"u}ck), becomes indispensable. Once this relationship is established, one can perform \textbf{iterative spectral splits} on a geometric scale of $\epsilon$ where $\epsilon$ remains fixed. The approach is different from the one-time split of the spectrum in that for each split, one accounts for the entropy of both the degenerate and nondegenerate portions, pairing the gain with the loss. Iterating the spectral splitting procedure and keeping tally of the cumulative entropy resulting from each iteration yields a finite upper bound. This kind of argument will require significantly more complicated covering estimates than those presented in the dimension argument. It will include Chebyshev estimates, Rogers's asymptotic bounds for covers of balls by balls \cite{rogers}, coverings by sets of negligible dimension (bindings and fringes), and estimates using the quasi-norm spaces $L^p(M)$ for $0 < p <1$. A more detailed description of the difficulties and intuition can be found in Section 5. There, additional Euclidean estimates, von Neumann projection lemmas, and their relation to matricial calculus are presented. Section 6 puts this all together to arrive at the entropy estimate. There are three other sections aside from these core sections (3-6) and the introduction.
Section 2 will set up notation, review background material, and prove some technical lemmas, some of which are interesting in their own right (e.g., Lemma 2.7 which proves continuity of the covering number as a function of $\epsilon$). Section 7 will present the group von Neumann algebra corollaries. The appendix will review St. Raymond's volume estimates \cite{sr} and establish the necessary volume/metric entropy inequalities for the quasinorm balls in the $k\times k$ complex matrices. \section{Preliminaries} \subsection{Notation} A tracial von Neumann algebra consists of a tuple $(M, \varphi)$ where $M$ is a von Neumann algebra and $\varphi$ is a tracial state on $M$. $\varphi$ will always be normal and faithful. For any $0 < p < \infty$, $\|\cdot\|_p$ denotes the $p$-(quasi)norm on $M$ given by $\|x\|_p = \varphi(|x|^p)^{1/p}$, $x \in M$. When $p =\infty$, I will sometimes write $\|x\|_{\infty}$ for the operator norm of $x$. Unless otherwise stated, $(M,\varphi)$ will denote a tracial von Neumann algebra. $L^2(M)$ is the completion of $M$ under the $\|\cdot\|_2$-norm. Suppose $x$ is a normal element in $(M, \varphi)$. $\varphi$ induces a Borel measure $\mu$ on the spectrum of $x$, $sp(x)$, obtained by restricting $\varphi$ to $C(sp(x))$ via the spectral theorem and using the Riesz representation theorem to extend $\varphi$'s restriction to a Borel measure supported on $sp(x)$. This measure will be called the spectral distribution of $x$. If $f$ is a bounded, complex-valued Borel function on $sp(x)$, then $f(x)$ denotes the unique element obtained by extending the continuous functional calculus to the bounded, Borel functional calculus. For a subset $S$ of a set $\Omega$, I will write $1_S$ for the indicator function on $S$, i.e., $1_S(\omega) = 1$ if $\omega \in S$ and $1_S(\omega)=0$ if $\omega \in \Omega - S$. With this notation, if $E \subset sp(x)$ is a Borel set, then $1_E(x)$ is the spectral projection associated to $E$ and $x$.
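To make the spectral distribution concrete in the finite-dimensional model $(M_k(\mathbb C), tr_k)$, here is a minimal numerical sketch (the helper names and test functions are my own choices for illustration): for a self-adjoint matrix the induced measure is the uniform atomic measure on the eigenvalues, and the Borel functional calculus acts by applying $f$ to the eigenvalues.

```python
import numpy as np

# In (M_k(C), tr_k) the spectral distribution of a self-adjoint x is the
# uniform atomic measure mu on its eigenvalues, and the bounded Borel
# functional calculus f(x) acts by applying f to the eigenvalues.

rng = np.random.default_rng(0)
k = 6
a = rng.standard_normal((k, k))
x = (a + a.T) / 2                       # a self-adjoint element of M_k(C)
evals, evecs = np.linalg.eigh(x)

tr_k = lambda y: np.trace(y) / k

def borel_calculus(f):
    # f(x) via the spectral theorem: conjugate f applied to the eigenvalues
    return evecs @ np.diag(f(evals)) @ evecs.T

# tr_k(f(x)) equals the integral of f against mu, illustrated for f(t) = t^2
f = lambda t: t ** 2
assert np.isclose(tr_k(borel_calculus(f)), np.mean(f(evals)))

# the spectral projection 1_E(x) for E = [0, infinity): tr_k(1_E(x)) = mu(E)
ind = lambda t: (t >= 0).astype(float)
p = borel_calculus(ind)
assert np.allclose(p @ p, p)            # p is an orthogonal projection
assert np.isclose(tr_k(p), np.mean(evals >= 0))
print("spectral distribution checks pass")
```

This finite-dimensional picture is exactly the one that the matricial microstates of Section 2.3 approximate asymptotically.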
Notice that these notions make sense for a symmetric, real linear operator $T$ on a finite dimensional vector space (since it's diagonalizable). The spectral calculus notation in the complex case will also be used in the real case. For example, $|T|$ denotes $(T^*T)^{1/2}$ where $T^*$ is the transpose of $T$. As in the complex case, in this situation one has a polar decomposition $T = U|T|$ where $U$ is an orthogonal matrix and $|T|$ is a symmetric, positive semidefinite operator. These also make sense (as in the complex case) when the domain and range of $T$ are distinct real vector spaces (i.e., possibly of different dimensions). For any $k \in \mathbb N$, $M_k(\mathbb C)$ denotes the space of $k \times k$ complex matrices and $tr_k$ denotes the unique tracial state on $M_k(\mathbb C)$. $M^{sa}_k(\mathbb C)$ denotes the space of $k \times k$ self-adjoint complex matrices. If $R>0$ then $(M_k(\mathbb C))_R$ and $(M^{sa}_k(\mathbb C))_R$ denote the sets of elements $x$ in $M_k(\mathbb C)$ or $M^{sa}_k(\mathbb C)$, respectively, whose operator norm is no greater than $R$. $(M_k(\mathbb C))^n$ will denote the space of $n$-tuples of elements in $M_k(\mathbb C)$ and the $\|\cdot\|_2$-norm on this space is defined by $\|(\xi_1,\ldots, \xi_n)\|_2 = (\sum_{i=1}^n tr_k(\xi_i^*\xi_i))^{1/2}$. This notation agrees with the 2-norm for a von Neumann algebra when $n=1$. The $\|\cdot\|_{\infty}$-norm (sometimes written simply as $\|\cdot\|$) is defined by $\|(\xi_1,\ldots, \xi_n)\|_{\infty} = \max_{1\leq i \leq n} \|\xi_i\|_{\infty}$. $(M^{sa}_k(\mathbb C))^n$, $((M_k(\mathbb C))_R)^n$, and $((M^{sa}_k(\mathbb C))_R)^n$ have analogous meanings: they are direct sums of $n$ copies of the inner terms. If $\xi, \eta \in (M_k(\mathbb C))^n$, then $\xi \cdot \eta, \xi+\eta \in (M_k(\mathbb C))^n$ have the obvious coordinatewise meaning.
Given $\xi \in (M_k(\mathbb C))^n$ and $\epsilon, p \in (0,\infty)$, $B_p(\xi, \epsilon) \subset (M_k(\mathbb C))^n$ denotes the ball of $\|\cdot\|_p$-radius $\epsilon$ with center $\xi$. Given a discrete group $\Gamma$, $L(\Gamma) \subset B(\ell^2(\Gamma))$ denotes the group von Neumann algebra generated by the left regular representation of $\Gamma$. $e_{\Gamma}$ denotes the identity element of the group. $\Gamma^{op}$ denotes the opposite group of $\Gamma$. $L(\Gamma)$ has a canonical, faithful, tracial state given by $\varphi(x) = \langle x 1_{e_{\Gamma}}, 1_{e_\Gamma} \rangle$. Unless otherwise stated, $L(\Gamma)$ will be regarded as a tracial von Neumann algebra w.r.t. this canonical trace. Recall that there exists a unique tracial state on $L(\Gamma) \iff L(\Gamma) \text{ is a factor} \iff \Gamma \text{ is i.c.c.}$. For $n \in \mathbb N$, $\mathbb F_n$ denotes the free group on $n$ elements. Suppose $a_1,\ldots, a_n$ are the canonical generators for $\mathbb F_n$. Recall that there is a bijective correspondence between $\mathbb F_n$ and the reduced words $w$ in $a_1,\dots, a_n$. If $g_1,\ldots, g_n$ are elements in a group $\Gamma$, then $w(g_1,\ldots,g_n)$ denotes the obvious element obtained by associating to $w$ the corresponding unique element $g \in \mathbb F_n$ and then setting $w(g_1,\ldots, g_n) = \pi(g)$ where $\pi:\mathbb F_n \rightarrow \Gamma$ is the unique group homomorphism such that $\pi(a_i) = g_i$, $1\leq i\leq n$. I will blur the distinction between reduced words $w$ in $a_1,\ldots, a_n$, and the unique group element they represent in $\mathbb F_n$. Suppose $X=\{x_1,\ldots, x_n\}$ is a finite $n$-tuple of elements in a complex $*$-algebra and $F$ is a $p$-tuple of noncommutative $*$-polynomials in $n$ indeterminates. Write $F = \{f_1,\ldots,f_p\}$. $F(X)$ denotes the $p$-tuple of elements $\{f_1(X),\ldots, f_p(X)\}$. 
Given another finite tuple $Y=\{y_1,\ldots, y_p\}$ in the same complex $*$-algebra, $X \cup Y$ denotes the concatenated finite tuple $\{x_1,\ldots, x_n, y_1,\ldots, y_p\}$. When the elements of a tuple $X$ are identically $0$ or the identity this will be written as $X=0$ or $X=I$, respectively. Given a complex algebra $A$, $A^{op}$ denotes the opposite algebra of $A$. Recall that $A^{op}$ coincides with $A$ as a vector space. Given an element $a \in A$, I will also write $a \in A^{op}$. Sometimes to reinforce that this element is regarded in $A^{op}$ I will write $a^{op} \in A^{op}$; however, at times (only when multiplication is not an issue and in order to reduce notation) I'll simply use the shorthand $a \in A^{op}$. $I$ will denote the identity of a given complex algebra $A$. \subsection{Metric Notation, Packing/Covering, Sumsets} Suppose $(M,d)$ is a metric space. Given $\epsilon >0$ and $x \in M$, $B(x,\epsilon)$ denotes the open ball with radius $\epsilon$ and center $x$. Given $X \subset M$, an $\epsilon$-cover for $X$ is a subset $\Lambda \subset M$ such that $\cup_{x \in \Lambda} B(x, \epsilon) \supset X$. For any $\epsilon >0$, $K_{\epsilon}(X)$ denotes the minimum number of elements in an $\epsilon$-cover for $X$. A subset $F$ of $M$ is $\epsilon$-separated if for any distinct elements $x, y \in F$, $d(x,y) \geq \epsilon$. $S_{\epsilon}(X)$ denotes the maximum number of elements in an $\epsilon$-separated subset $F \subset X$. Denote by $\mathcal N_{\epsilon}(X)$ the $\epsilon$-neighborhood of $X$ in $M$. These $\epsilon$-parametrized metric concepts are closely related to one another. Below are a number of their simple but very useful properties which are stated for convenience and without proof (they are all easy to verify): \begin{proposition} Suppose $X$ is a subset of the metric space $(M,d)$.
The following hold: \begin{enumerate} [(i)] \item If $Y \subset X$, then for any $\epsilon >0$, $S_{\epsilon}(Y) \leq S_{\epsilon}(X)$, $K_{\epsilon}(Y) \leq K_{\epsilon}(X)$, and $\mathcal N_{\epsilon}(Y) \subset \mathcal N_{\epsilon}(X)$. \item $S_{\epsilon}(X \cup Y) \leq S_{\epsilon}(X) + S_{\epsilon}(Y)$. \item If $r > \epsilon >0$, then $S_r(X) \leq S_{\epsilon}(X)$, $K_r(X) \leq K_{\epsilon}(X)$, and $\mathcal N_{\epsilon}(X) \subset \mathcal N_r(X)$. \item If $\epsilon > 2\delta >0$, then $S_{\epsilon}(\mathcal N_{\delta}(X)) \leq S_{\epsilon - 2\delta}(X)$ and if $\epsilon > \delta >0$, then $K_{\epsilon}(\mathcal N_{\delta}(X)) \leq K_{\epsilon -\delta}(X)$. \item $K_{\epsilon}(X) \leq S_{\epsilon}(X) \leq K_{\frac{\epsilon}{2}}(X)$. \item If $(M, d)$ is Euclidean space of dimension $k$, $\mu$ is Lebesgue measure, and $c_k$ is the volume of the unit ball, then \begin{eqnarray*} \frac{\mu(\mathcal N_{\epsilon}(X))}{c_k (2\epsilon)^k} & \leq & K_{\epsilon}(X) \\ & \leq & S_{\epsilon}(X) \\ & \leq & \frac{\mu(\mathcal N_{\epsilon/2}(X))}{c_k (\epsilon/2)^k}. \end{eqnarray*} \end{enumerate} \end{proposition} The (upper) covering dimension (also known as the box-counting, entropy, or Minkowski dimension) of $X$ is defined by \begin{eqnarray*} \dim(X) = \limsup_{\epsilon \rightarrow 0} \frac{\log K_{\epsilon}(X)}{|\log \epsilon|}. \end{eqnarray*} It is straightforward to check from the properties listed above that $\dim(X) = \limsup_{\epsilon \rightarrow 0} \frac{\log S_{\epsilon}(X)}{|\log \epsilon|}$ and when $X$ is a subset of Euclidean space of dimension $k$ and $\mu$ is Lebesgue measure, then \begin{eqnarray*} \dim(X) = k + \limsup_{\epsilon \rightarrow 0} \frac{\log(\mu(\mathcal N_{\epsilon}(X)))}{|\log \epsilon|}. \end{eqnarray*} Suppose now that $E$ is a normed linear space and $A$ and $B$ are subsets of $E$. The sumset of $A$ and $B$ is the subset of $E$ whose elements are of the form $a+b$ where $a \in A$ and $b \in B$.
This construction is often denoted by $A+B$, but because of the prolific use of the addition sign as an algebraic operation throughout this paper, I will write the sumset of $A$ and $B$ as $A \boxplus B = \{a+b: a \in A, b \in B\}$ to emphasize its set theoretic meaning. If $A_1, \ldots, A_n$ is a sequence of subsets, then their sumset is $\boxplus_{i=1}^n A_i = \{a_1 + \cdots + a_n : a_i \in A_i, 1\leq i \leq n\}$. The sumset operations will be used primarily in the context of metric properties and covering estimates. Note that the usage of $\boxplus$ conflicts with its standard meaning in free probability for the free additive convolution. Since I will not be using the free additive convolution in this paper there should be no confusion. By the triangle inequality, if $\Lambda_i$ are $\epsilon$-covers for subsets $A_i$, then $\boxplus_{i=1}^n \Lambda_i$ is an $n\epsilon$-cover for $\boxplus_{i=1}^n A_i$. Thus, there is the following simple and very coarse estimate: \begin{lemma} If $E$ is a Banach space and $\langle A_i \rangle_{i=1}^n$ is a sequence of subsets of $E$, then for any $\epsilon >0$, \begin{eqnarray*} S_{2n\epsilon}(\boxplus_{i=1}^n A_i) & \leq & K_{n\epsilon}(\boxplus_{i=1}^n A_i) \\ & \leq & \Pi_{i=1}^n K_{\epsilon}(A_i) \\ & \leq & \Pi_{i=1}^n S_{\epsilon}(A_i).\\ \end{eqnarray*} \end{lemma} Finally, when $B = B(x,r) \subset E$, then for $s>0$, $sB = B(x, rs)$, the ball obtained by dilating $B$ by $s$ w.r.t. the center $x$. \subsection{Microstates} Suppose $(M,\varphi)$ is a tracial von Neumann algebra and $X =\{x_1,\ldots, x_n\}$ is an $n$-tuple of elements in $M$.
Given $m,k \in \mathbb N$ and $\gamma >0$ the $(m,k,\gamma)$ $*$-microstates $\Gamma(X;m,k,\gamma)$ consists of all elements $\xi = (\xi_1,\ldots, \xi_n) \in (M_k(\mathbb C))^n$ such that for any $1 \leq p \leq m$, $1 \leq i_1,\ldots, i_p \leq n$, and $j_1,\ldots, j_p \in \{1, *\}$, \begin{eqnarray*} |\varphi(x_{i_1}^{j_1} \cdots x_{i_p}^{j_p}) - tr_k(\xi_{i_1}^{j_1} \cdots \xi_{i_p}^{j_p})| < \gamma. \end{eqnarray*} There are several useful variants of the above. For $R>0$, $\Gamma_R(X;m,k,\gamma)$ is the set consisting of all elements $\xi \in \Gamma(X;m,k,\gamma)$ such that $\| \xi \|_{\infty} \leq R$. When $X$ consists of self-adjoint or unitary elements, one can impose the condition that each microstate's entries are self-adjoint or unitary complex matrices. These sets will be denoted by $\Gamma^{sa}_{\cdot}(\cdot)$ and $\Gamma^u_{\cdot}(\cdot)$, respectively. The original definition of matricial microstates was made in \cite{v1}. While the original definition was for finite tuples of self-adjoint elements, the unitary and general definitions above are straightforward extensions. One can also consider the microstates of $X$ in the presence of $Y$ (introduced in \cite{v2}). For this suppose $Y = \{y_1,\ldots, y_d\} \subset M$. $\Gamma(X:Y;m,k,\gamma)$ consists of all $\xi \in \Gamma(X;m,k,\gamma)$ for which there exists an $\eta \in (M_k(\mathbb C))^d$ such that $(\xi, \eta) \in \Gamma(X \cup Y;m,k,\gamma)$. Another way of putting it is that $\Gamma(X:Y;m,k,\gamma)$ is the projection of $\Gamma(X \cup Y;m,k,\gamma)$ onto the first $n$ coordinates. Denote by $\text{vol}$ Lebesgue measure w.r.t. the real inner product metric which $\|\cdot\|_2$ induces on $(M_k(\mathbb C))^n$ ($\|\xi \|_2 = (\sum_{i=1}^n tr_k(\xi_i^*\xi_i))^{1/2}$). Observe that under this identification, $(M_k(\mathbb C))^n$ is isomorphic as a real vector space to $\mathbb R^{2nk^2}$.
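As a concrete (hedged) illustration of the definition, a large random matrix furnishes a microstate for a standard semicircular element $x$, whose trace moments $\varphi(x^p)$ are the Catalan numbers for even $p$ and $0$ for odd $p$. The sketch below, with my own helper names and a GOE normalization, checks the self-adjoint moment condition for $m=6$, $\gamma = 0.1$:

```python
import numpy as np

# A concrete self-adjoint microstate: for x a standard semicircular element
# (varphi(x^p) = Catalan numbers for even p, 0 for odd p), a large GOE-type
# random matrix satisfies the defining moment inequalities of
# Gamma^{sa}(x; m, k, gamma) with overwhelming probability.

rng = np.random.default_rng(1)
k = 1000
a = rng.standard_normal((k, k))
xi = (a + a.T) / np.sqrt(2 * k)          # GOE normalization; spectrum ~ [-2, 2]

tr_k = lambda y: np.trace(y).real / k
moments = [1, 0, 1, 0, 2, 0, 5]          # varphi(x^p) for p = 0, ..., 6

m, gamma = 6, 0.1
power = np.eye(k)
devs = []
for p in range(m + 1):
    devs.append(abs(tr_k(power) - moments[p]))
    power = power @ xi
print(max(devs) < gamma)  # True: xi passes the (6, k, 0.1) moment test
```

With $k$ this large, concentration of measure makes the moment test pass with overwhelming probability; this is the standard random matrix heuristic behind the nonemptiness of $\Gamma^{sa}(x;m,k,\gamma)$ for large $k$.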
Define successively \[ \chi(X;m,\gamma) = \limsup_{k \rightarrow \infty} \left( k^{-2} \cdot \log(\text{vol}(\Gamma(X;m,k,\gamma))) + 2n \cdot \log k \right), \] \[ \chi(X) = \inf \{\chi(X;m,\gamma): m \in \mathbb N, \gamma >0\}. \] Note the scaling factor $2n \cdot \log k$ is different from that in \cite{v1} due to both the nonself-adjoint context as well as the normalization of the traces used here. $\chi(X)$ is called the free entropy of $X$. Replacing the microstates of $X$ with the microstates in the presence of $Y$ in the above yields a quantity $\chi(X:Y)$ called the free entropy of $X$ in the presence of $Y$. One can impose operator norm cutoff conditions and obtain free entropy quantities $\chi_R(X)$ as well as $\chi_R(X:Y)$. When $X$ consists of self-adjoint quantities, one can consider the self-adjoint microstates and use Lebesgue measure on $(M^{sa}_k(\mathbb C))^n$ inherited from $\|\cdot\|_2$. Replacing the scaling factor $2n \cdot \log k$ in the definition above with $n \log k$, one arrives at the free entropy of the self-adjoint tuple, denoted by $\chi^{sa}(X)$. Other functions can be applied to the microstate spaces with a more geometric measure theoretic bent. Using covering numbers w.r.t. the metric $\|\cdot\|_2$, define successively for any $\epsilon>0$ \[ \mathbb K_{\epsilon}(X;m,\gamma) = \limsup_{k \rightarrow \infty} k^{-2} \cdot \log(K_{\epsilon}(\Gamma(X;m,k,\gamma))), \] \[ \mathbb K_{\epsilon}(X) = \inf \{\mathbb K_{\epsilon}(X;m,\gamma): m \in \mathbb N, \gamma >0\}. \] One similarly defines $\mathbb S_{\epsilon}$ by replacing $K_{\epsilon}$ above with $S_{\epsilon}$. Again, an operator norm cutoff can be introduced, giving rise to notation such as $\mathbb K_{\epsilon, R}(\cdot)$ and $\mathbb S_{\epsilon, R}(\cdot)$. One can also define corresponding covering and separated $\epsilon$-quantities for self-adjoint and unitary microstates spaces when $X$ consists of self-adjoint or unitary elements.
They will be notated by $\mathbb K_{\epsilon}^{sa}(X)$, $\mathbb S_{\epsilon}^{sa}(X)$, $\mathbb K_{\epsilon}^u(X)$, $\mathbb S_{\epsilon}^u(X)$. The (modified) free entropy dimension of $X$, $\delta_0(X)$ is the common quantity \begin{eqnarray*} \delta_0(X) & = & \limsup_{\epsilon \rightarrow 0} \frac{\mathbb K_{\epsilon}(X)}{|\log \epsilon|} \\ & = & \limsup_{\epsilon \rightarrow 0} \frac{\mathbb S_{\epsilon}(X)}{|\log \epsilon|}. \\ \end{eqnarray*} The equation above was not the original formulation of $\delta_0$ introduced in \cite{v2} involving semicircular (or what would in this context be circular) free perturbations, but it was shown in \cite{j1} to be equivalent. Again, when $X$ consists of self-adjoint or unitary elements one can consider the self-adjoint or unitary quantities; they will be denoted by $\delta_0^{sa}$ and $\delta_0^u$, respectively. The following says that the use of operator norm cutoffs or self-adjoint/unitary restrictions has no effect on the entropy quantities. The free entropy claim of (i) is essentially contained in \cite{bb}. The covering equalities (ii)-(iv) follow from \cite{j4}, Rogers's asymptotic estimates \cite{rogers}, and Remark 2.6. \begin{proposition} If $R > \max_{x \in X} \|x\|$, and $\epsilon >0$ then the following are true: \begin{enumerate}[(i)] \item $\chi_R(X) = \chi(X)$. If $X$ consists of self-adjoint elements, then $\chi_R^{sa}(X) = \chi^{sa}(X)$. \item $\mathbb K_{\epsilon}(X) = \mathbb K_{\epsilon, R}(X)$. \item If $X$ consists of self-adjoint elements, then $\mathbb K_{\epsilon}(X) = \mathbb K^{sa}_{\epsilon}(X) = \mathbb K^{sa}_{\epsilon, R}(X)$. \item If $X$ consists of unitary elements, then $\mathbb K_{\epsilon}(X) = \mathbb K^u_{\epsilon}(X)$. \end{enumerate} \end{proposition} It was asked in \cite{v1} whether $\delta_0$ or some variant of it is a von Neumann algebra invariant.
More precisely, the question is this: if $X$ and $Y$ are finite tuples in a tracial von Neumann algebra which generate the same von Neumann algebra, then is $\delta_0(X)=\delta_0(Y)$? An affirmative answer to this question would show (again by \cite{v1}) the nonisomorphism of the free group factors and settle a longstanding problem in operator algebras. It is known from \cite{v3} that if $X$ and $Y$ generate the same $*$-algebra, then $\delta_0(X)=\delta_0(Y)$ (Proposition 2.5 below will basically demonstrate this as well). Thus, one can make sense of the free entropy dimension of a finitely generated complex $*$-algebra as the free entropy dimension of any of its finite generating sets. In particular, for a finitely generated discrete group $G$ one can rigorously define $\delta_0(G)$ to be the free entropy dimension of the tuple consisting of the unitaries associated to any finite tuple of group generators. \cite{v2} applied the microstates theory to show that the free group factors have no Cartan subalgebras, and \cite{g} used it to show that they are prime, answering several old operator algebra questions. These results were subsequently strengthened and generalized using different methods which were discussed in the introduction. Motivated by geometric measure theoretic considerations, \cite{j3} effectively showed that $\delta_0$ is an invariant when a tuple has finite covering $1$-entropy. Recall from there that for $\alpha >0$, a finite tuple of self-adjoints $X$ in a tracial von Neumann algebra is said to be $\alpha$-bounded if there exist $K, \epsilon_0 >0$ such that for all $0 < \epsilon < \epsilon_0$, \begin{eqnarray*} \mathbb K^{sa}_{\epsilon}(X) \leq \alpha \cdot |\log \epsilon| + K. \end{eqnarray*} If $X$ is $1$-bounded and contains an entry $x$ such that $\chi^{sa}(x) > -\infty$, then $X$ is said to be \textbf{strongly 1-bounded}.
It turns out that if $X$ is strongly $1$-bounded, then any other finite generating tuple $Y$ for the von Neumann algebra generated by $X$ satisfies the same inequality above, possibly with a different $\epsilon_0$ and $K$. In particular, $\delta_0(Y) \leq 1$. This allows one to distinguish strongly $1$-bounded von Neumann algebras (which are closed under diffuse intersections, normalizers, and pairwise commutation relations) from von Neumann algebras with (microstates) free entropy dimension strictly greater than $1$ (e.g. the free group factors as demonstrated in \cite{v1}). The key feature in the definition of strongly $1$-bounded is that the defining inequality propagates to any representative of the associated von Neumann algebra. It reduces a von Neumann algebra nonexistence result to computing a numerical invariant of one propitious generating set. As a result of this, a tracial von Neumann algebra is said to be strongly $1$-bounded if it has a finite set of self-adjoint elements which is strongly $1$-bounded. While $\alpha$-boundedness was phrased for finite tuples of self-adjoints, it makes sense for a finite tuple of general (possibly non-self-adjoint) elements. Formally, \begin{definition} A finite tuple $X$ in $(M,\varphi)$ is $\alpha$-bounded if there exist $K, \epsilon_0 >0$ such that for all $0 < \epsilon < \epsilon_0$, \begin{eqnarray*} \mathbb K_{\epsilon}(X) & \leq & \alpha \cdot |\log \epsilon| +K. \end{eqnarray*} \end{definition} This definition coincides with the original one when the finite tuple consists of self-adjoint elements (by Proposition 2.3). Notice that if $X$ is $\alpha$-bounded, then $\delta_0(X) \leq \alpha$. Whether one works with self-adjoint, general, or unitary tuples is immaterial.
Indeed, one can move from one type of generating set to another with $*$-algebraic operations and invoke the following: \begin{proposition} If $X$ and $Y$ are finite tuples of elements in $(M,\varphi)$ which generate the same complex $*$-algebra, then for any $R>0$ there exist $L, R_1 >0$ such that for any $\epsilon >0$, \begin{eqnarray*} \mathbb K_{\epsilon, R}(X) \leq \mathbb K_{\frac{\epsilon}{L}, R_1}(Y). \end{eqnarray*} It follows that there exists a $C >0$ such that for any $\epsilon >0$, $\mathbb K_{\epsilon}(X) \leq \mathbb K_{\epsilon/C}(Y)$. In particular, if $X$ is a general finite tuple and $X^{sa}$ is the tuple obtained by taking the real and imaginary portions of $X$, then $X$ is $\alpha$-bounded (as a finite tuple of general elements) iff $X^{sa}$ is $\alpha$-bounded (as a finite tuple of self-adjoint elements). Also, $\delta_0(X) = \delta_0(X^{sa}) = \delta_0^{sa}(X^{sa})$. \end{proposition} \begin{proof} By hypothesis there exist finite tuples of $*$-polynomials, $F$ and $G$, such that $F(X)=Y$ and $G(F(X))=X$. Given $R>0$, there exist $L_1, R_1 > 0$ depending on $R$ such that for any $k \in \mathbb N$ and $\xi, \eta \in ((M_k(\mathbb C))_R)^n$ \begin{enumerate}[(i)] \item $L_1 \cdot \|F(\xi)-F(\eta)\|_2 \geq \|G(F(\xi)) - G(F(\eta))\|_2$. \item $\|F(\xi)\|_{\infty} < R_1$. \end{enumerate} Because $G(F(X))=X$ there exist $m_0 \in \mathbb N$, $\gamma_0>0$ such that for any $\xi \in \Gamma_R(X;m_0,k,\gamma_0)$, $\|G(F(\xi)) - \xi\|_2 < \epsilon/4$.
Using condition (i) above, for any $\xi, \eta \in \Gamma_R(X;m_0,k,\gamma_0)$, \begin{eqnarray*} L_1 \cdot\|F(\xi) - F(\eta)\|_2 & \geq & \|G(F(\xi)) - G(F(\eta))\|_2 \\ & \geq & \|\xi - \eta\|_2 - \epsilon/2.\\ \end{eqnarray*} It follows that for any $m_1 > m_0$ and $0 < \gamma_1 < \gamma_0$ \begin{eqnarray*} S_{\epsilon}(\Gamma_R(X;m_1,k,\gamma_1)) & \leq & S_{\frac{\epsilon}{2L_1}}(F(\Gamma_R(X;m_1,k,\gamma_1))).\\ \end{eqnarray*} Given $m \in \mathbb N$ and $\gamma >0$, by condition (ii) there exist corresponding $m_1 > m_0$ and $0 < \gamma_1 < \gamma_0$ such that $F(\Gamma_R(X;m_1,k,\gamma_1)) \subset \Gamma_{R_1}(Y;m,k,\gamma)$. Combining this with the above and Proposition 2.1, \begin{eqnarray*} K_{\epsilon}(\Gamma_R(X;m_1,k,\gamma_1)) & \leq & S_{\epsilon}(\Gamma_R(X;m_1,k,\gamma_1)) \\ & \leq & S_{\frac{\epsilon}{2L_1}}(F(\Gamma_R(X;m_1,k,\gamma_1))) \\ & \leq & S_{\frac{\epsilon}{2L_1}}(\Gamma_{R_1}(Y;m,k,\gamma)) \\ & \leq & K_{\frac{\epsilon}{4L_1}}(\Gamma_{R_1}(Y;m,k,\gamma)).\\ \end{eqnarray*} This being true for any $m \in \mathbb N$, $\gamma >0$, it follows that with $L = 4L_1$, \begin{eqnarray*} \mathbb K_{\epsilon,R}(X) \leq \mathbb K_{\frac{\epsilon}{L}, R_1}(Y). \end{eqnarray*} The first claim is established. The second claim follows from the first claim and Proposition 2.3. For the third part, notice that the second claim implies that if $X$ and $Y$ generate the same $*$-algebra, then $X$ is $\alpha$-bounded iff $Y$ is $\alpha$-bounded. Thus, if $X$ is a general finite tuple and $X^{sa}$ denotes the tuple consisting of the real and imaginary parts of $X$, then $X$ is $\alpha$-bounded iff $X^{sa}$ is $\alpha$-bounded (as a general tuple). Proposition 2.3 shows that the $\epsilon$-covering numbers of a tuple of self-adjoints computed w.r.t. self-adjoint or general microstates coincide. So $X^{sa}$ is $\alpha$-bounded as a general tuple iff $X^{sa}$ is $\alpha$-bounded as a tuple of self-adjoint elements. This completes the third claim.
The fourth and final claim concerning $\delta_0$ is trivial from the first claim and Proposition 2.3. \end{proof} \subsection{Rogers's asymptotic bound} \cite{rogers} investigated $\epsilon$-covering estimates for the unit ball $B_d$ in $\mathbb R^d$ for large $d$. One might expect that $K_{\epsilon}(B_d)$ should be the ratio of $1$ over $\epsilon$ raised to the ambient dimension $d$, i.e., $K_{\epsilon}(B_d) \sim (\frac{1}{\epsilon})^d$. From simple volume comparison arguments one has a coarser estimate involving an additional exponential constant of $2$: \begin{eqnarray*} K_{\epsilon}(B_d) \leq \left(\frac{2}{\epsilon}\right)^d = \frac{2^d}{\epsilon^d}. \end{eqnarray*} \noindent While this estimate (or a better one with a numerator of $1+\epsilon$) often suffices to get appropriate dimension bounds in the microstate setting, I'll need a sharper estimate where the numerator $2^d$ is replaced with a term with polynomial growth. Rogers proved in \cite{rogers} that there exists a universal constant $C_r$ such that for $d \geq \max\{1/\epsilon, 9\}$, \begin{eqnarray*} K_{\epsilon}(B_d) \leq \frac{C_r \cdot d^{5/2}}{\epsilon^d}. \end{eqnarray*} \noindent By dilating, it follows that if $B_d(\alpha)$ denotes the ball of radius $\alpha$ in $\mathbb R^d$, then for $d \geq \max\{\frac{\alpha}{\epsilon}, 9\}$, \begin{eqnarray*} K_{\epsilon}(B_d(\alpha)) \leq C_r \cdot d^{5/2} \cdot \left (\frac{\alpha}{\epsilon}\right)^d. \end{eqnarray*} \begin{remark} Suppose $\Omega \subset \mathbb R^d$ and $s, t >0$. Observe that for $d > \max\{\frac{s+t}{s}, 9\}$ the result above implies \begin{eqnarray*} K_{s}(\Omega) & \leq & C_r \cdot d^{5/2} \cdot \left (\frac{s+t}{s}\right)^d \cdot K_{s+t}(\Omega). \end{eqnarray*} To see this pick an $(s+t)$-cover $\langle x_i \rangle_{i \in I}$ for $\Omega$ such that $\#I = K_{s+t}(\Omega)$.
From the discussion above, for each $i$ the ball $B(x_i, s+t)$ has an $s$-cover $\langle y_{(i,j)} \rangle_{j \in J}$ where $J$ is an indexing set such that \begin{eqnarray*} \#J & \leq & C_r \cdot d^{5/2} \cdot \left (\frac{s+t}{s}\right)^d.\\ \end{eqnarray*} Clearly $\langle y_{(i,j)} \rangle_{(i,j) \in I \times J}$ is an $s$-cover for $\Omega$ and it has cardinality no greater than \begin{eqnarray*} \#I \cdot \#J & \leq & C_r \cdot d^{5/2} \cdot \left (\frac{s+t}{s}\right)^d \cdot K_{s+t}(\Omega). \\ \end{eqnarray*} \end{remark} When Remark 2.6 is used for the $\epsilon$-coverings in the microstate setting, the polynomial term vanishes under the asymptotic logarithmic process and one recovers the kind of nested scaling property enjoyed by dyadic cubes. More specifically one has the following, which is interesting to compare with the corresponding properties proved in \cite{v4} for the free Fisher information of a semicircular perturbation: \begin{lemma} Suppose $X$ is an $n$-tuple of operators in a tracial von Neumann algebra $M$. Define $f:(0,\infty) \rightarrow [0,\infty)$ by $f(t) = \mathbb K_t(X)$. The following hold for $f$: \begin{itemize} \item $f$ is monotonically decreasing. \item For any $s,t \in (0,\infty)$, $f(s) \leq f(s+t) + 2n \log \left(\frac{s+t}{s} \right)$. \item $f$ is continuous. \end{itemize} The same results hold if $X$ consists of self-adjoint elements and $f(t) = \mathbb K^{sa}_t(X)$. \end{lemma} \begin{proof} For the first property, observe that for $s, t \in (0,\infty)$ such that $s<t$, and for any $m,k \in \mathbb N$ and $\gamma >0$, \begin{eqnarray*} K_s(\Gamma(X;m,k,\gamma)) \geq K_t(\Gamma(X;m,k,\gamma)). \end{eqnarray*} Passing this through the limiting process, one has $f(s) = \mathbb K_s(X) \geq \mathbb K_t(X) = f(t)$. For the second property, suppose $s,t \in (0,\infty)$ and again, $m,k \in \mathbb N$ and $\gamma >0$.
By Remark 2.6, \begin{eqnarray*} \mathbb K_s(X;m,\gamma) & = & \limsup_{k \rightarrow \infty} k^{-2} \cdot \log \left(K_s(\Gamma(X;m,k,\gamma)) \right) \\& \leq & \limsup_{k \rightarrow \infty} k^{-2} \cdot \log \left(\frac{C_r (2nk^2)^{5/2}(s+t)^{2nk^2}}{s^{2nk^2}} \cdot K_{s+t}(\Gamma(X;m,k,\gamma))\right) \\ & \leq & \mathbb K_{s+t}(X;m,\gamma) + 2n \log \left(\frac{s+t}{s} \right).\\ \end{eqnarray*} \noindent As this holds for any $m, \gamma$, $f(s) = \mathbb K_s(X) \leq \mathbb K_{s+t}(X) + 2n \log \left ( \frac{s+t}{s} \right) = f(s+t) + 2n \log \left(\frac{s+t}{s} \right)$. The third property follows from the first and second ones. The self-adjoint situation is completely analogous to the non-self-adjoint arguments presented above. \end{proof} \begin{remark} Rogers's result can be used to prove statements (ii)-(iv) of Proposition 2.3. I'll show this for Proposition 2.3(iii) and leave the other two to the reader, as they are completely analogous. The third statement says that for $R > \max_{x \in X} \|x\|$ with $X$ a finite tuple consisting of self-adjoint elements, and for any $\epsilon >0$, $\mathbb K_{\epsilon}(X) = \mathbb K^{sa}_{\epsilon}(X) = \mathbb K^{sa}_{\epsilon, R}(X)$. From Proposition 2.1(i) and the inclusions $\Gamma(X;m,k,\gamma) \supset \Gamma^{sa}(X;m,k,\gamma) \supset \Gamma_R^{sa}(X;m,k,\gamma)$, it follows that $\mathbb K_{\epsilon}(X) \geq \mathbb K^{sa}_{\epsilon}(X) \geq \mathbb K^{sa}_{\epsilon, R}(X)$. It remains then to prove the reverse inequalities of this chain. Fix $t >0$. For sufficiently large $m \in \mathbb N$ and small $\gamma >0$, if $\xi \in \Gamma(X;m,k,\gamma)$, then $\|\xi - (\xi+\xi^*)/2\|_2 < t$. Thus, $\Gamma(X;m,k,\gamma) \subset \mathcal N_t(\Gamma^{sa}(X;m,k,\gamma))$.
By Proposition 2.1(iv), \begin{eqnarray*} K_{\epsilon}(\Gamma(X;m,k,\gamma)) & \leq & K_{\epsilon}(\mathcal N_t(\Gamma^{sa}(X;m,k,\gamma))) \\ & \leq & K_{\epsilon - t}(\Gamma^{sa}(X;m,k,\gamma)).\\ \end{eqnarray*} Passing this through the limiting process, $\mathbb K_{\epsilon}(X) \leq \mathbb K_{\epsilon-t}^{sa}(X)$. The continuity property for the self-adjoint case in Lemma 2.7 shows that $\mathbb K_{\epsilon}(X) \leq \mathbb K_{\epsilon}^{sa}(X)$, which is the first of the two remaining inequalities. Turning to the inequality $\mathbb K^{sa}_{\epsilon}(X) \leq \mathbb K^{sa}_{\epsilon, R}(X)$, when $R > \max_{x \in X} \|x\|$, Lemma 2.1 of \cite{j3} shows that for a given $t >0$ and any $m_0 \in \mathbb N$ and $\gamma_0>0$, there exist an $m \in \mathbb N$ and $\gamma >0$ such that \begin{eqnarray*} \Gamma^{sa}(X;m,k,\gamma) & \subset & \mathcal N_t(\Gamma^{sa}_R(X;m_0, k, \gamma_0)).\\ \end{eqnarray*} By Proposition 2.1 (i) and (iv), and Remark 2.6, \begin{eqnarray*} K_{\epsilon}(\Gamma^{sa}(X;m,k,\gamma)) & \leq & K_{\epsilon}(\mathcal N_t(\Gamma^{sa}_R(X;m_0,k,\gamma_0))) \\ & \leq & K_{\epsilon-t}(\Gamma^{sa}_R(X;m_0,k,\gamma_0))\\ & \leq & C_r \cdot (2nk^2)^{5/2} \cdot \left (\frac{\epsilon}{\epsilon-t}\right)^{2nk^2} \cdot K_{\epsilon}(\Gamma^{sa}_R(X;m_0,k,\gamma_0)). \\ \end{eqnarray*} Applying $\limsup_{k\rightarrow \infty} k^{-2} \log$ on both sides, \begin{eqnarray*} \mathbb K^{sa}_{\epsilon}(X) & \leq & \mathbb K^{sa}_{\epsilon}(X;m,\gamma) \\ & \leq & \mathbb K^{sa}_{\epsilon, R}(X;m_0,\gamma_0) + 2n \cdot [\log(\epsilon) - \log(\epsilon-t)]. \end{eqnarray*} This is true for any $m_0 \in \mathbb N$, $\gamma_0 >0$. Thus, $\mathbb K^{sa}_{\epsilon}(X) \leq \mathbb K^{sa}_{\epsilon, R}(X)+ 2n \cdot [\log(\epsilon) - \log(\epsilon-t)]$. Since $t >0$ was arbitrary, it follows that $\mathbb K^{sa}_{\epsilon}(X) \leq \mathbb K^{sa}_{\epsilon, R}(X)$ as promised.
\end{remark} \subsection{Derivatives} I'll set forth here some basic notation for derivatives and recall a few fundamental facts. Details can be found in many places, e.g., \cite{lang}. It will be convenient to speak about derivatives in the general context of real Banach spaces. Suppose $A_1, \ldots, A_n, B_1,\ldots, B_p$ are real Banach spaces and $A = \oplus_{i=1}^n A_i$ and $B = \oplus_{j=1}^p B_j$ are the direct sum Banach spaces. If $U \subset A$ is open and $F:U \rightarrow B$, one can write $F(a) = (F_1(a), \ldots, F_p(a))$ where the $F_j: A \rightarrow B_j$. Assume $F$ is differentiable at $a$ with derivative denoted by $DF(a)$. Just as in multivariable calculus, $DF(a)$ can be canonically represented as a matrix: \begin{eqnarray*} DF(a) & = & \begin{bmatrix} \partial_1F_1(a) & \cdots & \partial_nF_1(a) \\ \vdots & & \vdots \\ \partial_1F_p(a) & \cdots & \partial_nF_p(a) \\ \end{bmatrix}\\ \end{eqnarray*} where the $\partial_iF_j(a): A_i \rightarrow B_j$ are defined exactly as in the Euclidean case. To be clear, the $i$th partial derivative of $F_j$ exists at $a$ if there exists a bounded (real) linear map $\partial_iF_j(a):A_i \rightarrow B_j$ such that \begin{eqnarray*} \lim_{h \rightarrow 0} \frac{\|F_j(a_1,\ldots, a_i+h, \ldots, a_n) - F_j(a) - (\partial_iF_j(a))(h)\|} {\|h\|} = 0. \end{eqnarray*} As in the Euclidean case, $F$ is smooth iff the partial derivatives are smooth. There is an integral version of the mean value theorem here as well, namely that if $a_1, a_2 \in U$ and the line segment joining $a_1$ and $a_2$ lies in $U$, then \begin{eqnarray*} F(a_2) - F(a_1) & = & \left[ \int_0^1 DF(a_1 + t(a_2 -a_1)) \, dt \right] (a_2-a_1). \end{eqnarray*} Note that if the $A_i$ and $B_j$ are finite dimensional, then all norms are equivalent, and differentiability in one norm is equivalent to differentiability in any other norm and the derivatives (as linear maps) are one and the same.
In particular, if $A_i = B_j = M_k(\mathbb C)$ for some fixed $k$, and $F$ is differentiable at $a$ w.r.t. the operator norm, then $F$ is differentiable at $a$ w.r.t. any Schatten norm, and in particular the real Hilbert space direct sum of $L^2$-norms (normalized or unnormalized) induced on $A$ and $B$. Thus, if $F$ is a $p$-tuple of (noncommutative) $*$-polynomials, then $F$ is differentiable w.r.t. the real Hilbert space direct sum norms induced on $A$ and $B$, with a derivative equal to the derivative computed w.r.t. the direct sum operator norms induced on $A$ and $B$. \subsection{Moment Convergence and Spectral Projections} Given a positive operator in a tracial von Neumann algebra and a microstate, I will need to know how the associated spectral projections are related. It will be convenient to state the results in a context slightly more general than that of microstates and towards this end I'll introduce some convenient notation. If $X=\{x_1,\ldots, x_n\}$ and $Y=\{y_1,\ldots, y_n\}$ are $n$-tuples in tracial von Neumann algebras $(M,\varphi)$ and $(N, \psi)$, respectively, and $R, \gamma >0$ and $m \in \mathbb N$, then I will write $X \approx^{R,m,\gamma} Y$ provided that the $x_i$ and $y_i$ have operator norms no greater than $R$ and that their $*$-moments are close by an order of $m$ and $\gamma$, i.e., for any $1 \leq p \leq m$ and $1 \leq i_1,\ldots, i_p \leq n$ and $j_1, \ldots, j_p \in \{1,*\}$, \begin{eqnarray*} |\varphi(x_{i_1}^{j_1} \cdots x_{i_p}^{j_p}) - \psi(y_{i_1}^{j_1} \cdots y_{i_p}^{j_p})| < \gamma. \end{eqnarray*} If $X=\{x\}$ and $Y=\{y\}$, I will simply write this as $x \approx^{R,m,\gamma} y$. Note that this notion makes sense when $X=\{x\}$, $Y=\{y\}$, and $x$ happens to be a real linear operator on a finite dimensional (real) vector space. In this case the $*$-moments are taken w.r.t. the real normalized trace on the operators acting on the finite dimensional real vector space and the adjoint is replaced with the transpose. 
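To illustrate the notation with a trivial example (recorded only for orientation, and with the normalization $R=1$): let $p \in M$ be a projection with $\varphi(p) = 1/2$ and let $q_k \in M_k(\mathbb C)$ be a diagonal projection with exactly $\lfloor k/2 \rfloor$ diagonal entries equal to $1$. Every $*$-moment of $p$ equals $1/2$, while every $*$-moment of $q_k$ equals $\lfloor k/2 \rfloor / k$, so for any $m \in \mathbb N$ and $\gamma >0$, \begin{eqnarray*} p \approx^{1,m,\gamma} q_k \end{eqnarray*} as soon as $k > 1/(2\gamma)$.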
In the following lemma $(M,\varphi)$ and $(N, \psi)$ denote tracial von Neumann algebras. \begin{lemma} Suppose $a \in M$ is positive with $\|a\| \leq R$. If $c, s >0$, then there exist $m \in \mathbb N$ and $\gamma >0$, dependent only on $c,s, R$, such that for any positive operator $b \in N$ satisfying $b \approx^{R,m,\gamma} a$, \begin{eqnarray*} \varphi(1_{[0,c)}(a)) -s & < & \psi(1_{[0,c)}(b)) \\ & \leq & \psi(1_{[0,c]}(b)) \\ & \leq & \varphi(1_{[0, c]}(a)) + s.\\ \end{eqnarray*} The same conclusion holds if $b$ is a positive semidefinite real linear operator on a finite dimensional, real vector space. \end{lemma} \begin{proof} There exists a monotonic decreasing sequence of uniformly bounded continuous functions which converges pointwise to $1_{[0,c]}$ on $[0,R]$ such that for any $f$ in the sequence, $\inf_{t \in [0,R]} (f(t) - 1_{[0,c]}(t)) > 0$. Similarly there exists a monotonic increasing sequence of uniformly bounded continuous functions converging pointwise to $1_{[0,c)}$ on $[0,R]$ such that for any function $f$ in the sequence, $\inf_{t \in [0,R]} (1_{[0,c)}(t) - f(t)) > 0$. By Stone-Weierstrass there exist uniformly bounded sequences of real polynomial functions $f_n$ and $g_n$ on $[0,R]$ such that $f_n \rightarrow 1_{[0,c]}$ and $g_n \rightarrow 1_{[0,c)}$ pointwise and $f_n(t) \geq 1_{[0,c]}(t)$ and $1_{[0,c)}(t) \geq g_n(t)$ for all $t \in [0,R]$. By Lebesgue's dominated convergence theorem and the Borel spectral theorem, $\lim_{n\rightarrow \infty} \varphi(f_n(a)) = \varphi(1_{[0,c]}(a))$ and $\lim_{n\rightarrow \infty} \varphi(g_n(a)) = \varphi(1_{[0,c)}(a))$. Fix an $N$ such that $\varphi(f_N(a)) \leq \varphi(1_{[0,c]}(a)) +s/2$ and $\varphi(1_{[0,c)}(a)) - s/2 < \varphi(g_N(a))$. Pick $m$ and $\gamma$ (dependent only on $f_N, g_N, s$) such that for any positive $b \in N$ satisfying $b\approx^{R, m, \gamma}a$, $|\psi(f_N(b)) - \varphi(f_N(a))| < s/2$ and $|\psi(g_N(b)) - \varphi(g_N(a))| < s/2$.
\begin{eqnarray*} \varphi(1_{[0,c)}(a)) -s & \leq & \varphi(g_N(a)) - s/2 \\ & \leq & \psi(g_N(b)) \\ & \leq & \psi(1_{[0,c)}(b)) \\ & \leq & \psi(1_{[0,c]}(b)) \\ & \leq & \psi(f_N(b)) \\ & \leq & \varphi(f_N(a)) + s/2 \\ & \leq & \varphi(1_{[0,c]}(a)) + s.\\ \end{eqnarray*} This completes the first claim. The second claim for $b$ a positive semidefinite real linear operator follows by the same argument or alternatively, by realizing $b$ as a real matrix embedded in the space of complex matrices and applying the result above. \end{proof} \subsection{Fuglede-Kadison-L{\"u}ck Determinant, Spectral Projections, Finiteness Properties, Rank, Nullity} Geometric decay is a condition on the traces of the spectral projections of a positive operator. It turns out to be equivalent to the condition that the Fuglede-Kadison-L{\"u}ck determinant is nonzero (Section 6, Lemma 6.2), also known as being of \textbf{determinant class} \cite{luckbook} (see the references therein, including \cite{burg}, as well as \cite{pdh}). Suppose $x$ is an element in the tracial von Neumann algebra $(M,\varphi)$. Denote by $\mu$ the spectral distribution of $|x|$ induced by $\varphi$ and by $E_t$ the spectral measure for $|x|$. To be clear, $E_t$ is the projection-valued measure obtained from extending the continuous functional calculus on $|x|$. Recall the Fuglede-Kadison-L{\"u}ck determinant of $x$, referred to as the `generalized Fuglede-Kadison determinant' in \cite{luckbook}. Borrowing the terminology and implied name in the exposition \cite{pdh}, this is the common quantity \begin{eqnarray*} \text{det}_{FKL}(x) & = & \exp \left ( \lim_{\epsilon \rightarrow 0^+} \int_{\epsilon}^{\infty} \log t \, d\varphi(E_t) \right) \\ & = & \exp \left (\int_{(0,\infty)} \log(\lambda) \, d\mu(\lambda) \right)\\ \end{eqnarray*} when the integral is finite, and $0$ otherwise. Notice that $\det_{FKL}(x) \in [0,\infty)$. $x$ is said to be of determinant class (\cite{luckbook}) when $\det_{FKL}(x) >0$.
I want to review here three properties of determinant class/geometric decay (expressed in terms of traces of spectral projections): monotonicity under operator ordering, invariance under row reduction operations, and upper-triangular formulas. Choosing to phrase it in terms of determinant class or spectral projections is a matter of taste/convenience. Both formulations are useful. The operator ordering result will be expressed in terms of traces of spectral projections. It relates the ordering of positive elements to their spectral distributions. \begin{WIFP} Suppose $0 \leq a \leq b$ are elements in a tracial von Neumann algebra $(M, \varphi)$. For any $t >0$, \begin{eqnarray*} \varphi(1_{[0,t]}(a)) & \geq & \varphi(1_{[0,t]}(b)). \end{eqnarray*} \end{WIFP} In the matrix case (and thus by routine approximation for operators embeddable into an ultraproduct of the hyperfinite $\mathrm{II}_1$-factor), the above inequality follows from Weyl's inequality. It holds in the general context of a tracial von Neumann algebra (e.g., Lemma 2.5 (iii) in \cite{fk}). The second property will show that when performing finitely many elementary row operations on derivatives one can retain control of spectral projections and rank. This is obvious in the finite dimensional case and in the tracial case it's just a matter of writing out the analogous notions and drawing the natural connections. Denote by $\pi:M \rightarrow B(L^2(M))$ the left regular representation of $M$ on $L^2(M)$. For any $m,n \in \mathbb N$ denote by $M_{m \times n}(M)$ the set of bounded, complex linear operators $T: \oplus_{j=1}^n L^2(M) \rightarrow \oplus_{k=1}^m L^2(M)$ such that the canonical matrix representation of $T$ is of the form \begin{eqnarray*} \begin{bmatrix} T_{11} & \cdots & T_{1n} \\ \vdots & & \vdots \\ T_{m1} & \cdots & T_{mn} \\ \end{bmatrix} \end{eqnarray*} with $T_{ij} \in \pi(M)$. $M_n(M)$ will be shorthand for $M_{n \times n}(M)$. Note that $T^*T \in M_n(M)$.
The \textbf{Nullity} and \textbf{Rank} of $T$ are $\text{Nullity}(T) = n \cdot (tr_n \otimes \varphi)(1_{\{0\}}(T^*T))$ and $\text{Rank}(T) = n \cdot (tr_n \otimes \varphi)(1_{(0,\infty)}(T^*T))$ where $tr_n$ is the normalized trace on the $n \times n$ complex matrices. Basic linear algebra facts carry over to the tracial von Neumann algebra context. For example, $\text{Rank}(T) + \text{Nullity}(T) = n$ (rank-nullity equation), $\text{Rank}(T) = m \cdot (tr_m \otimes \varphi)(1_{(0,\infty)}(TT^*))$ (rank of an operator equals the rank of its adjoint), and if $S \in M_n(M)$, then $\text{Rank}(TS) \leq \text{Rank}(T)$. Also, as $T^*T$ is an element in the tracial von Neumann algebra $M_n(M)$, $\det_{FKL}(T)$ is well-defined. One can do all of this in a more algebraic, bimodular setting as in \cite{luckbook}, but I'll use the above approach to maintain the analogy with the rank theorem. Given $T \in M_{m \times n}(M)$, one can perform elementary row operations such as multiplying a row by an invertible element of $M$, permuting rows, and adding an $M$-left-multiple of one row to another. As in linear algebra, each of these operations can be uniquely expressed by multiplying $T$ from the left by an invertible, ``elementary'' matrix $E \in M_{m \times m}(M)$. Two matrices $S, T \in M_{m \times n}(M)$ are said to be \textbf{$M$-row equivalent} if there exists a finite product $E$ of elementary matrices in $M_{m \times m}(M)$ such that $S = ET$. Note here that $E$ is invertible. The following slightly more general terminology will be convenient. If $S \in M_{p \times n}(M)$ and $T \in M_{m \times n}(M)$ with $p <m$, then $S$ and $T$ are $M$-row equivalent iff $S_0, T \in M_{m \times n}(M)$ are $M$-row equivalent where $S_0$ is the element of $M_{m \times n}(M)$ obtained by taking $S$ and appending, from below, $m-p$ rows of zeros of length $n$. Notice that with this terminology, $|S| = |S_0|$.
Here is the elementary lemma which I'll need: \begin{lemma} If $x,y,z \in M$, then for any $t >0$, $\varphi(1_{[0, t\|x\|\|z\|]}(|xyz|)) \geq \varphi(1_{[0,t]}(|y|))$. \end{lemma} \begin{proof} Notice first that for any $a \in M$, if $a = u|a|$ is the polar decomposition, then $a^* = |a|u^* = u^*(u|a|u^*)$ is the polar decomposition of $a^*$. Traciality of $\varphi$ then implies that the moments of $|a^*|$ equal the corresponding moments of $|a|$. Thus, $|a|$ and $|a^*|$ have the same spectral distribution. In particular for any $t >0$, $\varphi(1_{[0,t]}(|a|)) = \varphi(1_{[0,t]}(|a^*|))$. Secondly, observe that \begin{eqnarray*} |ab|^2 & = & b^*a^*ab \\ & \leq & \|a^*a\| \cdot b^*b \\ & = & \|a^*a\| \cdot |b|^2.\\ \end{eqnarray*} \noindent Taking square roots, $|ab| \leq \|a\| \cdot |b|$, and applying Weyl's Inequality for positive operators with the observation above shows \begin{eqnarray*} \varphi(1_{[0,t]}(|ab|)) & \geq & \varphi(1_{[0,t]}(\|a\| \cdot |b|)) \\ & = & \varphi(1_{[0,t\|a\|^{-1}]}(|b|)).\\ \end{eqnarray*} Now, to prove the inequality, the second observation, followed by the first observation, and then recycled once more, yields \begin{eqnarray*} \varphi(1_{[0,t]}(|xyz|)) & \geq & \varphi(1_{[0,t\|x\|^{-1}]}(|yz|)) \\ & = & \varphi(1_{[0,t\|x\|^{-1}]}(|(yz)^*|)) \\ & = & \varphi(1_{[0,t\|x\|^{-1}]}(|z^*y^*|)) \\ & \geq & \varphi(1_{[0,t\|x\|^{-1}\|z^*\|^{-1}]}(|y^*|)) \\ & = & \varphi(1_{[0,t\|x\|^{-1}\|z\|^{-1}]}(|y|)).\\ \end{eqnarray*} \noindent Rescaling $t$ finishes the proof. \end{proof} By Lemma 2.10, one has the following elementary observation: \begin{corollary} If $S, T \in M_{m \times n}(M)$ are $M$-row equivalent, then $\text{Rank}(S) = \text{Rank}(T)$ and $\text{Nullity}(S) = \text{Nullity}(T)$. Moreover, there exists an $r>0$, depending only on the matrix implementing the row equivalence of $S$ and $T$, such that for any $t>0$, $\varphi(1_{[0,rt]}(|S|)) \geq \varphi(1_{[0,t]}(|T|))$.
\end{corollary} Alternatively and equivalently, one can use basic properties of $\det_{FKL}$ to show in the above that $S$ is of determinant class iff $T$ is of determinant class. The third and final property concerning upper triangularity is succinctly phrased in terms of determinants. The following is a property that one would expect, given that $\det_{FKL}$ is a natural extension/analogue of the usual determinant. The proof can essentially be found in Theorem 3.14 (2) of \cite{luckbook}: \begin{proposition} Suppose $S \in M_{m \times n}(M)$ has entries $x_{ij}$ and is upper triangular, i.e., $x_{ij} =0$ for all $1 \leq j < i \leq m$. If the $x_{ii}$ are injective for $1 \leq i \leq p$ and $x_{ij}=0$ for $p < i \leq m$, then $\text{Rank}(S)= p$ and \begin{eqnarray*} \text{det}_{FKL}(S) = \Pi_{i=1}^{p} \text{det}_{FKL}(x_{ii}). \end{eqnarray*} \end{proposition} \section{Noncommutative $*$-polynomials, derivatives, rank and nullity} As discussed in the introduction, in order to understand the geometry of tracial von Neumann algebra level sets I want to use the rank/nullity of the derivative as a bound on its microstates dimension. On the von Neumann algebra level these quantities are expressed as traces of spectral projections of a certain derivation. However, this operator algebraic context is predominantly a complex-valued environment, whereas microstates dimension, at least expressed in a manifold fashion, coincides with a real-valued notion of dimension. This section deals with formalizing an algebraic framework which allows passage from the real to the complex settings. There are three parts to this section. The first deals with embedding the bounded real linear operators on $L^2(M)$ into the $2 \times 2$ matrices of bounded complex linear operators on $L^2(M)$; an analogous result on the real linear operators on the free complex $*$-algebra on $n$ unitaries is also discussed.
The second applies this to a generalized derivation definition to arrive at an appropriate notion of rank and nullity, and proves some technical results on microstate approximation with rank and nullity. The last subsection applies this to the case where the domain spaces are the self-adjoint or unitary elements. \subsection{$2 \times 2$ Real Representations} Fix a tracial von Neumann algebra $(M, \varphi)$. The trace implements a real inner product on $M$ given by $\langle x,y \rangle_r = \text{Re } \varphi(y^*x)$ as well as the usual complex inner product on $M$ given by $\langle x, y \rangle = \varphi(y^*x)$. Denote by $M_1$ and $M_2$ the real subspaces of self-adjoint and skew-adjoint elements of $M$ and by $H_j$, $j=1,2$ the closures of $M_j$ w.r.t. this real inner product norm ($\subset L^2(M)$). Note that if $\xi, \eta \in H_j$, then $\langle \xi, \eta \rangle_r = \langle \xi, \eta \rangle$. There are natural real projections $e_j: L^2(M)\rightarrow H_j$ given by $e_1 = (I+J)/2$ and $e_2 = (I-J)/2$ where $J$ is the extension of the conjugation map to all of $L^2(M)$. Denote by $B_{\mathbb R}(L^2(M))$ the set of all real linear operators on $L^2(M)$ which are bounded w.r.t. the real norm generated by the real inner product. Lastly, define $\rho$ to be the bijection on $\{1,2\}$ given by $\rho(1)=2$ and $\rho(2)=1$. \begin{lemma} If $x: H_j \rightarrow H_k$ is a bounded, real linear map, then there exists a unique bounded, complex linear map $\tilde{x}:L^2(M) \rightarrow L^2(M)$ which extends $x$. Moreover, $e_k \tilde{x} e_j = \tilde{x} e_j$, $e_j \tilde{x}^* e_k = \tilde{x}^* e_k$, $e_{\rho(k)} \tilde{x} e_{\rho(j)} = \tilde{x} e_{\rho(j)}$, and $e_{\rho(j)} \tilde{x}^* e_{\rho(k)} = \tilde{x}^* e_{\rho(k)}$. \end{lemma} \begin{proof} Uniqueness is obvious since the complex span of $H_1$ or $H_2$ is $L^2(M)$. It remains to establish existence and verify that the complex linear extension satisfies the relations.
By multiplying the domain and range by $i$ when necessary, this reduces to establishing existence and the relations for such an extension when $j=k=1$. In this case $x:H_1 \rightarrow H_1$ is real linear. Define $\tilde{x}: L^2(M) \rightarrow L^2(M)$ by $\tilde{x}(\xi + i \eta) = x(\xi) + i x(\eta)$ for $\xi, \eta \in H_1$. It is easy to check that $\tilde{x}$ is complex linear. Clearly $e_1 \tilde{x} e_1 = \tilde{x} e_1$ and $e_2 \tilde{x} e_2 = \tilde{x} e_2$. It remains to check the relations for $\tilde{x}^*$. Consider the complex linear map $y: L^2(M) \rightarrow L^2(M)$ defined by $y(\xi + i \eta) = x^*\xi + i x^*\eta$ where here, $x^*$ is the real adjoint of $x$. For any $\xi, \eta \in H_1$, \begin{eqnarray*} \langle \tilde{x} \xi, \eta \rangle & = & \langle x \xi, \eta \rangle \\ & = & \text{Re} \langle x \xi, \eta \rangle \\ & = & \text{Re} \langle \xi, x^*\eta \rangle \\ & = & \langle \xi, y \eta \rangle \\ \end{eqnarray*} Using the complex linearity of $\tilde{x}$ and $y$, it follows that the above holds for any $\xi, \eta \in L^2(M)$, i.e., $\tilde{x}^* = y$. But now it is clear from the definition of $y$, that $e_1 y e_1 = y e_1$ and $e_2 y e_2 = ye_2$ and thus $\tilde{x}^*$ satisfies the same relations. \end{proof} For $x \in B_{\mathbb R}(L^2(M))$ the matrix decomposition of $x$ w.r.t. $L^2(M) = H_1 \oplus H_2$ is of the form \begin{eqnarray*} x = \begin{bmatrix} x_{11} & x_{12} \\ x_{21} & x_{22} \\ \end{bmatrix} \\ \end{eqnarray*} where $x_{jk}: H_k\rightarrow H_j$ are bounded, real linear operators. By Lemma 3.1, for each $1\leq j,k \leq 2$ there exists a unique complex linear operator $\tilde{x}_{jk}: L^2(M) \rightarrow L^2(M)$ extending $x_{jk}$. Moreover, these extensions and their adjoints automatically satisfy the relations with the $e_j$ described in Lemma 3.1. 
Define $\Phi: B_{\mathbb R}(L^2(M)) \rightarrow M_2(B(L^2(M)))$ by \begin{eqnarray*} \Phi(x) = \begin{bmatrix} \tilde{x}_{11} & \tilde{x}_{12} \\ \tilde{x}_{21} & \tilde{x}_{22} \\ \end{bmatrix}. \\ \end{eqnarray*} \begin{proposition} $\Phi: B_{\mathbb R}(L^2(M)) \rightarrow M_2(B(L^2(M)))$ is a real linear, $*$-preserving, multiplicative map which sends the identity to the identity. \end{proposition} \begin{proof} Suppose $x, y \in B_{\mathbb R}(L^2(M))$ with matrix decompositions $\langle x_{ij} \rangle_{1\leq i,j \leq 2}, \langle y_{ij} \rangle_{1 \leq i,j \leq 2}$ w.r.t. $H_1 \oplus H_2$, and suppose $r \in \mathbb R$. The matrix decomposition of $rx+y$ w.r.t. $H_1 \oplus H_2$ is clearly $\langle rx_{ij} + y_{ij} \rangle_{1 \leq i,j \leq 2}$ and $r \tilde{x}_{ij} + \tilde{y}_{ij}$ is the unique complex linear extension of $r x_{ij} + y_{ij}$. By definition then, $\Phi(rx+y) = r \Phi(x) + \Phi(y)$. To show that $\Phi$ preserves the $*$-operation, using the identities which $\tilde{x}_{ij}$ and $\tilde{x}_{ij}^*$ must satisfy by Lemma 3.1, $x^*$ has a matrix decomposition of the form \begin{eqnarray*} \begin{bmatrix} x_{11} & x_{12} \\ x_{21} & x_{22} \end{bmatrix}^* & = & \begin{bmatrix} e_1 \tilde{x}_{11} e_1 & e_1 \tilde{x}_{12} e_2 \\ e_2 \tilde{x}_{21} e_1 & e_2 \tilde{x}_{22} e_2 \\ \end{bmatrix}^* \\ \\ & = & \begin{bmatrix} e_1 \tilde{x}_{11}^* e_1 & e_1 \tilde{x}_{21}^* e_2 \\ e_2 \tilde{x}_{12}^* e_1 & e_2 \tilde{x}_{22}^* e_2 \end{bmatrix} \\ \\ & = & \begin{bmatrix} \tilde{x}_{11}^* e_1 & \tilde{x}_{21}^* e_2 \\ \tilde{x}_{12}^* e_1 & \tilde{x}_{22}^* e_2 \end{bmatrix}.
\\ \end{eqnarray*} By the uniqueness of the complex linear extension (Lemma 3.1), \begin{eqnarray*} \Phi(x^*) & = & \begin{bmatrix} \tilde{x}_{11}^* & \tilde{x}_{21}^* \\ \tilde{x}_{12}^* & \tilde{x}_{22}^* \end{bmatrix} \\ & = & \Phi(x)^*.\\ \end{eqnarray*} For multiplicativity, the matrix decomposition of $xy$ is the product of the two matrix representations of $x$ and $y$: \begin{eqnarray*} \begin{bmatrix} x_{11} & x_{12} \\ x_{21} & x_{22} \end{bmatrix} \begin{bmatrix} y_{11} & y_{12} \\ y_{21} & y_{22} \end{bmatrix} & = & \begin{bmatrix} \tilde{x}_{11} e_1 & \tilde{x}_{12} e_2 \\ \tilde{x}_{21} e_1 & \tilde{x}_{22} e_2 \end{bmatrix} \begin{bmatrix} e_1 \tilde{y}_{11} e_1 & e_1 \tilde{y}_{12} e_2 \\ e_2 \tilde{y}_{21} e_1 & e_2 \tilde{y}_{22} e_2 \end{bmatrix} \\ & = & \begin{bmatrix} \tilde{x}_{11} e_1 \tilde{y}_{11} e_1 + \tilde{x}_{12}e_2 \tilde{y}_{21} e_1 & \tilde{x}_{11}e_1 \tilde{y}_{12} e_2 + \tilde{x}_{12} e_2 \tilde{y}_{22} e_2 \\ \tilde{x}_{21} e_1 \tilde{y}_{11} e_1 + \tilde{x}_{22}e_2\tilde{y}_{21}e_1 & \tilde{x}_{21}e_1\tilde{y}_{12}e_2+ \tilde{x}_{22} e_2 \tilde{y}_{22} e_2 \\ \end{bmatrix} \\ & = & \begin{bmatrix} (\tilde{x}_{11} \tilde{y}_{11} + \tilde{x}_{12}\tilde{y}_{21}) e_1 & (\tilde{x}_{11}\tilde{y}_{12} + \tilde{x}_{12} \tilde{y}_{22}) e_2 \\ (\tilde{x}_{21} \tilde{y}_{11} + \tilde{x}_{22}\tilde{y}_{21})e_1 & (\tilde{x}_{21}\tilde{y}_{12}+ \tilde{x}_{22} \tilde{y}_{22}) e_2 \\ \end{bmatrix}.
\\ \end{eqnarray*} The parenthetical terms of the last matrix are complex linear maps and thus by the uniqueness of the complex linear extension, \begin{eqnarray*} \Phi(xy) & = & \begin{bmatrix} (\tilde{x}_{11} \tilde{y}_{11} + \tilde{x}_{12}\tilde{y}_{21}) & (\tilde{x}_{11}\tilde{y}_{12} + \tilde{x}_{12} \tilde{y}_{22}) \\ (\tilde{x}_{21} \tilde{y}_{11} + \tilde{x}_{22}\tilde{y}_{21}) & (\tilde{x}_{21}\tilde{y}_{12}+ \tilde{x}_{22} \tilde{y}_{22}) \\ \end{bmatrix} \\ & = & \begin{bmatrix} \tilde{x}_{11} & \tilde{x}_{12} \\ \tilde{x}_{21} & \tilde{x}_{22} \end{bmatrix} \begin{bmatrix} \tilde{y}_{11} & \tilde{y}_{12} \\ \tilde{y}_{21} & \tilde{y}_{22} \end{bmatrix} \\ & =& \Phi(x) \Phi(y).\\ \end{eqnarray*} The claim concerning the identity is trivial. \end{proof} \begin{example} Suppose $(M,\varphi)$ is a tracial von Neumann algebra and consider its left and right actions, $\pi_l$ and $\pi_r$, on $L^2(M)$. For $x \in M$, $\pi_l(x) \in B(L^2(M)) \subset B_{\mathbb R}(L^2(M))$. What is $\Phi(\pi_l(x))$? Restricting $\pi_l(x)$ to the real linear subspaces $M^{sa}$ and $M^{sk}$, it is straightforward to show that \begin{eqnarray*} \Phi(\pi_l(x)) & = & \frac{1}{2} \cdot\begin{bmatrix} \pi_l(x) + \pi_r(x^*) & \pi_l(x) - \pi_r(x^*) \\ \pi_l(x) - \pi_r(x^*) & \pi_l(x) + \pi_r(x^*) \end{bmatrix} \\ & \in & M_2(B(L^2(M))).\\ \end{eqnarray*} In particular, when $M=\mathbb C$ and $x = \lambda = a + ib \in \mathbb C$ with $a, b\in \mathbb R$, then the above equation becomes \begin{eqnarray*} \Phi(\pi_l(x)) & = & \begin{bmatrix} a & ib \\ ib & a \end{bmatrix} \\ & \in & M_2(\mathbb C).\\ \end{eqnarray*} Compare this to the canonical real matrix representation of $x=\lambda$ \emph{relative to the real basis} $\{1, i\} \subset \mathbb C$: \begin{eqnarray*} \begin{bmatrix} a & -b \\ b & a \\ \end{bmatrix}. \\ \end{eqnarray*} \end{example} Proposition 3.2 can be applied to $M = M_k(\mathbb C)$ in which case one can also account for the action of the traces.
Denote by $Tr_{B_{\mathbb R}(M_k(\mathbb C))}$ the unnormalized real trace on $B_{\mathbb R}(M_k(\mathbb C))$ and by $Tr_{M_2(B(M_k(\mathbb C)))}$ the unnormalized, complex trace on $M_2(B(M_k(\mathbb C)))$, i.e., the trace on the $2\times 2$ matrices with entries regarded as complex linear operators on the complex vector space $M_k(\mathbb C)$. It is transparent from the definition of $\Phi$ that $Tr_{B_{\mathbb R}(M_k(\mathbb C))} = Tr_{M_2(B(M_k(\mathbb C)))} \circ \Phi$. Hence the following holds: \begin{lemma} $\Phi : (B_{\mathbb R}(M_k(\mathbb C)), Tr_{B_{\mathbb R}(M_k(\mathbb C))}) \rightarrow (M_2(B(M_k(\mathbb C))), Tr_{M_2(B(M_k(\mathbb C)))})$ is a real linear, multiplicative, $*$-preserving, injection which preserves the unnormalized traces. \end{lemma} An analogous derivation can be performed for the universal complex, unital $*$-algebra $B_n$ on $n$ unitary generators. Denote by $\mathcal A_1$ and $\mathcal A_2$ the real subspaces of $B_n$ consisting of the self-adjoint and skew-adjoint elements of $B_n$. Notice that $\mathcal A_2 = i \mathcal A_1$. Define $e_1: B_n \rightarrow \mathcal A_1$ by $e_1(x) = (x+x^*)/2$ and $e_2 = I-e_1$. $e_1$ and $e_2$ are the real idempotent projection maps onto $\mathcal A_1$ and $\mathcal A_2$, respectively. Arguing as in Lemma 3.1 yields: \begin{lemma} If $x: \mathcal A_i \rightarrow \mathcal A_j$ is real linear, then there exists a unique complex linear map $\tilde{x}: B_n \rightarrow B_n$ which extends $x$. Moreover, $e_j \tilde{x} e_i = \tilde{x} e_i$, $e_{\rho(j)} \tilde{x} e_{\rho(i)} = \tilde{x} e_{\rho(i)}$. \end{lemma} Denote by $L_{\mathbb R}(B_n)$, $L_{\mathbb C}(B_n)$ the space of real and complex linear operators on $B_n$. Exactly as before, for $x \in L_{\mathbb R}(B_n)$ the matrix decomposition of $x$ w.r.t.
$B_n = \mathcal A_1 \oplus \mathcal A_2$ (here $\oplus$ denotes the algebraic direct sum) is of the form \begin{eqnarray*} x = \begin{bmatrix} x_{11} & x_{12} \\ x_{21} & x_{22} \\ \end{bmatrix} \\ \end{eqnarray*} where $x_{ij}: \mathcal A_j \rightarrow \mathcal A_i$ are real linear operators. By Lemma 3.4, for each $1\leq i,j \leq 2$ there exists a unique complex linear operator $\tilde{x}_{ij}: B_n \rightarrow B_n$ extending $x_{ij}$. Moreover, these extensions automatically satisfy the relations $e_i \tilde{x}_{ij} e_j=\tilde{x}_{ij} e_j$ and $e_{\rho(i)} \tilde{x}_{ij} e_{\rho(j)}=\tilde{x}_{ij} e_{\rho(j)}$. Define again $\Phi: L_{\mathbb R}(B_n) \rightarrow M_2(L_{\mathbb C}(B_n))$ by \begin{eqnarray*} \Phi(x) = \begin{bmatrix} \tilde{x}_{11} & \tilde{x}_{12} \\ \tilde{x}_{21} & \tilde{x}_{22} \\ \end{bmatrix}. \\ \end{eqnarray*} As before, $\Phi$ is a real linear, multiplicative map which sends the identity to the identity. Finally, note that if $\odot$ denotes the algebraic tensor product, then there exists a complex linear homomorphism $\pi: B_n \odot B_n \rightarrow L_{\mathbb C}(B_n)$ uniquely determined by $\pi(a \odot b) = L_{a,b}$ where $L_{a,b} : B_n \rightarrow B_n$ is defined by $L_{a,b}(x) = axb$. Moreover, using the universality of $B_n$, $\pi$ is injective for $n >1$. This can be seen by identifying $B_n$ with the complex $*$-algebra generated by the $n$ canonical unitaries in $L(\mathbb F_n)$. From here, it is enough to show that for any finite list of $2$-tuples of elements in $\mathbb F_n$, $(a_i,b_i)$ with $a_i \neq e$, there exists a single group element $g \in \mathbb F_n$ such that for all $i$, $ga_ig^{-1} \neq b_i$. Alternatively, using factoriality of $L(\mathbb F_n)$ one can invoke Corollary 4.20 of \cite{takesaki}.
In any case, putting all this together with the fact that the algebraic tensor product of injective maps is injective, one has the following result which will be useful for the unitary calculus: \begin{lemma} If $n >1$, then $\pi \odot I_2: M_2(B_n \odot B_n^{op}) \rightarrow M_2(L_{\mathbb C}(B_n))$ is an injective $*$-homomorphism. \end{lemma} \subsection{Derivatives, Derivations, Rank, and Nullity} Throughout this section $\odot$ will designate the algebraic tensor product and $\otimes$ will denote the von Neumann algebra tensor product. If $(A, \varphi)$, $(B, \psi)$ are tracial complex $*$-algebras, then there exists a trace on $A \odot B$ given by $(\varphi \odot \psi)(a\odot b) = \varphi(a) \cdot \psi(b)$. Similarly if $(A, \varphi)$ and $(B, \psi)$ are tracial von Neumann algebras, then there exists a canonical tracial state on $A \otimes B$ given by $(\varphi \otimes \psi)(a \otimes b) = \varphi(a) \cdot \psi(b)$. In this case $A \odot B$ canonically embeds into $A \otimes B$ by a trace-preserving, injective, unital $*$-homomorphism $\iota$ (this is a bloated way of saying that the minimal norm is in fact a norm and not just a seminorm). The following discussion will motivate the definition of $D^sF(X)$. This notion is defined in terms of derivations and will be convenient for describing operator calculus. It will be important to connect this to the ordinary derivative which will be used in the microstate setting. One can succinctly express their connection in terms of a commutative diagram (Remark 3.8) which will be fleshed out in the discussion below. Suppose $\mathfrak{A}_n$ is the universal, unital complex $*$-algebra on $n$ generators $\mathbb X= \{X_1, \ldots, X_n\}$. $\mathfrak{A}_n^{op}$ is the opposite algebra associated to $\mathfrak{A}_n$. For each $1 \leq j \leq n$ denote by $A_j$ the unital $*$-algebra generated by $\mathbb X - \{X_j\}$.
If $\xi =\{\xi_1,\ldots, \xi_n\}$ is an $n$-tuple in a tracial von Neumann algebra $M$, then there exist canonical $*$-homomorphisms $\pi_{\xi}: \mathfrak{A}_n \rightarrow M$, $\pi_{\xi}^{op}: \mathfrak{A}_n \rightarrow M^{op}$, such that $\pi_{\xi}(X_i) = \xi_i \in M$ and $\pi^{op}_{\xi}(X_i) = \xi_i^{op} \in M^{op}$, $1\leq i \leq n$. Denote by $\sigma_M : M \odot M^{op} \rightarrow B(L^2(M))$ the unique complex $*$-homomorphism such that for any $a, b \in M$ and $\xi \in M \subset L^2(M)$, $\sigma_M(a \odot b^{op})(\xi) = a \xi b$. Define $\pi_M(\xi): \mathfrak{A}_n\odot \mathfrak{A}_n^{op} \rightarrow B(L^2(M))$ by $\pi_M(\xi) = \sigma_M \circ (\pi_{\xi} \odot \pi^{op}_{\xi})$. Fix $1 \leq j \leq n$, $f \in \mathfrak{A}_n$, and set $A = A_j$, $x= X_j$. $f$ induces a well-defined function from $\oplus_{i=1}^n M$ into $M$, which will be denoted again by $f$. Regarding $\oplus_{i=1}^n M$ and $M$ as real Banach spaces with the operator norm, $f$ is differentiable (the operator norm is necessary, since the $L^2$-norm is not submultiplicative). In particular, writing $f = \sum \lambda_{i_1,\ldots, i_{p+1}} a_{i_1} x^{q_{i_1}} \cdots a_{i_p} x^{q_{i_p}} a_{i_{p+1}}$ where $q_{i_j} \in \{1, *\}$ and $a_{i_j} \in A$, the partial derivative of $f$ with respect to the $j$th variable at $\xi = (\xi_1,\ldots, \xi_n) \in \oplus_{i=1}^n M$ is the bounded, real linear map $\partial_jf(\xi)$ on $M$ given by \begin{eqnarray*} & & \sum \lambda_{i_1,\ldots, i_{p+1}} \left [ \pi_M(\xi)(a_{i_1} \odot a_{i_2} \cdots a_{i_p} x^{q_{i_p}} a_{i_{p+1}}) J^{\delta_{q_{i_1}, *}} + \cdots + \pi_M(\xi)(a_{i_1} x^{q_{i_1}} a_{i_2} \cdots a_{i_p} \odot a_{i_{p+1}}) J^{\delta_{q_{i_p}, *}} \right] \\ \end{eqnarray*} \noindent where $J$ is the (real linear) extension to $L^2(M)$ of the conjugation on $M$. This is effectively the free partial derivative except that the appearance of a conjugated element requires an added $J$ term.
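For example, if $f = x^*x$, then for $h \in M$ the derivative of $\xi \mapsto \xi_j^*\xi_j$ in the $j$th coordinate is $h \mapsto h^*\xi_j + \xi_j^* h$, and the formula above reads \begin{eqnarray*} \partial_jf(\xi) = \pi_M(\xi)(I \odot x) J + \pi_M(\xi)(x^* \odot I): \end{eqnarray*} the occurrence of $x$ contributes a complex linear term, while the occurrence of $x^*$ contributes a term twisted by $J$.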
As in the previous subsection set $M_1 = M^{sa}$, $M_2 =i M^{sa}$, $H_i = L^2(M_i)$. The partial derivative operator $\partial_jf(\xi)$ above is clearly bounded w.r.t. the $L^2$-norm and one can regard $\partial_jf(\xi)$ as an element of $B_{\mathbb R}(L^2(M))$. Applying $\Phi$ of the preceding section to $\partial_jf(\xi)$ so regarded, one arrives at a $2 \times 2$ matrix representation of $\partial_jf(\xi)$ realized as an element of $M_2(\sigma_M(M \odot M^{op})) \subset M_2(B(L^2(M)))$. Recall that this is obtained by breaking the action of $\partial_jf(\xi)$ up into actions on the $H_i$, and then extending these restricted real linear operators into complex linear operators. Alternatively, the resultant matrix representation of $\partial_jf(\xi)$ can be expressed in terms of derivations. To see this note that $\mathbb C[x,x^*]$, the unital complex $*$-algebra generated by $x$, is the universal complex $*$-algebra on a single generator and $A=A_j$ is free from $\mathbb C[x,x^*]$. Endowing $A[x,x^*] \odot A[x,x^*]^{op}$ with the canonical $A[x,x^*]$-$A[x,x^*]^{op}$ bimodule structure, define the complex linear derivations $\partial_{sa}, \partial_{sk} : A[x,x^*] \rightarrow A[x,x^*] \odot A[x,x^*]^{op}$ by the relations $\partial_{sa}(A) = \partial_{sk}(A) = 0$, $\partial_{sa}(x) = \partial_{sa}(x^*) = \partial_{sk}(x) = I \odot I$ and $\partial_{sk}(x^*) = -I \odot I$. The $\Phi$-matrix of the operator $\partial_jf(\xi)$ with respect to the real inner product decomposition $L^2(M) = H_1 \oplus H_2$ can be described with $\partial_{sa}$ and $\partial_{sk}$.
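Concretely, if $a, b \in A$, then the defining relations and the Leibniz rule give \begin{eqnarray*} \partial_{sa}(axb) = \partial_{sk}(axb) = \partial_{sa}(ax^*b) = a \odot b, \qquad \partial_{sk}(ax^*b) = -a \odot b, \end{eqnarray*} and, for instance, $\partial_{sa}(x^*x) = I \odot x + x^* \odot I$ while $\partial_{sk}(x^*x) = -I \odot x + x^* \odot I$.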
Consider \begin{eqnarray*} \begin{bmatrix} e_1 \pi_M(\xi)(\partial_{sa} f) e_1 & e_1\pi_M(\xi)(\partial_{sk} f) e_2 \\ e_2 \pi_M(\xi)(\partial_{sa} f) e_1 & e_2\pi_M(\xi)(\partial_{sk} f) e_2 \\ \end{bmatrix} = \begin{bmatrix} \pi_M(\xi)(\partial_{sa} f_1) e_1 & \pi_M(\xi)(\partial_{sk} f_1) e_2 \\ \pi_M(\xi)(\partial_{sa} f_2) e_1 & \pi_M(\xi)(\partial_{sk} f_2) e_2 \\ \end{bmatrix}\\ \end{eqnarray*} \noindent where $f_1 = (f+f^*)/2$, $f_2 = (f-f^*)/2$, $e_1 = (I + J)/2$, and $e_2 = (I-J)/2$. This is the matrix representation of $\partial_j f(\xi)$ w.r.t. $H_1 \oplus H_2$. $\Phi(\partial_jf(\xi))$ is then \begin{eqnarray*} \begin{bmatrix} \pi_M(\xi)(\partial_{sa} f_1) & \pi_M(\xi)(\partial_{sk} f_1) \\ \pi_M(\xi)(\partial_{sa} f_2) & \pi_M(\xi)(\partial_{sk} f_2) \\ \end{bmatrix} & \in & M_2(B(L^2(M))).\\ \end{eqnarray*} From the standpoint of $*$-moments, however, these expressions are problematic. This is because they all involve $\pi_M(\xi)$, which is defined via $\sigma_M$, and $\sigma_M$ in turn isn't always an embedding. However, omitting the application of $\sigma_M$ in the definition $\pi_M(\xi) = \sigma_M \circ (\pi_{\xi} \odot \pi_{\xi}^{op})$ yields \begin{eqnarray*} \begin{bmatrix} (\pi_{\xi} \odot \pi^{op}_{\xi})(\partial_{sa} f_1) &(\pi_{\xi} \odot \pi^{op}_{\xi})(\partial_{sk} f_1) \\ (\pi_{\xi} \odot \pi^{op}_{\xi})(\partial_{sa} f_2) & (\pi_{\xi} \odot \pi^{op}_{\xi})(\partial_{sk} f_2) \\ \end{bmatrix} & \in & M_2(M \odot M^{op}) .\\ \end{eqnarray*} This matrix has $*$-moments which can be described entirely in terms of the moments of $\xi$ and (after applying the dummy embedding $\iota$ which will replace the algebraic tensor product with the von Neumann algebra tensor product) its entries will lie in the tracial von Neumann algebra $M \otimes M^{op}$.
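For example, if $f = x$, then $f_1 = (x+x^*)/2$ and $f_2 = (x-x^*)/2$, so that $\partial_{sa}f_1 = \partial_{sk}f_2 = I \odot I$ and $\partial_{sk}f_1 = \partial_{sa}f_2 = 0$. The last displayed matrix becomes \begin{eqnarray*} \begin{bmatrix} I \odot I^{op} & 0 \\ 0 & I \odot I^{op} \\ \end{bmatrix} & \in & M_2(M \odot M^{op}), \end{eqnarray*} i.e., the identity, as it must be since $\partial_jf(\xi)$ is the identity operator on $L^2(M)$.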
Notice that in the special case of $M = M_k(\mathbb C)$ all these matricial expressions and their images under $\Phi$ are elements in tracial von Neumann algebras and the expressions have the same $*$-moments. Indeed in this case $\Phi$ is a trace-preserving, real $*$-embedding by Lemma 3.3, $\sigma_M$ is a trace-preserving $*$-isomorphism, and $M \odot M^{op} \simeq M \otimes M^{op} \simeq B(L^2(M))$. This discussion motivates the following definitions and remark. \begin{definition} Suppose that $\mathfrak{A}$ is a unital, complex $*$-algebra and $x \in \mathfrak{A}$ is such that the unital $*$-algebra generated by $x$, $\mathbb C[x,x^*]$, is isomorphic to the universal complex $*$-algebra on a single generator. Assume moreover that $A \subset \mathfrak{A}$ is a unital $*$-algebra, that $A$ and $\mathbb C[x,x^*]$ are algebraically free, and denote by $A[x,x^*]$ the unital $*$-subalgebra they generate. The maps $\partial_{sa}, \partial_{sk} : A[x,x^*] \rightarrow A[x,x^*] \odot A[x,x^*]^{op}$ are the unique derivations defined by the relations $\partial_{sa}(A) = \partial_{sk}(A) = 0$, $\partial_{sa}(x) = \partial_{sa}(x^*) = \partial_{sk}(x) = I \odot I^{op}$ and $\partial_{sk}(x^*) = -I \odot I^{op}$, where $A[x,x^*] \odot A[x,x^*]^{op}$ is given the natural $A[x,x^*]-A[x,x^*]^{op}$ bimodule structure. For $f \in A[x,x^*]$ write $f = f_1 + f_2$ where $f_1 = (f + f^*)/2$ and $f_2 = (f-f^*)/2$. Define \begin{eqnarray*} D_{(x,x^*)}f = \begin{bmatrix} \partial_{sa} f_1 & \partial_{sk} f_1 \\ \partial_{sa} f_2 & \partial_{sk} f_2 \\ \end{bmatrix} \in M_2( A[x,x^*] \odot A[x,x^*]^{op}).\\ \end{eqnarray*} \end{definition} \begin{definition} Fix an $n$-tuple $X=\{x_1,\ldots, x_n\}$ in the tracial von Neumann algebra $M$ and suppose $f \in \mathfrak{A}_n$ and $1 \leq j \leq n$. Suppose $\mathbb X = \{X_1,\ldots, X_n\}$ are the canonical generators for $\mathfrak{A}_n$ and $A_j$ is the unital $*$-subalgebra generated by $\mathbb X - \{X_j\}$.
Set $x =X_j$, $A=A_j$, and consider $D_{(x,x^*)}$ in Definition 3.6. The $j$th partial S-derivative of $f$ at $X$ is the element \begin{eqnarray*} (\partial_j^s f)(X) & = & ((\pi_{X} \odot \pi_{X}^{op}) \otimes I_2)(D_{(x,x^*)}f) \\ & \in & M_2(M \odot M^{op}) \\ & \subset & M_2(M \otimes M^{op}).\\ \end{eqnarray*} \end{definition} \begin{remark} Suppose $f \in \mathfrak A_n$ and $X = \{x_1,\ldots, x_n\}$ is an $n$-tuple in the tracial von Neumann algebra $M$. As before $f$ can be regarded as a smooth function from the real Banach space $\oplus_{i=1}^n M$ into $M$. Fix $1 \leq j \leq n$ and set $A= A_j \subset \mathfrak{A}_n$, $x=X_j \in \mathfrak{A}_n$. Denote by $(\partial_jf)(X)$ the $j$th partial derivative operator of $f$ evaluated at $X$, regarded as a bounded real linear map on the real Hilbert space $L^2(M)$. The discussion and definitions above yield the following commutative diagram: \begin{eqnarray*} \begin{tikzpicture}[scale=4] \node (A) at (0,1.2) {$B_{\mathbb R}(L^2(M))$}; \node (B) at (1.2,1.2) {$\mathfrak{A}_n$}; \node (C) at (0,0) {$M_2(\sigma_M(M \odot M^{op}))$}; \node (D) at (1.2,0) {$M_2(M \odot M^{op})$}; \node (E) at (1.8,0.6) {$M_2(A[x,x^*] \odot A[x,x^*]^{op})$}; \node (F) at (2.2, 0) {$M_2(M \otimes M^{op})$}; \path[->,font=\scriptsize] (B) edge node[above]{$(\partial_j \cdot)(X)$} (A) (A) edge node[left]{$\Phi$} (C) (B) edge node[left]{$(\partial_j^s \cdot)(X)$} (D) (B) edge node[midway, right]{$\hspace{.1in} D_{(x,x^*)}$} (E) (E) edge node [below, right]{$\hspace{.1in}(\pi_X \odot \pi_X^{op}) \otimes I_2$} (D) (D) edge node[above]{$\sigma_M \otimes I_2$} (C) (D) edge [commutative diagrams/hook] node[above]{$\iota \otimes I_2$} (F); \end{tikzpicture} \end{eqnarray*} \end{remark} \begin{definition} Fix an $n$-tuple $X=\{x_1,\ldots, x_n\}$ in the tracial von Neumann algebra $M$ and an $m$-tuple $F= \{f_1,\ldots,f_m\}$ in $\mathfrak{A}_n$.
For each $1\leq i \leq m$, $1 \leq j \leq n$, consider $(\partial^s_jf_i)(X)$, the $j$th partial S-derivative of $f_i$ at $X$, realized in the von Neumann algebra tensor product $M_2(M \otimes M^{op})$. The S-derivative of $F$ at $X$ is \begin{eqnarray*} D^sF(X)= \begin{bmatrix} (\partial_1^s f_1)(X) & \cdots & (\partial^s_n f_1)(X) \\ \vdots & & \vdots \\ (\partial_1^s f_m)(X) & \cdots & (\partial^s_n f_m)(X) \\ \end{bmatrix} \in M_{m\times n}(M_2(M \otimes M^{op})). \end{eqnarray*} \end{definition} The S-derivative of $F$ at $X$ is an operator whose $*$-moments (when defined) can be characterized in terms of the moments of $X$. It is naturally connected to the ordinary derivative of $F$ at $X$ when $F$ is regarded as a function on a Banach space direct sum of copies of $M$ (Remark 3.8 and the commutative diagram). Moreover, one can make sense of its rank and nullity as described in Section 2.6 when $D^sF(X)$ is regarded as a matrix with $M\otimes M^{op}$ entries. I'll show shortly that this rank and nullity reflect the rank and nullity of the corresponding (ordinary) derivative of $F$ at a microstate. These notions will be used to establish global dimension and entropy bounds on the microstate spaces. \begin{definition} Consider $D^sF(X)$ regarded as an element of $M_{2m \times 2n}(M\otimes M^{op}) = M_{m\times n}(M_2(M \otimes M^{op}))$. $\text{Nullity}(D^sF(X))$ and $\text{Rank}(D^sF(X))$ are the nullity and rank of $D^sF(X)$ so regarded and in the sense of Section 2.6. More explicitly set $P = 1_{\{0\}}(D^sF(X)^* D^sF(X))$ and $Q = 1_{(0,\infty)}(D^sF(X)^* D^sF(X))$. $P, Q \in M_{2n}(M \otimes M^{op})$. Consider the tracial state $\psi = tr_{2n} \otimes (\varphi \otimes \varphi^{op})$ on $M_{2n}(M \otimes M^{op})$. The nullity and rank of $F$ at $X$ are \[ \text{Nullity}(D^sF(X)) = 2n \cdot \psi(P) \in [0,2n] \] \noindent and \[ \text{Rank}(D^sF(X)) = 2n \cdot \psi(Q) \in [0,2n], \] \noindent respectively.
\end{definition} \begin{remark} When $G = \{g_1,\ldots,g_p\}$ is another $p$-tuple of elements in $\mathfrak{A}_n$, then one can consider the obvious $(m+p)$-tuple $F \cup G$ and $D^s(F \cup G)(X)$ denotes the element of $M_{(m+p) \times n}(M_2(M \otimes M^{op}))$ obtained by 'stacking' $D^sF(X)$ on top of $D^sG(X)$. In this case if $Q_F$ is the projection onto the kernel of $D^sF(X)$ and $Q_G$ is the projection onto the kernel of $D^sG(X)$, then the projection onto the kernel of $D^s(F \cup G)(X)$ is $Q_F \wedge Q_G$. \end{remark} \subsection{Microstate approximations} In this section I'll show how properties of the $S$-derivative of a tuple $F$ at $X$ are reflected by microstates of $X$. Suppose $A$ and $B$ are tracial $*$-algebras (real or complex). Given an indexing set $J$ for subsets $X =\langle x_j \rangle_{j \in J}$ and $Y = \langle y_j \rangle_{j \in J}$ in $A$ and $B$, I will write $X \approx Y$ if $X$ and $Y$ have the same $*$-moments. Suppose for each $1 \leq i \leq p$, $f_i \in \mathfrak{A}_n$ and $F = (f_1, \ldots, f_p)$. Regard $F$ as a map from $(M_k(\mathbb C))^n$ into $(M_k(\mathbb C))^p$ where $F(\xi) = (f_1(\xi),\ldots, f_p(\xi))$, $\xi \in (M_k(\mathbb C))^n$. As such the derivative of $F$ at $\xi \in (M_k(\mathbb C))^n$ has the matrix representation \[ DF(\xi) = \begin{bmatrix} \partial_1 f_1(\xi) & \cdots & \partial_n f_1(\xi) \\ \vdots & & \vdots \\ \partial_1 f_p(\xi) & \cdots & \partial_n f_p(\xi) \\ \end{bmatrix} \in M_{p\times n}(B_{\mathbb R}(M_k(\mathbb C))) \] \noindent where $\partial_if_j(\xi)$ are the partial derivatives of the $f_j$ regarded as functions from $\oplus_{i=1}^n M_k(\mathbb C)$ to $M_k(\mathbb C)$; this was discussed in the preceding subsection in the context of a general von Neumann algebra. Lemma 3.3 asserts that $\Phi$ is a real, trace-preserving $*$-embedding. $\sigma_M \otimes I_2$ is a bijective, trace-preserving $*$-isomorphism of complex $*$-algebras for $M = M_k(\mathbb C)$.
Thus, by the commutative diagram of Remark 3.8, \begin{eqnarray*} \langle (\partial_if_j)(\xi) \rangle_{1\leq i \leq p, 1\leq j \leq n} & \approx & \langle \Phi((\partial_if_j)(\xi)) \rangle_{1 \leq i \leq p, 1 \leq j \leq n} \\ & \approx & \langle (\sigma_M \otimes I_2)(\partial^s_i f_j)(\xi) \rangle_{1 \leq i \leq p, 1 \leq j \leq n}\\ & \approx & \langle (\partial^s_i f_j)(\xi) \rangle_{1 \leq i \leq p, 1 \leq j \leq n}\\ \end{eqnarray*} Denote by $\Lambda =\langle e_{ij} \rangle_{1 \leq i,j \leq n}$ the canonical system of matrix units for $M_n(\mathbb C)$. Set \[ \Lambda_1 = \langle e_{ij} \otimes I_{B_{\mathbb R}(M_k(\mathbb C))} \rangle_{1 \leq i,j \leq n}, \] \[ \Lambda_2 = \langle e_{ij} \otimes I_{M_2(M_k(\mathbb C) \otimes M_k(\mathbb C)^{op})} \rangle_{1 \leq i,j \leq n}, \] and \[ \Lambda_3 = \langle e_{ij} \otimes I_{M_2(M \otimes M^{op})} \rangle_{1 \leq i,j \leq n}. \] It follows from the $*$-distributional equivalences above that \begin{eqnarray*} (DF(\xi)^*DF(\xi), \Lambda_1) \subset M_n(B_{\mathbb R}(M_k(\mathbb C))) = M_n(\mathbb C) \otimes B_{\mathbb R}(M_k(\mathbb C)) \end{eqnarray*} and \begin{eqnarray*} (D^sF(\xi)^*D^sF(\xi), \Lambda_2) \subset M_n(M_2(M_k(\mathbb C) \otimes M_k(\mathbb C)^{op})) = M_n(\mathbb C) \otimes (M_2(M_k(\mathbb C) \otimes M_k(\mathbb C)^{op})) \end{eqnarray*} have the same $*$-moments. Notice that the algebras on the right hand sides are given the canonical real and complex traces, respectively. Now the trace of any $*$-monomial of the entries of $D^sF(\xi)^*D^sF(\xi)$ and the trace of the corresponding $*$-monomial of the entries of $D^sF(X)^*D^sF(X)$ can be made arbitrarily close provided that the $*$-moments of $\xi$ are sufficiently close to those of $X$.
Thus, for any $R >0$, there exists an $R_1 >0$ dependent only on $F$, $X$, and $R$ such that for any given $m_1 \in \mathbb N$ and $\gamma_1 >0$, there exist $m \in \mathbb N$ and $\gamma >0$ such that if $\xi \approx^{R,m, \gamma} X$, then $(D^sF(\xi)^*D^sF(\xi), \Lambda_2) \approx^{R_1, m_1, \gamma_1} (D^sF(X)^* D^sF(X), \Lambda_3)$. From the above algebraic identifications \begin{eqnarray*} (D^sF(\xi)^*D^sF(\xi), \Lambda_2) \approx (DF(\xi)^*DF(\xi), \Lambda_1). \end{eqnarray*} Putting these two facts together yields: \begin{proposition} Suppose $X$ is an $n$-tuple of elements in $M$ and $F$ is a $p$-tuple of elements in $\mathfrak{A}_n$. For any $R>0$ there exists an $R_1>0$ dependent only on $F,X,R$ such that if $m_1 \in \mathbb N$ and $\gamma_1 >0$, then there exist $m \in \mathbb N$ and $\gamma >0$ such that if $\xi \approx^{R,m,\gamma} X$, then $DF(\xi)^*DF(\xi) \approx^{R_1, m_1,\gamma_1} D^sF(X)^*D^sF(X)$. \end{proposition} Combining Lemma 2.9 with the proposition above, one has the following: \begin{proposition} Suppose $X$ is an $n$-tuple of elements in $M$ and $F$ is a $p$-tuple of elements in $\mathfrak{A}_n$. If $\alpha,r, R >0$, then there exist $m \in \mathbb N$ and $\gamma >0$ such that if $\xi \in \Gamma_R(X;m,k,\gamma)$, then the real dimension of the range of the projection $1_{[0,\alpha]}(DF(\xi)^*DF(\xi))$ on $(M_k(\mathbb C))^n$ is no greater than \begin{eqnarray*} 2nk^2 \cdot \left( \psi(1_{[0,\alpha]}(D^sF(X)^*D^sF(X))) + r \right) \end{eqnarray*} \noindent where $\psi = (tr_n \otimes (tr_2 \otimes( \varphi \otimes \varphi^{op})))$ is the canonical trace on $M_n(M_2(M \otimes M^{op}))$. In particular, if $s, R >0$, then there exist $m \in \mathbb N$ and $\rho, \gamma >0$ such that if $\xi \in \Gamma_R(X;m,k,\gamma)$ and $0 < t < \rho$, then the real dimension of the range of the projection $1_{[0,t]}(|DF(\xi)|)$ on $(M_k(\mathbb C))^n$ is no greater than \begin{eqnarray*} (\text{Nullity}(D^sF(X)) + s) k^2.
\end{eqnarray*} \end{proposition} \begin{proof} By Proposition 3.12 there exists an $R_1$ dependent only on $F, X, R$ such that for any $m_0 \in \mathbb N$ and $\gamma_0 >0$ there exist $m_1 \in \mathbb N$, $\gamma_1 >0$ such that if $\xi \approx^{R,m_1,\gamma_1} X$, then $DF(\xi)^* DF(\xi) \approx^{R_1, m_0,\gamma_0} D^sF(X)^* D^sF(X)$. Applying Lemma 2.9, there exist $m_2 \in \mathbb N$ and $\gamma_2 >0$ such that if $Y$ is a positive semidefinite, symmetric, real linear operator on a finite dimensional real vector space and $Y \approx^{R_1,m_2,\gamma_2} D^sF(X)^*D^sF(X)$, then the trace of the spectral projection $1_{[0, \alpha]}(Y)$ is no greater than \begin{eqnarray*} \psi(1_{[0,\alpha]}(D^sF(X)^*D^sF(X))) + r. \end{eqnarray*} Taking $m_0=m_2$ and $\gamma_0=\gamma_2$ in the first sentence, there exist $m \in \mathbb N$ and $\gamma >0$ such that if $\xi \approx^{R,m,\gamma} X$, then $DF(\xi)^* DF(\xi) \approx^{R_1, m_2,\gamma_2} D^sF(X)^* D^sF(X)$. Thus, if $\xi \in \Gamma_R(X;m,k,\gamma)$, then \begin{eqnarray*} \psi_k(1_{[0,\alpha]}(DF(\xi)^*DF(\xi))) \leq \psi(1_{[0,\alpha]}(D^sF(X)^*D^sF(X))) + r \end{eqnarray*} where $\psi_k$ is the normalized real trace on the space of real linear operators on the real vector space $(M_k(\mathbb C))^n$. Multiplying both sides by $2nk^2$, it follows that the real dimension of the range of the projection $1_{[0,\alpha]}(DF(\xi)^*DF(\xi))$ on $(M_k(\mathbb C))^n$ is no greater than \begin{eqnarray*} && 2nk^2 \cdot (\psi(1_{[0,\alpha]}(D^sF(X)^*D^sF(X))) + r).\\ \end{eqnarray*} The second claim readily follows from this. \end{proof} \subsection{Self-adjoint and Unitary Calculus} It's natural to wonder how the notions of nullity and rank behave when one deals with self-adjoint or unitary tuples and the mapping $F$ preserves these classes. In these situations, either class satisfies additional single variable relations, i.e., $X-X^*=0$ or $X^*X=I$, respectively.
These relations should 'transform' the $2 \times 2$ matrix entries of $D^sF(X)$ into one operator entry through a change of variables. Moreover, the nullity or rank of the full $2\times 2$ case and the single operator entry case should be connected in a natural way, and the resultant differential calculus on the self-adjoints or unitaries should be simple to compute (or at least no more difficult than that of $D^s$). This is indeed the case and the results are stated and proven here. While such observations could be made using the notion of a Hilbert manifold, I avoid them here and will proceed in a more pedestrian way. First for the self-adjoint situation. In this case the differential calculus is consistent with that of \cite{v4} and is connected to the $*$-calculus in the obvious way. \begin{definition} Fix an $n$-tuple $X=\{x_1,\ldots, x_n\}$ in the tracial von Neumann algebra $M$ and suppose $f \in \mathfrak{A}_n$ and $1 \leq j \leq n$. Set $x =X_j$, $A=A_j$ as in Definition 3.7. $\partial_{sa}: A[x, x^*] \rightarrow A[x,x^*] \odot A[x,x^*]^{op}$ is the derivation determined by the relations $\partial_{sa}(A)=0$ and $\partial_{sa}(x) = \partial_{sa}(x^*) = I \odot I^{op}$ where $A[x,x^*] \odot A[x,x^*]^{op}$ is given the natural $A[x,x^*]-A[x,x^*]^{op}$ bimodule structure. The $j$th partial $S^{sa}$-derivative of $f$ at $X$, $\partial_j^{sa}f(X)$, is the element $(\pi_X \odot \pi_X^{op})(\partial_{sa}f) \in M \otimes M^{op}$ where $\pi_X:\mathfrak{A}_n \rightarrow M$ is the unique $*$-homomorphism such that $\pi_X(X_i) = x_i$, $1 \leq i \leq n$. \end{definition} \begin{definition} Suppose $X \subset M$ and $F = \{f_1,\ldots, f_m\} \subset \mathfrak{A}_n$ are $n$ and $m$-tuples and $F$ consists of self-adjoint elements. For each $1\leq i \leq m$, $1 \leq j \leq n$, consider $(\partial^{sa}_jf_i)(X)$, the $j$th partial $S^{sa}$-derivative of $f_i$ at $X$ realized in the von Neumann algebra tensor product $M \otimes M^{op}$.
The self-adjoint S-derivative of $F$ at $X$ is \begin{eqnarray*} D^{sa}F(X) = \begin{bmatrix} (\partial_{1}^{sa} f_1)(X) & \cdots & (\partial_{n}^{sa} f_1)(X) \\ \vdots & & \vdots \\ (\partial_{1}^{sa}f_m)(X) & \cdots & (\partial_{n}^{sa} f_m)(X) \\ \end{bmatrix} \in M_{m\times n}(M \otimes M^{op}). \end{eqnarray*} \end{definition} \begin{remark} Suppose $X, F$ are as in Definition 3.15. One can consider the nullity and rank of $D^{sa}F(X) \in M_{m\times n}(M \otimes M^{op})$ as described in Section 2.6. Explicitly, set $P = 1_{\{0\}}(D^{sa}F(X)^* D^{sa}F(X))$ and $Q = 1_{(0,\infty)}(D^{sa}F(X)^* D^{sa}F(X))$. $P, Q \in M_{n}(M \otimes M^{op})$. Consider the tracial state $\psi = tr_{n} \otimes (\varphi \otimes \varphi^{op})$ on $M_{n}(M \otimes M^{op})$. The self-adjoint nullity and rank of $F$ at $X$ are \[ \text{Nullity}(D^{sa}F(X)) = n \cdot \psi(P) \in [0,n] \] \noindent and \[ \text{Rank}(D^{sa}F(X)) = n \cdot \psi(Q) \in [0,n], \] respectively. \end{remark} \begin{proposition} Suppose $X \subset M$ and $F \subset \mathfrak{A}_n$ are $n$ and $m$-tuples of elements and $L = \{(X_1-X_1^*)/2,\ldots, (X_n-X_n^*)/2\} \subset \mathfrak{A}_n$. If $G = F \cup L$ denotes the joined $(m+n)$-tuple of elements of $\mathfrak{A}_n$ and the elements of $F$ are self-adjoint, then $\text{Nullity}(D^sG(X)) = \text{Nullity}(D^{sa}F(X))$. Moreover, if $\mu$ is the spectral distribution of $|D^sG(X)|$ and $\nu$ is the spectral distribution of $|D^{sa}F(X)|$, then there exists a $c>0$ such that for any $t \in (0,1)$, $\mu((0,t]) \leq \nu((0,ct])$. \end{proposition} \begin{proof} By Lemma 2.11 it will suffice to prove the dimension and spectral decay claims for a matrix with entries in $M\otimes M^{op}$ which is $(M \otimes M^{op})$-row equivalent to $D^sG(X)$.
$D^sG(X) = D^s(F \cup L)(X)$ has the matricial form \begin{eqnarray*} \begin{bmatrix} D^sF(X) \\ D^sL(X) \\ \end{bmatrix} & \in & M_{(m+n) \times n}(M_2(M \otimes M^{op})) \\ \end{eqnarray*} Expanding the terms in both $D^sF(X)$ and $D^sL(X)$, and writing $F= \{f_1,\ldots, f_m\}$ where $f_i = f_i^*$ by hypothesis, \begin{eqnarray*} D^sG(X) & = & \begin{bmatrix} A_{11} & \cdots & \cdots & A_{1n} \\ \vdots & \cdots & \cdots & \vdots \\ A_{m1} & \cdots & \cdots & A_{mn} \\ E & 0 & \cdots & 0 \\ 0 & E & \ddots & 0 \\ \vdots & \ddots & \ddots & 0 \\ 0 & \cdots & 0 & E \\ \end{bmatrix} \\ \end{eqnarray*} where $e = \begin{bmatrix} 0 & 0 \\ 0 & 1 \\ \end{bmatrix}$, $E = I_{M \otimes M^{op}} \otimes e$, and \begin{eqnarray*} A_{ij} = \partial^s_jf_i(X) = \begin{bmatrix} \partial_j^{sa} f_i(X) & \partial_j^{sk} f_i(X) \\ 0 & 0 \\ \end{bmatrix} \\ \end{eqnarray*} are elements of $M_2(M \otimes M^{op})$. As elsewhere, I make the obvious identification of operator-valued matrices and their representation as elements in $M \otimes M^{op}$ tensored by the matrix algebras. Inspecting $E$ and $A_{ij}$ it follows that the matricial form of $D^sG(X)$, regarded as a $2(n+m) \times 2n$ matrix with entries in $M \otimes M^{op}$, is $(M \otimes M^{op})$-row equivalent to \begin{eqnarray*} T& = & D^{sa}F(X) \otimes e + I_{M_n(M \otimes M^{op})} \otimes e^{\bot} \\ & \in & M_{n}(M \otimes M^{op}) \otimes M_2(\mathbb C) \\ & = &M_{2n}(M \otimes M^{op}).\\ \end{eqnarray*} Note that this row equivalence is made in the sense of Section 2.6 where $T$ is identified with the matrix $T_0 \in M_{2(m+n)\times 2n}(M \otimes M^{op})$ obtained by taking $T$ and filling the last $2m$ rows with $0$. It follows that $|T|^2 = |D^{sa}F(X)|^2 \otimes e + I_{M_n(M \otimes M^{op})} \otimes e^{\bot}$.
The two terms of this sum are positive elements with their respective ranges contained in the ranges of the orthogonal projections, $I_{M_n(M \otimes M^{op})} \otimes e$ and $I_{M_n(M \otimes M^{op})} \otimes e^{\bot}$. It follows that $|T| = |D^{sa}F(X)| \otimes e + I_{M_n(M \otimes M^{op})} \otimes e^{\bot}$. If $\mu$ and $\nu$ are the spectral trace measures associated to $|T|$ and $|D^{sa}F(X)|$, then the orthogonality comment applied to the decomposition of $|T|$ implies that $\mu = (\nu + \delta_{\{1\}})/2$. It follows from this that $\text{Nullity}(T) = \text{Nullity}(D^{sa}F(X))$ and for all $t \in (0,1)$, $\mu((0,t]) = \nu((0,t])/2 \leq \nu((0,t])$. Since $T$ is $(M \otimes M^{op})$-row equivalent to $D^sG(X)$ this completes the proof. \end{proof} Turning towards the unitary situation, suppose now that $X$ consists of unitary elements. I will argue as in the self-adjoint case, except that this time instead of using the linear relation $X - X^* = 0$, I will use the unitary relation $X^*X=I$ to conclude that the kernel is contained in the skew-adjoints (the tangent space of the unitaries). I will then find a related linear operator whose kernel and determinant are equivalent to the kernel and determinant of the differential restricted to this tangent space. As in the self-adjoint case, the end result will express the rank as a matrix over $M \otimes M^{op}$ as opposed to $M_2(M \otimes M^{op})$. First, a preliminary computation to motivate the definition. \begin{definition} For each $f \in \mathfrak{A}_n$, define \begin{eqnarray*} m_f = \frac{1}{2} \cdot \begin{bmatrix} f \odot I + I \odot f^* & f \odot I - I \odot f^* \\ f \odot I - I \odot f^* & f \odot I + I \odot f^* \\ \end{bmatrix} \in M_2(\mathfrak{A}_n \odot \mathfrak{A}_n^{op}) \\ \end{eqnarray*} Given an $n$-tuple $X$ of elements in the tracial von Neumann algebra $M$, $m_{f(X)} = ((\pi_X \odot \pi_X^{op}) \otimes I_2)(m_f) \in M_2(M \odot M^{op}) \subset M_2(M \otimes M^{op})$.
\end{definition} \begin{remark} If $X =\{a\}$ and $f \in \mathfrak{A}_1$ is the trivial element $f=X_1$, then $m_{f(X)}=m_a$ is the $2 \times 2$ tensor matrix representation of the left multiplication operator on $L^2(M)$ by $a$, i.e., $m_a = \Phi(\pi(a))$ where $\pi$ is the left regular representation of $M$ on $L^2(M)$. See Example 3.1. \end{remark} \begin{lemma} If $f \in \mathfrak{A}_n$ is a noncommutative $*$-monomial and $Y =\{y_1, \ldots, y_n\} \subset M$ is an $n$-tuple of unitaries in $M$, then the following hold: \begin{itemize} \item For any $1\leq j \leq n$, $(m_{f(Y)^*} ((\partial_j^sf)(Y)) m_{y_j}) \in M_2(M \otimes M^{op})$ is a diagonal matrix. \item If $f = Z_1 X_j^{k_1} Z_2 X_j^{k_2} \cdots Z_p X_j^{k_p} Z_{p+1}$ where $Z_i \in A_j$, $1 \leq i \leq p+1$, are $*$-monomials in $\mathbb X - \{X_j\}$ and $k_i \in \{1, *\}$, $1 \leq i \leq p$, and $\pi_Y: \mathfrak{A}_n \rightarrow M$ is the canonical $*$-homomorphism such that $\pi_Y(X_i) = y_i$, then $[m_{f(Y)^*}((\partial_j^sf)(Y)) m_{y_j}]_{22}$ is \begin{eqnarray*} \sum_{i=1}^p (-1)^{\delta_{k_i, *}} \pi_Y \left (Z_{p+1}^* \cdots (X_j^{k_{i+1}})^* Z_{i+1}^* X_j^{\delta_{k_i,*}} \right) \otimes \left(\pi_Y\left((X_j^{\delta_{k_i,*}})^* Z_{i+1} X_j^{k_{i+1}} \cdots Z_{p+1} \right)\right)^{op} \in M \otimes M^{op}. \end{eqnarray*} \end{itemize} \end{lemma} \begin{proof} First note that it suffices to do this when $n >1$ since $\mathfrak{A}_n$ canonically embeds into $\mathfrak{A}_{n+1}$ and $Y$ can be augmented into the $(n+1)$-tuple $\{y_1,\ldots, y_n, I\}$. Denote by $B_n$ the algebra obtained by quotienting $\mathfrak{A}_n$ by the ideal generated by the unitary relations $X_jX_j^* = X_j^*X_j = I$ and by $q: \mathfrak{A}_n \rightarrow B_n$ the quotient map. Equivalently, $B_n$ is the universal unital complex $*$-algebra generated by $n$ unitaries. Recall that $\pi_Y: \mathfrak{A}_n \rightarrow M$ is the complex $*$-homomorphism uniquely defined by $\pi_Y(X_j) = y_j$.
Because the $y_j$'s are unitary, there exists a canonical complex $*$-homomorphism $\sigma_Y: B_n \rightarrow M$ uniquely defined by $\sigma_Y(q(X_j)) = y_j$ such that the following diagram commutes: \begin{eqnarray*} \begin{tikzcd}[column sep=5em, row sep=3em] \mathfrak{A}_n \arrow{r}{q} \arrow[swap]{rd}{\pi_Y} & B_n \arrow{d}{\sigma_Y} \\ & M \end{tikzcd} \end{eqnarray*} Each of these maps yields an opposite morphism on the opposite domain and range; as a map of sets, these opposite morphisms agree with the original morphism. Abusing notation I'll use the same letter to denote the map and its induced opposite map. Tensoring the maps with their opposite maps and amplifying by the $2 \times 2$ matrices yields the following commutative diagram: \begin{eqnarray*} \begin{tikzcd}[column sep=5em, row sep=6em] M_2(\mathfrak{A}_n \odot \mathfrak{A}_n^{op}) \arrow{r}{q \odot q \odot I_2} \arrow{rd}[swap]{(\pi_Y \odot \pi_Y) \odot I_2} & M_2(B_n \odot B_n^{op})\arrow{d}{(\sigma_Y \odot \sigma_Y) \odot I_2} \\ & M_2(M \otimes M^{op}) \end{tikzcd} \end{eqnarray*} Thus, \begin{eqnarray*} m_{f(Y)^*} ((\partial_j^sf)(Y)) m_{y_j} & = & (\pi_Y \odot \pi_Y \odot I_2)(m_{f^*} \cdot (D_{(x,x^*)}f) \cdot m_{X_j}) \\ & = & (\sigma_Y \odot \sigma_Y \odot I_2)(q \odot q \odot I_2)(m_{f^*} (D_{(x,x^*)}f) m_{X_j}). \\ \end{eqnarray*} From the above, the claim reduces to looking at $(q \odot q \odot I_2)(m_{f^*} (D_{(x,x^*)}f) m_{X_j})$. Set $a_1 = (q \odot q \odot I_2)(m_{f^*})$, $a_2 = (q \odot q \odot I_2)(D_{(x,x^*)}f)$, $a_3 = (q \odot q \odot I_2)(m_{X_j})$. Recall that there is an injective complex linear homomorphism $\pi: B_n \odot B_n^{op} \rightarrow L_{\mathbb C}(B_n)$ where $\pi(a \odot b): B_n \rightarrow B_n$ is defined by $\pi(a \odot b)(x) = axb$. Recall also the real linear, multiplicative map $\Phi_{B_n}: L_{\mathbb R}(B_n) \rightarrow M_2(L_{\mathbb C}(B_n))$ given in Subsection 3.1 (just after Lemma 3.4).
Write $f = Z_1 X_j^{k_1} Z_2 X_j^{k_2} \cdots Z_p X_j^{k_p} Z_{p+1}$ where $Z_i \in A_j$ are $*$-monomials in $\mathbb X - \{X_j\}$ and $k_i \in \{1, *\}$, $1 \leq i \leq p$. Write $X = q(\mathbb X)$, $x_i = q(X_i)$, and $z_i = q(Z_i)$. By definition of $B_n$ the $x_i$ and $z_i$ are unitary elements of $B_n$. One easily checks the identities \[ (\pi \odot I_2)(a_1) = \Phi_{B_n}(\pi(f^*(X) \odot I)),\] \[ (\pi \odot I_2)(a_2) = \Phi_{B_n}(T),\] \[ (\pi \odot I_2)(a_3) = \Phi_{B_n}(\pi(x_j \odot I)),\] \noindent where $T \in L_{\mathbb R}(B_n)$ is the real linear operator defined by \begin{eqnarray*} T = \sum_{i=1}^p \pi(z_1 x_j^{k_1} \cdots z_i \odot z_{i+1} x_j^{k_{i+1}} \cdots z_{p+1}) J^{\delta_{k_i, *}}. \end{eqnarray*} Using the fact that all terms in $B_n$ appearing in the elementary tensors below are unitary, \begin{eqnarray*} & & \pi(f(X)^* \odot I) T \pi(x_j \odot I) \\ & = & \pi(f(X)^* \odot I) \sum_{i=1}^p \pi(z_1 x_j^{k_1} \cdots z_i \odot z_{i+1} x_j^{k_{i+1}} \cdots z_{p+1}) J^{\delta_{k_i, *}} \pi(x_j \odot I) \\ & = & \pi(f(X)^* \odot I) \sum_{i=1}^p \pi(z_1 x_j^{k_1} \cdots z_i (x_j)^{\delta_{k_i,1}} \odot (x_j^*)^{\delta_{k_i,*}} z_{i+1} x_j^{k_{i+1}} \cdots z_{p+1}) J^{\delta_{k_i, *}} \\ & = & \sum_{i=1}^p \pi(z_{p+1}^* \cdots (x_j^{k_{i+1}})^* z_{i+1}^* (x_j^{k_i})^* (x_j)^{\delta_{k_i,1}} \odot (x_j^*)^{\delta_{k_i,*}} z_{i+1} \cdots z_{p+1}) J^{\delta_{k_i, *}} \\ & = & \sum_{i=1}^p \left [ \pi \left (z_{p+1}^* \cdots (x_j^{k_{i+1}})^* z_{i+1}^* x_j^{\delta_{k_i,*}} \odot (x_j^{\delta_{k_i,*}})^* z_{i+1} x_j^{k_{i+1}} \cdots z_{p+1} \right ) \right ] J^{\delta_{k_i, *}}. \end{eqnarray*} \noindent Each summand above has the form $\pi(u \odot u^*) J^{\delta_{k_i, *}}$ with $u \in B_n$ unitary. Both $B_n^{sa}$ and $B_n^{sk}$ (the real subspaces of self-adjoint and skew-adjoint elements) are invariant under $J^{\delta_{k_i, *}}$ and under each $\pi(u \odot u^*)$, and so they are invariant under the above operator as well.
From the definition of $\Phi_{B_n}$ and the fact that $B_n^{sa}$ and $B_n^{sk}$ are invariant under $\pi(f(X)^* \odot I) T \pi(x_j \odot I)$, it follows that $\Phi_{B_n}(\pi(f(X)^* \odot I) T \pi(x_j \odot I)) \in M_2(B_n \odot B_n^{op})$ is diagonal. Moreover, note that if $\xi \in B_n^{sk}$, then from the above, \begin{eqnarray*} & & \pi(f(X)^* \odot I) T \pi(x_j \odot I)(\xi) \\ & = & \sum_{i=1}^p \left [ \pi \left (z_{p+1}^* \cdots (x_j^{k_{i+1}})^* z_{i+1}^* x_j^{\delta_{k_i,*}} \odot (x_j^{\delta_{k_i,*}})^* z_{i+1} x_j^{k_{i+1}} \cdots z_{p+1} \right ) \right ] J^{\delta_{k_i, *}}(\xi) \\ & = & \sum_{i=1}^p (-1)^{\delta_{k_i, *}} \left (z_{p+1}^* \cdots (x_j^{k_{i+1}})^* z_{i+1}^* x_j^{\delta_{k_i,*}} \right) (\xi) \left((x_j^{\delta_{k_i,*}})^* z_{i+1} x_j^{k_{i+1}} \cdots z_{p+1} \right). \\ \end{eqnarray*} To finish the proof, set $b_1 = \pi(f^*(X)\odot I)$, $b_2 = T$, $b_3 = \pi(x_j \odot I)$. Then $a_i \in M_2(B_n \odot B_n^{op})$, $b_i \in L_{\mathbb R}(B_n)$, and $\Phi_{B_n}(b_i) = (\pi \odot I_2)(a_i)$ by the three identities stated above. \begin{eqnarray*} (\pi \odot I_2)(a_1 a_2 a_3) & = & \Phi_{B_n}(b_1 b_2 b_3) \\ & = & \Phi_{B_n}\left (\pi(f(X)^* \odot I) T \pi(x_j \odot I)\right) \\ & \in & M_2(B_n\odot B_n^{op})\\ \end{eqnarray*} is a diagonal matrix by the preceding paragraph. Since $n >1$, Lemma 3.5 implies that $\pi \odot I_2$ is injective, whence $a_1a_2a_3$ is diagonal. Similarly, from the representation of $[\Phi_{B_n}\left (\pi(f(X)^* \odot I) T \pi(x_j \odot I)\right)]_{22}$ in the preceding paragraph and the injectivity of $\pi \odot I_2$, it follows that $[a_1 a_2 a_3]_{22}$ is \begin{eqnarray*} \sum_{i=1}^p (-1)^{\delta_{k_i, *}} \left (z_{p+1}^* \cdots (x_j^{k_{i+1}})^* z_{i+1}^* x_j^{\delta_{k_i,*}} \right) \odot \left((x_j^{\delta_{k_i,*}})^* z_{i+1} x_j^{k_{i+1}} \cdots z_{p+1} \right)^{op} \in B_n \odot B_n^{op}.
\end{eqnarray*} Finally, from the first paragraph, $m_{f^*(Y)}((\partial_j^sf)(Y))m_{y_j} = (\sigma_Y \odot \sigma_Y \odot I_2)(a_1a_2a_3)$. The established diagonality of $a_1a_2a_3$ and the form of its $22$-entry, together with the commutative diagrams of the first paragraph, complete the proof. \end{proof} \begin{remark} Lemma 3.20 is a computation of the product of the three matrices $m_{f(Y)^*}$, $(\partial_j^sf)(Y)$, and $m_{y_j}$; it is essentially a check that matrix multiplication and operator composition are the same thing. An alternative approach would be to stay in the matrix setting and multiply the matrices directly. This is surprisingly messy. \end{remark} \begin{definition} Suppose $X \subset M$ is an $n$-tuple of unitary elements and $F=\{f_1,\ldots, f_m\}$ is an $m$-tuple of $*$-monomials in $\mathfrak{A}_n$. Define $D^uF(X)$ to be the element in $M_{m \times n}(M\otimes M^{op})$ whose $ij$th entry is \begin{eqnarray*} \partial_j^uf_i(X) & = & \left [ m_{f_i(X)^*} ((\partial^s_j f_i)(X)) m_{x_j} \right ]_{22}.\\ \end{eqnarray*} \end{definition} \begin{remark} Suppose $X, F$ are as in Definition 3.22. As in the selfadjoint case, one can consider the nullity and rank of $D^uF(X) \in M_{m\times n}(M \otimes M^{op})$ as described in Section 2.6. Note that in this case, as in the selfadjoint case, $\text{Nullity}(D^uF(X)), \text{Rank}(D^uF(X)) \in [0, n]$. \end{remark} \begin{remark} The entries of $D^uF(X)$ can be described and computed in the following way. As in Lemma 3.20, suppose $f_l = Z_1 X_j^{k_1} Z_2 X_j^{k_2} \cdots Z_p X_j^{k_p} Z_{p+1}$ is a reduced word where $Z_i \in A_j$ and $k_i \in \{1, *\}$, $1 \leq i \leq p$. If $1 \leq j \leq n$, then the $lj$th entry of $D^uF(X)$ is obtained by taking a sum over all occurrences of $X_j^{m_j}$ in $f_l$ with $m_j \in \{1, *\}$, $f_l = w_1 X_j^{m_j} w_2$, where each occurrence contributes a summand of $w_2^* \otimes w_2$ when $m_j=1$ or $-w_2^*X_j \otimes X_j^* w_2$ when $m_j=*$.
Then this sum in $\mathfrak{A}_n \odot \mathfrak{A}_n^{op}$ is evaluated (by universality) at $X \subset M$ to produce an element in $M \otimes M^{op}$. \end{remark} \begin{remark} Suppose that $X = \{u_1,\ldots, u_n\}$ is an $n$-tuple of unitaries in $\mathcal A = \oplus_{i=1}^n M_N(\mathbb C)$, and $F$ is a $p$-tuple of noncommutative $*$-monomials. Set $\mathcal B = \oplus_{j=1}^p M_N(\mathbb C)$. If $U_N$ denotes the $N \times N$ unitary matrices, then the unitary groups $U(\mathcal A)$ of $\mathcal A$ and $U(\mathcal B)$ of $\mathcal B$ are $\oplus_{i=1}^n U_N$ and $\oplus_{j=1}^p U_N$. Denote by $M_N^{sk}$ the $N\times N$ skew-adjoint matrices and set $\mathcal A^{sk} = \oplus_{i=1}^n M_N^{sk}$ and $\mathcal B^{sk} = \oplus_{j=1}^p M_N^{sk}$. The tangent space of $U(\mathcal A)$ at $X$ is $X\mathcal A^{sk}$ and the tangent space of $U(\mathcal B)$ at $F(X)$ is $F(X)\mathcal B^{sk}$. $F$ restricts to a well-defined map from $U(\mathcal A)$ into $U(\mathcal B)$ and thus its differential at $X$, $DF(X): \mathcal A \rightarrow \mathcal B$, sends the tangent space $X\mathcal A^{sk}$ into $F(X)\mathcal B^{sk}$. The rank of this map is the (finite real) dimension of $DF(X)X\mathcal A^{sk} \subset F(X) \mathcal B^{sk}$. However, since $F(X) \in U(\mathcal B)$, this is the same as the (finite real) dimension of $F(X)^* DF(X) X \mathcal A^{sk} \subset \mathcal B^{sk}$. This is what $D^uF(X)$ captures algebraically: \begin{eqnarray*} \dim_{\mathbb R}(F(X)^* DF(X) X \mathcal A^{sk}) & = & N^2 \cdot \text{Rank}(D^uF(X)). \end{eqnarray*} \end{remark} \begin{remark} A few simple computations on the polynomial $X_i^*X_i-I$ will be useful in what follows. To ease the notation, for $x \in M$, I'll simply write $x \in M^{op}$ for the image of $x$ in $M^{op}$, as opposed to $x^{op}$. This should cause no confusion here since the example consists of computations involving a single unitary. Fix $1 \leq i \leq n$ and consider $g=X_i^*X_i-I \in \mathfrak{A}_n$.
Again, $X=\{x_1,\ldots, x_n\}$ is an $n$-tuple of unitaries in a tracial von Neumann algebra $M$. Clearly $\partial^s_jg(X)=0$ for $j \neq i$ and \begin{eqnarray*} \partial_i^sg(X) = \begin{bmatrix} x_i^* \otimes I + I \otimes x_i & x_i^* \otimes I - I \otimes x_i \\ 0 & 0 \\ \end{bmatrix} \in M_2(M \otimes M^{op}). \end{eqnarray*} Define \begin{eqnarray*} p_i = \frac{1}{2} \cdot \begin{bmatrix} I -\frac{1}{2}( x_i \otimes x_i + x_i^* \otimes x_i^*) & \frac{1}{2}(x_i \otimes x_i - x^*_i \otimes x^*_i) \\ \frac{1}{2}(x^*_i \otimes x^*_i - x_i \otimes x_i) & I + \frac{1}{2}(x_i^* \otimes x_i^* + x_i \otimes x_i) \\ \end{bmatrix} \in M_2(M \otimes M^{op}) \end{eqnarray*} and $e_i = p_i^{\bot}$. Note that $p_i$ is the matricial, tensor product representation of the real operator $[\sigma_M(I \otimes I) - \sigma_M(x_i \otimes x_i) \circ J]/2$ and $e_i$ is the matricial, tensor product representation of $[\sigma_M(I \otimes I) + \sigma_M(x_i \otimes x_i) \circ J]/2$. It is easy to check that $(\partial_i^sg(X)) p_i=0$, $(\partial_i^sg(X)) e_i = \partial_i^sg(X)$, the projection onto the range of $\partial_i^sg(X)$ has (unnormalized) trace $1$, and that $p_i$ is the projection onto $\ker(\partial_i^sg(X))$. Notice also that $p_i = m_{x_i}z_i$ where \begin{eqnarray*} z_i = \frac{1}{2} \cdot \begin{bmatrix} 0 & 0 \\ x^*_i \otimes I - I \otimes x_i & x_i^* \otimes I + I \otimes x_i \\ \end{bmatrix} \in M_2(M \otimes M^{op}).\\ \end{eqnarray*} Here, $z_i$ is the tensor matricial representation of $(\sigma_M(x_i^* \otimes I) - \sigma_M(I \otimes x_i)J)/2$.
One also has \begin{eqnarray*} \partial^s_ig(X) m_{x_i} & = & \frac{1}{2} \begin{bmatrix} x^*_i \otimes I + I \otimes x_i & x^*_i \otimes I - I \otimes x_i \\ 0 & 0 \\ \end{bmatrix} \begin{bmatrix} x_i \otimes I + I \otimes x_i^* & x_i \otimes I - I \otimes x_i^* \\ x_i \otimes I - I \otimes x_i^* & x_i \otimes I + I \otimes x_i^* \\ \end{bmatrix}\\ & = & 2 \cdot \begin{bmatrix} I_{M \overline{\otimes} M^{op}} & 0 \\ 0 & 0 \\ \end{bmatrix}.\end{eqnarray*} \end{remark} \begin{proposition} Suppose $X =\{x_1,\ldots, x_n\} \subset M$ and $F =\{f_1,\ldots,f_m\} \subset \mathfrak{A}_n$ are $n$ and $m$-tuples and $G =\{X_1^*X_1-I, \ldots, X_n^*X_n-I\} \subset \mathfrak{A}_n$. If every element of $F$ is a noncommutative $*$-monomial, every element of $X$ is a unitary, and $H=\langle f_i -I \rangle_{i=1}^m \cup G$ is the combined $(m+n)$-tuple of elements in $\mathfrak{A}_n$, then $\text{Nullity}(D^sH(X)) = \text{Nullity}(D^uF(X))$. Moreover, if $\mu$ and $\nu$ are the spectral distributions of $|D^sH(X)|$ and $|D^uF(X)|$, then for any $t \in (0,1)$, $\mu((0,t]) \leq \nu((0,t])$. \end{proposition} \begin{proof} Writing out the definition, \begin{eqnarray*} D^sH(X) & = & \begin{bmatrix} D^sF(X) \\ D^sG(X) \\ \end{bmatrix} \in M_{(m+n) \times n}(M_2(M\otimes M^{op})). \end{eqnarray*} \noindent Denote by $A$ the $n \times n$ diagonal matrix whose $ii$th entry is $m_{x_i}$ and set $E_1 = I_{M_n(M \otimes M^{op})} \otimes e_{11} \in M_{n\times n}(M_2(M \overline{\otimes} M^{op}))$. Note that $A$ is a unitary element since the $x_i$ are unitaries (Remark 3.19). Remark 3.26 shows that \begin{eqnarray*} D^sG(X) A & = & 2E_1 \in M_{n\times n}(M_2(M \otimes M^{op})).\\ \end{eqnarray*} Set $E_2 = E_1^{\bot} = I_{M_n(M \otimes M^{op})} \otimes e_{22}$. Denote by $W$ the $m\times m$ diagonal matrix whose $ii$th element is $m_{f_i(X)^*}$. Note that $W$, like $A$, is unitary since the elements of $F$ are $*$-monomials and the elements of $X$ are unitaries.
By Lemma 3.20, for any $1 \leq i \leq m$ and $1 \leq j \leq n$, $(WD^sF(X)A)_{ij} \in M_2(M \otimes M^{op})$ is a diagonal element whose $22$-entry is $D^uF(X)_{ij}$. Consequently, $|WD^sF(X)A|^2$ commutes with $E_2=I_{M_n(M\otimes M^{op})} \otimes e_{22}$, as well as with $E_1 = E_2^{\bot}$. Compute: \begin{eqnarray*} A^* D^sH(X)^*D^sH(X) A & = & A^* D^sF(X)^*D^sF(X) A + A^* D^sG(X)^*D^sG(X) A \\ &= & (A^*D^sF(X)^*W^*)(WD^sF(X)A)(E_2 +E_1) + 4E_1 \\ & = & |D^uF(X)|^2 \otimes e_{22} + \\ & & E_1A^*D^sF(X)^*W^* WD^sF(X)AE_1 + 4E_1 \\ & \geq & |D^uF(X)|^2 \otimes e_{22} + E_1. \\ \end{eqnarray*} From the second equality above, $\ker(|D^sH(X)A|) = \ker(|D^uF(X)|) \otimes e_{22}$. As $A$ is a unitary, $\text{Nullity}(|D^sH(X)|) = \text{Nullity}(|D^sH(X)A|) = \text{Nullity}(|D^uF(X)|)$, establishing the nullity claim. For the spectral distribution claim, set $T = D^uF(X) \otimes e_{22}$. Taking square roots in the computation above, $|D^sH(X)A| \geq |T| + E_1$. Note that $|T|=|D^uF(X)| \otimes e_{22}$ and $E_1$ are positive elements with orthogonal supports, $E_1 +E_2 =I$, the (normalized) trace of $E_1$ is $1/2$, and $\ker(|T|) \cap \operatorname{ran}(E_2) = \ker(|D^uF(X)|) \otimes e_{22}$. It follows from this that if $m$ is the spectral distribution of $|T+E_1| = |T| + E_1$, then $m = (\nu + \delta_{1})/2$. If $\mu_1$ is the spectral distribution of $|D^sH(X)A|$, then $\mu_1 = \mu$ since $A$ is unitary. By Weyl's Inequality for positive operators (Section 2.6), for any $t \in [0,1)$, $\mu([0,t]) = \mu_1([0,t]) \leq m([0,t]) = \nu([0,t])/2$. Setting $t=0$ in the equality on the right yields $m(\{0\}) = \nu(\{0\})/2$. On the other hand, by the nullity equation just established, the trace of the projection onto $\ker(|D^uF(X)|)$ is twice the trace of the projection onto $\ker(|D^sH(X)|)$, i.e., $\nu(\{0\}) = 2 \mu(\{0\})$. Thus, $m(\{0\}) = \mu(\{0\})$. Combined with the fact that for all $t \in [0,1)$, $\mu([0,t]) \leq m([0,t])$, it follows that for all $t \in (0,1)$, $\mu((0,t]) \leq m((0,t]) = \nu((0,t])/2 \leq \nu((0,t]).
\end{proof} \section{Single Spectral Splits: Local Projection and Dimension Bounds} The main goal of this section is to find upper bounds on free entropy and Hausdorff dimension for the solution space of $F(X)=Y$. Here $F$ denotes a finite tuple of noncommutative $*$-polynomials and $X$ is a finite tuple in a tracial von Neumann algebra. The estimates will be corollaries of a more general estimate which is derived from the standard ``local to global'' manifold argument and some specific Euclidean covering estimates. This section will be broken up into three parts: the Euclidean covering estimates, the localization construction and its free entropy dimension implications, and examples. \subsection{Euclidean Estimates} For a linear operator $T$ on a vector space define $T^{\bot} = I - T$. This notation coincides with the usual inner product space one when $T$ is an orthogonal projection. \begin{lemma} Suppose $K \subset \mathbb R^d$ is an open, convex subset containing the origin $0$ and $f: K \rightarrow \mathbb R^m$ is a $C^1$-function. If for all $x \in K$, $\|Df(0) - Df(x)\| < r$ and $T$ is a linear operator on $\mathbb R^d$ such that for any $x \in \mathbb R^d$, $\|Df(0)T(x)\| \geq \beta \|T(x)\|$, then for any $x, y \in K$, \begin{eqnarray*} \|f(y) - f(x)\| & \geq & \beta \cdot \|T(y)-T(x)\| - \|Df(0)\| \cdot \|T^{\bot}(y)-T^{\bot}(x)\| - r\|y-x\|. \end{eqnarray*} \end{lemma} \begin{proof} Suppose $x,y \in K$. Set $B = \int_0^1 Df(x + t(y-x)) \, dt$; using the convexity assumption and the bound on the derivative, $\|B - Df(0)\| < r.$ By the mean value theorem \begin{eqnarray*} \|f(y) - f(x) \| & = & \| B(y-x) \| \\ & \geq & \|Df(0)(y-x)\| - r\|y-x\| \\ & = & \|Df(0)( T+ T^{\bot})(y -x)\| - r\|y-x\|\\ & \geq & \|Df(0)T(y-x)\| - \|Df(0) T^{\bot}(y -x)\| - r\|y-x\|\\ & \geq & \beta \cdot \|T(y) - T(x)\| - \|Df(0)\| \cdot \|T^{\bot}(y)-T^{\bot}(x)\| - r \cdot \|y-x\|.
\\ \end{eqnarray*} \end{proof} \begin{lemma} Suppose $K\subset \mathbb R^d$ is an open, convex set, $f:K \rightarrow \mathbb R^m$ is a $C^1$-function, and $x_0 \in E \subset K$. Assume $Q$ is an orthogonal projection such that for some $\beta \in (0,1)$ and any $x \in \mathbb R^d$, $\|Df(x_0)Q(x)\| \geq \beta \|Q(x)\|$. Denote by $A$ the affine map which sends $x \in \mathbb R^d$ to $Q^{\bot}(x- x_0)$. If $t \in (1-\frac{\beta}{8(\|Df(x_0)\|+1)},1)$ and for all $x \in K$, $\|Df(x_0) - Df(x)\| < \frac{\beta}{4}$, then for any $\epsilon >0$, \begin{eqnarray*} K_{\epsilon}(E) & \leq & K_{(1-t)\epsilon}(A(E)) \cdot S_{\frac{\beta \epsilon}{4}}(f(E)).\\ \end{eqnarray*} \end{lemma} \begin{proof} Without loss of generality assume $x_0 =0$ so that $A=Q^{\bot}$. Fix an $\epsilon$-separated subset $\Delta$ of $E$ of maximal cardinality. Find a cover for $Q^{\bot}(E)$ by open $(1-t)\epsilon$ balls with minimal cardinality and denote the set of centers of these balls by $\Theta$. For every $x \in \Theta$ define $F_x = \{y \in \Delta: \|x- Q^{\bot}(y)\| < (1-t)\epsilon\}$. Clearly $\Delta = \cup_{x \in \Theta} F_x$. Choosing $z \in \Theta$ so that $\#F_{z} = \max\{\#F_x : x \in \Theta\}$, \begin{eqnarray*} S_{\epsilon}(E) & = & \#\Delta \\ & \leq & \#(\cup_{x \in \Theta} F_x) \\ & \leq & \# \Theta \cdot \max\{\#F_x : x \in \Theta\} \\ & \leq & K_{(1-t)\epsilon}(Q^{\bot}(E)) \cdot \#F_{z}.\\ \end{eqnarray*} Suppose $x$ and $y$ are two distinct points in $F_z$. Since $F_z \subset \Delta$, $\|x-y\| \geq \epsilon$. On the other hand, by definition $\|z -Q^{\bot}(x)\| < (1-t)\epsilon$ and $\|z -Q^{\bot}(y)\| < (1-t)\epsilon$ so $\|Q^{\bot}(x) - Q^{\bot}(y)\| < 2(1-t)\epsilon$. \begin{eqnarray*} \|Q(x) - Q(y)\|^2 & = & \|x-y\|^2 - \|Q^{\bot}(x) - Q^{\bot}(y)\|^2 \\ & > & \|x-y\|^2 - 4(1-t)^2 \epsilon^2 \\ & \geq & (1-4(1-t)^2) \|x-y \|^2 \\ & \geq & \frac{7}{8} \cdot \|x -y\|^2. 
\end{eqnarray*} \noindent Applying Lemma 4.1 with $T = Q$ and $r = \beta/4$ yields \begin{eqnarray*} \|f(y) - f(x) \| & \geq & \beta \cdot \|Q(x) - Q(y)\| - \|Df(0)\| \cdot \|Q^{\bot}(x) -Q^{\bot}(y)\| - \frac{\beta}{4} \cdot \|y-x\| \\ & \geq & \frac{3\beta}{4} \cdot \|x - y\| - \|Df(0)\| \cdot 2(1-t) \epsilon - \frac{\beta}{4} \cdot \|y-x\| \\ & \geq & \frac{\beta}{2} \cdot \|x-y\| - \frac{\beta \epsilon}{4}\\ & \geq & \frac{\beta \epsilon}{4}.\\ \end{eqnarray*} \noindent This being true for any distinct $x, y \in F_z \subset E$, $S_{\frac{\beta \epsilon}{4}}(f(E)) \geq\#F_z$. Using the inequality from the previous paragraph, \begin{eqnarray*} K_{\epsilon}(E) & \leq & S_{\epsilon}(E) \\ & \leq & K_{(1-t)\epsilon}(Q^{\bot}(E)) \cdot \#F_z\\ & \leq & K_{(1-t) \epsilon}(A(E)) \cdot S_{\frac{\beta \epsilon}{4}}(f(E)). \\ \end{eqnarray*} \end{proof} \begin{remark} The spectral projections of $|Df(x_0)|$ (regarded as a symmetric, positive semidefinite operator on $\mathbb R^d$) always satisfy the inequality for $Q$ in the lemma, i.e., $\|Df(x_0)Qv\| \geq\beta\|Qv\|$ when $Q=1_{[\beta,\infty)}(|Df(x_0)|)$. \end{remark} \begin{remark} One can carry out a manifold-themed version of the argument of Lemma 4.2 by quantifying the usual rank theorem in multivariable calculus (e.g., Theorem 9 (1) in \cite{spivak}). While this may seem more intuitive, there is a notational cost, as one must perform more bookkeeping of Lipschitz constants of charts. In the course of doing this the dependencies of the constants lose some transparency. \end{remark} Notice that in the above lemma, the upper bound for the $\epsilon$-covering number of $E$ is scaled by quantities which are not arbitrarily close to $1$, e.g., $1-t < 1/8$. One can improve the concluding estimate if one assumes $Q$ is a spectral projection of $|Df(x_0)|$, but the improvement only allows $t>1/2$; it does not permit values of $t$ arbitrarily close to $0$.
This is largely irrelevant for the free entropy/Hausdorff arguments presented here, but these arguments will be unsuitable for the entropy estimates in Section 5. \subsection{Free Entropy Dimension Inequalities} Recall from Section 3.2 that $\mathfrak{A}_n$ denotes the universal, unital, complex $*$-algebra on $n$ indeterminates. Fix a $p$-tuple $F = (f_1,\ldots,f_p) \subset \mathfrak{A}_n$ and any uniformly bounded positive Borel function $\phi:[0,\infty) \rightarrow \mathbb R_+$. Define successively, \[ \Pi_{F, \phi,R}(X,r, \epsilon;m,k,\gamma) = \sup_{\xi \in \Gamma_R(X;m,k,\gamma)} \log \left( K_{\epsilon}\left(\phi(|DF(\xi)|)\left[(B_{\infty}(\xi,r) \cap \Gamma_R(X;m,k,\gamma)) - \xi\right]\right)\right), \] \[ \Pi_{F, \phi,R }(X,r, \epsilon;m,\gamma) = \limsup_{k \rightarrow \infty} k^{-2} \cdot \Pi_{F, \phi, R}(X,r, \epsilon;m,k,\gamma), \] \[ \Pi_{F, \phi, R}(X,r, \epsilon) = \inf \{\Pi_{F, \phi, R}(X,r, \epsilon;m,\gamma): m\in \mathbb N, \gamma >0\}, \] \[ \Pi_{F,\phi, R}(X,r) = \limsup_{\epsilon \rightarrow 0^+} \frac{\Pi_{F, \phi,R}(X,r, \epsilon)}{|\log \epsilon|}. \] Observe that in the above $DF(\xi)$ is the derivative of $F$ regarded as a smooth map from the real Hilbert space $(M_k(\mathbb C))^n$ into $(M_k(\mathbb C))^p$ and $|DF(\xi)|$ is its absolute value. In the proof below, $\|DF(\xi)\|$ will designate the operator norm of $DF(\xi)$ computed with respect to the canonical real inner product norms on $(M_k(\mathbb C))^n$ and $(M_k(\mathbb C))^p$. \begin{theorem} Suppose $R > \max_{x \in X} \|x\|$. There exist $C, \kappa >1$ dependent on $F$ and $R$ such that if $\beta >0$, $\phi_{\beta} = 1_{[0,\beta)}$, and $0< r < \min\{\frac{1}{2}, \frac{\beta}{8\kappa}\}$, then for any $\epsilon >0$ \begin{eqnarray*} \mathbb K_{\epsilon, R}(X) & \leq & 2 n \log(2R) + 2 n \cdot|\log r| + \Pi_{F,\phi_{\beta},R}(X, r, \tfrac{\beta \epsilon}{9C}) + \mathbb S_{\frac{\beta\epsilon}{4}}(F(X):X).
\\ \end{eqnarray*} Hence, \begin{eqnarray*} \delta_0(X) \leq \Pi_{F, \phi_{\beta}, R}(X, r) + \delta_0(F(X):X). \end{eqnarray*} \end{theorem} \begin{proof} Fix $\kappa, C >1$ dependent on $F$ and $R$ such that for any $k \in \mathbb N$, $\xi, \eta \in ((M_k(\mathbb C))_{R+1})^n$, $\|DF(\xi) - DF(\eta)\| \leq \kappa \|\xi - \eta\|_{\infty}$ and $\|F(\xi)\|_{\infty}, \|DF(\xi)\|+1 < C$. Suppose $0< r < \min\{\frac{1}{2}, \frac{\beta}{8\kappa}\}$ and $\epsilon >0$. Given $m \in \mathbb N$ and $\gamma >0$, there exist $m < m_1 \in \mathbb N$ and $\gamma >\gamma_1 >0$ with the property that for any $k$, $F(\Gamma_R(X;m_1,k,\gamma_1)) \subset \Gamma_C(F(X):X;m,k,\gamma)$. For each $k$ find an $r$-cover $\langle w_{(j,k)} \rangle_{j \in J_k}$ for $\Gamma_R(X;m_1,k,\gamma_1) \subset ((M_k(\mathbb C))_R)^n$ with respect to the operator norm such that \begin{eqnarray*} \#J_k \leq \left(\frac{2R}{r} \right)^{2nk^2}. \end{eqnarray*} \noindent Using the triangle inequality, I can assume that $w_{(j,k)} \in \Gamma_R(X;m_1,k,\gamma_1)$ at the expense of replacing the $r$-cover condition with a $2r$-cover condition. Fix $k \in \mathbb N$, $j \in J_k$, and set $E = \Gamma_R(X;m_1,k,\gamma_1) \cap B_{\infty}(w_{(j,k)}, 2r)$. Now $w_{(j,k)} \in E \subset B_{\infty}(w_{(j,k)}, 2r)$ with $B_{\infty}(w_{(j,k)}, 2r)$ clearly convex and open with respect to the $\|\cdot\|_2$-metric (all norms on a finite dimensional space being equivalent). Moreover, for any $\xi, \eta \in B_{\infty}(w_{(j,k)}, 2r)$, $\|DF(\xi) - DF(\eta)\| \leq \kappa \|\xi -\eta\|_{\infty} < \kappa \cdot 2r < \frac{\beta}{4}$. 
Applying Lemma 4.2 with $t = 1 - \frac{\beta}{9C}$ and $A$ equal to the contractive mapping $\xi \mapsto 1_{[0,\beta)}(|DF(w_{(j,k)})|)(\xi - w_{(j,k)})$, for any $\epsilon >0$ \begin{eqnarray*} K_{\epsilon}(E) & \leq & K_{\frac{\beta \epsilon}{9C}}(A(E)) \cdot S_{\frac{\beta \epsilon}{4}}(F(E)).\\ \end{eqnarray*} Observe that \begin{eqnarray*} K_{\frac{\beta \epsilon}{9C}}(A(E)) & = & K_{\frac{\beta \epsilon}{9C}}[1_{[0,\beta)}(|DF(w_{(j,k)})|)(E - w_{(j,k)})] \\ & = & K_{\frac{\beta \epsilon}{9C}}[1_{[0,\beta)}(|DF(w_{(j,k)})|)((\Gamma_R(X;m_1,k,\gamma_1) \cap B_{\infty}(w_{(j,k)}, 2r)) - w_{(j,k)})] \\ & \leq & \exp\left(\Pi_{F,\phi_{\beta},R}(X, r, \tfrac{\beta \epsilon}{9C}; m_1,k,\gamma_1)\right). \end{eqnarray*} Hence, generously majorizing, \begin{eqnarray*} K_{\epsilon}(E) & \leq & K_{\frac{\beta \epsilon}{9C}}(A(E)) \cdot S_{\frac{\beta \epsilon}{4}}(F(E)) \\ & \leq & \exp\left(\Pi_{F,\phi_{\beta},R}(X, r, \tfrac{\beta \epsilon}{9C}; m_1,k,\gamma_1)\right) \cdot S_{\frac{\beta \epsilon}{4}}(F(E))\\ & \leq & \exp\left(\Pi_{F,\phi_{\beta},R}(X, r, \tfrac{\beta \epsilon}{9C}; m_1,k,\gamma_1)\right) \cdot S_{\frac{\beta \epsilon}{4}}(\Gamma_C(F(X):X;m,k,\gamma)).\\ \end{eqnarray*} Using the subadditivity of covering numbers and the estimate above: \begin{eqnarray*} K_{\epsilon}(\Gamma_R(X;m_1,k,\gamma_1)) & \leq & \sum_{j \in J_k} K_{\epsilon}\left (\Gamma_R(X;m_1,k,\gamma_1) \cap B_{\infty}(w_{(j,k)}, 2r) \right)\\ & \leq & \#J_k \cdot \exp\left(\Pi_{F,\phi_{\beta},R}(X, r, \tfrac{\beta \epsilon}{9C}; m_1,k,\gamma_1)\right) \cdot S_{\frac{\beta \epsilon}{4}}(\Gamma_C(F(X):X;m,k,\gamma)) \\ & \leq & \left(\frac{2R}{r} \right)^{2nk^2} \cdot \exp\left(\Pi_{F,\phi_{\beta},R}(X, r, \tfrac{\beta \epsilon}{9C};m_1,k,\gamma_1)\right) \cdot S_{\frac{\beta \epsilon}{4}}(\Gamma_C(F(X):X;m,k,\gamma)).
\\ \end{eqnarray*} Thus, \begin{eqnarray*} \mathbb K_{\epsilon, R}(X) & \leq & \mathbb K_{\epsilon, R}(X;m_1,\gamma_1) \\ & = & \limsup_{k \rightarrow \infty} k^{-2} \cdot \left[ \log(K_{\epsilon}(\Gamma_R(X;m_1,k,\gamma_1))) \right ] \\ & \leq & 2n \log(2R) + 2n \cdot|\log r| + \Pi_{F,\phi_{\beta},R}(X, r, \tfrac{\beta \epsilon}{9C};m_1,\gamma_1) + \mathbb S_{\frac{\beta \epsilon}{4}}(F(X):X;m,\gamma).\\ \end{eqnarray*} This holds for any choice of $m$ and $\gamma$ with $m_1 > m$ and $\gamma_1 < \gamma$, so \begin{eqnarray*} \mathbb K_{\epsilon,R}(X) & \leq & 2 n \log(2R) + 2 n \cdot|\log r| + \Pi_{F,\phi_{\beta},R}(X, r, \tfrac{\beta \epsilon}{9C}) + \mathbb S_{\frac{\beta\epsilon}{4}}(F(X):X), \\ \end{eqnarray*} as claimed. For the second inequality, dividing both sides by $|\log \epsilon|$ and taking a limit as $\epsilon \rightarrow 0$, it follows from the cutoff covering formulation of $\delta_0$ (Section 2.3) that \begin{eqnarray*} \delta_0(X) & \leq & \limsup_{\epsilon \rightarrow 0^+} \frac{2n \log (2R) + 2n \cdot |\log r|}{|\log \epsilon|} + \limsup_{\epsilon \rightarrow 0^+} \frac{\Pi_{F,\phi_{\beta},R}(X, r, \tfrac{\beta \epsilon}{9C})}{|\log(\tfrac{\beta \epsilon}{9C})|} \cdot \frac{|\log(\tfrac{\beta \epsilon}{9C})|}{|\log\epsilon|} + \\ & & \limsup_{\epsilon \rightarrow 0^+} \frac{\mathbb S_{\frac{\beta \epsilon}{4}}(F(X):X)}{|\log (\frac{\beta \epsilon}{4})|} \cdot \frac{|\log(\tfrac{\beta \epsilon}{4})|}{|\log\epsilon|}\\ & \leq & \Pi_{F,\phi_{\beta}, R}(X, r) + \delta_0(F(X):X). \end{eqnarray*} \end{proof} \begin{remark} In the proof of Theorem 4.5 one localizes the microstate space into balls of operator norm radius of order $r$. This is necessary to obtain uniform control of the derivatives via the Lipschitz estimate $\|DF(\xi) - DF(\eta) \| \leq \kappa \cdot \|\xi - \eta\|_{\infty}$. Such uniform estimates really require the operator norm and are not available with the $\|\cdot\|_2$-norm.
However, notice that the $\|\cdot\|_2$-norm is also used in a crucial way through the Euclidean/orthogonal estimates of Lemma 4.2. The use of both the $L^{\infty}$ and $L^2$ norms appears to be a minor detail here (in terms of metric entropy the $L^p$-norms are all equivalent by \cite{sr} up to an exponential factor). However, it will have rather severe consequences in subsequent entropy estimates involving iterative spectral splits.\end{remark} \begin{proposition} For any $\beta >0$ define $\phi_{\beta} = 1_{[0,\beta)}$. If $t, R >0$, then there exists a $\rho$ such that for any $0 < \beta < \rho$ and $r, \epsilon >0$, \begin{eqnarray*} \Pi_{F, \phi_{\beta},R}(X, r, \epsilon) \leq (\text{Nullity}(D^sF(X)) + t) \cdot \log \left(\frac{2r}{\epsilon} \right). \end{eqnarray*} \noindent Consequently, \begin{eqnarray*} \sup_{r, R>0} \Pi_{F,\phi_{\beta},R}(X,r) \leq \text{Nullity}(D^sF(X)) +t. \end{eqnarray*} \end{proposition} \begin{proof} Suppose $t, R >0$. By Proposition 3.13 there exist $m \in \mathbb N$ and $\rho, \gamma > 0$ such that if $\xi \in \Gamma_R(X;m,k,\gamma)$ and $0 < \beta < \rho$, then the real dimension of the range of the real orthogonal projection $1_{[0,\beta)}(|DF(\xi)|)$ on $(M_k(\mathbb C))^n$ is no greater than $(\text{Nullity}(D^sF(X))+ t)k^2$. It follows from this and coarse covering estimates for Euclidean balls (Section 2.4) that for any $\epsilon >0$, \begin{eqnarray*} \Pi_{F,\phi_{\beta},R}(X,r, \epsilon;m,k,\gamma) \leq (\text{Nullity}(D^sF(X)) + t)k^2 \cdot \log \left(\frac{2r}{\epsilon} \right). \end{eqnarray*} \noindent Thus, $\Pi_{F, \phi_{\beta},R}(X, r, \epsilon) \leq (\text{Nullity}(D^sF(X)) + t) \cdot \log \left(\frac{2r}{\epsilon} \right)$. It follows that $\Pi_{F,\phi_{\beta}, R}(X,r) \leq \text{Nullity}(D^sF(X)) + t$. This is true for all $0 < r, R$ and completes the proof. 
\end{proof} Combining Proposition 4.7 with Theorem 4.5 yields \begin{corollary} \begin{eqnarray*} \delta_0(X) & \leq & \text{Nullity}(D^sF(X)) + \delta_0(F(X):X) \\ & = & 2n - \text{Rank}(D^sF(X)) + \delta_0(F(X):X). \\ \end{eqnarray*} \end{corollary} When $X$ consists of self-adjoints or unitaries and $F$ preserves either condition, then one can replace the $D^s$ calculus with the self-adjoint or unitary calculus discussed in Section 3 to arrive at the following: \begin{corollary} If $X \subset M$ and $F \subset \mathfrak{A}_n$ are self-adjoint $n$ and $m$-tuples, then \begin{eqnarray*} \delta_0(X) & \leq & \text{Nullity}(D^{sa}F(X)) + \delta_0(F(X):X) \\ & = & n - \text{Rank}(D^{sa}F(X)) + \delta_0(F(X):X). \\ \end{eqnarray*} \end{corollary} \begin{proof} Define $L =\{(X_1 - X_1^*)/2,\ldots, (X_n- X_n^*)/2\} \subset \mathfrak{A}_n$. If $G = F \cup L$, then by Proposition 3.17, $\text{Nullity}(D^sG(X)) = \text{Nullity}(D^{sa}F(X))$. Also, $L(X) = \{0,\ldots, 0\}$. Thus, by Corollary 4.8, \begin{eqnarray*} \delta_0(X) & \leq & \text{Nullity}(D^sG(X)) + \delta_0(G(X):X) \\ & = & \text{Nullity}(D^{sa}F(X)) + \delta_0(F(X) \cup L(X):X) \\ & = & \text{Nullity}(D^{sa}F(X)) + \delta_0(F(X):X) \\ & = & n - \text{Rank}(D^{sa}F(X)) + \delta_0(F(X):X). \\ \end{eqnarray*} \end{proof} The unitary case follows from similar considerations: \begin{corollary} If $X \subset M$ and $F \subset \mathfrak{A}_n$ are $n$ and $m$-tuples where the elements of $X$ are unitaries and $F$ consists of $*$-monomials, then \begin{eqnarray*} \delta_0(X) & \leq & \text{Nullity}(D^{u}F(X)) + \delta_0(F(X):X) \\ & = & n - \text{Rank}(D^{u}F(X)) + \delta_0(F(X):X). \\ \end{eqnarray*} \end{corollary} \begin{proof} Define $G =\{X_1^*X_1-I,\ldots, X_n^* X_n-I\} \subset \mathfrak{A}_n$. If $H = F \cup G$, then by Proposition 3.27, $\text{Nullity}(D^sH(X)) = \text{Nullity}(D^uF(X))$. Also, $G(X) = \{0,\ldots, 0\}$. 
Thus, by Corollary 4.8, \begin{eqnarray*} \delta_0(X) & \leq & \text{Nullity}(D^sH(X)) + \delta_0(H(X):X) \\ & = & \text{Nullity}(D^uF(X)) + \delta_0(F(X) \cup G(X):X) \\ & = & \text{Nullity}(D^uF(X)) + \delta_0(F(X):X) \\ & = & n - \text{Rank}(D^uF(X)) + \delta_0(F(X):X). \\ \end{eqnarray*} \end{proof} \subsection{Examples} The first example is reassuring but not particularly enlightening: \begin{example} Suppose $X = \{x_1, x_2\}$ consists of commuting self-adjoint elements, $x_1$ has no eigenvalues, and $F = \{f\}$ where $f = X_2 X_1 - X_1^* X_2^* \in \mathfrak{A}_2$. Clearly $F(X) = (f(X)) = 0$. By definition, \begin{eqnarray*} D^{sa}F(X) & = & \begin{bmatrix} (\partial_{1}^{sa} f)(X) & (\partial_{2}^{sa} f)(X) \\ \end{bmatrix} \\ & = & \begin{bmatrix} x_2 \otimes I - I \otimes x_2 & I \otimes x_1 - x_1 \otimes I \\ \end{bmatrix} \\ & \in & M_{1\times 2}(M \otimes M^{op}). \\ \end{eqnarray*} Since $x_1$ is self-adjoint and has no eigenvalues, $I \otimes x_1 - x_1 \otimes I$ is injective. To see this observe that the moments of $I \otimes x_1 - x_1 \otimes I$ are the moments of the convolution of the spectral distribution of $x_1$ with that of $-x_1$; a convolution of non-atomic measures being nonatomic, the spectral distribution of $I \otimes x_1 - x_1 \otimes I$ is non-atomic, and in particular, the singleton set $\{0\}$ has $0$ measure. By faithfulness of the trace and the spectral theorem $I \otimes x_1 -x_1 \otimes I \in M \otimes M^{op}$ is injective and thus has dense range. In turn, this implies that $D^{sa}F(X)$ has dense range, so that $\text{Rank}(D^{sa}F(X)) =1$. 
Thus $\text{Nullity}(D^{sa}F(X)) =2-1=1$ and, as one would expect from Corollary 4.9, \begin{eqnarray*} \delta_0(X) & \leq & \text{Nullity}(D^{sa}F(X)) + \delta_0(F(X):X) \\ & \leq & 1 + 0 \\ & = & 1.\\ \end{eqnarray*} \end{example} Here is a slightly more complex example: \begin{example} Suppose $X = \{x_1, \ldots, x_n\}$ consists of unitaries in $M$ and $F = \{f\}$ where $f = A X_1^{s_1} B X_1^{s_2} \in \mathfrak{A}_n$, $A$ and $B$ are $*$-monomials in $X_2,\ldots, X_n$, and either $s_1=s_2=1$ or $s_1 = 1$ and $s_2=*$. Set $a = A(X)$ and $b = B(X)$; $a$ and $b$ are $*$-monomials in $x_2,\ldots, x_n$. Assume that $bx_1$ has no eigenvalues when $s_1=s_2=1$ or that $b$ has no eigenvalues when $s_1=1, s_2=*$, and that in either case, $f(X)=I$. \begin{eqnarray*} D^{u}F(X) & = & \begin{bmatrix} (\partial_{1}^{u} f)(X) & \cdots & (\partial_{n}^{u} f)(X) \\ \end{bmatrix} \\ & \in & M_{1\times n}(M \otimes M^{op}). \\ \end{eqnarray*} Using the fact that $s_i \in \{1,*\}$, the unitary calculus rule in Remark 3.24 yields \begin{eqnarray*} \partial_1^uf(X) & = & \begin{cases} x_1^*b^* \otimes b x_1 + I_M \otimes I_{M^{op}} &\mbox{if } s_1 = s_2=1 \\ (x_1 \otimes x_1^*) (b^* \otimes b - I_M \otimes I_{M^{op}}) & \mbox{if } s_1=1, s_2=*\\ \end{cases}\\ \end{eqnarray*} The assumption on the absence of eigenvalues shows that in either case above, the tensor product operators are injective. This is because, as in the self-adjoint case of Example 4.1, one can identify the tensor product operators as products of multiplication operators on two independent, nonatomic probability spaces with supports in the unit circle and argue accordingly. $\partial_1^uf(X) \in M \otimes M^{op}$ is injective and thus has dense range. In turn, this implies that $D^uF(X)$ has dense range, so that $\text{Rank}(D^uF(X)) =1$. Thus, $\text{Nullity}(D^uF(X)) =n -1$.
Since $f(X)=I$, Corollary 4.10 implies that \begin{eqnarray*} \delta_0(X) & \leq & \text{Nullity}(D^{u}F(X)) + \delta_0(F(X):X) \\ & \leq & (n-1) + 0 \\ & = & n-1.\\ \end{eqnarray*} The absence of eigenvalues condition (diffuseness) occurs naturally. Consider, for instance, the canonical unitaries associated to the generators $a,b$ for the group $\Gamma$ generated by the single relation $a^mb^{s_1}a^nb^{s_2}=e$ where $m, n \in \mathbb Z$ and $s_1,s_2 \in \{1,-1\}$. By the above computation $\delta_0(\Gamma) \leq 1$. When $s_1=1$ and $s_2 =-1$ these groups are the Baumslag-Solitar groups, $\Gamma_{m,n}$. In this case $b$ is in the normalizer of $a$ and the inequality reduces to a case first obtained by \cite{v2} (see also the strengthened generalizations in \cite{gs}). Both sets of authors actually show that the group von Neumann algebras are strongly $1$-bounded. I will find a different proof of this in terms of the spectral distribution of the derivative. Note that the isomorphism classes of these group von Neumann algebras have been studied and partially classified in \cite{ns}. \end{example} \begin{example} Suppose $X = \{x_1,\ldots, x_n\}$ consists of self-adjoint elements and $F =\{f_1,\ldots, f_p\}$ are self-adjoint. Assume further that $\delta_0(X)=n$, the maximum possible value. By Corollary 4.9, \begin{eqnarray*} n & = & \delta_0(X) \\ & \leq & n - \text{Rank}(D^{sa}F(X)) + \delta_0(F(X):X) \\ & \leq & n - \text{Rank}(D^{sa}F(X)) + \delta_0(F(X)), \\ \end{eqnarray*} whence $\text{Rank}(D^{sa}F(X)) \leq \delta_0(F(X))$. Thus, computing the rank of $D^{sa}F(X)$ gives a lower bound on $\delta_0(F(X))$. When $p=1$, $F$ consists of a single selfadjoint element $f$. By \cite{v1}, $\delta_0(f(X))=1$ iff $f(X)$ has no eigenvalues. So showing that $\text{Rank}(D^{sa}F(X)) =1$ guarantees that $f(X)$ has no eigenvalues.
This example is connected to the rank/nullity computation of matrices with operator-valued entries arising from freely independent self-adjoint/unitary tuples (\cite{ss}) as well as the free entropy dimension computations for a single self-adjoint polynomial under maximal free entropy dimension assumptions on the tuple (\cite{msm}). \end{example} \begin{example} Suppose $X = \{x_1,\ldots, x_n\} \subset M$ and $F=\{f_1,\ldots, f_{n-1}\} \subset \mathfrak{A}_n$ consist of self-adjoint elements such that for each $1 \leq i \leq n-1$, $\partial^{sa}_i f_i(X) \in M \otimes M^{op}$ has dense range. It is easily seen that $D^{sa}F(X)$ is the upper triangular $(n-1) \times n$ matrix \begin{eqnarray*} \begin{bmatrix} (\partial_{1}^{sa} f_1)(X) & * & \cdots & * & (\partial_{n}^{sa} f_1)(X) \\ 0 & (\partial_{2}^{sa} f_2)(X) & \cdots & & \vdots \\ \vdots & \ddots & \ddots & & \vdots \\ 0 & \cdots & 0 & (\partial_{n-1}^{sa} f_{n-1})(X) & (\partial_{n}^{sa} f_{n-1})(X) \\ \end{bmatrix} \in M_{(n-1)\times n}(M \otimes M^{op}). \end{eqnarray*} \noindent By Proposition 2.12, $\text{Rank}(D^{sa}F(X))= n-1$ so that from Corollary 4.9, $\delta_0(X) \leq 1 + \delta_0(F(X):X)$. \cite{v5} studied finite sequences of Haar unitaries $u_1, \ldots, u_n$ satisfying the pairwise commutation relation $u_i u_{i+1} = u_{i+1} u_i$ for $1 \leq i \leq n-1$. It was shown that $\delta_0(u_1,\ldots, u_n) \leq 1$. This was subsequently generalized and strengthened in \cite{gs}. The example here provides another way to see how pairwise commutativity of generators affects free entropy dimension. Indeed, suppose $x_1, \ldots, x_n$ are self-adjoint, diffuse elements in a tracial von Neumann algebra and $x_i x_{i+1} = x_{i+1} x_i$ for $1 \leq i \leq n-1$. Set $f_i = X_i X_{i+1} - X^*_{i+1} X^*_i$ for $1\leq i \leq n-1$ and $F = \{f_1, \ldots, f_{n-1}\}$, $X = \{x_1, \ldots, x_n\}$. For each $i$, $(\partial_i^{sa} f_i)(X) = I \otimes x_{i+1} - x_{i+1}\otimes I \in M \otimes M^{op}$.
As observed in Example 4.1, each of these operators has dense range, regarded as operators on $L^2(M \otimes M^{op})$. $F(X) =0$ so that $\delta_0(F(X):X)=0$. Applying the preceding paragraph yields $\delta_0(X) \leq 1$. Under the additional assumption of embeddability of $X$ into an ultraproduct of the hyperfinite $\mathrm{II}_1$-factor, $\delta_0(X)=1$ by \cite{j0}. As in Example 4.2, I'll show that all of these examples are strongly $1$-bounded by studying the spectral distribution of the derivative of $F$. \end{example} \begin{example} Suppose $\Gamma$ is a one-relator group on $n$ generators such that its relator is not a proper power (the relator cannot be written as a proper power of another element). The relator yields a $*$-monomial $w$ in $n$ indeterminates which, when applied to the canonical $n$-tuple of group unitaries $X$, satisfies $\text{Rank}(D^uw(X))=1$. This follows from combining \cite{b}, \cite{h}, and \cite{l}; see Proposition 7.5 for a full proof. It follows from Corollary 4.10 that $\delta_0(\Gamma) = \delta_0(X) \leq n -1$. This is to be expected. Indeed, denoting by $\delta^*$ the nonmicrostates free entropy in \cite{v4}, \cite{cs} combined with \cite{dl} yields \begin{eqnarray*} \delta_0(\Gamma) & \leq & \delta^*(\Gamma) \\ & \leq & \beta_1^{(2)}(\Gamma) +1\\ & = & (n-2) + 1 \\ & = & n-1.\\ \end{eqnarray*} This result and in fact a stronger entropy inequality will be stated and proven in Section 7. \end{example} \section{Iterating Spectral Splits I: Heuristics and Technical Estimates} In this section I'll discuss how to upgrade the results of the preceding section to obtain finiteness results for free packing and Hausdorff entropy and establish the necessary technical machinery. These results will be applied in later sections to provide new examples of strongly $1$-bounded von Neumann algebras arising from one-relator discrete groups.
Establishing entropy upper bounds in this context is considerably more difficult than establishing the free entropy dimension bounds of the previous section. Before getting into the details I'll give an overview of how they are related (the impatient, nuts-and-bolts reader can skip to 5.2 for the beginning of the proofs). After this informal discussion, I'll build tools for the next section. They come in three parts: 1) Euclidean estimates; 2) elementary von Neumann algebra approximations; 3) estimates for polynomials of matrices. \subsection{Overview} I will review in broad terms the previous section's main free entropy dimension argument and explain how it fails to provide an entropy bound. Then I'll discuss how to overcome this. In what follows $DF(X)$ denotes the derivative of $F$ at $X$ in a general informal sense, i.e., the distinction between $D^{sa}$, $D^u$, and the ordinary derivative will be blurred. Roughly, the proof of Theorem 4.7 goes like this. Suppose $F(X) =0$. By definition, $\Gamma_R(X;m,k,\gamma)$ is contained in the ball of operator norm radius $R$ in $(M_k(\mathbb C))^n$ so one can cover $\Gamma_R(X;m,k,\gamma)$ by no more than $(2R/r)^{2nk^2}$ balls of \emph{operator norm} radius $r <1$ with centers in $\Gamma_R(X;m,k,\gamma)$. The intersection of $\Gamma_R(X;m,k,\gamma)$ with each of these balls of operator norm radius $r$ has the property that $DF$ varies (in \emph{operator norm} as a real Hilbert space operator from $(M_k(\mathbb C))^n$ into $(M_k(\mathbb C))^d$) by no more than $r$ times a fixed constant determined by $F$, $R$, and the submultiplicativity of the operator norm. This uniform bound on the derivative combined with orthogonality estimates and a spectral splitting parameter $\beta$ allows one to dominate the $\|\cdot\|_2$-metric entropy on each $r$ operator norm neighborhood by the entropy of an $\|\cdot\|_2$ $r$-ball in the kernel of the differential plus an approximate subspace of dimension $t_1k^2$.
Here $t_1$ depends on how small $r$ is, which in turn is driven by the trace of the spectral projection $1_{(0,\beta)}(|DF(X)|)$. An $\epsilon$-covering (w.r.t. $\|\cdot\|_2$) bound for this approximate ``kernel'' $r$-ball multiplied by the initial $r$-covering bound gives a bound for $K_{\epsilon}(\Gamma_R(X;m,k,\gamma))$: \begin{eqnarray*} \left(\frac{2R}{r} \right)^{2nk^2} \cdot \left(\frac{r}{\epsilon} \right)^{(\text{Nullity}(DF(X)) + t_1) k^2}. \end{eqnarray*} \noindent The microstates limiting process extracts the normalized exponent, $\text{Nullity}(DF(X)) + t_1$, as an upper bound for the free entropy dimension. Letting $t_1 \rightarrow 0$ shows that the free entropy dimension is dominated by $\text{Nullity}(DF(X))$. Unfortunately the bound, as is, fails to provide a free packing entropy estimate with growth exponent $\text{Nullity}(DF(X))$ because of the residual error term $t_1$. This process was called \textbf{splitting the spectrum} in the introduction. In the above argument one covers the microstate spaces by $r$ operator norm balls, and then covers the intersection of the microstate space with each of these $r$-balls by $\epsilon$-balls taken from perturbed copies of $\ker(DF(X))$. Now repeat this process on each of these local subspace $\epsilon$-balls, intersecting them with the microstate space, and then covering them by appropriate balls of radius $\epsilon^2$ via some suitable Euclidean estimate (e.g., something like Lemma 4.2) again. The advantage of zooming in at a further $\epsilon$-scale is that the differential of the polynomial tuple moves less (less curvature), and yields a smaller approximate subspace perturbation, say with error $t_2 < t_1$, whose exponent is even closer to $\text{Nullity}(DF(X))$. One then repeats this process on the subspace balls of radius $\epsilon^3$, picking up an even better approximating subspace perturbation.
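To spell out, schematically, how the normalized exponent is extracted from a covering bound of this shape (suppressing the limits over $m$, $\gamma$, and $k$ in the definition of the microstates quantities), take logarithms and normalize by $k^2$: \begin{eqnarray*} \frac{1}{k^2} \log K_{\epsilon}(\Gamma_R(X;m,k,\gamma)) & \leq & 2n \log \left(\frac{2R}{r}\right) + (\text{Nullity}(DF(X)) + t_1) \log \left(\frac{r}{\epsilon}\right). \end{eqnarray*} \noindent Dividing by $|\log \epsilon|$ and letting $\epsilon \rightarrow 0$ kills the first term and sends the second to $\text{Nullity}(DF(X)) + t_1$; letting $t_1 \rightarrow 0$ then yields the dimension bound.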
Iterating this spectral splitting process $p$ times and keeping $\epsilon$ \emph{fixed} throughout, a back-of-the-envelope computation shows \begin{eqnarray*} K_{\epsilon^p}(\Gamma_R(X;m,k,\gamma)) &\leq & \left(\frac{2R}{r} \right)^{2nk^2} \left(\frac{r}{\epsilon} \right)^{(\text{Nullity}(DF(X)) + t_1) k^2} \left(\frac{\epsilon}{\epsilon^2} \right)^{(\text{Nullity}(DF(X)) + t_2) k^2} \cdot \\ && \left(\frac{\epsilon^2}{\epsilon^3} \right)^{(\text{Nullity}(DF(X)) + t_3) k^2} \cdots \left(\frac{\epsilon^{p-1}}{\epsilon^p} \right)^{(\text{Nullity}(DF(X)) + t_p) k^2} \\ & \leq & \left(\frac{2R}{r} \right)^{2nk^2} \left(\frac{1}{\epsilon^p} \right)^{[\text{Nullity}(DF(X))]k^2} \left(\frac{1}{\epsilon}\right)^{(t_1 + \cdots +t_p)k^2}. \end{eqnarray*} \noindent The last expression will yield a finite packing entropy bound provided that the exponent term in the last line, $t_1 + \cdots + t_p$, can be uniformly bounded for all $p$. With some work this can be guaranteed by imposing decay conditions on the spectral distribution of $|DF(X)|$ near $0$. Somewhat surprisingly, this spectral decay condition is equivalent to $DF(X)$ having finite Fuglede-Kadison-L{\"u}ck determinant. This problem, where one tries to bound a space which has the same local structure at every scale (in this case on the scale of powers of a fixed $\epsilon$), is similar to the situation of bounding the Hausdorff measure of a self-similar fractal. There are, however, some strong assumptions made in the iterative spectral splitting argument which must be addressed. Call a covering inequality \textit{asymptotically coarse} if its dominating term explicitly contains constants of the form $C^{\alpha k^2}$ where $C >1$ and $\alpha >0$. Asymptotically coarse inequalities with uniformly bounded constants $C$ can be used a fixed, finite number of times without destroying the qualitative nature of an upper bound for entropy. They are lethal in the iterative context above.
For example, imagine that each of the iterations of the computation above involved a covering bound involving an additional factor of $2^{k^2}$, a seemingly benign bound that appears for example in the comparison of the $\|\cdot\|_{\infty}$ and $\|\cdot \|_2$ norms in Section 2.4 or in the single spectral split argument for the dimension above. After $p$ iterations one would end up with a constant of the form $2^{p k^2}$ and after taking appropriate limits one has an additive factor of $p \log 2$ which converges to $\infty$ as $p \rightarrow \infty$. This would leave a vacuous upper bound of $\infty$. In the estimate above there appear to be no asymptotically coarse inequalities. However, one should expect these in two places: 1) scaling estimates for balls; 2) norm switching. 1) refers to finding sharp bounds for coverings of the unit ball by $\epsilon$-balls in Euclidean space. In the above I assumed that one could find such a cover with no more than $(1/\epsilon)^n$ balls as opposed to, say, $(C/\epsilon)^n$. This appears in the ratios of powers of $\epsilon$ assumed in the first-pass computation above. While false in this strict form, it is asymptotically true (\cite{rogers}). 2) is both more subtle and troublesome. By norm switching I'm referring to the process of relying on different norms in order to make different estimates. Norm switching occurred in the dimension argument. There I used properties specific to both the $L^{\infty}$ and the $L^2$-norm. While one can account for entropy changes when moving from one norm to the other with St. Raymond's volume computations \cite{sr}, the error terms are asymptotically coarse. I'll deal with 2) by using the $L^2$-norm metric exclusively for covering estimates.
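To make 1) quantitative, here is the standard volume comparison (a sketch, not the sharp asymptotic form from \cite{rogers}): if $B_1$ is the unit ball of an $n$-dimensional real normed space and $0 < \epsilon \leq 1$, then a maximal $\epsilon$-separated subset of $B_1$ is an $\epsilon$-net, and the disjoint $\epsilon/2$-balls centered at its points all lie in the ball of radius $1 + \epsilon/2$, whence \begin{eqnarray*} K_{\epsilon}(B_1) \leq \left(\frac{1 + \epsilon/2}{\epsilon/2}\right)^n = \left(1 + \frac{2}{\epsilon}\right)^n \leq \left(\frac{3}{\epsilon}\right)^n. \end{eqnarray*} \noindent The factor $3^n$, with $n$ here of order $k^2$, is exactly an asymptotically coarse constant; the content of \cite{rogers} is that it can be removed asymptotically.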
I'll use Chebyshev's inequality to choose 'good' projections upon which qualitative $L^{\infty}$ estimates do hold, estimate the traces of the 'bad' projections upon which they fail, then bound the covering numbers of the 'bad' projections, and 'transfer' their entropy into the (assumed) geometrically decaying spectrum of the derivative, $DF(X)$. Exploiting $L^2$ estimates for the explicit form of the derivative of the polynomial function $F$ is crucial. The residual set of 'bad' projections on which the uniform $L^{\infty}$ bounds fail is so small from a dimensional perspective that one can control it without introducing asymptotically coarse estimates. This division of the operators into a 'good' part where $L^{\infty}$-bounds are available and 'bad' parts where they fail but for which an $L^2$-estimate is available is somewhat reminiscent of the Calderon-Zygmund decomposition. \subsection{Bindings and Fringes} The results of this subsection will be phrased in the context of finite dimensional, real Hilbert spaces. Throughout denote by $V$ and $W$ two finite dimensional, real Hilbert spaces, $L(V, W)$ the set of real linear operators from $V$ into $W$, and $P(V)$ the subset of $L(V,V)$ consisting of orthogonal projections. The key example to keep in mind is when $V$ and $W$ are of the form $(M_k(\mathbb C))^n$ for some $n$, and the real inner product is the one generated by the tracial state: $\langle \xi, \eta \rangle = \sum_{i=1}^n Re(tr_k(\eta_i^*\xi_i))$ where $\xi = (\xi_1,\ldots, \xi_n)$ and $\eta = (\eta_1,\ldots, \eta_n)$. Notice that the real inner product norm here coincides with the standard complex inner product norm. In this context, I want $\epsilon$-entropy estimates for level sets of a $*$-polynomial function restricted to a ball $B$ of radius $\rho$ computed w.r.t. the inner product.
It is impossible to get a global, dimension-free (one which doesn't refer to the dimension of the ambient inner product space) Lipschitz bound for $f$ on $B$ since multiplication fails to be continuous w.r.t. the inner product ($L^2$)-norm. However, such dimension-free estimates will work on almost the entire space. Clarifying what 'almost' means is the goal of this subsection and is expressed through the concepts of a \textbf{binding} and its \textbf{fringe}. This short subsection will define bindings and fringes. Their application to $*$-polynomials via Chebyshev's inequality will follow in the two subsequent sections. Suppose $\Lambda \subset L(V, W)$ and fix a linear operator $T \in L(V, W)$. \begin{definition} If $\epsilon >0$, an $\epsilon$-binding of $\Lambda$ focused on $T$ is a map $\Theta:\Lambda \rightarrow P(V)$ such that $\|(S-T)\Theta(S)\| < \epsilon$ for every $S \in \Lambda$. \end{definition} \begin{definition} Suppose $\Theta$ is an $\epsilon$-binding of $\Lambda$ focused on $T$. The fringe of $\Theta$ on a set $K \subset V$ is \begin{eqnarray*} \mathcal F(\Theta, K) = \{(T - S)\Theta(S)^{\bot} \xi: \xi \in K, S \in \Lambda\} \subset W. \end{eqnarray*} If $r >0$, then $\mathcal F(\Theta, r)= \mathcal F(\Theta, B)$ where $B$ is the closed ball of radius $r$ centered at the origin. \end{definition} \begin{remark} Observe that if $K$ is symmetric, i.e., $x \in K$ iff $-x \in K$, then so is $\mathcal F(\Theta, K)$ by linearity. Balls being symmetric, $\mathcal F(\Theta, r) = - \mathcal F(\Theta, r)$ for any $r >0$. Observe also that if $\Theta$ is an $\epsilon$-binding of $\Lambda$ focused on $T$ and $\Lambda_0 \subset \Lambda$, then $\Theta$ induces, by restriction, an $\epsilon$-binding $\Theta_0$ of $\Lambda_0$ focused on $T$. It is easy to see that for any set $K \subset V$, $\mathcal F(\Theta_0,K) \subset \mathcal F(\Theta, K)$.
\end{remark} \begin{definition} If $E \subset K \subset V$ with $K$ an open, convex set, and $f: K \rightarrow W$ is a $C^1$-function, then for any $x, y \in K$ define the distance operator from $x$ to $y$ by \begin{eqnarray*} T_{x,y} = \int_0^1 Df(x+t(y-x)) \, dt \in L(V, W). \end{eqnarray*} The set of distance operators for $E$ with respect to $f$ is \begin{eqnarray*} \mathcal D(f, E) = \{T_{x,y}: x, y \in E\} \subset L(V, W). \end{eqnarray*} \end{definition} Recall from the mean value theorem (Section 2.5) that $T_{x,y}(x-y) = f(x)-f(y)$, which justifies the term 'distance operator'. \begin{remark} Notice that if $E_1 \subset E_2$, then by definition, $\mathcal D(f, E_1) \subset \mathcal D(f, E_2)$. In particular, if $\Theta$ is an $\epsilon$-binding of $\mathcal D(f, E_2)$ focused on an operator $T$, then by Remark 5.3 $\Theta$ induces by restriction an $\epsilon$-binding of $\mathcal D(f, E_1)$ focused on $T$. \end{remark} The following lemma says that the expansiveness of $f$ can be controlled locally by the fringe of an $\epsilon$-binding of the set of distance operators focused on a single derivative. \begin{lemma} Suppose $K \subset V$ is an open, convex set, $f:K \rightarrow W$ is a $C^1$-function, $x_0 \in K$, and $\Lambda = \mathcal D(f, K)$. If $\Theta$ is an $\epsilon$-binding of $\Lambda$ focused on $Df(x_0)$, then for each $x, y \in K$ there exists a corresponding operator $A: V \rightarrow W$ such that $\|A\| < \epsilon$ and \begin{eqnarray*} f(x) - f(y) - Df(x_0)(x-y) -A(x-y) \in \mathcal F(\Theta, \|x-y\|). \end{eqnarray*} \end{lemma} \begin{proof} Suppose $x, y \in K$. Set $S =\int_0^1 Df(x + t(y-x)) \, dt$. $S \in \Lambda$ so by definition of an $\epsilon$-binding, $\|(S-Df(x_0))\Theta(S)\| < \epsilon$.
By the mean value theorem, \begin{eqnarray*} f(x) - f(y) & = & S(x-y) \\ & = & Df(x_0)(x-y) + (S-Df(x_0))\Theta(S)(x-y) + (S- Df(x_0))\Theta(S)^{\bot}(x-y) \\ \end{eqnarray*} \noindent Regrouping terms gives \begin{eqnarray*} f(x) - f(y) - Df(x_0)(x-y) - (S-Df(x_0))\Theta(S)(x-y) & = & (S- Df(x_0))\Theta(S)^{\bot}(x-y)\\ & \in & \mathcal F(\Theta, \|x-y\|).\\ \end{eqnarray*} \noindent Set $A = (S-Df(x_0))\Theta(S)$. By definition, $\|A\| < \epsilon$ and the above completes the proof. \end{proof} \begin{lemma} Suppose $x_0 \in E \subset B \subset V$ with $B$ an open ball of radius $\rho >0$, $f:B \rightarrow W$ is a $C^1$-function, and $\Lambda = \mathcal D(f, B)$. If for all $x \in E$, $\|f(x)\| <\gamma$ and $\Theta$ is a $\rho_1$-binding of $\Lambda$ focused on $Df(x_0)$, then there exists an orthogonal operator $U$ such that for any spectral projection $Q$ of $|Df(x_0)|$, \begin{eqnarray*} Df(x_0)Q(E- x_0) \subset UQU^* \left[ \mathcal N_{2 \rho \rho_1 + 2 \gamma}(\mathcal F(\Theta, 2 \rho)) \right ]. \end{eqnarray*} \end{lemma} \begin{proof} By Lemma 5.6, for any $x \in E \subset B$ there exists a corresponding operator $A:V \rightarrow W$ such that $\|A\| < \rho_1$ and $f(x) - f(x_0) - Df(x_0)(x-x_0) - A(x-x_0) \in \mathcal F(\Theta,\|x-x_0\|)$. Since $x, x_0 \in E$, $\|f(x)\|, \|f(x_0)\| < \gamma$ so that \begin{eqnarray*} Df(x_0)(x-x_0) & \in & -A(x-x_0) - \mathcal N_{2\gamma}(\mathcal F(\Theta, \|x-x_0\|)) \\ & \subset & \mathcal N_{2 \rho \rho_1 + 2 \gamma}(\mathcal F(\Theta, 2 \rho)).\\ \end{eqnarray*} \noindent Applying the polar decomposition shows \begin{eqnarray*} Df(x_0)Q(x-x_0) & = & U|Df(x_0)|Q(x-x_0) \\ & = & U Q|Df(x_0)|(x-x_0) \\ & = & UQU^* Df(x_0)(x-x_0).\\ \end{eqnarray*} \noindent Putting these two observations together, \begin{eqnarray*} Df(x_0)Q(x - x_0) & = & UQU^*Df(x_0)(x-x_0) \\ & \in & UQU^* \left[ \mathcal N_{2 \rho \rho_1 + 2 \gamma}(\mathcal F(\Theta, 2 \rho)) \right ]. \\ \end{eqnarray*} $x \in E$ was arbitrary so I'm done.
\end{proof} \begin{lemma} Suppose $x_0, E, B, f, \Lambda, \gamma, \Theta, \rho,$ and $\rho_1$ are as in the hypotheses of Lemma 5.7. For $\alpha >0$ define $Q = 1_{[\alpha, \infty)}(|Df(x_0)|)$. If $t \in (0,1)$, $\epsilon >0$, and $\beta = \alpha t \epsilon - 4 \rho \rho_1 - 4 \gamma > 0$, then \begin{eqnarray*} K_{\epsilon}(E) \leq K_{(1-t^2)^{1/2}\epsilon}(Q^{\bot}(E)) \cdot S_{\beta}(\mathcal F(\Theta, 2\rho)). \end{eqnarray*} \end{lemma} \begin{proof} By orthogonality and Proposition 2.1(v) \begin{eqnarray*} K_{\epsilon}(E) & \leq & K_{(1-t^2)^{1/2}\epsilon}(Q^{\bot}(E)) \cdot K_{t\epsilon}(Q(E)) \\ & \leq & K_{(1-t^2)^{1/2}\epsilon}(Q^{\bot}(E)) \cdot S_{t\epsilon}(Q(E)). \\ \end{eqnarray*} \noindent By the spectral theorem, for any $\xi, \eta$ in the range of $Q$, $\|Df(x_0)(\xi) -Df(x_0)(\eta)\| \geq \alpha \|\xi - \eta\|$. It follows from this that $S_{t\epsilon}(Q(E)) \leq S_{\alpha t \epsilon}(Df(x_0)Q(E))$. To complete the proof it suffices to show that $S_{\alpha t \epsilon}(Df(x_0)Q(E)) \leq S_{\beta}(\mathcal F(\Theta, 2\rho))$. By Lemma 5.7, \begin{eqnarray*} Df(x_0)(Q(E)) - Df(x_0)(Q(x_0)) & = & Df(x_0)(Q(E - x_0)) \\ & \subset & UQU^* \left[ \mathcal N_{2 \rho \rho_1 + 2 \gamma}(\mathcal F(\Theta, 2 \rho)) \right ]. \\ \end{eqnarray*} From this, the fact that $UQU^*$ is a contraction, and Proposition 2.1(iv) \begin{eqnarray*} S_{\alpha t \epsilon}(Df(x_0)Q(E)) & \leq & S_{\alpha t \epsilon}(UQU^* \left[ \mathcal N_{2 \rho \rho_1 + 2 \gamma}(\mathcal F(\Theta, 2 \rho)) \right ]) \\ & \leq & S_{\alpha t \epsilon} \left[ \mathcal N_{2 \rho \rho_1 + 2 \gamma}(\mathcal F(\Theta, 2 \rho)) \right ] \\ & \leq & S_{\alpha t \epsilon - 4 \rho \rho_1-4 \gamma}(\mathcal F(\Theta, 2 \rho)) \\ & = & S_{\beta}(\mathcal F(\Theta, 2 \rho)). \\ \end{eqnarray*} \end{proof} \subsection{Estimates in Tracial von Neumann Algebras} The estimates here will be used to construct bindings and fringes for maps that arise as finite tuples of $*$-polynomials.
Essentially they say that the $\|\cdot\|_2$ norm is quasi-submultiplicative, i.e., bounds of the form $\|xyp\|_{\infty} \leq C^2 \|x\|_2 \|y\|_2$ hold on a projection $p$ with trace almost equal to $1$, and moreover, both the constant and the trace deficiency are controlled by $C$. Again the context to keep in mind below is when $M=M_k(\mathbb C)$ (and in fact the only case I'll need), however I'll phrase the results in the tracial von Neumann algebra setting. \begin{lemma} If $z \in M$ and $C >0$, then there exists a projection $p \in M$ such that $\|zp\|_{\infty} < C\|z\|_2$ and $\varphi(p) \geq 1- C^{-2}$. \end{lemma} \begin{proof} This is Chebyshev's inequality. Set $p = 1_{[0, C\|z\|_2]}(|z|)$ and denote by $u$ the partial isometry in the polar decomposition of $z$. $\|zp\|_{\infty} = \|u|z| p\|_{\infty} \leq \| |z| p \|_{\infty} < C\|z\|_2$ which yields the first inequality. For the second, \begin{eqnarray*} 0 & \leq & (C\|z\|_2)^2 \cdot p^{\bot} \\ & \leq & |z|^2 p^{\bot} \\ & \leq & |z|^2.\\ \end{eqnarray*} Taking traces yields $(C \|z\|_2)^2 \cdot \varphi(p^{\bot}) \leq \varphi(|z|^2) = \|z\|_2^2$. Grouping terms, $1 - \varphi(p) = \varphi(p^{\bot}) \leq C^{-2}$ and the second inequality follows. \end{proof} \begin{lemma} If $x, q \in M$ with $q$ a projection and $qH \subset cl(xH)$, then there exists a projection $e \in M$ such that $eH \subset (\ker x)^{\bot}$ satisfying $xeH \subset qH$ and $\varphi(e) = \varphi(q)$. \end{lemma} \begin{proof} I'll prove this first in the case where $x \geq 0$ and then use the polar decomposition to arrive at the general claim. So assume $x \geq 0$ and $qH \subset cl(xH)$. $f = 1_{(0,\infty)}(x)$ is the projection onto $cl(xH)$. Define for each $n$, $f_n = 1_{(1/n, \infty)}(x)$, $f_0 = 1_{\{0\}}(x)$, and note that $f_0$ is the projection onto $\ker x$. For each $n$, note that $q\wedge f_n \leq f_n$, $f_n$ commutes with $x$, and that $x f_n = f_n x f_n$ is invertible when regarded as an element of $f_nMf_n$.
It follows that there exists a projection $e_n \leq f_n$ such that $x e_n H = (q \wedge f_n)H$ and $\varphi(e_n) = \varphi(q \wedge f_n)$. Indeed, this is obtained by taking the projection onto the range of $(f_n xf_n)^{-1} (q \wedge f_n)$ where the inverse is taken w.r.t. the compression $f_nMf_n$. Notice also from this definition and the Borel spectral theorem that for any $n$, $e_n \leq e_{n+1}$. Thus $\langle e_n \rangle_{n=1}^{\infty}$ is an increasing sequence of projections such that for each $n$, $\varphi(e_n) = \varphi(q \wedge f_n)$ and $xe_n H \subset (q \wedge f_n)H$. Denote by $e$ the strong operator topology limit of the $e_n$ (or equivalently, the projection onto the closure of $\cup_{n=1}^{\infty} e_nH$). Because $e_n \leq f_n \leq f$, $e \leq f$ where $f$ is the projection onto $cl(xH) = (\ker x)^{\bot}$ ($x \geq 0$). For any $n$, \begin{eqnarray*} xe_nH & \subset & (q \wedge f_n)H \\ & \subset & qH. \\ \end{eqnarray*} \noindent It follows that $xeH \subset qH$. Moreover, since $\lim_{n \rightarrow \infty} f_n = f$ (the projection onto $cl(xH)$) and $q\leq f$ by hypothesis, \begin{eqnarray*} \varphi(e) & = & \lim_{n \rightarrow \infty} \varphi(e_n) \\ & = & \lim_{n \rightarrow \infty} \varphi(q \wedge f_n) \\ & = & \varphi(q \wedge f) \\ & = & \varphi(q). \end{eqnarray*} \noindent This finishes the proof in the case where $x$ is positive. For the general case, suppose $x, q \in M$ are given with $qH \subset cl(xH)$. Consider the polar decomposition $x^* = u|x^*|$. Thus, $u \in M$ is a partial isometry with initial range $cl(|x^*|H) = cl(xH)$ and final range $cl(x^*H) = (\ker x)^{\bot}$. Apply the preceding result to the positive element $|x^*|$ and $q$, noting that $qH \subset cl(xH) = cl(|x^*|H)$. There exists a projection $p \in M$ such that $pH \subset (\ker |x^*|)^{\bot} = (\ker x^*)^{\bot}$, $|x^*|pH \subset qH$, and $\varphi(p) = \varphi(q)$. Set $e = upu^* \in M$.
$eH \subset (\ker x)^{\bot}$ and \begin{eqnarray*} xeH & = & |x^*|u^*(upu^*)H \\ & = & |x^*| p u^* H\\ & = & |x^*| p H \\ & \subset & qH.\\ \end{eqnarray*} \noindent $\varphi(e) = \varphi(upu^*) = \varphi(p) = \varphi(q)$, completing the proof. \end{proof} \begin{lemma} Suppose $x, q \in M$ with $q$ a projection. There exists a projection $p \in M$ such that $\varphi(p) \geq\varphi(q)$ and $xpH \subset qH$. \end{lemma} \begin{proof} Denote by $f$ the projection onto $cl(xH)$. From the polar decomposition, if $e$ is the projection onto $\ker x$, then $\varphi(e) + \varphi(f) =1$. Thus, $\varphi(e) = 1 -\varphi(f) = \varphi(f^{\bot})$. Now \begin{eqnarray*} 1 & \geq & \varphi(q \vee f) \\ & = & \varphi(q) + \varphi(f) - \varphi(q\wedge f). \\ \end{eqnarray*} \noindent Thus, $\varphi(e) + \varphi(q\wedge f) = 1 -\varphi(f) + \varphi(q \wedge f) \geq \varphi(q)$. Obviously $q \wedge f \leq f$. Invoke Lemma 5.10 to produce a projection $p_1 \in M$ such that $p_1H \subset (\ker x)^{\bot}$, $xp_1H \subset (q \wedge f)H$, and $\varphi(p_1) = \varphi(q \wedge f)$. Set $p = e + p_1$. Since $e$ and $p_1$ are orthogonal, \begin{eqnarray*} \varphi(p) & = & \varphi(e) + \varphi(p_1) \\ & = & \varphi(e) + \varphi(q \wedge f) \\ & \geq & \varphi(q).\\ \end{eqnarray*} \noindent Lastly, $xpH = xp_1H \subset (q \wedge f)H \subset qH$. \end{proof} \begin{lemma} If $z_1, \ldots, z_n \in M$ and $C>0$, then there exists a projection $p \in M$ such that $\|z_1 \cdots z_np\|_{\infty} < C^n \|z_1\|_2 \cdots \|z_n\|_2$ and $\varphi(p) \geq 1 - nC^{-2}$. \end{lemma} \begin{proof} I will prove this by induction on $n$. The base case $n=1$ is covered in Lemma 5.9. Assume it's true for $n=k$. Suppose $z_1, \ldots, z_{k+1} \in M$. By the inductive hypothesis there exists a projection $p_0 \in M$ such that $\|z_2 \cdots z_{k+1} p_0\|_{\infty} < C^k \|z_2\|_2 \cdots \|z_{k+1}\|_2$ and $\varphi(p_0) \geq 1- kC^{-2}$.
By Lemma 5.9 there exists a projection $p_1$ such that $\|z_1 p_1\|_{\infty} < C \|z_1\|_2$ and $\varphi(p_1) \geq 1 -C^{-2}$. By Lemma 5.11 there exists a projection $q \in M$ such that $\varphi(q) \geq \varphi(p_1)$ and $(z_2 \cdots z_{k+1}) qH \subset p_1H$. Set $p = q \wedge p_0$. Clearly $p = qp = p_0 p $. \begin{eqnarray*} \|z_1 \cdots z_{k+1} p \|_{\infty} & = & \|z_1 (z_2 \cdots z_{k+1} q p)\|_{\infty} \\ & = & \|z_1 p_1(z_2 \cdots z_{k+1} q p)\|_{\infty} \\ & \leq & \|z_1 p_1\|_{\infty} \cdot \|p_1 (z_2 \cdots z_{k+1}qp)\|_{\infty} \\ & \leq & C \|z_1\|_2 \cdot \|z_2 \cdots z_{k+1}p_0 p\|_{\infty} \\ & \leq & C^{k+1} \cdot \|z_1\|_2 \cdots \|z_{k+1}\|_2.\\ \end{eqnarray*} \noindent Also, $\varphi(p) = \varphi(q \wedge p_0) = \varphi(p_0) + \varphi(q) - \varphi(p_0\vee q) \geq 1 - (k+1)C^{-2}$. This verifies the condition for $n=k+1$ and completes the proof. \end{proof} By taking adjoints one immediately gets: \begin{corollary} If $z_1, \ldots, z_n \in M$ and $C>0$, then there exists a projection $p \in M$ such that $\|p z_1 \cdots z_n\|_{\infty} < C^n \|z_1\|_2 \cdots \|z_n\|_2$ and $\varphi(p) \geq 1 - nC^{-2}$. \end{corollary} Combining Corollary 5.13 with Lemma 5.12 yields: \begin{corollary} If $x_1, \ldots, x_k, y_1,\ldots, y_n \in M$ and $C>0$, then there exists a projection $p \in M$ such that $\|x_1 \cdots x_k p\|_{\infty} < C^{k} \|x_1\|_2 \cdots \|x_k\|_2$, $\|p y_1 \cdots y_n\|_{\infty} < C^n \|y_1\|_2 \cdots \|y_n\|_2$, and $\varphi(p) \geq 1 - (k+n) C^{-2}$. \end{corollary} \subsection{Polynomials, derivatives, and coverings for matrices} In this last subsection I'll construct bindings for the derivative of a tuple of $*$-polynomials and estimate its fringe entropy. The estimates will invoke the results in subsection 5.2 while the bindings will be created from the projections in subsection 5.3 upon which quasi-submultiplicative estimates are valid.
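Since $M = M_k(\mathbb C)$ with $\varphi = tr_k$ is the only case that will be used, it may help to unwind Lemma 5.9 there (a sanity check, not needed in the sequel). If $z \in M_k(\mathbb C)$ has singular values $s_1 \geq \cdots \geq s_k$, then $\|z\|_2^2 = \frac{1}{k}\sum_{i=1}^k s_i^2$ and \begin{eqnarray*} \#\{i : s_i > C\|z\|_2\} \cdot (C\|z\|_2)^2 \; \leq \; \sum_{i=1}^k s_i^2 \; = \; k\|z\|_2^2, \end{eqnarray*} \noindent so at most $k/C^2$ singular values exceed $C\|z\|_2$; the spectral projection $p$ of $|z|$ corresponding to the remaining singular values satisfies $tr_k(p) \geq 1 - C^{-2}$ and $\|zp\|_{\infty} \leq C\|z\|_2$.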
As in Section 3, $\mathfrak{A}_n$ denotes the universal, unital, complex $*$-algebra on $n$-indeterminates, $X_1,\ldots, X_n$. Unless otherwise stated, $F \in \mathfrak{A}_n$, i.e., $F$ is a noncommutative $*$-polynomial in $n$-indeterminates. $F$ can uniquely be written in the reduced form $c_1w_1 + \cdots +c_Nw_N$ where the $c_i$ are nonzero complex numbers and the $w_i$ are distinct $*$-monomials in the $X_i$. Define $c(F)$ to be the maximum over the set $\{|c_i|, \ell(w_i): 1 \leq i \leq N\} \cup \{N\}$ where $\ell(\cdot)$ is the length function defined on the $*$-monomials of $\mathfrak{A}_n$. As usual, $\deg(F) = \max_{1\leq i \leq N} \ell(w_i)$. When $F=\{f_1,\ldots, f_p\}$ is a finite $p$-tuple of elements in $\mathfrak{A}_n$, then $c(F) = \max_{1\leq i \leq p} c(f_i)$ and $\deg(F) = \max_{1 \leq i \leq p} \deg(f_i)$. For fixed $k$, $F$ induces an obvious smooth map from $(M_k(\mathbb C))^n$ into $M_k(\mathbb C)$. In this subsection for a fixed $\xi \in (M_k(\mathbb C))^n$, $DF(\xi): (M_k(\mathbb C))^n \rightarrow M_k(\mathbb C)$ denotes the derivative of $F$ at $\xi$ and for each $1 \leq i \leq n$, $\partial_i F(\xi): M_k(\mathbb C) \rightarrow M_k(\mathbb C)$ denotes the $i$th partial derivative of $F$ w.r.t. the $i$th coordinate of $\xi = (\xi_1,\ldots, \xi_n)$ (Section 2.5). Denote by $B_{k, R}$ the ball of $\|\cdot\|_2$-radius $R$ in $M_k(\mathbb C)$ and $(B_{k,R})^n \subset (M_k(\mathbb C))^n$ the direct sum of $n$ copies of $B_{k,R}$. $\sigma_k: M_k(\mathbb C) \otimes M_k(\mathbb C)^{op} \rightarrow L(M_k(\mathbb C))$ is the complex, trace preserving $*$-isomorphism determined by $\sigma_k(a \otimes b^{op})(\xi) = a\xi b$, $a,b, \xi \in M_k(\mathbb C)$. Throughout this subsection $i$ and $j$ denote integer indices. \begin{lemma} Suppose $1 \leq j \leq n$.
There exist $N \in \mathbb N$, and for $1 \leq i \leq N$, $\lambda_i \in \mathbb C$, $d_i \in \{0,1\}$, and $*$-monomials $w_i, v_i \in \mathfrak{A}_{2n}$, all dependent only on $F$ and $j$, such that for each $i$ either $w_i$ or $v_i$ has an occurrence of the last $n$ of the $2n$-indeterminate generators for $\mathfrak{A}_{2n}$ and for any $X, H \in (M_k(\mathbb C))^n$, \begin{eqnarray*} (\partial_jF)(X+H) = (\partial_jF)(X) + \sum_{i=1}^N \lambda_i \cdot \sigma_k(w_i(X,H) \otimes v_i(X,H)^{op}) \circ J^{d_i}. \end{eqnarray*} Moreover, $N$ and $\max_{1\leq i \leq N} |\lambda_i|$ are no greater than $c(F)\cdot(2n)^{c(F)}$ and $\max_{1\leq i \leq N} (\ell(w_i)+\ell(v_i)) < \deg(F)$. \end{lemma} \begin{proof} By writing $F$ in reduced form there exist $N \in \mathbb N$ and for each $1 \leq i \leq N$, $*$-monomials $a_i, b_i \in \mathfrak{A}_n$, complex numbers $\lambda_i$, and $d_i \in \{0,1\}$ depending on $F$ and $j$ such that for any $Y \in (M_k(\mathbb C))^n$, \begin{eqnarray*} \partial_jF(Y) = \sum_{i=1}^N \lambda_i \sigma_k(a_i(Y) \otimes b_i(Y)^{op}) \circ J^{d_i} \end{eqnarray*} where $\max_{1 \leq i \leq N} (\ell(a_i) + \ell(b_i)) < \deg(F)$. Substituting $Y=X+H$ and $Y=X$ into the above equation and subtracting yields: \begin{eqnarray*} \partial_jF(X+H) - \partial_jF(X) = \sum_{i=1}^N \lambda_i \cdot \sigma_k[a_i(X+H) \otimes b_i(X+H)^{op} - a_i(X) \otimes b_i(X)^{op}] \circ J^{d_i}.\\ \end{eqnarray*} Each bracketed summand on the RHS can be expanded as a further sum of terms of the form $w_i(X,H) \otimes v_i(X,H)$ where $w_i,v_i \in \mathfrak{A}_{2n}$ and either $w_i$ or $v_i$ has an occurrence of the last $n$ of the $2n$-indeterminates, i.e., one of the $H$ terms appears in $w_i(X,H) \otimes v_i(X,H)$. Moreover, $\deg(w_i) + \deg(v_i) \leq \deg(a_i)+\deg(b_i)$. This establishes the equation.
The last statement is a consequence of the fact that $\partial_jF(Y)$ is obtained from the reduced form of $F$ and the expansion of the perturbed elementary tensors $a_i(X+H) \otimes b_i(X+H)^{op}$ can be written as a sum of no more than $(2n)^{c(F)}$ elementary tensors of the form $w_i(X,H) \otimes v_i(X,H)^{op}$ with $\deg(w_i) + \deg(v_i) \leq \deg(a_i)+\deg(b_i) < \deg(F)$. \end{proof} \begin{lemma} For each $1 \leq j \leq n$ there exist $N_0, N \in \mathbb N$ with $N_0 \leq N$ and for $1 \leq i \leq N$, $c_i \in \mathbb C$, $s_i \in \{1,*\}$, $1 \leq q_i \leq n$, $*$-monomials $a_i, b_i \in \mathfrak{A}_{2n}$, and $*$-monomials $m_i \in \mathfrak{A}_n$ such that for any $n$-tuples $X$ and $H=(h_1,\ldots, h_n)$ in $(M_k(\mathbb C))^n$, if $T = \int_0^1 \partial_j F(X+tH) \, dt$, then for any $\xi \in M_k(\mathbb C)$ \begin{eqnarray*} (T-\partial_j F(X))\xi = \sum_{i=1}^{N_0} c_i \cdot a_i(X,H) h_{q_i} m_i(X) \xi^{s_i} b_i(X,H) + \sum_{i=N_0+1}^N c_i \cdot a_i(X,H) \xi^{s_i} m_i(X) h_{q_i} b_i(X,H). \end{eqnarray*} Moreover, $N$ and $\max_{1\leq i \leq N} |c_i|$ are no greater than $c(F)\cdot(2n)^{c(F)}$ and $\max_{1\leq i \leq N} (\ell(a_i)+\ell(b_i) + \ell(m_i)+1) < \deg(F)$. \end{lemma} \begin{proof} Invoke Lemma 5.15 to produce $N \in \mathbb N$, $\lambda_1,\ldots, \lambda_N \in \mathbb C$, and the $*$-monomials $w_1,\ldots, w_N, v_1, \ldots, v_N$ as in its statement. Denote by $n_i$ the sum of the exponents arising from the last $n$ variables and their adjoints which appear in $w_i(X,H)$ and $v_i(X,H)$. Set $c_i = \lambda_i \int_0^1 t^{n_i} \, dt$.
Compute: \begin{eqnarray*} T - \partial_jF(X) & = & \int_0^1 \partial_jF(X + tH) - \partial_jF(X) \, dt \\ & = & \int_0^1 \sum_{i=1}^N \lambda_i \cdot \sigma_k(w_i(X, tH) \otimes v_i(X,tH)^{op}) \circ J^{d_i} \, dt \\ & = & \sum_{i=1}^N \left(\lambda_i \cdot \sigma_k(w_i(X,H) \otimes v_i(X, H)^{op}) \circ J^{d_i} \cdot \int_0^1 t^{n_i} \, dt \right )\\ & = & \sum_{i=1}^N \left(c_i \cdot \sigma_k(w_i(X,H) \otimes v_i(X, H)^{op}) \circ J^{d_i} \right ). \\ \end{eqnarray*} Define $s_i =1$ if $d_i =0$ and $s_i = *$ if $d_i =1$. Substituting $\xi \in M_k(\mathbb C)$ in the equation above produces \begin{eqnarray*} (T - \partial_jF(X))\xi & = & \sum_{i=1}^N c_i \cdot w_i(X,H) \xi^{s_i} v_i(X, H). \\ \end{eqnarray*} Now for each $i$, either $w_i$ or $v_i$ has a nontrivial occurrence of one of the last $n$ of the $2n$-indeterminates. In the first case there exists some $n+1 \leq l_i \leq 2n$ such that $w_i = a_i X_{l_i} m_i$ where $a_i \in \mathfrak{A}_{2n}$ and $m_i$ is a word in $X_1,\ldots, X_n \in \mathfrak{A}_{2n}$ (the first $n$ indeterminates). In the latter case there exists some $n+1 \leq l_i \leq 2n$ such that $v_i = m_i X_{l_i} b_i$ where $b_i \in \mathfrak{A}_{2n}$ and $m_i$ is a word in $X_1,\ldots, X_n \in \mathfrak{A}_{2n}$ (the first $n$ indeterminates). Evaluating either of these expressions at the $2n$-tuple $(X,H)$ and regrouping indices establishes the equation. The last statement follows from the last statement of Lemma 5.15 and the fact that $|c_i| \leq |\lambda_i|$.
\end{proof} \begin{remark} Lemma 5.16 has a similar statement where one replaces the mean value integral operator by just the point derivative, i.e., with the same notation as in Lemma 5.16 there exist constants $c_1, \ldots, c_N$ such that for any $\xi \in M_k(\mathbb C)$, \begin{eqnarray*} (\partial_jF(X+H)-\partial_j F(X))\xi & = & \sum_{i=1}^{N_0} c_i \cdot a_i(X,H) h_{q_i} m_i(X) \xi^{s_i} b_i(X,H) + \\ &&\sum_{i=N_0+1}^N c_i\cdot a_i(X,H) \xi^{s_i} m_i(X) h_{q_i} b_i(X,H).\\ \end{eqnarray*} This statement is just a slight reformulation of Lemma 5.15 (as one doesn't need to integrate out any constants as in Lemma 5.16). \end{remark} Rephrasing Lemma 5.16 and Remark 5.17 with $H = X-Y$ gives the following: \begin{corollary} For $1 \leq j \leq n$ there exist $N_0, N \in \mathbb N$ with $N_0 \leq N$ and for $1 \leq i \leq N$, $c_i \in \mathbb C$, $s_i \in \{1,*\}$, and $1 \leq q_i \leq n$ dependent only on $F$ such that if $X, Y \in (B_{k,R})^n$, then there exist $a_i, b_i,$ and $m_i$ each products of $r_i, d_i,$ and $n_i$ elements from $B_{k,R}$ with $r_i+ d_i + n_i+1 < \deg(F)$ such that if $T = \int_0^1 \partial_j F(X+t(Y-X))\, dt$ or $T = \partial_j F(Y)$, then for any $\xi \in M_k(\mathbb C)$, \begin{eqnarray*} (T-\partial_j F(X))\xi = \sum_{i=1}^{N_0} c_i a_i (x_{q_i} - y_{q_i}) m_i \xi^{s_i} b_i + \sum_{i=N_0+1}^N c_i a_i \xi^{s_i} m_i (x_{q_i}-y_{q_i}) b_i. \end{eqnarray*} Moreover, $N$ and $\max_{1 \leq i \leq N} |c_i|$ are no greater than $c(F)\cdot(2n)^{c(F)}$. \end{corollary} \begin{lemma} Suppose $1 \leq j \leq n$, $R\geq1$, $B = c(F)^2 \cdot(2n)^{2c(F)}$ and $X, Y \in (B_{k,R})^n$. If $\|X-Y\|_2 < \epsilon<1$, and either $T = \int_0^1 \partial_j F(X+t(Y-X))\, dt$ or $T = \partial_j F(Y)$, then there exists a projection $p \in M_k(\mathbb C)$ such that $tr_k(p) > 1 - B \epsilon^{\frac{1}{B}}$ and for any $\xi \in M_k(\mathbb C)$, \begin{eqnarray*} \|(T-\partial_j F(X))p\xi p\|_2 & \leq & \left (B \cdot R^{\deg(F)} \cdot \epsilon^{1/2} \right) \cdot \|p\xi p\|_2.
\\ \end{eqnarray*} \end{lemma} \begin{proof} For $1 \leq j \leq n$ invoke Corollary 5.18 to produce the $N \in \mathbb N$ and $c_i \in \mathbb C$, $s_i \in \{1,*\}$ dependent on $F$ such that if $X, Y \in (B_{k,R})^n$, then there exist the corresponding $c_i, a_i, b_i, m_i, q_i, r_i, d_i,$ and $n_i$ as in the conclusion of the corollary. Recall the summation expression of $(T-\partial_jF(X))\xi$: \begin{eqnarray*} (T-\partial_j F(X))\xi = \sum_{i=1}^{N_0} c_i a_i (x_{q_i} - y_{q_i}) m_i \xi^{s_i} b_i + \sum_{i=N_0+1}^N c_i a_i \xi^{s_i} m_i (x_{q_i}-y_{q_i}) b_i. \end{eqnarray*} When $1 \leq i \leq N_0$, applying Corollary 5.14 to $M=M_k(\mathbb C)$ and $C= \epsilon^{-\frac{1}{4\deg(F)}}>1$ provides a (complex linear) orthogonal projection $p_i$ such that \begin{eqnarray*} \|a_i (x_{q_i} - y_{q_i}) m_ip_i\|_{\infty} & \leq & (CR)^{r_i+n_i+1} \cdot \epsilon \\ & \leq & R^{r_i+n_i+1} \cdot \epsilon^{3/4}, \\ \end{eqnarray*} $\|p_ib_i\|_{\infty} < (CR)^{d_i} \leq R^{d_i} \cdot \epsilon^{-\frac{1}{4}}$, and $tr_k(p_i) > 1 - \deg(F) \cdot \epsilon^{\frac{1}{2 \deg(F)}}$. Thus, for any $\xi \in M_k(\mathbb C)$ and (complex linear) orthogonal projection $e \leq p_i$ \begin{eqnarray*} \|a_i(x_{q_i} - y_{q_i})m_i (e\xi^{s_i}e) b_i\|_2 & \leq & \|a_i(x_{q_i} - y_{q_i})m_i e\|_{\infty} \cdot \|e \xi^{s_i}e\|_2 \cdot \|eb_i\|_{\infty} \\ & \leq & \|a_i(x_{q_i} - y_{q_i})m_i p_i \|_{\infty} \cdot \|e \xi^{s_i}e\|_2 \cdot \|p_i b_i\|_{\infty} \\ & \leq & R^{r_i+n_i+1} \cdot \epsilon^{3/4} \cdot R^{d_i} \cdot \epsilon^{-\frac{1}{4}}\cdot \|e \xi e\|_2 \\ & \leq & R^{\deg(F)} \cdot \epsilon^{1/2} \cdot \|e \xi e\|_2.\\ \end{eqnarray*} For $N_0 < i \leq N$ an analogous argument yields a projection $p_i$ such that $tr_k(p_i) > 1 - \deg(F) \cdot \epsilon^{\frac{1}{2 \deg(F)}}$ and for any projection $e \leq p_i$, $\|a_i (e\xi^{s_i}e) m_i (x_{q_i} - y_{q_i})b_i\|_2 \leq R^{\deg(F)} \cdot \epsilon^{1/2} \cdot \|e\xi e\|_2$. Define $p = \wedge_{i=1}^N p_i$.
Generously majorizing and using the fact that $c(F)^2 < B$, \begin{eqnarray*} tr_k(p) & \geq & 1 - \sum_{i=1}^N tr_k(p_i^{\bot}) \\ & > & 1 - N \deg(F) \epsilon^{\frac{1}{2 \deg(F)}} \\ & \geq & 1 - B \epsilon^{\frac{1}{B}}. \\ \end{eqnarray*} Moreover, since $p \leq p_i$ for each $i$, it follows from the above that for $1 \leq i \leq N_0$, \begin{eqnarray*} \|a_i(x_{q_i} - y_{q_i})m_i (p\xi^{s_i}p) b_i\|_2 & \leq & R^{\deg(F)} \cdot \epsilon^{1/2} \cdot \|p\xi p\|_2\\ \end{eqnarray*} and for $N_0 < i \leq N$, \begin{eqnarray*} \|a_i (p\xi^{s_i}p) m_i (x_{q_i} - y_{q_i})b_i\|_2 & \leq & R^{\deg(F)} \cdot \epsilon^{1/2} \cdot \|p\xi p\|_2.\\ \end{eqnarray*} Thus, using the bound on $N$ and the $|c_i|$ provided in Corollary 5.18 \begin{eqnarray*} \|(T-\partial_j F(X))(p\xi p)\|_2 & \leq & \sum_{i=1}^{N_0} |c_i| \cdot \|a_i (x_{q_i} - y_{q_i}) m_i (p\xi^{s_i} p) b_i \|_2 + \\ & & \sum_{i=N_0+1}^N |c_i| \cdot \|a_i (p \xi^{s_i} p) m_i (x_{q_i}-y_{q_i}) b_i\|_2 \\ & \leq & (c(F) \cdot(2n)^{c(F)})^2 \cdot R^{\deg(F)} \cdot \epsilon^{1/2} \cdot \|p\xi p\|_2 \\ &= & (B \cdot R^{\deg(F)} \cdot \epsilon^{1/2}) \cdot \|p\xi p\|_2.\\ \end{eqnarray*} \end{proof} Applying Lemma 5.19 to each partial derivative and taking the intersection of the associated projections yields: \begin{corollary} Suppose $R>1$, $X, Y \in (B_{k,R})^n$ and $B=n \cdot c(F)^2 \cdot(2n)^{2c(F)}$. If $\|X-Y\|_2 < \epsilon <1$, and either $T = \int_0^1 DF(X+t(Y-X))\, dt$ or $T=DF(Y)$, then there exists a projection $p \in M_k(\mathbb C)$ such that $tr_k(p) > 1 - B\epsilon^{\frac{1}{B}}$ and if $P = \oplus_{i=1}^n p$, then for any $\xi \in (M_k(\mathbb C))^n$, \begin{eqnarray*} \|(T-DF(X))P\xi P\|_2 & \leq & \left (B \cdot R^{\deg(F)} \cdot \epsilon^{1/2} \right) \cdot \|P\xi P\|_2.
\\ \end{eqnarray*} \end{corollary} Recall in subsection 5.2 that for a convex set $K \subset (M_k(\mathbb C))^n$, the set of distance operators for $K$ with respect to $F$, $\mathcal D(F,K)$, is the set of all operators of the form \begin{eqnarray*} \int_0^1 DF(x+t(y-x)) \, dt \end{eqnarray*} where $x, y \in K$. Couching Corollary 5.20 in the terminology of subsection 5.2 gives the following: \begin{lemma} Suppose $B=2n \cdot c(F)^2 \cdot(2n)^{2c(F)}$ and $R>1>\rho >0$. If $\xi_0 \in (B_{k,R})^n$, and $K = B_2(\xi_0, \rho)$, then $\mathcal D(F, K)$ has a $(BR^{\deg(F)}\rho^{1/2})$-binding $\Theta$ focused on $DF(\xi_0)$ with the property that for any $T \in \mathcal D(F,K)$, $\Theta(T) = (\sigma_k(e \otimes e^{op}),\ldots, \sigma_k(e \otimes e^{op}))$ where $e \in M_k(\mathbb C)$ is an orthogonal projection satisfying $tr_k(e) > 1-B \rho^{\frac{1}{B}}$. \end{lemma} \begin{proof} Suppose $T \in \mathcal D(F,K)$. For some $\xi, \eta \in K = B_2(\xi_0, \rho)$, $T= T_{\xi, \eta} = \int_0^1 DF(\xi + t(\eta-\xi)) \, dt$. By Corollary 5.20 there exist projections $p, q \in M_k(\mathbb C)$ such that $tr_k(p), tr_k(q) > 1-(B\rho^{\frac{1}{B}})/2$ and if $P = \oplus_{i=1}^n p$ and $Q= \oplus_{i=1}^n q$, then for any $\zeta \in (M_k(\mathbb C))^n$, \begin{eqnarray*} \|(T - DF(\xi))P\zeta P\|_2 \leq \left (B/2 \cdot R^{\deg(F)} \cdot \rho^{1/2} \right) \cdot \|P\zeta P\|_2 \\ \end{eqnarray*} and \begin{eqnarray*} \|(DF(\xi) - DF(\xi_0))Q\zeta Q\|_2 \leq \left (B/2 \cdot R^{\deg(F)} \cdot \rho^{1/2} \right) \cdot \|Q\zeta Q\|_2. \\ \end{eqnarray*} Set $e = p \wedge q$. $tr_k(e) > 1 - B\rho^{\frac{1}{B}}$.
Putting $E = \oplus_{i=1}^n e$ and $\Theta(T) = (\sigma_k(e \otimes e^{op}),\ldots, \sigma_k(e \otimes e^{op}))$, the triangle inequality and the two inequalities above show that for any $\zeta \in (M_k(\mathbb C))^n$, \begin{eqnarray*} \|(T - DF(\xi_0))\Theta(T)(\zeta)\|_2 & = & \|(T - DF(\xi_0))E\zeta E\|_2 \\ & \leq & \|(T - DF(\xi))E\zeta E \|_2 + \|(DF(\xi) - DF(\xi_0))E\zeta E\|_2 \\ & = & \|(T - DF(\xi))PE\zeta EP \|_2 + \|(DF(\xi) - DF(\xi_0))QE\zeta EQ \|_2 \\ & \leq & \left( B \cdot R^{\deg(F)} \cdot \rho^{1/2} \right) \cdot \|E\zeta E\|_2 \\ & \leq & \left( B \cdot R^{\deg(F)} \cdot \rho^{1/2} \right) \cdot \|\zeta\|_2. \\ \end{eqnarray*} The above holds for any $T \in \mathcal D(F,K)$ and yields a map $\Theta: \mathcal D(F,K) \rightarrow P(\oplus_{i=1}^n M_k(\mathbb C))$ such that \begin{eqnarray*} \|(T - DF(\xi_0))\Theta(T)\| \leq (B \cdot R^{\deg(F)} \cdot \rho^{1/2}). \end{eqnarray*} By definition, $\Theta$ is a $(BR^{\deg(F)}\rho^{1/2})$-binding of $\mathcal D(F,K)$ focused on $DF(\xi_0)$. The statement about the projectional form of $\Theta$ is immediate. \end{proof} Extending this to a $p$-tuple of elements is easily done and involves straightforward manipulations of the multivariable derivative formalism in Section 2.5: \begin{corollary} Suppose $F = (f_1,\ldots, f_p) \in (\mathfrak{A}_n)^p$, $B=2np \cdot c(F)^2 \cdot(2n)^{2c(F)}$ and $R>1>\rho >0$. If $\xi_0 \in (B_{k,R})^n$, and $K = B_2(\xi_0, \rho)$, then $\mathcal D(F, K)$ has a $(BR^{\deg(F)}\rho^{1/2})$-binding $\Theta$ focused on $DF(\xi_0)$ such that for any $T \in \mathcal D(F,K)$, $\Theta(T) = (\sigma_k(e \otimes e^{op}),\ldots, \sigma_k(e \otimes e^{op}))$ where $e \in M_k(\mathbb C)$ is an orthogonal projection with $tr_k(e) > 1-B \rho^{\frac{1}{B}}$. \end{corollary} Lemma 5.21 and Corollary 5.22 construct bindings for the distance operators in a ball, focused on the derivative evaluated at the center of the ball. It remains to look at the fringe of the binding and establish the appropriate entropy estimates on it.
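Before turning to the fringe, the scalar factors $c_i = \lambda_i \int_0^1 t^{n_i}\, dt$ behind these distance operators (Lemma 5.16) can be checked exactly in a toy case. The sketch below is illustrative only; the choice $F = X^2$ and all identifier names are mine, not part of the development. It verifies for $2 \times 2$ matrices that the mean value operator $T = \int_0^1 \partial F(X+tH)\, dt$ satisfies $(T - \partial F(X))\xi = \tfrac{1}{2}(H\xi + \xi H)$, i.e., that $c_i = \int_0^1 t \, dt = \tfrac{1}{2}$:

```python
from fractions import Fraction

def mul(a, b):
    # 2x2 matrix product
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def add(a, b):
    return [[a[i][j] + b[i][j] for j in range(2)] for i in range(2)]

def scale(c, a):
    return [[c * a[i][j] for j in range(2)] for i in range(2)]

def dF(x, xi):
    # derivative of F(X) = X^2 at x applied to xi: x*xi + xi*x
    return add(mul(x, xi), mul(xi, x))

X = [[1, 2], [3, 4]]
H = [[0, 1], [1, 0]]
XI = [[2, 0], [1, 3]]
half = Fraction(1, 2)

# t -> dF(X + tH, XI) is affine in t, so the mean value integral
# T = \int_0^1 dF(X + tH) dt equals the midpoint value dF(X + H/2) exactly
T_xi = dF(add(X, scale(half, H)), XI)

lhs = add(T_xi, scale(-1, dF(X, XI)))           # (T - dF(X))(XI)
rhs = scale(half, add(mul(H, XI), mul(XI, H)))  # (1/2)(H XI + XI H)
print(lhs == rhs)  # True
```

Exact rational arithmetic (rather than floats) keeps the comparison an identity of polynomials in the matrix entries rather than a numerical approximation.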
\begin{definition} If $R, r >0$ and $d \in \mathbb N$, then $E(R,r, k,d) \subset M_k(\mathbb C)$ consists of all $k \times k$ matrices $x$ of rank no more than $d$ such that $\|x\|_r \leq R$. \end{definition} \begin{lemma} Suppose $B = c(F) \cdot (2n)^{c(F)}$. There exists an $N_1 \in \mathbb N$, $N_1 \leq B+1$ such that if $R >0$, $x,y \in (B_{k, R})^n$, $P = (\sigma_k(p \otimes p^{op}),\ldots, \sigma_k(p \otimes p^{op}))$ for a projection $p \in M_k(\mathbb C)$, and $T = \int_0^1 DF(x+t(y-x)) \, dt$, then \begin{eqnarray*} (T - DF(x))(P^{\bot}(B_{k,R})^n) \subset \boxplus_{i=1}^{nN_1} E\left(N_1 \cdot \max(R,R^{N_1}), 2/N_1, k, 2k(1-tr_k(p)) \right). \end{eqnarray*} \end{lemma} \begin{proof} It suffices to produce an $N_1$ such that for each $1 \leq j \leq n$, \begin{eqnarray*} (T- \partial_jF(x))(\sigma_k(p \otimes p^{op})^{\bot}(B_{k,R})) \subset \boxplus_{i=1}^{N_1} E(N_1 \cdot \max(R,R^{N_1}), 2/N_1,k, 2k(1-tr_k(p))) \end{eqnarray*} where $T=\int_0^1 \partial_jF(x + t(y-x)) \, dt$. By Corollary 5.18 there exist $N \in \mathbb N$ and $c_1,\ldots, c_N \in \mathbb C$ dependent only on $F$ and $a_i,b_i,$ and $m_i$ where for each $i$, $a_i$, $b_i$, and $m_i$ are each products of $r_i, d_i,$ and $n_i$ elements from $B_{k,R}$ with $r_i+ d_i + n_i$ no greater than the degree of $F$ and such that for any $\xi \in M_k(\mathbb C)$, \begin{eqnarray*} (T-\partial_j F(x))\xi = \sum_{i=1}^{N_0} c_i a_i (x_{q_i} - y_{q_i}) m_i \xi^{s_i} b_i + \sum_{i=N_0+1}^N c_i a_i \xi^{s_i} m_i (x_{q_i}-y_{q_i}) b_i. \end{eqnarray*} Moreover, $N$ and $C= \max_{1 \leq i \leq N} |c_i|$ are no greater than $B$. Thus, I can find an $N_1$ so that $\max\{C, N\} < N_1 \leq B+1$. $\sigma_k(p \otimes p^{op})^{\bot} = \sigma_k(p^{\bot} \otimes I^{op}) + \sigma_k(p \otimes (p^{\bot})^{op})$ so that if $\xi \in B_{k,R}$, then $\eta=\sigma_k(p \otimes p^{op})^{\bot}(\xi)$ has (complex) rank no greater than $2k(1-tr_k(p))$ and $\|\eta\|_2 \leq R$.
Substituting this into the above yields \begin{eqnarray*} (T-\partial_j F(x))(\sigma_k(p \otimes p^{op})^{\bot}\xi) = \sum_{i=1}^{N_0} c_i a_i (x_{q_i} - y_{q_i}) m_i \eta^{s_i} b_i + \sum_{i=N_0+1}^N c_i a_i \eta^{s_i} m_i (x_{q_i}-y_{q_i}) b_i. \end{eqnarray*} For $1 \leq i \leq N_0$, by H{\"o}lder's inequality, the fact that $a_i$, $b_i$, and $m_i$ are products of $r_i$, $d_i$, and $n_i$ elements in $B_{k,R}$, and since $x, y \in (B_{k,R})^n$, it follows that \begin{eqnarray*} c_i a_i (x_{q_i} - y_{q_i}) m_i \eta^{s_i} b_i \in E(C \cdot \max(R,R^{\deg(F)}), 2/\deg(F), k, 2k(1-tr_k(p))). \end{eqnarray*} A completely analogous argument shows that for $N_0+1 \leq i \leq N$ \begin{eqnarray*} c_i a_i \eta^{s_i} m_i(x_{q_i}-y_{q_i}) b_i \in E(C \cdot \max(R,R^{\deg(F)}), 2/\deg(F), k, 2k(1-tr_k(p))). \end{eqnarray*} Putting these two facts together and using sumset notation produces \begin{eqnarray*} (T - \partial_j F(x))(\sigma_k(p \otimes p^{op})^{\bot}(B_{k,R})) & \subset & \boxplus_{i=1}^{N} E\left (C \cdot \max(R,R^N), 2/N, k, 2k(1-tr_k(p)) \right) \\ & \subset & \boxplus_{i=1}^{N_1} E\left (N_1 \cdot \max(R,R^{N_1}), 2/N_1, k, 2k(1-tr_k(p)) \right). \\ \end{eqnarray*} \end{proof} The extension of Lemma 5.24 to a general $p$-tuple is immediate: \begin{corollary} Suppose $F = (f_1,\ldots, f_p) \in (\mathfrak{A}_n)^p$ and $B = c(F) \cdot (2n)^{c(F)}$. There exists an $N_1 \in \mathbb N$, $N_1 \leq B+1$ such that if $R >0$ and $x,y \in (B_{k, R})^n$, $P = (\sigma_k(p \otimes p^{op}),\ldots, \sigma_k(p \otimes p^{op}))$ for a projection $p \in M_k(\mathbb C)$, and $T = \int_0^1 DF(x+t(y-x)) \, dt$, then \begin{eqnarray*} (T - DF(x))(P^{\bot}(B_{k,R})^n) \subset \oplus_{j=1}^p \left( \boxplus_{i=1}^{nN_1} E\left(N_1 \cdot \max(R,R^{N_1}), 2/N_1, k, 2k(1-tr_k(p)) \right) \right).
\end{eqnarray*} \end{corollary} Notice that in Corollary 5.25 and Lemma 5.24 above the Schatten $2/N_1$-quasi-norm is used. Computing the $\epsilon$-neighborhood of the resultant $E$-set w.r.t. the $L^2$-norm shows that it has suitable entropy estimates which translate to bounds on the fringes. This follows from a routine repetition of the original estimates in \cite{sr} modulo technical details (see Proposition A.5 in the appendix). Indeed, putting Corollary 5.22 together with Corollary 5.25 and Proposition A.5 yields the following main result of this subsection. In what follows the covering number quantities $K_{\epsilon}$ will be taken w.r.t. the usual normalized inner product norm $\|\cdot\|_2$ on $(M_k(\mathbb C))^p$. \begin{proposition} Suppose $F = (f_1,\ldots, f_p) \in (\mathfrak{A}_n)^p$, $R > 1 > \rho >0$, and $B = 2np \cdot c(F)^2 \cdot (2n)^{2c(F)}$. There exists a constant $D_B >0$ dependent only on $B$ such that if $\xi_0 \in (B_{k, R})^n$ and $K=B_2(\xi_0, \rho)$, then $\mathcal D(F,K)$ has a $(BR^{\deg(F)}\rho^{1/2})$-binding $\Theta$ focused on $DF(\xi_0)$ such that for any $1> \epsilon >0$, \begin{eqnarray*} K_{\epsilon}\left (\mathcal F(\Theta, R) \right) & \leq & \left(\frac{D_B \cdot (B R^B +1)^2\sqrt{p}}{\epsilon} \right)^{16B^3 \rho^{\frac{1}{2B}}k^2}.\\ \end{eqnarray*} \end{proposition} \begin{proof} If $B = 2np \cdot c(F)^2 \cdot (2n)^{2c(F)}$, then Corollary 5.22 shows that for any $\xi_0, \rho$ and $K$ as in the proposition's statement, $\mathcal D(F,K)$ has a $(BR^{\deg(F)}\rho^{1/2})$-binding $\Theta$ focused on $DF(\xi_0)$ such that for any $T \in \mathcal D(F, K)$, $\Theta(T) = (\sigma_k(e \otimes e^{op}),\ldots, \sigma_k(e \otimes e^{op}))$ where $e \in M_k(\mathbb C)$ is an orthogonal projection with $tr_k(e) > 1-B\rho^{\frac{1}{B}}$. Now if $T \in \mathcal D(F,K)$, then $T = T_{x,y} = \int_0^1 DF(x + t(y-x)) \, dt$ for some $x,y \in K = B_2(\xi_0,\rho) \subset (B_{k,R+1})^n$.
By Corollary 5.25 there exists an $N_1 \in \mathbb N$, $N_1 < B$ such that \begin{eqnarray*} (T - DF(x))(\Theta(T)^{\bot}(B_{k,R})^n) \subset \oplus_{j=1}^p \left( \boxplus_{i=1}^{nN_1} E(N_1 R^{N_1}, \tfrac{2}{N_1}, k, 2kB\rho^{1/B}) \right). \end{eqnarray*} By definition of the fringe of $\Theta$, \begin{eqnarray*} \mathcal F(\Theta, R) \subset \oplus_{j=1}^p \left( \boxplus_{i=1}^{nN_1} E(N_1 R^{N_1}, \tfrac{2}{N_1}, k, 2kB\rho^{1/B}) \right). \end{eqnarray*} Using Proposition 2.1 (i) (monotonicity of $K_{\epsilon}$), Lemma 2.2, and Proposition A.5 there exists a universal constant $D_{2/N_1} >0$ dependent only on $2/N_1$ such that for any $1> \epsilon >0$, \begin{eqnarray*} K_{\epsilon}(\mathcal F(\Theta,R)) & \leq & K_{\epsilon}\left [\oplus_{j=1}^p \left( \boxplus_{i=1}^{nN_1} E(N_1 R^{N_1}, \tfrac{2}{N_1}, k, 2kB\rho^{1/B}) \right) \right ] \\ & \leq & \left(\frac{D_{2/N_1} (N_1 R^{N_1} +1)^2 \sqrt{p}}{\epsilon} \right)^{8p nN_1 \sqrt{2B\rho^{1/B}}k^2}\\ & \leq & \left(\frac{D_B (B R^{B} +1)^2 \sqrt{p}}{\epsilon} \right)^{16B^3 \rho^{\frac{1}{2B}}k^2} \end{eqnarray*} where $D_B=D_{2/N_1}$. \end{proof} \section{Iterating Spectral Splits II: Geometric Decay and finite $\alpha$-covering entropy} The previous section discussed a heuristic argument for obtaining $\alpha$-covering entropy bounds as well as some technical devices for dealing with 'asymptotically coarse' estimates. In this section I'll put these parts together to arrive at the main entropy result. I'll then compute some simple examples (commutators, normalizers, skew-normalizers, and staggered relations) which hold in a general tracial von Neumann algebra setting. \subsection{The Main Estimates} Here is the notion of geometric decay: \begin{definition} Suppose $\mu$ is a Borel measure on $\mathbb R$ with support contained on $[0,\infty)$. $\mu$ has geometric decay if there exists an $\epsilon_0 \in (0,1)$ such that $\sum_{n=1}^{\infty} \mu((0, \epsilon_0^n)) < \infty$. 
\end{definition} \begin{definition} Suppose $T$ is an operator such that $|T|$ lies in $(M,\varphi)$ and denote by $\mu$ the spectral distribution of $|T|$. $T$ has geometric decay (w.r.t. $M$) iff $\mu$ has geometric decay. \end{definition} The lemma below shows that $T$ has geometric decay iff $T$ is of determinant class (in the sense of \cite{luckbook}) iff $\det_{FKL}(T) > -\infty$. I will use the above terminology/definition instead of the phrase 'determinant class' or 'finiteness of $\det_{FKL}$' at times to reinforce this discrete formulation. \begin{lemma} Suppose $\mu$ is a probability measure with compact support contained in $[0,\infty)$. The following conditions are equivalent: \begin{enumerate} \item $1_{(0,\infty)}(t) \cdot \log t$ is integrable w.r.t. $\mu$ on $(0, \infty)$. \item For any $\epsilon \in (0,1)$, $\sum_{n=1}^{\infty} n \mu([\epsilon^{n+1}, \epsilon^n)) < \infty$. \item For some $\epsilon_0 \in (0,1)$, $\sum_{n=1}^{\infty} n \mu([\epsilon_0^{n+1}, \epsilon_0^n)) < \infty$. \item For any $\epsilon \in (0,1)$, $\sum_{n=1}^{\infty} \mu((0, \epsilon^n)) < \infty$. \item For some $\epsilon_0 \in (0,1)$, $\sum_{n=1}^{\infty} \mu((0, \epsilon_0^n)) < \infty$. \end{enumerate} \end{lemma} \begin{proof} For $\epsilon \in (0,1)$ fixed, \begin{eqnarray*} \int_{(0,1)} |\log t| \, d\mu(t) & = & \sum_{n=0}^{\infty} \int_{[\epsilon^{n+1},\epsilon^n)} |\log t| \, d\mu(t) \\ & \leq & |\log \epsilon| \sum_{n=0}^{\infty} (n+1) \mu([\epsilon^{n+1}, \epsilon^n)) \\ & \leq & |\log \epsilon| \cdot \sum_{n=0}^{\infty} n \mu([\epsilon^{n+1}, \epsilon^n)) + |\log \epsilon| \\ & \leq & \int_{(0,1)} |\log t| \, d\mu(t) + |\log \epsilon|.\\ \end{eqnarray*} \noindent The equivalence of (1)-(3) follows. 
For any $\epsilon \in (0,1)$, by Fubini \begin{eqnarray*} \sum_{n=1}^{\infty} n \mu([\epsilon^{n+1}, \epsilon^n)) & = & \sum_{n=1}^{\infty} \sum_{j=1}^n \mu([\epsilon^{n+1}, \epsilon^n)) \\ & = & \sum_{j=1}^{\infty} \sum_{n=j}^{\infty} \mu([\epsilon^{n+1}, \epsilon^n)) \\ & = & \sum_{j=1}^{\infty} \mu((0, \epsilon^j)).\\ \end{eqnarray*} \noindent So (2) $\Leftrightarrow$ (4) and (3) $\Leftrightarrow$ (5) completing the proof. \end{proof} Formulations (4) and (5) were what I was originally interested in using as an upper bound for the metric entropies. Recall from Section 2.7 that if $x \in M$ is a positive operator and $\mu$ is the spectral distribution of $x$ induced by $\varphi$, then $x$ is of determinant class iff \begin{eqnarray*} \int_{(0,\infty)} |\log (\lambda)| \, d\mu(\lambda) < \infty. \end{eqnarray*} By Lemma 6.3 this condition is satisfied iff for some $\epsilon_0 >0$, $\sum_{n=1}^{\infty} \mu((0,\epsilon_0^n)) < \infty$ iff $x$ has geometric decay at $0$. Thus, \begin{corollary} Suppose $T$ is an operator such that $|T|$ lies in $(M,\varphi)$. $T$ has geometric decay iff $|T|$ is of determinant class. \end{corollary} \begin{remark} If the spectral distribution of $|T|$ is absolutely continuous w.r.t. Lebesgue measure with a density $f$ such that $|f(t)| \leq C t^{\alpha}$ for some constant $C$ and $\alpha >-1$, then $T$ has geometric decay. In particular, if $f \in L^{\infty}(\mathbb R)$, then $T$ has geometric decay. \end{remark} \begin{remark} Suppose $T \in M$ is a normal operator and denote by $\mu$ the spectral distribution of $T$ induced by $\varphi$. Assume that $\mu$ has the property that for any $\lambda \in \mathbb C$ there exists a corresponding $\epsilon >0$ such that \begin{eqnarray*} \sum_{k=0}^{\infty} \mu(B(\lambda, \epsilon^k)) < \infty. \end{eqnarray*} Suppose $p$ is a nonzero polynomial. Then $p(T)$ has geometric decay. 
Indeed, after normalizing the leading coefficient (which does not affect geometric decay), there exist complex numbers $\lambda_1,\ldots, \lambda_d$ such that for any $\lambda \in \mathbb C$, $p(\lambda) = \Pi_{j=1}^d (\lambda - \lambda_j)$. For any $\epsilon >0$ the spectral theorem yields \begin{eqnarray*} \varphi(1_{(0,\epsilon)}(|p(T)|)) & = & \varphi((1_{(0,\epsilon)} \circ |p|)(T)) \\ & = & \mu(\{\lambda \in \mathbb C: \Pi_{j=1}^d |\lambda - \lambda_j| < \epsilon\})\\ & \leq & \sum_{j=1}^d \mu(B(\lambda_j, \epsilon^{1/d})).\\ \end{eqnarray*} Choose for each $j$ an $\epsilon_j>0$ such that $\sum_{k=0}^{\infty} \mu(B(\lambda_j, \epsilon_j^k)) < \infty$ and set $\epsilon_0 = \min_{1 \leq j \leq d} \epsilon_j^d$. It follows that \begin{eqnarray*} \sum_{k=1}^{\infty} \varphi(1_{(0,\epsilon_0^k)}(|p(T)|)) & \leq & \sum_{k=1}^{\infty} \sum_{j=1}^d \mu(B(\lambda_j, \epsilon_0^{k/d})) \\ & \leq & \sum_{j=1}^d \sum_{k=1}^{\infty} \mu(B(\lambda_j, \epsilon_j^k)) \\ & < & \infty. \end{eqnarray*} Thus, $p(T)$ has geometric decay. The spectral distribution $\mu$ of $T$ will satisfy this local density condition when $T$ is a normal operator with an $L^{\infty}$ density w.r.t. Lebesgue measure on $\mathbb R^2 \simeq \mathbb C$ or when $T$ is a unitary operator with an $L^{\infty}$ density w.r.t. Lebesgue measure on the unit circle. Related computations have been made previously, explicitly computing the Fuglede-Kadison-L{\"u}ck determinant and identifying it with Mahler measure (\cite{luckbook}, \cite{pdh}). \end{remark} \begin{definition} $D^sF(X)$ has geometric decay if $|D^sF(X)| = (D^sF(X)^*D^sF(X))^{1/2}$, regarded as an element of the tracial von Neumann algebra $M_{2n}(M\otimes M^{op})$, has geometric decay. \end{definition} Given a finite tuple $F$ of elements in $\mathfrak{A}_n$ (the universal, unital complex $*$-algebra in $n$-indeterminates), $c(F)$ and $\deg(F)$ will have the same meaning as in Section 5.4. For the remainder of this section define $\psi_n=tr_n \otimes tr_2 \otimes (\varphi \otimes \varphi^{op})$.
$\psi_n$ is the canonical tracial state on $M_{2n}(M\otimes M^{op}) = M_n(\mathbb C) \otimes (M_2(M \otimes M^{op}))$ induced by $(M,\varphi)$. For an element $T \in M_{2n}(M \otimes M^{op})$ recall that if $P= 1_{\{0\}}(|T|)$, then $2n \psi_n(P) = \text{Nullity}(T)$ (subsection 2.7). Here is the fundamental microstates result which quantifies the local and global entropy loss and gains: \begin{theorem} Suppose $F$ is a $p$-tuple of elements in $\mathfrak{A}_n$, $L = 2np \cdot c(F)^2 \cdot (2n)^{2c(F)}$, and $X$ is an $n$-tuple of operators in $M$ with operator norms strictly less than $R$ such that $F(X)=0$. There exists a constant $D$ dependent only on $F$ such that for any $\frac{1}{5}>\rho, \delta, t > 0$ satisfying $t \rho^{-1/4} \delta - 5LR^{\deg(F)} > 1$ \begin{eqnarray*} \mathbb K_{\rho \delta}(X) & \leq & \mathbb K_{\rho}(X) + \\ & & [2n \cdot \psi(1_{[0,\rho^{1/2}]}(|D^sF(X)|))+ t] \cdot | \log ((1-t^2)^{1/2} \delta) | + \\ & & 24 D\rho^{1/D} \cdot ( \log 2D + |\log \rho|). \end{eqnarray*} \end{theorem} \begin{proof} Using the assumption $F(X)=0$ and Proposition 3.13, there exist an $m_1 \in \mathbb N$ and $\gamma_1 > 0$ such that if $\xi \in \Gamma_R(X;m_1,k,\gamma_1)$, then the following two conditions are satisfied: \begin{itemize} \item $\|F(\xi)\|_2 < \rho^2/12$. \item The real dimension of the range of the projection $1_{[0,\rho^{1/4}]}(DF(\xi)^*DF(\xi))$ as a real linear operator on $(M_k(\mathbb C))^n$ is no greater than $2nk^2 \cdot [\psi(1_{[0,\rho^{1/4}]}(|D^sF(X)|^2))+ t]$. \end{itemize} Suppose that $m_1 < m \in \mathbb N$ and $0 < \gamma < \gamma_1$. For each $k \in \mathbb N$ find a minimal $\rho$-cover $\langle \xi_i \rangle_{i \in I_k}$ w.r.t. the $\|\cdot \|_2$-norm for $\Gamma_R(X;m,k,\gamma)$ such that \begin{eqnarray*} \#I_k = K_{\rho}(\Gamma_R(X;m,k,\gamma)). \end{eqnarray*} \noindent Fix a $k$ and an $i \in I_k$. Set $E = B_2(\xi_i, \rho) \cap \Gamma_R(X;m_1,k,\gamma_1)$.
I will now find a minimal $\rho \delta$-cover for $E$ with a suitable upper bound on its cardinality. Find and fix a $\xi_0 \in E$. Set $K= B_2(\xi_0, 2\rho)$. By Proposition 5.26 there exists a universal constant $D$ dependent only on $L$ such that if $\xi_0 \in (M_k(\mathbb C)_R)^n$ and $K = B_2(\xi_0, 2\rho)$, then $\mathcal D(F,K)$ has an $(LR^{\deg(F)}\rho^{1/2})$-binding $\Theta_1$ focused on $DF(\xi_0)$ such that for any $1> \epsilon >0$, \begin{eqnarray*} K_{\epsilon}\left (\mathcal F(\Theta_1, 1) \right) & \leq & \left( \frac{D}{\epsilon} \right)^{\left(16D\rho^{1/D} \cdot k^2 \right )}. \end{eqnarray*} By the triangle inequality $B(\xi_i, \rho) \subset K$ so by definition $\mathcal D(F, B(\xi_i,\rho)) \subset \mathcal D(F,K)$. $\Theta_1$ induces by restriction an $(LR^{\deg(F)}\rho^{1/2})$-binding $\Theta$ of $\mathcal D(F, B(\xi_i, \rho))$ focused on $DF(\xi_0)$ (Remark 5.5). I want to invoke Lemma 5.8 with the following substitutions: $f=F$, $V=(M_k(\mathbb C))^n$, $W=(M_k(\mathbb C))^p$, $x_0=\xi_0$, $B=B_2(\xi_i,\rho)$, $\Lambda = \mathcal D(F,B)$, $\rho_1 = LR^{\deg(F)}\rho^{1/2}$, $\alpha = \rho^{1/4}$, $\gamma = \rho^2/12$, $\epsilon = \rho \delta$, $E$ and $\Theta$ as defined above, and $\beta = \alpha t \epsilon - 4 \rho \rho_1 -4 \gamma$. The $\rho$ here will correspond with the $\rho$ in Lemma 5.8. Denote by $Q^{\bot}$ the (real linear) projection $1_{[0,\alpha)}(|DF(\xi_0)|)$.
In order to use Lemma 5.8 I need to check that $\beta >0$: \begin{eqnarray*} \beta & = & \alpha t \epsilon -4 \rho \rho_1 - 4 \gamma \\ & = & \rho^{1/4} t \rho \delta - 4 \rho (LR^{\deg(F)} \rho^{1/2}) - 4 \gamma \\ & > & t \rho^{5/4} \delta - 5 LR^{\deg(F)} \rho^{3/2} \\ & = & \rho^{3/2} \cdot (t \rho^{-1/4} \delta - 5 LR^{\deg(F)}) \\ & > & \rho^{3/2} \\ & > & 0.\\ \end{eqnarray*} Now $Q^{\bot}(E)$ is contained in a ball of radius $\rho$ in a copy of Euclidean space of dimension no greater than the real dimension of the range of the projection $1_{[0,\rho^{1/4}]}(DF(\xi_0)^*DF(\xi_0))$. By the second condition stated above, this dimension is dominated by $2nk^2[\psi(1_{[0,\rho^{1/4}]}(|D^sF(X)|^2))+ t]$. Applying Lemma 5.8, Rogers's asymptotic sphere covering estimate (\cite{rogers}), and the lower bound estimate on $\beta$ above \begin{eqnarray*} K_{\rho \delta}(E) & \leq &K_{(1-t^2)^{1/2} \rho \delta}(Q^{\bot}(E)) \cdot S_{\beta}(\mathcal F(\Theta, 2\rho)) \\ & \leq & (2nk^2)^3 \cdot \left ( \frac{\rho}{(1-t^2)^{1/2} \rho \delta} \right)^{2nk^2[\psi(1_{[0,\rho^{1/4}]}(|D^sF(X)|^2))+ t]} \cdot S_{\rho^{3/2}}(\mathcal F(\Theta, 2\rho)). \\ \end{eqnarray*} The third term on the RHS can be further dominated. By Remark 5.3, the covering estimate on $\mathcal F(\Theta_1,1)$, and the properties of covering numbers and separating numbers (Proposition 2.1), \begin{eqnarray*} S_{\rho^{3/2}}(\mathcal F(\Theta, 2\rho)) & \leq & S_{\rho^{3/2}}(\mathcal F(\Theta_1, 2\rho)) \\ & \leq & S_{\rho^{3/2}}(\mathcal F(\Theta_1, 1)) \\ & \leq & K_{\frac{\rho^{3/2}}{2}}(\mathcal F(\Theta_1, 1)) \\ & \leq & \left( \frac{2D}{\rho^{3/2}} \right)^{\left(16D\rho^{1/D} \cdot k^2 \right )}.
\end{eqnarray*} Putting this together with the inequalities which preceded it gives \begin{eqnarray*} K_{\rho \delta}(E) & \leq & (2nk^2)^3 \cdot \left ( \frac{\rho}{(1-t^2)^{1/2} \rho \delta} \right)^{2nk^2[\psi(1_{[0,\rho^{1/4}]}(|D^sF(X)|^2))+ t]} \cdot \left( \frac{2D}{\rho^{3/2}} \right)^{\left(16D\rho^{1/D} \cdot k^2 \right )}. \end{eqnarray*} Recall that $m_1 < m \in \mathbb N$, $0 < \gamma <\gamma_1$, and $\langle \xi_i \rangle_{I_k}$ is a minimal $\rho$-cover for $\Gamma_R(X;m,k,\gamma)$. By the preceding two paragraphs there exists for each $i \in I_k$ a $\rho \delta$-cover for $E_i = B_2(\xi_i, \rho) \cap \Gamma_R(X;m,k,\gamma)$ with cardinality no greater than the last dominating expression of the preceding inequality. Using Proposition 2.3, \begin{eqnarray*} \mathbb K_{\rho \delta}(X) & = & \mathbb K_{\rho \delta,R}(X) \\ & \leq & \mathbb K_{\rho \delta, R}(X;m,\gamma) \\ & \leq & \limsup_{k \rightarrow \infty} k^{-2} \cdot \log \left ( \#I_k \cdot (2nk^2)^3 \left ( \frac{\rho}{(1-t^2)^{1/2} \rho \delta} \right)^{[2n \cdot \psi(1_{[0,\rho^{1/4}]}(|D^sF(X)|^2))+ t]k^2} \right ) + \\ & & \limsup_{k \rightarrow \infty} k^{-2} \cdot \log\left (\frac{2D}{\rho^{3/2}} \right)^{\left(16D\rho^{1/D} \cdot k^2 \right)} \\ & \leq & \mathbb K_{\rho}(X;m,\gamma) + \\ & & [2n \psi(1_{[0,\rho^{1/2}]}(|D^sF(X)|))+ t] \cdot | \log ((1-t^2)^{1/2} \delta) | + \\ & & 24D\rho^{1/D} \cdot ( \log 2D + |\log \rho|). \\ \end{eqnarray*} \noindent As this is true for sufficiently large $m$ and small $\gamma$ the desired inequality follows. \end{proof} The preceding theorem extracts the basic covering estimate from the von Neumann algebra and microstate setting. Having done this, proving the main result will now involve iterating the spectral splits on a geometric scale and aggregating the entropy.
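The aggregation step leans on the rearrangement in Lemma 6.3, which for a finitely supported measure is an exact finite identity. The sketch below checks it (a toy of mine; the atoms, masses, and the choice $\epsilon = 1/2$ are arbitrary):

```python
from fractions import Fraction as Fr

# toy finitely supported "measure": atom -> mass, every atom in (0, 1)
atoms = {Fr(1, 3): Fr(1, 4), Fr(1, 10): Fr(1, 2),
         Fr(1, 100): Fr(1, 8), Fr(3, 5): Fr(1, 8)}
eps = Fr(1, 2)
N = 20  # eps**N lies below every atom, so all later terms vanish

def mu(pred):
    # mass of the set {t : pred(t)}
    return sum(m for t, m in atoms.items() if pred(t))

# Lemma 6.3: sum_n n * mu([eps^{n+1}, eps^n))  ==  sum_j mu((0, eps^j))
lhs = sum(n * mu(lambda t, n=n: eps ** (n + 1) <= t < eps ** n)
          for n in range(1, N))
rhs = sum(mu(lambda t, j=j: 0 < t < eps ** j) for j in range(1, N))
print(lhs == rhs, lhs)  # True 5/2
```

The identity is exactly the Fubini exchange used above: an atom in $[\epsilon^{n+1}, \epsilon^n)$ is counted once by each tail $(0, \epsilon^j)$ with $1 \leq j \leq n$.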
\begin{theorem} Suppose $X$ is an $n$-tuple of operators in $(M,\varphi)$ and $F$ is a $p$-tuple of $*$-polynomials in $n$ indeterminates such that $F(X)=0$. If $\alpha = \text{Nullity}(D^sF(X))$ and $D^sF(X)$ has geometric decay, then $X$ is $\alpha$-bounded. \end{theorem} \begin{proof} Set $L = 2np \cdot c(F)^2 \cdot (2n)^{c(F)}$ and $R >0$ strictly greater than the operator norm of any element of $X$ as in the statement of Theorem 6.8. Fix once and for all an $\epsilon_0 \in (0,1/5)$ such that for all $k \in \mathbb N$ with $k >4$, \begin{eqnarray*} \epsilon_0^{-(k/4-1)} > (5 L R^{\deg(F)} +1) k^2. \end{eqnarray*} Set $t_k = k^{-2}$. The condition on $\epsilon_0$ implies $t_k \cdot \epsilon_0^{-k/4} \cdot \epsilon_0 - 5 L R^{\deg(F)} > 1$. By Theorem 6.8 there exists a $D$ dependent only on $F$ such that for $k >4$, setting $\rho = \epsilon_0^k$, $\delta = \epsilon_0$, and $t = t_k$ in the context of Theorem 6.8 yields \begin{eqnarray*} \mathbb K_{\epsilon_0^{k+1}}(X) & = & \mathbb K_{\epsilon_0^k \epsilon_0}(X) \\ & \leq & \mathbb K_{\epsilon_0^k}(X) + [2n \cdot \psi(1_{[0,\epsilon_0^{k/2}]}(|D^sF(X)|))+ k^{-2} ] \cdot | \log [(1-k^{-4})^{1/2} \epsilon_0]| + \\ & & 24D\epsilon_0^{k/D} \cdot ( \log 2D + |\log \epsilon_0^k|)\\ & \leq & \mathbb K_{\epsilon_0^k}(X) + [2n \cdot \psi(1_{[0,\epsilon_0^{k/2}]}(|D^sF(X)|))+ k^{-2} ] \cdot [k^{-4} + |\log \epsilon_0|] + \\ & & 24D\epsilon_0^{k/D} \cdot ( \log 2D + |\log \epsilon_0^k|).\\ \end{eqnarray*} \noindent Iterating yields for any $k>4$, \begin{eqnarray*} \mathbb K_{\epsilon_0^{k+1}}(X) & \leq & \mathbb K_{\epsilon_0^5}(X) + \\ & & \sum_{j=5}^k [2n \cdot \psi(1_{[0,\epsilon_0^{j/2}]}(|D^sF(X)|))+ j^{-2} ] \cdot [j^{-4} + |\log \epsilon_0|] + \\ & & \sum_{j=5}^k 24D\epsilon_0^{j/D} \cdot ( \log 2D+ |\log \epsilon_0^j|) \\ & = & \mathbb K_{\epsilon_0^5}(X) + C_1 + C_2 \\ \end{eqnarray*} where $C_1$ and $C_2$ denote the second and third terms of the second expression above.
To estimate the $C_1$ term set \begin{eqnarray*} D_1 = \sum_{j=5}^{\infty} \left [2n \cdot \psi(1_{[0,\epsilon_0^{j/2}]}(|D^sF(X)|))\cdot j^{-4} + j^{-6} +j^{-2} \cdot |\log \epsilon_0| \right] \end{eqnarray*} and \begin{eqnarray*} D_2 = 2n |\log \epsilon_0| \cdot \sum_{j=5}^{\infty} \psi(1_{(0,\epsilon_0^{j/2}]}(|D^sF(X)|)). \end{eqnarray*} Clearly $D_1 < \infty$, and $D_2 < \infty$ by the geometric decay assumption on $D^sF(X)$. \begin{eqnarray*} C_1 & = & \sum_ {j=5}^k [2n \cdot \psi(1_{[0,\epsilon_0^{j/2}]}(|D^sF(X)|))+ j^{-2} ] \cdot [j^{-4} + |\log \epsilon_0|] \\ & \leq & \sum_{j=5}^k 2n \cdot \psi(1_{[0,\epsilon_0^{j/2}]}(|D^sF(X)|))\cdot |\log \epsilon_0| +\\ && \sum_{j=5}^{\infty} \left [2n \cdot \psi(1_{[0,\epsilon_0^{j/2}]}(|D^sF(X)|))\cdot j^{-4} + j^{-6} +j^{-2} \cdot |\log \epsilon_0| \right] \\ & = & \left( \sum_{j=5}^k [\alpha + 2n \cdot \psi(1_{(0,\epsilon_0^{j/2}]}(|D^sF(X)|))] \cdot |\log \epsilon_0| \right) + D_1 \\ & \leq & (k-4) \alpha |\log \epsilon_0| + 2n\cdot |\log \epsilon_0| \cdot \sum_{j=5}^{\infty} \psi(1_{(0,\epsilon_0^{j/2}]}(|D^sF(X)|)) + D_1 \\ & = & \alpha |\log \epsilon_0^{k-4}| + D_1+ D_2 \\ \end{eqnarray*} Turning to the $C_2$ term, \begin{eqnarray*} C_2 & = & \sum_{j=5}^k 24D\epsilon_0^{j/D} \cdot ( \log 2D + |\log \epsilon_0^j|) \\ & \leq & 24 D \sum_{j=1}^{\infty} (\epsilon_0^{1/D})^j \cdot ( \log 2D + j |\log \epsilon_0|)\\ & < & \infty \\ \end{eqnarray*} as the series on the right-hand side pairs geometric terms with polynomial ones, forcing convergence. Set $D = \mathbb K_{\epsilon_0^5}(X) + D_1 + D_2 + \sup_{k>4} C_2$; by the convergent majorant above, $D$ is finite and independent of $k$. Putting the above inequalities together yields for any $k \in \mathbb N$, $k >4$, \begin{eqnarray*} \mathbb K_{\epsilon_0^{k+1}}(X) & \leq & \mathbb K_{\epsilon_0^5}(X) + C_1 + C_2 \\ & \leq & \mathbb K_{\epsilon_0^5}(X) + \alpha \cdot |\log \epsilon_0^{k-4}| +D_1 + D_2 + C_2 \\ & \leq & \alpha \cdot |\log \epsilon_0^{k+1}| + D.
\end{eqnarray*} This bound almost gives the estimate for any sufficiently small $\epsilon$. To finish the argument suppose $\epsilon \in (0,\epsilon_0)$. Find a $k \in \mathbb N$ such that $\epsilon_0^{k+1} \leq \epsilon < \epsilon_0^k$. Using the monotonicity of $\mathbb K_{\cdot}()$ (Lemma 2.5), \begin{eqnarray*} \mathbb K_{\epsilon}(X) & \leq & \mathbb K_{\epsilon_0^{k+1}}(X) \\ & \leq & \alpha \cdot |\log \epsilon_0^{k+1}| +D \\ & \leq & \alpha \cdot |\log \epsilon_0| + \alpha \cdot |\log \epsilon_0^k| +D\\ & \leq & \alpha |\log \epsilon| + (\alpha \cdot |\log \epsilon_0| + D).\\ \end{eqnarray*} By definition, $X$ is $\alpha$-bounded. \end{proof} \begin{remark} It may be possible to remove the condition $F(X)=0$ in Theorems 6.8 and 6.9. Theorem 6.9 would then take the following, somewhat more general form. Suppose $X$ is an $n$-tuple of operators in $(M,\varphi)$ and $F$ is a $p$-tuple of $*$-polynomials in $n$ indeterminates such that $F(X)$ is a $\beta$-bounded $p$-tuple of operators. If $\alpha = \text{Nullity}(D^sF(X))$ and $D^sF(X)$ has geometric decay, then $X$ is $(\alpha+\beta)$-bounded. Proving this would perhaps involve making changes to Lemmas 5.7 and 5.8, and then adjusting the bookkeeping in Theorems 6.8 and 6.9. I won't pursue this more general line of reasoning here. \end{remark} \begin{corollary} Suppose $X$ is an $n$-tuple of self-adjoint elements in a tracial von Neumann algebra, $F$ is a $p$-tuple of noncommutative self-adjoint $*$-polynomials in $n$ indeterminates such that $F(X)=0$, and $\alpha = \text{Nullity}(D^{sa}F(X))$. If $D^{sa}F(X)$ has geometric decay, then $X$ is $\alpha$-bounded. \end{corollary} \begin{proof} Define $L = \{(X_1 - X_1^*)/2, \ldots, (X_n - X_n^*)/2\}$ and $G = F \cup L$.
By Proposition 3.17 \begin{eqnarray*} \alpha & = & \text{Nullity}(D^{sa}F(X)) \\ & = & \text{Nullity}(D^sG(X)) \\ \end{eqnarray*} and there exists a $c>0$ such that for any $t \in (0,1)$, $\mu((0,t]) \leq \nu((0,ct])$ where $\mu$ and $\nu$ are the spectral distributions of $|D^sG(X)|$ and $|D^{sa}F(X)|$, respectively. Since $D^{sa}F(X)$ has geometric decay, Lemma 6.2 and the inequality on the spectral distributions imply that $D^sG(X)$ has geometric decay. Obviously $G(X)=0$. By Theorem 6.9, $X$ is $\alpha$-bounded. \end{proof} \begin{corollary} Suppose $X$ is an $n$-tuple of unitaries in a tracial von Neumann algebra, $F$ is a $p$-tuple of noncommutative $*$-monomials in $n$ indeterminates such that each entry of $F(X)$ is the identity operator, and $\alpha = \text{Nullity}(D^uF(X))$. If $D^uF(X)$ has geometric decay, then $X$ is $\alpha$-bounded. \end{corollary} \begin{proof} The argument proceeds exactly as in the self-adjoint case. Define $G=\{X_1^*X_1-I, \ldots, X_n^*X_n-I\}$ and $H = \langle f_i -I \rangle_{i=1}^p \cup G \subset (\mathfrak{A}_n)^{p+n}$. By Proposition 3.26 \begin{eqnarray*} \alpha & = & \text{Nullity}(D^{u}F(X)) \\ & = & \text{Nullity}(D^sH(X)) \\ \end{eqnarray*} and for any $t \in (0,1)$, $\mu((0,t]) \leq \nu((0,t])$ where $\mu$ and $\nu$ are the spectral distributions of $|D^sH(X)|$ and $|D^uF(X)|$, respectively. Since $D^uF(X)$ has geometric decay, Lemma 6.2 and the inequality on the spectral distributions imply that $D^sH(X)$ has geometric decay. Obviously $H(X)=0$. By Theorem 6.9, $X$ is $\alpha$-bounded. \end{proof} Theorem 6.9, Corollary 6.11, and Corollary 6.12 bound the $\alpha$-covering entropy in terms of the nullity/rank of the derivative provided that the derivative has geometric decay. To apply these results one must first verify the geometric decay property of the derivative and, second, compute its nullity.
Note here that simply computing the nullity will give a free entropy dimension bound by Corollaries 4.9, 4.10, and 4.11. In the context of group von Neumann algebras the geometric decay condition is related to L{\"u}ck's Determinant conjecture, while the computation of the nullity/rank is naturally linked to Atiyah's conjecture as well as Kaplansky's and Linnell's conjectures. These will be discussed in the next section. For now some simple computations can be made involving noncommutative quadratic varieties in the general tracial von Neumann algebra setting. Despite the seemingly simple setting, these will yield new nonisomorphism results and generalizations of already established bounds. \subsection{Examples} A few simple observations will be useful in the examples to come: \begin{lemma} Suppose $T = [T_1 \cdots T_n] \in M_{1\times n}(M)$ and $1 \leq i \leq n$. If $T_i \in M$ is an injective operator, then $\text{Nullity}(T)=n-1$. If in addition $T_i$ has geometric decay, then $T$ has geometric decay. \end{lemma} \begin{proof} First note that if \begin{eqnarray*} \overline{T} = \begin{bmatrix} T_1 & \cdots & T_n \\ 0 & \cdots & 0 \\ \vdots & & \vdots \\ 0 & \cdots & 0 \\ \end{bmatrix} \in M_{n\times n}(M), \end{eqnarray*} then $T^*T = \overline{T}^*\overline{T} \in M_{n \times n}(M)$. Thus it suffices to establish the nullity and geometric decay claims for $\overline{T}$. Because $|\overline{T}|$ and $|\overline{T}^*|$ have the same spectral distribution (see the proof of Lemma 2.10), it suffices to show the claims for $\overline{T}^*$. Towards this end, assume $T_i$ is injective. $|\overline{T}^*|$ is the $n \times n$ matrix whose $11$-entry is $(T_1 T_1^* + \cdots + T_n T_n^*)^{1/2}$ and whose other entries are all $0$. Define $A \in M_{n \times n}(M)$ to be the matrix whose $11$-entry is $(T_iT_i^*)^{1/2}$ and whose other entries are all $0$. It follows that $|\overline{T}^*| \geq A$.
By the assumed injectivity of $T_i$ it follows that $n-1 \leq \text{Nullity}(|\overline{T}^*|) \leq \text{Nullity}(A) = n-1$. Suppose that in addition $T_i$ has geometric decay. If $\mu$ and $\nu$ are the spectral distributions associated to $|\overline{T}^*|$ and $|A|$, then by Weyl's Inequality for positive operators (Section 2.7), for any $t$, $\mu([0,t]) \leq \nu([0,t])$. The nullities of $|\overline{T}^*|$ and $|A|$ being equal, $\mu(\{0\}) = \nu(\{0\})$, and it follows that for any $t$, $\mu((0,t]) \leq \nu((0,t])$. If $\sigma$ is the spectral distribution of $|T_i^*| \in M$, then it is straightforward to see that $\nu = n^{-1} \cdot \sigma + (n-1)/n \cdot \delta_{\{0\}}$. $\sigma$ is also the spectral distribution of $|T_i| \in M$ (again since the absolute value of an operator and its adjoint have the same spectral distribution). It is obvious from this equation, Lemma 6.3, and the assumed geometric decay of $|T_i|$ that $\nu$ has geometric decay. Thus, $\mu$ must have geometric decay. So $|\overline{T}^*|$ has geometric decay. \end{proof} \begin{lemma} If $X$ is $1$-bounded and there exists a self-adjoint element $y$ in the $*$-algebra generated by $X$ such that $\chi^{sa}(y) > -\infty$, then $vN(X)$ is strongly $1$-bounded. \end{lemma} \begin{proof} Denote by $Y$ the $(2n+1)$-tuple consisting of $y$ and the real and imaginary parts of the operators in $X$ (so $Y$ consists of self-adjoint elements). Both $X$ and $Y$ generate the same $*$-algebra. By Proposition 2.5 and the assumption that $X$ is $1$-bounded, it follows that $Y$ is $1$-bounded (as a general tuple or a tuple of self-adjoint elements). $\chi^{sa}(y) > -\infty$ so $Y$ is strongly $1$-bounded, whence $vN(X)=vN(Y)$ is a strongly $1$-bounded von Neumann algebra. \end{proof} Taking the example of two commuting operators isn't particularly interesting, but as with the computations in Section 4, it serves as a good sanity check. 
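In the same sanity-check spirit, the first claim of the row-matrix lemma above, and the Weyl-type domination used in its proof, can be illustrated in finite dimensions (a sketch with arbitrarily chosen sizes; $M_d(\mathbb C)$ with its normalized trace stands in for $(M,\varphi)$, and injectivity becomes invertibility):

```python
import numpy as np

# Finite-dimensional sketch of the row-matrix lemma: for a block row
# T = [T_1 ... T_n], the nonzero spectrum of |Tbar^*| comes from
# (sum_j T_j T_j^*)^(1/2) padded by (n-1)*d zeros, while A carries the
# singular values of the single block T_i padded by the same zeros.
rng = np.random.default_rng(0)
d, n, i = 4, 3, 0
Ts = [rng.standard_normal((d, d)) for _ in range(n)]
row = np.hstack(Ts)                                   # T as a d x (n*d) matrix
spec_row = np.concatenate([np.linalg.svd(row, compute_uv=False),
                           np.zeros((n - 1) * d)])    # spectrum of |Tbar^*|
spec_A = np.concatenate([np.linalg.svd(Ts[i], compute_uv=False),
                         np.zeros((n - 1) * d)])      # spectrum of A
# Nullity: (n-1)*d zero eigenvalues on each side, matching Nullity(T) = n-1
# after normalizing by d; Weyl's inequality gives entrywise domination of the
# sorted spectra, the finite analogue of mu([0,t]) <= nu([0,t]).
dominated = bool(np.all(np.sort(spec_row) >= np.sort(spec_A) - 1e-12))
```

The design choice here is to compare sorted spectra rather than counting functions; for finite matrices the two formulations of Weyl's inequality are equivalent.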
\begin{example} Suppose $X = \{x_1, x_2\}$ consists of commuting self-adjoint elements in $M$, $F = \{f\}$ where $f(X_1, X_2) = X_2 X_1 - X_1^* X_2^*$, and that the spectral distribution of $x_1$ has an $L^{\infty}$ density $g$ w.r.t. Lebesgue measure on the real line. The setting is exactly that of Example 4.1 except here I assume a stronger condition on the spectral distribution of $x_1$ (it implies the absence of eigenvalues condition of Example 4.1). It was observed in Example 4.1 that \begin{eqnarray*} D^{sa}F(X) & = & \begin{bmatrix} (\partial_{1}^{sa} f)(X) & (\partial_{2}^{sa}f)(X) \\ \end{bmatrix} \\ & = & \begin{bmatrix} x_2 \otimes I - I \otimes x_2 & I \otimes x_1 - x_1 \otimes I \\ \end{bmatrix} \\ & \in & M_{1\times 2}(M \otimes M^{op}). \\ \end{eqnarray*} The density of the spectral distribution of $x_1 \otimes I - I \otimes x_1$ is $g*\tilde{g}$ where $\tilde{g}(t) = g(-t)$. $g$ is compactly supported, whence $g, \tilde{g} \in L^1(\mathbb R) \cap L^{\infty}(\mathbb R)$, and thus $g*\tilde{g} \in L^{\infty}(\mathbb R)$. By Remark 6.5 it follows that $x_1 \otimes I - I \otimes x_1 \in M \otimes M^{op}$ has geometric decay. The $L^{\infty}$ density also guarantees that $x_1 \otimes I - I \otimes x_1$ is injective. By Lemma 6.13 $D^{sa}F(X)$ has nullity equal to $2-1=1$ and geometric decay. By Corollary 6.11 $X$ is $1$-bounded. $\chi^{sa}(x_1) > -\infty$ (by \cite{v1}) so $X$ is strongly $1$-bounded. Of course, $X$ generates an abelian von Neumann algebra and the fact that it is strongly $1$-bounded was determined early on in \cite{v2}. \end{example} \begin{example} Suppose $X = \{x_1, \ldots, x_n\}$ consists of unitaries in $M$ and $F = \{f\}$ where $f = A X_1^{s_1} B X_1^{s_2} \in \mathfrak{A}_n$, $A$ and $B$ are $*$-monomials in $X_2,\ldots, X_n$, and either $s_1=s_2=1$ or $s_1 = 1$ and $s_2=*$. Set $a = A(X)$ and $b = B(X)$; $a$ and $b$ are $*$-monomials in $x_2,\ldots, x_n$. Assume that $bx_1$ has an $L^{\infty}$ density w.r.t.
Lebesgue measure on the unit circle when $s_1=s_2=1$, or that $b$ has an $L^{\infty}$ density w.r.t. Lebesgue measure on the unit circle when $s_1=1$ and $s_2=*$, and that in either case $f(X)=I$. The setting is exactly that of Example 4.2 except that here I assume a stronger condition on the spectral distributions (it implies the absence of eigenvalues condition of Example 4.2). In Example 4.2 the partial derivative of $f$ with respect to the first variable was computed: \begin{eqnarray*} \partial_1^uf(X) & = & \begin{cases} x_1^*b^* \otimes bx_1 + I_M \otimes I_{M^{op}} &\mbox{if } s_1 = s_2=1 \\ (x_1 \otimes x_1^*)(b^* \otimes b - I_M \otimes I_{M^{op}}) & \mbox{if } s_1=1, s_2=*\\ \end{cases}\\ \end{eqnarray*} The $L^{\infty}$-density assumption again implies in either case that the operators have geometric decay. To see this note that by Remark 6.6 it suffices to show that $x_1^*b^*\otimes bx_1$ or $b^* \otimes b$ have $L^{\infty}$ densities w.r.t. Lebesgue measure on the unit circle. But the product of two independent unitaries, each with a spectral distribution which has an $L^{\infty}$ density (which is automatically in $L^1$) w.r.t. Lebesgue measure on the unit circle, again has a spectral distribution with an $L^{\infty}$ density w.r.t. Lebesgue measure on the unit circle. Thus, $\partial_1^uf(X)$ has geometric decay. $\partial^u_1f(X)$ is also injective by the $L^{\infty}$ density observation. By Lemma 6.13 this implies that $\text{Nullity}(D^uf(X)) = n-1$ and $D^uf(X)$ has geometric decay. Invoking Corollary 6.12, $X$ is $(n-1)$-bounded. In particular, if $n=2$, then $X$ is $1$-bounded. Using the $L^{\infty}$-density assumption on either $bx_1$ or $b$, it follows that there exists a self-adjoint element in the $*$-algebra generated by $X$ with finite free entropy. Invoking Lemma 6.14, when $n=2$, $X$ is strongly $1$-bounded; thus the von Neumann algebra generated by $X$ is strongly $1$-bounded.
\end{example} \begin{remark} It follows that if $\Gamma= \langle a, b \mid a^mb^{s_1} a^n b^{s_2} \rangle$ with $m,n \in \mathbb Z -\{0\}$ and either $s_1 =s_2 =1$ or $s_1 = -s_2 =1$, then $L(\Gamma)$ is strongly $1$-bounded. Indeed, since every nontrivial group element in $\Gamma$ gives rise to a Haar unitary in the left regular representation, this falls into the rubric of Example 6.2. As noted before, when $s_1 = -s_2=1$, this is the Baumslag-Solitar group and it was shown in \cite{v2} and \cite{gs} that this von Neumann algebra is strongly $1$-bounded by using the normalizing relations in a direct way. This example can in part be rederived from the general group von Neumann algebra results in the following section. \end{remark} \begin{lemma} Suppose $m \leq n$ and $T \in M_{m \times n}(M)$ is upper triangular. If $T_{ii} \in M$ is injective for $1 \leq i \leq m$, then $\text{Nullity}(T) = n-m$. If in addition, each $T_{ii}$ has geometric decay, then $T$ has geometric decay. \end{lemma} \begin{proof} For the first claim, $\text{Rank}(T) = m$ by the injectivity of each $T_{ii}$ and the upper triangular form of $T$. The nullity statement follows. To establish the geometric decay property, by Lemma 6.3 this is equivalent to showing that $T$ is of determinant class. But from the properties of the Fuglede-Kadison-L{\"u}ck determinant w.r.t. upper triangularity (Section 2.6) and the assumed geometric decay of the $T_{ii}$ it follows that $\det_{FKL}(T_{jj}) >0$ for all $j$, so that \begin{eqnarray*} \text{det}_{FKL}(T) & = & \prod_{j=1}^m \text{det}_{FKL}(T_{jj}) \\ & > & 0.\\ \end{eqnarray*} \end{proof} \begin{example} Recall Example 4.4 with the $n$-tuple $X$ of self-adjoint elements and the $(n-1)$-tuple of polynomial relations $F=\{f_1,\ldots, f_{n-1}\}$. Assume further that for each $i$, $\partial^{sa}_if_i(X)$ has geometric decay and one of the elements of $X$ has finite free entropy (as a singleton tuple).
This condition is satisfied when $f_i = X_i X_{i+1} - X^*_{i+1} X^*_i$ for $1 \leq i \leq n-1$ and each element in $X$ has an $L^{\infty}$-density w.r.t. Lebesgue measure. From Lemma 6.16 $D^{sa}F(X) \in M_{(n-1) \times n}(M \otimes M^{op})$ has nullity equal to $1$ and has geometric decay. Thus, by Corollary 6.11 it follows that $X$ is $1$-bounded. The condition of finite free entropy for one of the elements of $X$ shows that $X$ is strongly $1$-bounded. This gives another way to see that tensor products of tracial von Neumann algebras are strongly $1$-bounded von Neumann algebras, a result first obtained in \cite{g}. One can use other polynomials aside from commutators and do this in the unitary case, provided that the geometric decay conditions are satisfied (e.g., the noncommutative words in Example 6.2). I'll say more about these staggered relations in the next section. \section{Group von Neumann Algebras Applications} In this section I want to apply the results in Section 6 to discrete group von Neumann algebras whose underlying groups satisfy additional, seemingly natural (but not so easy to verify) properties. There are two group properties which I want to use. The first, L{\"u}ck's determinant property, guarantees that a matrix of elements in the integral group ring of a discrete group is of determinant class. The second, Linnell's $L^2$-property, concerns $0$ as it occurs in the spectrum of such operators. It allows, after nondegeneracy checks, a computation of the rank of the associated derivative in the case of a single group relation. Throughout this section $\Gamma$ denotes a discrete, countable group. \subsection{Determinant Conjecture} Denote by $\mathbb Z \Gamma \subset L(\Gamma)$ the integral group ring generated by the group unitaries in the left regular representation, $\{u_g \in B(\ell^2(\Gamma)): g \in \Gamma\}$.
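Before stating the conjecture, a concrete illustration (my own example, not from the text): for $\Gamma = \mathbb Z$, an element of $\mathbb Z\Gamma$ is an integer Laurent polynomial in the generating unitary $u$, and its Fuglede-Kadison-L{\"u}ck determinant is the Mahler measure of that polynomial, computed by integrating $\log|\cdot|$ over the unit circle. For $x = u - 2$, Jensen's formula gives $\max(|2|,1) = 2$, consistent with the conjectured lower bound of $1$:

```python
import numpy as np

# Illustrative computation (not from the text): for Gamma = Z and x = u - 2
# in the integral group ring, det_FKL(x) = exp( int_0^1 log|e^{2 pi i t} - 2| dt ),
# the Mahler measure of z - 2, which Jensen's formula evaluates to max(2, 1) = 2.
N = 200000
t = (np.arange(N) + 0.5) / N                     # midpoint rule on [0, 1]
vals = np.log(np.abs(np.exp(2j * np.pi * t) - 2.0))
det_fkl = float(np.exp(vals.mean()))             # numeric Mahler measure
```

The integrand is smooth and periodic, so the midpoint rule converges very quickly; the point is only that the determinant of this integer group-ring element is indeed $\geq 1$.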
L{\"u}ck's determinant conjecture states that for any $x \in M_k(\mathbb Z\Gamma) \subset M_k(L(\Gamma))$, \begin{eqnarray*} \text{det}_{FKL}(x) \geq 1. \end{eqnarray*} \noindent I'll say that $\Gamma$ has L{\"u}ck's property if the above inequality holds for any $k \in \mathbb N$ and any $x \in M_k(\mathbb Z \Gamma)$. In \cite{luck} L{\"u}ck showed that the conjecture is true for all residually finite groups (he used $\det_{FKL}$ there to compute the $L^2$-Betti numbers of residually finite groups in terms of the ordinary Betti numbers). This was extended in \cite{schick} to residually amenable groups. More recently it was shown in \cite{es} that all sofic groups have L{\"u}ck's determinant property (see also the operator algebraic approach in \cite{bs}). \begin{remark} As far as I know, there are no known examples of non-sofic discrete groups. Amenable, residually finite, and residually amenable discrete countable groups are all known to be sofic. \end{remark} \begin{remark} Sofic groups are closed under a number of natural group operations including inverse and direct limits, subgroups, free products, amenable extensions, and direct products (\cite{es}). It is also easy to see that if $\Gamma$ is sofic, then so is $\Gamma^{op}$, the opposite group of $\Gamma$. It follows that $\Gamma$ is sofic iff $\Gamma \times \Gamma^{op}$ is sofic.\end{remark} By Corollary 6.4 if $\Gamma$ has L{\"u}ck's property, then every element of $\mathbb Z \Gamma$ has geometric decay. Recall that $\mathfrak{A}_n$ denotes the universal unital, complex $*$-algebra on $n$ indeterminates. Applying the unitary calculus developed in Section 3 (Remark 3.23) yields the following: \begin{lemma} Suppose $\Gamma$ is a countable, discrete sofic group, $F$ is a $p$-tuple of $*$-monomials in $\mathfrak{A}_n$, and $g_1,\ldots, g_n \in \Gamma$. If $X = \{u_{g_1},\ldots, u_{g_n}\} \subset L(\Gamma)$, $F(X)=I$, and $\alpha = \text{Nullity}(D^uF(X))$, then $X$ is $\alpha$-bounded.
\end{lemma} \begin{proof} By Corollary 6.12 it suffices to show that $D^uF(X)$ has geometric decay. By Definition 3.22 and Remark 3.24, \begin{eqnarray*} |D^uF(X)|^2 & = & D^uF(X)^*D^uF(X) \\ & \in & M_n(\mathbb Z(\Gamma \times \Gamma^{op})) \\ & \subset & M_n(L(\Gamma) \otimes L(\Gamma)^{op})\\ & = & M_n(L(\Gamma \times \Gamma^{op})).\\ \end{eqnarray*} \noindent By the remark above $\Gamma \times \Gamma^{op}$ is sofic since $\Gamma$ is. Thus, $\det_{FKL}(|D^uF(X)|^2) \geq 1 > 0$, so $|D^uF(X)|^2$ has geometric decay; this implies that $|D^uF(X)|$ does as well (Lemma 6.3), whence, by definition, $D^uF(X)$ does. \end{proof} \subsection{Linnell's $L^2$-Property} Linnell posed the following in \cite{l}: \begin{linnell} If $\Gamma$ is a torsion-free discrete group, then any nonzero $x \in \mathbb C\Gamma$ is injective on $\ell^2(\Gamma)$. \end{linnell} I will say that a countable, discrete group $\Gamma$ has Linnell's $L^2$-property if it satisfies the conclusion of the conjecture above. Linnell's conjecture is related to other open problems/conjectures. An older and closely related statement is Kaplansky's conjecture: \begin{kaplansky} If $\Gamma$ is a torsion-free discrete group, then for any nonzero $x,y \in \mathbb C\Gamma$, $xy \neq 0$. \end{kaplansky} Both conjectures are naturally connected to Atiyah's conjecture. The interested reader can read more about this conjecture as well as progress on the two above conjectures in a variety of places, e.g., \cite{luckbook}. Linnell showed in \cite{l} that a left orderable group always possesses Linnell's $L^2$-property. A group $\Gamma$ is said to be orderable if there exists a strict linear ordering $<$ on $\Gamma$ which preserves left and right multiplication, i.e., for any $a, b, c \in \Gamma$ if $a<b$, then $ac < bc$ and $ca < cb$. One can relax the conclusion in the definition so that one only assumes $ac<bc$, in which case $\Gamma$ is said to be right-ordered, or $ca < cb$, in which case $\Gamma$ is said to be left-ordered.
A group is right-ordered iff it is left-ordered. Orderability assumes one ordering can fulfill both roles. Clearly any orderable group is both left- and right-ordered. Also note that any left- or right-ordered group is necessarily torsion-free. Here are some examples of left orderable groups, which will, by the discussion above, satisfy Linnell's $L^2$-property: \begin{example} Residually torsion-free nilpotent groups are left orderable. Notice that this subclass is contained in the class of residually amenable groups and thus they are all sofic. Examples of residually torsion-free nilpotent groups include elementary, torsion-free amenable groups \cite{l0}, free groups, the braid groups, and more recently, Hydra groups (\cite{dr}) and some of their generalizations (\cite{bm}). \end{example} \begin{example} By a result independently obtained by Brodski{\u i} and Howie (\cite{b},\cite{h}), all torsion-free, one-relator groups are locally indicable. By \cite{bh}, locally indicable groups are left orderable. Thus, all torsion-free one-relator groups are left orderable. \end{example} \begin{example} Left orderable groups are closed under subgroups, the opposite operation, free products, direct products, and extensions. \end{example} \begin{remark} If $\Gamma$ satisfies Linnell's $L^2$-property, then $\Gamma$ must be torsion-free. In particular if $g \in \Gamma$ is not the group identity and $u_g \in L(\Gamma)$ is the canonical unitary associated to $g$, then $u_g$ is Haar (i.e., has the same spectral distribution as Haar measure on the unit circle of $\mathbb C$). It is easy to see from this observation and the entropy formula for a single self-adjoint in \cite{v1} that the real part of $u_g$ has finite free entropy. From this and Lemma 6.14 it follows that if $X$ is a finite tuple of unitaries associated to group elements and $X$ is $1$-bounded, then $X$ is strongly $1$-bounded.
\end{remark} \begin{proposition} Suppose $\Gamma$ is a group which satisfies Linnell's $L^2$-property and $X=\{u_{g_1},\ldots, u_{g_n}\}$ with $g_1,\ldots, g_n \in \Gamma$. If there exists a nonempty, reduced word $f$ on $n$ letters such that $f(g_1,\ldots, g_n)=e_{\Gamma}$ and the $i$th indeterminate appears in $f$, then there exists a $*$-monomial $w \in \mathfrak A_n$ such that $w(X)=I$ and $\partial^u_iw(X)$ is injective. \end{proposition} \begin{proof} Denote by $\ell$ the length function on $\mathbb F_n$, i.e., for $g \in \mathbb F_n$, $\ell(g)$ is the length of $g$ represented as a reduced word. Set \begin{eqnarray*} m = \min\{\ell(w): w(g_1,\ldots,g_n)=e_{\Gamma}, w \neq e_{\mathbb F_n}, w \in \mathbb F_n\}. \end{eqnarray*} $m \in \mathbb N$ is well-defined by the existence of $f$. Pick a $v \in \mathbb F_n$ such that $\ell(v)=m$, $v(g_1,\ldots,g_n)= e_{\Gamma}$, and $v \neq e_{\mathbb F_n}$. There exists a $*$-monomial $w \in \mathfrak{A}_n$ such that $\ell_0(w)=m$, for some $i$, $X_i$ or $X_i^*$ appears in $w$, and $w(X)=w(u_{g_1},\ldots, u_{g_n}) =I$, where $\ell_0$ is the length function defined on $*$-monomials in $\mathfrak{A}_n$ (equivalently, $\ell_0$ is the degree of the $*$-monomial). This is obtained by taking $v$ (which is a sequence in $n$ indeterminates and labelled inverses), replacing the $i$th indeterminate with $X_i$, the inverses with adjoints, and concatenating the terms in the sequence to produce a product in $\mathfrak{A}_n$ (recall that $X_1,\ldots, X_n$ are the canonical generators of $\mathfrak{A}_n$). By permuting the indices I can assume without loss of generality that $i=1$. To complete the proof I have to show that $\partial^u_1w(X)$ is injective.
Set $B=\{X_2,\ldots, X_n\}$ and write $w = w_1X_1^{j_1} \cdots w_p X_1^{j_p} w_{p+1}$ where $p \in \mathbb N$ (since $X_1$ appears in $w$), $w_1,\ldots,w_{p+1}$ are (possibly empty) words in the $*$-semigroup generated by $B$, $j_1,\ldots, j_p \in \{*,1\}$, and for any $1 \leq k \leq p-1$, $j_k \neq j_{k+1}$ only if $w_{k+1} \neq I$. This expression of $w$ is possible since $w$ is derived from the reduced word $v$ of minimal length. Written in this way, it follows that $1 \leq m = \ell_0(w) = p +\sum_{k=1}^{p+1} \ell_0(w_k)$. From the unitary calculus formula in Lemma 3.20 and Remark 3.24, $(\partial_1^uw)(X)$ equals \begin{eqnarray*} \sum_{k=1}^p (-1)^{\delta_{j_k, *}}((u_{g_1}^{\delta_{*, j_k}})^* w_{k+1}(X) \cdots w_p(X) u_{g_1}^{j_p} w_{p+1}(X))^* \otimes ((u_{g_1}^{\delta_{*, j_k}})^* w_{k+1}(X) \cdots w_p(X) u_{g_1}^{j_p} w_{p+1}(X))^{op}.\\ \end{eqnarray*} Consider the operator $T$ obtained from the right-hand sides of the elementary tensors above: \begin{eqnarray*} T & = & \sum_{k=1}^p (-1)^{\delta_{j_k, *}} \cdot ((u_{g_1}^{\delta_{j_k, *}})^* w_{k+1}(X) \cdots w_p(X) u_{g_1}^{j_p} w_{p+1}(X))^{op} \\ & \in & \mathbb Z(\Gamma^{op}) \\ & \subset & \mathbb C(\Gamma^{op}). \\ \end{eqnarray*} It is straightforward to check that $T$ has the same noncommutative $*$-moments as $(\partial_1^uw)(X)$. Moreover, if two operators $a, b$ in respective tracial von Neumann algebras have the same noncommutative $*$-moments and $a$ is injective, then so is $b$. Applying this observation for $a=T$ and $b=\partial^u_1w(X)$, if I can show that $T$ is injective, then this will show the injectivity of $\partial^u_1w(X)$ and complete the proof. Towards this end, notice that the terms in the expansion of $T$ above are pairwise orthogonal (as elements in $L(\Gamma^{op}) \subset \ell^2(\Gamma^{op})$).
Indeed, if this is not the case then for some $1 \leq r < s \leq p$, $j_r = j_s$ and \begin{eqnarray*} &(u_{g_1}^{\delta_{j_r, *}})^* w_{r+1}(X) \cdots w_p(X) u_{g_1}^{j_p} w_{p+1}(X) = (u_{g_1}^{\delta_{j_s, *}})^* w_{s+1}(X) \cdots w_p(X) u_{g_1}^{j_p} w_{p+1}(X).&\\ \end{eqnarray*} Thus, \begin{eqnarray*} (u_{g_1}^{\delta_{j_r, *}})^* w_{r+1}(X) \cdots w_s(X) u_{g_1}^{j_s} & = & (u_{g_1}^{\delta_{j_s, *}})^*.\\ \end{eqnarray*} Since $j_r =j_s$, this implies \begin{eqnarray*} w_{r+1}(X) \cdots w_s(X) u_{g_1}^{j_s} & = & I.\\ \end{eqnarray*} There is thus a nontrivial $*$-monomial $g$ of length $1 \leq s-r+ \sum_{k=r+1}^s \ell_0(w_k) < m$ such that $g(X)=I$; $g$ yields a reduced word $w_0 \in \mathbb F_n$ for which $w_0(g_1,\ldots,g_n)=e_{\Gamma}$ and $1\leq \ell(w_0) = s-r+ \sum_{k=r+1}^s \ell_0(w_k) < m$. This contradicts the minimality of $m$. It follows that $T$ is nonzero as it is a nonempty sum of orthogonal, nonzero vectors. $\Gamma$ has Linnell's $L^2$-property iff $\Gamma^{op}$ has Linnell's $L^2$-property. Thus, $\Gamma^{op}$ has Linnell's $L^2$-property. Since $T \in \mathbb C(\Gamma^{op})$ is nonzero, it is injective and this finishes the proof. \end{proof} \begin{remark} The one place where I used the assumption that $\Gamma$ has Linnell's $L^2$-property was in concluding that nontriviality of $\partial_1^uw(X)$ implies that it is injective. This seems excessive, as it invokes a global group property for just one operator (obtained through partial differentiation) which is naturally and concretely expressed in terms of the group relation. \end{remark} \begin{corollary} Suppose $\Gamma$ is a sofic group which satisfies Linnell's $L^2$-property, has a finite generating tuple $\{g_1,\ldots, g_n\}$, and $X=\{u_{g_1},\ldots, u_{g_n}\} \subset L(\Gamma)$. $\Gamma \not\simeq \mathbb F_n$ iff $X$ is $(n-1)$-bounded. \end{corollary} \begin{proof} Suppose $\Gamma \not\simeq \mathbb F_n$. Denote by $a_1,\ldots, a_n$ the canonical set of generators for $\mathbb F_n$.
By universality there exists a surjective group homomorphism $\pi: \mathbb F_n \rightarrow \Gamma$ such that $\pi(a_i) = g_i$. Since $\Gamma \not\simeq \mathbb F_n$, $\pi$ cannot be injective, so there exists a $b \in \ker \pi$ which is not the identity. $b$ yields a non-empty reduced word $f$ on $n$ letters such that $f(g_1,\ldots, g_n) = e_{\Gamma}$. Fix some index $1 \leq j \leq n$ such that the $j$th indeterminate appears in $f$. By Proposition 7.5 there exists a nontrivial $*$-monomial $w \in \mathfrak{A}_n$ such that $w(X) =I$ and $\partial_j^uw(X)$ is injective. By Lemma 6.13 $\alpha = \text{Nullity}(D^uw(X)) = n-1$. By Lemma 7.3 $X$ is $(n-1)$-bounded. Conversely, suppose $X$ is $(n-1)$-bounded. Recall that $\delta_0()$ is a $*$-algebra invariant and $\delta_0(G)$ is well-defined for a finitely generated discrete group (Section 2.3). By the computation of the free entropy dimension for freely independent self-adjoint variables in \cite{v2}, $\delta_0(\mathbb F_n) = n$ for $n \in \mathbb N$. So if $\Gamma \simeq \mathbb F_n$, then $n-1 \geq \delta_0(X)=\delta_0(\Gamma) = \delta_0(\mathbb F_n) = n$, which is preposterous. $\Gamma \not\simeq \mathbb F_n$ as desired. \end{proof} \begin{corollary} Suppose $\Gamma$ is a sofic group with two generators, satisfies Linnell's $L^2$-property, and $\Gamma$ is nontrivial. The following are equivalent: \begin{enumerate}[(1)] \item $\Gamma \not\simeq \mathbb F_2$. \item $L(\Gamma) \not\simeq L(\mathbb F_2)$. \item $\delta_0(X) = 1$ for any finite set of generators $X$ for $L(\Gamma)$. \item $L(\Gamma)$ is strongly $1$-bounded. \end{enumerate} \end{corollary} \begin{proof} $ $ (1) $\Rightarrow$ (4): If (1) holds, then by Corollary 7.7 the pair of canonical generating unitaries in $L(\Gamma)$ is $1$-bounded. By Remark 7.4 $L(\Gamma)$ is strongly $1$-bounded. (4) $\Rightarrow$ (3): Suppose $X$ is any finite set of generators for $L(\Gamma)$. By \cite{j3} and Proposition 2.5 $X$ is $1$-bounded. It follows (Section 2) that $\delta_0(X) \leq 1$.
Since $\Gamma$ is sofic, by \cite{gs} $L(\Gamma)$ is embeddable into an ultraproduct of the hyperfinite $\mathrm{II}_1$-factor. Moreover, since $\Gamma$ is torsion free and nontrivial, $L(\Gamma)$ is diffuse. So by \cite{j0} and Proposition 2.5, $\delta_0(X) \geq 1$. Thus, $\delta_0(X)=1$. (3) $\Rightarrow$ (2): Suppose for contradiction that $L(\Gamma) \simeq L(\mathbb F_2)$; then the assumed isomorphism identifies the canonical traces (since $L(\mathbb F_2)$ has a unique tracial state). Hence, there exists a 2-tuple $X$ of freely independent semicirculars (w.r.t.\ the canonical trace on $L(\Gamma)$) which generates $L(\Gamma)$. By \cite{v1} $\delta_0(X) = \delta_0^{sa}(X)=2 \neq 1$, which violates (3). (2) $\Rightarrow$ (1): If $\Gamma \simeq \mathbb F_2$, then $L(\Gamma) \simeq L(\mathbb F_2)$. Take the contrapositive. \end{proof} Recall that any nontrivial element $w$ (i.e., non-identity element) in a free group can be written as $w=v^m$ for some maximal $m \in \mathbb N$ and that subject to this maximality condition, $v$ is unique. If $m >1$ then $w$ is said to be a proper power; for example, in $\mathbb F_2$ with generators $g_1, g_2$, the element $(g_1g_2)^2$ is a proper power while $g_1g_2$ is not. Recall also that a positive element in a free group on $n$ generators $g_1,\ldots, g_n$, is an element of the form $g_{j_1} \cdots g_{j_d}$ where $d \in \mathbb N$, $1 \leq j_1, \ldots, j_d \leq n$ (e.g., $g_1g_2g_1$ is positive while $g_1g_2^{-1}$ is not). A positive $k$-relator group is simply a group with a presentation on finitely many generators and $k$ positive relators. \begin{corollary} If $\Gamma$ is a sofic, one-relator group on $n$ generators whose relator is a nontrivial, non-proper power, then $\Gamma$ is $(n-1)$-bounded. If $\Gamma$ is a one-relator group on $n$ generators whose relator word is nontrivial, positive and not a proper power, then $\Gamma$ is $(n-1)$-bounded. \end{corollary} \begin{proof} For the first claim, by \cite{kms} the relator word is not a proper power iff $\Gamma$ is torsion free.
$\Gamma$ is a torsion-free one-relator group, so by Example 7.2 $\Gamma$ is left orderable and thus by \cite{l} satisfies Linnell's $L^2$-property. $\Gamma$ is therefore a sofic group which satisfies Linnell's $L^2$-property. Since $\mathbb F_n$ is Hopfian, $\Gamma \not\simeq \mathbb F_n$. By Corollary 7.7, $\Gamma$ is $(n-1)$-bounded. To establish the second claim, by the first claim it is enough to show that the group is sofic. By \cite{baum} a one-relator group whose relator is a positive word is residually solvable, whence residually amenable, and thus sofic by \cite{es}. \end{proof} \begin{corollary} If $\Gamma$ is a sofic, one-relator group on 2 generators whose relator is nontrivial and not a proper power, then $L(\Gamma)$ is strongly 1-bounded. In particular, if $\Gamma$ is a positive one-relator group on 2 generators whose relator is nontrivial and not a proper power, then $L(\Gamma)$ is strongly $1$-bounded. \end{corollary} \begin{proof} For the first claim, by Corollary 7.9 $\Gamma$ is $1$-bounded. It is easy to show that $\Gamma \neq \{e\}$, and by Remark 7.4 this implies $L(\Gamma)$ is strongly $1$-bounded. The second claim follows from the first provided soficity can be established. As in Corollary 7.9 this follows from the residual solvability of positive relator groups established in \cite{baum} and \cite{es}. \end{proof} \begin{remark} By \cite{murasugi} it turns out that the center of a one-relator group $\Gamma$ on $2$ generators has the following dichotomy: it is either trivial (e.g., the Baumslag-Solitar groups BS(n,m) when $|n| \neq |m|$ were shown in \cite{ys} to be i.c.c.), or a copy of $\mathbb Z$. The result also shows that when the number of generators is greater than $2$, then the center is always trivial. \end{remark} \begin{lemma} Suppose $\Gamma$ is a group with generators $a_1,\ldots, a_n$ and $f_1,\ldots, f_{n-1}$ are reduced, nonempty words such that $f_i(a_1,\ldots, a_n)=e_{\Gamma}$, where the $i$th indeterminate appears in $f_i$ and the first $i-1$ indeterminates do not appear in $f_i$.
If $\Gamma$ is sofic and has Linnell's $L^2$-property, then $L(\Gamma)$ is strongly $1$-bounded. \end{lemma} \begin{proof} Denote by $X$ the $n$-tuple of canonical group unitaries of $L(\Gamma)$ associated to $a_1,\ldots, a_n$. By Proposition 7.5 and the fact that $\Gamma$ satisfies Linnell's $L^2$-property, for each $i$ I can associate a $*$-monomial $w_i \in \mathfrak A_n$ such that $w_i(X)=I$ and $\partial_i^u w_i(X)$ is injective. Moreover, from the proof of Proposition 7.5, it is easy to see that the indeterminates $X_1,\ldots, X_{i-1}$ do not appear in the expansion of $w_i$. Thus, writing $F=(w_1,\ldots, w_{n-1})$, \begin{eqnarray*} D^uF(X) & = & \begin{bmatrix} \partial^u_1w_1(X) & \cdots & \cdots & \cdots & \partial^u_n w_1(X) \\ 0 & \ddots & & & \vdots \\ \vdots & \ddots & \ddots & & \vdots \\ 0 & \cdots & 0 & \partial^u_{n-1}w_{n-1}(X) &\partial^u_n w_{n-1}(X) \\ \end{bmatrix} \\ & \in & M_{n-1,n}(L(\Gamma) \otimes L(\Gamma^{op})).\\ \end{eqnarray*} By Lemma 6.16 $\text{Nullity}(D^uF(X)) = n-(n-1)=1$. Because $\Gamma$ is sofic, by Lemma 7.3 $X$ is $1$-bounded. By Remark 7.4 $X$ is strongly $1$-bounded, whence $L(\Gamma)$ is strongly $1$-bounded. \end{proof}
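To make the hypotheses of the preceding corollaries concrete, here is a standard example (the specific group is our illustration; its listed properties are well known).

\begin{example} Consider the trefoil knot group $\Gamma = \langle a, b \mid a^2 = b^3 \rangle$, a one-relator group on $2$ generators. Its relator $a^2b^{-3}$ is nontrivial and not a proper power, so $\Gamma$ is torsion free. Moreover, $\Gamma$ is isomorphic to the braid group $B_3$, which is linear and hence residually finite, so $\Gamma$ is sofic. The corollaries above then give that $\Gamma$ is $1$-bounded and that $L(\Gamma)$ is strongly $1$-bounded. \end{example}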
\section{Introduction} It is well-known following Drinfeld\cite{Dri} that the semiclassical objects underlying quantum groups are Poisson-Lie groups. This means a Lie group together with a Poisson bracket such that the group product is a Poisson map. The infinitesimal notion of a Poisson-Lie group is a Lie bialgebra, meaning a Lie algebra ${\mathfrak{g}}$ equipped with a `Lie cobracket' $\delta:{\mathfrak{g}}\to {\mathfrak{g}}\otimes{\mathfrak{g}}$ forming a Lie 1-cocycle and such that its adjoint is a Lie bracket on ${\mathfrak{g}}^*$. Of the many ways of thinking about quantum groups, this is a `deformation' point of view in which the coordinate algebra on a group is made noncommutative, with commutator controlled at lowest order by the Poisson bracket. In recent years the examples initially provided by quantum groups have led to a significant `quantum groups approach' to noncommutative differential geometry in which the next layers of geometry beyond the coordinate algebra are considered, and often classified with the aid of quantum group symmetry. The most important of these is the differential structure, expressed normally as the construction of a bimodule $\Omega^1$ of `1-forms' over the (noncommutative) coordinate algebra and a map ${\rm d}$ for the exterior differential. These are typically extended to a differential graded algebra $(\Omega, {\rm d})$ of all degrees with ${\rm d}^2=0$. The semiclassical analysis of what this data means at the Poisson level is known: it is a {\em Poisson-compatible preconnection} $\nabla$. The systematic analysis in \cite{BegMa:semi} found, in particular, a no-go theorem proving the non-existence of a left and right translation-covariant differential structure of the classical dimension on standard quantum group coordinate algebras ${\mathbb{C}}_q[G]$ when $G$ is the Lie group of a complex semisimple Lie algebra ${\mathfrak{g}}$.
A similar result in \cite{BegMa:twi} showed the non-existence of ad-covariant differential structures of classical dimension on enveloping algebras of semisimple Lie algebras. Such results tied in with experience at the algebraic level, where one often has to go to higher-dimensional $\Omega^1$, and \cite{BegMa:semi,BegMa:twi} also provided an alternative, namely to consider non-associative exterior algebras corresponding to $\nabla$ with curvature. This has been taken up further in \cite{BegMa15}. The present paper revisits the analysis, focusing more clearly on the Lie algebraic structure. For left-covariant differentials we find this time a clean and positive result which both classifies and constructs, at the semiclassical level, differential calculi of the correct classical dimension on quantum groups for which the dual Lie algebra ${\mathfrak{g}}^*$ has a certain property linked to being solvable. More precisely, our result (Corollary~\ref{preliecorol}) is that the semiclassical data exists if and only if ${\mathfrak{g}}^*$ admits a pre-Lie structure $\Xi:{\mathfrak{g}}^*\otimes{\mathfrak{g}}^*\to{\mathfrak{g}}^*$. Here a pre-Lie structure is a product obeying certain axioms such that the commutator defines a Lie algebra, such objects also being called left-symmetric or Vinberg algebras, see \cite{Cartier} and \cite{Bu2} for two reviews. Our result presents no contradiction when ${\mathfrak{g}}$ is semisimple and includes quantum groups such as ${\mathbb{C}}_q[SU_2]$, where we exhibit the pre-Lie structure that corresponds to its known but little understood 3D calculus in \cite{Wor}. Even better, the duals ${\mathfrak{g}}^*$ for all quantum groups ${\mathbb{C}}_q[G]$ are known to be solvable\cite{Ma:mat} and it may be that all solvable Lie algebras admit pre-Lie algebra structures, a question posed by Milnor, see \cite{Bu2}.
This suggests for the first time a route to the construction of a left-covariant differential calculus for all ${\mathbb{C}}_q[G]$, currently an unsolved problem. We build on the initial analysis of this example in \cite{BegMa:semi}. Next, if the calculus is both left and right covariant (bicovariant), we find an additional condition (\ref{Xi-bi}) on $\Xi$ which we relate to infinitesimal or Lie-crossed modules with the coadjoint action, see Theorems~\ref{almostcross} and \ref{zero-curvature}. The paper also covers in detail the important case of the enveloping algebra $U({\mathfrak{m}})$ of a Lie algebra ${\mathfrak{m}}$, viewed as a quantisation of ${\mathfrak{m}}^*$. This is a Hopf algebra so, trivially, a quantum group, and our theory applies with ${\mathfrak{g}}={\mathfrak{m}}^*$ an abelian Poisson-Lie group with its Kirillov-Kostant Poisson bracket. In fact our result in this example turns out to extend canonically to all orders in deformation theory, not just the lowest semiclassical order. We show (Proposition~\ref{envel}) that $U({\mathfrak{m}})$ admits a connected bicovariant differential exterior algebra of classical dimension if and only if ${\mathfrak{m}}$ admits a pre-Lie algebra structure. The proof builds on results in \cite{MT}. We do not require ad-invariance but the result again excludes the case that ${\mathfrak{m}}$ is semisimple, since semisimple Lie algebras do not admit pre-Lie structures. The ${\mathfrak{m}}$ that are allowed do, however, include solvable Lie algebras of the form $[x_i,t]=x_i$ which have been extensively discussed for the structure of `quantum spacetime' (here $x_i, t$ are now viewed as space and time coordinates respectively), most recently in \cite{BegMa14}. In the 2D case we use the known classification of $2$-dimensional pre-Lie structures over ${\mathbb{C}}$ in \cite{Bu} to classify all possible left-covariant differential structures of classical dimension.
This includes the standard calculus previously used in \cite{BegMa14} as well as some other differential calculi in the physics literature\cite{SMK}. We also cover the first steps of the noncommutative Riemannian geometry stemming from the choice of these different differential structures, namely the allowed quantum metrics. The choice of calculus highly constrains the possible quantum metrics, a new phenomenon identified in \cite{BegMa14} and analysed in greater generality in \cite{BegMa15}. The 4D case and its consequences for quantum gravity are explored in our related paper\cite{MaTao:cos}. We then apply our theory to the quantisation of the tangent bundle and cotangent bundle of a Poisson-Lie group. In Section 5, we recall the use of the Lie bialgebra ${\mathfrak{g}}$ of a Poisson-Lie group $G$ to construct the tangent bundle as a bicrossproduct of Poisson-Lie groups and its associated `bicross-sum' of Lie bialgebras\cite{Ma:book}. Our results (see Theorem~\ref{prelie-T}) then suggest a full differential structure, not only at the semiclassical level, on the associated bicrossproduct quantum groups ${\mathbb{C}}[G]{\blacktriangleright\!\!\!\triangleleft} U_\lambda({\mathfrak{g}}^*)$ in \cite{Ma:mat, Ma:bicross, Ma:book}. We prove this in Proposition~\ref{propC(G)U(g*)} and give ${\mathbb{C}}[SU_2]{\blacktriangleright\!\!\!\triangleleft} U_\lambda(su_2^*)$ in detail. Indeed, these bicrossproduct quantum groups were exactly conceived in the 1980s as quantum tangent spaces of Lie groups. Meanwhile, in Section 6, we use a pre-Lie structure on ${\mathfrak{g}}^*$ to make ${\mathfrak{g}}$ into a braided Lie bialgebra\cite{Ma:blie} (Lemma~\ref{blie}). The Lie bialgebra of the cotangent bundle becomes a `bosonization' in the sense of \cite{Ma:blie} and we construct in some cases a natural preconnection for the semiclassical differential calculus.
As before, we cover abelian Lie groups with the Kirillov-Kostant Poisson bracket and quasitriangular Poisson-Lie groups as examples. Most of the work in the paper is at the semiclassical level but occasionally we have results about differentials at the Hopf algebra level as in \cite{Wor}, building on our recent work \cite{MT}. In the Appendix, inspired by the construction on the cotangent bundle in Section 6, we generalise the construction there to any cocommutative Hopf algebra. This allows us to specialise to the group algebra of a finite group, which is far from the Poisson-Lie setting of the body of the paper. This is a first step in a general braided Hopf algebra construction of differential calculi and exterior algebras to be studied elsewhere. \section{Preliminaries} \subsection{Deformation of noncommutative differentials} We follow the setting in \cite{BegMa:semi}. Let $M$ be a smooth manifold. One can deform the commutative multiplication on $C^\infty(M)$ to an associative multiplication $x\bullet y$ where $x\bullet y=xy+O(\lambda).$ Assume that the commutator can be written as $[x,y]_\bullet=x\bullet y-y\bullet x=\lambda\{x,y\}+O(\lambda^2)$ and that we are working in a deformation setting where we can equate order by order in $\lambda$. One can show that $\{\ ,\ \}$ is a Poisson bracket and thus $(M,\{\ ,\ \})$ is a Poisson manifold. As vector spaces, the $n$-forms $\Omega^n(M)$ and the exterior algebra $\Omega(M)$ take their classical values. But now $\Omega^1(M)$ is assumed to be a bimodule over $(C^\infty (M),\bullet)$ up to order $O(\lambda^2)$ with $x\bullet\tau=x\tau+O(\lambda)$ and $\tau\bullet x=\tau x+O(\lambda)$. The deformed ${\rm d}$ operator ${\rm d} _\bullet:\Omega^n(M)\to\Omega^{n+1}(M)$ is given by ${\rm d}_\bullet x={\rm d} x+O(\lambda)$ and is also a graded derivation to order $O(\lambda^2).$ Define $\gamma:C^\infty (M)\otimes\Omega^1(M)\to\Omega^1(M)$ by \begin{equation*} x\bullet\tau-\tau\bullet x=[x,\tau]_\bullet=\lambda\gamma(x,\tau)+O(\lambda^2).
\end{equation*} It was shown in~\cite{Haw,BegMa:semi} that being a bimodule up to order $O(\lambda^2)$ requires that the map $\gamma$ satisfy \begin{gather} \gamma(xy,\tau)=\gamma(x,\tau)y+x\gamma(y,\tau),\label{pre1}\\ \gamma(x,y\tau )=y\gamma(x,\tau)+\{x,y\}\tau.\label{pre2} \end{gather} If ${\rm d}_\bullet$ is a derivation up to order $O(\lambda^2),$ then $\gamma$ should also satisfy \begin{equation}\label{P-C} {\rm d}\{x,y\}=\gamma(x,{\rm d} y)-\gamma(y,{\rm d} x). \end{equation} \begin{definition} Any map $\gamma:C^\infty (M)\otimes\Omega^1(M)\to\Omega^1(M)$ satisfying (\ref{pre1}) and (\ref{pre2}) is defined to be a \textit{preconnection} on $M$. A preconnection $\gamma$ is said to be \textit{Poisson-compatible} if in addition (\ref{P-C}) is also satisfied, where ${\rm d}: C^\infty(M)\to \Omega^1(M)$ is the usual exterior derivative. \end{definition} In view of the properties of a preconnection, one can write ${\nabla}_{\hat{x}}\tau=\gamma(x,\tau)$ for any $\tau\in \Omega^1(M)$; the map ${\nabla}_{\hat{x}}:\Omega^1(M)\to\Omega^1(M)$ can then be thought of as a usual covariant derivative along the Hamiltonian vector field $\hat{x}=\{x,-\}$ associated to $x\in C^\infty(M)$, satisfying ${\nabla}_{\hat{x}}(y\tau)=y{\nabla}_{\hat{x}}\tau+\hat{x}(y)\tau,$ which corresponds to (\ref{pre1}) and (\ref{pre2}). Thus a preconnection $\gamma$ can be viewed as a `partially defined connection' on Hamiltonian vector fields. From the analysis above, we see that a preconnection controls the noncommutativity of functions and $1$-forms, and thus plays a vital role in deforming a differential graded algebra $\Omega(M)$ at lowest order, parallel to the Poisson bracket for $C^\infty(M)$ at lowest order. \subsection{Poisson-Lie groups and Lie bialgebras} In this paper, we mainly work over a Poisson-Lie group $G$ and its Lie algebra ${\mathfrak{g}}$.
By definition, the Poisson bracket $\{\ ,\ \}:C^\infty(G)\otimes C^\infty(G)\to C^\infty(G)$ is determined uniquely by a so-called Poisson bivector $P=\omega{}^{(1)}\otimes\omega{}^{(2)}$, i.e., $\{f,g\}=P({\rm d} f,{\rm d} g)=\omega{}^{(1)}({\rm d} f)\omega{}^{(2)}({\rm d} g).$ Then ${\mathfrak{g}}$ is a Lie bialgebra with Lie cobracket $\delta:{\mathfrak{g}}\to{\mathfrak{g}}\otimes{\mathfrak{g}}$ given by $\delta(x)=\frac{{\rm d}}{{\rm d} t}\omega{}^{(1)}(g)g^{-1}\otimes \omega{}^{(2)}(g)g^{-1}|_{t=0}$ where $g=\exp tx \in G$ for any $x\in{\mathfrak{g}}.$ See~\cite{Ma:book} for more details. The map $\delta$ is a Lie 1-cocycle with respect to the adjoint action, and extends to group 1-cocycles $D(g)=(R_{g^{-1}})_*P(g)$ with respect to the left adjoint action and $D^\vee (g)=(L_{g^{-1}})_*P(g)$ with respect to the right adjoint action. Here $D^\vee(g)={\rm Ad}_{g^{-1}}D(g)$ so these are equivalent. We recall for later that the left group cocycle property here is \[ D(uv)=D(u)+{\rm Ad}_u(D(v)),\quad \forall u,v\in G,\quad D(e)=0.\] Given $\delta,$ we can recover $D$ as the unique solution to \begin{equation*} {\rm d} D(\tilde{x})(g)={\rm Ad}_g(\delta x),\quad D(e)=0, \end{equation*} where $\tilde{x}$ is the left-invariant vector field corresponding to $x\in{\mathfrak{g}}$. We then recover the Poisson bracket by $P(g)=R_{g*}(D(g))$ for all $g\in G$. For convenience, we recall the notion of Lie crossed module~\cite{Ma:blie} of a Lie bialgebra. \begin{definition} Let ${\mathfrak{g}}$ be a Lie bialgebra. 
A \textit{left ${\mathfrak{g}}$-crossed module} $(V,{\triangleright},\alpha)$ is a left ${\mathfrak{g}}$-module $(V,{\triangleright})$ that admits a left ${\mathfrak{g}}$-coaction $\alpha:V\to{\mathfrak{g}}\otimes V$ such that \begin{equation*} \alpha(x{\triangleright} v)=([x,\ ]\otimes{\rm id}+{\rm id}\otimes x{\triangleright})\alpha(v)+\delta(x){\triangleright} v \end{equation*} for any $x\in{\mathfrak{g}},v\in V.$ In the case where ${\mathfrak{g}}$ is finite-dimensional, the notion of a left ${\mathfrak{g}}$-crossed module is equivalent to a left ${\mathfrak{g}}$-module $(V,{\triangleright})$ that admits a left ${\mathfrak{g}}^{*op}$-action ${\triangleright}'$ satisfying \begin{equation} \label{liecross} \phi\o{\triangleright}' v\<\phi\t,x\>+x\o{\triangleright} v\<\phi,x\t\>=x{\triangleright}(\phi{\triangleright}' v)-\phi{\triangleright}'(x{\triangleright} v) \end{equation} for any $x\in{\mathfrak{g}},\,\phi\in{\mathfrak{g}}^*$ and $v\in V,$ where the left ${\mathfrak{g}}^{* op}$-action ${\triangleright}'$ corresponds to the left ${\mathfrak{g}}$-coaction $\alpha$ above via $\phi{\triangleright}' v=\<\phi,v{}^{(1)}\>v{}^{(2)}$ with $\alpha(v)=v{}^{(1)}\otimes v{}^{(2)}.$ We call a left ${\mathfrak{g}}$-module $V$ with a linear map ${\triangleright}':{\mathfrak{g}}^*\otimes V\to V$ (not necessarily an action) such that (\ref{liecross}) holds a \textit{left almost ${\mathfrak{g}}$-crossed module}.
\end{definition} \subsection{Left-covariant preconnections} Left covariance, right covariance and bicovariance of a differential calculus were studied in \cite[Section 3]{BegMa:semi} in terms of the preconnection $\gamma.$ Roughly speaking, $\gamma$ is said to be \textit{left-covariant} (\textit{right-covariant}, or \textit{bicovariant}) if the associated differential calculus on $(C^\infty(G),\bullet)$ is left-covariant (right-covariant, or bicovariant) in the usual sense over $(C^\infty (G),\bullet)$ up to $O(\lambda^2).$ To introduce our results, we explain the notation used in \cite{BegMa:semi} and also give a short review of the results on Poisson-Lie groups there. Firstly, there is a one-to-one correspondence between 1-forms $\Omega^1(G)$ and $C^\infty(G,{\mathfrak{g}}^*)$, the set of smooth sections of the trivial ${\mathfrak{g}}^*$-bundle. For any 1-form $\tau$, define $\tilde{\tau}\in C^\infty (G,{\mathfrak{g}}^*)$ by letting $\tilde{\tau}_g=L_g^*(\tau_g).$ Conversely, any $s\in C^{\infty}(G,{\mathfrak{g}}^*)$ defines a 1-form (denoted by $\hat{s}$) by setting $\hat{s}_g=L_{g^{-1}}^*(s(g)).$ The smoothness of vector fields and $1$-forms and this one-to-one relation can also be shown via the Maurer-Cartan form. In particular, we know ${\rm d} a\in\Omega^1(G)$, $\widetilde{{\rm d} a}\in C^\infty(G,{\mathfrak{g}}^*)$ for any $a\in C^{\infty}(G).$ Denote $\widetilde{{\rm d} a}$ by $\hat{L}_a$; then $\<\hat{L}_a(g),v\>=\<\widetilde{{\rm d} a}(g),v\>=\<L_g^*(({\rm d} a)_g),v\>=\<({\rm d} a)_g,{(L_g)}_*v\>={(L_g)}_*(v) a,$ which is the directional derivative of $a$ with respect to $v\in{\mathfrak{g}}$ at $g$.
With the above notation, a preconnection $\gamma$ can now be rewritten on ${\mathfrak{g}}^*$-valued functions as $\tilde{\gamma}:C^\infty(G)\times C^\infty(G,{\mathfrak{g}}^*)\to C^\infty(G,{\mathfrak{g}}^*)$ by letting $\tilde{\gamma}(a,\tilde{\tau})=\widetilde{\gamma(a,\tau)}.$ Note that for any $\phi,\psi\in{\mathfrak{g}}^*$ and $g\in G,$ there exist $a\in C^\infty(G)$, $s\in C^\infty(G,{\mathfrak{g}}^*)$ such that $\hat{L}_a(g)=\phi$ and $s(g)=\psi.$ One can define a map $\widetilde{\Xi}:G\times{\mathfrak{g}}^*\times{\mathfrak{g}}^*\to{\mathfrak{g}}^*$ by \begin{equation*} \tilde{\gamma}(a,s)(g)=\{a,s\}(g)+\widetilde{\Xi}(g,\hat{L}_a(g),s(g)). \end{equation*} For brevity, the notation for the Poisson bracket is extended to include ${\mathfrak{g}}^*$-valued functions by $\{a,s\}=\omega{}^{(1)}({\rm d} a){\mathcal{L}}_{\omega{}^{(2)}}(\hat{s}),$ where ${\mathcal{L}}$ means Lie derivation. It was shown in \cite[Proposition 4.5]{BegMa:semi} that a preconnection $\gamma$ or $\tilde{\gamma}$ is left-covariant if and only if $\widetilde{\Xi}(gh,\phi,\psi)=\widetilde{\Xi}(h,\phi,\psi)$ for any $g,h\in G,\,\phi,\psi\in{\mathfrak{g}}^*.$ Hence for a left-covariant preconnection $\tilde{\gamma},$ the map $\widetilde{\Xi}$ defines a map $\Xi:{\mathfrak{g}}^*\otimes{\mathfrak{g}}^*\to{\mathfrak{g}}^*$ by $\Xi(\phi,\psi)=\widetilde{\Xi}(e,\phi,\psi)$ and \begin{equation}\label{preconnection-xi} \tilde{\gamma}(a,s)(g)=\{a,s\}(g)+\Xi(\hat{L}_a(g),s(g)). \end{equation} Conversely, given $\Xi:{\mathfrak{g}}^*\otimes{\mathfrak{g}}^*\to{\mathfrak{g}}^*$ and defining $\widetilde{\gamma}$ by the formula (\ref{preconnection-xi}), one can show that the corresponding $\gamma$ is a left-covariant preconnection. In addition, a left-covariant preconnection $\tilde{\gamma}$ is Poisson-compatible if and only if \begin{equation}\label{compatible} \Xi(\phi,\psi)-\Xi(\psi,\phi)=[\phi,\psi]_{{\mathfrak{g}}^*} \end{equation} for any $\phi,\psi\in {\mathfrak{g}}^*$ \cite[Proposition 4.6]{BegMa:semi}.
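To fix ideas, here is a minimal $2$-dimensional illustration; the particular choice of $\Xi$ below is ours and is not tied to any specific quantum group. Suppose ${\mathfrak{g}}^*$ has basis $f^1,f^2$ with $[f^1,f^2]_{{\mathfrak{g}}^*}=f^2$, and define $\Xi$ on this basis by \begin{equation*} \Xi(f^1,f^2)=f^2,\qquad \Xi(f^1,f^1)=\Xi(f^2,f^1)=\Xi(f^2,f^2)=0. \end{equation*} Then $\Xi(f^1,f^2)-\Xi(f^2,f^1)=f^2=[f^1,f^2]_{{\mathfrak{g}}^*}$, so (\ref{compatible}) holds and the preconnection defined by (\ref{preconnection-xi}) is Poisson-compatible as well as left-covariant.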
Based on these results, we can write down a formula for the preconnection $\gamma$ in coordinates. Let $\{e_i\}$ be a basis of ${\mathfrak{g}}$ and $\{f^i\}$ the dual basis of ${\mathfrak{g}}^*.$ Denote by $\{\omega^i\}$ the basis of left-invariant $1$-forms that is dual to $\{\partial_i\}$, the left-invariant vector fields of $G$ generated by $\{e_i\}.$ Then the Maurer-Cartan form is \[\omega=\sum_i\omega^i e_i\ \in \Omega^1(G,{\mathfrak{g}}).\] For any $\eta=\sum_i\eta_i\omega^i\in\Omega^1(G)$ with $\eta_i\in C^\infty(G),$ we know $\eta$ corresponds to $\widetilde{\eta}=\sum_i\eta_i f^i\in C^\infty(G,{\mathfrak{g}}^*).$ On the other hand, any $s=\sum_i s_i f^i\in C^\infty(G,{\mathfrak{g}}^*)$ with $s_i\in C^\infty(G)$ corresponds to $\widehat{s}=\sum_i s_i\omega^i\in \Omega^1(G).$ In particular, $\widetilde{{\rm d} a}=\sum_i (\partial_i a) f^i.$ For any $a\in C^\infty(G)$ and $\tau=\sum_i\tau_i \omega^i\in \Omega^1(G),$ we know $\{a,\widetilde{\tau}\}=\sum_i\{a,\tau_i\}f^i$ and $ \Xi(\widetilde{{\rm d} a}(g), \widetilde{\tau}(g))=\Xi(\sum_i (\partial_i a)(g) f^i, \sum_j \tau_j(g) f^j) =\sum_{i,j} (\partial_i a)(g) \tau_j (g) \Xi (f^i,f^j) =\sum_{i,j,k} (\partial_i a)(g) \tau_j (g) \<\Xi (f^i,f^j), e_k\>f^k, $ namely \begin{align*} \widetilde{\gamma}(a,\widetilde{\tau})&=\sum_k \left(\{a,\tau_k\}+\sum_{i,j}(\partial_i a) \tau_j \<\Xi(f^i,f^j),e_k\>\right) f^k.\nonumber \end{align*} If we write $\Xi^{ij}_k=\<\Xi(f^i,f^j),e_k\>$ (or $\Xi(f^i,f^j)=\sum_k \Xi^{ij}_k f^k$) for any $i,j,k,$ then we have \begin{equation}\label{preconnection-ijk} \gamma(a,\tau)=\sum_k \left(\{a,\tau_k\}+\sum_{i,j} \Xi^{ij}_k(\partial_i a) \tau_j\right) \omega^k. \end{equation} In particular, we have the more handy formula \begin{equation}\label{preconnection-ijk-1} \gamma(a,\omega^j)=\sum_{i,k}(\partial_i a)\<\Xi(f^i,f^j),e_k\>\omega^k=\sum_{i,k}\Xi^{ij}_k(\partial_i a)\omega^k,\quad \forall\,j.
\end{equation} \section{Bicovariant preconnections} At the Poisson-Lie group level, it was shown in~\cite[Theorem 4.14]{BegMa:semi} that $\tilde{\gamma}$ is bicovariant if and only if \begin{equation}\label{G-cross} \Xi(\phi,\psi)-{\rm Ad}^*_{g^{-1}}\Xi({\rm Ad}^*_g\phi,{\rm Ad}^*_g\psi)=\phi(g^{-1}\omega{}^{(1)}(g)){\rm ad}^*_{g^{-1}\omega{}^{(2)}(g)}\psi, \end{equation} for all $g\in G$ and $\phi,\psi\in{\mathfrak{g}}^*.$ Working at the Lie algebra level of a connected and simply connected Poisson-Lie group we now have a new result: \begin{theorem}\label{almostcross} Let $G$ be a connected and simply connected Poisson-Lie group. The left-covariant preconnection $\tilde{\gamma}$ on $G$ determined by $\Xi:{\mathfrak{g}}^*\otimes{\mathfrak{g}}^*\to{\mathfrak{g}}^*$ is bicovariant if and only if \begin{equation}\label{g-crossv} {\rm ad}^*_x\Xi(\phi,\psi)-\Xi({\rm ad}^*_x\phi,\psi)-\Xi(\phi,{\rm ad}^*_x\psi)=\phi(x\o){\rm ad}^*_{x\t}(\psi), \end{equation} where $\delta(x)=x\o\otimes x\t,$ namely, $({\rm ad}^*,-\Xi)$ makes ${\mathfrak{g}}^*$ into a left almost ${\mathfrak{g}}$-crossed module. Also, the condition~(\ref{g-crossv}) is equivalent to \begin{equation}\label{bi} \delta_{{\mathfrak{g}}^*}\Xi(\phi,\psi)-\Xi(\phi\o,\psi)\otimes\phi\t-\Xi(\phi,\psi\o)\otimes\psi\t=\psi\o\otimes [\phi,\psi\t]_{{\mathfrak{g}}^*}. \end{equation} \end{theorem} \proof We first show the `only if' part. 
To get the corresponding formula at the Lie algebra level from (\ref{G-cross}), we substitute $g$ with $\exp tx$ and differentiate at $t=0.$ Notice that $\frac{{\rm d}}{{\rm d} t}{\rm Ad}^*(\exp tx)|_{t=0}={\rm ad}^*_x$ and ${\rm Ad}^*(\exp tx)|_{t=0}={\rm id}_{{\mathfrak{g}}^*}.$ We get \begin{equation*} {\rm ad}^*_x\Xi(\phi,\psi)-\Xi({\rm ad}^*_x\phi,\psi)-\Xi(\phi,{\rm ad}^*_x\psi)=\phi(x\o){\rm ad}^*_{x\t}(\psi), \end{equation*} as displayed, where $\delta(x)=x\o\otimes x\t=\frac{{\rm d}}{{\rm d} t}g^{-1}P(g)|_{t=0}$ when $g=\exp tx.$ Now denote ${\rm ad}^*_x$ by $x{\triangleright}$ and $-\Xi(\phi,\ )$ by $\phi{\triangleright}$, the left ${\mathfrak{g}}^{*op}$-action; then the left hand side of (\ref{g-crossv}) becomes $-x{\triangleright}(\phi{\triangleright}\psi)+\phi\o{\triangleright}\psi\<\phi\t,x\>+\phi{\triangleright}(x{\triangleright}\psi),$ while the right hand side is $\phi(x\o){\rm ad}^*_{x\t}(\psi)=-\phi(x\t){\rm ad}^*_{x\o}(\psi)=-x\o{\triangleright}\psi\<\phi,x\t\>$. Hence we obtain \begin{equation*} \phi\o{\triangleright}\psi\<\phi\t,x\>+x\o{\triangleright}\psi\<\phi,x\t\>=x{\triangleright}(\phi{\triangleright}\psi)-\phi{\triangleright}(x{\triangleright}\psi) \end{equation*} from (\ref{g-crossv}). This means ${\mathfrak{g}}^*$ is a left almost ${\mathfrak{g}}$-crossed module under $({\rm ad}^*,-\Xi).$ Conversely, we can exponentiate $x$ near zero and solve the ordinary differential equation (\ref{g-crossv}) near $g=e$. It has a unique solution (\ref{G-cross}) near the identity. Since the Lie group $G$ is connected and simply connected, one can show (\ref{G-cross}) is valid on the whole group.
Notice that ${\rm ad}^*_x\phi=\phi\o \<\phi\t,x\>$ for any $x\in{\mathfrak{g}},\,\phi\in{\mathfrak{g}}^*,$ so the left hand side of (\ref{g-crossv}) becomes $-\Xi(\phi,\psi)\o\<\Xi(\phi,\psi)\t,x\>-\Xi(\phi\o,\psi)\<\phi\t,x\>-\Xi(\phi,\psi\o)\<\psi\t,x\>,$ while the right hand side of (\ref{g-crossv}) is $\phi(x\o)\psi\o \<\psi\t,x\t\>=\psi\o\<[\phi,\psi\t]_{{\mathfrak{g}}^*},x\>,$ thus (\ref{g-crossv}) is equivalent to (\ref{bi}) by using the non-degenerate pairing between ${\mathfrak{g}}$ and ${\mathfrak{g}}^*.$ \endproof \section{Flat preconnections} As in~\cite{BegMa:semi}, the \textit{curvature} of a preconnection ${\gamma}$ is defined on Hamiltonian vector fields $\hat{x}=\{x,-\}$ by \begin{equation*} R(\hat{x},\hat{y})={\nabla}_{\hat{x}}{\nabla}_{\hat{y}}-{\nabla}_{\hat{y}}{\nabla}_{\hat{x}}-{\nabla}_{\widehat{\{x,y\}}}, \end{equation*} where $\nabla_{\hat{x}}\tau=\gamma(x,\tau)$ for any $\tau\in\Omega^1(G).$ Note here $[\hat{x},\hat{y}]=\widehat{\{x,y\}}.$ The curvature of a preconnection reflects the obstruction to the Jacobi identity on functions $x,y$ and a $1$-form $\tau$ up to third order, namely \[[x,[y,\tau]_\bullet]_\bullet+[y,[\tau,x]_\bullet]_\bullet+[\tau,[x,y]_\bullet]_\bullet=\lambda^2 R(\hat{x},\hat{y})(\tau)+O(\lambda^3).\] This is the deformation-theoretic meaning of curvature in this context.
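As a simple check of the curvature formula in the easiest case (an illustration of ours on a plain Poisson manifold rather than a Poisson-Lie group), take $M=\mathbb R^2$ with coordinates $q,p$, the canonical Poisson bracket $\{x,y\}=(\partial_q x)(\partial_p y)-(\partial_p x)(\partial_q y)$, and the preconnection acting componentwise, $\gamma(x,\tau_1{\rm d} q+\tau_2{\rm d} p)=\{x,\tau_1\}{\rm d} q+\{x,\tau_2\}{\rm d} p$. One verifies (\ref{pre1})--(\ref{P-C}) directly, using for (\ref{P-C}) that the Poisson bivector here has constant coefficients. The curvature then acts on each component $\tau_i$ by \begin{equation*} \{x,\{y,\tau_i\}\}-\{y,\{x,\tau_i\}\}-\{\{x,y\},\tau_i\}=0, \end{equation*} which vanishes by the Jacobi identity, so this preconnection has zero curvature.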
We say a preconnection is \textit{flat} if its curvature is zero on all Hamiltonian vector fields, namely \begin{equation*} \gamma(x,\gamma(y,\tau))-\gamma(y,\gamma(x,\tau))-\gamma(\{x,y\},\tau)=0, \end{equation*} for all $x,y\in C^\infty(G),\,\tau\in\Omega^1(G).$ This is equivalent to \begin{equation}\label{flat} \tilde{\gamma}(x,\tilde{\gamma}(y,s))-\tilde{\gamma}(y,\tilde{\gamma}(x,s))-\tilde{\gamma}(\{x,y\},s)=0 \end{equation} for all $x,y\in C^\infty(G)$ and $s\in C^\infty(G,{\mathfrak{g}}^*).$ \begin{theorem}\label{zero-curvature} A Poisson-compatible left-covariant preconnection $\gamma$ on a Poisson-Lie group $G$ with Lie algebra ${\mathfrak{g}}$ is flat if and only if the corresponding map $-\Xi$ is a right ${\mathfrak{g}}^*$-action (or left ${\mathfrak{g}}^{*op}$-action) on ${\mathfrak{g}}^*$, namely \begin{equation}\label{leftaction} \Xi([\phi,\psi]_{{\mathfrak{g}}^*},\zeta)=\Xi(\phi,\Xi(\psi,\zeta))-\Xi(\psi,\Xi(\phi,\zeta)), \quad\forall\ \phi,\psi,\zeta\in{\mathfrak{g}}^*. \end{equation} In this case, $\gamma$ is bicovariant if and only if the left almost ${\mathfrak{g}}$-crossed module ${\mathfrak{g}}^*$ given by $({\rm ad}^*,-\Xi)$ in Theorem~\ref{almostcross} is actually a left ${\mathfrak{g}}$-crossed module. \end{theorem} \proof Let $\gamma$ be a Poisson-compatible left-covariant preconnection on a Poisson-Lie group $G$.
Firstly, we can rewrite formula (\ref{flat}) in terms of $\Xi:{\mathfrak{g}}^*\otimes{\mathfrak{g}}^*\to{\mathfrak{g}}^*.$ By definition, the three terms in (\ref{flat}) become \begin{gather*} \tilde{\gamma}(x,\tilde{\gamma}(y,s))(g)=\{x,\{y,s\}\}(g)+\{x,\Xi(\hat{L}_y(g),s(g))\}+\Xi(\hat{L}_x(g),\{y,s\}(g))\\ +\,\Xi(\hat{L}_x(g),\Xi(\hat{L}_y(g),s(g))),\qquad\qquad\quad\\ \tilde{\gamma}(y,\tilde{\gamma}(x,s))(g)=\{y,\{x,s\}\}(g)+\{y,\Xi(\hat{L}_x(g),s(g))\}+\Xi(\hat{L}_y(g),\{x,s\}(g))\\ +\,\Xi(\hat{L}_y(g),\Xi(\hat{L}_x(g),s(g))),\qquad\qquad\quad\\ \end{gather*} and $$\tilde{\gamma}(\{x,y\},s)(g)=\{\{x,y\},s\}(g)+\Xi(\hat{L}_{\{x,y\}}(g),s(g)).$$ Canceling terms involving the Jacobi identity of the Poisson bracket, formula (\ref{flat}) becomes \begin{gather*} \{x,\Xi(\hat{L}_y(g),s(g))\}+\Xi(\hat{L}_x(g),\{y,s\}(g))+\Xi(\hat{L}_x(g),\Xi(\hat{L}_y(g),s(g)))\\ -\{y,\Xi(\hat{L}_x(g),s(g))\}-\Xi(\hat{L}_y(g),\{x,s\}(g))-\Xi(\hat{L}_y(g),\Xi(\hat{L}_x(g),s(g)))\\ =\Xi(\hat{L}_{\{x,y\}}(g),s(g)). \end{gather*} Since $\gamma$ is Poisson-compatible, this implies \begin{align*} \hat{L}_{\{x,y\}}(g)&=\tilde{\gamma}(x,\hat{L}_y)(g)-\tilde{\gamma}(y,\hat{L}_x)(g)\\ &=\{x,\hat{L}_y\}(g)+\Xi(\hat{L}_x(g),\hat{L}_y(g))-\{y,\hat{L}_x\}(g)-\Xi(\hat{L}_y(g),\hat{L}_x(g)), \end{align*} and $\{x,\Xi(\hat{L}_y(g),s(g))\}=\Xi(\{x,\hat{L}_y\}(g),s(g))+\Xi(\hat{L}_y(g),\{x,s\}(g))$ by the derivation property of $\{x,-\}.$ Then (\ref{flat}) is equivalent to \begin{equation}\label{flat1} \begin{split} \Xi(\hat{L}_x(g),\Xi(\hat{L}_y(g),s(g)))-\Xi(\hat{L}_y(g),\Xi(\hat{L}_x(g),s(g)))\\ =\Xi(\Xi(\hat{L}_x(g),\hat{L}_y(g))-\Xi(\hat{L}_y(g),\hat{L}_x(g)),s(g)).
\end{split} \end{equation} for all $x,y\in C^\infty(G)$ and $s\in C^\infty(G,{\mathfrak{g}}^*).$ Now if $\gamma$ is flat, we can evaluate this equation at the identity $e$ of $G$: any $\phi,\psi,\zeta\in{\mathfrak{g}}^*$ can be written as $\phi=\hat{L}_x(e),\,\psi=\hat{L}_y(e)$ and $\zeta=s(e)$ for some $x,y\in C^\infty(G),\,s\in C^\infty(G,{\mathfrak{g}}^*).$ Then (\ref{flat1}) becomes \begin{equation*} \Xi(\Xi(\phi,\psi)-\Xi(\psi,\phi),\zeta)=\Xi(\phi,\Xi(\psi,\zeta))-\Xi(\psi,\Xi(\phi,\zeta)). \end{equation*} Using compatibility again, we obtain (\ref{leftaction}) as displayed. This also shows that $\Xi$ is a left ${\mathfrak{g}}^*$-action on itself, or equivalently that ${\mathfrak{g}}^*$ is a left ${\mathfrak{g}}^{*op}$-module via $-\Xi.$ Conversely, suppose ${\mathfrak{g}}^*$ is a left ${\mathfrak{g}}^{*op}$-module via ${\triangleright}:{\mathfrak{g}}^*\otimes{\mathfrak{g}}^*\to{\mathfrak{g}}^*$ such that $-\phi{\triangleright}\psi+\psi{\triangleright}\phi=[\phi,\psi]_{{\mathfrak{g}}^*},$ i.e., such that (\ref{leftaction}) holds. This implies (\ref{flat1}) for any $x,y\in C^\infty(G),\,s\in C^\infty(G,{\mathfrak{g}}^*),$ which is equivalent to (\ref{flat}). The rest of the proof is immediate from Theorem~\ref{almostcross} when the Poisson-Lie group is connected and simply connected. \endproof \subsection{Preconnections and pre-Lie algebras} Now we recall the notion of a \textit{left pre-Lie algebra} (also known as a \textit{Vinberg algebra} or \textit{left-symmetric algebra}). An algebra $(A,\circ)$, not necessarily associative, with product $\circ:A\otimes A\to A$ is called a \textit{(left) pre-Lie algebra} if the identity \begin{equation}\label{left-symmetric} (x\circ y)\circ z-(y\circ x)\circ z=x\circ(y\circ z)-y\circ(x\circ z)
\end{equation} holds for all $x,y,z\in A.$ By definition, every associative algebra is a pre-Lie algebra, and every pre-Lie algebra $(A,\circ)$ admits a Lie algebra structure (denoted by ${\mathfrak{g}}_A$) with Lie bracket given by \begin{equation} [x,y]_{{\mathfrak{g}}_A}:=x\circ y-y\circ x \end{equation} for all $x,y\in A$. The Jacobi identity of $[\ ,\ ]_{{\mathfrak{g}}_A}$ holds automatically due to (\ref{left-symmetric}). In view of this, we can rephrase Theorems~\ref{almostcross} and \ref{zero-curvature} as follows. \begin{corollary}\label{preliecorol} A connected and simply connected Poisson-Lie group $G$ with Lie algebra ${\mathfrak{g}}$ admits a Poisson-compatible left-covariant flat preconnection if and only if $({\mathfrak{g}}^*,[\ ,\ ]_{{\mathfrak{g}}^*})$ admits a pre-Lie structure $\Xi$. Moreover, such a left-covariant preconnection is bicovariant if and only if $\Xi$ in addition obeys \begin{equation}\label{Xi-bi} \begin{split} \delta_{{\mathfrak{g}}^*}\Xi(\phi,\psi)-\Xi(\phi,\psi\o)\otimes\psi\t-\psi\o\otimes\Xi(\phi,\psi\t)\\ =\Xi(\phi\o,\psi)\otimes\phi\t-\psi\o\otimes\Xi(\psi\t,\phi) \end{split} \end{equation} for any $\phi,\psi\in{\mathfrak{g}}^*.$ \end{corollary} \proof The first part is shown by (\ref{compatible}) and (\ref{leftaction}). For the bicovariant case, the additional condition on $\Xi$ is (\ref{bi}). Using compatibility and rearranging terms, we see that (\ref{bi}) is equivalent to (\ref{Xi-bi}) as displayed. \endproof \begin{example}\label{kk} Let ${\mathfrak{m}}$ be a finite-dimensional Lie algebra and $G={\mathfrak{m}}^*$ be an abelian Poisson-Lie group with its Kirillov-Kostant Poisson-Lie group structure $\{x,y\}=[x,y]$ for all $x,y\in{\mathfrak{m}}\subset C^\infty({\mathfrak{m}}^*)$ or $S({\mathfrak{m}})$ in an algebraic context. By Corollary~\ref{preliecorol}, this admits a Poisson-compatible left-covariant flat preconnection if and only if ${\mathfrak{m}}$ admits a pre-Lie algebra structure $\circ$.
This preconnection is always bicovariant, as (\ref{Xi-bi}) vanishes when the Lie algebra ${\mathfrak{m}}^*$ is abelian ($\delta_{\mathfrak{m}}=0$). Then (\ref{preconnection-ijk}) with $\Xi=\circ$ implies \[ \nabla_{\hat x}{\rm d} y= {\rm d} (x\circ y),\quad \forall\,x,y\in {\mathfrak{m}}.\] (Note that $\widetilde{{\rm d} y}$ is a constant-valued function in $C^\infty(G,{\mathfrak{m}})$, so $\{x,\widetilde{{\rm d} y}\}\equiv0$ and $\tilde{\gamma}(x,\widetilde{{\rm d} y})=\Xi(x,y)$.) \end{example} In fact the algebra and its calculus in this example work to all orders. Thus the quantisation of $C^\infty({\mathfrak{m}}^*)$ is $U_\lambda({\mathfrak{m}})$ regarded as a noncommutative coordinate algebra with relations $x y - y x= \lambda [x,y]$. If ${\mathfrak{m}}$ has an underlying pre-Lie structure then the above results lead to the relations \[ [x, {\rm d} y]=\lambda {\rm d} (x\circ y) ,\quad \forall x,y\in {\mathfrak{m}}\] and one can check that this works exactly, not only to order $\lambda$, precisely as a consequence of the pre-Lie algebra axiom. The full result here is: \begin{proposition}\label{envel} Let ${\mathfrak{m}}$ be a finite dimensional Lie algebra over a field $k$ of characteristic zero. Then the enveloping algebra $U({\mathfrak{m}})$ admits a connected bicovariant differential graded algebra with left-invariant $1$-forms $\Lambda^1$ of classical dimension (namely, $\dim \Lambda^1=\dim {\mathfrak{m}}$) if and only if ${\mathfrak{m}}$ admits a pre-Lie structure. \end{proposition} \proof We refer to \cite{MT,Wor} for the formal definition of a first order differential calculus over a Hopf algebra. A calculus is said to be connected if $\ker{\rm d} =k\{1\}$.
It is clear from~\cite[Propositions 2.11 and 4.7]{MT} that such differential graded algebras on $U({\mathfrak{m}})$ with left-invariant $1$-forms ${\mathfrak{m}}$ correspond to $1$-cocycles in $Z_{\triangleleft}^1({\mathfrak{m}},{\mathfrak{m}})$ that extend to a surjective right ${\mathfrak{m}}$-module map $\omega: U({\mathfrak{m}})^+\to{\mathfrak{m}}$ with $\ker {\rm d} =k\{1\},$ where the derivation ${\rm d}: U({\mathfrak{m}})\to \Omega^1(U({\mathfrak{m}}))=U({\mathfrak{m}})\otimes{\mathfrak{m}}$ is given by ${\rm d} a= a\o\otimes\omega(\pi(a\t))$ for any $a\in U({\mathfrak{m}}).$ Suppose that $\omega$ is such a map and take $\zeta=\omega|_{\mathfrak{m}}\in Z^1_{\triangleleft}({\mathfrak{m}},{\mathfrak{m}}).$ For any $x\in{\mathfrak{m}}$ such that $\zeta(x)=0,$ we have ${\rm d} x=1\otimes \omega(x)=0;$ then $\ker{\rm d}=k\{1\}$ implies $x=0,$ so $\zeta$ is an injection and hence a bijection, as ${\mathfrak{m}}$ is finite dimensional. Now we can define a product $\circ:{\mathfrak{m}}\otimes{\mathfrak{m}}\to {\mathfrak{m}}$ by $x\circ y=-\zeta^{-1}(\zeta(y){\triangleleft} x);$ this makes ${\mathfrak{m}}$ into a left pre-Lie algebra, since $[x,y]\circ z=-\zeta^{-1}(\zeta(z){\triangleleft}[x,y])=\zeta^{-1}((\zeta(z){\triangleleft} y){\triangleleft} x)-\zeta^{-1}((\zeta(z){\triangleleft} x){\triangleleft} y)=x\circ (y\circ z)-y\circ(x\circ z).$ Conversely, if ${\mathfrak{m}}$ admits a left pre-Lie structure $\circ,$ then $y{\triangleleft} x=-x\circ y$ makes ${\mathfrak{m}}$ into a right ${\mathfrak{m}}$-module and the identity map $\zeta={\rm id}_{\mathfrak{m}}$ becomes a bijective $1$-cocycle in $Z^1_{\triangleleft}({\mathfrak{m}},{\mathfrak{m}}).$ The extended map $\omega:U({\mathfrak{m}})^+\to{\mathfrak{m}}$ and the derivation ${\rm d}: U({\mathfrak{m}})\to U({\mathfrak{m}})\otimes{\mathfrak{m}}$ are given by $\omega(x_1 x_2 \cdots x_n)=((x_1{\triangleleft} x_2)\cdots {\triangleleft} x_n)$ for any $x_1x_2\cdots x_n\in U({\mathfrak{m}})^+$ and \[{\rm d} (x_1 x_2 \cdots
x_n)=\sum_{p=0}^{n-1}\sum_{\sigma\in Sh(p,n-p)}x_{\sigma(1)}\cdots x_{\sigma(p)}\otimes\omega (x_{\sigma(p+1)}\cdots x_{\sigma(n)})\] for any $x_1x_2\cdots x_n\in U({\mathfrak{m}})$ respectively. We need to show that $\ker{\rm d} =k\{1\}.$ On one hand, $k\{1\}\subseteq \ker{\rm d},$ as ${\rm d}(1)=0.$ On the other hand, denote by $U_n({\mathfrak{m}})$ the subspace of $U({\mathfrak{m}})$ generated by the products $x_1x_2\cdots x_p,$ where $x_1,\dots,x_p\in {\mathfrak{m}}$ and $p\le n.$ Clearly, $U_0({\mathfrak{m}})=k\{1\},\, U_1({\mathfrak{m}})=k\{1\}\oplus{\mathfrak{m}},\, U_p({\mathfrak{m}})U_q({\mathfrak{m}})\subseteq U_{p+q}({\mathfrak{m}})$ and thus $\left(U_n({\mathfrak{m}})\right)_{n\ge 0}$ is a filtration of $U({\mathfrak{m}}).$ In order to show $\ker{\rm d}\subseteq k\{1\}$, it suffices to show that the intersection \[\ker{\rm d} \cap U_n({\mathfrak{m}})=k\{1\} \text{ for any integer } n\ge 0.\] We proceed by induction on $n\ge 0$. It is obvious for $n=0,$ and true for $n=1,$ as for any $v=\sum_i x_i\in \ker{\rm d}\cap{\mathfrak{m}},$ $0={\rm d} v=\sum_i 1\otimes \omega(x_i)=\sum_i 1\otimes x_i$ implies $v=\sum_i x_i=0.$ Suppose that $\ker{\rm d}\cap U_{n-1}({\mathfrak{m}})=k\{1\}$ for $n\ge 2.$ For any $v\in \ker{\rm d}\cap U_n({\mathfrak{m}}),$ without loss of generality, we can write $v=\sum_i x_{i_{1}}x_{i_{2}}\cdots x_{i_{n}}+v',$ where $x_{i_{j}}\in {\mathfrak{m}}$ and $v'$ is an element in $U_{n-1}({\mathfrak{m}}).$ We have \begin{align*} {\rm d} v&=\sum_i 1\otimes \omega(x_{i_{1}}\cdots x_{i_{n}})+\sum_i\sum_{j=1}^n x_{i_{1}}\cdots x_{i_{(j-1)}}\widehat{x_{i_{j}}} x_{i_{(j+1)}}\cdots x_{i_{n}}\otimes x_{i_{j}}\\ &+ \sum_{i}\sum_{r=1}^{n-2}\sum_{\sigma\in Sh(r,n-r)} x_{i_{\sigma(1)}}\cdots x_{i_{\sigma(r)}}\otimes \omega (x_{i_{\sigma(r+1)}}\cdots x_{i_{\sigma(n)}})+{\rm d} v'.
\end{align*} We denote the elements $u_{i_{j}}:= x_{i_{1}}\cdots x_{i_{(j-1)}}\widehat{x_{i_{j}}} x_{i_{(j+1)}}\cdots x_{i_{n}}\in U_{n-1}({\mathfrak{m}})$ for any $i,\, 1\le j\le n.$ Apart from the term $\sum_i\sum_{j=1}^n u_{i_{j}}\otimes x_{i_j},$ all the summands in ${\rm d} v$ lie in $U_{n-2}({\mathfrak{m}})\otimes{\mathfrak{m}};$ thus $\sum_i\sum_{j=1}^n u_{i_{j}}\otimes x_{i_j}$ also lies in $ U_{n-2}({\mathfrak{m}})\otimes{\mathfrak{m}},$ as ${\rm d} v=0.$ This implies $\sum_i\sum_{j=1}^n u_{i_{j}}x_{i_j}=v''$ for some element $v''\in U_{n-1}({\mathfrak{m}}).$ Rearranging this and adding $(n-1)$ copies of $\sum_i u_{i_n}x_{i_n}=\sum_i x_{i_1}x_{i_2}\cdots x_{i_n}$ to both sides, we get $n\sum_i (x_{i_1}\cdots x_{i_n})=\sum_i\sum_{j=1}^{n-1} x_{i_1}\cdots x_{i_{(j-1)}}[x_{i_j}, x_{i_{j+1}}\cdots x_{i_n}]+v'',$ therefore \[v=\frac{1}{n}\sum_i\sum_{j=1}^{n-1} x_{i_1}\cdots x_{i_{(j-1)}}[x_{i_j}, x_{i_{j+1}}\cdots x_{i_n}]+\frac{1}{n} v''+v'\in U_{n-1}({\mathfrak{m}}).\] This shows that $v$ in fact lies in $\ker{\rm d}\cap U_{n-1}({\mathfrak{m}}),$ hence $v\in k\{1\}$ by the induction hypothesis. Therefore $\ker{\rm d}\cap U_n({\mathfrak{m}})=k\{1\}$ for any $n\ge 0,$ which finishes the proof. \endproof To make contact with real classical geometry in the rest of the paper, the standard approach in noncommutative geometry is to work over ${\mathbb{C}}$ with complexified differential forms and functions and to remember the `real form' by means of a $*$-involution. We recall that a differential graded algebra over ${\mathbb{C}}$ is called a \textit{$\ast$-DGA} if it is equipped with a conjugate-linear map $\ast:\Omega\to\Omega$ such that $\ast^2={\rm id},$ $ (\xi\wedge\eta)^*=(-1)^{|\xi||\eta|}\eta^*\wedge \xi^*$ and ${\rm d}(\xi^*)=({\rm d}\xi)^*$ for any $\xi,\eta\in\Omega.$ Let ${\mathfrak{m}}$ be a real pre-Lie algebra, i.e., there is a basis $\{e_i\}$ of ${\mathfrak{m}}$ with real structure coefficients. Then this is also a real form for ${\mathfrak{m}}$ as a Lie algebra.
In this case $e_i^*=e_i$ extends conjugate-linearly to an involution $*:{\mathfrak{m}} \to {\mathfrak{m}}$ which then makes $\Omega(U_\lambda({\mathfrak{m}}))$ a $*$-DGA if $\lambda^*=-\lambda,$ i.e., if $\lambda$ is imaginary. If we want $\lambda$ real then we should take $e_i^*=-e_i$. \begin{example}\label{2D} Let $\mathfrak{b}$ be the 2-dimensional complex nonabelian Lie algebra defined by $[x,t]=x.$ It admits~\cite{Bu} five families of mutually non-isomorphic pre-Lie algebra structures over $\mathbb{C}$, which are \begin{align*} \mathfrak{b}_{1,\alpha}:&\quad t\circ x=-x,\quad t\circ t=\alpha t;\\ \mathfrak{b}_{2,\beta\neq 0}:& \quad x\circ t=\beta x,\quad t\circ x=(\beta-1) x,\quad t\circ t=\beta t;\\ \mathfrak{b}_3:&\quad t\circ x=-x,\quad t\circ t=x-t;\\ \mathfrak{b}_4:&\quad x\circ x= t, \quad t\circ x=-x,\quad t\circ t=-2 t;\\ \mathfrak{b}_5:&\quad x\circ t =x,\quad t\circ t=x+t, \end{align*} where $\alpha,\beta\in \mathbb{C}.$ (Here $\mathfrak{b}_{1,0}\cong\mathfrak{b}_{2,0}$, so let $\beta\neq 0$.) Thus there are five families of bicovariant differential calculi over $U_\lambda(\mathfrak{b})$: \begin{align*} \Omega^1(U_\lambda(\mathfrak{b}_{1,\alpha})):&\quad [t,{\rm d} x]=-\lambda{\rm d} x,\quad [t,{\rm d} t]=\lambda \alpha {\rm d} t;\\ \Omega^1(U_\lambda(\mathfrak{b}_{2,\beta\neq 0})):& \quad [x,{\rm d} t]=\lambda \beta {\rm d} x,\quad [t,{\rm d} x]=\lambda(\beta-1){\rm d} x,\quad [t,{\rm d} t]=\lambda\beta{\rm d} t;\\ \Omega^1(U_\lambda(\mathfrak{b}_{3})):&\quad [t,{\rm d} x]=-\lambda{\rm d} x,\quad [t,{\rm d} t]=\lambda{\rm d} x-\lambda {\rm d} t;\\ \Omega^1(U_\lambda(\mathfrak{b}_{4})):&\quad [x,{\rm d} x]=\lambda{\rm d} t,\quad [t,{\rm d} x]=-\lambda {\rm d} x,\quad [t,{\rm d} t]=-2\lambda {\rm d} t;\\ \Omega^1(U_\lambda(\mathfrak{b}_{5})):&\quad [x,{\rm d} t]=\lambda {\rm d} x,\quad [t,{\rm d} t]=\lambda{\rm d} x+\lambda{\rm d} t.
\end{align*} All these examples are $*$-DGAs with $x^*=x$ and $t^*=t$ when $\lambda^*=-\lambda,$ as $\{x, t\}$ spans a real form of the relevant pre-Lie algebra. For this we also need $\alpha,\beta$ to be real. \end{example} \begin{example}\label{CqSU2} For $q\in{\mathbb{C}}, q\neq 0,$ we recall that the Hopf algebra ${\mathbb{C}}_q[SL_2]$ is, as an algebra, the quotient of the free algebra ${\mathbb{C}}\<a,b,c,d\>$ by the relations \begin{gather*} ba=qab,\quad ca=qac,\quad db=qbd,\quad dc=qcd,\quad bc=cb,\\ ad-da=(q^{-1}-q) bc,\quad ad-q^{-1}bc=1. \end{gather*} Usually, the generators $a,b,c,d$ are written as a single matrix $\begin{pmatrix} a & b\\ c & d \end{pmatrix}.$ The coproduct, counit and antipode of ${\mathbb{C}}_q[SL_2]$ are given by \[\Delta \begin{pmatrix} a & b\\ c & d \end{pmatrix} = \begin{pmatrix} a & b\\ c & d \end{pmatrix} \otimes \begin{pmatrix} a & b\\ c & d \end{pmatrix}, \quad \epsilon \begin{pmatrix} a & b\\ c & d \end{pmatrix} = \begin{pmatrix} 1 & 0\\ 0 & 1 \end{pmatrix}, \quad S \begin{pmatrix} a & b\\ c & d \end{pmatrix} = \begin{pmatrix} d & -qb\\ -q^{-1}c & a \end{pmatrix},\] where we understand $\Delta(a)=a\otimes a+b\otimes c, \ \epsilon(a)=1,\ S(a)=d,$ etc. By definition, the quantum group ${\mathbb{C}}_q[SU_2]$ is the Hopf algebra ${\mathbb{C}}_q[SL_2]$ with $q$ real and the $\ast$-structure \[ \begin{pmatrix} a^* & b^*\\ c^* & d^* \end{pmatrix}= \begin{pmatrix} d & -q^{-1}c\\ -qb & a \end{pmatrix}. \] We use the conventions of \cite{Ma:book} and refer there for the history, which is relevant both to~\cite{Wor} and to the Drinfeld theory~\cite{Dri}.
On ${\mathbb{C}}_q[SU_2],$ there is a connected left-covariant calculus $\Omega^1({\mathbb{C}}_q[SU_2])$ in \cite{Wor} with basis, in our conventions, \[\omega^0=d{\rm d} a-qb{\rm d} c,\quad \omega^+=d{\rm d} b-qb{\rm d} d,\quad \omega^-=qa{\rm d} c- c{\rm d} a\] of left-invariant $1$-forms, which is dual to the basis $\{\partial_0,\partial_\pm\}$ of left-invariant vector fields generated by the Chevalley basis $\{H,X_\pm\}$ of $su_2$ (so that $[H,X_\pm]=\pm2X_\pm$ and $[X_+,X_-]=H$). The first order calculus is generated by $\{\omega^0,\omega^\pm\}$ as a left module, while the right module structure is given by the bimodule relations \[\omega^0 f= q^{2|f|}f\omega^0,\quad \omega^{\pm}f=q^{|f|}f\omega^\pm\] for homogeneous $f$ of degree $|f|$, where $|a|=|c|=1,\ |b|=|d|=-1,$ and the exterior derivative is \begin{gather*} {\rm d} a=a\omega^0+q^{-1}b\omega^+,\quad {\rm d} b=-q^{-2}b \omega^0+a\omega^-,\\ {\rm d} c=c\omega^0+q^{-1}d\omega^+,\quad {\rm d} d=-q^{-2} d\omega^0+c\omega^-. \end{gather*} These extend to a differential graded algebra $\Omega({\mathbb{C}}_q[SU_2])$ that has the same dimensions as classically. Moreover, it is a $\ast$-DGA with \[\omega^{0*}=-\omega^0,\quad\omega^{+*}=-q^{-1} \omega^-,\quad \omega^{-*}=-q\omega^+.\] Since ${\mathbb{C}}_q[SU_2]$ and $\Omega({\mathbb{C}}_q[SU_2])$ are $q$-deformations, by Corollary~\ref{preliecorol} they must be the quantisation of some pre-Lie algebra structure on $su_2^*$, which we now compute. Let $q=e^{\frac{\imath\lambda}{2}}=1+\frac{\imath}{2}\lambda+O(\lambda^2)$ with $\lambda$ imaginary. The Poisson bracket from the algebra relations is \begin{gather*} \{a,b\}=-\frac{\imath}{2}ab,\quad \{a,c\}=-\frac{\imath}{2}ac,\quad \{a,d\}=-\imath bc,\quad \{b,c\}=0,\\ \{b,d\}=-\frac{\imath}{2}bd, \quad\{c,d\}=-\frac{\imath}{2}cd.
\end{gather*} The reader should not be alarmed by the $\imath$: this is a `complexified' Poisson bracket on $C^\infty(SU_2,{\mathbb{C}})$, and it becomes a real Poisson bracket on $C^\infty(SU_2,{\mathbb{R}})$ when we choose real-valued functions instead of the complex-valued functions $a,b,c,d$ here. As ${\rm d} x=\sum_i(\partial_i x)\omega^i,$ we know \[\partial_0 \begin{pmatrix} a & b\\ c & d \end{pmatrix}=\begin{pmatrix} a & -b\\ c & -d \end{pmatrix},\quad \partial_+\begin{pmatrix} a & b\\ c & d \end{pmatrix}= \begin{pmatrix} 0 & a\\ 0 & c \end{pmatrix},\quad \partial_-\begin{pmatrix} a & b\\ c & d \end{pmatrix}=\begin{pmatrix} b & 0\\ d & 0 \end{pmatrix}.\] From $a\omega^0-\omega^0 a=(1-q^2)a\omega^0=-\imath\lambda a\omega^0+O(\lambda^2),$ we see that $\gamma(a,\omega^0)=-\imath a\omega^0.$ Likewise, we get \[\gamma(\begin{pmatrix} a & b\\ c & d \end{pmatrix},\omega^i)=\frac{1}{2} t_i \begin{pmatrix} a & -b\\ c & -d \end{pmatrix} \omega^i,\ \forall\,i\in \{0,\pm\},\quad t_0=-2\imath,\ t_\pm=-\imath.\] Now we can compute the pre-Lie structure $\Xi:su_2^*\otimes su_2^*\to su_2^*$ by comparing with (\ref{preconnection-ijk-1}), namely \[\gamma(\begin{pmatrix} a & b\\ c & d \end{pmatrix},\omega^j)=\sum_{i,k\in \{0,\pm\}} \Xi^{ij}_k(\partial_i\begin{pmatrix} a & b\\ c & d \end{pmatrix})\omega^k;\] we find that the only nonzero coefficients are $\Xi^{00}_0=-\imath,\ \Xi^{0+}_+=-\frac{\imath}{2},\ \Xi^{0-}_-=-\frac{\imath}{2}$. Then $\Xi(\phi,\phi)=-\imath \phi,\ \Xi(\phi,\psi_+)=-\frac{\imath}{2}\psi_+,\ \Xi(\phi,\psi_-)=-\frac{\imath}{2}\psi_-,$ and $\Xi$ is zero on the other terms, where $\{\phi,\psi_\pm\}$ is the dual basis of $su_2^*$ to $\{H,X_\pm\}$.
Thus the corresponding pre-Lie structure of $su_2^*$ is \begin{gather*} \Xi (\phi,\phi)=-\imath \phi,\quad \Xi(\phi,\psi_\pm)=-\frac{\imath}{2}\psi_\pm,\quad\text{and zero on all other terms.} \end{gather*} Setting $t=-2\imath \phi,\ x_1=\imath (\psi_++\psi_-),\ x_2=\psi_+-\psi_-,$ we have a real pre-Lie structure on $su_2^*=\mathrm{span}\{t,x_1,x_2\}:$ \[t\circ t=-2t,\quad t\circ x_i=-x_i,\ \forall\,i=1,2.\] This is a $3$D version of $\mathfrak{b}_{1,-2}.$ \end{example} \begin{example}\label{qt-flat} Let ${\mathfrak{g}}$ be a quasi-triangular Lie bialgebra with $r$-matrix $r=r{}^{(1)}\otimes r{}^{(2)}\in{\mathfrak{g}}\otimes{\mathfrak{g}}.$ Then ${\mathfrak{g}}$ acts on its dual ${\mathfrak{g}}^*$ by the coadjoint action ${\rm ad}^*$ and by~\cite[Lemma 3.8]{Ma:blie}, ${\mathfrak{g}}^*$ becomes a left ${\mathfrak{g}}$-crossed module with $-\Xi,$ where $\Xi$ is the left ${\mathfrak{g}}^*$-action $\Xi(\phi,\psi)=-\<\phi,r{}^{(2)}\>{\rm ad}^*_{r{}^{(1)}}\psi.$ To satisfy Poisson-compatibility (\ref{compatible}), $({\mathfrak{g}},r)$ is required to obey $r{}^{(1)}\otimes [r{}^{(2)},x]+r{}^{(2)}\otimes [r{}^{(1)},x]=0,$ i.e., $r_+{\triangleright} x=0$ for any $x\in{\mathfrak{g}},$ where $r_+=(r+r_{21})/2$ is the symmetric part of $r.$ In this case ${\mathfrak{g}}^*$ has a pre-Lie algebra structure with $\Xi(\phi,\psi)=-\<\phi,r{}^{(2)}\>{\rm ad}^*_{r{}^{(1)}}\psi$ by Corollary~\ref{preliecorol}. We see in particular that every finite-dimensional cotriangular Lie bialgebra is canonically a pre-Lie algebra. \end{example} \subsection{Quantum metrics on Example~\ref{2D}} We make a small digression to cover the quantum metrics on the differential calculi in Example~\ref{2D}.
By definition, given an algebra $A$ equipped with at least $\Omega^1,\Omega^2$ of a DGA, a quantum metric means $g=g{}^{(1)}\otimes g{}^{(2)}\in \Omega^1\otimes_A\Omega^1$ (a formal sum notation) such that $\wedge(g)=0$ (this expresses symmetry via the wedge product) and which also has an inverse $(\ ,\ ):\Omega^1\otimes_A\Omega^1\to A$ such that $(\omega,g{}^{(1)})g{}^{(2)}=g{}^{(1)}(g{}^{(2)},\omega)=\omega$ for all $\omega\in\Omega^1$. This data makes $\Omega^1$ left and right self-dual in the monoidal category of $A$-bimodules. It can be shown in this case \cite{BegMa14} that $g$ must be central. In the $*$-DGA case we also require that the metric is `real' in the sense \[ {\rm flip}(*\otimes *)g=g\] where `flip' interchanges the tensor factors. We analyse the moduli of quantum metrics and also their $\lambda=0$ limit in each of the five cases: 1) For the calculus $\Omega^1(U_\lambda(\mathfrak{b}_{1,\alpha})),$ direct computation shows that there is no nondegenerate metric at the polynomial level. However, if we allow a wider class of functions (including $x^{-1},x^{\pm\alpha}$), then the 1-forms \[ u=x^{-1}{\rm d} x,\quad v=x^{\alpha}{\rm d} t\] are central, and obey $u^*=u, v^*=v$ with the result that there is a 3-parameter family of nondegenerate quantum metrics of the form \begin{equation*} g=\frac{c_1}{x^2} {\rm d} x\otimes {\rm d} x +c_2 x^{\alpha-1}({\rm d} x\otimes{\rm d} t+{\rm d} t\otimes {\rm d} x) +c_3 x^{2\alpha}{\rm d} t\otimes {\rm d} t, \end{equation*} where $c_1,c_2,c_3$ are complex parameters with $\det c=c_1c_3-c_2^2\neq0$ so that $\det g=(\det c) x^{2\alpha-2}\neq 0.$ The metric obeys the `reality' condition if and only if the $c_i$ are real. In the classical limit $\lambda=0$ the metric has constant scalar curvature \[R=-\frac{2\alpha^2 c_3}{\det c}\] and with more work one can show that up to a change of coordinates, the metric depends only on the one parameter given by the value of $R$. 
For example, if we require $\det c<0$ for Minkowski signature, then this metric is essentially de Sitter or anti-de Sitter space for some length scale. This is taken up and extended to the $n$-dimensional case in \cite{MaTao:cos}. 2) For the calculus $\Omega^1(U_\lambda(\mathfrak{b}_{2,\beta\neq0}))$, one can show that \[u=x^{\beta-1}{\rm d} x,\quad v=x^{\beta-1}(x{\rm d} t-\beta t{\rm d} x)\] are central $1$-forms. Also $u^*=u,\ v^*=v+\lambda \beta(\beta-2)u.$ Then a quantum metric takes the form \begin{equation*} \begin{split} g=c_1 u\otimes u+c_2(u\otimes v+v^*\otimes u)+c_3(v^*\otimes v+\beta\lambda (u\otimes v-v^*\otimes u)) \end{split} \end{equation*} and obeys the `reality' condition if and only if the $c_i$ are real. In this case, if we let $t'=t+\frac{c_2}{\beta},$ then ${\rm d} t'={\rm d} t,\ v'=x^{\beta-1}(x{\rm d} t'-\beta t'{\rm d} x)=v-c_2u$ and similarly for $v'{}^*$. We can use this freedom to set $c_2=0$ at the expense of a change to $c_1$, and we prefer to assume this standard form: \begin{align*} g&=c_1 u\otimes u+c_3(v^*\otimes v+\beta\lambda(u\otimes v-v^*\otimes u))\\ &=x^{2\beta-2}(c_1+c_3\beta^2t^2-\lambda c_3\beta^2(2\beta-3)t+\lambda^2c_3\beta^2(\beta^2-3\beta+3)){\rm d} x\otimes{\rm d} x\\ &-x^{2\beta-1}c_3(t-\lambda \beta(\beta-1))({\rm d} x\otimes {\rm d} t+{\rm d} t\otimes {\rm d} x)+x^{2\beta}c_3{\rm d} t\otimes {\rm d} t \end{align*} where $c_1,c_3\ne 0$. The scalar curvature in the classical limit is \[R=-x^{-2\beta} \frac{2 \beta(\beta+1)c_1}{(c_1+c_3(\beta^2-1)t^2)^2}\] which has a singularity at $x=0$ when $\beta>0$, suggesting some kind of gravitational source. The $\beta=1$ case was already in \cite{BegMa14}, where it was shown that for $c_1>0$ and $c_3<0$ the gravitational source at $x=0$ was so strong that even outgoing light (or rather any particle of arbitrarily small mass) was turned back in.
3) For the calculus $\Omega^1(U_\lambda(\mathfrak{b}_{3}))$, there is no quantum metric at the algebraic level, but if we allow a wider class of functions (including $x^{-1}$ and $\ln(x)$), then \[ u=x^{-1}{\rm d} x,\quad v=x^{-1}\ln (x){\rm d} x+x^{-1}{\rm d} t\] are central 1-forms and obey $u^*=u, v^*=v$. Then there is a 3-parameter class of quantum metrics built from these. However, with this wider class of functions, $t'=t+x \ln x$ takes us back to $\alpha=-1$ in case 1). Here $x,t'$ and their differentials obey the relations in case 1), with $x^{-1}{\rm d} t'=u+v$ here. 4) For the calculus $\Omega^1(U_\lambda(\mathfrak{b}_{4}))$, there is no quantum metric at the polynomial level, but if we allow a wider class of functions (including $x^{-1}$), then \[u=\frac{1}{x^2}{\rm d} t,\quad v=\frac{1}{x^2}(x{\rm d} x-t{\rm d} t)\] are central $1$-forms and $u^*=u, v^*=v-3\lambda u.$ A quantum metric then takes the form \begin{equation*} g=c_1 u\otimes u+c_2(u\otimes v+v^*\otimes u) + c_3(v^*\otimes v+\lambda(u\otimes v-v^*\otimes u)). \end{equation*} Thus $g$ in this form is manifestly central, and the `reality' condition requires the $c_i$ to be real. If we choose a new variable $t'=t-c_2$ we can set $c_2=0$ and take our metric in the form \begin{align*} g&=c_1 u\otimes u+c_3(v^*\otimes v+\lambda(u\otimes v-v^*\otimes u))\\ &=\frac{c_3}{x^2}{\rm d} x\otimes {\rm d} x-c_3\frac{t+2\lambda}{x^3}({\rm d} x\otimes {\rm d} t+{\rm d} t\otimes{\rm d} x)+\frac{c_1+c_3(t^2+5\lambda t+\lambda^2)}{x^4}{\rm d} t\otimes {\rm d} t, \end{align*} where $c_1,c_3\ne 0$.
The scalar curvature in the classical limit is \[R=4{x^2-2t^2\over c_1}-{8\over c_3}.\] 5) For the calculus $\Omega^1(U_\lambda(\mathfrak{b}_{5})),$ we find that \[u={\rm d} x,\quad v=x{\rm d} t+(x-t){\rm d} x\] are central $1$-forms and $u^*=u,\ v^*=v-\lambda u.$ Then a general quantum metric takes the form \begin{equation*} g=c_1 u\otimes u+c_2(u\otimes v +v^*\otimes u)+c_3(v^*\otimes v+\lambda(u\otimes v-v^*\otimes u)), \end{equation*} which is real when the $c_i$ are real. In this case we can choose a new variable $t'=t-c_2,$ which obeys the same relations in the differential algebra and can be used to absorb $c_2$ at the price of a different value of $c_3$. We can therefore take our metric in the standard form \begin{align*} g&=c_1 u\otimes u+c_3(v^*\otimes v+\lambda(u\otimes v-v^*\otimes u))\\ &=(c_1+c_3(t^2-(2x-\lambda)t+x^2+\lambda^2)){\rm d} x\otimes{\rm d} x +c_3x(x-t)({\rm d} x\otimes {\rm d} t+{\rm d} t\otimes {\rm d} x)\\ &\quad+c_3x^2{\rm d} t\otimes {\rm d} t, \end{align*} where $c_1,c_3\ne 0$. The scalar curvature in the classical limit is \[R=-\frac{4}{c_1x^2}.\] This case, if we allow a wider class of functions, is equivalent to $\beta=1$ in case 2). Here $t'=t+x\ln x$ obeys the relations there, and $x{\rm d} t'-t'{\rm d} x=x{\rm d} t+(x-t){\rm d} x$ as here. What these various examples show is that the classification of all nice left-covariant differential structures is possible and that each has its own highly restrictive form of noncommutative Riemannian geometry associated to it. The quantum Levi-Civita connection is found for case 1) in \cite{MaTao:cos} and for the $\beta=1$ case of 2) in \cite{BegMa14}; the other cases should be achievable in a similar way. \section{Quantisation of tangent bundle $TG=G{\triangleright\!\!\!<} \underline{\mathfrak{g}}$} We are interested in the quantisation of the tangent bundle $TG$ of a Poisson-Lie group $G$. The noncommutative coordinate algebra in this case is a bicrossproduct~\cite{Ma:book, Ma:bicross}.
\subsection{Review of bicrossproducts of Hopf algebras} We start with the notions of double cross sum and bicross sum of Lie bialgebras~\cite[Ch. 8]{Ma:book}. We say $({\mathfrak{g}},{\mathfrak{m}},{\triangleleft},{\triangleright})$ forms a right-left matched pair of Lie algebras if ${\mathfrak{g}},{\mathfrak{m}}$ are both Lie algebras, ${\mathfrak{g}}$ acts on ${\mathfrak{m}}$ from the right via ${\triangleleft},$ and ${\mathfrak{m}}$ acts on ${\mathfrak{g}}$ from the left via ${\triangleright},$ such that \begin{gather*} [\phi,\psi]{\triangleleft}\xi=[\phi{\triangleleft}\xi,\psi]+[\phi,\psi{\triangleleft}\xi]+\phi{\triangleleft}(\psi{\triangleright}\xi)-\psi{\triangleleft}(\phi{\triangleright} \xi),\\ \phi{\triangleright} [\xi,\eta]=[\phi{\triangleright}\xi,\eta]+[\xi,\phi{\triangleright}\eta]+(\phi{\triangleleft}\xi){\triangleright}\eta-(\phi{\triangleleft}\eta){\triangleright}\xi, \end{gather*} for any $\xi,\eta\in{\mathfrak{g}},\, \phi,\psi\in{\mathfrak{m}}.$ Given such a matched pair, one can define the `double cross sum Lie algebra' ${\mathfrak{g}}{\bowtie}{\mathfrak{m}}$ as the vector space ${\mathfrak{g}}\oplus{\mathfrak{m}}$ with the Lie bracket \[[(\xi,\phi),(\eta,\psi)]=([\xi,\eta]+\phi{\triangleright}\eta-\psi{\triangleright}\xi,[\phi,\psi]+\phi{\triangleleft}\eta-\psi{\triangleleft}\xi).\] In addition, if both ${\mathfrak{g}}$ and ${\mathfrak{m}}$ are now Lie bialgebras with ${\triangleright},{\triangleleft}$ making ${\mathfrak{g}}$ a left ${\mathfrak{m}}$-module Lie coalgebra and ${\mathfrak{m}}$ a right ${\mathfrak{g}}$-module Lie coalgebra, such that $\phi{\triangleleft} \xi\o\otimes\xi\t+\phi\o\otimes\phi\t{\triangleright}\xi=0$ for all $\xi\in{\mathfrak{g}},\ \phi\in{\mathfrak{m}},$ then the direct sum Lie coalgebra structure makes ${\mathfrak{g}}{\bowtie}{\mathfrak{m}}$ into a Lie bialgebra, \textit{the double cross sum Lie bialgebra}.
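As a quick computational sanity check (a sketch outside the paper's formalism), the double cross sum bracket above can be implemented over structure constants. We take the simplest illustrative matched pair, which is an assumption made for this sketch and is not taken from the text: ${\mathfrak{g}}={\mathbb{R}}\xi$ and ${\mathfrak{m}}={\mathbb{R}}\phi$, both abelian as Lie algebras, with $\phi{\triangleright}\xi=\xi$ and ${\triangleleft}=0$, for which the matched pair conditions hold trivially.

```python
import itertools
import numpy as np

# Illustrative matched pair (an assumption for this sketch, not from the
# text): g = R*xi and m = R*phi, both abelian, with left action
# phi |> xi = xi and right action phi <| xi = 0.  Elements of the double
# cross sum g >< m are coordinate vectors (xi-coefficient, phi-coefficient).
def bracket(X, Y):
    xi, phi = X
    eta, psi = Y
    # [(xi,phi),(eta,psi)] = ([xi,eta] + phi|>eta - psi|>xi,
    #                          [phi,psi] + phi<|eta - psi<|xi)
    g_part = phi * eta - psi * xi   # [ , ]_g = 0 here; phi|>eta = phi*eta
    m_part = 0.0                    # [ , ]_m = 0 and <| = 0 here
    return np.array([g_part, m_part])

e_xi, e_phi = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# The double cross sum bracket satisfies the Jacobi identity on basis triples
for X, Y, Z in itertools.product([e_xi, e_phi], repeat=3):
    jac = (bracket(X, bracket(Y, Z)) + bracket(Y, bracket(Z, X))
           + bracket(Z, bracket(X, Y)))
    assert np.allclose(jac, 0.0)

# and here gives [e_phi, e_xi] = e_xi, a two-dimensional nonabelian Lie algebra
assert np.allclose(bracket(e_phi, e_xi), e_xi)
```

The resulting bracket is a copy of the two-dimensional nonabelian Lie algebra, i.e. it is isomorphic to $\mathfrak{b}$ of Example~\ref{2D}.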
Next, if ${\mathfrak{g}}$ is finite dimensional, the matched pair data $({\mathfrak{g}},{\mathfrak{m}},{\triangleleft},{\triangleright})$ of Lie bialgebras equivalently defines a \textit{right-left bicross sum Lie bialgebra} ${\mathfrak{m}}{\triangleright\!\!\!\blacktriangleleft}{\mathfrak{g}}^*$ built on ${\mathfrak{m}}\oplus{\mathfrak{g}}^*$ with \begin{gather} [(\phi,f),(\psi,h)]=([\phi,\psi]_{{\mathfrak{m}}},[f,h]_{{\mathfrak{g}}^*}+f{\triangleleft}\psi-h{\triangleleft}\phi),\label{bicross-bra}\\ \delta \phi=\delta_{{\mathfrak{m}}}\phi+({\rm id}-\tau)\beta(\phi),\quad \delta f=\delta_{{\mathfrak{g}}^*} f,\label{bicross-cobra} \end{gather} for any $\phi,\psi\in{\mathfrak{m}}$ and $f,h\in{\mathfrak{g}}^*,$ where the right action of ${\mathfrak{m}}$ on ${\mathfrak{g}}^*$ and the left coaction of ${\mathfrak{g}}^*$ on ${\mathfrak{m}}$ are induced from ${\triangleleft},{\triangleright}$ by \[\<f{\triangleleft}\phi,\xi\>=\<f,\phi{\triangleright}\xi\>,\quad \beta(\phi)=\sum_i f^i\otimes\phi{\triangleleft} e_i,\] for all $\phi\in{\mathfrak{m}},\,f\in{\mathfrak{g}}^*,\,\xi\in{\mathfrak{g}},$ where $\{e_i\}$ is a basis of ${\mathfrak{g}}$ with dual basis $\{f^i\}.$ We refer to~\cite[Section 8.3]{Ma:book} for the proof.
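In coordinates, the induced right action of ${\mathfrak{m}}$ on ${\mathfrak{g}}^*$ is simply the transpose of the action matrix of $\phi{\triangleright}(\cdot)$, since $\<f{\triangleleft}\phi,\xi\>=\<f,\phi{\triangleright}\xi\>$. The following is a minimal numerical sketch of this dualisation; the random action matrix $A$ and the dimension are illustrative assumptions, not data from the text.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
A = rng.standard_normal((n, n))   # matrix of phi |> (.) on g, an assumption

# Induced right action on g^*: <f <| phi, xi> = <f, phi |> xi>,
# so in the dual basis (f <| phi) has coordinates A^T f.
def act_dual(f):
    return A.T @ f

f = rng.standard_normal(n)    # a covector in g^*
xi = rng.standard_normal(n)   # a vector in g

# the defining duality holds
assert np.isclose(act_dual(f) @ xi, f @ (A @ xi))
```

The same matrix $A$ also packages the coaction $\beta(\phi)=\sum_i f^i\otimes\phi{\triangleleft} e_i$ once a basis $\{e_i\}$ with dual $\{f^i\}$ is fixed.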
Now let $({\mathfrak{g}},{\mathfrak{m}},{\triangleleft},{\triangleright})$ be a matched pair of Lie algebras and $M$ be the connected and simply connected Lie group associated to ${\mathfrak{m}}.$ The Poisson-Lie group $M{\triangleright\!\!\!\blacktriangleleft} {\mathfrak{g}}^*$ associated to the bicross sum ${\mathfrak{m}}{\triangleright\!\!\!\blacktriangleleft}{\mathfrak{g}}^*$ is the semidirect product $M{\triangleright\!\!\!<}{\mathfrak{g}}^*$ (where ${\mathfrak{g}}^*$ is regarded as an abelian group) equipped with the Poisson bracket \[\{f,g\}=0,\quad\{\xi,\eta\}=[\xi,\eta]_{\mathfrak{g}},\quad \{\xi,f\}=\alpha_{*\xi}(f)\] for all functions $f,g$ on $M$ and linear functions $\xi,\eta$ on ${\mathfrak{g}}^*,$ where $\alpha_{*\xi}$ is the vector field for the action of ${\mathfrak{g}}$ on $M$. See~\cite[Proposition 8.4.7]{Ma:book} for the proof. Note that here ${\mathfrak{g}},{\mathfrak{m}}$ are both viewed as Lie bialgebras with zero cobracket, so the Lie bracket and Lie cobracket of the bicross sum Lie bialgebra ${\mathfrak{m}}{\triangleright\!\!\!\blacktriangleleft}{\mathfrak{g}}^*$ here are given by (\ref{bicross-bra}) and (\ref{bicross-cobra}) but with $[\ ,\ ]_{{\mathfrak{g}}^*}=0,\ \delta_{\mathfrak{m}}=0$ there. More precisely, let $({\mathfrak{g}},{\mathfrak{m}},{\triangleleft},{\triangleright})$ be a matched pair of Lie algebras, with the associated connected and simply connected Lie groups $G$ acting on ${\mathfrak{m}}$ and $M$ acting on ${\mathfrak{g}}.$ The action ${\triangleleft}$ can be viewed as a Lie algebra cocycle ${\triangleleft}\in Z^1_{{\triangleright}^*\otimes {-{\rm id}}}({\mathfrak{m}},{\mathfrak{g}}^*\otimes{\mathfrak{m}})$ and, under some assumptions, can be exponentiated to a group cocycle $a\in Z^1_{{\triangleright}^*\otimes {\rm Ad}_R}(M,{\mathfrak{g}}^*\otimes{\mathfrak{m}}),$ which defines an infinitesimal action of ${\mathfrak{g}}$ on $M$.
Hence, by evaluation of the corresponding vector fields, $a$ defines a left action of the Lie algebra ${\mathfrak{g}}$ on $C^\infty(M)$~\cite{Ma:mat}: \begin{equation}\label{vfieldaction} (\widetilde{\xi} f)(s)=\widetilde{a_\xi} (f)(s)=\frac{{\rm d}}{{\rm d} t}\Big|_{{t=0}} f(s \exp(t a_\xi(s))),\quad\forall\,f\in C^\infty(M),\ \xi\in{\mathfrak{g}}. \end{equation} We also note that ${\mathfrak{m}}$ acts on $M$ by left-invariant vector fields $(\widetilde{\phi}f)(s)=\frac{{\rm d}}{{\rm d} t}\Big|_{t=0} f(s\exp{(t\phi)})$ for any $\phi\in {\mathfrak{m}},\,f\in C^\infty(M),$ and these two actions fit together to give an action of ${\mathfrak{g}}{\bowtie}{\mathfrak{m}}$ on $C^\infty(M).$ Finally, we can explain the bicrossproduct ${\mathbb{C}}[M]{\blacktriangleright\!\!\!\triangleleft} U_\lambda({\mathfrak{g}})$ based on a matched pair of Lie algebras $({\mathfrak{g}},{\mathfrak{m}},{\triangleleft},{\triangleright})$, where ${\mathbb{C}}[M]$ is the algebraic model of functions on $M.$ The algebra of ${\mathbb{C}}[M]{\blacktriangleright\!\!\!\triangleleft} U_\lambda({\mathfrak{g}})$ is the cross product defined by the action (\ref{vfieldaction}). Its coalgebra, on the other hand, is the cross coproduct given in reasonable cases by the right coaction (defined by the left action of $M$ on ${\mathfrak{g}}$) \[\beta:{\mathfrak{g}}\to{\mathfrak{g}}\otimes {\mathbb{C}}[M],\quad \beta(\xi)(s)=s{\triangleright}\xi,\quad\ \forall\,\xi\in{\mathfrak{g}},\ s\in M.\] The map $\beta$ is extended to products of the generators of $U_\lambda({\mathfrak{g}})$ so as to form a bicrossproduct ${\mathbb{C}}[M]{\blacktriangleright\!\!\!\triangleleft} U_\lambda({\mathfrak{g}})$ as in \cite[Theorem 6.2.2]{Ma:book}.
The Poisson-Lie group $M{\triangleright\!\!\!\blacktriangleleft}{\mathfrak{g}}^*$ quantises to ${\mathbb{C}}[M]{\blacktriangleright\!\!\!\triangleleft} U_\lambda({\mathfrak{g}})$ as a noncommutative deformation of the commutative algebra of functions ${\mathbb{C}}[M{\triangleright\!\!\!\blacktriangleleft} {\mathfrak{g}}^*]\subset C^\infty(M{\triangleright\!\!\!\blacktriangleleft}{\mathfrak{g}}^*)$. See~\cite[Section 8.3]{Ma:book} for more details. The half-dualisation process we have described at the Lie bialgebra level also works at the Hopf algebra level, at least in the finite-dimensional case. So, morally speaking, $U_\lambda({\mathfrak{g}}){\bowtie} U({\mathfrak{m}})$ half-dualises in a similar way to the bicrossproduct Hopf algebra ${\mathbb{C}}[M]{\blacktriangleright\!\!\!\triangleleft} U_\lambda({\mathfrak{g}}).$ If one is only interested in the algebra and its calculus, we can extend to the cross product $C^\infty(M){>\!\!\!\triangleleft} U_\lambda({\mathfrak{g}}).$ \subsection{Poisson-Lie group structures on the tangent bundle $G{\triangleright\!\!\!<}\underline{\mathfrak{g}}$} Let $G$ be a Lie group with Lie algebra ${\mathfrak{g}}.$ As a Lie group, the tangent bundle $TG$ of the Lie group $G$ can be identified with the semidirect product of Lie groups $G{\triangleright\!\!\!<}\underline{\mathfrak{g}}$ (under the right adjoint action of $G$ on ${\mathfrak{g}}$) with product \[(g_{_{1}},x)(g_{_{2}},y)=(g_{_{1}}g_{_{2}},{\rm Ad}(g_{_{2}}^{-1})(x)+y),\quad\forall\,g_{_{1}},g_{_{2}}\in\,G,\ x,y\in\,{\mathfrak{g}},\] where $\underline{{\mathfrak{g}}}$ is ${\mathfrak{g}}$ but viewed as an abelian Poisson-Lie group under addition.
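The semidirect product law above can be sanity-checked numerically. The following Python sketch (our own illustration, not part of the construction; the helper names `mmul`, `Ad`, `prod` are ours) takes $G=GL_2({\mathbb{C}})$ acting on its Lie algebra of all $2\times2$ matrices by ${\rm Ad}(g)(x)=gxg^{-1}$ and verifies that the product is associative.

```python
# Numerical sanity check (illustration only): the semidirect product law
# (g1, x)(g2, y) = (g1 g2, Ad(g2^{-1})(x) + y) is associative.
# Here G = GL_2(C) acts on its Lie algebra gl_2 by Ad(g)(x) = g x g^{-1}.

def mmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def madd(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def minv(A):
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [[A[1][1] / det, -A[0][1] / det], [-A[1][0] / det, A[0][0] / det]]

def Ad(g, x):  # adjoint action Ad(g)(x) = g x g^{-1}
    return mmul(mmul(g, x), minv(g))

def prod(p, q):  # the semidirect product law on G x g
    (g1, x), (g2, y) = p, q
    return (mmul(g1, g2), madd(Ad(minv(g2), x), y))

# three arbitrary (group element, Lie algebra element) pairs
p = ([[1, 2j], [0, 1]], [[0.5, 1], [0, -0.5]])
q = ([[1 + 1j, 0], [3, 2]], [[0, 1j], [1, 0]])
r = ([[2, 1], [1, 1]], [[1, 0], [0, -1]])

lhs = prod(prod(p, q), r)
rhs = prod(p, prod(q, r))
assoc_err = max(abs(lhs[t][i][j] - rhs[t][i][j])
                for t in (0, 1) for i in range(2) for j in range(2))
```

The check works because ${\rm Ad}((g_2g_3)^{-1})={\rm Ad}(g_3^{-1})\circ{\rm Ad}(g_2^{-1})$, which is exactly what both sides of associativity produce.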
Naturally, the Lie algebra of $G{\triangleright\!\!\!<}\underline{\mathfrak{g}}$ is the semidirect sum Lie algebra ${\mathfrak{g}}{\triangleright\!\!\!<}\underline{{\mathfrak{g}}}$ with Lie bracket \[[\xi,\eta]=[\xi,\eta]_{{\mathfrak{g}}},\quad [x,y]=0,\quad [x,\xi]=[x,\xi]_{\mathfrak{g}},\quad\forall\,\xi,\eta\in{\mathfrak{g}},\,x,y\in\underline{{\mathfrak{g}}}.\] In light of the observations in Section~5.1, we propose the following construction of a Poisson-Lie structure on the tangent bundle $G{\triangleright\!\!\!<}\underline{{\mathfrak{g}}}$ via the bicross sum. In what follows we assume that $G$ is a finite dimensional connected and simply connected Poisson-Lie group, and ${\mathfrak{g}}$ is its Lie algebra with the corresponding Lie bialgebra structure. Denote $\overline{{\mathfrak{g}}^*}:=({\mathfrak{g}}^*,[\ , \ ]_{{\mathfrak{g}}^*},\text{ zero Lie cobracket})$ and $\overline{{\mathfrak{g}}}:=({\mathfrak{g}}, [\ ,\ ]_{\mathfrak{g}},\text{ zero Lie cobracket}),$ where $\overline{{\mathfrak{g}}^*}$ is the dual of the Lie bialgebra $\underline{{\mathfrak{g}}}=({\mathfrak{g}},\text{ zero bracket},\delta_{\mathfrak{g}})$.
One can check that $\overline{{\mathfrak{g}}^*}$ and $\overline{{\mathfrak{g}}}$ together form a matched pair of Lie bialgebras with coadjoint actions, i.e., \[\xi{\triangleleft} \phi=-{\rm ad}^*_\phi\xi=\<\phi,\xi\o\>\xi\t,\quad \xi{\triangleright} \phi={\rm ad}^*_\xi \phi=\phi\o\<\phi\t,\xi\>\] for any $\phi\in \overline{{\mathfrak{g}}^*},\,\xi\in\overline{{\mathfrak{g}}}.$ \subsubsection{Lie bialgebra level} The double cross sum Lie bialgebra $\overline{{\mathfrak{g}}^*}{\bowtie}\overline{{\mathfrak{g}}}$ is then built on ${\mathfrak{g}}^*\oplus{\mathfrak{g}}$ as a vector space with Lie bracket \begin{gather*} [\phi,\psi]=[\phi,\psi]_{{\mathfrak{g}}^*},\quad [\xi,\eta]=[\xi,\eta]_{{\mathfrak{g}}},\quad \forall\,\phi,\psi\in\overline{{\mathfrak{g}}^*},\,\xi,\eta\in\overline{{\mathfrak{g}}},\\ [\xi,\phi]=\xi{\triangleleft}\phi+\xi{\triangleright}\phi=\<\phi,\xi\o\>\xi\t+\phi\o\<\phi\t,\xi\>, \end{gather*} and zero Lie cobracket. This is nothing but the Lie algebra of $D({\mathfrak{g}})={\mathfrak{g}}^*{\bowtie}{\mathfrak{g}}$ (the double of the Lie bialgebra ${\mathfrak{g}}$) with zero Lie cobracket.
Correspondingly, the right-left bicross sum Lie bialgebra defined by the matched pair $(\overline{{\mathfrak{g}}^*},\overline{{\mathfrak{g}}},{\triangleleft},{\triangleright})$ above is $\overline{{\mathfrak{g}}}{\triangleright\!\!\!\blacktriangleleft}\underline{{\mathfrak{g}}}$, whose Lie algebra is the semidirect sum $\overline{{\mathfrak{g}}}{\triangleright\!\!\!<}\underline{{\mathfrak{g}}}$ and whose Lie coalgebra is the semidirect cobracket $\overline{{\mathfrak{g}}}{>\!\!\blacktriangleleft}\underline{{\mathfrak{g}}},$ namely \begin{equation}\label{LB-T} \begin{split} [\xi,\eta]=[\xi,\eta]_{{\mathfrak{g}}},\quad [x,y]=0,\quad [x,\xi]=[x,\xi]_{\mathfrak{g}};\\ \delta\xi=({\rm id}-\tau)\delta_{\mathfrak{g}}(\xi)=\underline{\xi\o}\otimes\overline{\xi\t}-\overline{\xi\t}\otimes \underline{\xi\o},\quad \delta x=\delta_{\mathfrak{g}} x, \end{split} \end{equation} for any $\xi,\eta\in\overline{{\mathfrak{g}}},\,x,y\in\underline{{\mathfrak{g}}}$. Here the coaction on $\overline{{\mathfrak{g}}}$ is the Lie cobracket $\delta_{\mathfrak{g}}$ viewed as a map from $\overline{{\mathfrak{g}}}$ to $\underline{{\mathfrak{g}}}\otimes\overline{{\mathfrak{g}}}.$ \subsubsection{Poisson-Lie level} Associated to the right-left bicross sum Lie bialgebra $\overline{{\mathfrak{g}}}{\triangleright\!\!\!\blacktriangleleft}\underline{{\mathfrak{g}}},$ the Lie group $G{\triangleright\!\!\!<} \underline{{\mathfrak{g}}}$ is a Poisson-Lie group (denoted by $\overline{G}{\triangleright\!\!\!\blacktriangleleft}\underline{{\mathfrak{g}}}$) with the Poisson bracket \begin{equation}\label{PB-T} \{f,h\}=0,\quad \{\phi,\psi\}=[\phi,\psi]_{{\mathfrak{g}}^*},\quad \{\phi,f\}=\widetilde{\phi}f, \end{equation} for any $\phi,\psi\in \overline{{\mathfrak{g}}^*}\subseteq C^\infty(\underline{{\mathfrak{g}}})$ and $f,h\in C^\infty(\overline{G}),$ where $\widetilde{\phi}$ denotes the left Lie algebra action of $\overline{{\mathfrak{g}}^*}$ on $C^\infty(G)$ (viewed as a vector field on $G$) and is
defined by the right action of $\overline{{\mathfrak{g}}^*}$ on $\overline{{\mathfrak{g}}}.$ The vector field $\widetilde{\phi}$ for any $\phi\in{\mathfrak{g}}^*$ can in this case be interpreted more precisely. We can view the actions between $\overline{{\mathfrak{g}}^*}$ and $\overline{{\mathfrak{g}}}$ as Lie algebra $1$-cocycles, namely the right coadjoint action ${\triangleleft}=-{\rm ad}^*:\overline{{\mathfrak{g}}}\otimes \overline{{\mathfrak{g}}^*}\to \overline{{\mathfrak{g}}}$ (of $\overline{{\mathfrak{g}}^*}$ on $\overline{{\mathfrak{g}}}$) is viewed as a map $\overline{{\mathfrak{g}}}\to (\overline{{\mathfrak{g}}^*})^*\otimes\overline{{\mathfrak{g}}}= (\underline{{\mathfrak{g}}})^{**}\otimes\overline{{\mathfrak{g}}}=\underline{{\mathfrak{g}}}\otimes\overline{{\mathfrak{g}}}.$ It maps $\xi$ to $\sum_i e_i\otimes\xi{\triangleleft} f^i=\sum_i e_i\otimes \<f^i,\xi\o\>\xi\t=\xi\o\otimes\xi\t,$ which is nothing but the Lie cobracket $\delta_{\mathfrak{g}}$ of ${\mathfrak{g}}.$ Likewise, the left coadjoint action of $\overline{{\mathfrak{g}}}$ on $\overline{{\mathfrak{g}}^*}$ is viewed as the Lie cobracket $\delta_{{\mathfrak{g}}^*}$ of ${\mathfrak{g}}^*.$ We already know that the Lie $1$-cocycle $\delta_{\mathfrak{g}}\in Z^1_{-{\rm ad}}({\mathfrak{g}},{\mathfrak{g}}\otimes{\mathfrak{g}})$ exponentiates to a group cocycle \[D^\vee\in Z^1_{{\rm Ad}_R}(G,{\mathfrak{g}}\otimes{\mathfrak{g}}),\] thus \begin{equation}\label{phitilde} \widetilde{\phi}_g:=(L_{g})_*((\phi\otimes{\rm id})D^\vee(g))\in T_gG,\quad\forall\,g\in G, \end{equation} defines the vector field on $G.$ According to \cite[Proposition 8.4.7]{Ma:book}, the Poisson bivector on the tangent bundle $TG=\overline{G}{\triangleright\!\!\!\blacktriangleleft}\underline{{\mathfrak{g}}}$ is \begin{equation}\label{PV-T} P=\sum_i(\partial_i\otimes \widetilde{f^i}-\widetilde{f^i}\otimes\partial_i)+\sum_{i,j,k} d^{ij}_k f^k \partial_i\otimes \partial_j \end{equation} where $\{\partial_i\}$ is the basis of left-invariant vector
fields generated by the basis $\{e_i\}$ of ${\mathfrak{g}}$ and $\{f^i\}$ is the dual basis of ${\mathfrak{g}}^*.$ Here $P_{KK}=\sum_{i,j,k} d^{ij}_k f^k \partial_i\otimes \partial_j$ is the known Kirillov-Kostant bracket on $\underline{\mathfrak{g}}$ with $\delta_{\mathfrak{g}} e_k=\sum_{ij}d^{ij}_k e_i\otimes e_j.$ We arrive at the following special case of \cite[Proposition~8.4.7]{Ma:book}. \begin{lemma}\label{tangent} Let $G$ be a finite dimensional connected and simply connected Poisson-Lie group and ${\mathfrak{g}}$ be its Lie algebra. The tangent bundle $TG={G}{\triangleright\!\!\!<}\underline{{\mathfrak{g}}}$ of $G$ admits a Poisson-Lie structure given by (\ref{PB-T}) or (\ref{PV-T}), denoted by $\overline{G}{\triangleright\!\!\!\blacktriangleleft}\underline{{\mathfrak{g}}}$. The corresponding Lie bialgebra is $\overline{{\mathfrak{g}}}{\triangleright\!\!\!\blacktriangleleft}\underline{{\mathfrak{g}}}$ given by (\ref{LB-T}). \end{lemma} \subsubsection{Bicrossproduct Hopf algebra} Finally, when the actions and coactions are suitably algebraic, we have a bicrossproduct Hopf algebra ${\mathbb{C}}[\overline{G}]{\blacktriangleright\!\!\!\triangleleft} U_\lambda(\overline{{\mathfrak{g}}^*})$ as a quantisation of the commutative algebra of functions ${\mathbb{C}}[\overline{G}{\triangleright\!\!\!\blacktriangleleft} \underline{{\mathfrak{g}}}]$ on tangent bundle $\overline{G}{\triangleright\!\!\!\blacktriangleleft} \underline{{\mathfrak{g}}}$ of Poisson-Lie group $G.$ The commutation relations of ${\mathbb{C}}[\overline{G}]{\blacktriangleright\!\!\!\triangleleft} U_\lambda(\overline{{\mathfrak{g}}^*})$ are \[[f,h]=0,\quad [\phi,\psi]=\lambda[\phi,\psi]_{{\mathfrak{g}}^*},\quad [\phi,f]=\lambda\widetilde{\phi} f,\] for any $\phi,\psi\in \overline{{\mathfrak{g}}^*}\subseteq C^\infty(\underline{{\mathfrak{g}}})$ and $f,h\in {\mathbb{C}}[\overline{G}].$ This construction is still quite general but includes a canonical example for all compact real forms ${\mathfrak{g}}$ 
of complex simple Lie algebras, based on the Iwasawa decomposition to provide the double cross product or `Manin triple' in this case~\cite{Ma:mat}. We start with an even simpler example. \begin{example}\label{Tm^*} Let ${\mathfrak{m}}$ be a finite dimensional real Lie algebra, viewed as a Lie bialgebra with zero Lie cobracket. Take $G={\mathfrak{m}}^*,$ the abelian Poisson-Lie group with the Kirillov-Kostant Poisson bracket given by the Lie bracket of ${\mathfrak{m}}.$ Then ${\mathfrak{g}}={\mathfrak{m}}^*$ and $\overline{{\mathfrak{g}}^*}={\mathfrak{m}}$ and $\overline{{\mathfrak{g}}}=\overline{{\mathfrak{m}}^*}=\mathbb{R}^n,$ where $n=\dim{\mathfrak{m}}.$ Since the Lie bracket of ${\mathfrak{m}}^*$ is zero, $\overline{{\mathfrak{m}}^*}$ acts trivially on ${\mathfrak{m}}$, while ${\mathfrak{m}}$ acts on $\overline{{\mathfrak{m}}^*}$ by the right coadjoint action $-{\rm ad}^*$, namely, \[f{\triangleleft}\xi=-{\rm ad}^*_\xi f,\quad\text{or}\ \<f{\triangleleft}\xi,\eta\>=\<f,[\xi,\eta]_{{\mathfrak{m}}}\>\] for any $f\in\overline{{\mathfrak{m}}^*},\,\xi,\eta\in{\mathfrak{m}}$. So $({\mathfrak{m}},\overline{{\mathfrak{m}}^*},{\triangleleft}=-{\rm ad}^*,{\triangleright}=0)$ forms a matched pair. The double cross sum of the matched pair $({\mathfrak{m}},\overline{{\mathfrak{m}}^*})$ is ${\mathfrak{m}}{\triangleright\!\!\!<}\overline{{\mathfrak{m}}^*},$ the semidirect sum Lie algebra with the coadjoint action of ${\mathfrak{m}}$ on $\overline{{\mathfrak{m}}^*}$ \begin{gather*} [\xi,\eta]=[\xi,\eta]_{\mathfrak{m}},\quad [f,h]=0,\quad [f,\xi]=f{\triangleleft}\xi=\<\xi,f\o\>f\t,\\ \delta\xi=0,\quad\delta f=0,\quad\forall\,\xi,\eta\in{\mathfrak{m}},\,f,h\in\overline{{\mathfrak{m}}^*}.
\end{gather*} Meanwhile, the right-left bicross sum of the matched pair $({\mathfrak{m}},\overline{{\mathfrak{m}}^*})$ is $\overline{{\mathfrak{m}}^*}{>\!\!\blacktriangleleft}{\mathfrak{m}}^*,$ the semidirect sum Lie coalgebra \begin{gather*} [f,h]=0,\quad [\phi,\psi]=0,\quad [\phi,f]=\phi{\triangleleft} f=0,\\ \delta f=({\rm id}-\tau)\beta(f),\quad \delta\phi=\delta_{{\mathfrak{m}}^*}\phi, \end{gather*} for any $f,h\in \overline{{\mathfrak{m}}^*},\,\phi,\psi\in{\mathfrak{m}}^*,$ where the left coaction of ${\mathfrak{m}}^*$ on $\overline{{\mathfrak{m}}^*}$ is given by \[\beta:\overline{{\mathfrak{m}}^*}\to{\mathfrak{m}}^*\otimes\overline{{\mathfrak{m}}^*},\quad \beta(f)=\sum_i f^i\otimes f{\triangleleft} e_i,\] and $\{e_i\}$ is a basis of ${\mathfrak{m}}$ with dual basis $\{f^i\}$ of ${\mathfrak{m}}^*.$ The tangent bundle of ${\mathfrak{m}}^*$ is the associated Poisson-Lie group of $\overline{{\mathfrak{m}}^*}{>\!\!\blacktriangleleft}{\mathfrak{m}}^*$, which is $\overline{M^*}{>\!\!\blacktriangleleft}{\mathfrak{m}}^*={\mathbb{R}}^n{>\!\!\blacktriangleleft}{\mathfrak{m}}^*,$ an abelian Lie group, where we identify the abelian Lie group $\overline{M^*}$ with its abelian Lie algebra $\overline{{\mathfrak{m}}^*}.$ Let $\{x^i\}$ be the coordinate functions on ${\mathbb{R}}^n$ identified with $\{e_i\}\subset{\mathfrak{m}}\subseteq C^\infty(\overline{{\mathfrak{m}}^*})=C^\infty({\mathbb{R}}^n),$ as $e_i(\sum_j\lambda_jf^j)=\lambda_i.$ The right action of ${\mathfrak{m}}$ on $\overline{{\mathfrak{m}}^*}$ transfers to $\delta_{{\mathfrak{m}}^*}\in Z^1(\overline{{\mathfrak{m}}^*},\underline{{\mathfrak{m}}^*}\otimes \overline{{\mathfrak{m}}^*}).$ As Lie group $\overline{M^*}$ is abelian and $\overline{M^*}=\overline{{\mathfrak{m}}^*}={\mathbb{R}}^n,$ so the associated group cocycle is identical to $\delta_{{\mathfrak{m}}^*},$ thus from (\ref{phitilde}), we have $\widetilde{\xi}_x f =\<x\o,\xi\>x\t{}_x f=\sum_i\<x\o,\xi\>\<x\t,e_i\>f^i{}_x 
f=\sum_i\<[\xi,e_i]_{\mathfrak{m}},x\>\frac{\partial f}{\partial x^i}(x),$ where we use the Lie cobracket in an explicit notation. This shows that \[\widetilde{\xi}=\sum_{i,j,k}\<f^i,\xi\> c^k_{ij}x^k\frac{\partial}{\partial x^j},\quad\forall\,\xi\in{\mathfrak{m}},\] where $c^k_{ij}$ are the structure coefficients of the Lie algebra ${\mathfrak{m}},$ i.e., $[e_i,e_j]_{\mathfrak{m}}=\sum_k c^k_{ij}e_k.$ Therefore the Poisson bracket on ${\mathbb{R}}^n{>\!\!\blacktriangleleft}{\mathfrak{m}}^*$ is given by \[\{f,h\}=0,\quad\{\xi,\eta\}=[\xi,\eta]_{\mathfrak{m}},\quad \{\xi,f\}=\widetilde{\xi} f=\sum_{i,j,k}\<f^i,\xi\> c^k_{ij}x^k\frac{\partial f}{\partial x^j},\] where $f,h\in C^\infty({\mathbb{R}}^n),\,\xi,\eta\in{\mathfrak{m}}.$ The bicrossproduct Hopf algebra ${\mathbb{C}}[\overline{G}]{\blacktriangleright\!\!\!\triangleleft} U_\lambda(\overline{{\mathfrak{g}}^*})={\mathbb{C}}[{\mathbb{R}}^n]{>\!\!\!\triangleleft} U_\lambda({\mathfrak{m}}),$ as the quantisation of $C^\infty({\mathbb{R}}^n{>\!\!\blacktriangleleft}{\mathfrak{m}}^*),$ has commutation relations \[[x^i,x^j]=0,\quad[e_i,e_j]=\lambda\sum_{k} c^k_{ij}e_k,\quad [e_i,x^j]=\lambda \sum_k c^k_{ij}x^k\] where $\{x^i\}$ are the coordinate functions of ${\mathbb{R}}^n=\overline{{\mathfrak{m}}^*}$, identified with the basis $\{e_i\}$ of ${\mathfrak{m}}.$ As an algebra we can equally well take $C^\infty({\mathbb{R}}^n){>\!\!\!\triangleleft} U_\lambda({\mathfrak{m}})$, i.e., not limiting ourselves to polynomials. Then $[e_i,f]=\lambda \sum_{j,k} c^k_{ij}x^k\frac{\partial f}{\partial x^j}$ more generally for the cross relations. \end{example} \begin{example}\label{TSU_2} We take $SU_2$ with its standard Lie bialgebra structure on $su_2$, where the matched pair comes from the Iwasawa decomposition of $SL_2({\mathbb{C}})$~\cite{Ma:mat}.
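As a quick numerical sanity check of the semidirect sum Lie bracket appearing in Example~\ref{Tm^*}, the following Python sketch (our own illustration; the helper names are not from the text) specialises to ${\mathfrak{m}}=su_2\cong{\mathbb{R}}^3$ with $[e_i,e_j]_{\mathfrak{m}}=\epsilon_{ijk}e_k$, so that both the bracket and the coadjoint action $f{\triangleleft}\xi$ become vector cross products (since $\<f{\triangleleft}\xi,\eta\>=\<f,[\xi,\eta]_{\mathfrak{m}}\>=\vec f\cdot(\vec\xi\times\vec\eta)=(\vec f\times\vec\xi)\cdot\vec\eta$), and verifies the Jacobi identity on random elements.

```python
import random

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]

def bracket(p, q):
    # p = (xi, f) with xi in m = su_2 (as R^3) and f in the abelian part;
    # [xi, eta] = xi x eta, [f, h] = 0, [f, xi] = f <| xi = f x xi (coadjoint action).
    (xi1, f1), (xi2, f2) = p, q
    return (cross(xi1, xi2),
            [a - b for a, b in zip(cross(f1, xi2), cross(f2, xi1))])

def add(p, q):
    return ([a + b for a, b in zip(p[0], q[0])], [a + b for a, b in zip(p[1], q[1])])

random.seed(0)
rnd = lambda: ([random.uniform(-1, 1) for _ in range(3)],
               [random.uniform(-1, 1) for _ in range(3)])
p, q, r = rnd(), rnd(), rnd()

# cyclic Jacobi sum [[p,q],r] + [[q,r],p] + [[r,p],q] should vanish
jac = add(add(bracket(bracket(p, q), r),
              bracket(bracket(q, r), p)),
          bracket(bracket(r, p), q))
jacobi_err = max(abs(v) for part in jac for v in part)
```

That the Jacobi sum vanishes reflects the general fact that a semidirect sum by a genuine Lie algebra action on an abelian part is automatically a Lie algebra.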
The bicrossproduct Hopf algebra ${\mathbb{C}}[{SU_2}]{\blacktriangleright\!\!\!\triangleleft} U_\lambda({su^*_2}),$ as an algebra, is the cross product ${\mathbb{C}}[SU_2]{>\!\!\!\triangleleft} U_\lambda(su_2^*)$ with $a,b,c,d$ commuting, $ad-bc=1$, $[x^i,x^3]=\lambda x^i$ ($i=1,2$) and \[[x^i,\mathbf{t}]=\lambda\mathbf{t}[e_i,\mathbf{t}^{-1}e_3\mathbf{t}-e_3],\ i=1,2,3,\] namely, \begin{gather} [x^1,\mathbf{t}]=-\lambda bc\,\mathbf{t}e_2+\frac{\lambda}{2}\mathbf{t}\,\mathrm{diag}(ac,-bd)+\frac{\lambda}{2}\mathrm{diag}(b,-c),\nonumber\\ [x^2,\mathbf{t}]=\lambda bc\,\mathbf{t} e_1-\frac{\imath \lambda}{2}\mathbf{t}\,\mathrm{diag}(ac,bd)+\frac{\imath \lambda}{2}\,\mathrm{diag}(b,c),\label{TSU_2-action}\\ [x^3,\mathbf{t}]=-\lambda ad\,\mathbf{t}+\lambda\mathrm{diag}(a,d),\nonumber \end{gather} where we denote $\mathbf{t}=\begin{pmatrix} a & b\\ c & d \end{pmatrix}$ and $\{e_i\}$ and $\{x^i\}$ are bases of $su_2$ and $su_2^*$ as the half-real forms of $sl_2({\mathbb{C}})$ and $sl_2^*({\mathbb{C}})$ respectively. The coalgebra of ${\mathbb{C}}[{SU_2}]{\blacktriangleright\!\!\!\triangleleft} U_\lambda({su^*_2})$ is the cross coproduct ${\mathbb{C}}[SU_2]{\blacktriangleright\!\!<} U_\lambda(su_2^*)$ associated with \begin{gather*} \Delta(x^i)=1\otimes x^i-2\sum_k x^k\otimes \mathrm{Tr}(\mathbf{t} e_i \mathbf{t}^{-1} e_k),\quad \epsilon (x^i)=0,\quad\forall\,i\in\{1,2,3\}.
\end{gather*} The $\ast$-structure is the known one on ${\mathbb{C}}[SU_2]$ with $x^{i*}=-x^i$ for each $i.$ \end{example} \proof We already know that the coordinate algebra ${\mathbb{C}}[SU_2]$ is the commutative algebra ${\mathbb{C}}[a,b,c,d]$ modulo the relation $ad-bc=1$ with $\ast$-structure $\begin{pmatrix} a^* & b^*\\ c^* & d^* \end{pmatrix} =\begin{pmatrix} d & -c\\ -b & a \end{pmatrix}.$ As a Hopf $\ast$-algebra, the coproduct, counit and antipode of ${\mathbb{C}}[SU_2]$ are given by $\Delta \begin{pmatrix} a & b\\ c & d \end{pmatrix} = \begin{pmatrix} a & b\\ c & d \end{pmatrix} \otimes \begin{pmatrix} a & b\\ c & d \end{pmatrix}, \ \epsilon \begin{pmatrix} a & b\\ c & d \end{pmatrix}= \begin{pmatrix} 1 & 0\\ 0 & 1 \end{pmatrix}, \ S \begin{pmatrix}a & b\\ c & d \end{pmatrix} = \begin{pmatrix} d & -b\\ -c & a \end{pmatrix}.$ Let $\{H,X_\pm\}$ and $\{\phi,\psi_\pm\}$ be the dual bases of $sl_2({\mathbb{C}})$ and $sl_2^*({\mathbb{C}})$ respectively, where $H=\begin{pmatrix}1 & 0\\ 0 & -1 \end{pmatrix} ,$ $X_+=\begin{pmatrix}0 & 1\\ 0 & 0 \end{pmatrix} $ and $X_-=\begin{pmatrix}0 & 0\\ 1 & 0 \end{pmatrix} .$ As the half-real forms of $sl_2({\mathbb{C}})$ and $sl^*_2({\mathbb{C}})$, the Lie algebras $su_2$ and $su^*_2$ have bases \begin{gather*} e_1=-\frac{\imath}{2}(X_++X_-),\quad e_2=-\frac{1}{2}(X_+-X_-),\quad e_3=-\frac{\imath}{2}H,\\ x^1=\psi_++\psi_-,\quad x^2=\imath(\psi_+-\psi_-),\quad x^3=2\phi. \end{gather*} respectively. 
Note that $x^i=-\imath f^i$ where $\{f^i\}$ is the dual basis of $\{e_i\}.$ The Lie brackets and Lie cobrackets of $su_2$ and $su^*_2$ are given by \begin{gather*} [e_i,e_j]=\epsilon_{ijk}e_k,\quad \delta e_i=\imath e_i\wedge e_3,\ \forall\,i,j,k;\\ [x^1,x^2]=0,\quad [x^i,x^3]=x^i,\ i=1,2,\quad \delta x^1=\imath (x^2\otimes x^3-x^3\otimes x^2),\\ \delta x^2=\imath (x^3\otimes x^1-x^1\otimes x^3),\quad \delta x^3=\imath (x^1\otimes x^2-x^2\otimes x^1), \end{gather*} where $\epsilon_{ijk}$ is totally antisymmetric and $\epsilon_{123}=1.$ Writing $\xi=\xi^ie_i\in su_2$ and $\phi=\phi_ix^i\in su^*_2$ for $3$-vectors $\vec{\xi}=(\xi^i),\,\vec{\phi}=(\phi_i),$ we know that $(su^*_2,su_2)$ forms a matched pair of Lie bialgebras with interacting actions \[\vec{\xi}{\triangleleft} \vec{\phi}=(\vec{\xi}\times\vec{e_3})\times\vec{\phi},\quad\vec{\xi}{\triangleright}\vec{\phi}=\vec{\xi}\times \vec{\phi}.\] To obtain the action of $su_2^*$ on ${\mathbb{C}}[SU_2],$ we need to solve \cite[Proposition 8.3.14]{Ma:book} \[\frac{{\rm d}}{{\rm d} t}\Big|_0 a_\phi(\mathrm{e}^{t\xi}u)={\rm Ad}_{u^{-1}}(\xi{\triangleleft}(u{\triangleright}\phi)),\quad a_\phi(I_2)=0.\] Note that $SU_2$ acts on $su_2^*$ by $u{\triangleright}\vec{\phi}=\mathrm{Rot}_u\vec{\phi},$ where we view $\phi$ as an element of $su_2$ via $\rho(\phi)=\phi_ie_i.$ One can check that \[a_{\vec{\phi}}(u)=\vec{\phi}\times\left(\mathrm{Rot}_{u^{-1}}(\vec{e_3})-\vec{e_3}\right)\] is the unique solution to the differential equation.
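The claimed solution can also be tested numerically at the identity: the defining ODE requires $\frac{{\rm d}}{{\rm d} t}\big|_0\, a_\phi(\mathrm{e}^{t\xi})=\xi{\triangleleft}\phi=(\vec{\xi}\times\vec{e_3})\times\vec{\phi}$. The following Python sketch (our own illustrative check; the helper names are not from the text) realises $\mathrm{Rot}_u$ by conjugation $u\rho(\phi)u^{-1}$ with $e_i=-\frac{\imath}{2}\sigma_i$, and compares a central finite difference with this target.

```python
def mmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def minv(A):
    det = A[0][0]*A[1][1] - A[0][1]*A[1][0]
    return [[A[1][1]/det, -A[0][1]/det], [-A[1][0]/det, A[0][0]/det]]

# e_i = -(i/2) sigma_i as 2x2 complex matrices
E = [[[0, -0.5j], [-0.5j, 0]],
     [[0, -0.5], [0.5, 0]],
     [[-0.5j, 0], [0, 0.5j]]]

def rho(v):  # v in R^3 -> sum_i v_i e_i in su_2
    return [[sum(v[k]*E[k][i][j] for k in range(3)) for j in range(2)] for i in range(2)]

def unrho(M):  # inverse of rho, using Tr(e_i e_j) = -delta_ij / 2
    return [(-2*sum(mmul(M, E[k])[i][i] for i in range(2))).real for k in range(3)]

def mexp(X):  # truncated series exponential, adequate for tiny X
    out, term = [[1, 0], [0, 1]], [[1, 0], [0, 1]]
    for n in range(1, 9):
        term = [[v/n for v in row] for row in mmul(term, X)]
        out = [[out[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    return out

def rot(u, v):  # Rot_u(v), defined through u rho(v) u^{-1}
    return unrho(mmul(mmul(u, rho(v)), minv(u)))

def cross(a, b):
    return [a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0]]

def a_cocycle(phi, u):  # a_phi(u) = phi x (Rot_{u^{-1}} e3 - e3)
    w = rot(minv(u), [0, 0, 1])
    return cross(phi, [w[0], w[1], w[2] - 1])

xi, phi, t = [0.3, -1.1, 0.7], [0.5, 0.2, -0.9], 1e-5
X = rho(xi)
fd = [(a - b)/(2*t) for a, b in zip(
    a_cocycle(phi, mexp([[t*v for v in r] for r in X])),
    a_cocycle(phi, mexp([[-t*v for v in r] for r in X])))]
target = cross(cross(xi, [0, 0, 1]), phi)
ode_err = max(abs(a - b) for a, b in zip(fd, target))
```

The central difference agrees with $(\vec{\xi}\times\vec{e_3})\times\vec{\phi}$ to second order in $t$, consistent with the first-order expansion $\mathrm{Rot}_{\mathrm{e}^{-t\xi}}\vec{e_3}=\vec{e_3}-t\,\vec{\xi}\times\vec{e_3}+O(t^2)$.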
Now we can compute by (\ref{vfieldaction}) \begin{align*} (\phi{\triangleright} \mathbf{t}^i{}_j)(u)&=\frac{{\rm d}}{{\rm d} t}\Big|_0 \mathbf{t}^i{}_j (u \mathrm{e}^{t a_\phi(u)})\\ &=\sum_k\frac{{\rm d}}{{\rm d} t}\Big|_0 \mathbf{t}^i{}_k(u) \mathbf{t}^k{}_j(\mathrm{e}^{t a_\phi(u)})\\ &=\sum_k u^i{}_k (a_\phi(u))^k{}_j\\ &=\sum_k u^i{}_k [\rho(\phi),u^{-1}e_3 u-e_3]^k{}_j, \end{align*} where $\rho(\phi)=\sum_i\phi_ie_i.$ This shows that \[[x^i,\mathbf{t}]=\lambda x^i{\triangleright}\mathbf{t}=\lambda\mathbf{t}[e_i,\mathbf{t}^{-1}e_3\mathbf{t}-e_3]\] as displayed. For each $i,$ we can work out the terms on the right explicitly (using $ad-bc=1$) as \begin{gather*} [x^1,\mathbf{t}]=-\frac{\lambda}{2}\begin{pmatrix} abd-a^2c-2b, & b^2d-a^2d+a\\ ad^2-ac^2-d, & bd^2-acd+2c \end{pmatrix},\\ [x^2,\mathbf{t}]=-\frac{\imath \lambda}{2}\begin{pmatrix} a^2c+abd-2b, & a^2d+b^2d-a\\ ac^2+ad^2-d, & bd^2+acd-2c \end{pmatrix},\\ [x^3,\mathbf{t}]=-\lambda \begin{pmatrix} a^2d-a, & abd\\ acd, & ad^2-d \end{pmatrix}. \end{gather*} These can be rewritten as the formulae (\ref{TSU_2-action}) we stated. 
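The explicit matrices just displayed can be verified numerically. The following Python sketch (our own check; the helper names are not from the text) picks a generic $\mathbf{t}\in SL_2$ by solving $ad-bc=1$ for $d$ and compares $\mathbf{t}[e_1,\mathbf{t}^{-1}e_3\mathbf{t}-e_3]$ against the stated matrix for $[x^1,\mathbf{t}]$ at $\lambda=1$.

```python
def mmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def msub(A, B):
    return [[A[i][j] - B[i][j] for j in range(2)] for i in range(2)]

e1 = [[0, -0.5j], [-0.5j, 0]]          # e_1 = -(i/2) sigma_1
e3 = [[-0.5j, 0], [0, 0.5j]]           # e_3 = -(i/2) sigma_3

# generic SL_2 matrix: choose a, b, c and solve ad - bc = 1 for d
a, b, c = 1.1 + 0.3j, -0.4 + 0.8j, 0.25 - 0.6j
d = (1 + b*c)/a
t = [[a, b], [c, d]]
tinv = [[d, -b], [-c, a]]              # exact inverse since det t = 1

# left side: t [e_1, t^{-1} e_3 t - e_3]   (lambda = 1)
N = msub(mmul(mmul(tinv, e3), t), e3)
lhs = mmul(t, msub(mmul(e1, N), mmul(N, e1)))

# right side: the matrix stated for [x^1, t]
rhs = [[-0.5*(a*b*d - a*a*c - 2*b), -0.5*(b*b*d - a*a*d + a)],
       [-0.5*(a*d*d - a*c*c - d),   -0.5*(b*d*d - a*c*d + 2*c)]]

formula_err = max(abs(lhs[i][j] - rhs[i][j]) for i in range(2) for j in range(2))
```

The two sides agree identically as polynomials in $a,b,c,d$ modulo $ad-bc=1$; the same check can be repeated for $[x^2,\mathbf{t}]$ and $[x^3,\mathbf{t}]$ by swapping in $e_2,e_3$.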
For convenience, we use the Pauli matrices $\sigma_1=\begin{pmatrix} 0 & 1\\ 1 & 0 \end{pmatrix},\ \sigma_2=\begin{pmatrix} 0 & -\imath\\ \imath & 0 \end{pmatrix},\ \sigma_3=\begin{pmatrix} 1 & 0\\ 0 & -1 \end{pmatrix}.$ Clearly, $e_i=-\frac{\imath}{2}\sigma_i$ and the $\sigma_i$ obey the identities $\sigma_i\sigma_j=\delta_{ij}I_2+\imath \epsilon_{ijk}\sigma_k$ and $[\sigma_i,\sigma_j]=2\imath \epsilon_{ijk}\sigma_k.$ The coaction of ${\mathbb{C}}[SU_2]$ on $su^*_2$ is defined by $\beta(\phi)(u)=u{\triangleright}\phi=\mathrm{Rot}_u\vec{\phi}$ for any $u\in SU_2,\,\phi\in su^*_2.$ Again, we view $\phi$ as an element of $su_2,$ so $\rho(u{\triangleright}\phi)=u\rho(\phi)u^{-1},$ namely $\sum_i(u{\triangleright}\phi)_i\sigma_i=\sum_i \phi_i u\sigma_i u^{-1}.$ In particular, we have \[(u{\triangleright} x^i)_1\sigma_1+(u{\triangleright} x^i)_2\sigma_2+(u{\triangleright} x^i)_3\sigma_3=u\sigma_i u^{-1},\quad i=1,2,3.\] Note that $\sigma_i\sigma_j=\delta_{ij}I_2+\imath\epsilon_{ijk}\sigma_k$ and $\mathrm{Tr}(\sigma_i\sigma_j)=2\delta_{ij}.$ Multiplying the displayed equation above by $\sigma_k$ from the right and taking the trace on both sides, we obtain $2 (u{\triangleright} x^i)_k=\mathrm{Tr}(u\sigma_i u^{-1}\sigma_k).$ Therefore $u{\triangleright} x^i=\frac{1}{2}\sum_k\mathrm{Tr}(u\sigma_iu^{-1}\sigma_k)x^k=-2\sum_k\mathrm{Tr}(ue_iu^{-1}e_k)x^k$ and thus $\beta(x^i)=\frac{1}{2}\sum_k x^k\otimes \mathrm{Tr}(\mathbf{t}\sigma_i \mathbf{t}^{-1}\sigma_k)=-2\sum_k x^k\otimes \mathrm{Tr}(\mathbf{t}e_i \mathbf{t}^{-1}e_k)$. This gives rise to the coproduct of $x^i$ as stated. \endproof \subsection{Preconnections on the tangent bundle $\overline{G}{\triangleright\!\!\!\blacktriangleleft}\underline{{\mathfrak{g}}}$
} We use the following lemma to construct a left pre-Lie structure on $(\overline{{\mathfrak{g}}}{\triangleright\!\!\!\blacktriangleleft}\underline{{\mathfrak{g}}})^*=(\overline{{\mathfrak{g}}})^*{\blacktriangleright\!\!\!\triangleleft} (\underline{{\mathfrak{g}}})^*=\underline{{\mathfrak{g}}^*}{\blacktriangleright\!\!\!\triangleleft} \overline{{\mathfrak{g}}^*},$ whose Lie bracket is the semidirect sum $\underline{{\mathfrak{g}}^*}{>\!\!\!\triangleleft}{\mathfrak{g}}^*$ and whose Lie cobracket is the semidirect cobracket ${{\mathfrak{g}}^*}{\blacktriangleright\!\!<}\overline{{\mathfrak{g}}^*},$ namely \begin{gather*} [\phi,\psi]=0,\quad [f,\phi]=f{\triangleright}\phi=[f,\phi]_{{\mathfrak{g}}^*},\quad [f,g]=[f,g]_{{\mathfrak{g}}^*},\\ \delta\phi=\delta_{{\mathfrak{g}}^*}\phi=\phi\o\otimes\phi\t,\quad\delta f=(f\o,0)\otimes (0,f\t)-(f\t,0)\otimes(0,f\o), \end{gather*} for any $\phi,\psi\in\underline{{\mathfrak{g}}^*},\ f,g\in\overline{{\mathfrak{g}}^*}.$ For convenience, we denote $f\in{\mathfrak{g}}^*$ by $\overline{f}$ (or $\underline{f}$) if viewed in $\overline{{\mathfrak{g}}^*}$ (or $\underline{{\mathfrak{g}}^*}$).
Thus $\delta f=\underline{f\o}\otimes\overline{f\t}+\overline{f\o}\otimes\underline{f\t}$ for any $f\in\overline{{\mathfrak{g}}^*}.$ \begin{lemma}\label{semipre} Let $(A,\circ)$ be a left pre-Lie algebra and $(B,\ast)$ be a left pre-Lie algebra in ${\mathfrak{g}}_{_{A}}\mathcal{M},$ namely, there is a left ${\mathfrak{g}}_{_{A}}$-action ${\triangleright}$ on $B$ such that \begin{equation}\label{semipre-con} a{\triangleright} (x\ast y)=(a{\triangleright} x)\ast y+ x\ast (a{\triangleright} y), \end{equation} for any $a,b\in A,\,x,y\in B.$ Then there is a left pre-Lie algebra structure on $B\oplus A$ \[(x,a)\tilde{\circ} (y,b)=(x\ast y+a{\triangleright} y,a\circ b).\] Denoting this pre-Lie algebra by $B{>\!\!\!\triangleleft} A$, we have ${\mathfrak{g}}_{_{B\rtimes A}}={\mathfrak{g}}_{_{B}}{>\!\!\!\triangleleft} {\mathfrak{g}}_{_{A}}.$ \end{lemma} \proof This follows by checking the definition of a left pre-Lie algebra directly. \endproof \begin{corollary}\label{semipre-m} Let $({\mathfrak{m}},\circ)$ be a left pre-Lie algebra. Suppose it admits a (not necessarily unital) commutative associative product $\cdot$ such that \[[\xi,x\cdot y]_{\mathfrak{m}}=[\xi,x]_{\mathfrak{m}}\cdot y+x\cdot [\xi,y]_{\mathfrak{m}},\quad \forall\,\xi,x,y\in{\mathfrak{m}},\] where $[\ ,\ ]_{\mathfrak{m}}$ is the Lie bracket defined by $\circ.$ Denote the underlying pre-Lie algebra by $\underline{{\mathfrak{m}}}=({\mathfrak{m}},\cdot).$ Then $\underline{{\mathfrak{m}}}{>\!\!\!\triangleleft}_{{\rm ad}}{\mathfrak{m}}$ is a left pre-Lie algebra with product \begin{equation} (x,\xi)\tilde{\circ} (y,\eta)=(x\cdot y+[\xi,y]_{\mathfrak{m}}, \xi\circ \eta) \end{equation} for any $x,y\in\underline{{\mathfrak{m}}},\,\xi,\eta\in{\mathfrak{m}}.$ \end{corollary} \proof Take $(A,\circ)=({\mathfrak{m}},\circ)$ and $(B,\ast)=({\mathfrak{m}},\cdot)$ in Lemma~\ref{semipre}. Here $({\mathfrak{m}},\circ)$ acts on $({\mathfrak{m}},\cdot)$ from the left by the adjoint action, and (\ref{semipre-con}) is exactly the condition displayed.
\endproof The assumption made in Corollary~\ref{semipre-m} is that $({\mathfrak{m}},\cdot, [\ ,\ ])$ is a (not necessarily unital) Poisson algebra with respect to the Lie bracket, and that the latter admits a left pre-Lie structure $\circ.$ \begin{theorem}\label{prelie-T} Let $G$ be a finite-dimensional connected and simply connected Poisson-Lie group with Lie bialgebra ${\mathfrak{g}}.$ Assume that $({\mathfrak{g}}^*,[\ ,\ ]_{{\mathfrak{g}}^*})$ admits a pre-Lie structure $\circ$ and also that ${\mathfrak{g}}^*$ admits a (not necessarily unital) Poisson algebra structure $({\mathfrak{g}}^*,\ast, [\ ,\ ]_{{\mathfrak{g}}^*})$, in the sense that \begin{equation}\label{pa} [f,\phi\ast\psi]_{{\mathfrak{g}}^*}=[f,\phi]_{{\mathfrak{g}}^*}\ast\psi+\phi\ast [f,\psi]_{{\mathfrak{g}}^*}, \end{equation} for any $\phi,\psi\in\underline{{\mathfrak{g}}^*},\,f\in {\mathfrak{g}}^*.$ Then the semidirect sum $\underline{{\mathfrak{g}}^*}{>\!\!\!\triangleleft} {\mathfrak{g}}^*$ admits a pre-Lie algebra product $\tilde{\circ}$ \begin{equation}\label{circtilde-T} (\phi,f)\tilde{\circ}(\psi,h)=(\phi\ast\psi+[f,\psi]_{{\mathfrak{g}}^*},f\circ h) \end{equation} and the tangent bundle $\overline{G}{\triangleright\!\!\!\blacktriangleleft}\underline{{\mathfrak{g}}}$ in Lemma~\ref{tangent} admits a Poisson-compatible left-covariant flat preconnection. \end{theorem} \proof Take ${\mathfrak{m}}={\mathfrak{g}}^*$ in Corollary~\ref{semipre-m}. We know that $\underline{{\mathfrak{g}}^*}{>\!\!\!\triangleleft}{\mathfrak{g}}^*$ is the Lie algebra of $\underline{{\mathfrak{g}}^*}{\blacktriangleright\!\!\!\triangleleft} \overline{{\mathfrak{g}}^*},$ dual to the Lie algebra $\overline{{\mathfrak{g}}}{\triangleright\!\!\!\blacktriangleleft} \underline{{\mathfrak{g}}}$ of the tangent bundle. Then we apply Corollary~\ref{preliecorol}. \endproof The corresponding preconnection can be computed explicitly by (\ref{preconnection-ijk-1}).
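As a toy illustration of Corollary~\ref{semipre-m} (our own example, not from the text): take ${\mathfrak{m}}$ the two-dimensional left pre-Lie algebra with basis $\{X,Y\}$, $Y\circ X=X$, $Y\circ Y=Y$, $X\circ X=X\circ Y=0$ (so $[Y,X]_{\mathfrak{m}}=X$), and take the zero product $\cdot=0$, which is trivially commutative, associative and satisfies the derivation condition. The Python sketch below checks the left pre-Lie identity for $\tilde\circ$, i.e. that the associator of $\tilde\circ$ is symmetric in its first two arguments, on random integer elements (so the check is exact).

```python
import random

def circ(u, v):
    # pre-Lie product on m: basis {X, Y}; Y o X = X, Y o Y = Y, X o X = X o Y = 0
    # u = (u_X, u_Y);  u o v = u_Y * (v_X X + v_Y Y)
    return (u[1]*v[0], u[1]*v[1])

def brk(u, v):  # induced Lie bracket [u, v] = u o v - v o u
    a, b = circ(u, v), circ(v, u)
    return (a[0] - b[0], a[1] - b[1])

def tcirc(p, q):
    # (x, xi) o~ (y, eta) = (x.y + [xi, y], xi o eta) with the zero product . = 0
    (x1, s1), (x2, s2) = p, q
    return (brk(s1, x2), circ(s1, s2))

def assoc(p, q, r):  # associator of o~
    a, b = tcirc(tcirc(p, q), r), tcirc(p, tcirc(q, r))
    return tuple(a[t][i] - b[t][i] for t in (0, 1) for i in (0, 1))

random.seed(1)
rnd = lambda: (tuple(random.randint(-5, 5) for _ in range(2)),
               tuple(random.randint(-5, 5) for _ in range(2)))
prelie_err = 0
for _ in range(20):
    p, q, r = rnd(), rnd(), rnd()
    # left pre-Lie identity: associator symmetric in the first two arguments
    prelie_err = max(prelie_err,
                     max(abs(s - t) for s, t in zip(assoc(p, q, r), assoc(q, p, r))))
```

Here the $\underline{{\mathfrak{m}}}$-part of the symmetrised associator vanishes by the Jacobi identity of $[\ ,\ ]_{\mathfrak{m}}$ and the ${\mathfrak{m}}$-part by the pre-Lie identity for $\circ$, matching the proof of Lemma~\ref{semipre}.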
For a Poisson-Lie group $G,$ let $\{e_i\}$ be a basis of ${\mathfrak{g}}$ and $\{f^i\}$ be the dual basis of ${\mathfrak{g}}^*.$ Denote by $\{\omega^i\}$ the basis of left-invariant $1$-forms that is dual to $\{\partial_i\},$ the left-invariant vector fields of $G$ generated by $\{e_i\}$ as before. For the abelian Poisson-Lie group $\underline{\mathfrak{g}}$ with the Kirillov-Kostant Poisson bracket, let $\{E_i\}$ be a basis of $\underline{{\mathfrak{g}}}$ and $\{x^i\}$ be the dual basis of $\overline{{\mathfrak{g}}^*}.$ Then $\{ {\rm d} x^i\}$ is the basis of left-invariant $1$-forms that is dual to $\{ \frac{\partial}{\partial x^i}\},$ the basis of the left-invariant vector fields of $\underline{\mathfrak{g}}$ generated by $\{E_i\}.$ Now we can choose $\{e_i, E_i\}$ to be the basis of $\overline{{\mathfrak{g}}}{\triangleright\!\!\!\blacktriangleleft} \underline{{\mathfrak{g}}},$ and so $\{f^i,x^i\}$ is the dual basis for $\underline{{\mathfrak{g}}^*}{\blacktriangleright\!\!\!\triangleleft} \overline{{\mathfrak{g}}^*}.$ Denote by $\{\widetilde{\partial_i},D_i\}$ the left-invariant vector fields on $\overline{G}{\triangleright\!\!\!\blacktriangleleft}\underline{{\mathfrak{g}}}$ generated by $\{e_i, E_i\},$ and denote by $\{\widetilde{\omega^i},\widetilde{{\rm d} x^i}\}$ the corresponding dual basis of left-invariant $1$-forms. By construction, when viewing any $f\in C^\infty(G),\ \phi\in{\mathfrak{g}}^*\subset C^\infty({\mathfrak{g}})$ as functions on the tangent bundle, we have \begin{gather*} \widetilde{\partial_i} f=\partial_i f,\quad \widetilde{\partial_i}\phi={\rm ad}^*_{e_i}\phi,\quad D_i f=0,\quad D_i\phi= \frac{\partial}{\partial x^i}\phi.
\end{gather*} This implies \[\widetilde{\partial_i}=\partial_i+\sum_j({\rm ad}^*_{e_i}x^j) \frac{\partial}{\partial x^j},\quad D_i=\frac{\partial}{\partial x^i},\quad \widetilde{\omega^i}=\omega^i,\quad \widetilde{{\rm d} x^i}={\rm d} x^i-\sum_k({\rm ad}^*_{e_k} x^i)\omega^k.\] Let $\widetilde\circ$ be the pre-Lie structure of $\underline{{\mathfrak{g}}^*}{>\!\!\!\triangleleft} {\mathfrak{g}}^*$ constructed by (\ref{circtilde-T}) in terms of $\ast,\circ$ in the setting of Theorem~\ref{prelie-T}. The Poisson-compatible left-covariant flat preconnection on the tangent bundle is then, for any function $a,$ \begin{gather*} \nabla_{\widehat{a}}\,{\omega^j}=\sum_{i,k}\widetilde{\partial_i} a\<f^i\ast f^j,e_k\>{\omega^k}+\sum_{i,k}D_i a\<[x^i,f^j]_{{\mathfrak{g}}^*},e_k\>{\omega^k},\\ \nabla_{\widehat{a}}\,\widetilde{{\rm d} x^j}=\sum_{i,k} D_i a\<x^i\circ x^j,E_k\>\widetilde{{\rm d} x^k}. \end{gather*} We write \begin{gather*} f^i\ast f^j=\sum_k a^{ij}_k f^k,\quad x^i\circ x^j=\sum_k b^{ij}_k x^k,\\ [x^i, f^j]_{{\mathfrak{g}}^*}=\sum_k \<[x^i,f^j]_{{\mathfrak{g}}^*},e_k\> f^k=\sum_{s,k}d^{sj}_k\<x^i,e_s\>f^k, \end{gather*} where $[f^i,f^j]_{{\mathfrak{g}}^*}=\sum_k d^{ij}_k f^k.$ Then, in terms of these structure coefficients, the left-covariant preconnection on the tangent bundle $\overline{G}{\triangleright\!\!\!\blacktriangleleft} \underline{{\mathfrak{g}}}$ is \begin{gather*} \nabla_{\widehat{f}}\, {\omega^j}=\sum_{i,k} a^{ij}_k(\partial_i f) {\omega^k},\quad \nabla_{\widehat{f}}\, \widetilde{{\rm d} x^j}=0;\\ \nabla_{\widehat{\phi}}\,{\omega^j}=\sum_{i,k} \left(a^{ij}_k{\rm ad}^*_{e_i}\phi+ \sum_s d^{sj}_k\<x^i,e_s\>(\frac{\partial \phi}{\partial x^i})\right)\,{\omega^k},\quad \nabla_{\widehat{\phi}}\,\widetilde{{\rm d} x^j}=\sum_{i,k}b^{ij}_k(\frac{\partial \phi}{\partial x^i})\, \widetilde{{\rm d} x^k}, \end{gather*} for any $f\in C^\infty(G),\ \phi\in{\mathfrak{g}}^*\subset C^\infty({\mathfrak{g}}).$ This result applies, for example, to tell us that we have a
left-covariant differential structure on quantum groups such as ${\mathbb{C}}[\overline G]{\blacktriangleright\!\!\!\triangleleft} U_\lambda({\mathfrak{g}}^*)$ at least to lowest order in deformation. In the special case when the product $\ast$ is zero, there is a natural differential calculus not only to lowest order. With the notation above, we have: \begin{proposition}\label{propC(G)U(g*)} Let $G$ be a finite dimensional connected and simply connected Poisson-Lie group with Lie algebra ${\mathfrak{g}}.$ If the dual Lie algebra ${\mathfrak{g}}^*$ admits a pre-Lie structure $\circ:{\mathfrak{g}}^*\otimes{\mathfrak{g}}^*\to{\mathfrak{g}}^*$ with respect to its Lie bracket ($[\ ,\ ]_{{\mathfrak{g}}^*}$ determined by $\delta_{\mathfrak{g}}$), then the bicrossproduct ${\mathbb{C}}[\overline{G}]{\blacktriangleright\!\!\!\triangleleft} U_\lambda({\mathfrak{g}}^*)$ (if it exists) admits a left-covariant differential calculus $\Omega^1=({\mathbb{C}}[\overline{G}]{\blacktriangleright\!\!\!\triangleleft} U_\lambda({\mathfrak{g}}^*)){\triangleright\!\!\!<}\Lambda^1$ with left-invariant $1$-forms $\Lambda^1$ spanned by the basis $\{\omega^i,\widetilde{{\rm d} x^i}\}$, where the commutation relations and the derivatives are given by \begin{gather*} [f,{\omega^i}]=0,\quad [f,\widetilde{{\rm d} x^i}]=0,\quad [x^i,{\omega^j}]=\sum_k \lambda \<[{x^i},f^j]_{{\mathfrak{g}}^*},e_k\> {\omega^k},\quad [x^i,\widetilde{{\rm d} x^j}]=\lambda\widetilde{{\rm d} (x^i\circ x^j)},\\ {{\rm d}} f=\sum_j (\partial_j f)\omega^j,\quad {{\rm d}} x^i=\widetilde{{\rm d} x^i}+ \sum_j({\rm ad}^*_{e_j}x^i)\omega^j \end{gather*} for any $f\in {\mathbb{C}}[\overline{G}].$ This first order differential calculus extends to $C^\infty(G){>\!\!\!\triangleleft} U_\lambda({\mathfrak{g}}^*)$ if one is only interested in the algebra and its calculus. \end{proposition} \proof It is easy to see that we have a bimodule $\Omega^1$.
As the notation indicates~\cite{MT}, the left action on $\Omega^1$ is the product of the bicrossproduct quantum group on itself while the right action is the tensor product of the right action of the bicrossproduct on itself and a right action on $\Lambda^1$. The right action of ${\mathbb{C}}[G]$ here is trivial, namely $\omega^j{\triangleleft} f=f(e)\omega^j,\ \widetilde{{\rm d} x^j}{\triangleleft} f=f(e)\widetilde{{\rm d} x^j}$, while the right actions of $x^i$ are clear from the commutation relations and are given by (summation signs omitted) \[\omega^j{\triangleleft} x^i=-\lambda \<[x^i,f^j]_{{\mathfrak{g}}^*},e_k\>\omega^k=-\lambda d^{sj}_k\<x^i,e_s\>\omega^k,\quad \widetilde{{\rm d} x^j}{\triangleleft} x^i=-\lambda\widetilde{{\rm d} (x^i\circ x^j)}=-\lambda b^{ij}_k\widetilde{{\rm d} x^k}.\] One can check that these fit together into a right action of the bicrossproduct quantum group by using the Jacobi identity of ${\mathfrak{g}}^*,$ the pre-Lie identity on $\circ$, and the fact that $(\widetilde{x^i} f)(e)=\widetilde{x^i}_ef=0$ by (\ref{phitilde}). We check that the Leibniz rule holds. The conditions ${\rm d} [f,h]=0$ and ${\rm d} [x^i, x^j]=\lambda{\rm d} [x^i,x^j]_{{\mathfrak{g}}^*}$ are easy to check, so we omit these. It remains to check that \begin{equation}\label{cross} {\rm d} [x^i,f]=\lambda{\rm d} (\widetilde{x^i} f),\quad \forall\,f\in {\mathbb{C}}[\overline{G}].
\end{equation} The right hand side of (\ref{cross}) is \[\lambda{\rm d} (\widetilde{x^i}f)=\lambda\partial_j(\widetilde{x^i}f)\omega^j,\] while the left hand side of (\ref{cross}) is \begin{align*} {\rm d} [x^i, f]&={\rm d} (x^i f- f x^i)=[{\rm d} x^i,f]+[x^i,{\rm d} f]\\ &=[\widetilde{{\rm d} x^i}+ ({\rm ad}^*_{e_j}x^i)\omega^j,f]+[x^i,(\partial_j f)\omega^j]\\ &=0+[{\rm ad}^*_{e_j}x^i, f]\omega^j+[x^i,\partial_jf]\omega^j+(\partial_k f)[x^i,\omega^k]\\ &=[{\rm ad}^*_{e_j}x^i, f]\omega^j+[x^i,\partial_jf]\omega^j+\lambda(\partial_k f)\<[x^i,f^k]_{{\mathfrak{g}}^*},e_j\>\omega^j\\ &=\left([{\rm ad}^*_{e_j}x^i, f]+[x^i,\partial_jf]+\lambda \<[x^i,f^k]_{{\mathfrak{g}}^*},e_j\>(\partial_k f)\right)\omega^j\\ &=\lambda\left(\widetilde{{\rm ad}^*_{e_j}x^i}f+\widetilde{x^i}(\partial_jf)+ \<[x^i,f^k]_{{\mathfrak{g}}^*},e_j\>(\partial_k f)\right)\omega^j. \end{align*} It suffices to show that $\partial_j(\widetilde{x^i}f)=\widetilde{{\rm ad}^*_{e_j}x^i}f+\widetilde{x^i}(\partial_jf)+\<[x^i,f^k]_{{\mathfrak{g}}^*},e_j\>(\partial_k f),$ namely \[[\partial_j,\widetilde{x^i}]=\widetilde{{\rm ad}^*_{e_j}x^i}+ \<[x^i,f^k]_{{\mathfrak{g}}^*},e_j\>\partial_k.\] Recall that in the double cross sum ${\mathfrak{g}}^*{\bowtie}{\mathfrak{g}},$ for any $e_j\in{\mathfrak{g}},\,x^i\in{\mathfrak{g}}^*$ \[[e_j, x^i]=e_j{\triangleleft} x^i+e_j{\triangleright} x^i=\<[x^i,f^k]_{{\mathfrak{g}}^*},e_j\>e_k+{\rm ad}^*_{e_j}x^i.\] Therefore the condition left to check is nothing but the Lie bracket of the elements $e_j,x^i$ viewed as the infinitesimal action of ${\mathfrak{g}}^*{\bowtie}{\mathfrak{g}}$ on $C^\infty(G)$, as explained in the general theory of double cross sums in Section~5.1. \endproof Now we compute the left-covariant first order differential calculus on the bicrossproduct quantum group ${\mathbb{C}}[SU_2]{\blacktriangleright\!\!\!\triangleleft} U_\lambda(su^*_2)$ constructed in Example~\ref{TSU_2} in detail.
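We first remark, as a brief aside, that pre-Lie structures with respect to the solvable Lie bracket $[x^1,x^2]=0,$ $[x^i,x^3]=x^i$ ($i=1,2$) on $su_2^*$ do exist; one concrete choice (an illustration on our part, not singled out by the general theory) is \[x^3\circ x^j=-x^j,\quad x^1\circ x^j=x^2\circ x^j=0,\qquad j=1,2,3.\] This product is associative, so both sides of the left pre-Lie identity vanish, while $x^i\circ x^3-x^3\circ x^i=x^i$ for $i=1,2$ and all other commutators vanish, recovering the stated bracket.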
\begin{example} As in Example~\ref{CqSU2}, the classical connected left-covariant calculus on ${\mathbb{C}}[SU_2]$ has basis of left-invariant $1$-forms \[\omega^0=d{\rm d} a-b{\rm d} c=c{\rm d} b-a{\rm d} d,\quad \omega^+=d{\rm d} b-b{\rm d} d,\quad \omega^-=a{\rm d} c-c{\rm d} a\] (corresponding to the Chevalley basis $\{H,X_\pm\}$ of $su_2$) with exterior derivative \begin{gather*} {\rm d} a= a \omega^0+b \omega^-,\quad {\rm d} b=a \omega^+-b \omega^0,\\ {\rm d} c= c \omega^0+d \omega^-,\quad {\rm d} d=c \omega^+-d \omega^0. \end{gather*} Let $\circ:su^*_2\otimes su^*_2\to su^*_2$ be a left pre-Lie algebra structure of $su^*_2$ with respect to the Lie bracket $[x^1,x^2]=0,$ $[x^i,x^3]=x^i,$ for $i=1,2$. Let $\{\widetilde{{\rm d} x^1},\widetilde{{\rm d} x^2},\widetilde{{\rm d} x^3}\}$ complete the basis of left invariant 1-forms on the tangent bundle as explained above. According to Proposition~\ref{propC(G)U(g*)}, this defines a $6$D connected left-covariant differential calculus over the bicrossproduct ${\mathbb{C}}[SU_2]{\blacktriangleright\!\!\!\triangleleft} U_\lambda(su_2^*)$ with commutation relations and exterior derivative given by \begin{gather*} [\mathbf{t},{\omega^l}]=0,\ \forall\,l\in\{0,\pm\},\quad [\mathbf{t},\widetilde{{\rm d} x^i}]=0,\quad[x^i,\widetilde{{\rm d} x^j}]=\lambda\widetilde{{\rm d} (x^i\circ x^j)},\ \forall\,i,j\in\{1,2,3\},\\ [x^1,\omega^0]=\frac{\lambda}{2}(\omega^++\omega^-),\quad [ x^1, \omega^+]=0,\quad [x^1,\omega^-]=0,\nonumber\\ [x^2,\omega^0]=\frac{\imath \lambda}{2}(\omega^+-\omega^-),\quad [x^2,\omega^+]=0,\quad [x^2,\omega^-]=0,\\ [x^3,\omega^0]=0,\quad [x^3,\omega^+]=-\lambda\omega^+,\quad [x^3,\omega^-]=-\lambda\omega^-,\nonumber\\ {\rm d} \begin{pmatrix} a & b\\ c & d \end{pmatrix} =\begin{pmatrix} a & b\\ c & d \end{pmatrix} \begin{pmatrix} \omega^0 & \omega^+\\ \omega^- & -\omega^0 \end{pmatrix},\\ {\rm d} x^1=\widetilde{{\rm d} x^1}+2\imath x^2\omega^0+x^3\omega^+-x^3\omega^-,\quad {\rm d} x^2=\widetilde{{\rm d} 
x^2}-2\imath x^1\omega^0+\imath x^3\omega^++\imath x^3\omega^-,\\ {\rm d} x^3=\widetilde{{\rm d} x^3}-(x^1+\imath x^2)\omega^++(x^1-\imath x^2)\omega^-. \end{gather*} \end{example} \proof The commutation relations and derivative are computed from the formulae provided in Proposition~\ref{propC(G)U(g*)}. It is useful to also provide an independent, more algebraic proof of the example from \cite[Theorem 2.5]{MT}, where left-covariant first order differential calculi $\Omega^1$ over a Hopf algebra $H$ are constructed from pairs $(\Lambda^1,\omega)$ where $\Lambda^1$ is a right $H$-module and $\omega:H^+\to\Lambda^1$ is a surjective right $H$-module map. Given such a pair, the commutation relations and derivative are given by $[h,v]=hv-h\o v{\triangleleft} h\t$ and ${\rm d} h=h\o\otimes\omega\pi(h\t)$ for any $h\in H,\,v\in\Lambda^1,$ where $\Delta h=h\o\otimes h\t$ denotes the coproduct of $H$ and $\pi={\rm id}-\epsilon_H.$ Firstly, the classical calculus on $A:={\mathbb{C}}[SU_2]$ corresponds to a pair $(\Lambda^1_A,\omega_{_{A}})$ with $\Lambda^1_A=\mathrm{span}\{\omega^0,\omega^\pm\},$ where the right ${\mathbb{C}}[SU_2]$-action on $\Lambda^1_A$ and the right ${\mathbb{C}}[SU_2]$-module surjective map $\omega_{_{A}}:{\mathbb{C}}[SU_2]^+\to \Lambda^1_A$ are given by \[\omega^j{\triangleleft} \mathbf{t}=\epsilon(\mathbf{t})\omega^j,\ j\in\{0,\pm\},\quad\omega_{_{A}}(\mathbf{t}-I_2)=\omega_{_{A}}\begin{pmatrix} a-1 & b\\ c & d-1 \end{pmatrix}=\begin{pmatrix} \omega^0 & \omega^+\\ \omega^- & -\omega^0 \end{pmatrix}.\] Meanwhile the calculus over $H:=U_\lambda(su^*_2)$ corresponds to a pair $(\Lambda^1_H,\omega_{_{H}})$ with $\Lambda^1_H=\mathrm{span}\{\widetilde{{\rm d} x^1},\widetilde{{\rm d} x^2},\widetilde{{\rm d} x^3}\},$ in which the right $U_\lambda(su^*_2)$-action on $\Lambda^1_H$ and the right $U_\lambda(su^*_2)$-module surjective map $\omega_{_{H}}: U_\lambda(su^*_2)^+\to \Lambda^1_H$ are given by \[\widetilde{{\rm d} x^j}{\triangleleft} x^i=-\lambda\widetilde{{\rm d} (x^i\circ x^j)},\quad \omega_{_{H}}(x^i)=\widetilde{{\rm d} x^i},\quad\forall\, i,j\in\{1,2,3\}.\] Next we construct a pair over $\widetilde H=A{\blacktriangleright\!\!\!\triangleleft} H$ with direct sum $\Lambda^1=\Lambda^1_A\oplus\Lambda^1_H$. First, it is clear that $\Lambda^1_H$ is a right $\widetilde{H}$-module with trivial $A$-action $\widetilde{{\rm d} x^j}{\triangleleft} \mathbf{t}=\epsilon(\mathbf{t})\widetilde{{\rm d} x^j}.$ One can see this more generally as $v{\triangleleft} ((h\o{\triangleright} a)h\t)=\epsilon(a) v{\triangleleft} h=(v{\triangleleft} h){\triangleleft} a=v{\triangleleft} (ha).$ Next, we define a right $U_\lambda(su^*_2)$-action on $\Lambda^1_A$ by the Lie bracket of $su^*_2$ viewing $\{\omega^0,\omega^\pm\}$ as $\{\phi,\psi_\pm\}$ (the dual basis to $\{H,X_\pm\}$), where $\{x^1=\psi_++\psi_-, x^2=\imath(\psi_+-\psi_-), x^3=2\phi\}$ is the basis for the half-real form $su_2^*$ of $sl_2^*,$ namely \begin{gather} \omega^0{\triangleleft} x^1=-\frac{\lambda}{2}(\omega^++\omega^-),\quad \omega^+{\triangleleft} x^1=0,\quad \omega^-{\triangleleft} x^1=0,\nonumber\\ \omega^0{\triangleleft} x^2=-\frac{\imath \lambda}{2}(\omega^+-\omega^-),\quad \omega^+{\triangleleft} x^2=0,\quad \omega^-{\triangleleft} x^2=0,\label{HonW}\\ \omega^0{\triangleleft} x^3=0,\quad \omega^+{\triangleleft} x^3=\lambda\omega^+,\quad \omega^-{\triangleleft} x^3=\lambda\omega^-.\nonumber \end{gather} This $H$-action commutes with the original trivial $A$-action on $\Lambda^1_A,$ hence $\Lambda^1_A$ also becomes a right $\widetilde{H}$-module, as does $\Lambda^1_A\oplus\Lambda^1_H.$ We then define the map $\omega:\widetilde H^+\to \Lambda^1_A\oplus \Lambda^1_H$ on generators by \[\omega(\mathbf{t}-I_2)=\omega_{_{A}}(\mathbf{t}-I_2)=\begin{pmatrix} \omega^0 & \omega^+\\ \omega^- & -\omega^0 \end{pmatrix},\quad \omega(x^i)=\omega_{_{H}}(x^i)=\widetilde{{\rm d} x^i},\quad\forall\, i\in\{1,2,3\}.\] This extends to the whole $\widetilde H^+$ as a right $\widetilde{H}$-module map.
To see that such an $\omega$ is well-defined, it suffices to check \[\omega(x^i\mathbf{t}-\mathbf{t}x^i)=\omega([x^i,\mathbf{t}]),\quad\forall\, i\in\{1,2,3\},\] where $[x^i,\mathbf{t}]$ are the cross relations (\ref{TSU_2-action}) computed in Example~\ref{TSU_2}. On the one hand, $\omega(x^i\mathbf{t}-\mathbf{t}x^i)=\omega(x^i\mathbf{t}-(\mathbf{t}-I_2)x^i-x^i I_2)=\omega_{_{H}}(x^i){\triangleleft}\mathbf{t}-\omega_{_{A}}(\mathbf{t}-I_2){\triangleleft} x^i-\omega_{_{H}}(x^i)I_2=-\omega_{_{A}}(\mathbf{t}-I_2){\triangleleft} x^i,$ namely \begin{equation}\label{HonW1} \omega(x^i\mathbf{t}-\mathbf{t} x^i)=-\begin{pmatrix} \omega^0 & \omega^+\\ \omega^- & -\omega^0 \end{pmatrix}{\triangleleft} x^i. \end{equation} Since \begin{align*} [x^1,\mathbf{t}]&=-\lambda bc\,\mathbf{t}e_2+\frac{\lambda}{2}\mathbf{t}\,\mathrm{diag}(ac,-bd)+\frac{\lambda}{2}\mathrm{diag}(b,-c)\\ &=-\lambda bc\,\mathbf{t}e_2+\frac{\lambda}{2}(\mathbf{t}-I_2)\,\mathrm{diag}(ac,-bd)+\frac{\lambda}{2}\mathrm{diag}(ca,-bd)+\frac{\lambda}{2}\mathrm{diag}(b,-c), \end{align*} we know \begin{align*} \omega([x^1,\mathbf{t}])&=-\lambda \omega(b)\epsilon(c\,\mathbf{t}e_2)+\frac{\lambda}{2}\omega((\mathbf{t}-I_2))\epsilon(\mathrm{diag}(ac,-bd))\\ &+\frac{\lambda}{2}\mathrm{diag}(\omega(c){\triangleleft} a,-\omega(b){\triangleleft} d)+\frac{\lambda}{2}\mathrm{diag}(\omega(b),-\omega(c))\\ &=\frac{\lambda}{2}\mathrm{diag}(\omega^++\omega^-,-\omega^+-\omega^-), \end{align*} as $\epsilon(\mathbf{t})=I_2.$ Computing likewise for all generators, we have \begin{gather*} \omega([x^1,\mathbf{t}])=\frac{\lambda}{2}\begin{pmatrix} \omega^++\omega^- & 0\\ 0 & -\omega^+-\omega^- \end{pmatrix},\quad \omega([x^2,\mathbf{t}])=\frac{\imath \lambda}{2}\begin{pmatrix} \omega^+-\omega^- & 0\\ 0 & -\omega^++\omega^- \end{pmatrix},\\ \omega([x^3,\mathbf{t}])=\lambda \begin{pmatrix} 0 & -\omega^+\\ -\omega^- & 0 \end{pmatrix}.
\end{gather*} Comparing with (\ref{HonW1}), we see that $\omega(x^i\mathbf{t}-\mathbf{t}x^i)=\omega([x^i,\mathbf{t}])$ holds for each $i=1,2,3$ if and only if the right $H$-action on $\Lambda^1_A$ is the one defined by (\ref{HonW}). From the coproduct of $x^i$ given in Example~\ref{TSU_2}, we know ${{\rm d}} x^i=\widetilde{{\rm d} x^i}+\frac{1}{2}x^k\omega(\pi(\mathrm{Tr}(\mathbf{t}\sigma_i \mathbf{t}^{-1}\sigma_k)))$. This gives rise to the displayed formulae for the derivatives of $x^i$. \endproof We now analyse when a Poisson-compatible left-covariant flat preconnection is bicovariant. \begin{lemma}\label{prelie-T-bi} Let ${\mathfrak{g}}$ be in the setting of Theorem~\ref{prelie-T}. The pre-Lie structure $\widetilde\circ$ given by (\ref{circtilde-T}) of $\underline{{\mathfrak{g}}^*}{\blacktriangleright\!\!\!\triangleleft} \overline{{\mathfrak{g}}^*}$ obeys the corresponding (\ref{Xi-bi}) if and only if the following hold \begin{gather} \delta_{{\mathfrak{g}}^*}(f\circ g)=0,\quad f\o\otimes [f\t,g]_{{\mathfrak{g}}^*}=0,\\ f\o\circ g\otimes f\t=-f\circ g\o\otimes g\t,\\ \delta_{{\mathfrak{g}}^*}(\phi\ast\psi)=0,\quad \phi\ast f\o\otimes f\t=0 \end{gather} for any $\phi,\psi\in\underline{{\mathfrak{g}}^*},\,f,g\in \overline{{\mathfrak{g}}^*}.$ \end{lemma} \proof Since (\ref{Xi-bi}) is bilinear on entries, it suffices to show that $\widetilde\circ$ obeys (\ref{Xi-bi}) on any pair of elements $(\phi,\psi)$, $(\phi,f)$, $(f,\phi)$ and $(f,g)$ if and only if all the displayed identities hold for any $\phi,\psi\in\underline{{\mathfrak{g}}^*},\,f,g\in \overline{{\mathfrak{g}}^*}.$ Firstly, for any $f\in\overline{{\mathfrak{g}}^*},\,\phi\in\underline{{\mathfrak{g}}^*},$ (\ref{Xi-bi}) on $\widetilde\circ$ reduces to \[\delta_{{\mathfrak{g}}^*}[f,\phi]_{{\mathfrak{g}}^*}-[f,\phi\o]\otimes\phi\t-\phi\o\otimes[f,\phi\t]=\underline{f\o}\ast\phi\otimes\overline{f\t}+[\overline{f\o},\phi]\otimes f\t.\] The only term in the above identity not lying in
$\underline{{\mathfrak{g}}^*}\otimes\underline{{\mathfrak{g}}^*}$ is $\underline{f\o}\ast\phi\otimes\overline{f\t}$ and hence must vanish. Since $\delta_{{\mathfrak{g}}^*}$ is a $1$-cocycle, the remaining terms imply $f\o\otimes [f\t,\phi]_{{\mathfrak{g}}^*}=0.$ Exchanging the roles of $f$ and $\phi$ in (\ref{Xi-bi}) yields $\phi\ast\underline{f\o}\otimes\overline{f\t}=0,$ which is the same condition as just obtained. Next, for any $f,g\in\overline{{\mathfrak{g}}^*},$ the condition (\ref{Xi-bi}) on $\widetilde\circ$ requires \begin{equation*} \begin{split} \underline{(f\circ g)\o}\otimes\overline{(f\circ g)\t}+\overline{(f\circ g)\o}\otimes \underline{(f\circ g)\t}-[f,\underline{g\o}]_{{\mathfrak{g}}^*}\otimes \overline{g\t}-f\circ\overline{g\o}\otimes \underline{g\t}\\ -\underline{g\o}\otimes f\circ\overline{g\t}-\overline{g\o}\otimes [f,\underline{g\t}]_{{\mathfrak{g}}^*}\\ =[\underline{f\o},g]_{{\mathfrak{g}}^*}\otimes\overline{f\t}+\overline{f\o}\circ g\otimes \underline{f\t}-\underline{g\o}\otimes\overline{g\t}\circ f.
\end{split} \end{equation*} The terms in the above identity lying in $\overline{{\mathfrak{g}}^*}\otimes\underline{{\mathfrak{g}}^*}$ give exactly the condition (\ref{Xi-bi}) on the pre-Lie structure $\circ$ for $\overline{{\mathfrak{g}}^*}.$ Taking this into account and dropping the over- and underlines, the remaining terms lying in $\underline{{\mathfrak{g}}^*}\otimes\overline{{\mathfrak{g}}^*}$ reduce to $g\o\circ f\otimes g\t+g\circ f\o\otimes f\t=0,$ which is equivalent to \[f\o\circ g\otimes f\t+f\circ g\o\otimes g\t=0,\quad \forall\,f,g\in {\mathfrak{g}}^*.\] Combining the above with $f\o\otimes [f\t,\phi]_{{\mathfrak{g}}^*}=0,$ we see that the condition (\ref{Xi-bi}) on $\circ$ reduces to $\delta_{{\mathfrak{g}}^*}(f\circ g)=0.$ Finally, for any $\phi,\psi\in\underline{{\mathfrak{g}}^*},$ the condition (\ref{Xi-bi}) on $\widetilde\circ$ reduces to (\ref{Xi-bi}) on $\ast$ for $\underline{{\mathfrak{g}}^*}.$ Since $\ast$ is commutative, this eventually becomes \[(\phi\ast\psi)\o\otimes(\phi\ast\psi)\t=\phi\ast\psi\o\otimes\psi\t+\phi\o\ast\psi\otimes\phi\t.\] Since $\phi\ast f\o\otimes f\t=0,$ this reduces to $\delta_{{\mathfrak{g}}^*}(\phi\ast \psi)=0.$ This finishes our proof. \endproof The conditions in Lemma~\ref{prelie-T-bi} all hold when the Lie bracket of ${\mathfrak{g}}$ (or the Lie cobracket of ${\mathfrak{g}}^*$) vanishes. Putting these results together we have: \begin{proposition}\label{tangentthm} Let $G$ be a finite dimensional connected and simply connected Poisson-Lie group with Lie bialgebra ${\mathfrak{g}}.$ Assume that $({\mathfrak{g}}^*,[\ ,\ ]_{{\mathfrak{g}}^*})$ obeys the conditions in Theorem~\ref{prelie-T} and Lemma~\ref{prelie-T-bi}. Then the tangent bundle $\overline{G}{\triangleright\!\!\!\blacktriangleleft}\underline{{\mathfrak{g}}}$ in Lemma~\ref{tangent} admits a Poisson-compatible bicovariant flat preconnection.
\end{proposition} \begin{example} In the setting of Example~\ref{Tm^*}, we already know from Corollary~\ref{preliecorol} that the abelian Poisson-Lie group ${\mathbb{R}}^n{>\!\!\blacktriangleleft}{\mathfrak{m}}^*$ admits a Poisson-compatible left-covariant (bicovariant) flat preconnection if and only if $(\overline{{\mathfrak{m}}^*}{>\!\!\blacktriangleleft}{\mathfrak{m}}^*)^*=\underline{{\mathfrak{m}}}{>\!\!\!\triangleleft}_{\rm ad}{\mathfrak{m}}$ admits a pre-Lie structure. From Corollary~\ref{semipre-m}, we know such pre-Lie structure $\widetilde\circ$ exists and is given by $(x,\xi)\widetilde\circ(y,\eta)=(x\cdot y+[\xi,y]_{\mathfrak{m}},\xi\circ\eta)$ if we assume $({\mathfrak{m}},\cdot,[\ ,\ ]_{\mathfrak{m}})$ to be a finite dimensional (not necessarily unital) Poisson algebra such that $({\mathfrak{m}}, [\ ,\ ]_{\mathfrak{m}})$ admits a pre-Lie structure $\circ:{\mathfrak{m}}\otimes{\mathfrak{m}}\to{\mathfrak{m}}.$ Then the corresponding preconnection is \[\nabla_{\widehat{(x,\xi)}}{\rm d} (y,\eta)={\rm d} (x\cdot y+[\xi,y]_{\mathfrak{m}},\xi\circ\eta)\] for any $x,y\in\underline{{\mathfrak{m}}},\,\xi,\eta\in{\mathfrak{m}}.$ In fact this extends to all orders. 
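A degenerate but instructive special case (our own choice of illustration): if $[\ ,\ ]_{\mathfrak{m}}=0,$ then any finite dimensional commutative associative algebra $({\mathfrak{m}},\cdot)$ is a Poisson algebra with zero bracket, $\circ=0$ is trivially a pre-Lie structure for it, and the preconnection above collapses to \[\nabla_{\widehat{(x,\xi)}}{\rm d} (y,\eta)={\rm d} (x\cdot y,0)\] for any $x,y\in\underline{{\mathfrak{m}}},\,\xi,\eta\in{\mathfrak{m}}.$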
Under the assumptions above, according to Proposition~\ref{envel}, the noncommutative algebra $U_\lambda(\underline{{\mathfrak{m}}}{>\!\!\!\triangleleft}_{\rm ad}{\mathfrak{m}})=S(\underline{{\mathfrak{m}}}){>\!\!\!\triangleleft} U_\lambda({\mathfrak{m}}),$ or the cross product of algebras ${\mathbb{C}}[{\mathbb{R}}^n]{>\!\!\!\triangleleft} U({\mathfrak{m}})$ (as quantisation of $C^\infty({\mathbb{R}}^n{>\!\!\blacktriangleleft}{\mathfrak{m}}^*)$), admits a connected bicovariant differential graded algebra \[\Omega(U_\lambda(\underline{{\mathfrak{m}}}{>\!\!\!\triangleleft}_{\rm ad}{\mathfrak{m}}))=(S(\underline{{\mathfrak{m}}}){>\!\!\!\triangleleft} U_\lambda({\mathfrak{m}})){\triangleright\!\!\!<} \Lambda(\underline{{\mathfrak{m}}}{>\!\!\!\triangleleft}_{\rm ad}{\mathfrak{m}})\] as the quantised differential graded algebra. Note that ${\rm d} (x,\xi)=1\otimes (x,\xi)\in 1\otimes\Lambda^1.$ The commutation relations on generators are \begin{gather*} [\xi,\eta]=\lambda[\xi,\eta]_{\mathfrak{m}},\quad [x,y]=0,\quad [\xi,x]=\lambda[\xi,x]_{\mathfrak{m}},\\ [x,{\rm d} y]=\lambda{\rm d}(x\cdot y),\quad [\xi,{\rm d} x]=\lambda{\rm d} [\xi,x]_{{\mathfrak{m}}},\quad [\xi,{\rm d}\eta]=\lambda {\rm d}(\xi\circ\eta) \end{gather*} for any $x,y\in\underline{{\mathfrak{m}}},\,\xi,\eta\in{\mathfrak{m}}.$ \end{example} \section{Quantisation of cotangent bundle $T^*G=\underline{{\mathfrak{g}}^*}{>\!\!\!\triangleleft} G$} In this section, we focus on the quantisation of the cotangent bundle $T^*G$ of a Poisson-Lie group $G.$ We aim to construct preconnections on $T^*G.$ As a Lie group, the cotangent bundle $T^*G$ can be identified with the semidirect product of Lie groups $\underline{{\mathfrak{g}}^*}{>\!\!\!\triangleleft} G$ with product given by \begin{equation*} (\phi,g)(\psi,h)=(\phi+{\rm Ad}^*(g)(\psi),gh) \end{equation*} for any $g,h\in G,\,\phi,\psi\in{\mathfrak{g}}^*.$ As before, $\underline{{\mathfrak{g}}^*}$ is ${\mathfrak{g}}^*$ but viewed as an abelian Lie group under addition.
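As a quick consistency check, the product law above is indeed associative, since ${\rm Ad}^*$ acts by linear maps and ${\rm Ad}^*(gh)={\rm Ad}^*(g){\rm Ad}^*(h)$: \[\left((\phi,g)(\psi,h)\right)(\chi,k)=(\phi+{\rm Ad}^*(g)(\psi)+{\rm Ad}^*(gh)(\chi),ghk)=(\phi,g)\left((\psi,h)(\chi,k)\right),\] with identity element $(0,e).$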
In particular, $(\phi,g)^{-1}=(-{\rm Ad}^*(g^{-1})(\phi),g^{-1})$ and $(0,g)(\phi,e)(0,g)^{-1}=({\rm Ad}^*(g)\phi,e).$ Here ${\rm Ad}^*$ is the coadjoint action of $G$ on the dual of its Lie algebra. The Lie algebra of $T^*G$ is then identified with the semidirect sum of Lie algebras $\underline{{\mathfrak{g}}^*}{>\!\!\!\triangleleft} {\mathfrak{g}},$ where the Lie bracket of $\underline{{\mathfrak{g}}^*}{>\!\!\!\triangleleft} {\mathfrak{g}}$ is given by \begin{equation}\label{semi-lie} [(\phi,x),(\psi,y)]=({\rm ad}^*_x\psi-{\rm ad}^*_y\phi,[x,y]_{{\mathfrak{g}}}) \end{equation} for any $\phi,\psi\in\underline{{\mathfrak{g}}^*},\,x,y\in{\mathfrak{g}}.$ Here $\underline{{\mathfrak{g}}^*}$ is ${\mathfrak{g}}^*$ viewed as an abelian Lie algebra and ${\rm ad}^*$ denotes the usual left coadjoint action of ${\mathfrak{g}}$ on ${\mathfrak{g}}^*$ (or $\underline{{\mathfrak{g}}}^*$). Our strategy to build a Poisson-Lie structure on the cotangent bundle is to construct Lie bialgebra structures on $\underline{{\mathfrak{g}}^*}{>\!\!\!\triangleleft}{\mathfrak{g}}$ via bosonization of Lie bialgebras. Then we can exponentiate the obtained Lie cobracket of $\underline{{\mathfrak{g}}^*}{>\!\!\!\triangleleft}{\mathfrak{g}}$ to a Poisson-Lie structure on $\underline{{\mathfrak{g}}^*}{>\!\!\!\triangleleft} G.$ We can always do this, as we work in the nice case when the Lie group is connected and simply connected. \subsection{Lie bialgebra structures on $\underline{{\mathfrak{g}}^*}{>\!\!\!\triangleleft}{\mathfrak{g}}$ via bosonization} Let ${}^{{\mathfrak{g}}}_{{\mathfrak{g}}}{\mathcal{M}}$ denote the monoidal category of left Lie ${\mathfrak{g}}$-crossed modules.
A \textit{braided-Lie bialgebra} $\mathfrak{b}\in {}^{{\mathfrak{g}}}_{{\mathfrak{g}}}{\mathcal{M}}$ is $(\mathfrak{b},[\ ,\ ]_{\mathfrak{b}},\delta_{\mathfrak{b}},{\triangleright},\beta)$ given by a ${\mathfrak{g}}$-crossed module $(\mathfrak{b},{\triangleright},\beta)$ that is both a Lie algebra $(\mathfrak{b},[\ ,\ ]_{\mathfrak{b}})$ and a Lie coalgebra $(\mathfrak{b},\delta_{\mathfrak{b}})$ living in ${}^{{\mathfrak{g}}}_{{\mathfrak{g}}}{\mathcal{M}},$ with the infinitesimal braiding $\Psi:\mathfrak{b}\otimes \mathfrak{b}\to \mathfrak{b}\otimes \mathfrak{b}$ obeying $\Psi(x,y)={\rm ad}_x\delta_{\mathfrak{b}} y-{\rm ad}_y\delta_{\mathfrak{b}} x-\delta_{\mathfrak{b}}([x,y]_{\mathfrak{b}})$ for any $x,y\in {\mathfrak{b}}.$ If $\mathfrak{b}$ is a braided-Lie bialgebra in ${}^{{\mathfrak{g}}}_{{\mathfrak{g}}}{\mathcal{M}},$ then the bisum $\mathfrak{b}{>\!\!\!\triangleleft\kern-.33em\cdot} {\mathfrak{g}}$ with semidirect Lie bracket/cobracket is a Lie bialgebra~\cite{Ma:blie}. For our purpose, a straightforward solution is to ask for $\underline{{\mathfrak{g}}^*}=({\mathfrak{g}}^*, [\ ,\ ]=0,\delta_{{\mathfrak{g}}^*},{\rm ad}^*,\alpha)$ to be a braided-Lie bialgebra in ${}^{{\mathfrak{g}}}_{{\mathfrak{g}}}{\mathcal{M}}$ for some left ${\mathfrak{g}}$-coaction $\alpha$ on ${\mathfrak{g}}^*.$ \begin{lemma}\label{blie} Let ${\mathfrak{g}}$ be a finite-dimensional Lie bialgebra and suppose there is a linear map $\Xi:{\mathfrak{g}}^*\otimes{\mathfrak{g}}^* \to {\mathfrak{g}}^*$ such that (\ref{compatible}) holds.
Then $\underline{{\mathfrak{g}}^*}=({\mathfrak{g}}^*,[\ ,\ ]=0,\delta_{{\mathfrak{g}}^*},{\rm ad}^*,\alpha)$ (a variation of ${\mathfrak{g}}^*$) is a braided-Lie bialgebra in ${}^{{\mathfrak{g}}}_{{\mathfrak{g}}}{\mathcal{M}}$ if and only if $\Xi$ is a pre-Lie structure on ${\mathfrak{g}}^*$ that is covariant under the Lie cobracket $\delta_{{\mathfrak{g}}^*}$ in the sense that \begin{equation}\label{Xi-ass} \Xi(\phi,\psi)\o\otimes \Xi(\phi,\psi)\t=\Xi(\phi, \psi\o)\otimes\psi\t+\psi\o\otimes\Xi(\phi,\psi\t), \end{equation} and \begin{equation}\label{Xi-con} \Xi(\phi\o,\psi)\otimes\phi\t=\psi\o\otimes\Xi(\psi\t,\phi) \end{equation} for any $\phi,\psi\in{\mathfrak{g}}^*$. Here the left ${\mathfrak{g}}$-coaction $\alpha$ and the left pre-Lie product $\Xi$ of ${\mathfrak{g}}^*$ are mutually determined via \begin{equation}\label{alpha-Xi} \<\alpha(\phi),\psi\otimes x\>=-\Xi(\psi,\phi)(x), \end{equation} for any $\phi,\psi\in{\mathfrak{g}}^*,\,x\in{\mathfrak{g}}.$ In this case, the bisum $\underline{{\mathfrak{g}}^*}{>\!\!\!\triangleleft\kern-.33em\cdot} {\mathfrak{g}}$ is a Lie bialgebra with Lie bracket given by (\ref{semi-lie}) and Lie cobracket given by \begin{equation}\label{semi-lie-co} \delta(\phi,X)=\delta_{{\mathfrak{g}}}X+\delta_{{\mathfrak{g}}^*}\phi+({\rm id}-\tau)\alpha(\phi) \end{equation} for any $\phi\in{\mathfrak{g}}^*,X\in{\mathfrak{g}}.$ \end{lemma} \proof Since the Lie bracket is zero, by definition the question amounts to finding a left ${\mathfrak{g}}$-coaction $\alpha$ on ${\mathfrak{g}}^*$ such that 1) $({\rm ad}^*,\alpha)$ makes $\underline{{\mathfrak{g}}^*}$ into a left ${\mathfrak{g}}$-crossed module; 2) $\delta_{{\mathfrak{g}}^*}$ is a left ${\mathfrak{g}}$-comodule map under $\alpha$; 3) The infinitesimal braiding $\Psi$ on $\underline{{\mathfrak{g}}^*}$ is trivial, i.e., \begin{equation}\label{Psi} \Psi(\phi,\psi)={\rm ad}^*_{\psi{}^{(1)}}\phi\otimes\psi{}^{(2)}-{\rm ad}^*_{\phi{}^{(1)}}\psi\otimes\phi{}^{(2)} -\psi{}^{(2)}\otimes{\rm ad}^*_{\psi{}^{(1)}}\phi+\phi{}^{(2)}\otimes{\rm ad}^*_{\phi{}^{(1)}}\psi \end{equation} is zero for any $\phi,\psi\in\underline{{\mathfrak{g}}^*},$ where we denote $\alpha(\phi)=\phi{}^{(1)}\otimes\phi{}^{(2)}.$ Clearly, $\alpha$ is a left ${\mathfrak{g}}$-coaction on ${\mathfrak{g}}^*$ if and only if $\Xi$ defines a left ${\mathfrak{g}}^*$ action on itself, since $\alpha$ and $\Xi$ are adjoint to each other by (\ref{alpha-Xi}), thus if and only if $\Xi$ is a left pre-Lie structure, due to (\ref{compatible}). Next, the condition that the Lie cobracket $\delta_{{\mathfrak{g}}^*}$ is a left ${\mathfrak{g}}$-comodule map under $\alpha$ means $\delta_{{\mathfrak{g}}^*}$ is a right ${\mathfrak{g}}^*$-module map under $-\Xi.$ This is exactly the assumption (\ref{Xi-ass}) on $\Xi.$ In this case, the cross condition (\ref{g-crossv}) or (\ref{Xi-bi}) (using compatibility) for making ${\mathfrak{g}}^*$ a left ${\mathfrak{g}}$-crossed module under $({\rm ad}^*,\alpha)$ becomes (\ref{Xi-con}). It remains to show that the infinitesimal braiding $\Psi$ is trivial on $\underline{{\mathfrak{g}}^*}.$ By construction, $\<\alpha(\phi),\varphi\otimes x\>=-\Xi(\varphi,\phi)(x),$ so ${\rm ad}^*_{\psi{}^{(1)}}\phi\otimes\psi{}^{(2)}=\phi\t\otimes\Xi(\phi\o,\psi)$ where $\alpha(\phi)=\phi{}^{(1)}\otimes\phi{}^{(2)}$ and $\delta_{{\mathfrak{g}}^*}\phi=\phi\o\otimes\phi\t.$ Thus \begin{align*} \Psi(\phi,\psi)&={\rm ad}^*_{\psi{}^{(1)}}\phi\otimes\psi{}^{(2)}-{\rm ad}^*_{\phi{}^{(1)}}\psi\otimes\phi{}^{(2)} -\psi{}^{(2)}\otimes{\rm ad}^*_{\psi{}^{(1)}}\phi+\phi{}^{(2)}\otimes{\rm ad}^*_{\phi{}^{(1)}}\psi\\ &= \Xi(\psi\o,\phi)\otimes\psi\t-\psi\t\otimes\Xi(\psi\o,\phi) -\Xi(\phi\o,\psi)\otimes\phi\t+\phi\t\otimes\Xi(\phi\o,\psi)\\ &=\Xi(\psi\o,\phi)\otimes\psi\t+\psi\o\otimes\Xi(\psi\t,\phi) -\Xi(\phi\o,\psi)\otimes\phi\t-\phi\o\otimes\Xi(\phi\t,\psi)\\ &=0, \end{align*} using (\ref{Xi-con}). This finishes our proof.
\endproof \begin{example}\label{kk-blie} Let ${\mathfrak{m}}$ be a pre-Lie algebra with product $\circ:{\mathfrak{m}}\otimes{\mathfrak{m}}\to{\mathfrak{m}}$ and ${\mathfrak{g}}={\mathfrak{m}}^*$ with zero Lie bracket as in Example~\ref{kk}. This meets the conditions in Lemma~\ref{blie} and we have a Lie bialgebra $\underline{{\mathfrak{g}}^*}{>\!\!\blacktriangleleft}{\mathfrak{g}}=\underline{{\mathfrak{m}}}{>\!\!\blacktriangleleft}{\mathfrak{m}}^*$ with zero Lie bracket and with Lie cobracket \[ \delta \phi=\delta_{{\mathfrak{m}}^*}\phi,\quad \delta x= ({\rm id}-\tau)\alpha(x),\quad\forall \phi\in {\mathfrak{m}}^*,\ x\in{\mathfrak{m}}\] where $\alpha$ is given by the pre-Lie algebra structure $\circ$ on ${\mathfrak{m}}$, i.e., $\<x\otimes\phi,\alpha(y)\>=-\<\phi,x\circ y\>.$ The Lie bialgebra here is the dual of semidirect sum Lie algebra $\tilde {\mathfrak{m}}={\mathfrak{m}}^*{>\!\!\!\triangleleft}{\mathfrak{m}}$ (viewed as Lie bialgebra with zero Lie cobracket) where ${\mathfrak{m}}$ acts on ${\mathfrak{m}}^*$ by the adjoint action of ${\mathfrak{m}}$ on ${\mathfrak{m}}$ given by $\circ$, i.e., $\<x{\triangleright}\phi,y\>=-\phi(x\circ y)$, \[ [x,y]=[x,y]_{\mathfrak{m}},\quad [x,\phi]=x{\triangleright}\phi,\quad [\phi,\psi]=0,\quad \forall x,y\in{\mathfrak{m}},\ \phi,\psi\in{\mathfrak{m}}^*.\] The Poisson bracket on $\tilde{{\mathfrak{m}}}^*=\underline{{\mathfrak{m}}}{>\!\!\blacktriangleleft}{\mathfrak{m}}^*$ is then the Kirillov-Kostant one for $\tilde{\mathfrak{m}}$, i.e., given by this Lie bracket. \end{example} \begin{example}\label{qt-blie} Let ${\mathfrak{g}}$ be a quasitriangular Lie bialgebra with $r$-matrix $r=r{}^{(1)}\otimes r{}^{(2)}\in {\mathfrak{g}}\otimes{\mathfrak{g}}$ such that $r_+{\triangleright} X=0$ on ${\mathfrak{g}}$. 
As in Example~\ref{qt-flat}, ${\mathfrak{g}}^*$ is a pre-Lie algebra with product $\Xi(\phi,\psi)=-\<\phi,r{}^{(2)}\>{\rm ad}^*_{r{}^{(1)}}\psi.$ Direct computation shows $\Xi$ satisfies (\ref{Xi-ass}), (\ref{Xi-con}) without any further requirement. So $\underline{{\mathfrak{g}}^*}=({\mathfrak{g}}^*,[\ ,\ ]=0,\delta_{{\mathfrak{g}}^*},{\rm ad}^*,\alpha)$ is a braided-Lie bialgebra in ${}^{{\mathfrak{g}}}_{{\mathfrak{g}}}{\mathcal{M}}$ with $\alpha(\phi)=r{}^{(2)}\otimes{\rm ad}^*_{r{}^{(1)}}\phi.$ Hence from Lemma~\ref{blie}, $\underline{{\mathfrak{g}}^*}{>\!\!\!\triangleleft\kern-.33em\cdot}{\mathfrak{g}}$ is a Lie bialgebra with Lie bracket given by (\ref{semi-lie}) and Lie cobracket given by (\ref{semi-lie-co}), i.e., \begin{equation} \delta(\phi,X)=\delta_{{\mathfrak{g}}}X+\delta_{{\mathfrak{g}}^*}\phi+({\rm id}-\tau)(r{}^{(2)}\otimes{\rm ad}^*_{r{}^{(1)}}\phi). \end{equation} Note that if ${\mathfrak{g}}$ is a quasitriangular Lie bialgebra, it is shown in~\cite[Corollary 3.2, Lemma 3.4]{Ma:blie} that $({\mathfrak{g}}^*,\delta_{{\mathfrak{g}}^*})$ is a braided-Lie algebra with Lie bracket given by \begin{equation*} [\phi,\psi]=2\<\phi,r_+{}^{(1)}\>{\rm ad}^*_{r_+{}^{(2)}}\psi=0 \end{equation*} in our case, so in this example $\underline{{\mathfrak{g}}}^*$ in Lemma~\ref{blie} agrees with a canonical construction. \end{example} \subsection{Poisson-Lie structures on $\underline{{\mathfrak{g}}^*}{>\!\!\!\triangleleft} G$ induced from $\underline{{\mathfrak{g}}^*}{>\!\!\!\triangleleft\kern-.33em\cdot}\,{\mathfrak{g}}$} Next we exponentiate our Lie bialgebra structure $\underline{{\mathfrak{g}}^*}{>\!\!\!\triangleleft\kern-.33em\cdot} {\mathfrak{g}}$ constructed by Lemma~\ref{blie} to a Poisson-Lie structure on the cotangent bundle. As usual this is done by exponentiating $\delta$ to a group 1-cocycle $D$. \begin{proposition}\label{bpoisson} Let $G$ be a connected and simply connected Poisson-Lie group.
Suppose its Lie algebra ${\mathfrak{g}},$ with a given coaction $\alpha,$ is in the setting of Lemma~\ref{blie}. Then $\underline{{\mathfrak{g}}^*}{>\!\!\!\triangleleft} G$ is a Poisson-Lie group with $$D(\phi,g)={\rm Ad}_{\phi} D(g)+\delta_{{\mathfrak{g}}^*}\phi+({\rm id}-\tau)(\phi{}^{(1)}\otimes\phi{}^{(2)}-\frac{1}{2}{\rm ad}^*_{\phi{}^{(1)}}\phi\otimes\phi{}^{(2)}),$$ where $\alpha(\phi)=\phi{}^{(1)}\otimes\phi{}^{(2)}.$ \end{proposition} \proof Because of the cocycle condition it suffices to find $D(\phi):= D(\phi,e)$ and $D(g):=D(e,g)$, then \[ D(\phi,g)=D(\phi)+{\rm Ad}_\phi D(g),\quad \forall (\phi,g)\in {\mathfrak{g}}^*{>\!\!\!\triangleleft} G\] where \[ {\rm Ad}_\phi(X)=X- {\rm ad}^*_X\phi,\quad\forall X\in {\mathfrak{g}}\subset{\mathfrak{g}}^*{>\!\!\!\triangleleft}{\mathfrak{g}},\quad \phi\in {\mathfrak{g}}^*.\] We require \[ {{\rm d} \over{\rm d} t}D(t\phi)={\rm Ad}_{t\phi}(\delta\phi)\] which we solve writing \[ D(\phi)=\delta_{{\mathfrak{g}}^*}\phi+ Z(\phi)\] so that \[ {{\rm d} \over{\rm d} t}Z(t\phi)={\rm Ad}_{t\phi}(({\rm id}-\tau)\circ\alpha(\phi))=({\rm id}-\tau)\circ\alpha(\phi)-t({\rm id}-\tau)({\rm ad}^*_{\phi{}^{(1)}}\phi\otimes\phi{}^{(2)}),\ \ Z(0)=0.\] Integrating this to \[Z(t\phi)=t({\rm id}-\tau)\circ\alpha(\phi)-{1\over 2} t^2({\rm id}-\tau)({\rm ad}^*_{\phi{}^{(1)}}\phi\otimes\phi{}^{(2)}),\] we obtain \[ D(\phi)=\delta_{{\mathfrak{g}}^*}\phi+({\rm id}-\tau)(\phi{}^{(1)}\otimes\phi{}^{(2)}-\frac{1}{2}{\rm ad}^*_{\phi{}^{(1)}}\phi\otimes\phi{}^{(2)}),\] where $\alpha(\phi)=\phi{}^{(1)}\otimes\phi{}^{(2)}.$ The general case $\frac{{\rm d}}{{\rm d} t}|_{t=0}D(\phi+t\psi)={\rm Ad}_{\phi}(\delta\psi)$ amounts to the vanishing of the expression (\ref{Psi}), which we saw holds under our assumptions in the proof of Lemma~\ref{blie}.
\endproof \begin{example} In the setting of Example~\ref{qt-blie} with $({\mathfrak{g}},r)$ quasitriangular such that $r_+{\triangleright} X=0$ for all $X\in{\mathfrak{g}}$, we know that ${\mathfrak{g}}^*{>\!\!\!\triangleleft} G$ is a Poisson-Lie group with \[ D(\phi,g)=\delta_{{\mathfrak{g}}^*}\phi+{\rm Ad}_{(\phi,g)}(r)-r+2r_+{\triangleright}\phi - r_+{\triangleright}(\phi\otimes\phi),\] where ${\triangleright}$ denotes the coadjoint action ${\rm ad}^*$. As $\alpha(\phi)=r_{21}{\triangleright}\phi,$ direct computation shows that $D(\phi)=\delta_{{\mathfrak{g}}^*}(\phi)+({\rm id}-\tau)r_{21}{\triangleright}\phi+ r_-{\triangleright}(\phi\otimes\phi)$. Since the differential equation for $D(g)$ is the same as the one on $G$, we have $D(g)={\rm Ad}_g(r)-r$ as ${\mathfrak{g}}$ is quasitriangular, and we obtain the stated result. Note that ${\rm Ad}_\phi(r)=(r{}^{(1)}-r{}^{(1)}{\triangleright}\phi)\otimes (r{}^{(2)}-r{}^{(2)}{\triangleright}\phi)=r+r{\triangleright}(\phi\otimes\phi)-r{}^{(1)}{\triangleright}\phi\otimes r{}^{(2)}-r{}^{(1)}\otimes r{}^{(2)}{\triangleright}\phi.$ The differential equation $\frac{{\rm d}}{{\rm d} t}|_{t=0}D(\phi+t\psi)={\rm Ad}_{\phi}(\delta\psi)$ amounts to $r_+{\triangleright}({\rm id}-\tau)(\phi\otimes\psi)=0$, which is guaranteed by $r_+{\triangleright} X=0$ on ${\mathfrak{g}}.$ Note that we can view $r\in (\underline{\mathfrak{g}}^*{>\!\!\!\triangleleft\kern-.33em\cdot}{\mathfrak{g}})^{\otimes 2}$; it will obey the CYBE, and in our case ${\rm ad}_\phi(r_+)=0$ as $r_+{\triangleright}\phi=0$ on ${\mathfrak{g}}^*$ under our assumptions. In this case $\underline{\mathfrak{g}}^*{>\!\!\!\triangleleft\kern-.33em\cdot}{\mathfrak{g}}$ is quasitriangular with the same $r$, with Lie cobracket \[ \delta_r(\phi)= {\rm ad}_\phi(r)=-r{}^{(1)}{\triangleright}\phi\otimes r{}^{(2)}-r{}^{(1)}\otimes r{}^{(2)}{\triangleright}\phi=({\rm id}-\tau)r_{21}{\triangleright}\phi\] at the Lie algebra level (differentiating the above ${\rm Ad}_{t\phi}$) and $\delta X$ as before.
In our case the cobracket has an additional $\delta_{{\mathfrak{g}}^*}\phi$ term reflected also in $D$. \end{example} \subsection{Preconnections on the cotangent bundle $\underline{{\mathfrak{g}}^*}{>\!\!\!\triangleleft}\, G$} Let ${\mathfrak{g}}$ be a finite-dimensional Lie bialgebra whose dual ${\mathfrak{g}}^*$ admits a pre-Lie structure $\Xi:{\mathfrak{g}}^*\otimes{\mathfrak{g}}^*\to {\mathfrak{g}}^*$ satisfying (\ref{Xi-ass}) and (\ref{Xi-con}), as in the setting of Lemma~\ref{blie}. Then the dual of the Lie bialgebra $\underline{{\mathfrak{g}}^*}{>\!\!\!\triangleleft\kern-.33em\cdot} {\mathfrak{g}}$ is $\overline{{\mathfrak{g}}}{>\!\!\!\triangleleft\kern-.33em\cdot} {\mathfrak{g}}^*,$ whose Lie bracket is the semidirect sum ${\mathfrak{g}}{>\!\!\!\triangleleft}{\mathfrak{g}}^*$ and Lie cobracket is the semidirect cobracket $\overline{{\mathfrak{g}}}{>\!\!\blacktriangleleft}{\mathfrak{g}}^*,$ namely \begin{gather*} [x,y]=[x,y]_{\mathfrak{g}},\quad [\phi,x]=\phi{\triangleright} x,\quad [\phi,\psi]=[\phi,\psi]_{{\mathfrak{g}}^*};\\ \delta x=({\rm id}-\tau)\beta(x),\quad \delta\phi=\delta_{{\mathfrak{g}}^*}\phi, \end{gather*} for any $x,y\in{\mathfrak{g}},\,\phi,\psi\in{\mathfrak{g}}^*.$ Here the left action and coaction of ${\mathfrak{g}}^*$ on ${\mathfrak{g}}$ are given by \begin{equation}\label{Xi-action} \<\phi{\triangleright} x, \psi\>=-\Xi(\phi,\psi)(x),\quad\text{ and }\quad \<\beta(x),y\otimes\phi\>=\<\phi,[x,y]\>, \end{equation} respectively. Here again, we use Lemma~\ref{semipre} to construct pre-Lie algebra structures on the semidirect sum $\overline{{\mathfrak{g}}}{>\!\!\!\triangleleft}{\mathfrak{g}}^*$. \begin{theorem}\label{prelie-C} Let $G$ be a connected and simply connected Poisson-Lie group with Lie bialgebra ${\mathfrak{g}}$. Let ${\mathfrak{g}}^*$ admit two pre-Lie structures $\Xi$ and $\circ$, with $\Xi$ obeying (\ref{Xi-ass}) and (\ref{Xi-con}) as in the setting of Lemma~\ref{blie}.
Let ${\mathfrak{g}}$ also admit a pre-Lie structure $\ast$ such that \begin{equation}\label{Xi-ast} \phi{\triangleright} (x\ast y)=(\phi{\triangleright} x)\ast y+ x\ast (\phi{\triangleright} y), \end{equation} where ${\triangleright}$ is defined by (\ref{Xi-action}). Then the Lie algebra $\overline{{\mathfrak{g}}}{>\!\!\!\triangleleft}{\mathfrak{g}}^*$ admits a pre-Lie structure $\widetilde{\circ}:$ \begin{equation}\label{circtilde-C} (x,\phi)\widetilde{\circ}(y,\psi)=(x\ast y + \phi{\triangleright} y, \phi\circ \psi) \end{equation} and hence the cotangent bundle $\underline{{\mathfrak{g}}^*}{>\!\!\!\triangleleft} G$ admits a Poisson-compatible left-covariant flat preconnection. \end{theorem} \proof Since $({\mathfrak{g}}^*,\Xi)$ is in the setting of Lemma~\ref{blie}, the left ${\mathfrak{g}}^*$-action in the semidirect sum $\overline{{\mathfrak{g}}}{>\!\!\!\triangleleft}{\mathfrak{g}}^*$ is the one defined in (\ref{Xi-action}). The rest is immediate from Lemma~\ref{semipre} and Corollary~\ref{preliecorol}. \endproof To construct a bicovariant preconnection, the pre-Lie structure constructed in Theorem~\ref{prelie-C} must satisfy the corresponding condition (\ref{Xi-bi}). \begin{proposition}\label{prelie-C-bi} In the setting of Theorem~\ref{prelie-C}, the pre-Lie structure $\widetilde\circ$ of $\overline{{\mathfrak{g}}}{>\!\!\!\triangleleft}{\mathfrak{g}}^*$ defined by (\ref{circtilde-C}) obeys the corresponding condition (\ref{Xi-bi}) if and only if $\circ$ obeys (\ref{Xi-bi}), $\ast$ is associative and the following identities hold \begin{gather} [x,y]\ast z=[y,z]\ast x,\label{ast}\\ (({\rm ad}^*_x\psi)\circ\phi)(y)+\Xi({\rm ad}^*_y\phi,\psi)(x)=0,\label{circad}\\ \Xi(\phi,\psi)([x,y]_{\mathfrak{g}})=\Xi(\phi,{\rm ad}^*_y\psi)(x)-(\phi\circ{\rm ad}^*_x\psi)(y),\label{Xiad} \end{gather} for any $x,y,z\in{\mathfrak{g}}$ and $\phi,\psi\in{\mathfrak{g}}^*.$ The associated preconnection is then bicovariant.
\end{proposition} \proof Since (\ref{Xi-bi}) is bilinear, it suffices to show that (\ref{Xi-bi}) holds on any pair of elements $(x,y)$, $(x,\phi)$, $(\phi,x)$ and $(\phi,\psi)$ if and only if all the conditions and displayed identities hold. Here we denote $\beta(x)=x^1\otimes x_2\in{\mathfrak{g}}^*\otimes{\mathfrak{g}},$ so we know \[\<x^1,y\>x_2=[x,y]_{{\mathfrak{g}}},\quad x^1\<x_2,\phi\>=-{\rm ad}^*_x\phi.\] Firstly, for any $\phi,\psi\in{\mathfrak{g}}^*,$ the condition that (\ref{Xi-bi}) holds for $\widetilde\circ$ reduces to (\ref{Xi-bi}) on the pre-Lie structure $\circ$ for ${\mathfrak{g}}^*.$ Secondly, for any $x,y\in\overline{{\mathfrak{g}}},$ the condition (\ref{Xi-bi}) requires \begin{equation*} \begin{split} (x\ast y)^1\otimes (x\ast y)_2- (x\ast y)_2\otimes (x\ast y)^1-x^1{\triangleright} y\otimes x_2+x_2\ast y\otimes x^1+x\ast y_2\otimes y^1\\ =y^1\otimes [x,y_2]_{{\mathfrak{g}}}+y_2\otimes y^1{\triangleright} x. \end{split} \end{equation*} The terms lying in $\overline{{\mathfrak{g}}}\otimes \overline{{\mathfrak{g}}}$ on both sides should be equal, i.e., $-x^1{\triangleright} y\otimes x_2=y_2\otimes y^1{\triangleright} x,$ which is equivalent to $-\Xi({\rm ad}^*_x\psi,\phi)(y)=\Xi({\rm ad}^*_y\phi,\psi)(x).$ This holds by our assumption (\ref{Xi-con}) on $\Xi.$ The terms in $\overline{{\mathfrak{g}}}\otimes{\mathfrak{g}}^*$ are equivalent to $[x\ast y,z]=[x,z]\ast y+x\ast[y,z],$ i.e., to $\ast$ being associative.
The terms in ${\mathfrak{g}}^*\otimes\overline{{\mathfrak{g}}}$ give $(x\ast y)^1\otimes (x\ast y)_2=y^1\otimes [x,y_2]_{{\mathfrak{g}}};$ applying the first entry to $z\in{\mathfrak{g}},$ we get $[x\ast y,z]=[x,[y,z]],$ which is equivalent to $[x,z]\ast y=[z,y]\ast x.$ Now, for any $x\in{\mathfrak{g}},\phi\in{\mathfrak{g}}^*,$ the condition (\ref{Xi-bi}) reduces to $0=x^1\circ\phi\otimes x_2-\phi\o\otimes\phi\t{\triangleright} x.$ Applying $y\otimes\psi,$ this becomes $-\Xi({\rm ad}^*_y\phi,\psi)(x)=(({\rm ad}^*_x\psi)\circ\phi)(y).$ Finally, for any $\phi\in{\mathfrak{g}}^*,\,x\in\overline{{\mathfrak{g}}},$ the condition (\ref{Xi-bi}) requires \begin{equation*} \begin{split} (\phi{\triangleright} x)^1\otimes (\phi{\triangleright} x)_2- (\phi{\triangleright} x)_2\otimes (\phi{\triangleright} x)^1-\phi\circ x^1\otimes x_2+\phi{\triangleright} x_2\otimes x^1\\ -x^1\otimes\phi{\triangleright} x_2+x_2\otimes\phi\circ x^1 =\phi\o{\triangleright} x\otimes \phi\t+x_2\otimes x^1\circ\phi. \end{split} \end{equation*} The terms lying in ${\mathfrak{g}}^*\otimes\overline{{\mathfrak{g}}}$ give $(\phi{\triangleright} x)^1\otimes (\phi{\triangleright} x)_2-\phi\circ x^1\otimes x_2-x^1\otimes \phi{\triangleright} x_2=0.$ Applying $y\otimes\psi,$ this is equivalent to $-\Xi(\phi,{\rm ad}^*_y\psi)(x)+(\phi\circ {\rm ad}^*_x\psi)(y)+\Xi(\phi,\psi)([x,y]_{\mathfrak{g}})=0.$ Applying $\psi\otimes y$ to the terms lying in $\overline{{\mathfrak{g}}}\otimes{\mathfrak{g}}^*,$ after cancelling the identity we just obtained, we have $(({\rm ad}^*_x\psi)\circ\phi)(y)+\Xi({\rm ad}^*_y\phi,\psi)(x)=0.$ This finishes our proof. \endproof For simplicity, one can certainly choose $\Xi=\circ$ in Theorem~\ref{prelie-C} and Proposition~\ref{prelie-C-bi}. \begin{corollary}\label{prelie-C-cor} Let ${\mathfrak{g}}$ be a finite-dimensional Lie bialgebra. Assume that ${\mathfrak{g}}^*$ admits a pre-Lie structure $\Xi$ satisfying (\ref{Xi-ass}) and (\ref{Xi-con}).
Also assume that ${\mathfrak{g}}$ admits a pre-Lie structure $\ast$ satisfying (\ref{Xi-ast}), where the action is defined by (\ref{Xi-action}) from $\Xi$. Then \begin{equation*} (x,\phi)\widetilde{\circ} (y,\psi)=(x\ast y+\phi{\triangleright} y,\Xi(\phi,\psi)) \end{equation*} defines a pre-Lie structure on the Lie algebra $\overline{{\mathfrak{g}}}{>\!\!\!\triangleleft}{\mathfrak{g}}^*,$ and thus provides a Poisson-compatible left-covariant flat preconnection on the cotangent bundle $\underline{{\mathfrak{g}}^*}{>\!\!\!\triangleleft}\, G.$ Moreover, if $\ast$ is associative and obeys (\ref{ast}), then the pre-Lie structure $\widetilde{\circ}$ obeys (\ref{Xi-bi}), so the corresponding preconnection is bicovariant. \end{corollary} \proof Clearly, there is no further condition on $\circ$ in the case $\circ=\Xi$ in Theorem~\ref{prelie-C}. In the bicovariant case, the further conditions on $\circ$ in Proposition~\ref{prelie-C-bi} are (\ref{Xi-bi}), (\ref{circad}) and (\ref{Xiad}). These can all be shown from the assumptions (\ref{Xi-ass}) and (\ref{Xi-con}) we already made on $\Xi$. In particular, (\ref{Xi-con}) implies (\ref{circad}), and (\ref{Xi-ass}) is simply a variation of (\ref{Xiad}) when $\circ=\Xi$. The only conditions left in Proposition~\ref{prelie-C-bi} are that $\ast$ is associative and that (\ref{ast}) holds. This completes our proof. \endproof \begin{example}\label{kk-pc} In the case of Example~\ref{kk-blie} we know the answer: a Poisson-compatible bicovariant flat preconnection on $\tilde{\mathfrak{m}}^*=\underline{{\mathfrak{m}}}{>\!\!\blacktriangleleft}{\mathfrak{m}}^*$ corresponds to a pre-Lie algebra structure on $\tilde{\mathfrak{m}}={\mathfrak{m}}^*{>\!\!\!\triangleleft}{\mathfrak{m}}$.
Assume $\tilde{\circ}$ is such a pre-Lie structure, and assume further that $\tilde\circ({\mathfrak{m}}\otimes{\mathfrak{m}})\subseteq{\mathfrak{m}},\,\tilde\circ({\mathfrak{m}}^*\otimes{\mathfrak{m}}^*)\subseteq{\mathfrak{m}}^*,\,\tilde\circ({\mathfrak{m}}\otimes{\mathfrak{m}}^*)\subseteq{\mathfrak{m}}^*$ and the restriction of $\tilde\circ$ to the other subspaces is zero. Directly from the definition of a pre-Lie structure, one can show that $\circ:=\tilde\circ|_{{\mathfrak{m}}\otimes{\mathfrak{m}}}$ also provides a pre-Lie structure for $({\mathfrak{m}},[\ ,\ ]_{{\mathfrak{m}}})$, while $\ast:=\tilde\circ|_{{\mathfrak{m}}^*\otimes{\mathfrak{m}}^*}$ provides a pre-Lie structure for $({\mathfrak{m}}^*,[\ ,\ ]_{{\mathfrak{m}}^*}=0),$ thus $\ast$ is associative and (\ref{ast}) holds automatically. Besides, ${\triangleright}:=\tilde\circ|_{{\mathfrak{m}}\otimes{\mathfrak{m}}^*}$ can be shown to be a left ${\mathfrak{m}}$-action on ${\mathfrak{m}}^*$, which is exactly the adjoint of the left ${\mathfrak{m}}$-action on ${\mathfrak{m}}$ given by the pre-Lie structure $\circ$ on ${\mathfrak{m}}$. Applying $\tilde{\circ}$ to any $x\in{\mathfrak{m}},\,\phi,\psi\in{\mathfrak{m}}^*,$ one has $x{\triangleright}(\phi\ast\psi)=(x{\triangleright}\phi)\ast\psi+\phi\ast(x{\triangleright}\psi)$, i.e., (\ref{Xi-ast}). The analysis above shows that $\circ,\ast,{\triangleright}$ correspond to the data in Corollary~\ref{prelie-C-cor}. So this example agrees with our construction of a Poisson-compatible bicovariant flat preconnection on $\underline{{\mathfrak{g}}}^*{>\!\!\!\triangleleft\kern-.33em\cdot}{\mathfrak{g}}=\underline{{\mathfrak{m}}}{>\!\!\blacktriangleleft}{\mathfrak{m}}^*$ in the case ${\mathfrak{g}}=({\mathfrak{m}}^*,[\ ,\ ]_{{\mathfrak{m}}^*}=0)$ in Corollary~\ref{prelie-C-cor}. We already know how to quantise the algebra $C^\infty(\tilde{{\mathfrak{m}}}^*)$ or $S(\tilde{{\mathfrak{m}}})$ and its differential graded algebra as in Example~\ref{kk}.
More precisely, the quantisation of $S(\tilde{{\mathfrak{m}}})$ is the noncommutative algebra $U_\lambda(\tilde{{\mathfrak{m}}})$ with relation $xy-yx=\lambda [x,y]$ for any $x,y\in\tilde{{\mathfrak{m}}},$ namely \[U_\lambda(\tilde{{\mathfrak{m}}})=U_\lambda({\mathfrak{m}}^*{>\!\!\!\triangleleft}{\mathfrak{m}})=S({\mathfrak{m}}^*){>\!\!\!\triangleleft} U_\lambda({\mathfrak{m}})\] with relation $x\phi-\phi x=\lambda x{\triangleright}\phi$ for any $x\in{\mathfrak{m}},\,\phi\in{\mathfrak{m}}^*.$ Besides, as in Example~\ref{kk} and Proposition~\ref{envel}, the preconnection on $\tilde{{\mathfrak{m}}}^*=\underline{\mathfrak{m}}{>\!\!\blacktriangleleft}{\mathfrak{m}}^*$ is given by \[\nabla_{\widetilde{(\phi,x)}}{\rm d}((\psi,y))={\rm d}((\phi,x)\tilde{\circ} (\psi,y))={\rm d}(\phi\ast\psi+x{\triangleright}\psi,x\circ y).\] Thus, the quantised differential calculus is \[\Omega(U_\lambda(\tilde{{\mathfrak{m}}}))=U_\lambda(\tilde{{\mathfrak{m}}}){\triangleright\!\!\!<} \Lambda(\tilde{{\mathfrak{m}}})=(S({\mathfrak{m}}^*){>\!\!\!\triangleleft} U_\lambda({{\mathfrak{m}}})){\triangleright\!\!\!<} \Lambda({\mathfrak{m}}^*\oplus{\mathfrak{m}})\] with relation \[[(\phi,x),{\rm d}(\psi,y)]=\lambda\,{\rm d} (\phi\ast\psi+x{\triangleright}\psi,x\circ y)\] for any $(\phi,x), (\psi,y)\in \tilde{{\mathfrak{m}}}\subset U_\lambda(\tilde{{\mathfrak{m}}}),$ where $\Lambda({\mathfrak{m}}^*\oplus{\mathfrak{m}})$ denotes the usual exterior algebra on the vector space ${\mathfrak{m}}^*\oplus{\mathfrak{m}}$ and ${\rm d} (\psi,y)=1\otimes (\psi+y)\in 1\otimes\Lambda.$ \end{example} \begin{example}\label{qt-pc} Consider the case of ${\mathfrak{g}}$ quasitriangular with $r_+{\triangleright} x=0$ on ${\mathfrak{g}}$, as in Example~\ref{qt-blie}.
According to Corollary~\ref{prelie-C-cor}, if ${\mathfrak{g}}$ admits a pre-Lie product $\ast$ such that \[[r{}^{(1)},x\ast y]\otimes r{}^{(2)}=[r{}^{(1)},x]\ast y\otimes r{}^{(2)}+x\ast [r{}^{(1)},y]\otimes r{}^{(2)},\] i.e., the corresponding condition (\ref{Xi-ast}), then $\overline{{\mathfrak{g}}}{>\!\!\!\triangleleft\kern-.33em\cdot}{\mathfrak{g}}^*$ in Example~\ref{qt-blie} admits a pre-Lie structure $\widetilde\circ$, \[x\widetilde\circ y=x\ast y,\quad \phi\widetilde\circ x=\phi{\triangleright} x=-\<\phi,r{}^{(2)}\>[r{}^{(1)}, x],\quad \phi\widetilde\circ\psi=-\<\phi,r{}^{(2)}\>{\rm ad}^*_{r{}^{(1)}}\psi,\] and thus determines a Poisson-compatible left-covariant flat preconnection on the cotangent bundle $\underline{{\mathfrak{g}}^*}{>\!\!\!\triangleleft} G.$ Such a preconnection is bicovariant if $\ast$ is associative and (\ref{ast}) holds, in which case condition (\ref{Xi-ast}) vanishes. \end{example} \begin{example} Let ${\mathfrak{m}}$ be the 2-dimensional complex nonabelian Lie algebra defined by $[x,y]=x.$ Its dual ${\mathfrak{m}}^*$ is the 2-dimensional abelian Lie algebra, and there are five families of pre-Lie structures on ${\mathfrak{m}}$, see~\cite{Bu}. Among the many choices of pairs of pre-Lie structures for ${\mathfrak{m}}$ and ${\mathfrak{m}}^*,$ there are two pairs that meet our condition (\ref{Xi-ast}) and provide a pre-Lie structure for $\tilde{{\mathfrak{m}}}={\mathfrak{m}}^*{>\!\!\!\triangleleft}{\mathfrak{m}},$ namely \begin{align*} (1)&\quad y\circ x=-x,\quad y\circ y=-\frac{1}{2} y,\quad Y\ast Y=X,\\ {}&\quad x{\triangleright} X=0,\quad x{\triangleright} Y=0,\quad y{\triangleright} X= X,\quad y{\triangleright} Y={1\over 2} Y;\\ (2)&\quad y\circ x=- x,\quad X\ast Y=X,\quad Y\ast X=X,\quad Y\ast Y=Y,\\ {}&\quad x{\triangleright} X=0,\quad x{\triangleright} Y=0,\quad y{\triangleright} X= X,\quad y{\triangleright} Y=0, \end{align*} where $\{X,Y\}$ is chosen to be the basis of ${\mathfrak{m}}^*$ dual to $\{x,y\}$.
According to Theorem~\ref{prelie-C} and the analysis in Example~\ref{kk-pc}, we know that $\Omega(U_\lambda(\tilde{{\mathfrak{m}}}))=U_\lambda(\tilde {\mathfrak{m}}){\triangleright\!\!\!<}\Lambda({\mathfrak{m}}^*\oplus{\mathfrak{m}})$ is a bicovariant differential graded algebra. In particular, $\Omega^1(U_\lambda(\tilde{{\mathfrak{m}}}))=U_\lambda(\tilde {\mathfrak{m}}){\rm d} x\oplus U_\lambda(\tilde {\mathfrak{m}}){\rm d} y\oplus U_\lambda(\tilde {\mathfrak{m}}){\rm d} X\oplus U_\lambda(\tilde {\mathfrak{m}}){\rm d} Y.$ The commutation relations for case (1) are: \begin{gather*} [y,{\rm d} x]=-\lambda{\rm d} x,\quad [y,{\rm d} y]=-\frac{1}{2}\lambda{\rm d} y,\quad [Y,{\rm d} Y]=\lambda{\rm d} X,\\ [y,{\rm d} X]=\lambda {\rm d} X,\quad [y,{\rm d} Y]=\frac{1}{2}\lambda{\rm d} Y. \end{gather*} For case (2), we have \begin{gather*} [y,{\rm d} x]=-\lambda {\rm d} x,\quad [X,{\rm d} Y]=\lambda{\rm d} X,\quad [Y,{\rm d} X]=\lambda{\rm d} X,\quad [Y,{\rm d} Y]=\lambda{\rm d} Y,\\ [y,{\rm d} X]=\lambda{\rm d} X. \end{gather*} \end{example}
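The explicit structure constants in the last example can be verified numerically. The following brute-force check is ours, not part of the original text; the products the example leaves unspecified (e.g. $x\circ x$, $x\circ y$, $X\ast X$) are assumed to vanish. It confirms, for both cases, that $\circ$ is pre-Lie with commutator $[x,y]=x$, that $\ast$ is associative (hence pre-Lie for the abelian ${\mathfrak{m}}^*$), that ${\triangleright}$ is a Lie-algebra action, and that the compatibility (\ref{Xi-ast}) holds.

```python
# Brute-force consistency check (ours, not from the paper) of the structure
# constants in the 2-dimensional example.  Bases: m = span{x, y} (indices 0, 1)
# and m* = span{X, Y} (indices 0, 1).  Unspecified products are taken to be zero.
import numpy as np

def associator(p):
    # A[a,b,c,:] = (e_a p e_b) p e_c - e_a p (e_b p e_c) for structure constants p[a,b,:]
    return np.einsum('abk,kcd->abcd', p, p) - np.einsum('bck,akd->abcd', p, p)

def check(circ, star, act):
    br = circ - circ.transpose(1, 0, 2)             # [a,b] = a circ b - b circ a
    assert np.allclose(br[0, 1], [1.0, 0.0])        # recovers [x,y] = x
    A = associator(circ)                            # circ is pre-Lie: associator is
    assert np.allclose(A, A.transpose(1, 0, 2, 3))  # symmetric in first two slots
    assert np.allclose(associator(star), 0.0)       # * is associative on abelian m*
    # action axiom: [a,b] |> p  =  a |> (b |> p)  -  b |> (a |> p)
    lhs = np.einsum('abk,kpd->abpd', br, act)
    rhs = np.einsum('bpk,akd->abpd', act, act) - np.einsum('apk,bkd->abpd', act, act)
    assert np.allclose(lhs, rhs)
    # compatibility (Xi-ast): a |> (p*q) = (a |> p)*q + p*(a |> q)
    l2 = np.einsum('pqk,akd->apqd', star, act)
    r2 = np.einsum('apk,kqd->apqd', act, star) + np.einsum('aqk,pkd->apqd', act, star)
    assert np.allclose(l2, r2)
    return True

c1 = np.zeros((2, 2, 2)); c1[1, 0] = [-1, 0]; c1[1, 1] = [0, -0.5]  # y.x=-x, y.y=-y/2
s1 = np.zeros((2, 2, 2)); s1[1, 1] = [1, 0]                         # Y*Y = X
a1 = np.zeros((2, 2, 2)); a1[1, 0] = [1, 0]; a1[1, 1] = [0, 0.5]    # y|>X=X, y|>Y=Y/2

c2 = np.zeros((2, 2, 2)); c2[1, 0] = [-1, 0]                        # y.x = -x
s2 = np.zeros((2, 2, 2)); s2[0, 1] = [1, 0]; s2[1, 0] = [1, 0]; s2[1, 1] = [0, 1]
a2 = np.zeros((2, 2, 2)); a2[1, 0] = [1, 0]                         # y|>X = X

print(check(c1, s1, a1) and check(c2, s2, a2))  # True: both cases pass
```

The same check applied to the listed differential-calculus relations amounts to evaluating $(\phi,x)\tilde\circ(\psi,y)$ on basis elements, which reproduces the displayed commutators.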
\section{Introduction} While the experiments at RHIC \cite{Muller:2006ee,Adams:2005dq} advanced the empirical knowledge of the hot QCD matter dramatically, the understanding of the state of matter that has been formed is still lacking. For example, the STAR collaboration's assessment \cite{Adams:2005dq} of the evidence from RHIC experiments depicts a very intricate, difficult-to-understand picture of the hot QCD matter. Among the issues pointed out as important was the need to clarify the role of the continued existence of quark-antiquark ($q\bar q$) bound states above the critical temperature $T_c$, as well as the role of the chiral phase transition. Both of these issues are consistently treated within the Dyson-Schwinger (DS) approach to quark-hadron physics. Dynamical chiral symmetry breaking (DChSB) as the crucial low-energy QCD phenomenon is well-understood in the rainbow-ladder approximation (RLA), a symmetry-preserving truncation of the hierarchy of DS equations. Thanks to this, the behavior of the pion mass is in accord with the Goldstone theorem: the pion mass shows the correct behavior as the chiral limit is approached, as seen in Fig.~\ref{Mpi2}. This correct chiral behavior is a general feature of the DS approach in RLA, and not a consequence of our specific model choice. \begin{figure} \centerline{\includegraphics[width=100mm,angle=0]{Mpi2.eps}} \caption{Correct chiral behavior of the pion mass close to the chiral limit: $M_\pi^2 \propto {\widetilde m}_q $, where ${\widetilde m}_q $ is the bare quark mass parameter. It is akin to the notion of the current quark mass in QCD and gives the extent of the explicit chiral symmetry breaking as opposed to DChSB -- see Eq. (\ref{DSE}) below. } \label{Mpi2} \end{figure} For recent reviews of the DS approach, see, e.g., Refs. \cite{Roberts:2000aa,Alkofer:2000wg}, of which the first \cite{Roberts:2000aa} also reviews the studies of QCD DS equations at finite temperature, started in \cite{Bender:1996bm}.
Unfortunately, the extension of DS calculations to non-vanishing temperatures is technically quite difficult. The usage of separable model interactions greatly simplifies DS calculations at finite temperatures, while yielding equivalent results at a given level of truncation \cite{Burden:1996nh,Blaschke:2000gd}. A recent update of this covariant separable approach with application to the scalar $\sigma$ meson at finite temperature can be found in \cite{Kalinovsky:2005kx}. Here, we present results for the quark mass spectrum at zero and finite temperature, extending previous work by including the strange flavor. \section{The separable model at zero temperature} \label{Model} The dressed quark propagator $S_q(p)$ is the solution of its DS equation \cite{Roberts:2000aa,Alkofer:2000wg}, \begin{eqnarray}\label{sde} S_q(p)^{-1} = i \gamma\cdot{p} + \widetilde{m}_q + \frac{4}{3} \int \frac{d^4\ell}{(2\pi)^4} \, g^2 D_{\mu\nu}^{\mathrm{eff}} (p-\ell) \gamma_\mu S_q(\ell) \gamma_\nu \, , \label{DSE} \end{eqnarray} while the $q\bar q'$ meson Bethe-Salpeter (BS) bound-state vertex $\Gamma_{q\bar q'}(p,P)$ is the solution of the BS equation (BSE) \begin{eqnarray}\label{bse} -\lambda(P^2)\Gamma_{q\bar q'}(p,P) = \frac{4}{3} \int \frac{d^4\ell}{(2\pi)^4} g^2 D_{\mu\nu}^{\mathrm{eff}} (p-\ell) \gamma_\mu S_q(\ell_+) \Gamma_{q\bar q'}(\ell,P) S_q(\ell_-) \gamma_\nu, \, \label{BSE} \end{eqnarray} where $D_{\mu\nu}^{\mathrm{eff}}(p-\ell)$ is an effective gluon propagator modeling the nonperturbative QCD effects, $\widetilde{m}_q$ is the current quark mass, the index $q$ (or $q'$) stands for the quark flavor ($u, d$ or $s$), $P$ is the total momentum, and $\ell_{\pm}=\ell\pm P/2$. The chiral limit is obtained by setting $\widetilde{m}_q=0$. The meson mass is identified from $\lambda(P^2=-M^2)=1$. Equations (\ref{DSE}) and (\ref{BSE}) are written in the Euclidean space, and in the consistent rainbow-ladder truncation.
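Numerically, the mass-shell condition $\lambda(P^2=-M^2)=1$ is a one-dimensional root search in $M$. A minimal sketch follows; the eigenvalue profile $\lambda(P^2)=\lambda_0/(1+P^2/m^2)$ and its parameters below are purely illustrative assumptions of ours, standing in for the $\lambda(P^2)$ that the separable-model BSE would actually produce.

```python
# Locating a bound-state mass from lambda(P^2 = -M^2) = 1 by bisection.  The
# profile lam() is a purely illustrative toy (lam0 and m are made-up numbers);
# in the model, lambda(P^2) comes from the BSE kernel eigenvalue problem.
lam0, m = 0.64, 1.0                      # toy parameters, "GeV" units

def lam(P2):
    return lam0 / (1.0 + P2 / m**2)

def mass_shell(lo=0.0, hi=0.999 * m):
    f = lambda M: lam(-M * M) - 1.0      # root of f gives the bound-state mass
    assert f(lo) < 0.0 < f(hi)           # bracket before bisecting
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

M = mass_shell()
print(f"M = {M:.6f}")                    # closed form here: m*sqrt(1 - lam0) = 0.6
assert abs(M - m * (1.0 - lam0) ** 0.5) < 1e-9
```

In practice $\lambda(P^2)$ is only available pointwise from solving the BSE, but the same bracketing search applies.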
\begin{figure}[!hbt] \centerline{\includegraphics[width=120mm,angle=0]{figmats1.eps}} \caption{The sum in Eq.~(\ref{gapat}) as a function of the number of Matsubara modes included in the summation at temperature $T=1$ MeV. Eq.~(\ref{gapat}) is normalized to the value calculated with enough Matsubara modes ($n=1000$) to achieve the prescribed numerical precision.} \label{mats1} \end{figure} The simplest separable Ansatz which reproduces in RLA a nonperturbative solution of (\ref{DSE}) for any effective gluon propagator in a Feynman-like gauge $g^2 D_{\mu\nu}^{\mathrm{eff}} (p-\ell) \rightarrow \delta_{\mu\nu} D(p^2,\ell^2,p\cdot \ell)$ is \cite{Burden:1996nh,Blaschke:2000gd} \begin{eqnarray} D(p^2,\ell^2,p\cdot \ell)=D_0 {\cal F}_0(p^2) {\cal F}_0(\ell^2) + D_1 {\cal F}_1(p^2) (p\cdot \ell ) {\cal F}_1(\ell^2)~. \label{sepAnsatz} \end{eqnarray} This is a rank-2 separable interaction with two strength parameters $D_i$ and corresponding form factors ${\cal F}_i(p^2)$, $i=0,1$. The choice of these quantities is constrained by the solution of the DSE for the quark propagator (\ref{DSE}) \begin{eqnarray} S_q(p)^{-1} = i \gamma\cdot{p} A_q(p^2) + B_q(p^2) \equiv Z^{-1}_q(p^2) [ i \gamma\cdot{p} + m_q(p^2) ]~, \end{eqnarray} where $m_q(p^2)=B_q(p^2)/A_q(p^2)$ is the dynamical mass function and $Z_q(p^2)=A^{-1}_q(p^2)$ the wave function renormalization. Using the separable Ansatz (\ref{sepAnsatz}) in (\ref{sde}), the gap equations for the quark amplitudes $A_q(p^2)$ and $B_q(p^2)$ read \begin{eqnarray} B_q(p^2) - \widetilde{m}_q = \frac{16}{3} \int \frac{d^4\ell}{(2\pi)^4} D(p^2,\ell^2,p \cdot \ell) \frac{B_q(\ell^2)}{\ell^2 A_q^2(\ell^2)+ B_q^2(\ell^2)} = b_q {\cal F}_0(p^2)\, , \label{gap1}\\ \left[A_q(p^2)-1 \right] =\frac{8}{3p^2} \int \frac{d^4\ell}{(2\pi)^4} D(p^2,\ell^2,p \cdot \ell) \frac{(p\cdot \ell) A_q(\ell^2)}{\ell^2A_q^2(\ell^2)+B_q^2(\ell^2)} = a_q {\cal F}_1(p^2)\, .
\label{gap2} \end{eqnarray} Once the coefficients $a_q$ and $b_q$ are obtained by solving the gap equations (\ref{gap1}) and (\ref{gap2}), the only model parameters remaining are $\widetilde{m}_q$ and the parameters of the gluon propagator, to be fixed by meson phenomenology. \begin{figure}[!hbt] \centerline{\includegraphics[width=120mm,angle=0]{figmats10.eps}} \caption{The sum in Eq.~(\ref{gapat}) as a function of the number of Matsubara modes included in the summation at temperature $T=10$ MeV. Eq.~(\ref{gapat}) is normalized to the value calculated with enough Matsubara modes ($n=100$) to achieve the prescribed numerical precision.} \label{mats10} \end{figure} \begin{figure}[!hbt] \centerline{\includegraphics[width=120mm,angle=0]{figmats100.eps}} \caption{The sum in Eq.~(\ref{gapat}) as a function of the number of Matsubara modes included in the summation at temperature $T=100$ MeV. Eq.~(\ref{gapat}) is normalized to the value calculated with enough Matsubara modes ($n=10$) to achieve the prescribed numerical precision.} \label{mats100} \end{figure} \section{Extension to finite temperature} The extension of the separable model studies to the finite temperature case, $T\neq 0$, is systematically accomplished by a transcription of the Euclidean quark 4-momentum via {$p \rightarrow$} {$ p_n =$} {$(\omega_n, \vec{p})$}, where {$\omega_n=(2n+1)\pi T$} are the discrete Matsubara frequencies. In the Matsubara formalism, the number of coupled equations represented by (\ref{sde}) and (\ref{bse}) scales up with the number of fermion Matsubara modes included. For studies near and above the transition, \mbox{$T \ge 100 $}~MeV, using only 10 such modes appears adequate. Nevertheless, the appropriate number can be more than $10^3$ if the continuity with \mbox{$T=0$} results is to be verified. Convergence of the sum in Eq.~(\ref{gapat}) is shown in Figs.~\ref{mats1}--\ref{mats100}.
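The slow convergence at low $T$ can be illustrated on a fermionic Matsubara sum with a known closed form (a textbook identity, not taken from this paper): $T\sum_n [\omega_n^2+E^2]^{-1} = \tanh(E/2T)/(2E)$ with $\omega_n=(2n+1)\pi T$. The truncation error falls off only like $1/(2\pi^2 T N)$, so keeping a fixed accuracy as $T\to 0$ forces $N\propto 1/T$, in line with the mode counts quoted above.

```python
# Truncated fermionic Matsubara sum vs. its closed form -- illustrates why ~10
# modes suffice at T = 100 MeV while >10^3 are needed near T = 0 (E in GeV).
import numpy as np

def matsubara_sum(E, T, N):
    """T * sum_{n=-N}^{N-1} 1/(w_n^2 + E^2), w_n = (2n+1) pi T."""
    n = np.arange(-N, N)
    w = (2 * n + 1) * np.pi * T
    return T * np.sum(1.0 / (w**2 + E**2))

def exact(E, T):
    return np.tanh(E / (2.0 * T)) / (2.0 * E)   # closed-form result of the sum

E = 0.4   # GeV, a typical dynamical quark mass scale in this model
for T, N in [(0.1, 10), (0.1, 1000), (0.001, 1000)]:
    rel = abs(matsubara_sum(E, T, N) / exact(E, T) - 1.0)
    print(f"T = {T:5.3f} GeV, N = {N:4d}: relative error {rel:.1e}")
```

With these numbers, 10 modes at $T=100$ MeV and $10^3$ modes at $T=1$ MeV give a comparable (few-percent) truncation error, matching the statement in the text.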
The effective $\bar q q$ interaction, defined in the present paper by the Ansatz (\ref{sepAnsatz}) and the form factors (\ref{model_interaction}) below, will automatically decrease with increasing $T$ without the introduction of an explicit $T$-dependence, which would require new parameters. The solution of the DS equation for the dressed quark propagator now takes the form \begin{eqnarray} S_q^{-1}(p_n, T) = i\vec{\gamma} \cdot \vec{p}\; A_q(p_n^2,T) + i \gamma_4 \omega_n\; C_q(p_n^2,T)+ B_q(p_n^2,T),\; \label{invprop} \end{eqnarray} where {$p_n^2=\omega_n^2 + \vec{p}^{\,2}$} and the quark amplitudes {$B_q(p_n^2,T) = \widetilde{m}_q + b_q(T) {\cal F}_0(p_n^2)$}, \mbox{$A_q(p_n^2,T) = 1+ a_q(T) {\cal F}_1(p_n^2)$}, and {$C_q(p_n^2,T) = 1+ c_q(T) {\cal F}_1(p_n^2)$} are defined by the temperature-dependent coefficients $a_q(T)$,$ b_q(T)$, and $c_q(T)$ to be determined from the set of three coupled non-linear equations \begin{eqnarray} a_q(T) = \frac{8 D_1}{9}\, T \sum_n \int \frac{d^3p}{(2\pi)^3}\,{\cal F}_1(p_n^2)\, \vec{p}^{\,2}\, [1 + a_q(T) {\cal F}_1(p_n^2)]\; d_q^{-1}(p_n^2,T) \; , \label{gapat} \\ c_q(T) = \frac{8 D_1}{3}\, T \sum_n \int \frac{d^3p}{(2\pi)^3}\,{\cal F}_1(p_n^2)\, \omega_n^2\, [1 + c_q(T) {\cal F}_1(p_n^2)]\; d_q^{-1}(p_n^2,T) \; , \\ b_q(T) = \frac{16 D_0}{3}\, T \sum_n \int \frac{d^3p}{(2\pi)^3}\,{\cal F}_0(p_n^2)\, [\widetilde{m}_q + b_q(T) {\cal F}_0(p_n^2)]\; d_q^{-1}(p_n^2,T) \; . \end{eqnarray} The function $d_q(p_n^2,T)$ is the denominator of the quark propagator $S_q(p_n, T)$, and is given by \begin{eqnarray} d_q(p_n^2,T) = \vec{p}^{\,2}A_q^2(p_n^2,T) +\omega_n^2C_q^2(p_n^2,T) + B_q^2(p_n^2,T). \label{Sdenominator} \end{eqnarray} The procedure for solving the gap equations at a given temperature $T$ is the same as in the $T=0$ case, but one has to control the appropriate number of Matsubara modes, as mentioned above.
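As an illustration of this fixed-point procedure at $T=0$, the sketch below iterates a deliberately simplified version of the gap equation (\ref{gap1}): rank-1 only ($D_1=0$), chiral limit, $A_q\equiv 1$, with the Gaussian ${\cal F}_0$ and the parameter $D_0\Lambda_0^2=219$, $\Lambda_0=758$ MeV quoted in the Results section. The $O(4)$-angular integration reduces $\int d^4\ell/(2\pi)^4$ to $(1/16\pi^2)\int s\,ds$ with $s=\ell^2$. Because the rank-1 truncation drops the $A_q$ dressing, the resulting $b$ only illustrates the mechanism of DChSB and does not reproduce the full model's $b_{u,d}=660$ MeV.

```python
# Toy T=0 gap equation (our simplification: rank-1, chiral limit, A_q = 1),
# in units of Lambda_0 = 758 MeV.  With F0(s) = exp(-s), (gap1) becomes the
# fixed-point problem  b = rhs(b)  for the single strength parameter b.
import numpy as np

D0 = 219.0                                   # from D0 * Lambda_0^2 = 219
s = np.linspace(1e-8, 25.0, 20001)           # Euclidean s = l^2 grid (Lambda_0^2)
F0 = np.exp(-s)

def rhs(b):
    integrand = s * F0 * (b * F0) / (s + (b * F0) ** 2)
    integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(s))
    return (16.0 * D0 / 3.0) * integral / (16.0 * np.pi ** 2)

b = 1.0                                      # initial guess, units of Lambda_0
for _ in range(200):                         # plain fixed-point iteration
    b = rhs(b)

print(f"b = {b:.2f} Lambda_0  ({758.0 * b:.0f} MeV)")
assert b > 0.5 and abs(b - rhs(b)) < 1e-8    # converged to a nontrivial solution
```

The nontrivial fixed point exists because the slope of the right-hand side at $b=0$, $D_0\Lambda_0^2/6\pi^2\approx 3.7$, exceeds unity; at $T\neq 0$ the same iteration runs over the three coupled $a_q,b_q,c_q$ equations with the Matsubara sum replacing the $\omega$ integration.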
\begin{figure}[!hbt] \centerline{\includegraphics[width=120mm,angle=0]{figForm.eps}} \caption{The $p^2$ dependence of the form factors ${\cal F}_0(p^2)$ and ${\cal F}_1(p^2)$.} \label{figformf} \end{figure} \section{Confinement and Dynamical Chiral Symmetry Breaking} If there are no poles in the quark propagator $S_q(p)$ for real timelike $p^2$, then there is no physical quark mass shell. This entails that quarks cannot propagate freely, and the description of hadronic processes will not be hindered by unphysical quark production thresholds. This sufficient condition is a viable possibility for realizing quark confinement~\cite{Blaschke:2000gd}. A nontrivial solution for $B_q(p^2)$ in the chiral limit (${\widetilde m}_0=0$) signals DChSB. There is a connection between quark confinement realized as the lack of a quark mass shell and the existence of a strong quark mass function in the infrared through DChSB. The propagator is confining if \mbox{$m_q^2(p^2) \neq -p^2$} for real $p^2$, where the quark mass function is \mbox{$m_q(p^2)=B_q(p^2)/A_q(p^2)$}. In the present separable model, the strength \mbox{$b_q=B_q(0)$}, which is generated by solving Eqs. (\ref{gap1}) and (\ref{gap2}), controls both confinement and DChSB. At finite temperature, the strength $b_q(T)$ for the quark mass function will decrease with $T$, until the denominator (\ref{Sdenominator}) of the quark propagator can vanish for some timelike momentum, and the quark can come onto the free mass shell. The connection between deconfinement and the disappearance of DChSB is thus clear in the DS approach. The present model is therefore also expected to have a deconfinement transition at or a little before the chiral restoration transition associated with \mbox{$b_0(T) \to 0$}.
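The no-mass-shell criterion can be checked directly for the separable propagator. In the sketch below (a simplification of ours, not the paper's full computation) we take $B(s)=b\,e^{-s/\Lambda_0^2}$ with $b=0.66$ GeV (the $b_{u,d}$ quoted in the Results) and neglect the $A_q$ dressing; the propagator denominator $s\,A^2+B^2$ then stays strictly positive for all real timelike $s=p^2<0$, so no quark mass shell exists.

```python
# Does the propagator denominator  s*A^2 + B^2  vanish at any real timelike
# s = p^2 < 0?  Simplified check with A_q = 1 and the Gaussian form factor F0:
# B(s) = b*exp(-s/Lambda0^2), b = 0.66 GeV, Lambda0 = 0.758 GeV.
import numpy as np

b, L0 = 0.66, 0.758
s = np.linspace(-5.0, 0.0, 200001)           # timelike momenta, GeV^2
d = s + (b * np.exp(-s / L0**2)) ** 2        # denominator with A_q = 1
print(f"min of s + B(s)^2 on [-5,0] GeV^2: {d.min():.3f} GeV^2")
assert d.min() > 0.0   # no real zero -> no mass shell -> confined quark
```

The Gaussian growth of $B^2(s)$ for $s<0$ outruns the linear term, so the minimum sits at $s=0$ and equals $b^2$; as $b_q(T)$ drops with temperature, this margin shrinks, which is exactly the deconfinement mechanism described above.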
The following simple choice of the separable interaction form factors (graphically represented in Fig.~\ref{figformf}), \begin{equation} {\cal F}_0(p^2)=\exp(-p^2/\Lambda_0^2)~,~~ {\cal F}_1(p^2)=\frac{1+\exp(-p_0^2/\Lambda_1^2)} {1+\exp((p^2-p_0^2)/\Lambda_1^2)}~, \label{model_interaction} \end{equation} is used to obtain numerical solutions which reproduce very well the phenomenology of the light pseudoscalar mesons and generate an acceptable value for the chiral condensate. The resulting quark propagator is found to be confining, and the infrared strength and shape of the quark amplitudes $A_q(p^2)$ and $B_q(p^2)$ are in quantitative agreement with the typical DS studies. Gaussian-type form factors are used as a minimal way to preserve these properties, while recognizing that the ultraviolet suppression is much stronger than the asymptotic fall-off (with logarithmic corrections) known from perturbative QCD and numerical studies on the lattice \cite{Parappilly:2005ei}. \begin{figure}[!hbt] \centerline{\includegraphics[width=120mm,angle=0]{fig3b.eps}} \caption{The $p^2$ dependence (at $T=0$) of the dynamically generated quark masses $m_s(p^2), m_u(p^2)$ for, respectively, the strange and the (isosymmetric) nonstrange case.} \label{figMp2} \end{figure} \section{Bound-state amplitudes} With the separable interaction, the allowed form of the solution of Eq. (\ref{bse}) for the pseudoscalar BS amplitude is \cite{Burden:1996nh} \begin{eqnarray} \Gamma_{PS}(\ell;P) =\gamma_5 \left(i E_{PS} (P^2) + \slashchar{P} F_{PS}(P^2)\right) \; {\cal F}_0(\ell^2). \label{psbsa} \end{eqnarray} The dependence on the relative momentum $\ell$ is described only by the first form factor ${\cal F}_0(\ell^2)$. The second term ${\cal F}_1$ of the interaction can contribute to the BS amplitude only indirectly, via the quark propagators.
The pseudoscalar BSE (\ref{bse}) becomes a $2\times 2$ matrix eigenvalue problem \mbox{${\cal K}(P^2) {\cal V} = \lambda(P^2) {\cal V}$} where the eigenvector is \mbox{${\cal V} = (E_{PS}, F_{PS})$}. The kernel is \begin{eqnarray} {\cal K}_{ij}(P^2) = - \frac{4 D_0}{3}\, {\rm tr_s}\, \int\, \frac{d^4\ell}{(2\pi)^4}{\cal F}_0^2(\ell^2) \left[ \hat{t}_i\, S_q(\ell_+)\, t_j\, S_{\bar{q}}(\ell_-)\, \right]~, \label{pskernel} \end{eqnarray} where $i,j=1,2$ denote the components of \mbox{$t=(i\gamma_5, \gamma_5\, \slashchar{P})$} and \mbox{$\hat{t}=(i\gamma_5,$} \mbox{$-\gamma_5\, \slashchar{P}/2P^2)$}. The separable model produces the same momentum dependence for both amplitudes (containing $F_{PS}$ and $E_{PS}$) in the BS amplitude (\ref{psbsa}): the dependence of the quark amplitude $B_q(\ell^2)$. Goldstone's theorem is preserved by the present separable model; in the chiral limit, whenever a nontrivial gap-equation solution for $B_q(p^2)$ exists, there will be a massless pion solution to (\ref{pskernel}). The normalization condition for the pseudoscalar BS amplitude can be expressed as \begin{eqnarray} \label{pinorm} 2 P_\mu &=& N_f N_c \, \frac{\partial}{\partial P_\mu} \, \, \int \frac{d^4\ell}{(2\pi)^4}\, {\rm tr}_s \left[ \bar\Gamma_{PS}(\ell;-K)\, \right.\times \nonumber \\ &\times&\left.\left.S_q(\ell_+)\, \Gamma_{PS}(\ell;K)\,S_{\bar{q}}(\ell_-) \right] \right|_{P^2=K^2=-M_{PS}^2}. \end{eqnarray} Here $\bar\Gamma(q;K)$ is the charge-conjugate amplitude $[{\cal C}^{-1} \Gamma(-q,K) {\cal C}]^{\rm t}$, where ${\cal C}=\gamma_2 \gamma_4$ and the index t denotes a matrix transpose. Both the number of colors $N_c$ and the number of light flavors $N_f$ are 3. At \mbox{$T=0$} the mass-shell condition for a meson as a $\bar q q$ bound state of the BSE is equivalent to the appearance of a pole in the $\bar q q$ scattering amplitude as a function of $P^2$.
At $T\neq0$ in the Matsubara formalism, the $O(4)$ symmetry is broken by the heat bath and we have \mbox{$P \to (\Omega_m,\vec{P})$} where \mbox{$\Omega_m = 2m \pi T$}. Bound states and the poles they generate in propagators may be investigated through polarization tensors, correlators or Bethe-Salpeter eigenvalues. This pole structure is characterized by information at discrete points $\Omega_m$ on the imaginary energy axis and at a continuum of 3-momenta. One may search for poles as a function of $\vec{P}^2$, thus identifying the so-called spatial or screening masses for each Matsubara mode. These serve as one particular characterization of the propagator and the \mbox{$T > 0$} bound states. In the present context, the eigenvalues of the BSE become $\lambda(P^2) \to \tilde{\lambda}(\Omega_m^2,\vec{P}^2;T)$. The temporal meson masses identified by zeros of $1-\tilde{\lambda}(\Omega^2,0;T)$ will differ in general from the spatial masses identified by zeros of $1-\tilde{\lambda}(0,\vec{P}^2;T)$. They are, however, identical at \mbox{$T = 0$}, and an approximate degeneracy can be expected to extend over the finite-$T$ domain, where the $O(4)$ symmetry is not strongly broken. The general form of the finite-$T$ pseudoscalar BS amplitude allowed by the separable model is \begin{eqnarray} \Gamma_{PS}(q_n;P_m) &=&\gamma_5 \left(i E_{PS} (P_m^2) + \gamma_4 \, \Omega_m \tilde{F}_{PS}(P_m^2)\right. \nonumber \\ &+& \left. \vec{\gamma} \cdot \vec{P} F_{PS}(P_m^2)\right) \; {\cal F}_0(q_n^2) \; . \label{pibsaT} \end{eqnarray} The separable BSE becomes a $3\times 3$ matrix eigenvalue problem with a kernel that is a generalization of Eq.~(\ref{pskernel}). In the limit \mbox{$\Omega_m \to 0$}, as is required for the spatial mode of interest here, the amplitude \mbox{$\hat{F}_{PS} = \Omega_m \tilde{F}_{PS}$} is trivially zero.
\section{Results} Parameters of the model are completely fixed by meson phenomenology calculated from the model as discussed in \cite{Blaschke:2000gd,Kalinovsky:2005kx}. In the nonstrange sector, we work in the isosymmetric limit and adopt bare quark masses ${\widetilde m}_u = {\widetilde m}_d = 5.5$ MeV and in strange sector we adopt ${\widetilde m}_s = 115$ MeV. Then the parameter values \begin{equation} \Lambda_0=758 \, {\rm MeV}, \quad \Lambda_1=961 \, {\rm MeV}, \quad p_0=600 \, {\rm MeV}, \label{Lambda12p0} \end{equation} \begin{equation} D_0\Lambda_0^2=219 \, , \qquad D_1\Lambda_1^4=40 \, , \label{D0D1} \end{equation} lead, through the gap equation, to $a_{u,d}=0.672$, $b_{u,d}=660$ MeV, $a_{s}=0.657$ and $b_{s}=998$ MeV i.e., to the dynamically generated momentum-dependent mass functions $m_q(p^2)$ shown in Fig. \ref{figMp2}. In the limit of high $p^2$, they converge to ${\widetilde m}_u$ and ${\widetilde m}_s$. However, at low $p^2$, the values of $m_u(p^2)$ are close to the typical constituent quark mass scale $\sim m_\rho/2 \sim m_N/3$ with the maximum value at $p^2=0$, $m_u(0)=398$ MeV. The corresponding value for the strange quark is $m_s(0)=672$ MeV. Fig. \ref{figMp2} hence shows that in the domain of low and intermediate $p^2 \lsim 1$ GeV$^2$, the dynamically generated quark masses are of the order of typical constituent quark mass values. Thus, the DS approach provides a derivation of the constituent quark model \cite{Kekez:1998xr} from a more fundamental level, with the correct chiral behavior of QCD. \begin{figure}[!hbt] \centerline{\includegraphics[width=120mm,angle=0]{figBT.eps}} \caption{The temperature dependence of $B_s(0), B_u(0)$ and $B_0(0)$, the scalar propagator functions at $p^2=0$, for the strange, the nonstrange and the chiral-limit cases, respectively. The temperature dependence of the chiral quark-antiquark condensate, $-\langle q\bar{q}\rangle^{1/3}_0$, is also shown (by the lowest curve). 
Both chiral-limit quantities, $B_0(0)$ and $-\langle q\bar{q}\rangle^{1/3}_0$, vanish at the chiral-symmetry restoration temperature $T_\mathrm{Ch}=128$ MeV. } \label{figBT} \end{figure} Obtaining such dynamically generated constituent quark masses, as previous experience with the DS approach shows (see, e.g., Refs. \cite{Roberts:2000aa,Alkofer:2000wg,Kekez:1998xr}), is essential for reproducing the static and other low-energy properties of hadrons, including decays. (We would have to turn to less simplified DS models for incorporating the correct perturbative behaviors, including that of the quark masses. Such models are amply reviewed or used in, e.g., Refs. \cite{Roberts:2000aa,Alkofer:2000wg,Kekez:1998xr,Klabucar:1997zi}, but addressing them is beyond the present scope, where the perturbative regime is not important.) Another important result related to the dynamically generated, dressed quark propagator is the chiral quark-antiquark condensate $\langle q\bar{q}\rangle_0$. For the parameter values quoted above, we obtain the zero-temperature value $\langle q\bar{q}\rangle_0 = (-217 \, {\rm MeV})^3$, which practically coincides with the standard QCD value. The extension of these results to finite temperatures is given in Figs. \ref{figBT}, \ref{figMT}. Very important is the temperature dependence of the chiral-limit quantities $B_0(0)_T$ and $\langle q \bar q \rangle_{0}(T)$, whose vanishing with $T$ determines the chiral restoration temperature $T_\mathrm{Ch}$. We find $T_\mathrm{Ch} = 128$ MeV in the present model.
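The quoted infrared values $m_u(0)=398$ MeV and $m_s(0)=672$ MeV can be checked directly from the fitted constants, assuming the standard separable-model forms $A_q(p^2)=1+a_q\,{\cal F}_1(p^2)$ and $B_q(p^2)={\widetilde m}_q+b_q\,{\cal F}_0(p^2)$ with $m_q(p^2)=B_q(p^2)/A_q(p^2)$ (a consistency-check sketch under these assumed forms, not code from the paper):

```python
import math

# Fitted constants quoted in the text (all in MeV)
L0, L1, p0 = 758.0, 961.0, 600.0
a_u, b_u, mt_u = 0.672, 660.0, 5.5     # nonstrange (isosymmetric)
a_s, b_s, mt_s = 0.657, 998.0, 115.0   # strange

def F0(p2):
    return math.exp(-p2 / L0 ** 2)

def F1(p2):
    return (1.0 + math.exp(-p0 ** 2 / L1 ** 2)) / \
           (1.0 + math.exp((p2 - p0 ** 2) / L1 ** 2))

def m_q(p2, a, b, mt):
    # Assumed forms: A = 1 + a*F1, B = m~ + b*F0, m = B/A
    return (mt + b * F0(p2)) / (1.0 + a * F1(p2))

print(round(m_q(0.0, a_u, b_u, mt_u)))     # 398 MeV, as quoted
print(round(m_q(0.0, a_s, b_s, mt_s)))     # 672 MeV, as quoted
print(round(m_q(1e7, a_u, b_u, mt_u), 2))  # ~5.5 MeV: bare mass at high p^2
```

Since ${\cal F}_0(0)={\cal F}_1(0)=1$, the infrared values reduce to $({\widetilde m}_q+b_q)/(1+a_q)$, while at high $p^2$ both form factors vanish and the mass function relaxes to the bare mass, as stated in the text.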
\begin{figure}[!hbt] \centerline{\includegraphics[width=120mm,angle=0]{figMT.eps}} \caption{The temperature dependence of $m_s(0), m_u(0)$ and $m_0(0)$, the dynamically generated quark masses at $p^2=0$ for the strange, the nonstrange and the chiral-limit cases, respectively.} \label{figMT} \end{figure} The temperature dependences of the functions giving the vector part of the quark propagator, $A_{u,s}(0)_T$ and $C_{u,s}(0)_T$, are depicted in Fig. \ref{figACT}. Their difference is a measure of the $O(4)$ symmetry breaking with the temperature $T$. \begin{figure}[!hbt] \centerline{\includegraphics[width=120mm,angle=0]{figACT.eps}} \caption{The violation of $O(4)$ symmetry with $T$ is illustrated by the example of $A_{u,s}(0,T)$ and $C_{u,s}(0,T)$.} \label{figACT} \end{figure} The results for the pseudoscalar $E_{PS}$ and pseudovector $F_{PS}$ amplitudes are shown in Fig.~\ref{figvert}. The pseudovector amplitude for the pion, $F_{\pi}$, is significantly different from zero but decreases rapidly above the transition. \begin{figure}[!hbt] \centerline{\includegraphics[width=120mm,angle=0]{figvert.eps}} \caption{The temperature dependence of the pseudoscalar covariants in the BS amplitude.} \label{figvert} \end{figure} The presented model, when applied in the framework of the Bethe-Salpeter approach to mesons as quark-antiquark bound states, produces a very satisfactory description of the whole light pseudoscalar nonet, both at zero and finite temperatures \cite{Blaschke+alZTF}. The masses and decay constants of the pseudoscalar nonet mesons at $T=0$ are summarized and compared with experiment in Table \ref{piKssbarTable}. The first three rows in Table \ref{piKssbarTable} give the masses $M_{PS}$ and decay constants $f_{PS}$ of the pseudoscalar $q{\bar q}'$ bound states ${PS} = \pi^+, K^+$, and $s\bar s$ resulting, through the BS equation (\ref{BSE}), from the separable interaction (\ref{sepAnsatz}).
The $s\bar s$ pseudoscalar meson is a useful theoretical construct, but is not realized physically, at least not at $T=0$. (It is therefore not associated with any experimental value in this table. Also note that the unphysical mass $M_{s\bar s}$ given in this table does not include the contribution from the gluon anomaly.) The parameter values [(\ref{Lambda12p0}) and (\ref{D0D1}) in the effective interaction $D_{\mu\nu}^{\rm eff}$, and the bare quark masses ${\widetilde m}_{u,d} = 5.5$ MeV and ${\widetilde m}_s = 115$ MeV] are fixed by fitting the pion and kaon masses and decay constants. These masses and decay constants are the input for the description of the $\eta$--$\eta^\prime$ complex \cite{Klabucar:1997zi}. More precisely, $\eta$ and $\eta'$ masses are obtained by combining the contributions from the non-Abelian (gluon) axial anomaly with the non-anomalous contributions obtained from the results on the masses of $\pi, K$, and the unphysical $s\bar s$ pseudoscalar \cite{Klabucar:1997zi}. For this procedure, it is essential that we have the good chiral behavior of our $q{\bar q}'$ bound states, which are simultaneously also the (almost-)Goldstone bosons, so that \begin{equation} M_{q{\bar q}'}^2 = {\rm Const} \, ({\widetilde m}_q + {\widetilde m}_{q'}) \, , \label{goodChiral} \end{equation} as seen in Fig. \ref{Mpi2}. For example, Eq. (\ref{goodChiral}) guarantees the relation $M_{\pi}^2 + M_{s\bar s}^2 = 2 M_K^2$ which is utilized in Refs. \cite{Klabucar:1997zi,Blaschke+alZTF} in the treatment of the $\eta$-$\eta'$ complex. Indeed, the concrete model results for $M_{\pi}, M_K$ and $M_{s\bar s}$ in Table \ref{piKssbarTable} obey this relation up to $\frac{1}{4}$\%. 
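With the mass values of Table \ref{piKssbarTable}, the quoted accuracy of this relation is a matter of plain arithmetic (values in GeV, taken from the table):

```python
# Check M_pi^2 + M_ssbar^2 = 2 M_K^2 with the model masses (GeV)
M_pi, M_K, M_ss = 0.140, 0.495, 0.685
lhs = M_pi ** 2 + M_ss ** 2
rhs = 2.0 * M_K ** 2
deviation = abs(lhs - rhs) / rhs
print(f"{100 * deviation:.2f} %")   # -> 0.25 %, i.e. the quoted 1/4 %
```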
\begin{table}[!ht] \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline $ {PS} $ & $M_{PS}$ & $M_{PS}^{\rm exp}$ & $f_{PS}$ & $f_{PS}^{\rm exp}$ \\ \hline $\pi^+$ & {0.140} & {0.1396} & {0.092} & ${0.0924\pm 0.0003}$ \\ \hline $K^+$ & {0.495} & {0.4937} & {0.110} & ${0.1130\pm 0.0010}$ \\ \hline $s\bar s$ & {0.685} & & {0.119} & \\ \hline $\eta$ & {0.543} & 0.5473 & & \\ \hline $\eta'$ & {0.933} & 0.9578 & & \\ \hline \end{tabular} \end{center} \caption{ Results on the pseudoscalar mesons at zero temperature, $T=0$, and comparison with experiment (where appropriate). All results are in GeV. } \label{piKssbarTable} \end{table} Especially interesting is the temperature behavior of the $\eta$-$\eta'$ complex, where the results for the meson masses differ very much for various possible relationships between the chiral restoration temperature $T_\mathrm{Ch}$ and the temperature of melting of the topological susceptibility, denoted by $T_\chi$. The once favored scenario of Pisarski and Wilczek \cite{Pisarski:ms}, where $\eta$ would smoothly evolve with $T$ to purely non-strange $\eta_{\mbox{\scriptsize\it NS}}$, \begin{equation} |\eta_{\mbox{\scriptsize\it NS}}\rangle = \frac{1}{\sqrt{2}} (|u\bar{u}\rangle + |d\bar{d}\rangle) = \frac{1}{\sqrt{3}} |\eta_8\rangle + \sqrt{\frac{2}{3}} |\eta_0\rangle~, \label{etaNSdef} \end{equation} and $\eta'$ to purely strange $\eta_{\mbox{\scriptsize\it S}}$, \begin{equation} |\eta_{\mbox{\scriptsize\it S}}\rangle = |s\bar{s}\rangle = - \sqrt{\frac{2}{3}} |\eta_8\rangle + \frac{1}{\sqrt{3}} |\eta_0\rangle~, \label{etaSdef} \end{equation} would occur only when $T_\chi$ is significantly below $T_\mathrm{Ch}$. Such a case is depicted in Fig. \ref{figPseudoTc85}, where $T_\chi = 2/3 \, T_\mathrm{Ch}$. In this case, around the chiral restoration temperature $\eta$ becomes quite light, and one would expect an increase of the relative multiplicity of $\eta$ mesons around $T_\mathrm{Ch}$.
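The flavor content implied by Eqs. (\ref{etaNSdef}) and (\ref{etaSdef}) is easy to verify numerically: writing the octet and singlet states in the $(u\bar u, d\bar d, s\bar s)$ basis as $|\eta_8\rangle=(u\bar u+d\bar d-2s\bar s)/\sqrt{6}$ and $|\eta_0\rangle=(u\bar u+d\bar d+s\bar s)/\sqrt{3}$, the two combinations above indeed reduce to a purely nonstrange and a purely strange state (a small check sketch):

```python
import math

s3, s6 = math.sqrt(3.0), math.sqrt(6.0)
eta8 = (1 / s6, 1 / s6, -2 / s6)   # (u ubar, d dbar, s sbar) content
eta0 = (1 / s3, 1 / s3, 1 / s3)

def mix(c8, c0):
    # linear combination c8*|eta8> + c0*|eta0> in the flavor basis
    return [c8 * a + c0 * b for a, b in zip(eta8, eta0)]

eta_NS = mix(1 / s3, math.sqrt(2.0 / 3.0))   # Eq. (etaNSdef)
eta_S = mix(-math.sqrt(2.0 / 3.0), 1 / s3)   # Eq. (etaSdef)
# eta_NS -> (1/sqrt2, 1/sqrt2, 0); eta_S -> (0, 0, 1)
```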
Nevertheless, the possibility that $T_\chi < T_\mathrm{Ch}$ is nowadays disfavored by the lattice results on the temperature dependence of the topological susceptibility \cite{Alles:1996nm}. On the other hand, for $T_\chi \sim T_\mathrm{Ch}$ or $T_\chi > T_\mathrm{Ch}$ we find that $\eta$ never becomes light, while $\eta'$ even becomes very heavy \cite{Blaschke+alZTF}. Thus, for the relationships favored by the lattice, our results \cite{Blaschke+alZTF} indicate such a strong suppression of $\eta'$ around the chiral restoration temperature that it may constitute a useful signal from the hot QCD matter. \begin{figure}[!hbt] \centerline{\includegraphics[width=120mm,angle=0]{figPseudoTc85.eps}} \caption{The temperature dependence, as a function of $T/T_\chi$, of the pseudoscalar meson masses for $T_\chi = 2/3 \, T_{\rm Ch}$. The three variously dashed curves represent the pseudoscalar meson masses which do not receive contributions from the gluon anomaly: $M_{\pi}(T)$, $M_{K}(T)$ and $M_{s\bar s}(T)$, depicted by the long-dashed, short-dashed and dash-dotted curves, respectively. The lower solid curve is $M_{\eta}(T)$, and the upper solid curve is $M_{\eta'}(T)$. The lower and upper dotted curves are the masses of $\eta_{\mbox{\scriptsize\it NS}}$ and $\eta_{\mbox{\scriptsize\it S}}$. The thin diagonal line is twice the zeroth Matsubara frequency, $2\pi T$. This is the limit that meson masses should ultimately approach from below at still higher temperatures, where $q\bar q$ states should totally dissolve into a gas of weakly interacting quarks and antiquarks. } \label{figPseudoTc85} \end{figure} \subsubsection*{Acknowledgments} We thank M. Bhagwat, Yu.L. Kalinovsky and P.C. Tandy for discussions. A.E.R.~acknowledges support by RFBR grant No. 05-02-16699, the Heisenberg-Landau program and the HISS Dubna program of the Helmholtz Association. D.H.~and~D.K. were supported by MZT project No.~0119261. D.B.
is grateful for support by the Croatian Ministry of Science for a series of guest lectures held in the Physics Department at University of Zagreb, where the present work has been completed. D.K. acknowledges the partial support of Abdus Salam ICTP at Trieste, where a part of this paper was written.
\section{Introduction} It is an inherent feature of natural languages that their expressions have the potential to convey a range of semantic values which their usage in context will constrain in such a way that some value is eventually circumscribed and expressed. This is exemplified in the minimal contrast below, where, out of its possible semantic values, the expression \textit{flying planes} expresses one such value in each context. \is{flying planes} \begin{exe} \ex \begin{xlist} \ex{Flying planes are complex machines.} \ex{Flying planes is a difficult task.} \end{xlist} \end{exe} Like other natural language expressions, anaphors are also semantically polyvalent. They form, however, a class of expressions whose context sensitivity is rather peculiar inasmuch as, for them to eventually express a semantic value, more than having that value circumscribed from an intrinsic repertoire of potential values, that value is co-specified by the semantic value of other expressions, which are for this reason termed their antecedents. This is exemplified with the anaphor \textit{it} in the contrasts below, which could be continued at will. This anaphor inherently contributes the information that its denotation is singular and non-human. Yet in each different context, it eventually conveys a different semantic value as a result of that value being co-specified by the semantic value of a different antecedent (in italics): \is{it examples} \begin{exe} \ex \begin{xlist} \ex{John pulled off \textit{the wheel}. It was heavy.} \ex{Paul bought \textit{a computer}. It has a touch screen.} \ex{Peter got \textit{a ticket that Paul wanted to buy}. It is for Saturday night.} \ex{...} \end{xlist} \end{exe} Adding to this semantic peculiarity, the dependency of anaphors with respect to their antecedents may exhibit the syntactic peculiarity of being a long-distance relation.
This is illustrated with the set of contrasts below, which could be continued at will, where the anaphor \textit{her} and its antecedent \textit{Mary} can be separated by a string of words of arbitrary length. \pagebreak \is{it examples} \begin{exe} \ex \begin{xlist} \ex{Mary thinks that Peter saw her.} \ex{Mary thinks that John knows that Peter saw her.} \ex{Mary thinks that Paul believes that John thinks that Peter saw her.} \ex{...} \end{xlist} \end{exe} In large enough contexts, an anaphoric expression may have more admissible antecedents than the one that happens to eventually co-specify its interpretation. This receives a minimal example in the sentence below, where the anaphor \textit{herself} has two admissible antecedents, out of which one will eventually end up being the selected antecedent given the respective utterance context (not represented below). \begin{exe} \ex \textit{Claire} described \textit{Joan} to herself. \end{exe} Interestingly, when occurring in a given syntactic position, different anaphoric expressions may have different sets of admissible antecedents. This is illustrated in the emblematic examples below, with two anaphoric expressions from \ili{English} --- \textit{herself} and \textit{her} --- occurring in the same position, each with a different set of admissible antecedents. \begin{exe} \ex Mary's friend knows that Paula's sister described Joan to herself / her. \end{exe} For the expression \textit{herself}, either \textit{Joan} or \textit{Paula's sister} is an admissible antecedent. For \textit{her}, its set of admissible antecedents includes instead \textit{Paula}, \textit{Mary's friend} and \textit{Mary}. Further examples with two anaphoric expressions occurring in the same position and each with a different set of admissible antecedents are the following: \begin{exe} \ex Mary's friend knows that Paula's sister saw her / the little girl.
\end{exe} For the expression \textit{the little girl}, either \textit{Paula} or \textit{Mary} is an admissible antecedent. For \textit{her}, its set of admissible antecedents additionally includes \textit{Mary's friend}. Such differences in terms of sets of admissible antecedents are the basis for the partition of nominal anaphoric expressions into different groups according to their anaphoric capacity. It has been an important topic of research to determine how many such groups or types of anaphoric expressions there are, what the sets of admissible antecedents are for each type, what expressions belong to which type in each language, and how to represent and process this anaphoric capacity. The regularities emerging with this inquiry have been condensed in a handful of anaphoric binding constraints, or principles, which seek to capture the relative positioning of anaphors and their admissible antecedents in grammatical representations. From an empirical perspective, these constraints stem from what appear to be quite cogent generalisations and exhibit a universal character, given their cross-linguistic validity.% \footnote{ \citep{branco:livro00}, {\em i.a.} } From a conceptual point of view, in turn, the relations among binding constraints involve non-trivial cross symmetry, which lends them a modular nature and provides further strength to the plausibility of their universal character.% \footnote{ \citep{branco:2005}. } Accordingly, anaphoric binding principles appear as one of the most significant subsets of grammatical knowledge, usually termed Binding Theory. \centerline{$\star$} This paper provides a condensed yet systematic and integrated overview of these grammatical constraints on anaphoric binding, that is, of the grammatical constraints holding on the pairing of nominal anaphors with their admissible antecedents.
The integration into grammar of these anaphoric binding constraints in a formally sound and computationally tractable way, as well as the appropriate semantic representation of anaphors, are also covered in the present summary. On the basis of such an overview, the ultimate goal of this paper is to provide an outlook into promising avenues for future research on anaphoric binding and its modelling --- both from symbolic and neural perspectives. \centerline{$\star$} In the next section, Section~\ref{spec}, the empirical generalisations captured in the binding constraints are introduced, together with the relevant auxiliary notions and parameterisation options.% \footnote{ To support this presentation, the frameworks adopted are Head-Driven Phrase Structure Grammar \citep{polsag:hpsg94}, for syntax, and Minimal Recursion Semantics \citep{copesatke:mrs2005} and Underspecified Discourse Representation Theory \citep{frank:sem95}, for semantics. The adoption of or transposition to other sufficiently expressive and well-defined frameworks will be quite straightforward.} The key ingredients for the integration of binding constraints into grammar are discussed in Section \ref{sem}, and a detailed account of this integration is provided in the following Section \ref{spec1} --- which is further illustrated with the support of the working example in the Appendix. Section \ref{discuss} is devoted to discussing how the account of anaphoric binding presented in the previous sections ensures a neat interface of grammar with reference processing systems, and thus supports a seamless articulation of binding constraints with anaphora resolution. In the penultimate section, Section \ref{reverse}, additional binding constraints are introduced, that hold from the perspective of the antecedents, rather than from the perspective of the anaphors, together with the respective supporting empirical evidence.
The final Section \ref{outlook} is devoted to providing an outlook into promising avenues for future research that may further enhance our understanding of and our coping with anaphoric binding and its modelling --- both symbolic and neural. \section{Empirical generalisations \label{spec}} \is{binding} \is{anaphoric binding} \is{binding principles} \is{Principle A} \is{Principle Z} \is{Principle B} \is{Principle C} \is{reflexives} \is{short-distance reflexives} \is{long-distance reflexives} \is{pronouns} \is{non-pronouns} Since the so-called integrative approach to anaphora resolution was set up,% \footnote{The integrative approach to anaphora resolution was set up in \citep{carb:resol88, richluper:resol88, asher:resol89}, and its practical viability was extensively checked out in \citep{lappin:pron94, mitkov:resol97}. } it is common wisdom that factors determining the antecedents of anaphors divide into filters, or hard constraints, and preferences, or soft constraints. The former exclude impossible antecedents and help to circumscribe the set of admissible antecedents; the latter interact to converge on the eventual antecedent among the admissible antecedents. So-called binding principles are a noteworthy subset of hard constraints on anaphora resolution: they capture generalisations concerning the constraints on the relative positioning of anaphors with respect to their admissible antecedents in the grammatical geometry of sentences.
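The division of labour between filters and preferences lends itself to a two-stage procedure: hard constraints prune the candidate set down to the admissible antecedents, and soft constraints rank the survivors. The candidate features and the scoring used below are purely illustrative assumptions, not a specific system from the literature:

```python
# Sketch of the integrative two-stage architecture: hard constraints
# (filters) restrict the candidate set, soft constraints (preferences)
# rank what survives. All features below are hypothetical toy data.
def resolve(anaphor, candidates, filters, preferences):
    admissible = [c for c in candidates
                  if all(f(anaphor, c) for f in filters)]
    return max(admissible,
               key=lambda c: sum(p(anaphor, c) for p in preferences),
               default=None)

# Toy usage: one filter (number agreement), one preference (recency)
cands = [{"form": "the wheels", "num": "pl", "pos": 1},
         {"form": "the wheel", "num": "sg", "pos": 2}]
ana = {"form": "it", "num": "sg"}
agree = lambda a, c: a["num"] == c["num"]
recency = lambda a, c: c["pos"]
print(resolve(ana, cands, [agree], [recency])["form"])  # -> "the wheel"
```

Binding principles would enter such a pipeline as additional filters alongside agreement, which is what makes their exact formulation, discussed next, consequential for anaphora resolution.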
We present below the definition of binding constraints,% \footnote{ This is the approach to Binding Theory proposed in \citep{polsag:binding92} and \citep[Chap.6]{polsag:hpsg94}, and subsequent developments in \citep{xue:ziji94,branco:branch96,brancoMarrafa:subject97, manningSag:1999, wechsler:1999, koenig:equa99, branco:ldrefl99, richter:quant99, golde:diss99, branco:livro00, kiss:2001, branco:2002a, branco:2002b, branco:2002c} {\em i.a.}} which resorts to a few auxiliary notions --- locality, o-command, o-binding ---, whose definitions, in turn, are presented right afterwards. There are four such constraints on the anaphoric capacity of nominals, named Principle A, Z, B and C. They induce a partition of the set of anaphors into four classes. According to this partition, every nominal anaphor is of one of the following anaphoric types: short-distance reflexive, long-distance reflexive, pronoun, or non-pronoun. The definition of each binding principle in (\ref{PrincipleA})-(\ref{PrincipleC}) is paired with an illustrative example with key grammatical contrasts empirically supporting the respective generalisation. In particular, Principle A in (\ref{PrincipleA}) is paired with an example with the short-distance reflexive {\em himself}, Principle Z in (\ref{PrincipleZ}) is paired with the Portuguese long-distance reflexive {\em ele pr\'{o}prio}, Principle B in (\ref{PrincipleB}) with the pronoun {\em him}, and Principle C in (\ref{PrincipleC}) with the non-pronoun {\em the boy}. These examples will be discussed below right after the definitions of the auxiliary notions have been presented. \begin{exe} \ex \label{PrincipleA} {\textbf{Principle A:}} A locally o-commanded short-distance reflexive must be \mbox{locally} o-bound. \sn {...{\em X}$_{x}$...[Lee$_{i}$'s friend]$_{j}$ thinks [[Max$_{k}$'s brother]$_{l}$ likes himself$_{*x/*i/*j/*k/l}$].} \end{exe} \begin{exe} \ex \label{PrincipleZ} {\textbf{Principle Z:}} An o-commanded long-distance reflexive must be o-bound.
\sn \gll ...{\em X}$_{x}$...[O amigo do Lee$_{i}$]$_{j}$ acha [que [o irm\~{a}o do Max$_{k}$]$_{l}$ gosta dele pr\'{o}prio$_{*x/*i/j/*k/l}$]. (\ili{Portuguese})\\ \mbox{ }\mbox{ }\mbox{ }\mbox{ }\mbox{ }\mbox{ }\mbox{ }\mbox{ }\mbox{ }\mbox{ }\mbox{ }\mbox{ }the friend of.the Lee thinks \mbox{ }that \mbox{ }the brother of.the Max likes of.him self\\ \trans '...{\em X}$_{x}$...[Lee$_{i}$'s friend]$_{j}$ thinks [[Max$_{k}$'s brother]$_{l}$ likes him$_{*x/*i/j/*k}$ / \linebreak himself$_{l}$].' \end{exe} \begin{exe} \ex \label{PrincipleB} {\textbf{Principle B:}} A pronoun must be locally o-free. \sn {...{\em X}$_{x}$...[Lee$_{i}$'s friend]$_{j}$ thinks [[Max$_{k}$'s brother]$_{l}$ likes him$_{x/i/j/k/*l}$].} \end{exe} \begin{exe} \ex \label{PrincipleC} {\textbf{Principle C:}} A non-pronoun must be o-free. \sn {...{\em X}$_{x}$...[Lee$_{i}$'s friend]$_{j}$ thinks [[Max$_{k}$'s brother]$_{l}$ likes the boy$_{x/i/*j/k/*l}$].} \end{exe} \vspace{4 mm} \subsection{Binding, coindexation, locality, command and crosslinguistic variation}\label{parameterisation} The empirical generalisations presented above result from linguistic analysis supported by empirical evidence of which the respective examples above are just a few key illustrative cases. These examples will be discussed in detail in the next subsection, thus illustrating the analysis underlying the binding principles above. The above definition of binding principles is rendered with the help of a few auxiliary notions. For many of these auxiliary notions, their final value or definition is amenable to be set according to a range of options: as briefly exemplified below, this parameterisation may be driven by the particular language at stake, by the relevant predicator selecting the anaphor, by the specific anaphoric form, etc. 
These are the definitions of those auxiliary notions: \is{o-binding} {\bf Binding} {\em O-binding} is such that ``$x$ o-binds $y$ iff $x$ o-commands $y$ and $x$ and $y$ are coindexed'' ({\em o-freeness} is non o-binding).% \footnote{\citep[p.279]{polsag:hpsg94}.} {\bf Coindexation} {\em Coindexation} is meant to represent an anaphoric link between the expressions with the same index. A starred index, in turn, indicates that the anaphoric link represented is not acceptable, as in the following examples: \is{coindexation} \begin{exe} \ex \begin{xlist} \ex John_{i} said that Peter_{j} shaved himself_{*i/j}. \ex John_{i} said that Peter_{j} shaved him_{i/*j}. \end{xlist} \end{exe} Turning to example (\ref{PrincipleB}), for instance, {\em him}$_{k}$ and {\em Max}$_{k}$ are coindexed with {\em k}, thus indicating their anaphoric binding and representing that {\em Max} is the antecedent of {\em him}. The starred index {\em *l}, in turn, indicates that the coindexation between {\em Max's brother}$_{l}$ and {\em him}$_{*l}$ is not felicitous, and thus that {\em Max's brother} is not an admissible antecedent of {\em him} in (\ref{PrincipleB}). In the examples above, '...{\em X}$_{x}$...' represents a generic, extra-sentential antecedent, available from the context. Plural anaphors with so-called split antecedents, that is, with concomitantly more than one antecedent, are represented with a sum of indexes as a subscript, as exemplified below by {\em them} being interpreted as referring to John and Mary:% \footnote{ When at least one of the antecedents in a split antecedent relation does not comply with the relevant binding principle (and there is at least one that complies with it), the acceptability of that anaphoric link degrades.
Apparently, the larger the number of antecedents that violate the binding constraint, the less acceptable the anaphoric link: while both examples below are not fully acceptable, two coindexations out of three, via $j$ and $k$, in violation of Principle B render example b.~less acceptable than example a., which has one coindexation only, via $k$, in violation of that binding constraint \citep[313]{seeley93}: \is{split antecedents} \begin{exe} \ex \begin{xlist} \ex[?]{The doctor_{i} told the patient_{j} [that the nurse_{k} would protect them_{i+j+k} during the storm].} \ex[??]{The doctor_{i} said [that the patient_{j} told the nurse_{k} about them_{i+j+k}].} \end{xlist} \end{exe} As for plural reflexives, which in turn comply with Principle A, they accept split antecedents only in exempt positions --- on the notion of exemption, see Section \ref{Exemption}.} \begin{exe} \ex John_{i} told Mary_{j} that Kim talked about them_{i+j}. \end{exe} \is{local domain} \is{binding local domain} \textbf{Locality} The {\em local domain} of an anaphor results from the partition of sentences and associated grammatical geometry into two zones of greater or less proximity with respect to the anaphor. Typically, the local domain coincides with the immediate selectional domain of the predicator directly selecting the anaphor. In the following example, the local domain of {\em him} is explicitly marked within square brackets: \begin{exe} \ex John knows that [Peter described him]. \end{exe} In the example in (\ref{PrincipleA}), for instance, {\em Max's brother} is immediately selected by {\em likes}, the predicator that immediately selects {\em himself}, while {\em Lee's friend} is not. Hence, the former is in the local domain of {\em himself}, while the latter is not. In some cases, there may be additional requirements that the local domain is circumscribed by the first selecting predicator that happens to be finite, bears tense or indicative features, etc.% \footnote{ See
\citep{manziniWexler:parameters87, kosterReuland:longdistance91, dal:bind93} for further details. } One such example can be the following:% \footnote{ \citep[p.47]{manziniWexler:parameters87}.} \begin{exe} \ex \begin{xlist} \ex \gll J\'{o}n$_{i}$ segir a$\eth$ [Maria$_{j}$ elskar sig$_{*i/j}$ ]. (\ili{Icelandic})\\ J\'{o}n says-\textsc{ind} that \mbox{ }Maria loves-\textsc{ind} himself\\ \trans 'J\'{o}n$_{i}$ says that [Maria$_{j}$ loves himself$_{*i}$/herself$_{j}$].' \ex \gll [J\'{o}n$_{i}$ segir a$\eth$ Maria$_{j}$ elski sig$_{i/j}$ ].\\ J\'{o}n says-\textsc{ind} that Maria loves-\textsc{subj} himself\\ \trans '[J\'{o}n$_{i}$ says that Maria$_{j}$ loves himself$_{i}$/herself$_{j}$].' \end{xlist} \end{exe} In the first sentence above, the verb in the embedded clause is Indicative and the local domain of its Direct Object is circumscribed to this clause as the reflexive cannot have the Subject of the upwards clause as its antecedent. The second sentence is identical to the first one except that the mood of the embedded verb is now Subjunctive. This leads to a change in the local domain of the reflexive: it can now have also the upwards Subject as its antecedent, thus revealing that its local domain is determined by the first selecting verb in the Indicative, which happens now to be the verb of the upwards clause. In some other languages, there are anaphors whose local domain is the immediate selectional domain not of the directly selecting predicator but of the immediately upwards predicator, irrespective of the inflectional features of the directly or indirectly selecting predicators. This seems to be the case of the \ili{Greek} {\it o idhios}:% \footnote{ Alexis Dimitriadis p.c. See also \citep{iatridou:86, varlokostaHornstein:93}. } \enlargethispage*{2mm} \begin{exe} \ex \gll O Yannis$_{i}$ ipe stin Maria [oti o Costas$_{j}$ pistevi [oti o Vasilis$_{k}$ aghapa ton idhio$_{??i/j/*k}$]]. 
(\ili{Greek}) \\ the Yannis told the Maria \mbox{ }that the Costas believes \mbox{ }that the Vasilis loves the same.\\ \trans 'Yannis$_{i}$ told Maria that [Costas$_{j}$ believes that [Vasilis$_{k}$ loves him$_{??i/j/*k}$]].' \end{exe} Languages show diversity concerning which of these options are materialized and which grammatical and lexical means are brought to bear.% \footnote{ \citep{dimitriadisDatabase:2005}. } Additionally, not all languages have anaphors of every one of the anaphoric types: for instance, \ili{English} is not known to have long-distance reflexives. \is{command} \is{o-command} \is{obliqueness} {\bf Command} {\em O-command} is a partial order defined on the basis of the obliqueness hierarchies of grammatical functions, possibly embedded in each other along the relation of subcategorisation: ``Y o-commands Z just in case either Y is less oblique than Z; or Y {\mbox o-commands} some X that subcategorises for Z; or Y o-commands some X that is a projection of Z''.% \footnote{\citep[p.279]{polsag:hpsg94}.} The grammatical function Subject is less oblique than the Direct Object, the Direct Object is less oblique than the Indirect Object, etc., thus establishing a so-called obliqueness hierarchy. The obliqueness hierarchy of grammatical functions is represented in the value of the ARG-ST feature inasmuch as the arguments are ordered from those whose grammatical function is less oblique to those whose function is more oblique.
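The recursive definition of o-command lends itself directly to implementation over nested ARG-ST lists ordered by obliqueness. The sketch below is a deliberate simplification (locality is not modeled, and possessors are left out, since they sit inside their NPs and are members of no ARG-ST, so they o-command nothing; as targets they would be covered by the containment clause), with a hypothetical multi-clausal configuration as data:

```python
# Sketch: o-command over nested ARG-ST lists, ordered from least to most
# oblique; embedded clauses appear as sublists. Illustrative data only.
SENT = ["John's friend",      # Subject of 'said'
        ["Peter's brother",   # Subject of the embedded 'presented'
         "Martin's cousin",   # Direct Object
         "him"]]              # more oblique complement

def contains(node, z):
    # z is (inside) node
    return node == z or (isinstance(node, list)
                         and any(contains(c, z) for c in node))

def o_commands(y, z, arg_st=SENT):
    # y o-commands z iff, in some ARG-ST, y is a direct member less
    # oblique than a member that is, or contains, z.
    for i, member in enumerate(arg_st):
        if member == y:
            return any(contains(m, z) for m in arg_st[i + 1:])
        if isinstance(member, list) and o_commands(y, z, member):
            return True
    return False

assert o_commands("John's friend", "him")                # across the clause
assert o_commands("Peter's brother", "Martin's cousin")  # co-arguments
assert not o_commands("him", "Martin's cousin")          # most oblique member
```

On this toy structure, the upstairs Subject o-commands every argument of the embedded clause, while the most oblique argument o-commands nothing, mirroring the pattern the definition is meant to capture.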
As discussed in detail in Section \ref{spec1} and in connection with the working example in the Appendix, the ARG-ST feature value plays a crucial role in the formalization and explicit integration of binding principles into grammar.\footnote{ For further discussion of the notion of obliqueness of grammatical functions as well as further references on this topic, see \citep[Sec.5.2]{polsag:hpsg87}.} Accordingly, the Subject o-commands the Direct Object, the Direct Object o-commands the Indirect Object, etc.; and in a multi-clausal sentence, the arguments in the upwards clauses o-command the arguments in the successively embedded clauses. \begin{exe} \ex $[[$John's friend$]$ said that $[[$Peter's brother$]$ presented [Martin's cousin] to him$]]$. \end{exe} In the example above, {\em John's friend} o-commands {\em Peter's brother}, {\em Peter}, {\em Martin's cousin}, {\em Martin} and {\em him}. {\em Peter's brother} locally o-commands {\em Martin's cousin} and {\em him}, and (non-locally) o-commands {\em Martin}. None of {\em John}, {\em Peter}, {\em Martin} and {\em him} o-commands any nominal in this example. In the example of (\ref{PrincipleZ}), for instance, the Portuguese long-distance reflexive {\em ele pr\'{o}prio}, which is the Object of the embedded clause, is o-commanded by {\em o irm\~{a}o do Max} (Max's brother), the Subject of that clause, and by {\em o amigo do Lee} (Lee's friend), the Subject of the upwards clause. {\em Lee} and {\em Max}, in turn, do not o-command this reflexive in this example. \subsection{Binding principles}\label{principles} With the definition of the auxiliary notions above in place, the definition of binding principles in (\ref{PrincipleA})-(\ref{PrincipleC}) is now complete and it is possible to appreciate how the respective examples instantiate them.
{\bf Principle A} The example in (\ref{PrincipleA}) shows that the anaphoric capacity of {\em himself} complies with the anaphoric discipline captured by Principle A: if it is locally o-commanded, it has to be locally o-bound, i.e. only local o-commanders can be its admissible antecedents if it happens to be locally o-commanded. {\em Max's brother} is in the local domain of {\em himself} because it is immediately selected by the predicator {\em likes}, which also immediately selects {\em himself}. Moreover, {\em Max's brother}, in a Subject position, o-commands {\em himself}, in an Object position. {\em Max's brother} is thus a local o-commander of {\em himself}, and hence it is an admissible antecedent of {\em himself}. The other nominals in this example are not local o-commanders of {\em himself}: both {\em Lee} and {\em Lee's friend} are selected by the main clause predicator {\em thinks}, not by the predicator {\em likes} which is immediately selecting {\em himself}, and are thus not in its local domain; {\em Max}, in turn, given that it is embedded inside the local Subject, is not immediately selected by the predicator {\em likes} that is immediately selecting {\em himself}. Hence, none of the nominals in the sentence other than {\em Max's brother} happens to be a local o-commander of {\em himself}, and thus none of them is one of its admissible antecedents. Likewise, any other antecedent candidate possibly available in the extra-sentential context is not a local o-commander of {\em himself} and thus it is not one of its admissible antecedents. Given that the anaphoric capacity of {\em himself} complies with the anaphoric discipline captured by Principle A, it belongs to the class of short-distance reflexives. In connection with Principle A, it is also worth signalling that it is not the case that only Subjects can be local o-commanders of short-distance reflexives, as illustrated below.
\begin{exe} \ex \label{objectsSDR} \begin{xlist} \ex Peter$_{i}$ didn't talk to John$_{j}$ about himself$_{i/j}$. \label{objectsSDRunmarked} \ex About himself$_{i/j}$, Peter$_{i}$ didn't talk to John$_{j}$. \label{objectsSDRtopicalization} \end{xlist} \end{exe} In the examples in (\ref{objectsSDR}), {\em John} and {\em himself} are in the same local domain. Moreover, {\em John}, in the Object position, is less oblique than {\em himself}, in the Indirect Object position. Hence, {\em John} is a local o-commander of the reflexive and qualifies as one of its admissible antecedents, together with {\em Peter}, in the Subject position. \is{c-command} The absence of contrast between (\ref{objectsSDRunmarked}) and (\ref{objectsSDRtopicalization}) is a central piece of evidence that the command relation for anaphoric binding is based on the obliqueness hierarchy of grammatical functions (o-command) rather than on a configurational hierarchy based on surface syntactic structure (c-command).% \footnote{The formulation of binding principles on the basis of o-command, rather than of c-command as proposed in \citep{chom:bind80} and \citep{chom:knowledge86}, is a hallmark of the analysis in \citep{polsag:binding92}. The analysis based on c-command incorrectly predicts that anaphoric links like those in (\ref{objectsSDRtopicalization}) would not be acceptable, because the admissible antecedents of {\em himself} do not c-command it (it is {\em himself} that c-commands them instead). The analysis based on c-command also incorrectly predicts that the anaphoric link between {\em himself} and {\em John} in (\ref{objectsSDRunmarked}) would not be acceptable, because {\em John} does not c-command {\em himself}. For a detailed discussion, see \citep[Chap.6]{polsag:hpsg94}.
} {\bf Principle Z} The example in (\ref{PrincipleZ}) shows that the anaphoric capacity of the Portuguese nominal {\em ele pr\'{o}prio} complies with the anaphoric discipline captured by Principle Z: if it is o-commanded, it has to be o-bound, i.e. only o-commanders can be its admissible antecedents if it happens to be o-commanded. {\em ele pr\'{o}prio} is (locally) o-commanded by {\em o irm\~{a}o do Max} because both are selected by the predicator {\em gosta} and {\em o irm\~{a}o do Max} is less oblique than {\em ele pr\'{o}prio}. {\em ele pr\'{o}prio} is also o-commanded by {\em o amigo do Lee} because {\em o amigo do Lee} is selected by the predicator of the upwards clause {\em acha}, which selects the embedded clause whose predicator selects {\em ele pr\'{o}prio}, and {\em o amigo do Lee} is thus less oblique than {\em ele pr\'{o}prio} in the composite obliqueness hierarchy. The other nominals in this example are not o-commanders of {\em ele pr\'{o}prio}: both {\em Lee} and {\em Max} are embedded inside arguments of the relevant predicators {\em acha} and {\em gosta} but are not arguments of them. Hence, none of the nominals in the sentence other than {\em o amigo do Lee} and {\em o irm\~{a}o do Max} happens to be an o-commander of {\em ele pr\'{o}prio}, and thus none of them is one of its admissible antecedents. Likewise, any other antecedent candidate possibly available in the extra-sentential context is not an o-commander of {\em ele pr\'{o}prio} and thus it is not one of its admissible antecedents. Given that the anaphoric capacity of {\em ele pr\'{o}prio} complies with the anaphoric discipline captured by Principle Z, it belongs to the class of long-distance reflexives. {\bf Principle B} The example in (\ref{PrincipleB}) shows that the anaphoric capacity of {\em him} complies with the anaphoric discipline captured by Principle B: it has to be locally o-free, i.e. its local o-commanders cannot be its admissible antecedents.
In that example, {\em Max's brother} is the only local o-commander of {\em him} because {\em Max's brother} is the only argument of the predicator {\em likes} other than {\em him}, and is less oblique than {\em him}. The other nominals in this example are not local o-commanders of {\em him}: none of {\em Lee's friend}, {\em Lee} and {\em Max} is immediately selected by {\em likes}. Likewise, any other antecedent candidate possibly available in the extra-sentential context is not a local o-commander of {\em him}. Hence, in this example all antecedent candidates, sentential and non sentential, are admissible antecedents of {\em him} except {\em Max's brother}. Given that the anaphoric capacity of {\em him} complies with the anaphoric discipline captured by Principle B, it belongs to the class of pronouns. {\bf Principle C} The example in (\ref{PrincipleC}) shows that the anaphoric capacity of {\em the boy} complies with the anaphoric discipline captured by Principle C: it has to be o-free, i.e. its o-commanders are not admissible antecedents. In that example, {\em Lee's friend} and {\em Max's brother} are the only o-commanders of {\em the boy}: {\em Lee's friend} is selected by the predicator of the upwards clause {\em thinks}, which selects the embedded clause whose predicator selects {\em the boy}; {\em Max's brother}, in turn, is the only argument immediately selected by the predicator {\em likes} other than {\em the boy}; and both {\em Lee's friend} and {\em Max's brother} are less oblique than {\em the boy}. The other nominals in this example are not o-commanders of {\em the boy}: neither {\em Lee} nor {\em Max} is immediately selected by {\em thinks} or {\em likes}. Likewise, any other antecedent candidate possibly available in the extra-sentential context is not an o-commander of {\em the boy}. Hence, in this example all antecedent candidates, sentential and non sentential, are admissible antecedents of {\em the boy} except {\em Lee's friend} and {\em Max's brother}.
Given that the anaphoric capacity of {\em the boy} complies with the anaphoric discipline captured by Principle C, it belongs to the class of non pronouns. \is{o-bottom} \is{local domain reshuffling} \is{binding exemption} \subsection{O-bottom positions: reshuffling and exemption}\label{Exemption} For the interpretation of an anaphor to be accomplished, an antecedent has to be found for it. Such an antecedent is to be picked from the set of its o-commanders, if the anaphor is a long-distance reflexive, or from the set of its local o-commanders, if it is a short-distance reflexive. This requirement may not be satisfied in some specific cases, namely when the reflexive occurs in a syntactic position such that it is the least element of its \mbox{o-command} order, in an o-bottom position for short. In such circumstances, it has no \mbox{o-commander} (other than itself, if the o-command relation is formally defined as a reflexive relation) to qualify as its antecedent. That is the motivation for the conditional formulation of Principles A and Z, in (\ref{PrincipleA}) and (\ref{PrincipleZ}) respectively: a (short/) long-distance reflexive has to be (locally/) o-bound if it is (locally/) o-commanded. In case it is not (locally/) o-commanded, there is no imposition concerning its admissible antecedents following from Principles A and Z. \textbf{Reshuffling} As a consequence, in some cases, the binding domain for the reflexive which happens to be the least element of its local obliqueness order may be reshuffled, being reset to contain the o-commanders of the reflexive in the domain circumscribed by the immediately upwards predicator.% \footnote{ \citep{brancoHpsg:2005}.
} One such case for a nominal domain can be found in the following example:% \footnote{Tibor Kiss p.c., which is a development with regard to his data in \citep{kiss:2001}.} \begin{exe} \ex \begin{xlist} \ex \gll Gernot$_{i}$ dachte, dass Hans$_{j}$ dem Ulrich$_{k}$ [Marias$_{l}$ Bild von sich$_{*i/*j/*k/l}$] \"{u}berreichte. (\ili{German})\\ Gernot thought that Hans the Ulrich \mbox{ }Maria's picture of self gave\\ \trans 'Gernot$_{i}$ thought that Hans$_{j}$ gave Ulrich$_{k}$ [Maria$_{l}$'s picture of \linebreak himself$_{*i/*j/*k}$/herself$_{l}$].' \ex \gll Gernot$_{i}$ dachte, dass [Hans$_{j}$ dem Ulrich$_{k}$ ein Bild von sich$_{*i/j/k}$ \"{u}berreichte].\\ Gernot thought that \mbox{ }Hans the Ulrich a picture of self gave\\ \trans 'Gernot$_{i}$ thought that [Hans$_{j}$ gave Ulrich$_{k}$ [a picture of \linebreak himself$_{*i/j/k}$]].' \end{xlist} \end{exe} In the first sentence above, the short-distance reflexive is locally \mbox{o-commanded} by {\em Maria} and only this nominal can be its antecedent. In the second sentence, the reflexive is the first element in its local obliqueness hierarchy and its admissible antecedents, which now form its local domain, are the nominals in the obliqueness hierarchy of the immediately upwards predicator. The null subject in languages like \ili{Portuguese} is another example of a short-distance reflexive that is in an o-bottom position and whose local domain is reshuffled:% \footnote{ \citep{brancoNullSubject:2007}. } \is{null subject} \begin{exe} \ex \gll O m\'{e}dico$_{i}$ disse-me [que [o director do Pedro$_{j}$]$_{k}$ ainda n\~{a}o reparou [que $\emptyset_{*i/*j/k}$ cometeu um erro]]. (\ili{Portuguese})\\ the doctor told-me \mbox{ }that \mbox{ }the director of.the Pedro yet not noticed \mbox{ }that { } made a mistake.\\ \trans 'The doctor$_{i}$ told me [that [Pedro$_{j}$'s director]$_{k}$ didn't notice yet [that he$_{*i/*j/k}$ made a mistake]].'
\end{exe} In the example above, as the null reflexive is in an o-bottom position, its local domain gets reshuffled to include the immediately upwards o-commander {\em Pedro's director}. Once it is thus o-commanded, in accordance with Principle A, the null reflexive cannot take any other nominal in the sentence, viz. {\em the doctor} or {\em Pedro}, as its admissible antecedent, given that neither of these o-commands it. \textbf{Exemption} In some other cases, this resetting of the binding domain is not available. In such cases, the reflexive is in the bottom of its local obliqueness order and is observed to be exempt from its typical binding regime: the reflexive may take antecedents that are not its o-commanders or that are outside of its local or immediately upward domains,\footnote{ \citep[p.263]{polsag:hpsg94}.} as illustrated in the following example:\footnote{ \citep[p.73]{golde:diss99}. } \begin{exe} \ex Mary$_{i}$ thought the artist had done a bad job, and was sorry that her parents came all the way to Columbus just to see the portrait of herself$_{i}$. \end{exe} In an exempt position, a reflexive can even have so-called split antecedents, as illustrated in the following example with a short-distance reflexive:% \footnote{ \citep[p.42]{zribi:pview89}. } \begin{exe} \ex Mary$_{i}$ eventually convinced her sister Susan$_{j}$ that John had better pay visits to everybody except themselves$_{i+j}$. \end{exe} That is an option not available for reflexives in non exempt positions: \begin{exe} \ex Mary$_{i}$ described John$_{j}$ to themselves$_{*(i+j)}$. \end{exe} Some long-distance reflexives may also be exempt from their binding constraint if they occur in the bottom of their o-command relation. In such cases, they can have an antecedent in the previous discourse sentences or in the context, or a deictic use, as illustrated in the following example: \begin{exe} \ex \gll [O Pedro e o Nuno]$_{i}$ tamb\'{e}m conheceram ontem a Ana.
Eles pr\'{o}prios$_{i}$ ficaram logo a gostar muito dela. (\ili{Portuguese})\\ the Pedro and the Nuno also met yesterday the Ana. They {\em pr\'{o}prios} stayed immediately to liking much of.her\\ \trans '[Pedro and Nuno]$_{i}$ also met Ana yesterday. They$_{i}$ liked her very much right away.' \end{exe} Such options are not available in non exempt positions:% \footnote{ For further details, vd. \citep{branco:ldrefl99}. } \begin{exe} \label{portugueseLDreflexive} \ex \gll A Ana tamb\'{e}m conheceu ontem [o Pedro e o Nuno]$_{i}$. Ela ficou logo a gostar muito deles pr\'{o}prios$_{*i}$. (\ili{Portuguese})\\ The Ana also met yesterday \mbox{ }the Pedro and the Nuno. She stayed immediately to liking much of.them {\em pr\'{o}prios}\\ \trans 'Ana also met [Pedro and Nuno]$_{i}$ yesterday. She liked them$_{*i}$ very much right away.' \end{exe} Admittedly, an overarching interpretability condition is in force in natural languages requiring the ``meaningful'' anchoring of anaphors to antecedents. Besides this general requirement, anaphors are concomitantly ruled by specific constraints concerning their particular anaphoric capacity, including the sentence-level constraints in (\ref{PrincipleA})-(\ref{PrincipleC}), i.e. the binding principles. When reflexives are in o-bottom positions, an o-commander (other than the reflexive itself) may not be available to function as antecedent and anchor their interpretation. Hence, such specific binding constraints, viz. Principles A and Z, cannot be satisfied in a ``meaningful'' way and the general interpretability requirement may supersede them. As a consequence, in cases displaying so-called exemption from binding constraints, o-bottom reflexives appear to escape their specific binding regime to comply with this general requirement, and their interpretability is thus rescued.
The anaphoric links of exempt reflexives have been observed to be governed by a range of non sentential factors (from discourse, dialogue, non linguistic context, etc.), not being determined by the sentence-level binding principles in (\ref{PrincipleA})-(\ref{PrincipleC}).% \footnote{For further details, vd. \citep{kuno:func87, zribi:pview89, golde:diss99} among others. } \is{subject orientedness} \is{alternations} \subsection{O-command: alternations and subject-orientedness}\label{ocommand} \is{alternations} \textbf{Alternations} In languages like \ili{English}, the o-command order can be established over the obliqueness hierarchies of active and passive sentences alike:% \footnote{ \citep{Jackendoff72a-u, polsag:hpsg94}.} \is{passive} \begin{exe} \ex \begin{xlist} \ex[]{John$_{i}$ shaved himself$_{i}$.} \label{acti} \ex[]{John$_{i}$ was shaved by himself$_{i}$.} \label{passi} \end{xlist} \end{exe} The obliqueness hierarchy of grammatical functions is represented in ARG-ST, and in the ARG-ST values of both (\ref{acti}) and (\ref{passi}) {\em John} appears as the Subject and qualifies as a local o-commander of {\em himself}, and thus as an admissible antecedent of this reflexive. In some other languages, only the obliqueness hierarchy of a given syntactic alternation is available to support the o-command order relevant for binding constraints in both alternations. This is the case, for example, of the active/objective voice alternation in \ili{Toba Batak}. In this language, a reflexive in Object position of an active voice sentence can have the Subject as its antecedent, but not vice-versa:% \footnote{ \citep[p.72]{manningSag:1999}.} \begin{exe} \ex \begin{xlist} \ex \gll mang-ida diri-na$_{i}$ si John$_{i}$. (\ili{Toba Batak})\\ [{\sc active}-saw himself\textsubscript{\textsc{Object}}]\textsubscript{VP} {\sc pm} John\textsubscript{\textsc{Subject}} \\ \trans 'John$_{i}$ saw himself$_{i}$.'
\ex \gll mang-ida si John$_{i}$ diri-na$_{*i}$.\\ [{\sc active}-saw {\sc pm} John\textsubscript{\textsc{Object}}]\textsubscript{VP} himself\textsubscript{\textsc{Subject}} \\ \end{xlist} \end{exe} Taking the objective voice paraphrase corresponding to the active sentence above, the binding pattern is inverted: a reflexive in Subject position can have the Object as its antecedent, but not vice-versa, thus revealing that the obliqueness hierarchy relevant for the verification of its binding constraint remains the hierarchy of the corresponding active voice sentence above: \begin{exe} \ex \begin{xlist} \ex \gll di-ida diri-na$_{*i}$ si John$_{i}$.\\ [{\sc objective}-saw himself\textsubscript{\textsc{Object}}]\textsubscript{VP} {\sc pm} John\textsubscript{\textsc{Subject}} \\ \ex \gll di-ida si John$_{i}$ diri-na$_{i}$.\\ [{\sc objective}-saw {\sc pm} John\textsubscript{\textsc{Object}}]\textsubscript{VP} himself\textsubscript{\textsc{Subject}} \\ \trans 'John$_{i}$ saw himself$_{i}$.' \end{xlist} \end{exe} \is{subject-orientedness} \textbf{Subject-orientedness} O-command may take the shape of a linear or non linear order depending on the specific obliqueness hierarchy upon which it is realised. In a language like \ili{English}, the arguments in the subcategorisation frame of a predicator are typically arranged in a linear obliqueness hierarchy. In some other languages, the obliqueness hierarchy upon which the o-command order is based may happen to be non linear: in the subcategorisation frame of a predicator, the Subject is less oblique than any other argument while the remaining arguments are not comparable to each other under the obliqueness relation. 
As a consequence, in a clause, a short-distance reflexive with an Indirect Object grammatical function, for instance, may only have the Subject as its antecedent, its only local o-commander.% \footnote{For a thorough argument and further evidence motivated independently of binding facts see \citep{branco:branch96, brancoMarrafa:subject97, branco:livro00}. In some languages, there can be an additional requirement that the Subject be animate to qualify as a commander to certain anaphors. On this, see \citep{huangTang:longdistance91, xue:ziji94} about Chinese {\em ziji}, among others.} This Subject-orientedness effect induced on the anaphoric capacity of reflexives by the non linearity of the o-command relation can be observed in contrasts like the following:% \footnote{ Lars Hellan p.c. See also \citep[p.67]{hellan:book88}.} \is{subject-oriented anaphor} \begin{exe} \ex \begin{xlist} \ex \gll Lars$_{i}$ fortalte Jon$_{j}$ om seg selv$_{i/*j}$. (\ili{Norwegian}) \\ Lars told Jon about self {\em selv} \\ \trans 'Lars$_{i}$ told Jon$_{j}$ about himself$_{i/*j}$.' \ex \gll Lars$_{i}$ fortalte Jon$_{j}$ om ham selv$_{*i/j}$.\\ Lars told Jon about him {\em selv}\\ \trans 'Lars$_{i}$ told Jon$_{j}$ about him$_{*i/j}$.' \end{xlist} \end{exe} In the first sentence above, the reflexive cannot have the Direct Object as its antecedent given that the Subject is its only local o-commander in the non linear obliqueness hierarchy. In the second sentence, under the same circumstances, a pronoun presents the symmetric pattern: it can have any co-argument as its antecedent except the Subject, its sole local o-commander.% \footnote{ For an analysis of the Subject-orientedness of French {\em se} resorting to a notion of s-command, see \citep{abeille:depend98, abeille:composition98}. 
} \section{Binding Constraints at the Syntax-Semantics Interface\label{sem}} Like other sorts of constraints on semantic composition, binding constraints impose grammatical conditions on the interpretation of certain expressions --- anaphors, in the present case --- based on syntactic geometry.% \footnote{ For a discussion of proposals in the literature that have tried to root binding principles on non-grammatical, cognitive search optimisation mechanisms, and their pitfalls, see \citep{branco:2004,branco:2003,branco:2000}. } This should not be seen, however, as implying that they express grammaticality requirements. By replacing, for instance, a pronoun by a reflexive in a sentence, we are not turning a grammatical construction into an ungrammatical one, even if we assign to the reflexive the antecedent adequately selected for the pronoun. In that case, we are just asking the hearer to try to assign to that sentence a meaning that it cannot express, in the same way as what would happen if we asked someone whether he could interpret {\it The red book is on the white table} as describing a situation where a white book is on a red table. In this example, given how they happen to be syntactically related, the semantic values of {\it red} and {\it table} cannot be composed in a way that this sentence could be used to describe a situation concerning a red table, rather than a white table. Likewise, if we take the sentence {\it John thinks Peter shaved him}, given how they happen to be syntactically related, the semantic values of {\it Peter} and {\it him} cannot be composed in a way that this sentence could be used to describe a situation where John thinks that Peter shaved himself, i.e.\ Peter, rather than a situation where John thinks that Peter shaved other people, e.g.\ Paul, Bill, etc., or even John himself. 
The basic difference between these two cases is that, while in the former the composition of the semantic contributions of {\it white} and {\it table} (for the interpretation of their NP {\it white table}) is constrained by local syntactic geometry, in the latter the composition of the semantic contributions of {\it John} and {\it him} (for the interpretation of the NP {\it him}) is constrained by non-local syntactic geometry. These grammatical constraints on anaphoric binding should thus be taken as conditions on semantic interpretation given that they delimit (non-local) aspects of meaning composition, rather than aspects of syntactic wellformedness.% \footnote{ This approach is in line with \citep{gawron:anaph90}, and departs from other approaches where binding constraints have been viewed as wellformedness conditions, thus belonging to the realm of Syntax: ``[they] capture the distribution of pronouns and reflexives'' \citep[p.657]{rein:refl93}.} These considerations lead one to acknowledge that, semantically, an anaphor should be specified in the lexicon as a function that takes as argument a suitable representation of the context --- providing a semantic representation of the NPs available in the discourse vicinity --- and delivers both an update of its anaphoric potential~--- which is instantiated as the set of its grammatically admissible antecedents --- and an update of the context, against which other NPs are interpreted.% \footnote{ \citep{brancoDaarc:1998,brancoColing:2000,branco:2002a}.
} Naturally, all in all, there will be four such functions available to be lexically associated to anaphors, each corresponding to one of the four different classes of anaphors, in accordance with the four binding constraints A,~Z,~B~or~C.% \footnote{ This is in line with~\citep{johnson:disc90} concerning the processing of the semantics of nominals, and also the spirit (but by no means the letter) of the dynamic semantics framework~---~vd.\ \citep{chi:dyn95} and \citep{stal:context98} {\em i.a.} } \subsection{Semantic patterns}\label{semanticPatterns} \is{reference marker} For an anaphoric nominal {\em w}, the relevant input context may be represented in the form of a set of three lists of reference markers,% \footnote{ See \citep{Karttunen1976, Kamp1981, Heim1982, Seuren1985, kamp:drt93} for the notion of reference marker. } {\bf A}, {\bf Z} and {\bf U}. List {\bf A} contains the reference markers of the local \mbox{o-command} order where {\it w} is included, ordered according to their relative grammatical obliqueness; {\bf Z} contains the markers of the (local and non local) \mbox{o-command} order where {\it w} is included, i.e.\ reference markers organised in a possibly multi-clausal o-command relation, based upon successively embedded clausal obliqueness hierarchies; and {\bf U} is the list of all reference markers in the discourse context, possibly including those not linguistically introduced. The updating of the context by an anaphoric nominal {\it w} may be seen as consisting simply of incrementing the representation of the context, with a copy of the reference marker of {\it w} being added to the three lists above. The updating of the anaphoric potential of {\it w}, in turn, delivers a representation of the contextualised anaphoric potential of {\it w} in the form of the list of reference markers of its admissible antecedents.
This list results from the binding constraint associated to {\it w} being applied to the relevant representation of the context of {\it w}. Given this setup, the algorithmic verification of binding constraints consists of a few simple operations, and their grammatical specification will thus consist in stating each such sequence of operations in terms of the grammar description formalism. If the nominal {\it w} is a short-distance reflexive, its semantic representation is updated with {\bf A'}, where {\bf A'} contains the reference markers of the \mbox{o-commanders} of {\it w} in {\bf A}. If {\it w} is a long-distance reflexive, its semantic representation includes {\bf Z'}, such that {\bf Z'} contains the \mbox{o-commanders} of {\it w} in {\bf Z}. If {\it w} is a pronoun, its semantics should include the list of its non-local \linebreak o-commanders, that is, the list {\bf B}={\bf U}$\backslash$({\bf A'}$\cup$[r-mark$_{w}$]) is encoded into its semantic representation, where r-mark$_{w}$ is the reference marker of {\it w}. Finally, if {\it w} is a non-pronoun, its updated semantics keeps a copy of list \linebreak {\bf C}={\bf U}$\backslash$({\bf Z'}$\cup$[r-mark$_{w}$]), which contains the non-o-commanders of {\it w}. \subsection{Binding principles and other constraints for anaphora resolution} These lists {\bf A'}, {\bf Z'}, {\bf B} and {\bf C} collect the reference markers that are antecedent candidates in the light of the relevant binding constraints alone, which are relative positioning filters in the process of anaphora resolution.% \footnote{See \cite[Chap.2]{branco:diss99} for an overview of filters and preferences for anaphora resolution proposed in the literature. } The elements in these lists have to be submitted to the other constraints and preferences of this process so that one of them ends up being chosen as the antecedent.
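The four updating operations just described can be made concrete with a small sketch. The Python fragment below is purely expository and not part of the grammar specification: reference markers are plain strings, the function name is our own, and the lists {\bf A}, {\bf Z} and {\bf U} are assumed to be given as ordered Python lists, with {\it w} occurring in {\bf A} and {\bf Z}.

```python
def update_anaphoric_potential(w, kind, A, Z, U):
    """Return the admissible-antecedent list for anaphor w, given:
    A: local o-command order containing w (least to most oblique),
    Z: full (possibly multi-clausal) o-command order containing w,
    U: all reference markers in the discourse context."""
    A_prime = A[:A.index(w)]   # local o-commanders of w: its predecessors in A
    Z_prime = Z[:Z.index(w)]   # o-commanders of w: its predecessors in Z
    if kind == "short":        # Principle A: short-distance reflexive
        return A_prime
    if kind == "long":         # Principle Z: long-distance reflexive
        return Z_prime
    if kind == "pronoun":      # Principle B: U \ (A' + [r-mark of w])
        return [m for m in U if m not in A_prime and m != w]
    if kind == "nonpronoun":   # Principle C: U \ (Z' + [r-mark of w])
        return [m for m in U if m not in Z_prime and m != w]
    raise ValueError(kind)

# Worked example: "Lee's friend thinks Max's brother likes him"
A = ["maxs_brother", "him"]                 # local obliqueness order of `him`
Z = ["lees_friend", "maxs_brother", "him"]  # composite o-command order
U = Z + ["paul"]                            # all discourse reference markers
```

For a pronoun in this configuration, the sketch excludes exactly the local o-commander {\em Max's brother}, in line with the discussion of Principle B above.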
In particular, some of these markers may turn out not to be admissible antecedent candidates due to the violation of some other constraints --- e.g. those requiring similarity of morphological features or of semantic type --- that on a par with binding constraints have to be complied with. For example, in {\it John described Mary to himself}, by the sole constraining effect of Principle A, \mbox{[r-mark$_{John}$, r-mark$_{Mary}$]} is the list of antecedent candidates of {\it himself}, which will be narrowed down to [r-mark$_{John}$] when all the other filters for anaphora resolution have been taken into account, including the one concerning similarity of morphological features, as {\it Mary} and {\it himself} do not have the same gender feature value. In this particular case, separating these two types of filters --- similarity of morphological features and binding constraints --- seems to be the correct option, required by plural anaphors with so-called split antecedents. In an example of this type, such as {\it John$_{i}$ told Mary$_{j}$ they$_{i+j}$ would eventually get married}, where {\it they} is resolved against {\it John} and {\it Mary}, the morphological features of the anaphor are not identical to the morphological features of each of its antecedents, though the relevant binding constraint applies to each of them.% \footnote{This was noted by \citep{higg:split83}. In this respect, this approach improves on the proposal in~\citep{polsag:hpsg94}, where the token-identity of indices --- internally structured in terms of Person, Number and Gender features --- is meant to be forced upon the anaphor and its antecedent in tandem with the relevant binding constraint.
For further reasons why token-identity between the reference markers of the anaphor and the corresponding antecedent is not a suitable option for every anaphoric dependency, see the discussion below in Section \ref{discuss} on the semantic representation of different modes of anaphora.} When a plural anaphor takes more than one antecedent, as in the example above, its (plural) reference marker will end up being semantically related with a plural reference marker resulting from some semantic combination of the markers of its antecedents. Separating binding constraints from other constraints on the relation between anaphors and their antecedents is thus compatible with and justified by proposals for plural anaphora resolution that take into account split anaphora.% \footnote{ That is the case e.g. of \citep{eschen:plural89}. According to this approach, the set of antecedent candidates of a plural anaphor which results from the verification of binding constraints has to receive some expansion before subsequent filters and preferences apply in the anaphora resolution process. The reference markers in that set, either singular or plural, will be previously combined into other plural reference markers: it is thus from this set, closed under the semantic operation of pluralisation (e.g.\ i-sum \`a la~\citep{link:isums83}), that the final antecedent will be chosen by the anaphor resolver. } \is{computational tractability} \is{computational complexity} \subsection{Computational tractability} It is also worth noting that the computational tractability of the grammatical compliance with binding principles is ensured given the polynomial complexity of the underlying operations described above. Let {\it n} be the number of words in an input sentence to be parsed, which, for the sake of the simplicity of the argument and of the worst-case scenario, is assumed to be made only of nominal anaphors, that is, every word in that sentence is a nominal anaphor.
Assume also that the sets {\bf A}, {\bf Z} and {\bf U}, thus of length {\it n} at worst, are available at each node of the parsed tree via copying or via list appending (more details about these two operations in the next sections), which is a process of constant time complexity. At worst, the operations involved at each one of the {\em n} leaf nodes of the tree to obtain one of the sets {\bf A'}, {\bf Z'}, {\bf B} or {\bf C} are: list copying and list appending operations, performed in constant time; extraction of the predecessors of an element in a list, which is of linear complexity; or at most one list complementation, which can be done in time proportional to $n \log n$. The procedure of verifying binding constraints in a sentence of length {\it n} is thus of tractable complexity, namely $\mathcal{O}(n^2\log{}n)$ in the worst case.%
\footnote{ For a thorough discussion of alternative procedures for the compliance with binding principles and their drawbacks, see \citep{branco:esslli2000}, very briefly summarised here: The verification of binding constraints proposed in~\citep{chom:bind80,chom:lect81} requires extra-grammatical processing steps of intractable computational complexity \citep{correa:bind88, fong:index90}, which, moreover, are meant to deliver a forest of indexed trees to anaphor resolvers. In Lexical Functional Grammar, the account of binding constraints requires special purpose extensions of the description formalism \citep{dal:bind93}, which ensures only a partial handling of these constraints. For accounts of binding principles in the family of Categorial Grammar frameworks, see \citep{szabol:89, hepple:90, morrill:2000}, and for a critical overview, see \citep{jaeger:2001}. } \section{Binding Constraints in the Grammar\label{spec1}} In this section, the binding constraints receive a principled integration into formal grammar. For the sake of brevity, we focus on the \ili{English} language.
Given the discussion in the previous sections, the parameterisation for other languages will follow from this example by means of seamless adaptation. We show how the module of Binding Theory is specified with the description language of HPSG, as an extension of the grammar fragment in the Annex of the foundational HPSG book,\footnote{ \citep[Annex]{polsag:hpsg94}.} following the feature geometry in Ivan Sag's proposed extension of this fragment to relative clauses,\footnote{ \citep{Sag97a}.} and adopting a semantic component for HPSG based on Underspecified Discourse Representation Theory (UDRT).\footnote{ \citep{frank:sem95}.} As exemplified in (\ref{pronfeat}), this semantic component is encoded as the value of the feature {\sc cont(ent)}. This value, of sort {\em udrs}, has a structure permitting that the mapping into underspecified discourse representations be straightforward.\footnote{ \citep{reyle:udrt93}.} The value of subfeature {\sc conds} is a set of labeled semantic conditions. The hierarchical structure of these conditions is expressed by means of a subordination relation of the labels identifying each condition, a relation that is encoded as the value of {\sc subord}. The attribute {\sc ls} defines the distinguished labels, which indicate the upper ({\sc l-max}) and lower ({\sc l-min}) bounds for a semantic condition within the overall semantic representation to be constructed. \textbf{{\sc anaph(ora)} subfeature of {\sc cont(ent)}} The integration of Binding Theory into formal grammar consists of a simple extension of this semantic component for the {\em udrs} of nominals, enhancing it with the subfeature {\sc anaph(ora)}. This new feature keeps information about the anaphoric potential of the corresponding anaphor {\it w}. 
Its subfeature {\sc antec(edents)} keeps a record of how this potential is realised when the anaphor enters a grammatical construction: its value is the list of the antecedent candidates of {\it w} which comply with the relevant binding constraint for {\it w}. And its subfeature {\sc r(eference)-mark(er)} indicates the reference marker of {\it w}, which is contributed by its referential force to the updating of the context. \textbf{{\sc bind(ing)} subfeature of {\sc nonloc(al)}} On a par with this extension of the {\sc loc} value, the {\sc nonloc} value is also extended with a new feature, {\sc bind(ing)}, with subfeatures {\sc list-a}, {\sc list-z}, and {\sc list-u}. These lists provide a specification of the relevant context and correspond to the lists {\bf A}, {\bf Z} and {\bf U} in the sections above. Subfeature {\sc list-lu} is a fourth, auxiliary list encoding the contribution of the local context to the global, non-local context, as explained in the next sections.%
\footnote{For the sake of readability, the working example in (\ref{pronfeat}) displays only the more relevant features for the point at stake.
The {\sc nonloc} value has this detailed definition in \citep{polsag:hpsg94}: \bigskip \avmoptions{sorted} \avmfont{\sc} \avmvalfont{\it} \avmsortfont{\it} \begin{avm} \[{nonloc} to-bind & nonloc1 \\ inherited & nonloc1 \] \end{avm} \bigskip And these are the details of the extension we are using, where the information above is now coded as a {\em udc} object, which keeps a record of the relevant non-local information for accounting for {\em u(nbounded) d(ependency) c(onstructions)}: \bigskip \avmoptions{sorted} \avmfont{\sc} \avmvalfont{\it} \avmsortfont{\it} \begin{avm} \[{nonloc}udc & \[{udc} to-bind & nonloc1 \\ inherited & nonloc1 \]\\ bind & \[{bind} list-a & list(refm) \\ list-z & list(refm) \\ list-u & list(refm) \\ list-lu & list(refm) \] \] \end{avm} \bigskip Given this extension, HPSG principles constraining the {\sc nonloc} feature structure, or part of it, should be fine-tuned with adjusted feature paths in order to correctly target the intended (sub)feature structures. } \subsection{Handling the anaphoric potential} \textbf{Pronouns: lexical entry} Given this adjustment to the grammatical geometry, the lexical definition of a pronoun, for instance, will include the following {\sc synsem} value: \begin{exe} \ex\label{pronfeat} \avmoptions{active} \avmfont{\sc} \avmvalfont{\it} \begin{avm} [loc|cont & { [ ls & [l-max & @1\\ l-min & @1 ]\\ subord & \rm \{\} \\ conds & \{[label & @1\\ dref & @2 ]\}\\ anaph & [r-mark & @2\\ antec & \it @5 principleB\ (@4,@3,@2)] ] }\\ nonloc|bind & { [list-a & @3 \\ list-z & list\(refm\) \\ list-u & @4 \\ list-lu & <@2> \\] } ] \end{avm} \end{exe} In this feature structure, the semantic condition in {\sc conds} associated with the pronoun corresponds simply to the introduction of the discourse referent \raisebox{-.6ex}{\begin{avm}\@2\end{avm}} as the value of {\sc dref}. This semantic representation is expected to be further specified as the lexical entry of the pronoun gets into the larger representation of the relevant utterance.
In particular, the {\sc conds} value of the sentence will be enhanced with a condition specifying the relevant semantic relation between this reference marker \raisebox{-.6ex}{\begin{avm}\@2\end{avm}} and one of the reference markers in the value \raisebox{-.6ex}{\begin{avm}\@5\end{avm}} of {\sc antec}. The latter will be the antecedent against which the pronoun will happen to be resolved, and the condition where the two markers will be related represents the relevant type of anaphora assigned to the anaphoric relation between the anaphor and its antecedent.\footnote{ More details on the interface with anaphora resolvers and on the semantic types of anaphora in Section \ref{discuss}.} The anaphoric binding constraint associated to pronouns, in turn, is specified as the relational constraint \textit {principleB}/3 in the value of {\sc antec}. This is responsible for the realisation of the anaphoric potential of the pronoun as it enters a grammatical construction. When the arguments of this relational constraint are instantiated, it returns list {\bf B} as the value of {\sc antec}. As discussed in Section \ref{semanticPatterns}, this relational constraint \textit {principleB}/3 is defined to take all markers in the discourse context (in the first argument and given by the {\sc list-u} value), and remove from them both the local \mbox{o-commanders} of the pronoun (included in the second argument and made available by the {\sc list-a} value) and the marker corresponding to the pronoun (in the third argument and given by the {\sc dref} value). Finally, the contribution of the reference marker of the pronoun to the context is ensured via token-identity between {\sc r-mark} and a {\sc list-lu} value. The piling up of this reference marker in the global {\sc list-u} value is determined by a new HPSG principle specific to Binding Theory, to be detailed in the next Section \ref{contextRep}. 
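As an illustration of the intended semantics of {\it principleB}/3, the following sketch computes list {\bf B} procedurally. It is merely illustrative: the grammar states the constraint declaratively as a relational constraint, and the function name, the representation of reference markers as strings, and the example markers are our assumptions.

```python
def principle_b(list_u, list_a, r_mark):
    """List B: every marker in the discourse context (LIST-U), minus the
    local o-commanders of the pronoun and the pronoun's own marker."""
    # LIST-A is ordered by obliqueness, so the predecessors of r_mark
    # in it are the pronoun's local o-commanders.
    local_o_commanders = list_a[:list_a.index(r_mark)] if r_mark in list_a else []
    excluded = set(local_o_commanders) | {r_mark}
    return [m for m in list_u if m not in excluded]
```

For a pronoun with (hypothetical) marker x4 whose local obliqueness hierarchy is [x3, x4], and with context markers [x1, x2, x3, x4], the sketch returns [x1, x2]: the local o-commander x3 and the pronoun's own marker are filtered out, as in the prose definition above.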
\textbf{Non pronouns and reflexives: lexical entries} The {\sc synsem} values of other anaphors --- ruled by Principles~A, C or Z --- are similar to the {\sc synsem} value of pronouns above. The basic difference lies in the relational constraints to be stated in the {\sc antec} value. Such constraints ---~{\it principleA}/2, {\it principleC}/3 and {\it principleZ}/2 --- encode the corresponding binding principles and return the realised anaphoric potential of anaphors according to the surrounding context, coded in their semantic representation under the form of a list in the {\sc antec} value. Such lists --- {\bf A'}, {\bf C} or {\bf Z'}, respectively --- are obtained by these relational constraints along the lines discussed in Section \ref{semanticPatterns}. \textbf{Non lexical anaphoric expressions} Note that, for non-lexical anaphoric nominals in \ili{English}, namely those ruled by Principle C, the binding constraint is stated in the lexical representation of the determiners contributing to the anaphoric capacity of such NPs. The reference marker corresponding to an NP of this kind is likewise brought into its semantic representation from the {\sc r-mark} value specified in the lexical entry of its determiner. Accordingly, for the values of {\sc anaph} to be visible in the signs of non-lexical anaphors, Clause I of the Semantics Principle in UDRT\footnote{ \citep[p.12]{frank:sem95}.} is extended with the requirement that the {\sc anaph} value is token-identical, respectively, with the {\sc anaph} value of the specifier daughter, in an NP, and with the {\sc anaph} value of the nominal complement daughter, in a subcategorised PP.
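The remaining relational constraints can be sketched in the same illustrative style. The definitions below follow the characterisation of lists {\bf A'}, {\bf Z'} and {\bf C} in Section \ref{semanticPatterns} as we render it procedurally; the argument order and the use of {\sc list-z} in {\it principleC} are our assumptions, not part of the grammar specification.

```python
def principle_a(list_a, r_mark):
    """List A': the predecessors of the reflexive's marker in the
    obliqueness-ordered LIST-A, i.e. its local o-commanders.
    An empty result signals an exempt occurrence."""
    return list_a[:list_a.index(r_mark)] if r_mark in list_a else []

def principle_z(list_z, r_mark):
    """List Z': the predecessors of the marker in LIST-Z,
    i.e. its o-commanders."""
    return list_z[:list_z.index(r_mark)] if r_mark in list_z else []

def principle_c(list_u, list_z, r_mark):
    """List C: every marker in the context except the non-pronoun's
    o-commanders and its own marker."""
    excluded = set(principle_z(list_z, r_mark)) | {r_mark}
    return [m for m in list_u if m not in excluded]
```

Note that when the reflexive's marker is the first element of LIST-A, {\it principle\_a} returns the empty list, mirroring the grammar's signalling of exemption.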
\textbf{Exemption} Note also that for short-distance reflexives, exemption from the effect of the corresponding Principle A occurs when \raisebox{-0.085cm}{ \begin{avm} {\it principleA}(\@3,\@2) \end{avm}} returns the empty list as the value of feature {\sc antec}:% \footnote{ This account applies also to exempt occurrences of long-distance reflexives.} \begin{exe} \ex \avmoptions{active} \avmfont{\sc} \avmvalfont{\it} \begin{avm} [loc|cont & { [ ls & [l-max & @1\\ l-min & @1 ]\\ subord & \rm \{\} \\ conds & \{[label & @1\\ dref & @2 ]\}\\ anaph & [r-mark & @2\\ antec & \it @4 principleA\ (@3,@2)] ] }\\ nonloc|bind & { [list-a & @3 \\ list-z & list\(refm\) \\ list-u & list\(refm\) \\ list-lu & <@2> \\] } ] \end{avm} \end{exe} This happens if the reference marker of the reflexive \raisebox{-0.085cm}{ \begin{avm} \@2 \end{avm}} is the first element in the relevant obliqueness hierarchy, i.e.\ it is the first element in the {\sc list-a} value in \raisebox{-0.085cm}{ \begin{avm} \@3 \end{avm}}, thus \mbox{o-commanding} the other possible elements of this list and not being \mbox{o-commanded} by any of them. As discussed in Section \ref{Exemption}, given its essential anaphoricity, a reflexive has nevertheless to be interpreted against some antecedent. As in the exempt occurrences no antecedent candidate is identified by virtue of Principle A activation, the anaphora resolver --- which will operate then on the empty {\sc antec} list% \footnote{ More details of the interface between grammar and reference processing systems in Section~\ref{discuss}.} --- has thus to resort to antecedent candidates outside the local domain of the reflexive: this implies that it has to find antecedent candidates for the reflexive which actually escape the constraining effect of Principle A. 
The anaphora resolver will then be responsible for modelling the behaviour of reflexives in such exempt occurrences, in which case the anaphoric capacity of these anaphors appears as being exceptionally ruled by discourse-based factors. \subsection{Handling the context representation}\label{contextRep} \is{Binding Domains Principle} Turning now to the representation of the context, this consists in the specification of the constraints on the values of the attributes {\sc list-a}, {\sc list-z}, {\sc list-u} and {\sc list-lu}. This is handled by adding an HPSG principle to the grammar, termed the Binding Domains Principle (BDP). This principle has three clauses constraining signs with respect to these four lists of reference markers. A full understanding of their details, presented below, will be facilitated by the working example discussed in detail in the Appendix. \textbf{Binding Domains Principle, Clause I} Clause I of BDP is responsible for ensuring that the values of {\sc list-u} and \mbox{{\sc list-lu}} are appropriately set up at the different places in a grammatical representation: \begin{exe} \ex\label{bdp} \textbf{Binding Domains Principle}, Clause I \begin{xlisti} \ex The {\sc list-lu} value is identical to the concatenation of the {\sc list-lu} values of its daughters in every sign; \ex the {\sc list-lu} and {\sc list-u} values are token-identical in a sign of sort {\it discourse}; \ex \begin{xlisti} \ex the {\sc list-u} value is token-identical to each {\sc list-u} value of its daughters in a non-NP sign; \ex in an NP sign {\it k}: \begin{itemize} \item in Spec-daughter, the {\sc list-u} value is the result of removing the elements of the {\sc list-a} value of Head-daughter from the {\sc list-u} value of {\it k}; \item in Head-daughter, the {\sc list-u} value is the result of removing the value of {\sc r-mark} of Spec-daughter from the {\sc list-u} value of {\it k}.
\end{itemize} \end{xlisti} \end{xlisti} \end{exe} \noindent By virtue of (i.), {\sc list-lu} collects up to the outermost sign in a grammatical representation --- which is of sort {\it discourse} --- the markers contributed to the context by each NP. Given (ii.), this list with all the markers is passed to the {\sc list-u} value at this outermost sign. And (iii.) ensures that this list with the reference markers in the context is propagated to every NP. Subclause (iii.ii) prevents self-reference loops due to anaphoric interpretation, avoiding what is known in the literature as the i-within-i effect --- recall that the {\sc r-mark} value of non-lexical NPs is contributed by the lexical representation of their determiners, in Spec-daughter position, as noted above. \is{i-within-i effect} The HPSG top ontology is thus extended with the new subsort {\it discourse} for signs: $sign \equiv word \vee phrase \vee discourse$. This new type of linguistic object corresponds to sequences of sentential signs. A new Schema 0 is also added to the Immediate Dominance Principle, where the Head daughter is a phonologically null object of sort {\em context(ctx)}, and the Text daughter is a list of phrases. As the issue of discourse structure is outside the scope of this paper, we adopted a very simple approach to the structure of discourses which suffices for the present account of Binding Theory. As discussed in the next \mbox{Section \ref{verif},} this object of sort {\em ctx} helps represent the contribution of the non-linguistic context to the interpretation of anaphors. \textbf{Binding Domains Principle, Clause II} As to the other two clauses of the Binding Domains Principle, Clause II and Clause III, they constrain the lists {\sc list-a} and {\sc list-z}, respectively, whose values keep a record of o-command relations.
BDP-Clause II is responsible for constraining {\sc list-a}: \begin{exe} \ex \textbf{Binding Domains Principle}, Clause II \begin{xlisti} \ex Head/Arguments: in a phrase, the {\sc list-a} values of its head and of its nominal (or nominal preceded by preposition) or trace Subject or Complement daughters are token-identical; \ex Head/Phrase: \begin{xlisti} \ex in a non-nominal and non-prepositional sign, the {\sc list-a} values of a sign and its head are token-identical; \ex in a prepositional phrase, \begin{itemize} \item if it is a complement daughter, the {\sc list-a} values of the phrase and of its nominal complement daughter are token-identical; \item otherwise, the {\sc list-a} values of the phrase and its head are token-identical; \end{itemize} \ex in a nominal phrase, \begin{itemize} \item in a maximal projection, the {\sc list-a} values of the phrase and its Specifier daughter are token-identical; \item in other projections, the {\sc list-a} values of the phrase and its head are token-identical. \end{itemize} \end{xlisti} \end{xlisti} \end{exe} This clause ensures that the {\sc list-a} value is shared between a head-daughter and its arguments, given (i.), and also between the lexical heads and their successive projections, by virtue of (ii.). \textbf{O-command} On a par with this Clause II, it is important to make sure that, at the lexical entry of any predicator {\it p}, {\sc list-a} includes the {\sc r-mark} values of the subcategorised arguments of {\it p} specified in its {\sc arg-st} value. Moreover, the reference markers appear in the {\sc list-a} value under the same partial order as that of the corresponding {\em synsem} objects in {\sc arg-st}.
This is ensured by the following constraints on the lexical entries of predicators: \begin{samepage} \begin{exe} \ex\label{lexconst} \end{exe} \avmoptions{active} \avmfont{\sc} \avmvalfont{\it} \begin{avm} \hfill \sort{{\it synsem}}{[loc|cont|arg-st <$\cdots$, [loc|cont|anaph|r-mark & @i],$\cdots$>\\ ]} \end{avm} \begin{flushright} $\longrightarrow$ \begin{avm} \hfill \sort{{\it synsem}}{[nonloc|bind|list-a <$\cdots$, @i,$\cdots$>]} \end{avm} \end{flushright} \begin{avm} \hfill \sort{{\it synsem}}{[loc|cont|arg-st <$\cdots$, [loc|cont|anaph|r-mark & @k],\\ $\cdots$, [loc|cont|anaph|r-mark & @l],$\cdots$> ]} \end{avm} \begin{flushright} $\longrightarrow$ \begin{avm} \hfill \sort{{\it synsem}}{[nonloc|bind|list-a <$\cdots$, @k,$\cdots$, @l,$\cdots$>]} \end{avm} \\ \end{flushright} \end{samepage} In case a subcategorised argument is quantificational, it also contributes its {\sc var} value to the make-up of {\sc list-a}:\footnote{ More details on this and on the e-type anaphora vs. bound-variable anaphora distinction are discussed in the next sections.} \begin{exe} \ex\label{lexconst2} \end{exe} \avmoptions{active} \avmfont{\sc} \avmvalfont{\it} \begin{avm} \hfill \sort{{\it synsem}}{[loc|cont|arg-st <$\cdots$, [loc|cont|anaph [r-mark & @r\\var & @v]],$\cdots$>]} \end{avm} \begin{flushright} $\longrightarrow$ \begin{avm} \sort{{\it synsem}}{[nonloc|bind|list-a <$\cdots$, @v, @r,$\cdots$>]} \end{avm} \end{flushright} \avmoptions{} \pagebreak \textbf{Binding Domains Principle, Clause III} Finally, BDP-Clause III ensures that {\sc list-z} is properly constrained: \begin{samepage} \begin{exe} \ex \textbf{Binding Domains Principle}, Clause III\\ For a sign F: \begin{xlisti} \ex in a Text daughter, the {\sc list-z} and {\sc list-a} values are token-identical; \ex in a non-Text daughter, \begin{xlisti} \ex in a sentential daughter, the {\sc list-z} value is the concatenation of the {\sc list-z} value of F with the {\sc list-a} value; \ex in a Head daughter of a
non-lexical nominal, the {\sc list-z} value is the concatenation of L with the {\sc list-a} value, where L is the list of the o-commanders, within the {\sc list-z} value of F, of the {\sc r-mark} value (or of the {\sc var} value, when this exists) of its Specifier sister; \ex in other, non-filler, daughters of F, the {\sc list-z} value is token-identical to the {\sc list-z} value of F. \end{xlisti} \end{xlisti} \end{exe} \end{samepage} By means of (i.), this Clause III ensures that, at the top node of a grammatical representation, {\sc list-z} is set up as the {\sc list-a} value of that sign. Moreover, given (ii.), it is ensured that {\sc list-z} is successively incremented at suitable downstairs nodes --- those defining successive locality domains for binding, as stated in (ii.i) and (ii.ii) --- by appending, in each of these nodes, the {\sc list-a} value to the {\sc list-z} value of the upstairs node. \textbf{Locality} From this description of the Binding Domains Principle, it follows that the locus in grammar for the parameterisation of what counts as a local domain for a particular language is the specification of BDP--Clauses II and III for that language. \is{anaphora resolution} \is{reference processing} \section{Interface with Reference Processing Systems \label{discuss}} The appropriateness of the grammatical constraints on anaphoric binding presented above extends to their suitable accounting of the division of labor between grammars and reference processing systems, and of the suitable interfacing between them. \subsection{Anaphora Resolution \label{resolvers}} While the grammatical anaphoric binding constraints are specified and verified as part of the global set of grammatical constraints, they also provide for a suitable hooking up of the grammar with modules for anaphora resolution.
Feature {\sc antec} is the neat interface point between them: its value, a list of antecedent candidates that comply with Binding Theory requirements, is easily made accessible to anaphora resolvers. This list will then be handled by a resolver, where further non-grammatical soft and hard constraints on anaphora resolution will apply and filter down that list until the most likely candidate is determined as the antecedent. \subsection{Reference Processing\label{semanticTypes}} The anaphoric binding constraints also provide a convenient interface for anaphoric links of different semantic types --- exemplified below --- to be handled and specified by reference processing systems: \begin{exe} \ex \begin{xlist} \ex {John$_{i}$ said that he$_{i}$ would leave soon.} (coreference) \label{anTypes} \ex {Kim$_{i}$ was introduced to Lee$_{j}$ and a few minutes later they$_{i+j}$ went off for dinner.} (split anaphora) \label{anTypesb} \ex {Mary could not take [her car]$_{i}$ because [the tyre]$_{i}$ was flat.} (bridging anaphora) \label{anTypesc} \ex {[Fewer than twenty Parliament Members]$_{i}$ voted against the proposal because they$_{i}$ were afraid of riots in the streets.} (e-type anaphora) \label{anTypesd} \ex {[Every sailor in the Bounty]$_{i}$ had a tattoo with [his mother's]$_{i}$ name on the left shoulder.} (bound anaphora) \label{anTypese} \end{xlist} \end{exe} \is{coreference} \is{split antecedent} \is{bridging anaphora} \is{e-type anaphora} \is{bound anaphora} Example (\ref{anTypes}) displays a coreference relation, where {\it he} has the same semantic value as its antecedent {\it John}. A case of split antecedents can be found in (\ref{anTypesb}), as {\it they} has two syntactic antecedents and refers to an entity comprising the referents of both.
The referent of {\it the tyre} is part of the referent of its antecedent {\it her car} in (\ref{anTypesc}), thus illustrating a case of so called bridging anaphora (also known as indirect or associative anaphora), where an anaphor may refer to an entity that is e.g.\ an element or part of the denotation of the antecedent, or an entity that includes the denotation of the antecedent, etc.\footnote{ See~\citep{poesio:ana98} for an overview. } In (\ref{anTypesd}) {\it they} has a so called non-referential antecedent, {\it fewer than twenty Parliament Members}, from which a reference marker is inferred to serve as the semantic value of the plural pronoun: {\it they} refers to those Parliament Members, who are fewer than twenty in number, and who voted against the proposal. Example (\ref{anTypesd}) illustrates a case of e-type anaphora,\footnote{ \citep{evans:pron80}.} and this inference mechanism to obtain an antecedent marker from a non-referring nominal is described in Section~\ref{circAnaphPotential}. Finally, in (\ref{anTypese}), though one also finds a quantificational antecedent for the anaphoric expression, the relation of semantic dependency differs from the one in the previous example. The anaphoric expression {\it his mother} does not refer to the mother of the sailors of the Bounty. It acts rather in the way of a bound variable of logical languages --- for each sailor $s$, {\it his mother} refers to the mother of $s$ --- thus exemplifying a case of so called bound anaphora.\footnote{ \citep{reinhart:bound83}. } Given that the semantic relation between antecedent marker and anaphor marker can be specified simply as another semantic condition added to the {\sc conds} value, a DRT/HPSG representation for the resolved anaphoric link under the relevant semantic type of anaphora is straightforward and the integration of the reference processing outcome into grammatical representation is seamlessly ensured.
For the sake of the illustration of this point, assume that a given reference marker {\bf x} turns out to be identified as the antecedent for the anaphoric nominal Y, out of the set of antecedent candidates for Y in its {\sc antec} value. This antecedent {\bf x} can be related to the reference marker {\bf y} of anaphor Y by means of an appropriate semantic condition in its {\sc conds} value. Such a condition will be responsible for modelling the specific mode of anaphora at stake. For instance, coreference will require the expected condition {\bf y}=_{coref}{\bf x}, as exemplified below with the {\sc cont} value of the pronoun in (\ref{pronfeat}) extended with a solution contributed by an anaphor resolver, where \raisebox{-0.7ex}{\em \begin{avm}\@{7}\end{avm}} would be the marker picked up as the plausible antecedent. \begin{exe} \ex \avmoptions{active} \avmfont{\sc} \avmvalfont{\it} \begin{avm} [ ls & [l-max & @1\\ l-min & @1 ]\\ subord & \rm \{@1=@6\} \\ conds & \{[label & @1\\ dref & @2 ], [label & @6\\ rel & $=$_{coref}\\ arg1 & @2\\ arg2 & @7]\}\\ anaph & [r-mark & @2\\ antec & @5<..., @7,...>] ] \end{avm} \end{exe} An instance of bridging anaphora, in turn, may be modelled by {\it bridg}({\bf x}, {\bf y}), where {\it bridg} stands for the relevant bridging function between {\bf y} and {\bf x}, and similarly for the other semantic anaphora types. \subsection{Coreference Transitivity \label{transitivity}} It is also noteworthy that the interfacing of grammar with reference processing systems ensured by anaphoric binding constraints provides a neat accommodation of coreference transitivity. If as a result of the process of anaphora resolution, a given anaphor N and another anaphor B end up being both coreferent with a given antecedent A, then they end up being coreferent with each other. 
That is, in addition to having marker {\em r_{a}} as an admissible antecedent in its set of candidate antecedents, that anaphor N has also to eventually have marker {\em r_{b}} included in that set. This is ensured by including, in the {\sc conds} value in (\ref{pronfeat}), semantic conditions that follow as logical consequences from this overall coreference transitivity requirement that is operative at the level of the reference processing system with which grammar is interfaced: $\forall r_{a},r_{b} ((\raisebox{-.6ex}{\em \begin{avm}\@2\end{avm}}$=_{coref}$r_{a} \wedge r_{b}$=_{coref}$r_{a}) \Rightarrow (\langle r_{b}\rangle \cup\raisebox{-.6ex}{\em \begin{avm}\@5\end{avm}} = \raisebox{-.6ex}{\em \begin{avm}\@5\end{avm}}))$. An important side effect of this overall constraint is that ``accidental'' violations of Principle B are prevented, as illustrated with the help of the following example. \begin{exe} \ex [*]{The captain_{i/j} thinks he_{i} loves him_{j}.} \end{exe} Given that the Subject of the main clause, {\em the captain}, does not locally o-command any one of them, either the pronoun {\em he} or the pronoun {\em him} can have the nominal phrase {\em the captain} as antecedent, in compliance with Principle B. By transitivity of anaphoric coreference, though, the reference marker of {\em he} is made to belong to the admissible set of antecedents of {\em him}, which violates Principle B. Hence, by the conjoined effect of coreference transitivity and of Principle~B, that ``accidental'' violation of Principle B that would make {\em he} an (o-commanding) antecedent of {\em him} in this example is (correctly) blocked.
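The blocking effect just described can be emulated outside the grammar with a small procedural sketch. This is purely illustrative of the reference processing side: the representation of resolved coreference links as marker pairs, and of local o-command as a mapping from each pronoun's marker to the markers that locally o-command it, are our assumptions.

```python
def coref_classes(links):
    """Merge resolved coreference links (anaphor marker, antecedent marker)
    into equivalence classes of markers, i.e. the transitive closure."""
    classes = []
    for pair in links:
        merged = set(pair)
        rest = []
        for cls in classes:
            if cls & merged:
                merged |= cls  # overlapping classes collapse into one
            else:
                rest.append(cls)
        classes = rest + [merged]
    return classes

def violates_principle_b(links, local_o_commanders):
    """A resolution is blocked when transitivity makes a pronoun
    corefer with one of its local o-commanders."""
    for cls in coref_classes(links):
        for pron, commanders in local_o_commanders.items():
            if pron in cls and cls & set(commanders):
                return True
    return False
```

For the example above, with links he--captain and him--captain and with {\em he} locally o-commanding {\em him}, the closure places the marker of {\em he} in the coreference class of {\em him} and the check reports a Principle B violation; with only one of the two links, no violation arises.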
By the same token, ``accidental'' violations of Principle C with an analogous pattern as above, but for non-pronouns, are prevented: \begin{exe} \ex [*]{When John_{i/j} will conclude his therapy, [the boy_{i} will stop believing [that the patient_{j} is a Martian]].} \label{transitivityC} \end{exe} Separately, {\em the boy} and {\em the patient} can have {\em John} as antecedent, in accordance with Principle C. But {\em the patient} --- because it is o-commanded by {\em the boy} --- cannot have {\em the boy} as antecedent, which, also here, is (correctly) ensured by a conjoined effect of the coreference transitivity requirement and the relevant Principle~C. Accordingly, when the semantic type of anaphora is not one of coreference, no coreference transitivity holds, and no ``accidental'' violation of Principle~C occurs. This is illustrated in the following example with bridging anaphora instead, where two non-pronouns, though occurring in the same clause, as in (\ref{transitivityC}), can be (correctly) resolved against the same antecedent --- in contrast with example (\ref{transitivityC}) above, where such a possibility is blocked. \begin{exe} \ex \gll Quando [o robot]_{i} concluiu a tarefa, o operador viu que [a roda]_{i} estava a esmagar [o cabo de alimenta\c{c}\~ao]_{i}. (\ili{Portuguese})\\ when the robot concluded the task, the operator saw that the wheel was to crush the cord of power\\ \trans 'When [the robot]_{i} concluded the task, the operator saw that [his_{i} wheel] was crushing [his_{i} power cord].' \end{exe} Another range of examples where the semantic type of anaphora is not one of coreference --- and thus where no coreference transitivity holds and (correctly) no ``accidental'' violation of the respective binding principle occurs --- can be found for reflexives, as illustrated in the following example.
\begin{exe} \ex {The captain_{i} thinks he_{i/j} loves himself_{*i/j}.} \label{accidentalReflexives} \end{exe} The reflexive {\em himself} can have {\em he} as antecedent, because the latter locally o-commands it, but cannot have {\em the captain} as antecedent because the latter does not locally o-command it. But while the semantic anaphoric relation between {\em the captain} and {\em he} is one of coreference, the semantic anaphoric relation between {\em he} and {\em himself} is not, being rather one of bound anaphora.%
\footnote{ Confluent evidence that reflexives entertain a bound anaphora relation with their antecedents was also observed when their inability to enter split anaphora relations in non-exempt positions was noted in Section \ref{Exemption}. } Hence, the coreference transitivity requirement does not apply and the referent of {\em the captain} does not land in the set of possible antecedents of the reflexive, thus not inducing an ``accidental'' violation of Principle A. Example (\ref{accidentalReflexives}) can thus felicitously be interpreted as the captain thinking that the agent of loving him is himself, resulting from {\em himself} having {\em he} as antecedent and {\em he} having {\em the captain} as antecedent. \section{Binding Constraints for Antecedents \label{reverse}} The Binding Theory presented in this paper is also serendipitous in terms of improving the accuracy of the empirical predictions offered by a formal grammar with respect to anaphoric binding restrictions that are outside the realm of the binding principles in (\ref{PrincipleA})-(\ref{PrincipleC}).
\is{e-type anaphora} \is{bound anaphora} Note first that a reference marker introduced by a non quantificational NP can be the antecedent either of an anaphor that it o-commands, as in (\ref{nonquantOcom}), or of an anaphor that it does not o-command, as in (\ref{nonquantNotOcom}): \begin{exe} \ex \begin{xlist} \ex[]{$[$The captain who knows this sailor]_{i} thinks Mary loves him_{i}.} \label{nonquantOcom} \ex[]{$[$The captain who knows [this sailor]_{i}] thinks Mary loves him_{i}.} \label{nonquantNotOcom} \end{xlist} \end{exe} Unlike a non quantificational NP, which contributes one reference marker to the representation of the context, a quantificational NP contributes two markers that exhibit symmetric features with respect to each other in several respects. The fact that one of them can serve as an antecedent in e-type anaphora, while the other can serve as an antecedent in bound-variable anaphora, is certainly one such symmetry.\footnote {Extensive discussion of this difference is presented in the Appendix.} But there are more. Let us take a quantificational NP, introduced for instance by the quantifier {\em every}, acting as an antecedent. Such an NP imposes different Number requirements on its anaphors depending on the type of anaphora relation at stake --- e-type or bound-variable anaphora --- so that the underlying occurrence of each one of the corresponding two markers can be tracked down.
For ease of reference, let us term the marker ensuring e-type anaphora the e-marker, and the marker ensuring bound anaphora the v-marker.\footnote{ In the formalisation presented in the Appendix, an e-marker is the marker in the {\sc r-mark} value, introduced by \mbox{\textSigma-abstraction}, and a v-marker is the marker in the {\sc var} value, introduced by the restrictor argument of the determiner.} The contrast below illustrates that, in an e-type anaphoric link, the e-marker stands for a plurality: \begin{exe} \ex[]{Every sailor_{i} has many girlfriends. They_{i}/He_{*i} travel(s) a lot.} \end{exe} And the next contrast illustrates that, in a bound-variable anaphoric link, the v-marker is singular: \begin{exe} \ex[]{Every sailor_{i} shaves themselves_{*i}/himself_{i}.} \end{exe} The following contrasts can now be considered. An e-marker can be the antecedent of anaphors that it does not o-command, in (\ref{etypeb}), but cannot be the antecedent of anaphors that it o-commands, in (\ref{etypea}): \begin{exe} \ex\label{etype} \begin{xlist} \ex[*]{$[$Every captain who knows this sailor]_{i} thinks Mary loves them_{i}.} \label{etypea} \ex[]{$[$The captain who knows [every sailor]_{i}] thinks Mary loves them_{i}.} \label{etypeb} \end{xlist} \end{exe} This contrast is symmetric to the contrast for the other reference marker: a v-marker can be the antecedent of anaphors that it o-commands, in (\ref{weakcrossovera}), but cannot be the antecedent of anaphors that it does not o-command, in (\ref{weakcrossoverb}): \begin{exe} \ex\label{weakcrossover} \begin{xlist} \ex[]{$[$Every captain who knows this sailor]_{i} thinks Mary loves him_{i}.} \label{weakcrossovera} \ex[*]{$[$The captain who knows [every sailor]_{i}] thinks Mary loves him_{i}.} \label{weakcrossoverb} \end{xlist} \end{exe} As these contrasts are empirically observed as patterns holding for quantificational NPs in general (not only for those introduced by {\em every}), constraints emerge on which
anaphors different markers can be the antecedents of, when such markers are contributed by quantificational NPs. E-markers and v-markers of a given quantificational NP induce a partition of the space of their possible anaphors when that NP is acting as an antecedent: a~\mbox{v-marker} is an antecedent for anaphors in the set of its o-commanded anaphors, while an e-marker is an antecedent for anaphors in the complement of that set, i.e.\ in the set of its non o-commanded anaphors. This implies that on a par with the grammatical constraints on {\em the relative positioning of antecedents with respect to anaphors} in (\ref{PrincipleA})-(\ref{PrincipleC}), there are also grammatical constraints on {\em the relative positioning of anaphors with respect to their antecedents} when the corresponding markers are introduced by quantificational NPs. Building on the same auxiliary notions, these ``reverse'' binding constraints receive the following definition as R-Principles E and V: \is{R-Principle V} \is{R-Principle E} \begin{exe} \ex\label{reverseBindingPrinciples} {\textbf{R-Principle E:}} An antecedent cannot o-bind its anaphor (in \mbox{e-type} anaphora). \sn {$[$Every captain who knows [every sailor]_{i}]_{j} thinks Mary loves them_{i/*j}.} \end{exe} \begin{exe} \sn {\textbf{R-Principle V:}} An antecedent must o-bind its anaphor (in bound-anaphora). \sn {$[$Every captain who knows [every sailor]_{i}]_{j} thinks Mary loves him_{*i/j}.} \end{exe} It is worth noting that these principles also account for what has been observed in the literature as the weak crossover effect.% \footnote{See \citep[Sec.2.1]{jacobson:paycheck2000} for an extensive overview of accounts of weak crossover. For an account of strong crossover in HPSG see \citep[p.279]{polsag:hpsg94}.
} In the example below, displaying a case of weak crossover, the anaphoric link is ruled out by \mbox{R-Principle V} since the quantificational NP {\em every sailor} does not o-command the pronoun {\em him}, which is singular and could thus enter only into a bound-anaphora relation. \begin{exe} \ex[*]{$[$The captain who knows him_{i}] thinks Mary loves every sailor_{i}.} \end{exe} Weak crossover constructions thus appear as a sub-case of the class of constructions ruled out by the binding constraints for antecedents.% \footnote{ To the best of our knowledge, the integration of the reverse anaphoric constraints E and V in (\ref{reverseBindingPrinciples}) into HPSG --- like what is obtained in Section \ref{spec1} above for Principles A-Z in (\ref{PrincipleA})-(\ref{PrincipleC}) --- has not yet been worked out in the literature. Besides an explicit formal specification of (\ref{reverseBindingPrinciples}) in terms of HPSG, there are also empirical aspects that remain to be addressed in future work. For weak crossover, for instance, it is interesting to note Jacobson's remarks: ``... it is well known that weak crossover (WCO) is indeed weak, and that the effect can be ameliorated in a variety of configurations. To list a few relevant observations: WCO violations are much milder if the offending pronoun is within a sentence rather than in an NP; the more deeply one embeds the offending pronoun the milder the WCO effect; WCO effects are ameliorated or even absent in generic sentences; they are milder in relative clauses than in questions [...] For example, the possibility of binding in {\em Every man's_{i} mother loves him_{i}} remains to be accounted for.'' \citep[p.120]{jacobson:paycheck2000}.
} \section{Outlook \label{outlook}} With the material presented in the sections above, it emerges that the grammar of anaphoric binding constraints builds on the following key ingredients: \begin{itemize} \item Interpretation: binding constraints are grammatical constraints on interpretation contributing to the contextually determined semantic value of anaphors --- rather than syntactic wellformedness constraints. \item Lexicalisation: binding constraints are properties of anaphors determining how their semantic value can be composed or co-specified, under a non-local syntactic geometry, with the semantic value of other expressions --- rather than properties of grammatical representations of sentences as such: accordingly, the proper place of these constraints in grammar is at the lexical description of the relevant anaphoric units (e.g. the English pronoun {\em him}, or the Portuguese multiword long distance reflexive {\em ele pr\'{o}prio}) or the anaphora inducing items (e.g. the English definite article {\em the} that introduces non-pronouns). \item Underspecification: binding constraints delimit how the anaphoric potential of anaphors can be realised when they enter a grammatical construction --- rather than determining the eventual antecedent: on the one hand, this realisation of anaphoric potential is not a final solution in terms of circumscribing the elected antecedent, but a space of grammatically admissible solutions; on the other hand, this realisation of anaphoric potential has to be decided, locally, in terms of non-local information: accordingly, an underspecification-based strategy is required to pack ambiguity and non-locality. 
\item Articulation: binding constraints are grammatical constraints --- rather than anaphora resolvers: accordingly, grammars, where grammatical ana\-phoric constraints reside, and reference processing systems, where further constraints on the resolution of anaphora reside, are autonomous with respect to each other, and their specific contribution gains from them being interfaced, rather than being mixed up. \end{itemize} Binding principles capture the relative positioning of anaphors and their admissible antecedents in grammatical representations. As noted in the introduction of the present paper, together with their auxiliary notions, they have been considered one of the most outstanding modules of grammatical knowledge. From an empirical perspective, these constraints stem from quite cogent generalisations and exhibit a universal character, given the hypothesis of their parameterised validity across anaphoric expressions and natural languages. From a conceptual point of view, in turn, the relations among binding constraints involve non-trivial cross symmetry that lends them a modular nature and provides further strength to the plausibility of their universal character. To conclude the overview presented in this paper, the remaining two subsections present intriguing and promising research questions, respectively for symbolic and neural approaches. \subsection{Symbolic} \textbf{Symmetries} The recurrent complementary distribution of the admissible antecedents of a pronoun and of a short-distance reflexive in the same, non exempt syntactic position, in different languages from different language families, has perhaps been the most emblematic symmetry. For the sake of convenience, the examples in (\ref{PrincipleA})-(\ref{PrincipleC}) are copied to (\ref{exA})-(\ref{exC}) below. The pair (\ref{exA}) vs.
(\ref{exB}), with the anaphoric expressions in the same syntactic position of the same syntactic construction, illustrates the symmetry just mentioned, between reflexives and pronouns, suggestively grasped by comparing the starred and non starred indexes. \pagebreak \begin{exe} \ex \label{exA} {...{\em X}$_{x}$...[Lee$_{i}$'s friend]$_{j}$ thinks [[Max$_{k}$'s brother]$_{l}$ likes himself$_{*x/*i/*j/*k/l}$].} \ex \label{exZ} \gll ...{\em X}$_{x}$...[O amigo do Lee$_{i}$]$_{j}$ acha [que [o irm\~{a}o do Max$_{k}$]$_{l}$ gosta dele pr\'{o}prio$_{*x/*i/j/*k/l}$]. (\ili{Portuguese})\\ \mbox{ }\mbox{ }\mbox{ }\mbox{ }\mbox{ }\mbox{ }\mbox{ }\mbox{ }\mbox{ }\mbox{ }\mbox{ }\mbox{ }the friend of.the Lee thinks \mbox{ }that \mbox{ }the brother of.the Max likes of.him self\\ \trans '...{\em X}$_{x}$...[Lee$_{i}$'s friend]$_{j}$ thinks [[Max$_{k}$'s brother]$_{l}$ likes him$_{*x/*i/j/*k}$ / \linebreak himself$_{l}$].' \ex \label{exB} {...{\em X}$_{x}$...[Lee$_{i}$'s friend]$_{j}$ thinks [[Max$_{k}$'s brother]$_{l}$ likes him$_{x/i/j/k/*l}$].} \ex \label{exC} {...{\em X}$_{x}$...[Lee$_{i}$'s friend]$_{j}$ thinks [[Max$_{k}$'s brother]$_{l}$ likes the boy$_{x/i/*j/k/*l}$].} \end{exe} But given also the complementary distribution of the admissible antecedents of a long-distance reflexive and of a non pronoun in the same, non exempt syntactic position, a similar symmetry is also found between these two other types of anaphors. This is illustrated by the complementarity of the indexes in (\ref{exZ}) vs. (\ref{exC}). Another double ``symmetry'' worth noting is the one between short- and long-distance reflexives, on the one hand, and non pronouns and pronouns on the other hand. Both sorts of reflexives present the same binding regime but over o-command orders whose length is possibly different: the set of admissible antecedents of a short-distance reflexive is a subset of the set of admissible antecedents of a long-distance reflexive in the same, non exempt syntactic position. 
For a given non-exempt position, the admissible antecedents of a short-distance reflexive are the antecedents that are in the set of admissible antecedents of a long-distance reflexive in that same position and that are local, i.e. are in the local domain. The felicitous (non starred) indexes in (\ref{exA}) are a subset of the felicitous indexes in (\ref{exZ}), which illustrates this symmetry. A ``symmetry'' similar to this one is displayed by non pronouns and pronouns with respect to a given syntactic position: the set of admissible antecedents of a non pronoun is a subset of the set of admissible antecedents of a pronoun. For a given position, the admissible antecedents of a non pronoun are the antecedents that are in the set of admissible antecedents of a pronoun in that same position and that are not o-commanding the pronoun (or non pronoun). The felicitous (non starred) indexes in (\ref{exC}) are a subset of the felicitous indexes in (\ref{exB}). \textbf{Quantificational Strength} When these symmetries are further explored, the intriguing observation that emerges with respect to the empirical generalisations in (\ref{PrincipleA})-(\ref{PrincipleC}) is that when stripped of their procedural phrasing and non-exemption safeguards, they instantiate a square of logical oppositions: \is{square of opposition} \begin{exe} \ex \label{bindingSquareOpposition} \end{exe} \vspace{-7mm} \centerline{\includegraphics[width=18pc]{bindingSquareOpposition.pdf}} \is{square of opposition} \is{square of duality} As in the Aristotelian square of opposition, depicted in (\ref{patternSquareOpposition}), there are two pairs of {\em contradictory} constraints, which are formed by the two diagonals, (Principles A, B) and (C, Z). One pair of {\em contrary} constraints (they can be both false but cannot be both true) is given by the upper horizontal edge (A, C).
One pair of {\em compatible} constraints (they can be both true but cannot be both false) is given by the lower horizontal edge (Z, B). Finally, two pairs of {\em subaltern} constraints (the first coordinate implies the second, but not vice-versa) are obtained by the vertical edges, (A, Z) and (C, B). \begin{exe} \ex \label{patternSquareOpposition} \end{exe} \vspace{-10mm} \centerline{\includegraphics[width=13pc]{patternSquareOpposition.pdf}} The empirical emergence of a square of oppositions for the semantic values of natural language expressions naturally raises the question about the possible existence of an associated square of duality --- and importantly, about the quantificational nature of these expressions. \is{square of duality} \is{quantification} \is{quantifier} \begin{exe} \ex \label{patternSquareDuality} \end{exe} \vspace{-7mm} \centerline{\includegraphics[width=12pc]{patternSquareDuality.pdf}} It is of note that the classical square of oppositions in (\ref{patternSquareOpposition}) is different and logically independent from the square of duality in (\ref{patternSquareDuality}) --- with the semantic values of the English expressions {\em every N}, {\em no N}, {\em some N} and {\em not every N}, or their translational equivalents in other natural languages, providing the classical example of an instantiation of the latter: \begin{exe} \ex \end{exe} \vspace{-7mm} \centerline{\hspace{0 mm}\includegraphics[width=14pc]{classicalSquareDuality.pdf}} The difference lies in the fact that inner negation, outer negation and duality (concomitant inner and outer negation) are third order concepts, while compatibility, contrariness and implication are second order concepts. As a consequence, it is possible to find instantiations of the square of oppositions without a corresponding square of duality, and vice-versa.% \footnote{ See~\citep{Lobner1987} for examples and discussion.
} Logical duality has been a key issue in the study of natural language and, in particular, in the study of quantification as it happens to be expressed in natural language. It is a pattern noticed in the semantics of many linguistic expressions and phenomena, ranging from the realm of determiners to the realm of temporality and modality, including topics such as the semantics of the adverbials {\em still}/{\em already} or of the conjunctions {\em because}/{\em although}, etc.% \footnote{ \citep{Lobner1987, Lobner1989, Lobner1999, terMeulen1988, Konig1991, Smessaert1997}. } Under this pattern, one recurrently finds groups of syntactically related expressions whose formal semantics can be rendered as one of the operators arranged in a square of duality. Such a square is made of operators that are interdefinable by means of the relations of outer negation, inner negation, or duality. Accordingly, the emergence of a notoriously non trivial square of logical duality between the semantic values of natural language expressions has been taken as a major empirical touchstone to ascertain their quantificational nature.% \footnote{ See \citep{Lobner1987,vanBenthem1991}. While noting that the ubiquity of the square of duality may be the sign of a semantic invariant possibly rooted in some cognitive universal, \citep[p.23]{vanBenthem1991} underlined its heuristic value for research on quantification inasmuch as ``it suggests a systematic point of view from which to search for comparative facts''. } By exploring these hints, and motivated by the intriguing square of opposition in (\ref{bindingSquareOpposition}), the empirical generalisations captured in the binding principles were shown to be the effect of four quantifiers that instantiate a square of duality like (\ref{patternSquareDuality}).% \footnote{ \citep{branco:2006,branco:2005,branco:2001,branco:1998}.
} \is{Principle A} \is{Principle Z} \is{Principle B} \is{Principle C} \is{reflexives} \is{short-distance reflexives} \is{long-distance reflexives} \is{pronouns} \is{non-pronouns} \is{phase quantification} For instance, Principle A is shown to capture the constraining effects of the existential quantifier that is part of the semantic value of short-distance reflexives. Like the existential quantifier expressed by other expressions, such as the adverbial {\em already},% \footnote{ \citep{Lobner1987}. } this is a phase quantifier. What is specific here is that the quantification is over a partial order of reference markers; the two relevant semi-phases over this order comprise, respectively for the positive and the negative semi-phases, the local o-commanders and the other reference markers that are not local o-commanders; and the so-called parameter point in phase quantification is the reference marker of the eventual antecedent for the anaphoric nominal at stake. Accordingly, the other three quantifiers --- corresponding to the other three binding Principles B, C and Z --- are defined by means of this existential one being under external negation (quantifier expressed by pronouns), internal negation (by non pronouns) or both external and internal negation (by long-distance reflexives). \is{referential nominals} \is{quantificational nominals} \is{dual nominals} \is{e-type anaphora} \textbf{Doubly Dual Nominals} While these findings deepen the rooting of binding constraints into the semantics of anaphoric nominals,% \footnote{ Their fully-fledged discussion and justification are outside the scope of the present paper. A thorough presentation can be found in \citep{branco:2005}.} more importantly, they also point towards promising research directions with the potential to advance our understanding of the grammar of anaphoric binding, in particular, and more widely, to further our insights into the semantics of nominals, in general.
It is shared wisdom that nominals convey either quantificational or referential force. The findings introduced above imply that nominals with ``primary'' referential force (e.g. {\em John}, {\em the book}, {\em he},...) have also a certain ``secondary'' quantificational force: they express quantificational requirements --- over reference markers, i.e. entities that live in linguistic representations ---, but do not directly quantify over extra-linguistic entities, like the other ``primarily'' quantificational nominals (e.g. {\em every man}, {\em most students},...) do. This duality of semantic behaviour, however, turns out not to be that surprising if one takes into account a symmetric duality with regard to ``primarily'' quantificational nominals, which is apparent when they are able to act as antecedents in e-type anaphora. Nominals with ``primary'' quantificational force have also a certain ``secondary'' referential force: they have enough referential strength to evoke and introduce reference markers in the linguistic representation that can be picked as antecedents by anaphors --- and thus support the referential force of the latter~---, but they cannot be used to directly refer to extra-linguistic entities, like the other ``primarily'' referential terms do. As a result, the quantificational vs.\ referential duality of nominals thus appears less strict and more articulated than has been assumed. With the possible exception of indefinite descriptions, every nominal makes a contribution in both semantic dimensions of quantification and reference, but with respect to different universes.
Primarily referential nominals have a dual semantic nature --- they are primarily referential (to extra-linguistic entities) and secondarily quantificational (over linguistic entities) ---, which is symmetric of the dual semantic nature of primarily quantificational ones --- these are primarily quantificational (over extra-linguistic entities) and secondarily referential (to linguistic entities). \subsection{Neural} \textbf{Natural Language Processing Task} Some natural language processing tasks, e.g. question answering, appear as end-to-end procedures serving some useful, self-contained application. Some other tasks, in turn, e.g. part-of-speech tagging, appear more as instrumental procedures that support those downstream, self-contained applications. To help assess research progress in neural natural language processing, sets of processing tasks, of both kinds, have been bundled together, e.g. in the GLUE benchmark.% \footnote{\citep{wang-etal-2018-glue}.} As one such instrumental natural language processing task, possibly contributing or being embedded into downstream applications, anaphora resolution, including coreference resolution, is a procedure by means of which anaphors are paired with their antecedents. It has been addressed with neural approaches% \footnote{\citep{lee-etal-2017-end,xu-choi-2020-revealing}.} and has been integrated into natural language processing benchmarks. While related to anaphora resolution, and eventually instrumental to it, determining the set of grammatically admissible antecedents for a given anaphor is a procedure that, as such, has not yet been addressed with neural approaches, to the best of our knowledge. Like many other instrumental tasks, this is a challenge that can help to make empirically evident, and to assess, the strengths of different neural approaches in handling natural language processing.
Grammatical anaphoric binding is thus an intriguing research question open to being addressed with neural approaches, one with good potential to provide a research challenge that may pave the way for neuro-symbolic solutions to emerge. \textbf{Probing for Linguistic Plausibility} While providing outstanding performance scores in many natural language processing tasks, neural models have been challenged, as in other application areas, due to their opacity and lack of interpretability, especially when compared to symbolic methods. As a way to respond to this type of challenge, neural models have been submitted to ingenious probing procedures aimed at assessing them with respect to the linguistic knowledge they may eventually have specifically encoded while having been trained for generic or high level natural language processing tasks, like for instance language modelling, machine translation, etc.% \footnote{\citep{conneau-etal-2018-cram,tenney-etal-2019-bert,miaschi-dellorletta-2020-contextual}, \textit{i.a.}} This endeavour of unveiling the possible linguistic knowledge represented in neural models will certainly benefit from integrating the task of grammatical anaphoric binding into this kind of toolbox for linguistic probing and interpretability. \textbf{Inductive Bias for Natural Language} An increasingly important research question in neural natural language processing is to design models with an appropriate inductive bias such that their internal linguistic representations and capabilities resemble as much as possible the ones of human language learners after being exposed to as small a volume of raw training data as human learners are.% \footnote{\citep{mccoy-etal-2020-syntax}.} A most outstanding feature of natural language is the possibility of there being so-called long distance relations, that is, relations between expressions among which a string of other expressions of arbitrary length may intervene.
This builds on another feature that has been widely recognized as underlying natural language, namely the hierarchical nature of its complex expressions.% \footnote{\citep{chomsky:1965}.} As amply documented in the overview above, grammatical anaphoric binding relations, among anaphors and antecedents, are grammar regulated connections that are long distance relations par excellence. Hence, anaphoric binding is of utmost importance for the endeavour of designing neural models with an appropriate inductive bias for natural language. \section*{References} \nocite{*} \printbibliography[heading=none] \end{document}
\section{Introduction} Stern's diatomic sequence (defined in Section \ref{CF.5}) stems from the study of continued fractions and has a number of remarkable combinatorial properties, as seen in Northshield \cite{NorthshieldS10}. There are many different multidimensional continued fraction algorithms, and they serve a number of different purposes ranging from simultaneous Diophantine approximation problems (see Lagarias \cite{Lagarias93}) to attempts to understand algebraic numbers via periodicity conditions (see the third author's \cite{GarrityT01}) to automata theory (see Fogg \cite{Fogg}). This paper concerns a generalization of Stern's diatomic sequence defined using triangle partition maps, a family of multidimensional continued fractions that includes most of the well-known multidimensional continued fractions presented in Schweiger \cite{Schweiger1}. For background on multidimensional continued fractions, see Schweiger \cite{Schweiger1} and Karpenkov \cite{Karpenkov}. For background on the properties of Stern's diatomic sequence, see Lehmer \cite{Lehmer1}. For background on Stern's diatomic sequence in the context of continued fractions, see Northshield \cite{NorthshieldS10}. Knauf found connections between Stern's diatomic sequence and statistical mechanics \cite{Knauf1, Knauf2, Knauf5, Knauf3, Knauf4}, though Knauf called the sequences \textit{Pascal with memory}, which is a more apt description. 
The connection between Stern's sequence and statistical mechanics was further developed in Contucci and Knauf \cite{Contucci-Knauf1}, Esposti, Isola and Knauf \cite{Esposti-Isola-Knauf1}, Fiala and Kleban \cite{Fiala-Kleban1}, Fiala, Kleban and Ozluk \cite{Fiala-Kleban-Ozluk1}, Garrity \cite{Garrity10}, Guerra and Knauf \cite{Guerra-Knauf1}, Kallies, Ozluk, Peter and Syder \cite{Kallies-Ozluk-Peter-Syder1}, Kleban and Ozluk \cite{Kelban-Ozluk1}, Mayer \cite{Mayer2}, Mend\`{e}s France and Tenenbaum \cite{MendesFrance-Tenenbaum1, MendesFrance-Tenenbaum2}, Prellberg, Fiala and Kleban \cite{Prellberg-Fiala-Kleban1}, and Prellberg and Slawny \cite{Prellberg-Slawny1}. Other important earlier work was done by Allouche and Shallit \cite{Allouche-Shallit92, Allouche-Shallit03}, who showed that Stern's sequence is 2-regular. There seems to have been little work on extending Stern's diatomic sequence to multidimensional continued fraction algorithms. The first generalization of Stern's diatomic sequence was for a type of multidimensional continued fraction called the Farey map, in Garrity \cite{Garrity13}. The Farey map is not one of the multidimensional continued fractions that we will be considering. Another generalization used the M\"{o}nkemeyer map, in Goldberg \cite{Goldberg12}. This paper uses the family of multidimensional continued fractions called triangle partition maps (TRIP maps for short) \cite{SMALL11q1} to construct analogous sequences. As mentioned, many, if not most, known multidimensional continued fraction algorithms can be put into the language of triangle partition maps; thus, the collection of TRIP maps is a rich family. In Section \ref{CF}, we give a quick overview of continued fractions, Stern's diatomic sequence and how the two are related. Section \ref{S2} reviews triangle partition maps and triangle partition sequences. Section \ref{S2.5} introduces the construction of TRIP-Stern sequences.
In Section \ref{pic}, we give a more pictorial description of TRIP-Stern sequences. Section \ref{S3} contains results about the maximum terms and locations thereof for each level of the TRIP-Stern tree. Section \ref{S4} discusses minimum terms and locations thereof. Section \ref{sums} examines sums of levels of the sequence. Section \ref{S6} determines which lattice points appear in the TRIP-Stern sequence for the triangle map, a multidimensional continued fraction algorithm discussed below. Section \ref{S7} introduces a generalization of the original TRIP-Stern sequence. Finally, we close in Section \ref{Conclusion} with some of the many questions that remain. \section{Continued fractions and Stern's diatomic sequence}\label{CF} Nothing in this section is new. In the first subsection, we review continued fractions in order to motivate, in part, the definition of triangle partition maps given in Section \ref{S2}. In the second subsection, we review the classical Stern's diatomic sequence and show how it is linked to continued fractions. This link is what this paper generalizes. \subsection{Continued fractions and subdivisions of the unit interval} All of the content in this subsection is well-known. Let $\alpha$ be a real number in the unit interval $I=(0,1]$. The \textit{Gauss map} is the function $G: (0,1] \rightarrow [0,1)$ defined by \[ G(\alpha) = \frac{1}{\alpha} - \left\lfloor \frac{1}{\alpha} \right\rfloor, \] where $\lfloor x \rfloor$ denotes the floor function, meaning the greatest integer less than or equal to $x$. Subdivide the unit interval into subintervals \[ I_k = \left( \frac{1}{k+1}, \frac{1}{k} \right] \] for $k$ a positive integer. If $\alpha \in I_k$, then the Gauss map is simply $G(\alpha) = \frac{1 - k \alpha}{\alpha}$. The continued fraction expansion of $\alpha$ is \[ \alpha = \frac{1}{ a_0 + \frac{1}{ a_1 + \frac{1}{ a_2 + \frac{1}{ \ddots }}} } \] where $\alpha\in I_{a_0}, G(\alpha)\in I_{a_1}, G(G(\alpha)) \in I_{a_2}, \ldots$.
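The digit extraction just described is easy to check numerically. The following sketch is not from the paper; it is a minimal illustration, with function name and the use of Python's exact \texttt{Fraction} type as our own choices, that iterates the Gauss map and reads off the partial quotients $a_0, a_1, \ldots$

```python
from fractions import Fraction

def gauss_cf_digits(alpha, max_digits=20):
    # alpha: a Fraction in (0, 1].  Iterate the Gauss map
    # G(alpha) = 1/alpha - floor(1/alpha); at each step alpha lies in
    # I_k = (1/(k+1), 1/k], and that k is the next continued fraction digit.
    digits = []
    while alpha != 0 and len(digits) < max_digits:
        k = int(1 / alpha)          # floor of 1/alpha, since alpha > 0
        digits.append(k)
        alpha = 1 / alpha - k       # one application of the Gauss map
    return digits
```

For instance, $3/7 = 1/(2 + 1/3)$, and the sketch returns the digits $2, 3$ for `Fraction(3, 7)`; on rational inputs the iterate of the Gauss map eventually hits zero and the loop terminates.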
(If $\alpha$, under the iterations of $G$, is ever zero, then the algorithm stops.) We now want to translate the definition of the Gauss map into the language of two-by-two matrices, which can be more easily generalized. Set \[ v_1 = \left( \begin{array}{c} 0 \\ 1 \end{array} \right) \text{ and } v_2 = \left( \begin{array}{c} 1 \\ 1 \end{array} \right).\] We have the standard identification of a vector in $\mathbb{R}^2$ to a real number via \[ \left( \begin{array}{c} x \\ y \end{array} \right) \rightarrow \frac{x}{y},\] provided of course that $y\neq 0$. Then we think of the two-by-two matrix \[V = ( v_1, v_2) = \left( \begin{array}{cc} 0 & 1 \\ 1& 1 \end{array} \right) \] as being identified to the unit interval $I$. Set \[F_0 = \left( \begin{array}{cc} 0 & 1 \\ 1& 1 \end{array} \right)\text{ and } F_1 = \left( \begin{array}{cc} 1 & 1 \\ 0& 1 \end{array} \right).\] Then, by a calculation, we have \[VF_1^{k-1}F_0 = \left( \begin{array}{cc} 1 & 1 \\k&k+1 \end{array} \right),\] which can be identified to the subinterval $I_k$. Further, by a calculation, we have that \begin{eqnarray*} V(VF_1^{k-1}F_0)^{-1} \left( \begin{array}{c} \alpha \\ 1 \end{array} \right) &=& \left( \begin{array}{cc} -k & 1 \\1&0 \end{array} \right) \left( \begin{array}{c} \alpha \\ 1 \end{array} \right) \\ &=& \left( \begin{array}{c} -k\alpha + 1 \\ \alpha \end{array} \right) \\ &\rightarrow & \frac{1 - k \alpha}{\alpha}, \end{eqnarray*} and thus captures the Gauss map. Using the matrices $F_0$ and $F_1$, we can also interpret the Gauss map as a method of systematically subdividing the unit interval. This interpretation leads to the classical Stern diatomic sequence. Note that \begin{eqnarray*} VF_0 &=& (v_1, v_2 ) F_0 \\ &=& (v_1, v_2 ) \left( \begin{array}{cc} 0 & 1 \\ 1& 1 \end{array} \right)
\\ &=& (v_2, v_1 + v_2) \\ &=& \left( \begin{array}{cc} 1 & 1 \\ 1& 2 \end{array} \right) \end{eqnarray*} and \begin{eqnarray*} VF_1 &=& (v_1, v_2 ) F_1 \\ &=& (v_1, v_2 ) \left( \begin{array}{cc} 1 & 1 \\ 0& 1 \end{array} \right) \\ &=& (v_1, v_1 + v_2) \\ &=& \left( \begin{array}{cc} 0 & 1 \\ 1& 2 \end{array} \right). \end{eqnarray*} We can interpret $VF_0$ as the half interval $(1/2, 1)$ and $VF_1$ as the half interval $(0, 1/2)$. Iterating the multiplication by $F_0$ and $F_1$, we get the following at the next step: \[VF_1F_1 = \left( \begin{array}{cc} 0 & 1 \\ 1& 3 \end{array} \right),\; VF_1F_0 = \left( \begin{array}{cc} 1 & 1 \\ 2& 3 \end{array} \right),\; VF_0F_1 = \left( \begin{array}{cc} 1 & 2 \\ 1& 3 \end{array} \right)\text{ and } VF_0F_0 = \left( \begin{array}{cc} 1 & 2 \\ 2& 3 \end{array} \right). \] Each real number $\alpha \in I$ can be described by a sequence $(i_0, i_1, i_2, \ldots )$ of zeros and ones, where, for all $n$, the number $\alpha$ lies in the subinterval coming from $VF_{i_0} F_{i_1} F_{i_2} \cdots F_{i_n}$. (We are being somewhat sloppy with issues of $\alpha$ being on the boundaries of these subintervals. Such issues do not affect what is going on.) We can link the sequence $(i_0, i_1, i_2, \ldots )$ with $\alpha$'s continued fraction expansion as follows. Let $1^k$ denote a sequence of $k$ ones. Then our sequence $(i_0, i_1, i_2, \ldots )$ can be written as \[(i_0, i_1, i_2, \ldots ) = (1^{k_{0}}, 0, 1^{k_1},0,1^{k_2},0, \dots ),\] with each $k_j$ a non-negative integer. (It is important that we allow a $k_j$ to be zero.)
Then we have \[\alpha = \frac{1}{ k_0 + 1+ \frac{1}{ k_1 +1+ \frac{1}{ k_2+1 + \frac{1}{ \ddots }}} }.\] For example, the sequence $ (1,1,0,0,1,1,1,0, \ldots )$ can be written as $(1^2,0,1^0,0,1^3,0, \ldots)$, and we have \[\alpha = \frac{1}{ 3 + \frac{1}{ 1+ \frac{1}{ 4 + \frac{1}{ \ddots }}} }.\] Thus, continued fractions can be interpreted as a systematic method for subdividing an interval using two-by-two matrices. Multi-dimensional continued fractions, as we will see, are systematic subdivisions of triangles determined by three-by-three matrices. \subsection{Stern's diatomic sequence} \label{CF.5} In this section, we will briefly review Stern's diatomic sequence (number \seqnum{A002487} in Sloane's \textit{Online Encyclopedia of Integer Sequences}). In particular, we highlight the link between Stern's diatomic sequence and continued fractions. The classical \textit{Stern's diatomic sequence} $a_1,a_2, a_3, \ldots$ is the sequence defined by $a_1= 1$ and, for $n \geq 1$, \begin{eqnarray*} a_{2n} & = & a_n \\ a_{2n + 1} &=& a_n + a_{n+1}. \end{eqnarray*} Stern's diatomic sequence is linked to the Stern-Brocot array, which is an array of fractions in lowest terms that contains all rationals in the interval $[0,1]$. Starting with the fractions $\frac{0}{1}$ and $\frac{1}{1}$ on the $0^{\rm th}$ level, we construct the $n^{\rm th}$ level by rewriting the $(n-1)^{\rm st}$ level with the addition of the mediant between consecutive pairs of fractions from the $(n-1)^{\rm st}$ level. Here, the \textit{mediant} of two fractions $\frac{a}{b}$ and $\frac{c}{d}$ refers to the fraction $\frac{a+c}{b+d}$. In the Stern-Brocot array, the mediant of consecutive fractions is always in lowest terms. 
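This lowest-terms property is easy to verify numerically for the first several levels. The following sketch is plain Python (the helper name \texttt{stern\_brocot\_levels} is our own); it builds the array by repeatedly inserting mediants between consecutive fractions.

```python
from math import gcd

def stern_brocot_levels(n):
    """Levels 0..n of the Stern-Brocot array restricted to [0, 1].

    Level 0 is 0/1, 1/1; each later level keeps the previous level and
    inserts the mediant (a+c)/(b+d) between consecutive fractions a/b, c/d.
    Fractions are stored as (numerator, denominator) pairs, never reduced.
    """
    level = [(0, 1), (1, 1)]
    levels = [list(level)]
    for _ in range(n):
        nxt = [level[0]]
        for (a, b), (c, d) in zip(level, level[1:]):
            nxt.append((a + c, b + d))   # the mediant
            nxt.append((c, d))
        level = nxt
        levels.append(list(level))
    return levels

levels = stern_brocot_levels(3)

# Every mediant, taken without any reduction, is already in lowest terms.
assert all(gcd(p, q) == 1 for lev in levels for p, q in lev)

# Level 3 agrees with the display below.
assert levels[3] == [(0, 1), (1, 4), (1, 3), (2, 5), (1, 2),
                     (3, 5), (2, 3), (3, 4), (1, 1)]
```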
Below are levels $0$ through $3$ of the Stern-Brocot array: \vspace{2mm} \begin{center} $\begin{array}{ccccccccc} \vspace{2mm} \frac{0}{1},&&&&&&&&\frac{1}{1}\\ \vspace{2mm} \frac{0}{1},&&&&\frac{1}{2},&&&&\frac{1}{1}\\ \vspace{2mm} \frac{0}{1},&&\frac{1}{3},&&\frac{1}{2},&&\frac{2}{3},&&\frac{1}{1}\\ \vspace{2mm} \frac{0}{1},&\frac{1}{4},&\frac{1}{3},&\frac{2}{5},&\frac{1}{2},&\frac{3}{5},&\frac{2}{3},&\frac{3}{4},&\frac{1}{1}\\ \end{array}$ \end{center} The denominators of the Stern-Brocot array form Stern's diatomic sequence. Many of the combinatorial properties of this sequence are presented in Northshield \cite{NorthshieldS10}. The first row of the array can be thought of as either the unit interval, or, as above, the two-by-two matrix $V = \left( \begin{array}{cc} 0 & 1 \\ 1& 1 \end{array} \right)$. The second row can be thought of as the two subintervals, $(0, 1/2)$ and $ (1/2, 1)$, or, as the two matrices $VF_0$ and $VF_1$. Similarly, the third row gives us four subintervals, each corresponding to one of the matrices $VF_0F_0, VF_0F_1, VF_1F_0$ and $VF_1F_1$. The pattern continues. To be more precise, let $s_{n,k}$ denote the $k^{\rm th}$ fraction in the $n^{\rm th}$ level of the Stern-Brocot array. One can use the Stern-Brocot array to express the continued fraction expansion of a real number in $[0,1]$ as follows: let $\alpha\in[0,1]$. The $1^{\rm st}$ level of the Stern-Brocot array divides the unit interval into the two subintervals $[0,\frac{1}{2})$ and $[\frac{1}{2},1]$. Label the first interval 0 and the second interval 1. The $2^{\rm nd}$ level divides the unit interval into four subintervals: $[0,\frac{1}{3}),$ $[\frac{1}{3},\frac{1}{2}),$ $[\frac{1}{2},\frac{2}{3}),$ and $[\frac{2}{3},\frac{1}{1}]$. Label these intervals 00, 01, 10, and 11 respectively. The $n^{\rm th}$ level divides the unit interval into the $2^n$ subintervals $[s_{n,1},s_{n,2}),\ldots,[s_{n,2^{n}},s_{n,2^{n}+1}]$.
We label the interval $[s_{n,k},s_{n,k+1})$ with a sequence of $0$'s and $1$'s, where the first $n-1$ digits mark the label of the interval containing $[s_{n,k},s_{n,k+1})$ on the $(n-1)^{\rm st}$ level, and where the last digit is 0 or 1 depending on whether $[s_{n,k},s_{n,k+1})$ is in the left or right half of that interval, respectively. Recording the infinite sequence of $0$'s and $1$'s that corresponds to any number $\alpha$ in $[0,1]$ yields a sequence encoding the continued fraction expansion of $\alpha$, as described in Northshield \cite{NorthshieldS10}. Thus, Stern's sequence is linked to subdivisions of the unit interval. Our generalizations of Stern's sequence will be linked to subdivisions of a triangle. \section{Review of triangle partition maps} \label{S2} TRIP-Stern sequences can be interpreted geometrically in terms of subdivisions of a triangle. (This section closely follows Sections 2 and 3 from Dasaratha et al.\ \cite{SMALL11q1}.) In this section, we describe the triangle division and triangle function, as defined in Garrity \cite{GarrityT01} and further developed in Chen et al.\ \cite{GarrityT05} and Messaoudi et al.\ \cite{SchweigerF08}. We then discuss how ``permutations'' of this triangle division generate a family of multidimensional continued fractions called triangle partition maps (TRIP maps for short) -- which were introduced in Dasaratha et al.\ \cite{SMALL11q1, SMALL11q3} -- and studied in Jensen \cite{Jensen} and in Amburg \cite{Amburg}. This will give us the needed notation to define TRIP-Stern sequences in the next section. \subsection{The triangle division}\label{triangle} The triangle division generalizes the method for computing continued fractions via the systematic subdivision of the unit interval from the previous section. Instead of dividing the unit interval, we now use a 2-simplex, i.e., a triangle.
As discussed in earlier papers, this triangle division is just one of many generalizations of the continued fraction algorithm. Define \[\triangle^* = \{(b_0,b_1,b_2): b_0 \geq b_1 \geq b_2 > 0 \}.\] The set $\triangle^*$ can be thought of as a ``triangle" in $\mathbb{R}^3$, using the projection map $\pi: \mathbb{R}^3 \to \mathbb{R}^2$ defined by \[\pi(b_0,b_1,b_2) = \left(\frac{b_1}{b_0}, \frac{b_2}{b_0}\right).\] The image of $\triangle^*$ under $\pi$, \[\triangle = \{ (1,x,y) : 1 \geq x \geq y > 0 \},\] is a triangle in $\mathbb{R}^2$ with vertices $(0,0),$ $(1,0)$, and $(1,1)$. Thus $\pi$ maps the vectors \[ v_1 = \left(\begin{array}{c}1 \\0 \\0\end{array}\right), ~ v_2 = \left(\begin{array}{c}1 \\1 \\0\end{array}\right),~ v_3 = \left(\begin{array}{c}1 \\1 \\1\end{array}\right)\] to the vertices of $\triangle$. The change of basis matrix from triangle coordinates to the standard basis is \[(v_1\ v_2\ v_3)= \begin{pmatrix} 1 & 1 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \\ \end{pmatrix}.\] Now consider the matrices \[A_0 = \left(\begin{array}{ccc}0 & 0 & 1 \\1 & 0 & 0 \\0 & 1 & 1\end{array}\right)\text{ and } A_1 = \left(\begin{array}{ccc}1 & 0 & 1 \\0 & 1 & 0 \\0 & 0 & 1\end{array}\right).\] Applying $A_0$ and $A_1$ to $(v_1 \ v_2 \ v_3)$ yields \[(v_1 \ v_2 \ v_3)A_0=(v_2\ v_3 \ v_1+v_3)\mbox{ and }(v_1\ v_2\ v_3)A_1=(v_1\ v_2\ v_1+v_3).\] This gives a disjoint bipartition of $\triangle$ under the map $\pi$, as seen in the diagram below. \begin{center} \setlength{\unitlength}{.1 cm} \begin{picture}(70,70) \put(5,5){\line(1,0){60}} \put(65,5){\line(0,1){60}} \put(5,5){\line(1,1){60}} \put(0,0){(0,0)} \put(60,0){(1,0)} \put(60,68){(1,1)} \put(65,5){\line(-1,1){30}} \end{picture} \end{center} This is the first step of the triangle division algorithm. Now consider the result of applying $A_1$ to the original vertices of the triangle $k$ times followed by applying $A_0$ once. 
The image of the original triangle $\triangle$ under this composition is the subtriangle \[\triangle_k = \{ (1,x,y) \in \triangle : 1 - x - ky \geq 0 > 1- x - (k+1)y\}.\] We define a map $T$ whose restriction to each subtriangle $\triangle_k$ is the following bijection from $\triangle_k$ to $\triangle$: for any point $(x, y) \in \triangle_k$ in the standard basis, first change the basis to triangle coordinates by multiplying the coordinates by $(v_1 \ v_2 \ v_3)$, then apply the inverse of $A_0$ and the inverse of $A_1^k$, and finally change the basis back to the standard basis. That is, \begin{equation} \label{eq_tfunction} T(x,y) = \pi( (1 \ x \ y)\left( (v_1 \ v_2 \ v_3)(A_0)^{-1} (A_1)^{-k} (v_1 \ v_2 \ v_3)^{-1}\right)^T). \end{equation} Rewriting yields the following: \[T(x,y):=\left(\frac{y}{x},\frac{1-x-ky}{x}\right)\] where $k=\lfloor\frac{1-x}{y}\rfloor$. This triangle map is analogous to the Gauss map for continued fractions. Using $T$, define the \emph{triangle sequence}, $(t_n)_{n\geq 0}$, of an element $(\alpha, \beta) \in \triangle$ by setting $t_n$ to be the non-negative integer satisfying $T^{(n)}(\alpha, \beta) \in \triangle_{t_n}$. Thus, \[(\alpha, \beta) \in \triangle_{t_0}, T(\alpha, \beta) \in \triangle_{t_1}, T(T(\alpha, \beta)) \in \triangle_{t_2}, \ldots \] \subsection{Incorporating permutations} The previously described triangle division partitions the triangle with vertices $v_1, v_2,$ and $v_3$ into triangles with vertices $v_2, v_3,$ and $v_1 + v_3$ and $v_1,v_2,$ and $v_1+v_3$. This process assigns a particular ordering of vertices to the vertices of the original triangle and the vertices of the two triangles produced. By considering all possible permutations we generate a family of 216 maps, each corresponding to a partition of $\triangle$. Specifically, we allow a permutation of the vertices of the initial triangle as well as a permutation of the vertices of the triangles obtained after applying $A_0$ and $A_1$.
First, we permute the vertices by $\sigma \in S_3$ before applying either $A_0$ or $A_1$. Once we apply either $A_0$ or $A_1$, we then permute by either $\tau_0 \in S_3$ or $\tau_1 \in S_3$, respectively. This leads to the following definition: \begin{definition}For every $(\sigma, \tau_0, \tau_1) \in S^3_3$, define \[F_0 = \sigma A_0 \tau_0 \text{ and } F_1 = \sigma A_1 \tau_1\] by thinking of $\sigma,$ $\tau_0$, and $\tau_1$ as column permutation matrices. \end{definition} In particular, we define the permutation matrices as follows: $e=\left( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{array} \right),$ $(12)=\left( \begin{array}{ccc} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \\ \end{array} \right),$ $(13)=\left( \begin{array}{ccc} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \\ \end{array} \right),$ $(23)=\left( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \\ \end{array} \right),$ $(123)=\left( \begin{array}{ccc} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \\ \end{array} \right),$ and $(132)=\left( \begin{array}{ccc} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \\ \end{array} \right)$. Note that applying $F_0$ and $F_1$ partitions any triangle into two subtriangles. Thus, for any $(\sigma, \tau_0, \tau_1) \in S^3_3$, we can partition $\triangle$ using the matrices $F_0$ and $F_1$ instead of $A_0$ and $A_1$. This produces a map that is similar to, but not identical to, the triangle map from section \ref{triangle}. We call each of these maps a \textit{triangle partition map}, or \textit{TRIP map} for short. Because $\left|S_3\right|^3 = 216$, the family of triangle partition maps has 216 elements. One of the main goals of Dasaratha et al.\ \cite{SMALL11q1} is showing that this class of triangle partition maps includes well-studied algorithms such as the M\"{o}nkemeyer map, and in combination, these triangle partition maps can be used to produce many other known algorithms, such as the Brun, Parry-Daniels, G\"{u}ting, and fully subtractive algorithms. 
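The assembly of $F_0$ and $F_1$ from a permutation triple is mechanical, and can be sketched in a few lines of Python (the function name \texttt{trip\_matrices} and the dictionary keys are our own packaging of the matrices above):

```python
def matmul(A, B):
    """Product of two 3x3 integer matrices, stored as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# The six column permutation matrices listed above, keyed by cycle notation.
PERM = {
    "e":     [[1, 0, 0], [0, 1, 0], [0, 0, 1]],
    "(12)":  [[0, 1, 0], [1, 0, 0], [0, 0, 1]],
    "(13)":  [[0, 0, 1], [0, 1, 0], [1, 0, 0]],
    "(23)":  [[1, 0, 0], [0, 0, 1], [0, 1, 0]],
    "(123)": [[0, 1, 0], [0, 0, 1], [1, 0, 0]],
    "(132)": [[0, 0, 1], [1, 0, 0], [0, 1, 0]],
}

A0 = [[0, 0, 1], [1, 0, 0], [0, 1, 1]]
A1 = [[1, 0, 1], [0, 1, 0], [0, 0, 1]]

def trip_matrices(sigma, tau0, tau1):
    """Return (F0, F1) = (sigma A0 tau0, sigma A1 tau1)."""
    F0 = matmul(matmul(PERM[sigma], A0), PERM[tau0])
    F1 = matmul(matmul(PERM[sigma], A1), PERM[tau1])
    return F0, F1

# The identity triple recovers the original triangle map:
assert trip_matrices("e", "e", "e") == (A0, A1)
```

For instance, the triple $(13, 132, 132)$ yields the matrices of the M\"{o}nkemeyer map mentioned above.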
\subsection{TRIP sequences}\label{Tsequence} We use the triangle division for a particular set of permutations to produce the corresponding triangle sequence. Recall the ``subtriangle $\triangle_k$'' from the original triangle map. We generalize the definition of $\triangle_k$ as follows: for any $(\sigma, \tau_0, \tau_1)$, let $\triangle_k$ be the image of the triangle $\triangle$ under $F_1^kF_0$. We now define functions $T: \triangle \rightarrow \triangle$ mapping each subtriangle $\triangle_k$ bijectively to $\triangle$. First, let us formalize our subtriangles. \begin{definition} Let $F_0$ and $F_1$ be generated from some triplet of permutations. We define $\bigtriangleup_n$ to be the triangle with vertices given by the columns of $ (v_1\ \ v_2\ \ v_3){F_1}^n{F_0}$. \end{definition} We are now ready to define triangle partition maps. \begin{definition} We define the \textit{triangle partition map} $T_{\sigma, \tau_0,\tau_1}$ by \[T_{\sigma, \tau_0, \tau_1}(x, y) =\pi((1, x, y) ( (v_1\ \ v_2\ \ v_3) F_0^{-1} F_1^{-k} (v_1\ \ v_2\ \ v_3)^{-1})^T) \] when $ (x,y)\in\triangle_k$. \end{definition} We take the transpose of the matrix $ (v_1\ \ v_2\ \ v_3) F_0^{-1} F_1^{-k} (v_1\ \ v_2\ \ v_3)^{-1}$ because our matrices have vertices as columns but they are multiplied by the row vector $(1,x,y)$. This definition facilitates the following: \begin{definition} For any $(\alpha, \beta)\in\triangle$, define $t_n$ to be the non-negative integer such that $\big[T_{\sigma, \tau_0, \tau_1}\big]^n(\alpha, \beta)$ is in $\triangle_{t_n}$. The \textit{TRIP sequence} of $(\alpha, \beta)$ with respect to $(\sigma, \tau_0, \tau_1)$ is $(t_k)_{k\geq 0}$.
\end{definition} \section{TRIP-Stern sequences} \label{S2.5} \subsection{Construction of TRIP-Stern sequences} \label{TRIPSternDef} \begin{definition} For any permutation triple $(\sigma,\tau_0,\tau_1) \in S^3_3$, the \textit{triangle partition-Stern sequence} (\textit{TRIP-Stern sequence} for short) of $(\sigma, \tau_0, \tau_1)$ is the unique sequence such that $a_1 = (1,1,1)$ and, for $n\geq 1$, \[ \begin{cases} a_{2n} = a_n \cdot F_0; \\ a_{2n+1} = a_n \cdot F_1. \end{cases} \] The $n^{\rm th}$ level of the TRIP-Stern sequence is the set of $a_m$ with $2^{n} \leq m < 2^{n+1}$; in particular, the $0^{\rm th}$ level consists of $a_1$ alone. \end{definition} Each choice of $(\sigma, \tau_0, \tau_1)$ produces some TRIP-Stern sequence. The $n^{\rm th}$-level terms of the TRIP-Stern sequence give the first coordinate of the vertices of the subtriangles of $\bigtriangleup$ after $n$ divisions. These terms are the denominators of the convergents of the triangle partition map defined by $(\sigma, \tau_0, \tau_1)$. Thus, TRIP-Stern sequences can be used to test when a sequence of triangle subdivisions converges to a unique point. This involves a simple definition and restatement of Theorem 7.2 in Dasaratha et al.\ \cite{SMALL11q1}. \begin{definition} For each $n$-tuple $v= (i_1, \ldots , i_n)$ of 0's and 1's, define \[\triangle(v) = (1, 1, 1)F_{i_1}F_{i_2} \cdots F_{i_n}. \] \end{definition} For a TRIP-Stern term $a_n$, the length of a $0$--$1$ tuple $v$ with $\triangle(v) = a_n$ gives the level of $a_n$. If several such tuples yield the same value, then choose the one that produces the $n^{\rm th}$ term of the sequence. \subsection{Examples} We start by examining the TRIP-Stern sequence for the triangle map, which is given by the identity permutations $(\sigma, \tau_0, \tau_1) = (e,e,e)$.
The first few terms of the sequence $(a_n)_{n\geq 1}$ are \[(1, 1, 1), (1, 1, 2), (1, 1, 2), (1, 2, 3), (1, 1, 3), (1, 2, 3), (1, 1, 3), \ldots\] We arrange these as follows: \begin{center} \setlength{\unitlength}{.05 cm} \begin{picture}(200,70) \put(63,70){$a_1=(1,1,1)$} \put(80,65){\vector(-1,-1){25}} \put(95,65){\vector(1,-1){25}} \put(30,30){$a_2=(1,1,2)$} \put(110,30){$a_3=(1,1,2)$} \put(40,25){\vector(-2,-1){44}} \put(-30,-7){$a_4=(1,2,3)$} \put(50,25){\vector(1,-2){12}} \put(30,-7){$a_5=(1,1,3)$} \put(120,25){\vector(-1,-2){12}} \put(88,-7){$a_6=(1,2,3)$} \put(150,25){\vector(1,-2){12}} \put(150,-7){$a_7=(1,1,3)$} \put(53,53){$A_0$} \put(110,53){$A_1$} \put(3,15){$A_0$} \put(57,15){$A_1$} \put(103,15){$A_0$} \put(158,15){$A_1$} \end{picture} \end{center} or as \begin{center} \setlength{\unitlength}{.05 cm} \begin{picture}(200,70) \put(85,70){$\triangle$} \put(80,65){\vector(-1,-1){25}} \put(95,65){\vector(1,-1){25}} \put(38,30){$\triangle(0)$} \put(118,30){$\triangle(1)$} \put(40,25){\vector(-2,-1){44}} \put(-20,-7){$\triangle(00)$} \put(50,25){\vector(1,-2){12}} \put(48,-7){$\triangle(01)$} \put(120,25){\vector(-1,-2){12}} \put(95,-7){$\triangle(10)$} \put(130,25){\vector(1,-2){12}} \put(130,-7){$\triangle(11)$} \put(53,53){$A_0$} \put(110,53){$A_1$} \put(3,15){$A_0$} \put(57,15){$A_1$} \put(103,15){$A_0$} \put(138,15){$A_1$} \end{picture} \end{center} Note that because the two triples on level $1$ are the same, the left and right subtrees are symmetric. To see the connection to the triangle division, recall the matrix \[(v_1 \ v_2 \ v_3)= \begin{pmatrix} 1 & 1 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \\ \end{pmatrix},\] whose columns are the three vertices of $\bigtriangleup$. 
Applying $A_0$ and $A_1$ to $(v_1 \ v_2 \ v_3)$ yields \[(v_1 \ v_2 \ v_3)A_0 = \begin{pmatrix} 1 &1 & 2 \\ 1 & 1 & 1 \\ 0 & 1 & 1 \end{pmatrix} \text{ and }(v_1\ v_2\ v_3)A_1 = \begin{pmatrix} 1 &1 & 2 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix}.\] Repeating this process yields matrices at each step $n$ whose columns give the vertices of the subtriangles at the $n^{\rm th}$ division of $\bigtriangleup$. The figure below shows this subdivision. Additionally, the top row of each of these $n^{\rm th}$ step matrices gives a term in the $n^{\rm th}$-level TRIP-Stern sequence for $(e,e,e)$. \begin{center} \includegraphics[width=2.5in]{triangle.eps} \end{center} For another example, consider the permutations $(\sigma, \tau_0, \tau_1) = \big(13,132,132\big)$. Recall here that we divide the triangle using the matrices $F_0=(13)A_0(132)$ and $F_1=(13)A_1(132)$. Applying $F_0$ and $F_1$ to $(v_1 \ v_2 \ v_3)$ yields \[(v_1 \ v_2 \ v_3)F_0 = (v_1 \ v_2 \ v_3)\begin{pmatrix} 1 &1 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}=\begin{pmatrix} 1 &2 & 1 \\ 0 & 1 & 1 \\ 0 & 1 & 0 \end{pmatrix}\] and \[(v_1\ v_2\ v_3)F_1 = (v_1 \ v_2 \ v_3)\begin{pmatrix} 0 &1 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 1 \end{pmatrix}=\begin{pmatrix} 1 &2 & 1 \\ 1 & 1 & 1 \\ 0 & 1 & 1 \end{pmatrix}.\] These permutations give the well-studied M\"{o}nkemeyer map, as described in Dasaratha et al.\ \cite{SMALL11q1}. Here, the first few terms are \[(1, 1, 1), (1, 2, 1), (1, 2, 1), (1, 2, 2), (2, 2, 1), (1, 2, 2), (2, 2, 1), \ldots\] Similarly, these terms now give the first coordinate of the vertices of the subtriangles of $\bigtriangleup$ according to the M\"{o}nkemeyer algorithm. The following figure shows this division. \begin{center} \includegraphics[width=2.5in]{monkm.eps} \end{center} We will examine TRIP-Stern sequences generated by each of the 216 maps.
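The recursion $a_{2n} = a_n \cdot F_0$, $a_{2n+1} = a_n \cdot F_1$ translates directly into code. The sketch below (plain Python; the helper names are ours) regenerates the opening terms of both examples:

```python
A0 = [[0, 0, 1], [1, 0, 0], [0, 1, 1]]
A1 = [[1, 0, 1], [0, 1, 0], [0, 0, 1]]

def vecmat(v, M):
    """Row vector v times a 3x3 matrix M (stored as a list of rows)."""
    return tuple(sum(v[i] * M[i][j] for i in range(3)) for j in range(3))

def trip_stern(F0, F1, nterms):
    """[a_1, ..., a_nterms] via a_1 = (1,1,1), a_{2n} = a_n F0, a_{2n+1} = a_n F1."""
    a = [None, (1, 1, 1)]                 # pad so that a[m] is the m-th term
    for m in range(2, nterms + 1):
        a.append(vecmat(a[m // 2], F0 if m % 2 == 0 else F1))
    return a[1:]

# Triangle map (e,e,e): F0 = A0 and F1 = A1.
assert trip_stern(A0, A1, 7) == [(1, 1, 1), (1, 1, 2), (1, 1, 2),
                                 (1, 2, 3), (1, 1, 3), (1, 2, 3), (1, 1, 3)]

# Moenkemeyer map (13,132,132), with F0 and F1 as displayed above.
F0 = [[1, 1, 0], [0, 0, 1], [0, 1, 0]]
F1 = [[0, 1, 0], [1, 0, 0], [0, 1, 1]]
assert trip_stern(F0, F1, 7) == [(1, 1, 1), (1, 2, 1), (1, 2, 1),
                                 (1, 2, 2), (2, 2, 1), (1, 2, 2), (2, 2, 1)]
```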
As we will see in Lemma \ref{onlye}, many properties of each of these TRIP-Stern sequences can be captured by only examining the 36 sequences associated with permutation triplets of the form $(e,\tau_0,\tau_1)$ for $\tau_0,\tau_1\in S_3$. The following table, which illustrates the behavior of $F_0$ and $F_1$ for maps of the form $(e,\tau_0,\tau_1)$ on the initial seed $(a,b,c)$ (which equals (1,1,1) in the case of regular TRIP-Stern sequences), will come in handy throughout the paper. \begin{center} $\begin{array}{l|c|c} (e, \tau_0, \tau_1) & (a,b,c) F_0 & (a,b,c) F_1 \\ \hline \hline ( e, e, e) & (b, c, a+c) & (a, b, a+c) \\ ( e, e, 12) & (b, c, a+c) & (b, a, a+c) \\ ( e, e, 13) & (b, c, a+c) & (a+c, b, a) \\ ( e, e, 23) & (b, c, a+c) & (a, a+c, b) \\ ( e, e, 123) & (b, c, a+c) & (a+c, a, b) \\ ( e, e, 132) & (b, c, a+c) & (b, a+c, a) \\ ( e, 12, e) & (c, b, a+c) & (a, b, a+c) \\ ( e, 12, 12) & (c, b, a+c) & (b, a, a+c) \\ ( e, 12, 13) & (c, b, a+c) & (a+c, b, a) \\ ( e, 12, 23) & (c, b, a+c) & (a, a+c, b) \\ ( e, 12, 123) & (c, b, a+c) & (a+c, a, b) \\ ( e, 12, 132) & (c, b, a+c) & (b, a+c, a) \\ ( e, 13, e) & (a+c, c, b) & (a, b, a+c) \\ ( e, 13, 12) & (a+c, c, b) & (b, a, a+c) \\ ( e, 13, 13) & (a+c, c, b) & (a+c, b, a) \\ ( e, 13, 23) &(a+c, c, b) & (a, a+c, b) \\ ( e, 13, 123) & (a+c, c, b) & (a+c, a, b) \\ ( e, 13, 132) & (a+c, c, b) & (b, a+c, a) \\ ( e, 23 , e) & (b, a+ c, c) & (a, b, a+c) \\ ( e, 23, 12) & (b, a+ c, c) & (b, a, a+c) \\ ( e, 23, 13) & (b, a+ c, c) & (a+c, b, a) \\ ( e, 23, 23) & (b, a+ c, c) & (a, a+c, b) \\ ( e, 23, 123) &(b, a+ c, c) & (a+c, a, b) \\ ( e, 23, 132) &(b, a+ c, c) & (b, a+c, a) \\ ( e, 123, e) & (a+c, b, c) & (a, b, a+c) \\ ( e, 123, 12) & (a+c, b, c) & (b, a, a+c) \\ ( e, 123, 13) & (a+c, b, c) & (a+c, b, a) \\ ( e, 123, 23) & (a+c, b, c) & (a, a+c, b) \\ ( e, 123, 123) & (a+c, b, c) & (a+c, a, b) \\ ( e, 123, 132) & (a+c, b, c) & (b, a+c, a) \\ ( e, 132 , e) & (c,a+ c, b) & (a, b, a+c) \\ ( e, 132, 12) & (c,a+ c, b) & (b, 
a, a+c) \\ ( e, 132, 13) & (c,a+ c, b) & (a+c, b, a) \\ ( e, 132, 23) & (c,a+ c, b) & (a, a+c, b) \\ ( e, 132, 123) & (c,a+ c, b) & (a+c, a, b) \\ ( e, 132, 132) & (c,a+ c, b) & (b, a+c, a) \end{array}$ \captionof{table}{Behavior of $F_0$ and $F_1$} \end{center} \subsection{Alternative definitions} An alternative way to define the TRIP-Stern sequences is in terms of matrix generating functions. The advantage of this method is that it allows us to connect terms in a TRIP-Stern sequence with the product of $F_i$ matrices that produced the given term and to define the sequence non-recursively. Let \[P(x) = F_0 + F_1 x,\] where $x$ commutes with $F_0$ and $F_1$. Now it is clear that any positive integer can be uniquely expressed in the form $2^n + k$, where $n \geq 0$ and $0 \leq k < 2^n$. \begin{definition} We define the \textit{TRIP-Stern sequence} for $(\sigma, \tau_0,\tau_1)$ to be the unique sequence defined by \[a_{2^n + k} = (1, 1, 1) \cdot B,\] where $B$ is the coefficient of $x^k$ in the product $P(x^{2^{n-1}})P(x^{2^{n-2}}) \cdots P(x^2)P(x)$. Then $a_{2^n + k}$ is the $k^{\rm th}$ term of the $n^{\rm th}$ level of the TRIP-Stern sequence. \end{definition} For instance, take $(\sigma, \tau_0,\tau_1) = (e,e,e)$. Then \[P(x^2)P(x) = (A_0 + A_1x^2)(A_0 + A_1x) = A_0^2 + A_0A_1 x + A_1A_0 x^2 + A_1^2 x^3\] and so the terms of the $2^{\rm nd}$ level are given by \[(1,1,1)A_0^2, \ (1,1,1)A_0A_1,\ (1,1,1)A_1A_0, \text{ and }(1,1,1)A_1^2.\] \section{A more pictorial approach} \label{pic} We need the technical definitions from the last section for the proofs given in the rest of the paper and in order to use Mathematica to discover many of our formulas. But there is a more intuitive approach to generate particular TRIP-Stern sequences via subdivisions of a triangle. Let us first examine the classical Stern diatomic sequence analog. Start with an interval $I$ whose endpoints carry the weights $a$ and $b$. For Stern's diatomic sequence, we set $a=b=1$. By an abuse of notation, we write $I=(a,b)$.
\begin{center} $\setlength{\unitlength}{.1 cm} \begin{picture}(70,20) \put(35,15){$I$} \put(10,10){\line(1,0){50}} \put(10,9){\line(0,1){2}} \put(60,9){\line(0,1){2}} \put(10,5){$a$} \put(60,5){$b$} \end{picture} \setlength{\unitlength}{.1 cm} \begin{picture}(70,20) \put(35,15){$I$} \put(10,10){\line(1,0){50}} \put(10,9){\line(0,1){2}} \put(60,9){\line(0,1){2}} \put(10,5){$1$} \put(60,5){$1$} \end{picture}$ \end{center} We then subdivide the interval and add together the weights of the endpoints, getting two new intervals, which by an abuse of notation we write as $I(0)=(a,a+b)$ and $I(1)=(a+b,b)$. \begin{center} $\setlength{\unitlength}{.1 cm} \begin{picture}(70,20) \put(20,15){$I(0)$} \put(45,15){$I(1)$} \put(10,10){\line(1,0){50}} \put(10,9){\line(0,1){2}} \put(35,9){\line(0,1){2}} \put(60,9){\line(0,1){2}} \put(10,5){$a$} \put(30,5){$a+b$} \put(60,5){$b$} \end{picture} \setlength{\unitlength}{.1 cm} \begin{picture}(70,20) \put(20,15){$I(0)$} \put(45,15){$I(1)$} \put(10,10){\line(1,0){50}} \put(10,9){\line(0,1){2}} \put(35,9){\line(0,1){2}} \put(60,9){\line(0,1){2}} \put(10,5){$1$} \put(35,5){$2$} \put(60,5){$1$} \end{picture}$ \end{center} We can continue, getting four new subintervals: \begin{center} $\setlength{\unitlength}{.1 cm} \begin{picture}(70,20) \put(12,16){$I(00)$} \put(15,15){\vector(0,-1){4}} \put(25,0){$I(01)$} \put(29,4){\vector(0,1){4}} \put(38,0){$I(10)$} \put(41,4){\vector(0,1){4}} \put(52,16){$I(11)$} \put(55,15){\vector(0,-1){4}} \put(10,10){\line(1,0){50}} \put(10,9){\line(0,1){2}} \put(26.7,9){\line(0,1){2}} \put(35,9){\line(0,1){2}} \put(43.3,9){\line(0,1){2}} \put(60,9){\line(0,1){2}} \put(10,5){$a$} \put(21.7,12){$2a+b$} \put(30,5){$a+b$} \put(38.3,12){$a+2b$} \put(60,5){$b$} \end{picture} \begin{picture}(70,20) \setlength{\unitlength}{.1 cm} \put(12,16){$I(00)$} \put(15,15){\vector(0,-1){4}} \put(25,0){$I(01)$} \put(29,4){\vector(0,1){4}} \put(38,0){$I(10)$} \put(41,4){\vector(0,1){4}} \put(52,16){$I(11)$} \put(55,15){\vector(0,-1){4}} 
\put(10,10){\line(1,0){50}} \put(10,9){\line(0,1){2}} \put(26.7,9){\line(0,1){2}} \put(35,9){\line(0,1){2}} \put(43.3,9){\line(0,1){2}} \put(60,9){\line(0,1){2}} \put(10,5){$1$} \put(26.7,12){$3$} \put(35,5){$2$} \put(43.3,12){$3$} \put(60,5){$1$} \end{picture}$ \end{center} The classical Stern diatomic sequence corresponds to $I, I(0), I(1), I(00), I(01),I(10),I(11), \ldots$ We now see how to get the analogous geometric picture for TRIP maps. We will concentrate on the TRIP-Stern sequence for the triple $(e,e,e)$. Similar pictures, though, will work for any $(\sigma,\tau_0, \tau_1)\in S_{3}^{3}$. For $(e,e,e)$ we have $F_0=A_0, F_1=A_1$. For any vector $(a,b,c)$, we know that \[(a,b,c)A_0 = (b,c,a+c)\text{ and } (a,b,c) A_1 = (a,b,a+c).\] We will think of $(a,b,c)$ as vertices of a triangle $\triangle$. By a slight abuse of notation, we will write $\triangle = (a,b,c)$. In the diagrams for this section, the ones on the left side are for the general case of any $a$, $b,$ and $c$, while the right side is in the special case when $a=b=c=1$. We have \[\begin{array}{lr} \setlength{\unitlength}{.05 cm} \begin{picture}(200,70) \put(5,5){\line(1,0){60}} \put(65,5){\line(0,1){60}} \put(5,5){\line(1,1){60}} \put(0,-.1){$a$} \put(69,0){$b$} \put(60,68){$c$} \put(-3,30){$\triangle$} \put(6,32){\vector(1,0){40}} \end{picture} & \setlength{\unitlength}{.05 cm} \begin{picture}(70,70) \put(5,5){\line(1,0){60}} \put(65,5){\line(0,1){60}} \put(5,5){\line(1,1){60}} \put(0,-.1){1} \put(69,0){1} \put(60,68){1} \put(-3,30){$\triangle$} \put(6,32){\vector(1,0){40}} \end{picture} \end{array}\] We let $\triangle(0)$ be the triangle with vertices $(a,b,c)A_0 = (b,c,a+c)$ and $\triangle(1)$ be the triangle with vertices $(a,b,c) A_1 = (a,b,a+c)$. 
Pictorially, we have \[\begin{array}{lr} \setlength{\unitlength}{.05 cm} \begin{picture}(200,70) \put(5,5){\line(1,0){60}} \put(65,5){\line(0,1){60}} \put(5,5){\line(1,1){60}} \put(0,-.1){$a$} \put(69,0){$b$} \put(60,68){$c$} \put(16,35){$a+c$} \put(65,5){\line(-1,1){30}} \put(3,55){$\triangle(0)$} \put(22,55){\vector(2,-1){28}} \put(-12,18){$\triangle(1)$} \put(6,20){\vector(1,0){22}} \end{picture} & \setlength{\unitlength}{.05 cm} \begin{picture}(70,70) \put(5,5){\line(1,0){60}} \put(65,5){\line(0,1){60}} \put(5,5){\line(1,1){60}} \put(0,-.1){1} \put(69,0){1} \put(60,68){1} \put(28,35){2} \put(65,5){\line(-1,1){30}} \put(-12,18){$\triangle(1)$} \put(6,20){\vector(1,0){22}} \put(3,55){$\triangle(0)$} \put(22,55){\vector(2,-1){28}} \end{picture} \end{array}\] We let $\triangle(00)$ be the triangle with vertices $(a,b,c)A_0A_0 =(c, a+c, a+b+c)$, $\triangle(01)$ be the triangle with vertices $(a,b,c)A_0 A_1 = (b,c,a+b+c)$, $\triangle(10)$ be the triangle with vertices $(a,b,c)A_1A_0 =(b, a+c, 2a+c)$ and $\triangle(11)$ be the triangle with vertices $(a,b,c)A_1A_1 =(a,b, 2a+c)$ Pictorially, we have \[\begin{array}{lr} \setlength{\unitlength}{.05 cm} \begin{picture}(200,70) \put(5,5){\line(1,0){60}} \put(65,5){\line(0,1){60}} \put(5,5){\line(1,1){60}} \put(0,-.1){$a$} \put(69,0){$b$} \put(60,68){$c$} \put(16,35){$a+c$} \put(65,5){\line(-1,1){30}} \put(3,55){$\triangle(00)$} \put(25,55){\vector(2,-1){24}} \put(-16,12){$\triangle(11)$} \put(7,14){\vector(1,0){14}} \put(65,5){\line(-2,1){40}} \put(65,65){\line(-2, -5){17}} \put(75,20){$a+b+c$} \put(70,22){\vector(-1,0){21}} \put(0,23){$2a+c$} \put(72,37){$\triangle(01)$} \put(70,40){\vector(-1,0){10}} \put(30,-10){$\triangle(10)$} \put(38,0){\vector(0,1){23}} \end{picture} & \setlength{\unitlength}{.05 cm} \begin{picture}(70,70) \put(5,5){\line(1,0){60}} \put(65,5){\line(0,1){60}} \put(5,5){\line(1,1){60}} \put(0,-.1){1} \put(69,0){1} \put(60,68){1} \put(28,35){2} \put(65,5){\line(-1,1){30}} \put(3,55){$\triangle(00)$} 
\put(25,55){\vector(2,-1){24}} \put(-16,12){$\triangle(11)$} \put(7,14){\vector(1,0){14}} \put(65,5){\line(-2,1){40}} \put(65,65){\line(-2, -5){17}} \put(75,20){$3$} \put(70,22){\vector(-1,0){21}} \put(17,23){$3$} \put(72,37){$\triangle(01)$} \put(70,40){\vector(-1,0){10}} \put(30,-10){$\triangle(10)$} \put(38,0){\vector(0,1){23}} \end{picture} \end{array}\] The TRIP-Stern sequence for $(e,e,e)$ is simply $\triangle$, $ \triangle(0)$, $ \triangle(1)$, $ \triangle(00)$, $ \triangle(01)$, $ \triangle(10)$, $ \triangle(11)$, $ \ldots$ We just add the appropriate vertices to generate new subdivisions. This is in direct analog to the classical Stern diatomic sequence. As another example, let us look at the triangle partition Stern sequence for $(12,e,e)$. We first need the matrices $F_0$ and $F_1$: \[F_0 = (12)A_0 =\left( \begin{array}{ccc} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \\ \end{array} \right) \left( \begin{array}{ccc} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 1 \\ \end{array} \right) = \left( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 1 \\ \end{array} \right) \] and \[F_1 = (12)A_1 = \left( \begin{array}{ccc} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \\ \end{array} \right) \left( \begin{array}{ccc} 1 & 0 & 1 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{array} \right) = \left( \begin{array}{ccc} 0 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 0 & 1 \\ \end{array} \right) \] Now for any vector $(a,b,c)$, we have \[(a,b,c)F_0 = (a,c,b+c)\text{ and } (a,b,c) F_1 = (b,a,b+c).\] We still let $\triangle$ have vertices $a$, $b$, and $c$, but now the subtriangle $\triangle(0)$ will have vertices $a$, $c$, and $b+c$ and the subtriangle $\triangle(1)$ will have vertices $a$, $b$, and $b+c$. 
Pictorially, we have \[\begin{array}{lr} \setlength{\unitlength}{.05 cm} \begin{picture}(200,70) \put(5,5){\line(1,0){60}} \put(65,5){\line(0,1){60}} \put(5,5){\line(1,1){60}} \put(0,-.1){$a$} \put(69,0){$b$} \put(60,68){$c$} \put(68,35){$b+c$} \put(5,5){\line(2,1){60}} \put(3,55){$\triangle(0)$} \put(22,55){\vector(2,-1){28}} \put(-12,18){$\triangle(1)$} \put(6,20){\vector(1,0){45}} \end{picture} & \setlength{\unitlength}{.05 cm} \begin{picture}(70,70) \put(5,5){\line(1,0){60}} \put(65,5){\line(0,1){60}} \put(5,5){\line(1,1){60}} \put(0,-.1){1} \put(69,0){1} \put(60,68){1} \put(68,35){2} \put(5,5){\line(2,1){60}} \put(-12,18){$\triangle(1)$} \put(6,20){\vector(1,0){45}} \put(3,55){$\triangle(0)$} \put(22,55){\vector(2,-1){28}} \end{picture} \end{array}\] Let us do one more iteration. We have that $\triangle(00)$ will have vertices $(a, b+c, b + 2c)$, $\triangle(01)$ will have vertices $(c, a, b+2c)$, $\triangle(10)$ will have vertices $(b, b+c, a+b+c)$ and $\triangle(11)$ will have vertices $(a, b, a+b+c)$ since \begin{eqnarray*} (a,b,c) F_0 F_0 &=& (a, c, b+c) F_0 \\ &=& (a, b+c, b + 2c), \\ (a,b,c)F_0F_1 &=& (a,c, b + c) F_1 \\ &=& (c, a, b+2c), \\ (a,b,c) F_1F_0 &=& ( b, a, b+c)F_0 \\ &=& (b, b+c, a+b+c), \end{eqnarray*} and \begin{eqnarray*} (a,b,c ) F_1F_1 &=& (b,a,b+c)F_1 \\ &=& (a, b, a+b+c). \end{eqnarray*} By continuing these subdivisions, we get the TRIP-Stern sequence for $(12,e,e)$.
Pictorially, we now have \[\begin{array}{lr} \setlength{\unitlength}{.05 cm} \begin{picture}(200,70) \put(5,5){\line(1,0){60}} \put(65,5){\line(0,1){60}} \put(5,5){\line(1,1){60}} \put(0,-.1){$a$} \put(69,0){$b$} \put(60,68){$c$} \put(68,35){$b+c$} \put(5,5){\line(2,1){60}} \put(5,5){\line(4,3){60}} \put(68,48){$b+2 c$} \put(23,-10){$a + b+ c$} \put(45,-5){\vector(0,1){28}} \put(65,4){\line(-1,1){20}} \put(3,55){$\triangle(01)$} \put(22,55){\vector(2,-1){28}} \put(-12,35){$\triangle(00)$} \put(6,35){\vector(1,0){45}} \put(-12,18){$\triangle(10)$} \put(6,20){\vector(1,0){49}} \put(-15,10){$\triangle(11)$} \put(3,10){\vector(1,0){24}} \end{picture} & \setlength{\unitlength}{.05 cm} \begin{picture}(70,70) \put(5,5){\line(1,0){60}} \put(65,5){\line(0,1){60}} \put(5,5){\line(1,1){60}} \put(0,-.1){1} \put(69,0){1} \put(60,68){1} \put(68,35){2} \put(5,5){\line(2,1){60}} \put(68,35){2} \put(5,5){\line(2,1){60}} \put(5,5){\line(4,3){60}} \put(68,48){3} \put(40,-12){ 3} \put(45,-5){\vector(0,1){28}} \put(3,55){$\triangle(01)$} \put(22,55){\vector(2,-1){28}} \put(-12,35){$\triangle(00)$} \put(6,35){\vector(1,0){45}} \put(-12,18){$\triangle(10)$} \put(6,20){\vector(1,0){49}} \put(-15,10){$\triangle(11)$} \put(3,10){\vector(1,0){24}} \put(65,4){\line(-1,1){20}} \end{picture} \end{array}\] \section{Maximum terms and positions thereof} \label{S3} This section examines the maximum terms in every level of a TRIP-Stern sequence, as well as the positions of those maximum terms within the level. Recall for an $n$-tuple $v= (i_1, \ldots , i_n)$ of 0's and 1's that $\triangle(v) = (1, 1, 1)F_{i_1}F_{i_2} \cdots F_{i_n} $, which can be written as $\triangle(v)=(b_1(v),b_2(v),b_3(v))$. Let $|v|$ denote the number of entries in $v$. 
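The recursion $\triangle(v) = (1,1,1)F_{i_1}F_{i_2}\cdots F_{i_n}$ is mechanical enough to script. The sketch below (plain NumPy; the helper name \texttt{trip\_stern\_levels} is our own) generates the levels for the $(e,e,e)$ case, where $F_0=A_0$ and $F_1=A_1$, and records the largest entry on each of levels $0$ through $8$:

```python
import numpy as np

# For (e, e, e) the generators are F_0 = A_0 and F_1 = A_1.
A0 = np.array([[0, 0, 1], [1, 0, 0], [0, 1, 1]])
A1 = np.array([[1, 0, 1], [0, 1, 0], [0, 0, 1]])

def trip_stern_levels(F0, F1, depth):
    """Yield levels 0..depth of the TRIP-Stern tree rooted at (1, 1, 1)."""
    level = [np.array([1, 1, 1])]
    for _ in range(depth + 1):
        yield level
        level = [v @ F for v in level for F in (F0, F1)]

maxima = [max(int(v.max()) for v in lvl)
          for lvl in trip_stern_levels(A0, A1, 8)]
print(maxima)  # [1, 2, 3, 4, 6, 9, 13, 19, 28]
```

Levels grow as $2^n$, so small depths suffice for experiments with row maxima, minima, and level sums.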
\begin{definition} The \textit{maximum entry} on level $n$ of a TRIP-Stern sequence is \[m_n = \max_{|v| = n} \max_{i\in\{1,2,3\}}b_i(v).\] \end{definition} Thus, for example, the sequence $(m_n)_{n\geq 0}$ for the TRIP-Stern sequence for $(e,e,e)$ begins \[(1,2,3,4,6,9,13,19,28,\ldots).\] We can easily extend this kind of analysis to all 216 TRIP-Stern sequences. Numerically, it appears that there are only eight possible row maxima sequences for all 216 TRIP-Stern sequences. The following lemma will be needed before presenting our results on maximum row sequences. \begin{lemma} \label{onlye} Let $(\sigma,\tau_0,\tau_1)\in S_{3}^3$ have row maxima sequence $(m_n)_{n\geq 0}$ and suppose $\kappa\in S_3$. Then $(\kappa \sigma, \tau_0 \kappa^{-1}, \tau_1 \kappa^{-1})$ also has row maxima sequence $(m_n)_{n\geq 0}$. \end{lemma} \begin{proof} This follows by direct calculation, since for any $v=(i_1, \ldots, i_n)$, \begin{align*} \triangle_{(\kappa \sigma, \tau_0 \kappa^{-1}, \tau_1 \kappa^{-1})} (v)& = (1,1,1)\cdot \kappa \sigma A_{i_1} \tau_{i_1} \kappa^{-1} \cdot \kappa \sigma A_{i_2} \tau_{i_2} \kappa^{-1} \cdots \kappa \sigma A_{i_n} \tau_{i_n} \kappa^{-1} \\ &= (1,1,1)\cdot \sigma A_{i_1} \tau_{i_1} \cdot \sigma A_{i_2}\tau_{i_2} \cdots \sigma A_{i_n} \tau_{i_n} \kappa^{-1}\\ &=\triangle_{(\sigma,\tau_0,\tau_1)}(v)\kappa^{-1}. \end{align*} (In the second equality we used that $(1,1,1)\kappa = (1,1,1)$, since $\kappa$ is a permutation matrix.) Since $\kappa^{-1}$ is just a permutation, it follows that the maximal component of $\triangle_{(\kappa \sigma, \tau_0 \kappa^{-1}, \tau_1 \kappa^{-1})} (v)$ is the same as that of $\triangle_{(\sigma,\tau_0,\tau_1)}(v)$. \end{proof} Recall from Section \ref{S2.5} that the $(n+1)^{\rm st}$ level of a TRIP-Stern sequence is generated from the $n^{\rm th}$ by applying $F_0$ and $F_1$ to each term at level $n$; denote by $n_m$ the position of the $m^{\rm th}$ term at level $n$.
In other words, a TRIP-Stern sequence is a binary tree with exactly two children at each node $n_m,$ with the first (left) child generated by applying $F_0$ to the triplet at $n_m$ and the second generated by applying $F_1$. We will show that there exist three distinct paths through the trees containing all the maximal terms. We will then present recurrence relations for select row maxima sequences. First, we show that there exist three distinct paths through the trees generated by triangle partition maps of the form $(e,\tau_0,\tau_1)$ containing the maximal terms. For each of the three classes of path, we will write out an explicit proof for one map generating such a path; proofs for the rest of the maps are similar simple calculations and will be omitted. \begin{theorem} \label{paths} There exists a path through each of the trees generated by 26 triangle partition maps of the form $(e,\tau_0,\tau_1)$ that contains the maximal terms. Namely, these paths are as follows: \begin{enumerate} \item For the eleven TRIP-Stern sequences $(e,e,13),$ $(e,13,13),$ $(e,13,123),$ $(e,23,13),$ $(e,23,123),$ $(e,23,132),$ $(e,123,13),$ $(e,123,123),$ $(e,132,13),$ $(e,132,123)$ and $(e,132,132),$ always select the right edge. \item For the twelve TRIP-Stern sequences $(e,e,e),$ $(e,e,12),$ $(e,e,23),$ $(e,e,123),$ $(e,e,132),$ $(e,12,e),$ $(e,12,12),$ $(e,12,13),$ $(e,12,23),$ $(e,12,123),$ $(e,12,132)$ and $(e,132,23),$ always select the left edge. \item For the three TRIP-Stern sequences $(e, 13, 12)$, $(e,23,23)$ and $(e,123,e),$ alternate between first selecting the left edge and then the right edge. \end{enumerate} \end{theorem} \begin{proof} \begin{enumerate} \item Always select the right edge. Consider $(e,13,123)$. Then $(x_1,x_2,x_3)F_0=(x_1+x_3,x_3,x_2)$ and $(x_1,x_2,x_3)F_1=(x_1+x_3,x_1,x_2)$.
Since at the zeroth step we start with $(x_1,x_2,x_3)=(1,1,1),$ it is clear that the rightmost triplet will contain the maximal term at each level $n$ since this will lead to the greatest rates of growth for each of $x_{1_n},$ $x_{2_n}$ and $x_{3_n}$ as $n$ increases. Similar arguments can be made for $(e,e,13),$ $(e,13,13),$ $(e,23,13),$ $(e,23,123),$ $(e,23,132),$ $(e,123,13),$ $(e,123,123),$ $(e,132,13),$ $(e,132,123),$ and $(e,132,132)$. \item Always select the left edge. Consider $(e,e,e)$. Then $(x_1,x_2,x_3)F_0=(x_2,x_3,x_1+x_3)$ and $(x_1,x_2,x_3)F_1=(x_1,x_2,x_1+x_3)$. Since at the zeroth step we start with $(x_1,x_2,x_3)=(1,1,1),$ it is clear that the leftmost triplet will contain the maximal term at each level $n$ since this will lead to the greatest rates of growth for each of $x_{1_n},$ $x_{2_n}$ and $x_{3_n}$ as $n$ increases. Similar arguments can be made for $(e,e,12),$ $(e,e,23),$ $(e,e,123),$ $(e,e,132),$ $(e,12,e),$ $(e,12,12),$ $(e,12,13),$ $(e,12,23),$ $(e,12,123),$ $(e,12,132),$ and $(e,132,23)$. \item Alternate between first selecting the left edge and then the right edge. Consider $(e,13,12)$. Then $(x_1,x_2,x_3)F_0=(x_1+x_3,x_3,x_2)$ and $(x_1,x_2,x_3)F_1=(x_2,x_1,x_1+x_3)$. Since at the zeroth step we start with $(x_1,x_2,x_3)=(1,1,1),$ it is clear that the triplet containing the maximal term at each level will lie on the nodes of the path generated by alternating between first selecting the left edge and then the right edge. Similar arguments can be made for $(e,23,23),$ and $(e,123,e)$. \end{enumerate} \end{proof} In the above theorem, we have shown that the maximal TRIP-Stern sequence lies on one of three possible paths for 26 TRIP-Stern sequences generated by 26 triangle partition maps. Using Lemma \ref{onlye} brings this total up to $26\cdot 6=156$ maps. \begin{question}What are the paths for finding maximal terms for the remaining 60 TRIP-Stern sequences? 
\end{question} \subsection{Explicit formulas and recurrence relations for sequences of maximal terms} In the above subsection we addressed the positions of maximal terms. Here we present formulas and recurrence relations that may be used to find the actual values of the maximal terms for 120 TRIP-Stern sequences. \begin{theorem} The $n^{\rm th}$ maximal term $m_n$ in the TRIP-Stern sequences corresponding to the permutation triplets $(e,e,13),$ $(e,12,e),$ $(e,12,12),$ $(e,12,13),$ $(e,12,23),$ $(e,12,123),$ $(e,12,132),$ $(e,13,13),$ $(e,23,13),$ $(e,123,13),$ and $(e,132,13)$ is given by the formula \[m_n=\frac{2^{-n-1} \left(\left(\sqrt{5}-3\right) \left(1-\sqrt{5}\right)^n+\left(3+\sqrt{5}\right) \left(1+\sqrt{5}\right)^n\right)}{\sqrt{5}},\] which corresponds to the Fibonacci recurrence relation $m_{n}=m_{n-1}+m_{n-2}$ (\seqnum{A000045}). \end{theorem} \begin{proof} By Theorem \ref{paths}, we know that the third term in each of the triplets given by following the left-most path in the tree generated by select permutation triplets is a maximal term; similarly, the first term in each of the triplets given by following the right-most path in the tree generated by select permutation triplets is a maximal term. Hence, for sequences of maxima found by following the left-most path, all that remains to find $m_n$ is to find the third term in the triplet $(1,1,1)F_{0}^{n}$; for each of the permutation triplets listed in the theorem that are generated by choosing the left-most path, this third term corresponds to the desired explicit formula. Similarly, for sequences of maxima found by following the right-most path, all that remains to find $m_n$ is to find the first term in the triplet $(1,1,1)F_{1}^{n}$; for each of the permutation triplets listed in the theorem that are generated by choosing the right-most path, this first term corresponds to the desired explicit formula. 
It is easy to check using standard methods, as in Matthews \cite{Recurrence}, that $m_{n}=m_{n-1}+m_{n-2}$. As an example, let us consider $(e,e,13),$ for which \[F_1=\left( \begin{array}{ccc} 1 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \\ \end{array} \right).\] Then \[ F_{1}^{n}=\left( \begin{array}{ccc} \frac{2^{-n-1} \left(-\left(1-\sqrt{5}\right)^{n+1}+\left(1+\sqrt{5}\right)^{n+1}\right)}{\sqrt{5}} & 0 & \frac{2^{-n} \left(-\left(1-\sqrt{5}\right)^n+\left(1+\sqrt{5}\right)^n\right)}{\sqrt{5}} \\ && \\ 0 & 1 & 0 \\ &&\\ \frac{2^{-n} \left(-\left(1-\sqrt{5}\right)^n+\left(1+\sqrt{5}\right)^n\right)}{\sqrt{5}} & 0 & \frac{2^{-n-1} \left(\left(1-\sqrt{5}\right)^n \left(1+\sqrt{5}\right)+\left(-1+\sqrt{5}\right) \left(1+\sqrt{5}\right)^n\right)}{\sqrt{5}} \\ \end{array} \right),\] so that $(1,1,1)F_{1}^n= \left(\frac{2^{-n-1} \left(\left(\sqrt{5}-3\right) \left(1-\sqrt{5}\right)^n+\left(3+\sqrt{5}\right) \left(1+\sqrt{5}\right)^n\right)}{\sqrt{5}},1,\frac{2^{-n-1} \left(\left(1+\sqrt{5}\right)^{n+1}-\left(1-\sqrt{5}\right)^{n+1}\right)}{\sqrt{5}}\right)$. 
Hence, we can see that \[m_n=\frac{2^{-n-1} \left(\left(\sqrt{5}-3\right) \left(1-\sqrt{5}\right)^n+\left(3+\sqrt{5}\right) \left(1+\sqrt{5}\right)^n\right)}{\sqrt{5}}.\] Lastly, it is clear that $m_n=m_{n-1}+m_{n-2}$ since \begin{align*} m_{n-1}+m_{n-2}&=\frac{2^{-1-(n-1)} \left(\left(1-\sqrt{5}\right)^{n-1} \left(-3+\sqrt{5}\right)+\left(1+\sqrt{5}\right)^{n-1} \left(3+\sqrt{5}\right)\right)}{\sqrt{5}}\\ &\qquad+\frac{2^{-1-(n-2)} \left(\left(1-\sqrt{5}\right)^{n-2} \left(-3+\sqrt{5}\right)+\left(1+\sqrt{5}\right)^{n-2} \left(3+\sqrt{5}\right)\right)}{\sqrt{5}}\\ &=\frac{2^{-n-1} \left(\left(\sqrt{5}-3\right) \left(1-\sqrt{5}\right)^n+\left(3+\sqrt{5}\right) \left(1+\sqrt{5}\right)^n\right)}{\sqrt{5}}=m_n. \end{align*} \end{proof} \begin{theorem} The $n^{\rm th}$ maximal term $m_n$ in the TRIP-Stern sequences corresponding to the permutation triplets $(e,e,e)$, $(e,e,12)$, $(e,e,23)$, $(e,e,123)$, $(e,e,132)$, $(e,13,123)$, $(e,23,123)$, $(e,123,123),$ and $(e,132,123)$ is given by the formula \[m_n=\alpha_1\beta_{1}^n+\alpha_2\beta_{3}^n+\alpha_3\beta_{2}^n,\] where the $\alpha_i$'s are roots of $31x^3-31x^2-12x-1=0$ while the $\beta_i$'s are the roots of $x^3-x^2-1=0$. This corresponds to the well-known recurrence relation $m_{n}=m_{n-1}+m_{n-3}$ (\seqnum{A000930}). \end{theorem} \begin{proof} The proof, except for some technical details, is identical to the previous proof. By Theorem \ref{paths}, we know that the third term in each of the triplets given by following the left-most path in the tree generated by select permutation triplets is a maximal term; similarly, the first term in each of the triplets given by following the right-most path in the tree generated by select permutation triplets is a maximal term.
Hence, for sequences of maxima found by following the left-most path, all that remains to find $m_n$ is to find the third term in the triplet $(1,1,1)F_{0}^{n}$; for each of the permutation triplets listed in the theorem that are generated by choosing the left-most path, this third term corresponds to the desired explicit formula. Similarly, for sequences of maxima found by following the right-most path, all that remains to find $m_n$ is to find the first term in the triplet $(1,1,1)F_{1}^{n}$; for each of the permutation triplets listed in the theorem that are generated by choosing the right-most path, this first term corresponds to the desired explicit formula. Using standard methods (as in Matthews \cite{Recurrence}) identical to those used in the example found in the proof of the previous theorem, it is easy to see that $m_{n}=m_{n-1}+m_{n-3}$. \end{proof} In the above two theorems, we have presented explicit formulas and recurrence relations for the maximal terms of TRIP-Stern sequences generated by 20 maps. Using Lemma \ref{onlye} brings this total up to $20\cdot 6=120$ maps. \begin{question} What are the recurrence relations for the other 96 TRIP-Stern sequences? \end{question} \begin{conjecture} The following table presents some numerical results and conjectured recurrence relations for TRIP-Stern maximal terms corresponding to the above-mentioned 96 maps. 
\end{conjecture} \begin{center} \scalebox{0.8}{ $\begin{array}{c|c|c|c} \mbox{First 11 maximal terms} & \mbox{Conjectured recurrence relation} \;m_n= & (e, \tau_0, \tau_1) & \mbox{A-number}\\ \hline \hline 1,2,3,5,7,11,16,25,36,56,81 & \mbox{unknown} & (e, 13, e),(e, 123 ,12) & \seqnum{A271485}\\ \hline 1,2,3,4,6,9,13,19,28,41,60 & m_{n-1} + m_{n-3} & (e, 13, 12) & \seqnum{A000930}\\ \hline 1,2,3,4,6,8,11,16,22,30,43 & \mbox{unknown} & (e,13,23),(e,23,12) & \seqnum{A271486}\\ \hline 1,2,3,4,6,8,11,17,23,32,48 & \mbox{unknown} & (e,13,132),(e,132,12) & \seqnum{A271487} \\ \hline 1,2,3,4,6,8,11, 15,21,30,41 & \mbox{unknown} &(e,23,e), (e, 123,23) & \seqnum{A271488} \\ \hline 1,2,3,4,5,7,9,12,16,21 & m_{n-2} + m_{n-3} & \begin{array}{c} (e,23,23),(e,23,132) \\ (e,132,23),(e,132,132) \end{array} & \seqnum{A000931}\\ \hline 1,2,3,5,8,13,21,34,55,89,144 & m_{n-1} + m_{n-2} & (e,123,e) & \seqnum{A000045}\\ \hline 1,2,3,4,5,7,10,13,18,25,34 & \mbox{unknown} & (e,123,132),(e,132,e) & \seqnum{A271489} \end{array}$ } \captionof{table}{Conjectured recurrence relations for maximal terms} \end{center} Note that in the above table we only included maps of the form $(e,\tau_0,\tau_1)$; as before, Lemma \ref{onlye} brings the total up to $16\cdot 6=96$ maps. \section{Minimal terms and positions thereof} \label{S4} We now investigate the positions of the minimal terms in each level for various TRIP-Stern sequences. This is a bit easier than the analogous investigation for the maximal terms. \begin{theorem} \label{left} The minimal terms $b_n$ in the TRIP-Stern sequences corresponding to the permutation triplets $(e,12,12)$, $(e,12,123)$, $(e,12,132)$, $(e,13,123)$, $(e,13,132)$, $(e,23,12)$, $(e,23,123)$, $(e,23,132)$, $(e,123,12)$, $(e,123,123)$, and $(e,123,132)$ lie on the left-most path in the corresponding TRIP-Stern tree. 
\end{theorem} \begin{proof} For the permutation triplets $(e,13,123)$ and $(e,13,132),$ the transformation \[(x_1,x_2,x_3)\mapsto (x_1,x_2,x_3)F_0\] flips the positions of $x_2$ and $x_3$ in the triplet. Recall that application of $F_0$ corresponds to following the left-most child at each node in the TRIP-Stern tree corresponding to some permutation triplet. Therefore, as we start with $(x_1,x_2,x_3)=(1,1,1),$ it is clear that the minimal terms of the TRIP-Stern sequences corresponding to these permutation triplets lie on the left-most path in the corresponding TRIP-Stern tree. For the rest of the above permutation triplets, the transformation \[(x_1,x_2,x_3)\mapsto(x_1,x_2,x_3)F_0\] leaves either $x_1,$ $x_2,$ or $x_3$ fixed. As in the first part of this proof, since we start with $(x_1,x_2,x_3)=(1,1,1),$ it is clear that the minimal terms of the TRIP-Stern sequences corresponding to the above permutation triplets lie on the left-most path in the corresponding TRIP-Stern tree. \end{proof} \begin{theorem} \label{right} The minimal terms $b_n$ in the TRIP-Stern sequences corresponding to the permutation triplets $(e,e,e)$, $(e,e,12)$, $(e,e,13)$, $(e,e,23)$, $(e,13,e)$, $(e,13,13)$, $(e,13,23)$, $(e,132,e)$, $(e,132,12)$, $(e,132,13)$, and $ (e,132,23)$ lie on the right-most path in the corresponding TRIP-Stern tree. \end{theorem} \begin{proof} For the permutation triplets $(e,e,12)$ and $(e,132,12)$, the transformation \[(x_1,x_2,x_3)\mapsto (x_1,x_2,x_3)F_1\] flips the positions of $x_1$ and $x_2$ in the triplet. Recall that application of $F_1$ corresponds to following the right-most child at each node in the TRIP-Stern tree corresponding to some permutation triplet. Therefore, as we start with $(x_1,x_2,x_3)=(1,1,1),$ it is clear that the minimal terms of the TRIP-Stern sequences corresponding to the above permutation triplets lie on the right-most path in the corresponding TRIP-Stern tree. 
For the rest of the above permutation triplets, the transformation \[(x_1,x_2,x_3)\mapsto(x_1,x_2,x_3)F_1\] leaves either $x_1,$ $x_2,$ or $x_3$ fixed. As in the first part of this proof, since we start with $(x_1,x_2,x_3)=(1,1,1),$ it is clear that the minimal terms of the TRIP-Stern sequences corresponding to the above permutation triplets lie on the right-most path in the corresponding TRIP-Stern tree. \end{proof} \begin{theorem} \label{both} The minimal terms $b_n$ in the TRIP-Stern sequences corresponding to the permutation triplets $(e,12,e)$, $(e,12,13)$, $(e,12,23)$, $(e,13,12)$, $(e,23,e)$, $(e,23,13)$, $(e,23,23)$, $(e,123,e)$, $(e,123,13)$, and $(e,123,23)$ lie on both the right-most and left-most paths in the corresponding TRIP-Stern tree. \end{theorem} \begin{proof} For the permutation triplet $(e,13,12)$, the transformation $(x_1,x_2,x_3)\mapsto (x_1,x_2,x_3)F_0$ flips the positions of $x_2$ and $x_3$ in the triplet and the transformation $(x_1,x_2,x_3)\mapsto (x_1,x_2,x_3)F_1$ flips the positions of $x_1$ and $x_2$. As we start with $(x_1,x_2,x_3)=(1,1,1),$ it is clear that the minimal terms of the TRIP-Stern sequences corresponding to the above permutation triplets lie on both the left- and right-most paths in the corresponding TRIP-Stern tree. For the rest of the above permutation triplets, both the transformations $(x_1,x_2,x_3)\mapsto(x_1,x_2,x_3)F_0$ and $(x_1,x_2,x_3)\mapsto(x_1,x_2,x_3)F_1$ leave either $x_1,$ $x_2,$ or $x_3$ fixed. Since we start with $(x_1,x_2,x_3)=(1,1,1),$ it is clear that the minimal terms of the TRIP-Stern sequences corresponding to the above permutation triplets lie on the right-most and left-most paths in the corresponding TRIP-Stern tree. \end{proof} \begin{corollary} The minimal terms $b_n$ in the TRIP-Stern sequences corresponding to the permutation triplets mentioned in Theorems \ref{left}, \ref{right}, and \ref{both} all have value 1. 
\end{corollary} \begin{proof} This follows immediately since in each case we start with $(x_1,x_2,x_3)=(1,1,1),$ and at least one of the components of this triplet gets carried over to at least one triplet in the next level of the corresponding TRIP-Stern sequence by the action of $F_0$ or $F_1,$ as outlined in the proofs of the theorems mentioned in the corollary. \end{proof} In the above theorems, we have found the values and positions of the minimal terms of TRIP-Stern sequences generated by 32 maps. The results of Lemma \ref{onlye} bring this total up to $32\cdot 6=192$ maps. \begin{question} What are the values and positions of the minimal terms of TRIP-Stern sequences generated by the remaining 24 maps? \end{question} \section{Level sums} \label{sums} This section examines the sums of the entries in each level of a TRIP-Stern sequence, in direct analogue to the level sums found by Stern for his diatomic sequence (Lehmer \cite{Lehmer1} presents this as Property 2). As the level $n$ grows large, the ratio between the sums of the entries in successive levels approaches an algebraic number of degree at most $3$. The ratios between the first, second, and third coordinates of a given level approach ratios in the same number field. This section provides a closed form for the sums of the entries in each level. \begin{definition} Consider the TRIP-Stern sequence for arbitrary $(\sigma, \tau_0,\tau_1)$. Let $S_1(n)$ be the sum of the first entries of the triples in the $n^\text{th}$ level, let $S_2(n)$ be the sum of the second entries, and let $S_3(n)$ be the sum of the third entries. The sum of all entries in a given level is $S(n) = S_1(n) + S_2(n) + S_3(n)$. \end{definition} \begin{proposition} \label{recurrenceSi} For each $n$, \[\big(S_1(n), S_2(n), S_3(n)\big) = \big(S_1(n - 1), S_2(n - 1), S_3(n - 1)\big)(F_0 + F_1).\] \end{proposition} \begin{proof} The base case follows by definition. Say that we are at the $n^{\rm th}$ level.
In order to generate the next level of triplets, we apply $F_0$ and $F_1$ to each triplet in the $n^{\rm th}$ level. It is clear that $S_1(n+1)$ is obtained by taking the sum of the first components of $a_m F_0+a_m F_1$ over all triplets $a_m$ in the $n^{\rm th}$ level; similarly for $S_2(n+1)$ and $S_3(n+1)$. As a result, it is clear that \[\big(S_1(n), S_2(n), S_3(n)\big) = \big(S_1(n - 1), S_2(n - 1), S_3(n - 1)\big)(F_0 + F_1).\] \end{proof} \subsection{Level sums for $(e,e,e)$} \begin{proposition}\label{limprop} Let $\alpha$ be the real zero of $x^3 - 4x^2 + 5x - 4$. Then \[\lim_{n\to\infty} \frac{S_i(n)}{S_i(n-1)} = \alpha \] for $1 \leq i \leq 3$ and \[ \lim_{n\to\infty} \frac{S(n)}{S(n-1)} = \alpha.\] \end{proposition} \begin{proof} The characteristic polynomial of $A_0 + A_1$ is $x^3 - 4x^2 + 5x - 4$. This polynomial has a real zero $\alpha \approx 2.69562$ and two complex zeros of smaller absolute value. Thus, as $n \to \infty$, the direction of the vector \[\big(S_1(n), S_2(n), S_3(n)\big) = (1,1,1)(A_0+A_1)^n\] approaches that of the eigenvector $\bar{\alpha}$ corresponding to the eigenvalue $\alpha$; that is, the vector approaches the subspace generated by $\bar{\alpha}$. Hence, \[\lim_{n\to\infty} \frac{S_i(n)}{S_i(n-1)} = \alpha \] for $1 \leq i \leq 3$ and \[ \lim_{n\to\infty} \frac{S(n)}{S(n-1)} = \alpha,\] as claimed. \end{proof} \begin{proposition} We have \[ \lim_{n\to\infty} \frac{S_2(n)}{S_1(n)}= \alpha - 1 \quad\text{and}\quad \lim_{n\to\infty}\frac{S_3(n)}{S_2(n)}= \alpha - 1.\] \end{proposition} \begin{proof} The actions of $A_0$ and $A_1$ yield the recurrence relation $S_1(n + 1) = S_1(n) + S_2(n)$.
Dividing by $S_1(n)$ gives \[\frac{S_1(n + 1)}{S_1(n)} = 1 + \frac{S_2(n)}{S_1(n)}.\] Taking the limit as $n\to \infty$ and applying Proposition \ref{limprop} yields \[\lim_{n\to\infty}\frac{S_2(n)}{S_1(n)} = \alpha - 1.\] Similarly, using the recurrence relation $S_2(n + 1) = S_2(n) + S_3(n)$ yields \[\lim_{n\to\infty}\frac{S_3(n)}{S_2(n)} = \alpha - 1.\] \end{proof} \subsection{Level sums for arbitrary $(\sigma,\tau_0,\tau_1)$} The properties of level sums for an arbitrary TRIP-Stern sequence are similar to those of the TRIP-Stern sequence for $(e,e,e)$. For any $(\sigma,\tau_0,\tau_1)$, it can be shown by computation that $F_0+F_1$ has a unique eigenvalue of largest absolute value. In fact, this eigenvalue is an element of the interval $[2, 3]$. \begin{proposition} For any TRIP-Stern sequence, we have \[\lim_{n \rightarrow \infty} \frac{S_i(n)}{S_i(n -1)} = \alpha\] $ \text{ for } 1 \leq i \leq 3$ and \[\lim_{n \rightarrow \infty} \frac{S(n)}{S(n-1)} = \alpha,\] where $\alpha$ is the eigenvalue of $F_0+F_1$ of largest absolute value. Furthermore, $\alpha$ is an algebraic number of degree at most three, and $\alpha \in [2,3]$. \end{proposition} Analogous recurrence relations give relations between the limits of $S_1(n)$, $S_2(n)$, and $S_3(n)$. These are not always as clean as in the case of the TRIP-Stern sequence for $(e,e,e)$, but the ratios between these limits are contained in the number field $\mathbb{Q}(\alpha)$. We next examine the closed forms for $S(n)$. \begin{theorem} The family of triangle partition maps leads to 11 distinct sequences of sums $\left(S(n)\right)_{n\geq 1}$ with recurrence relations and explicit forms as shown in the tables below. \end{theorem} \begin{proof} The proof follows by direct calculation.
For each triangle partition map $T_{\sigma,\tau_0,\tau_1},$ compute a modified TRIP-Stern sequence given by setting $a_1=(a,b,c)$ instead of setting $a_1=(1,1,1)$ as we had done in Section \ref{S2.5}, partitioning the sequence into levels as before. Sum the terms of each level $n$ to yield a sequence of row sums $\left(S(n)\right)_{n\geq 1}$. If we can prove that the first $m$ terms of $\left(S(n)\right)_{n\geq 1}$ satisfy an $(m-1)$-term recurrence relation, it follows that the sequence must be generated by that recurrence relation. We have carried out this procedure for all 216 permutation triplets $(\sigma,\tau_0,\tau_1)$ to find recurrence relations for the associated row sums, from which the explicit form for the $n^{\rm th}$ term in the sequence $\left(S(n)\right)_{n\geq 1}$ was easily calculated. The results are presented in the tables below -- indeed, the family of triangle partition maps generates only 11 distinct row sums. The first column lists the recurrence relation, the second lists the explicit form for $S(n)$, and the third lists the permutation triplets whose TRIP-Stern level sums follow this relation. Note that Greek letters represent zeros of certain polynomials; see the key below. For example, $\text{Root}\left[29 x^3-87 x^2-5 x-1,1\right]\to\alpha_1$ means ``let $\alpha_1$ be the first root of $29 x^3-87 x^2-5 x-1=0$.''
\end{proof} \begin{center} \scalebox{0.94}{ $ \begin{array}{l|l|l} \mbox{Recurrence relation for $S(n)$} & \mbox{Explicit form for $S(n)$} & (e,\tau_0,\tau_1)\\ \hline\hline 4 S(n-3)-5 S(n-2)+4 S(n-1) & \alpha _1 \beta _1^n+\alpha _2 \beta _2^n+\alpha _3 \beta _3^n & \begin{array}{c} (e,e,e),\\ (e,123,123) \\ \end{array} \\ \hline 2 S(n-2)+2 S(n-1) & \begin{array}{c}\frac{1}{6}( \left(9-5 \sqrt{3}\right) \left(1-\sqrt{3}\right)^n\\ +\left(1+\sqrt{3}\right)^n \left(9+5 \sqrt{3}\right))\end{array} & \begin{array}{c} (e,e,12), \\ (e,e,123), \\ (e,13,12), \\ (e,13,123) \\ \end{array} \\ \hline S(n-3)-S(n-2)+3 S(n-1) & \gamma _1 \delta _1^n+\gamma _3 \delta _2^n+\gamma _2 \delta _3^n & \begin{array}{c} (e,e,13), \\ (e,12,123) \\ \end{array} \\ \hline -S(n-3)+2 S(n-2)+2 S(n-1) & \frac{2^{-n} \left(-2 \left(3-\sqrt{5}\right)^n \left(-2+\sqrt{5}\right)+\left(3+\sqrt{5}\right)^n \left(11+5 \sqrt{5}\right)\right)}{5+\sqrt{5}} & \begin{array}{c} (e,e,23),\\ (e,12,23), \\ (e,12,132), \\ (e,23,e), \\ (e,23,13), \\ (e,23,123), \\ (e,123,23), \\ (e,123,132), \\ (e,132,e), \\ (e,132,13) \\ \end{array} \\ \hline 6 S(n-3)+2 S(n-2)+S(n-1) & \epsilon _1 \zeta _1^n+\epsilon _2 \zeta _2^n+\epsilon _3 \zeta _3^n & \begin{array}{c} (e,e,132), \\ (e,132,123) \\ \end{array} \\ \hline 5 S(n-1)-6 S(n-2) & 2^n+2\ 3^n & \begin{array}{c} (e,12,e), \\ (e,12,13), \\ (e,123,e), \\ (e,123,13),\\ \end{array} \\ \hline -4 S(n-3)+S(n-2)+3 S(n-1) & \eta _1 \theta _1^n+\eta _2 \theta _2^n+\eta _3 \theta _3^n & \begin{array}{c} (e,12,12), \\ (e,13,13) \\ \end{array} \\ \hline -S(n-3)-3 S(n-2)+4 S(n-1) & \iota _1 \kappa _1^n+\iota _2 \kappa _2^n+\iota _3 \kappa _3^n & \begin{array}{c} (e,13,e) \\ (e,123,12) \\ \end{array} \\ \hline -6 S(n-3)+4 S(n-2)+2 S(n-1) & \lambda _1 \mu _1^n+\lambda _2 \mu _2^n+\lambda _3 \mu _3^n & \begin{array}{c} (e,13,23), \\ (e,23,12) \\ \end{array} \\ \hline S(n-3)+4 S(n-2)+S(n-1) & \nu _1 \xi _1^n+\nu _2 \xi _2^n+\nu _3 \xi _3^n & \begin{array}{c} (e,13,132), \\ (e,132,12) \\ 
\end{array} \\ \hline 4 S(n-2)+S(n-1) & \begin{array}{c}\frac{1}{34} (\left(51-13 \sqrt{17}\right) \left(\frac{1}{2} \left(1-\sqrt{17}\right)\right)^n\\+\left(\frac{1}{2} \left(1+\sqrt{17}\right)\right)^n \left(51+13 \sqrt{17}\right))\end{array} & \begin{array}{c} (e,23,23), \\ (e,23,132), \\ (e,132,23), \\ (e,132,132) \\ \end{array} \\ \end{array} $ } \captionof{table}{Level sums} \end{center} \begin{center} \scalebox{0.92}{ $ \begin{array}{l|l} \hline\hline \text{Root}\left[29 x^3-87 x^2-5 x-1,1\right]\to \alpha _1 & \text{Root}\left[29 x^3-87 x^2-5 x-1,2\right]\to \alpha _2 \\ \text{Root}\left[29 x^3-87 x^2-5 x-1,3\right]\to \alpha _3 & \text{Root}\left[x^3-4 x^2+5 x-4,1\right]\to \beta _1 \\ \text{Root}\left[x^3-4 x^2+5 x-4,2\right]\to \beta _2 & \text{Root}\left[x^3-4 x^2+5 x-4,3\right]\to \beta _3 \\ \text{Root}\left[76 x^3-228 x^2+28 x-1,1\right]\to \gamma _1 & \text{Root}\left[76 x^3-228 x^2+28 x-1,2\right]\to \gamma _2 \\ \text{Root}\left[76 x^3-228 x^2+28 x-1,3\right]\to \gamma _3 & \text{Root}\left[x^3-3 x^2+x-1,1\right]\to \delta _1 \\ \text{Root}\left[x^3-3 x^2+x-1,2\right]\to \delta _2 & \text{Root}\left[x^3-3 x^2+x-1,3\right]\to \delta _3 \\ \text{Root}\left[1176 x^3-3528 x^2-119 x-1,1\right]\to \epsilon _1 & \text{Root}\left[1176 x^3-3528 x^2-119 x-1,2\right]\to \epsilon _2 \\ \text{Root}\left[1176 x^3-3528 x^2-119 x-1,3\right]\to \epsilon _3 & \text{Root}\left[x^3-x^2-2 x-6,1\right]\to \zeta _1 \\ \text{Root}\left[x^3-x^2-2 x-6,2\right]\to \zeta _2 & \text{Root}\left[x^3-x^2-2 x-6,3\right]\to \zeta _3 \\ \text{Root}\left[229 x^3-687 x^2+230 x+4,1\right]\to \eta _1 & \text{Root}\left[229 x^3-687 x^2+230 x+4,2\right]\to \eta _2 \\ \text{Root}\left[229 x^3-687 x^2+230 x+4,3\right]\to \eta _3 & \text{Root}\left[x^3-3 x^2-x+4,1\right]\to \theta _1 \\ \text{Root}\left[x^3-3 x^2-x+4,2\right]\to \theta _2 & \text{Root}\left[x^3-3 x^2-x+4,3\right]\to \theta _3 \\ \text{Root}\left[49 x^3-147 x^2+35 x-1,1\right]\to \iota _1 & \text{Root}\left[49 x^3-147 
x^2+35 x-1,2\right]\to \iota _2 \\ \text{Root}\left[49 x^3-147 x^2+35 x-1,3\right]\to \iota _3 & \text{Root}\left[x^3-4 x^2+3 x+1,1\right]\to \kappa _1 \\ \text{Root}\left[x^3-4 x^2+3 x+1,2\right]\to \kappa _2 & \text{Root}\left[x^3-4 x^2+3 x+1,3\right]\to \kappa _3 \\ \text{Root}\left[404 x^3-1212 x^2+24 x+1,1\right]\to \lambda _1 & \text{Root}\left[404 x^3-1212 x^2+24 x+1,2\right]\to \lambda _2 \\ \text{Root}\left[404 x^3-1212 x^2+24 x+1,3\right]\to \lambda _3 & \text{Root}\left[x^3-2 x^2-4 x+6,1\right]\to \mu _1 \\ \text{Root}\left[x^3-2 x^2-4 x+6,2\right]\to \mu _2 & \text{Root}\left[x^3-2 x^2-4 x+6,3\right]\to \mu _3 \\ \text{Root}\left[169 x^3-507 x^2+1,1\right]\to \nu _1 & \text{Root}\left[169 x^3-507 x^2+1,2\right]\to \nu _2 \\ \text{Root}\left[169 x^3-507 x^2+1,3\right]\to \nu _3 & \text{Root}\left[x^3-x^2-4 x-1,1\right]\to \xi _1 \\ \text{Root}\left[x^3-x^2-4 x-1,2\right]\to \xi _2 & \text{Root}\left[x^3-x^2-4 x-1,3\right]\to \xi _3 \\ \end{array} $ } \captionof{table}{Key to level sums table} \end{center} Note that in the above table we only included maps of the form $(e, \tau_0, \tau_1)$; as before, Lemma \ref{onlye} brings the total up to $36\cdot 6 = 216$ maps.
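The row-sum behaviour is easy to replicate numerically. A minimal sketch (plain NumPy, for the $(e,e,e)$ case only, where $F_0=A_0$ and $F_1=A_1$) iterates the level-sum recursion of Proposition \ref{recurrenceSi}, checks the first row of the level-sums table, and watches $S(n)/S(n-1)$ approach the real root of $x^3-4x^2+5x-4$:

```python
import numpy as np

# For (e, e, e) the generators are F_0 = A_0 and F_1 = A_1.
A0 = np.array([[0, 0, 1], [1, 0, 0], [0, 1, 1]], dtype=np.int64)
A1 = np.array([[1, 0, 1], [0, 1, 0], [0, 0, 1]], dtype=np.int64)
M = A0 + A1

# Iterate (S_1(n), S_2(n), S_3(n)) = (S_1(n-1), S_2(n-1), S_3(n-1))(F_0 + F_1).
s = np.array([1, 1, 1], dtype=np.int64)
sums = []
for n in range(40):
    sums.append(int(s.sum()))
    s = s @ M

print(sums[:4])                                 # [3, 8, 22, 60]
print(4 * sums[2] - 5 * sums[1] + 4 * sums[0])  # 60 = S(3), matching the first table row
print(sums[-1] / sums[-2])                      # ~2.69562, real root of x^3 - 4x^2 + 5x - 4
```

Exact 64-bit integer arithmetic is safe here: $S(39)\approx 2\times 10^{17}$ still fits comfortably in an \texttt{int64}.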
\subsubsection{Six row sum recurrence relations already well-known} It turns out that 6 of the 11 unique row sum recurrence relations or sequences -- either with the same initial terms or with different ones -- had already been placed on Sloane's encyclopedia; in particular, these recurrence relations and corresponding Sloane sequence numbers are as follows: \begin{eqnarray*} S(n)&=&2S(n-1)+2S(n-2) ~~~~(\seqnum{A080040}, \seqnum{A155020}) \\ S(n)&=&3S(n-1)-S(n-2)+S(n-3) ~~~~(\seqnum{A200752})\\ S(n)&=&2S(n-1)+2S(n-2)-S(n-3) ~~~~(\seqnum{A061646})\\ S(n)&=&5S(n-1)-6S(n-2) ~~~~(\seqnum{A007689})\\ S(n)&=&4S(n-1)-3S(n-2)-S(n-3) ~~~~(\seqnum{A215404})\\ S(n)&=&S(n-1)+4S(n-2) ~~~~(\seqnum{A006131}) \end{eqnarray*} The new sequences\footnote{We have placed these sequences on Sloane's encyclopedia.} are generated by the following recurrence relations: \begin{eqnarray*} S(n)&=&4S(n-1)-5S(n-2)+4S(n-3)~~~~(\seqnum{A278612})\\ S(n)&=&S(n-1)+2S(n-2)+6S(n-3)~~~~(\seqnum{A278613})\\ S(n)&=&3S(n-1)+S(n-2)-4S(n-3)~~~~(\seqnum{A278614})\\ S(n)&=&2S(n-1)+4S(n-2)-6S(n-3)~~~~(\seqnum{A278615})\\ S(n)&=&S(n-1)+4S(n-2)+S(n-3)~~~~(\seqnum{A278616}) \end{eqnarray*} \section{Forbidden triples for $(e,e,e)$} \label{S6} This section explores which points in $\mathbb{Z}^3$ appear as terms in the TRIP-Stern sequence for $(e,e,e)$ and which points do not. We call these latter points \textit{forbidden triples}. \begin{definition} Let $S$ denote the set of points given by the TRIP-Stern sequence for $(e,e,e)$, let $P=\{(x,y,z)\in\mathbb{Z}^{3}\mid 0<x\le y < z\}\cup\{(1,1,1)\}$ denote the set of \textit{potential entries} in $S$ and let $F=P\setminus S$ denote the set of \textit{forbidden triples}. \end{definition} \begin{proposition} \label{order} We have that $S \subseteq P$. \end{proposition} \begin{proof} Suppose $(a,b,c)\in S$. We proceed by induction on the level of $(a,b,c)$. If $(a,b,c)$ is in the second level, then $(a, b, c) = (1,1,2)$, so it satisfies $0 < a \leq b < c$.
Suppose that $0 < a \leq b < c$ for all elements $(a, b, c)$ of the $n^{\rm th}$ level. If $(a,b,c)$ is in level $n + 1$, then $(a, b, c) = (d, e, f)A_{i_n}$ with $i_n \in \{0,1\}$ for some $(d,e,f)$ satisfying $0 < d \leq e < f$. If $i_n = 0$, then $(a, b, c) = (e, f, d + f)$. We know that $0 < e \leq f,$ and $f < d+f$ because $d$ is positive. If $i_n = 1$, then $(a,b,c) = (d,e,d+f)$. By the inductive hypothesis, $0 < d \leq e$ and $e < f < d+f$. \end{proof} By Proposition \ref{order}, elements of $F$ are precisely the potential entries that are not in the TRIP-Stern sequence for $(e,e,e)$. \begin{definition} We define the \textit{inverse map} $G$ of a triple $(a,b,c)$ to be \[ G= \begin{cases} (a,b,c-a), & \mbox{ if } a+b<c;\\ (c-b,a,b), & \mbox{ if } a+b \ge c \mbox{ and } a<b, \mbox{ or } a=1=c-b. \end{cases} \] \end{definition} The map $G$ acts as $A_{1}^{-1}$ if $a+b<c$ and as $A_{0}^{-1}$ if $a+b \ge c$. However, $G$ is not defined on all points of $P$. We will show that, for any point $(a,b,c)$ in $P$ that is not $(1,1,2)$, at most one of $(a,b,c)A_{1}^{-1}$ or $(a,b,c)A_{0}^{-1}$ can lie in the TRIP-Stern sequence for $(e,e,e)$. \begin{proposition}\label{notP} If $(a,b,c) \in P,$ and $(a,b,c) \neq (1,1,2)$, then either $(a,b,c)A_{0}^{-1}$ or $(a,b,c)A_{1}^{-1}$ is not in $P$. \end{proposition} \begin{proof} First assume $(a,b,c)$ has $a+b \ge c$. Then \[(a,b,c)A_{1}^{-1}=(a,b,c-a).\] However, we have that $b \ge c-a$. Then $(a,b,c)A_{1}^{-1}$ is not in $P$, unless $(a,b,c-a)=(1,1,1)$, which occurs if and only if $a=b=1, c=2$. Now suppose $(a,b,c)$ has $a+b<c$. Then \[(a,b,c)A_{0}^{-1}=(c-b,a,b).\] In order for this to lie in $P$ we need $c-b \le a < b$. In particular, $c \le b+a$, contradicting the assumption. Thus, either $(a,b,c)=(1,1,2)$ or at most one of $(a,b,c)A_{1}^{-1}$ or $(a,b,c)A_{0}^{-1}$ lies in $P$.
\end{proof} \begin{corollary} For every $X\in S$ except $(1,1,1)$, the point $X$ appears exactly twice in the TRIP-Stern sequence for $(e,e,e)$. Furthermore, $G$ maps $X$ to the unique $Y \in S$ such that either $YA_0=X$ or $YA_1=X$. \end{corollary} \begin{proof} If a point $(a,b,c) \neq (1,1,1)$ lies in $S$, then by definition either $(a,b,c)A_{1}^{-1}$ or $(a,b,c)A_{0}^{-1}$ must lie in $S$. Proposition \ref{notP} implies that only one of these can be in $P$. Thus, exactly one of these points is in $S$. This takes care of the second statement. Because the left and right subtrees of the TRIP-Stern sequence for $(e,e,e)$ are symmetric, we need only show that each $X\in S$ appears exactly once in the set of points generated by the action of $A_{1}$ and $A_{0}$ on $(1,1,2)$. Now suppose that up to level $n$ each entry in the TRIP-Stern sequence appears only once. Then, in level $n+1$, each element $X$ corresponds to either $XA_{1}^{-1}$ or $XA_{0}^{-1}$ in level $n$. By Proposition \ref{notP}, exactly one of $XA_{1}^{-1}$ or $XA_{0}^{-1}$ will lie in level $n$, and this is the only element that goes to $X$ under one of $A_0$ or $A_1$. This is the unique preimage in $S$, under $A_0$ and $A_1$, that goes to $X$. Then each entry in level $n+1$ appears for the first time, and in the level $n+1$ there are no repeated entries, completing the induction. \end{proof} \begin{definition} We define a \textit{germ} to be any element $(a,a,b) \in P$ with $ b<2a$. We call the set of all elements generated by action of $A_0$ and $A_{1}$ on $(a,a,b)$ the tree generated by $(a,a,b)$. \end{definition} Observe that the tree generated by $(1,1,1)$ is precisely the TRIP-Stern Sequence for $(e,e,e)$. The only points for which $G$ is not defined in $P$ are germs. Moreover, each application of $G$ to $X\in P$ decreases (strictly) the sum of the entries of $X$. 
Since $G$ is well-defined on all of $P$, excluding germs, after some number of applications of $G$ to an element $X$, we find a germ that generates $X$. The following lemma, whose proof is straightforward, is needed to strengthen the result. \begin{lemma} \label{lemmaG} For all $X \in P$, we have $G((X)A_1)=X$ and $G((X)A_0)=X$. \end{lemma} This lemma shows that there is a unique germ generating $X$ for each $X \in P$. \begin{definition} Let the \textit{germ of} $X$ be the value $G^n(X)$ such that $n$ is the largest integer for which $G^n(X)$ is defined. As noted above, this $G^n(X)$ will be a germ. \end{definition} \begin{theorem} Every element of $P$ lies in exactly one tree generated by a germ. Furthermore, an element $X \in P$ lies in the tree generated by $X_0$ if and only if the germ of $X$ is $X_0$. In particular, one can determine the germ of any given triple $(a,b,c)$ in a finite number of steps. \end{theorem} \begin{proof} If the germ of $X$ is $X_0$, then since G acts as $A_1^{-1}$ or $A_0^{-1}$ at each step, we have that $X$ can be written as $(X_0)A_{i_1}A_{i_2}\cdots A_{i_n}$ for some $i_j\in\{0,1\}$. Conversely, if $X$ is in the tree generated by $X_0$, then we can write $X$ as $(X_0)A_{i_1}A_{i_2}\cdots A_{i_n}$ for some $i_j\in \{0,1\}$. But then by Lemma~\ref{lemmaG}, we have that $G^n(X)=X_0$, so the germ of $X$ is $X_0$. \end{proof} \begin{corollary} No germ except $(1,1,1)$ lies in the TRIP-Stern sequence for $(e,e,e)$. All elements that do not lie in the TRIP-Stern Sequence are given by the action of $A_0$ and $A_1$ on a germ other than $(1,1,1)$. \end{corollary} \begin{proof} Both $A_1^{-1}$ and $A_0^{-1}$ take elements of the form $(a,a,b)$ with $b<2a$ outside of $P$. Then $(a,a,b)$ cannot be reached by the action of $A_1$ or $A_0$ on an element of $P$. \end{proof} No doubt a similar analysis can be done for any TRIP-Stern sequence, though we do not know how clean the analogues would be. 
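For readers who wish to experiment, the germ-finding procedure above admits a short computational sketch (an illustration only, not part of the argument; the helper names are ours). The matrices act on row vectors as $(d,e,f)A_0=(e,f,d+f)$ and $(d,e,f)A_1=(d,e,d+f)$, as in the proof of Proposition \ref{order}.

```python
# Sketch: compute the germ of a triple for (e, e, e) by iterating the
# inverse map G until a germ (a, a, b) with b < 2a is reached.

def apply_A0(t):
    d, e, f = t
    return (e, f, d + f)

def apply_A1(t):
    d, e, f = t
    return (d, e, d + f)

def G(t):
    """Inverse map: acts as A_1^{-1} when a + b < c, else as A_0^{-1}."""
    a, b, c = t
    return (a, b, c - a) if a + b < c else (c - b, a, b)

def is_germ(t):
    a, b, c = t
    return a == b and c < 2 * a

def germ(t):
    """Iterate G; this terminates since G strictly decreases a + b + c."""
    while not is_germ(t):
        t = G(t)
    return t

# Unwinding a word in A_0, A_1 applied to a germ recovers that germ
# (Lemma above), and elements of the TRIP-Stern tree have germ (1, 1, 1):
x = (2, 2, 3)
for step in (apply_A0, apply_A1, apply_A1, apply_A0, apply_A1):
    x = step(x)
assert germ(x) == (2, 2, 3)
assert germ((1, 2, 3)) == (1, 1, 1)
```

The two assertions mirror Lemma \ref{lemmaG} and the corollary: $(1,2,3)$ lies in the TRIP-Stern sequence for $(e,e,e)$, so its germ is $(1,1,1)$.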
\section{Generalized TRIP-Stern sequences} \label{S7} In the original definition of a TRIP-Stern sequence from Section \ref{TRIPSternDef}, we set the first triple in the sequence to be $a_1=(1,1,1)$. However, there is nothing canonical about this choice, which leads us to construct generalized TRIP-Stern sequences, where we set the initial triple to be $a_1=(a,b,c),$ for some $a,b,c\in \mathbb{R}$. \subsection{Construction of generalized TRIP-Stern sequences} \begin{definition} For any permutation triplet $(\sigma,\tau_0,\tau_1)$ in $S_{3}^{3}$, the \textit{generalized TRIP-Stern sequence} of $(\sigma, \tau_0, \tau_1)$ is the unique sequence such that, for some $a,b,c\in \mathbb{R},$ $a_1 = (a,b,c)$ and, for $n\geq 1$, \[\left\{ \!\! \begin{array}{ll} a_{2n} &= a_n \cdot F_0 \\ a_{2n+1} &= a_n \cdot F_1 \end{array} \right.\] The $n^{\rm th}$ level of the generalized TRIP-Stern sequence is the set of $a_m$ with $2^{n-1} \leq m < 2^n$. \end{definition} As for the standard TRIP-Stern sequence, each choice of $(\sigma, \tau_0, \tau_1)$ also produces some generalized TRIP-Stern sequence. \begin{definition} Let $(a,b,c)$ be any triple of real numbers and let $(\sigma,\tau_0,\tau_1) \in S_3 \times S_3 \times S_3$. Let $\mathcal{T}(\sigma,\tau_0,\tau_1)_{(a,b,c)}$ denote the tree generated from $(\sigma,\tau_0,\tau_1)$ using $(a,b,c)$ as a seed. \end{definition} \subsection{Maximum terms and positions thereof for generalized TRIP-Stern sequences} \label{S8} This section examines maximum terms in any given level of a generalized TRIP-Stern sequence, as well as the positions of those maximum terms within the given level. For a given seed $(a,b,c)$ and for an $n$-tuple $v$ of zeros and ones, define \[\triangle(v)=(a,b,c)F_{i_1}F_{i_2} \cdots F_{i_n},\] which can be written as $\triangle(v)=(b_1,b_2,b_3)$. As before, let $|v|$ denote the number of entries in $v$. 
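For concreteness, the generalized construction can be sketched for the triplet $(e,e,e)$, where $F_0=A_0$ and $F_1=A_1$ act on row vectors as before and the seed $a_1=(a,b,c)$ is arbitrary (a hedged illustration; the function name is ours):

```python
# Sketch: levels 1..n of the generalized tree T(e,e,e)_{(a,b,c)}, using
# F_0 = A_0 : (d,e,f) -> (e,f,d+f) and F_1 = A_1 : (d,e,f) -> (d,e,d+f).

def tree_levels(seed, n):
    levels = [[seed]]
    for _ in range(n - 1):
        nxt = []
        for (d, e, f) in levels[-1]:
            nxt.append((e, f, d + f))   # a_{2n}   = a_n F_0
            nxt.append((d, e, d + f))   # a_{2n+1} = a_n F_1
        levels.append(nxt)
    return levels

# The seed (1, 1, 1) recovers the standard TRIP-Stern sequence:
assert tree_levels((1, 1, 1), 3)[2] == [(1, 2, 3), (1, 1, 3),
                                        (1, 2, 3), (1, 1, 3)]
```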
\begin{definition} The \textit{maximum entry} on level $n$ of a generalized TRIP-Stern sequence is $ m_n = \max_{|v| = n\:\:} \max_{i\in\{1,2,3\}}b_i(v)$. \end{definition} \begin{lemma} \label{onlye2} Suppose $(\sigma,\tau_0,\tau_1)\in S_{3}^3,$ $\kappa\in S_3,$ and let $v=(i_1, \ldots, i_n)$. Then \[\triangle_{(\kappa \sigma, \tau_0 \kappa^{-1}, \tau_1 \kappa^{-1})} (v)=(a,b,c)\cdot \kappa \sigma A_{i_1} \tau_{i_1} \cdot \sigma A_{i_2}\tau_{i_2} \cdots \sigma A_{i_n} \tau_{i_n} \kappa^{-1}.\] \end{lemma} \begin{proof} The proof uses the same technique as in Lemma \ref{onlye}. \end{proof} We will now examine the sequences of generalized TRIP-Stern maximal terms induced by select triangle partition maps. We find two broad classes of generalized TRIP-Stern sequences for paths through the corresponding trees that will locate the maximal terms at each level. These depend to some extent on the initial seeds. As the proofs are straightforward and similar to the earlier ones, we will omit them. \begin{theorem} \label{paths2} We have \begin{enumerate} \item For any seed $(a,b,c),$ if $a\geq b\geq c>0$, the sequence of maximal terms for TRIP-Stern sequences induced by the nine maps $(e,13,123),$ $(e,e,13),$ $(e,13,13),$ $(e,23,13),$ $(e,23,123),$ $(e,123,13),$ $(e,123,123),$ $(e,132,13),$ and $(e,132,123)$ lies on the connected path of the tree $\mathcal{T}(e,\tau_0,\tau_1)_{(a,b,c)}$ made by always selecting the right edge of $\mathcal{T}(e,\tau_0,\tau_1)_{(a,b,c)}$. \item For any seed $(a,b,c),$ if $0<a \leq b \leq c$, the sequence of maximal terms for TRIP-Stern sequences induced by the eleven maps $(e,e,e),$ $(e,e,12),$ $(e,e,23),$ $(e,e,123),$ $(e,e,132),$ $(e,12,e),$ $(e,12,12),$ $(e,12,13),$ $(e,12,23),$ $(e,12,123)$ and $(e,12,132)$ lies on the connected path through the tree $\mathcal{T}(e,\tau_0,\tau_1)_{(a,b,c)}$ made by always selecting the left edge of $\mathcal{T}(e,\tau_0,\tau_1)_{(a,b,c)}$. 
\end{enumerate} \end{theorem} In the above theorem, under select conditions, we have accounted for sequences of generalized TRIP-Stern sequence maximal terms generated by 20 maps. Lemma \ref{onlye2} brings this total up to $20\cdot 6=120$ maps, as long as the conditions -- which simply guarantee that the components of the initial seed will have the right magnitudes to satisfy the conditions of Theorem \ref{paths2} after being acted upon by the first $\kappa$ in Lemma \ref{onlye2} -- listed in the theorem below are satisfied: \begin{theorem} \label{kappa} \begin{enumerate} \item Let $(\sigma,\tau_0,\tau_1)\in S_{3}^{3}$ be one of the 9 permutation triplets listed in Theorem \ref{paths2}.1. For any seed $(a,b,c),$ the sequence of maximal terms for TRIP-Stern sequences induced by maps of the form $(\kappa \sigma, \tau_0 \kappa^{-1}, \tau_1 \kappa^{-1})$ lies on the connected path through the tree $\mathcal{T}(\kappa \sigma, \tau_0 \kappa^{-1}, \tau_1 \kappa^{-1})_{(a,b,c)}$ made by selecting the right edge of $\mathcal{T}(\kappa \sigma, \tau_0 \kappa^{-1}, \tau_1 \kappa^{-1})_{(a,b,c)},$ as long as the following conditions are satisfied: \begin{enumerate} \item For $\kappa=12,$ require $b\geq a\geq c>0,$ \item for $\kappa=13,$ require $c\geq b\geq a>0,$ \item for $\kappa=23,$ require $a\geq c\geq b>0,$ \item for $\kappa=123,$ require $c\geq a\geq b>0,$ and \item for $\kappa=132,$ require $b\geq c\geq a>0$. \end{enumerate} \item Let $(\sigma,\tau_0,\tau_1)\in S_{3}^{3}$ be one of the 11 permutation triplets listed in Theorem \ref{paths2}.2.
For any seed $(a,b,c),$ the sequence of maximal terms for TRIP-Stern sequences induced by maps of the form $(\kappa \sigma, \tau_0 \kappa^{-1}, \tau_1 \kappa^{-1})$ lies on the connected path through the tree $\mathcal{T}(\kappa \sigma, \tau_0 \kappa^{-1}, \tau_1 \kappa^{-1})_{(a,b,c)}$ made by selecting the left edge of $\mathcal{T}(\kappa \sigma, \tau_0 \kappa^{-1}, \tau_1 \kappa^{-1})_{(a,b,c)},$ as long as the following conditions are satisfied: \begin{enumerate} \item For $\kappa=12,$ require $0<b\leq a\leq c,$ \item for $\kappa=13,$ require $0<c\leq b\leq a,$ \item for $\kappa=23,$ require $0<a\leq c\leq b,$ \item for $\kappa=123,$ require $0<c\leq a\leq b,$ and \item for $\kappa=132,$ require $0<b\leq c\leq a$. \end{enumerate} \end{enumerate} \end{theorem} \begin{proof} This follows immediately by the results of Theorem \ref{paths2} and remembering that $\kappa$ is simply a permutation. \end{proof} For any node $(r,s,t)$ in the tree $\mathcal{T}(\sigma,\tau_0,\tau_1)_{(a,b,c)},$ it is natural to consider the positions of maximal terms within the tree that would be generated using that $(r,s,t),$ and not the original $(a,b,c)$ as its seed. Clearly, the problem of characterizing these terms is equivalent to characterizing the maximal terms on the nodes below and connected with $(r,s,t)$; in this sense, the problem is one of finding a sequence of \textit{local maximal terms}. It is straightforward to prove the analogous theorems. \subsection{Minimal terms and positions thereof} \label{S9} We also have analogs for finding minimal terms for a number of generalized TRIP-Stern sequences. As the proofs are similar to the earlier ones, we omit them. 
\begin{theorem} For any seed $(a,b,c),$ the minimal terms $b_n$ in the TRIP-Stern sequences corresponding to the permutation triplets listed below lie on the left-most path in the corresponding TRIP-Stern tree: \begin{enumerate} \item For the maps $(e,12,e), (e,12,12), (e,12,13), (e,12,23),$ $(e,12,123), (e,12,132)$ under the condition that $b<a$ and $b<c$ -- the minimal term will have value $b$ at every level. \item For the maps $(e,123,e),$ $(e,123,12), (e,123,13),$ $(e,123,23), (e,123,123), (e,123,132)$ under the condition that $b<a$ and $b<c$ or $c<a$ and $c<b$; correspondingly, the minimal term will have value $b$ or $c$ at every level. \item For the maps $(e,23,e), (e,23,12), (e,23,13), (e,23,23),$ $(e,23,123), (e,23,132),$ under the condition that $c<a$ and $c<b$ -- the minimal term will have value $c$ at every level. \end{enumerate} \end{theorem} \begin{theorem} For any seed $(a,b,c),$ the minimal terms $b_n$ in the TRIP-Stern sequences corresponding to the permutation triplets listed below lie on the right-most path in the corresponding TRIP-Stern tree: \begin{enumerate} \item For the maps $(e,e,23), (e,12,23), (e,13,23),$ $(e,23,23), (e,123,23), (e,132,23),$ under the condition that $a<b$ and $a<c$ -- the minimal term will have value $a$ at every level. \item For the maps $(e,e,13), (e,12,13), (e,13,13), (e,23,13),$ $(e,123,13), (e,132,13),$ under the condition that $b<a$ and $b<c$ -- the minimal term will have value $b$ at every level. \item For the maps $(e,e,e),$ $(e,12,e),$ $(e,13,e), (e,23,e), (e,123,e), (e,132,e),$ under the condition that $a<b$ and $a<c$ XOR $b<a$ and $b<c$; correspondingly, the minimal term will have value $a$ or $b$ at every level. \end{enumerate} \end{theorem} In the above two theorems, we have found the positions and values of the minimal terms of generalized TRIP-Stern sequences generated by 27 maps under certain conditions on the initial seed. Note that certain maps appear in both the ``left" and ``right" lists. 
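Both path characterizations can be spot-checked numerically for the triplet $(e,e,e)$. The sketch below (illustration only; the helper names are ours) uses a seed with $0<a\leq b\leq c$ as well as $a<b$ and $a<c$, so that by Theorem \ref{paths2} the level maxima lie on the all-left path, while the level minima sit on the all-right path with constant value $a$:

```python
# Sketch: for (e, e, e), verify that the maximum of each level equals the
# maximal entry of the all-left-path node, and that the minimum of each
# level equals the seed's first component a, attained on the all-right path.

def children(t):
    d, e, f = t
    return [(e, f, d + f), (d, e, d + f)]   # left (F_0), right (F_1) child

def check_paths(seed, depth):
    level, left, right = [seed], seed, seed
    for _ in range(depth):
        level = [c for t in level for c in children(t)]
        left, right = children(left)[0], children(right)[1]
        if max(max(t) for t in level) != max(left):
            return False                     # maximum must lie on left path
        if min(min(t) for t in level) != seed[0] or min(right) != seed[0]:
            return False                     # minimum is a, on the right path
    return True

assert check_paths((1, 2, 3), 8)
```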
Using the results of Lemma \ref{onlye2} and imposing conditions analogous to those used in Theorem \ref{kappa} for maximal terms -- which simply guarantee that the components of the initial seed will have the right magnitudes to satisfy the conditions of the above two theorems after being acted upon by the first $\kappa$ in Lemma \ref{onlye2} -- brings this total up to $27\cdot 6=162$ maps. \subsection{Level sums for generalized TRIP-Stern sequences} \label{S10} It is natural, as was done with standard TRIP-Stern sequences, to consider level sums for generalized TRIP-Stern sequences. We take an identical approach to that for standard TRIP-Stern sequences, though the explicit forms are more complex, as one would expect. \begin{theorem} The family of triangle partition maps leads to 11 distinct sequences of sums $(S(n))_{n\geq 1}$ with recurrence relations and explicit forms as shown in the tables below. \end{theorem} \begin{proof} The proof follows by direct calculation. For each triangle partition map $T_{\sigma,\tau_0,\tau_1},$ compute a generalized TRIP-Stern sequence given by setting $a_1=(a,b,c)$ instead of setting $a_1=(1,1,1)$ as we had done in Section \ref{S2.5}, partitioning the sequence into levels as before. Sum the terms of each level $n$ to yield a sequence of row sums $\left(S(n)\right)_{n\geq 1}$. If we can prove that the first $m$ terms of $\left(S(n)\right)_{n\geq 1}$ satisfy an $(m-1)$-term recurrence relation, it follows that the sequence must be generated by that recurrence relation. We have carried out this procedure for all 216 permutation triplets $(\sigma,\tau_0,\tau_1)$ to find recurrence relations for the associated row sums, from which the explicit form for the $n^{\rm th}$ term in the sequence $\left(S(n)\right)_{n\geq 1}$ was easily calculated. The results are presented in the tables below -- indeed, the family of triangle partition maps generates only 11 distinct row sums.
The first column lists the recurrence relation, the second lists the explicit form of that recurrence relation and the third lists the permutation triplets whose TRIP-Stern level sums follow this relation. Note that Greek letters represent zeros of certain polynomials; see the key below. For example, $\text{Root}\left[x^3 - 4 x^2 + 5 x - 4, 1\right]\to\alpha_1$ means ``let $\alpha_1$ be the first root of $x^3 - 4 x^2 + 5 x - 4=0$." \end{proof} \begin{center} \scalebox{0.85}{ $ \begin{array}{c|c|c} \mbox{Recurrence relation for $S(n)$} & \mbox{Explicit form for $S(n)$} & (e,\tau_0,\tau_1)\\ \hline\hline 4 S(n-3)-5 S(n-2)+4 S(n-1) & \begin{array}{c}\left(c \zeta _1+b \eta _1\right) \alpha _1^n+a \left(\beta _1 \alpha _1^n+\alpha _2^n \beta _2+\alpha _3^n \beta _3\right)\\+\alpha _3^n \left(b \gamma _3+c \delta _2\right) \epsilon _2+\alpha _2^n \left(c \zeta _3+b \eta _2\right)\end{array} & \begin{array}{c} (e,e,e), \\ (e,123,123) \\ \end{array} \\ \hline 2 S(n-2)+2 S(n-1) & \begin{array}{c}\frac{1}{6} (\left(1-\sqrt{3}\right)^n (\left(3-2 \sqrt{3}\right) a\\-\left(-3+\sqrt{3}\right) b+\left(3-2 \sqrt{3}\right) c)\\+\left(1+\sqrt{3}\right)^n (\left(3+2 \sqrt{3}\right) a\\+\left(3+\sqrt{3}\right) b+\left(3+2 \sqrt{3}\right) c))\end{array} & \begin{array}{c} (e,e,12), \\ (e,e,123),\\ (e,13,12), \\ (e,13,123) \\ \end{array} \\ \hline S(n-3)-S(n-2)+3 S(n-1) & \begin{array}{c}a \iota _1 \theta _1^n+b \kappa _1 \theta _1^n+c \mu _1 \theta _1^n+a \theta _2^n \iota _2\\+b \theta _2^n \kappa _3+\theta _3^n \left(a \iota _3+b \kappa _2+c \mu _2\right)+c \theta _2^n \mu _3 \end{array}& \begin{array}{c} (e,e,13), \\ (e,12,123) \\ \end{array} \\ \hline -S(n-3)+2 S(n-2)+2 S(n-1) & \begin{array}{c}\frac{2^{-n}}{5 \left(5+\sqrt{5}\right)} ((-2)^n \left(5+\sqrt{5}\right) (b-c)\\ +\left(3-\sqrt{5}\right)^n (-5 \left(-1+\sqrt{5}\right) a\\+\left(5-3 \sqrt{5}\right) b-2 \left(-5+\sqrt{5}\right) c)\\+\left(3+\sqrt{5}\right)^n (10 \left(2+\sqrt{5}\right) a\\+\left(15+7 \sqrt{5}\right) b+4 
\left(5+2 \sqrt{5}\right) c))\end{array} & \begin{array}{c} (e,e,23),\\ (e,12,23), \\ (e,12,132), \\ (e,23,e), \\ (e,23,13), \\ (e,23,123), \\ (e,123,23), \\ (e,123,132), \\ (e,132,e), \\ (e,132,13) \\ \end{array} \\ \hline 6 S(n-3)+2 S(n-2)+S(n-1) & \begin{array}{c} c \xi _1 \nu _1^n+b o_1 \nu _1^n+a \rho _1 \nu _1^n+c \nu _3^n \xi _2+b \nu _3^n o_3+\\ \nu _2^n \left(c \xi _3+b o_2+a \rho _2\right)+a \nu _3^n \rho _3 \end{array}& \begin{array}{c} (e,e,132), \\ (e,132,123) \\ \end{array} \\ \hline 5 S(n-1)-6 S(n-2) & 2^n b+3^n (a+c) & \begin{array}{c} (e,12,e), \\ (e,12,13), \\ (e,123,e), \\ (e,123,13) \\ \end{array} \\ \hline -4 S(n-3)+S(n-2)+3 S(n-1) & \begin{array}{c}c \tau _2 \sigma _1^n+a \upsilon _1 \sigma _1^n+b \phi _1 \sigma _1^n+c \sigma _2^n \tau _1+a \sigma _2^n \upsilon _2+ \\b \sigma _2^n \phi _2+\sigma _3^n \left(c \tau _3+a \upsilon _3+b \phi _3\right)\end{array} & \begin{array}{c} (e,12,12), \\ (e,13,13) \\ \end{array} \\ \hline -S(n-3)-3 S(n-2)+4 S(n-1) &\begin{array}{c} c \psi _1 \chi _1^n+a \omega _2 \chi _1^n+b \digamma _1 \chi _1^n+c \chi _2^n \psi _2+a \chi _2^n \omega _1\\+b \chi _2^n \digamma _2+\chi _3^n \left(c \psi _3+a \omega _3+b \digamma _3\right)\end{array} & \begin{array}{c} (e,13,e), \\ (e,123,12) \\ \end{array} \\ \hline -6 S(n-3)+4 S(n-2)+2 S(n-1) & \begin{array}{c} a \varepsilon _2 \Pi_1^n+c \vartheta _1 \Pi_1^n+b \varsigma_1 \Pi_1^n+a \varepsilon _1 \Pi_2^n+c \vartheta _2 \Pi_2^n\\ +b \varsigma_2 \Pi_2^n+\left(a \varepsilon _3+c \vartheta _3+b \varsigma_3\right) \Pi_3^n \end{array}& \begin{array}{c} (e,13,23), \\ (e,23,12) \\ \end{array} \\ \hline S(n-3)+4 S(n-2)+S(n-1) & \begin{array}{c}a \varpi _2 \varkappa _1^n+c \varrho _1 \varkappa _1^n+b \varphi _2 \varkappa _1^n+a \varkappa _2^n \varpi _1+c \varkappa _2^n \varrho _2\\ +b \varkappa _2^n \varphi _1+\varkappa _3^n \left(a \varpi _3+c \varrho _3+b \varphi _3\right)\end{array} & \begin{array}{c} (e,13,132), \\ (e,132,12) \\ \end{array} \\ \hline 4 S(n-2)+S(n-1) & 
\begin{array}{c}\frac{1}{34} (\left(\frac{1}{2} \left(1-\sqrt{17}\right)\right)^n (\left(17-5 \sqrt{17}\right) a\\ +\left(17-3 \sqrt{17}\right) b+\left(17-5 \sqrt{17}\right) c)\\ +\left(\frac{1}{2} \left(1+\sqrt{17}\right)\right)^n (\left(17+5 \sqrt{17}\right) a\\+\left(17+3 \sqrt{17}\right) b+\left(17+5 \sqrt{17}\right) c))\end{array} & \begin{array}{c} (e,23,23), \\ (e,23,132),\\ (e,132,23), \\ (e,132,132) \\ \end{array} \end{array}$ } \captionof{table}{Level sums} \end{center} \begin{center} \scalebox{0.85}{ $ \begin{array}{l|l} \hline\hline \text{Root}\left[x^3-4 x^2+5 x-4,1\right]\to \alpha _1 & \text{Root}\left[x^3-4 x^2+5 x-4,2\right]\to \alpha _2 \\ \text{Root}\left[x^3-4 x^2+5 x-4,3\right]\to \alpha _3 & \text{Root}\left[58 x^3-58 x^2-17 x-2,1\right]\to \beta _1 \\ \text{Root}\left[58 x^3-58 x^2-17 x-2,2\right]\to \beta _2 & \text{Root}\left[58 x^3-58 x^2-17 x-2,3\right]\to \beta _3 \\ \text{Root}\left[x^3+4 x^2+x+2,3\right]\to \gamma _3 & \text{Root}\left[x^3+5 x^2-3 x+1,2\right]\to \delta _2 \\ \text{Root}\left[116 x^3+x+1,2\right]\to \epsilon _2 & \text{Root}\left[116 x^3-116 x^2-7 x-1,1\right]\to \zeta _1 \\ \text{Root}\left[116 x^3-116 x^2-7 x-1,3\right]\to \zeta _3 & \text{Root}\left[116 x^3-116 x^2+25 x-2,1\right]\to \eta _1 \\ \text{Root}\left[116 x^3-116 x^2+25 x-2,2\right]\to \eta _2 & \text{Root}\left[x^3-3 x^2+x-1,1\right]\to \theta _1 \\ \text{Root}\left[x^3-3 x^2+x-1,2\right]\to \theta _2 & \text{Root}\left[x^3-3 x^2+x-1,3\right]\to \theta _3 \\ \text{Root}\left[19 x^3-19 x^2-3 x-1,1\right]\to \iota _1 & \text{Root}\left[19 x^3-19 x^2-3 x-1,2\right]\to \iota _2 \\ \text{Root}\left[19 x^3-19 x^2-3 x-1,3\right]\to \iota _3 & \text{Root}\left[38 x^3-38 x^2+10 x-1,1\right]\to \kappa _1 \\ \text{Root}\left[38 x^3-38 x^2+10 x-1,2\right]\to \kappa _2 & \text{Root}\left[38 x^3-38 x^2+10 x-1,3\right]\to \kappa _3 \\ \text{Root}\left[x^3-3 x^2+x-1,1\right]\to \lambda _1 & \text{Root}\left[x^3-3 x^2+x-1,2\right]\to \lambda _2 \\ \text{Root}\left[x^3-3 
x^2+x-1,3\right]\to \lambda _3 & \text{Root}\left[76 x^3-76 x^2-2 x-1,1\right]\to \mu _1 \\ \text{Root}\left[76 x^3-76 x^2-2 x-1,2\right]\to \mu _2 & \text{Root}\left[76 x^3-76 x^2-2 x-1,3\right]\to \mu _3 \\ \text{Root}\left[x^3-x^2-2 x-6,1\right]\to \nu _1 & \text{Root}\left[x^3-x^2-2 x-6,2\right]\to \nu _2 \\ \text{Root}\left[x^3-x^2-2 x-6,3\right]\to \nu _3 & \text{Root}\left[147 x^3-147 x^2-7 x-1,1\right]\to \xi _1 \\ \text{Root}\left[147 x^3-147 x^2-7 x-1,2\right]\to \xi _2 & \text{Root}\left[147 x^3-147 x^2-7 x-1,3\right]\to \xi _3 \\ \text{Root}\left[588 x^3-588 x^2+77 x-4,1\right]\to o_1 & \text{Root}\left[588 x^3-588 x^2+77 x-4,2\right]\to o_2 \\ \text{Root}\left[588 x^3-588 x^2+77 x-4,3\right]\to o_3 & \text{Root}\left[1176 x^3-1176 x^2-161 x-6,1\right]\to \rho _1 \\ \text{Root}\left[1176 x^3-1176 x^2-161 x-6,2\right]\to \rho _2 & \text{Root}\left[1176 x^3-1176 x^2-161 x-6,3\right]\to \rho _3 \\ \text{Root}\left[x^3-3 x^2-x+4,1\right]\to \sigma _1 & \text{Root}\left[x^3-3 x^2-x+4,2\right]\to \sigma _2 \\ \text{Root}\left[x^3-3 x^2-x+4,3\right]\to \sigma _3 & \text{Root}\left[229 x^3-229 x^2-33 x+1,1\right]\to \tau _1 \\ \text{Root}\left[229 x^3-229 x^2-33 x+1,2\right]\to \tau _2 & \text{Root}\left[229 x^3-229 x^2-33 x+1,3\right]\to \tau _3 \\ \text{Root}\left[229 x^3-229 x^2+5 x+2,1\right]\to \upsilon _1 & \text{Root}\left[229 x^3-229 x^2+5 x+2,2\right]\to \upsilon _2 \\ \text{Root}\left[229 x^3-229 x^2+5 x+2,3\right]\to \upsilon _3 & \text{Root}\left[229 x^3-229 x^2+61 x-2,1\right]\to \phi _1 \\ \text{Root}\left[229 x^3-229 x^2+61 x-2,2\right]\to \phi _2 & \text{Root}\left[229 x^3-229 x^2+61 x-2,3\right]\to \phi _3 \\ \text{Root}\left[x^3-4 x^2+3 x+1,1\right]\to \chi _1 & \text{Root}\left[x^3-4 x^2+3 x+1,2\right]\to \chi _2 \\ \text{Root}\left[x^3-4 x^2+3 x+1,3\right]\to \chi _3 & \text{Root}\left[49 x^3-49 x^2+1,1\right]\to \psi _1 \\ \text{Root}\left[49 x^3-49 x^2+1,2\right]\to \psi _2 & \text{Root}\left[49 x^3-49 x^2+1,3\right]\to \psi _3 \\ 
\text{Root}\left[49 x^3-49 x^2-14 x+1,1\right]\to \omega _1 & \text{Root}\left[49 x^3-49 x^2-14 x+1,2\right]\to \omega _2 \\ \text{Root}\left[49 x^3-49 x^2-14 x+1,3\right]\to \omega _3 & \text{Root}\left[49 x^3-49 x^2+14 x-1,1\right]\to \digamma _1 \\ \text{Root}\left[49 x^3-49 x^2+14 x-1,2\right]\to \digamma _2 & \text{Root}\left[49 x^3-49 x^2+14 x-1,3\right]\to \digamma _3 \\ \text{Root}\left[x^3-2 x^2-4 x+6,1\right]\to \Pi_1 & \text{Root}\left[x^3-2 x^2-4 x+6,2\right]\to \Pi_2 \\ \text{Root}\left[x^3-2 x^2-4 x+6,3\right]\to \Pi_3 & \text{Root}\left[101 x^3-101 x^2+19 x-1,1\right]\to \varsigma_1 \\ \text{Root}\left[101 x^3-101 x^2+19 x-1,2\right]\to \varsigma_2 & \text{Root}\left[101 x^3-101 x^2+19 x-1,3\right]\to \varsigma_3 \\ \text{Root}\left[202 x^3-202 x^2-42 x-1,1\right]\to \varepsilon _1 & \text{Root}\left[202 x^3-202 x^2-42 x-1,2\right]\to \varepsilon _2 \\ \text{Root}\left[202 x^3-202 x^2-42 x-1,3\right]\to \varepsilon _3 & \text{Root}\left[404 x^3-404 x^2-14 x+3,1\right]\to \vartheta _1 \\ \text{Root}\left[404 x^3-404 x^2-14 x+3,2\right]\to \vartheta _2 & \text{Root}\left[404 x^3-404 x^2-14 x+3,3\right]\to \vartheta _3 \\ \text{Root}\left[x^3-x^2-4 x-1,1\right]\to \varkappa _1 & \text{Root}\left[x^3-x^2-4 x-1,2\right]\to \varkappa _2 \\ \text{Root}\left[x^3-x^2-4 x-1,3\right]\to \varkappa _3 & \text{Root}\left[169 x^3-169 x^2-26 x+1,1\right]\to \varpi _1 \\ \text{Root}\left[169 x^3-169 x^2-26 x+1,2\right]\to \varpi _2 & \text{Root}\left[169 x^3-169 x^2-26 x+1,3\right]\to \varpi _3 \\ \text{Root}\left[169 x^3-169 x^2-13 x+5,1\right]\to \varrho _1 & \text{Root}\left[169 x^3-169 x^2-13 x+5,2\right]\to \varrho _2 \\ \text{Root}\left[169 x^3-169 x^2-13 x+5,3\right]\to \varrho _3 & \text{Root}\left[169 x^3-169 x^2+26 x-1,1\right]\to \varphi _1 \\ \text{Root}\left[169 x^3-169 x^2+26 x-1,2\right]\to \varphi _2 & \text{Root}\left[169 x^3-169 x^2+26 x-1,3\right]\to \varphi _3 \\ \end{array} $ } \captionof{table}{Key to level sums table} \end{center} 
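The first row of the level-sums table can be spot-checked numerically for $(e,e,e)$ with an arbitrary integer seed (a sketch, not part of the proof; the function name is ours):

```python
# Sketch: verify S(n) = 4S(n-1) - 5S(n-2) + 4S(n-3) for the level sums of
# T(e,e,e)_{(a,b,c)}, as in the first row of the level-sums table above.

def level_sums(seed, n):
    sums, level = [], [seed]
    for _ in range(n):
        sums.append(sum(sum(t) for t in level))
        level = [c for (d, e, f) in level
                 for c in ((e, f, d + f), (d, e, d + f))]
    return sums

S = level_sums((2, 5, 7), 10)
assert all(S[k] == 4*S[k-1] - 5*S[k-2] + 4*S[k-3] for k in range(3, 10))
```

With the standard seed $(1,1,1)$ the same routine returns $3, 8, 22, 60, 162, \ldots$, the row sums of Section \ref{S2.5}.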
\section{Conclusion} \label{Conclusion} This paper has used a collection of multidimensional continued fractions to construct a family of sequences called TRIP-Stern sequences. These sequences reflect the properties of the multidimensional continued fractions from which they are generated. We have studied the sequences of maximal and minimal terms -- and positions thereof. We have also characterized the sums of levels and examined restrictions on terms appearing in a given TRIP-Stern sequence. We found that several of the level sum sequences or corresponding recurrence relations are well-known. Lastly, we introduced generalized TRIP-Stern sequences and proved several analogous results. We will conclude with a few unanswered questions: Do recurrence relations for row maxima and their locations exist in general? We ask this because such relations could not always be found. What are the forbidden triples corresponding to $(\sigma,\tau_0,\tau_1)\in S_{3}^{3}$ other than $(e,e,e)$? Do the distributions of terms in the TRIP-Stern sequences have any interesting properties? The terms of the TRIP-Stern sequences are the denominators of the convergents of the corresponding multidimensional continued fractions. Can these sequences reveal anything about approximation properties of multidimensional continued fractions? As discussed in Dasaratha et al.\ \cite{SMALL11q1}, many known multidimensional continued fractions are combinations of our family of $216$ maps; as a result, it may be of interest to construct analogous sequences using select combination maps. 
There are polynomial analogs of Stern's diatomic sequence (as in the work of Dilcher and Stolarsky \cite{Dilcher-Stokarsky07, Dilcher-Stokarsky09}, of Coons \cite{Coons10}, of Dilcher and Ericksen \cite{Dilcher-Ericksen14}, of Klav\v{z}ar, Milutinovi\'{c} and Petr \cite{Klavzar-Milutinovic-Petr07}, of Ulas \cite{Ulas-Ulas11, Ulas12}, of Vargas \cite{Vargas12}, of Bundschuh \cite{Bundschuh12}, of Bundschuh and V\"{a}\"{a}n\"{a}nen \cite{Bundschun-Vaananen13} and of Allouche and Mend\`{e}s France \cite{Allouche-Mendes France12}). What are the polynomial analogs for TRIP-Stern sequences? In essence, in this paper we start with a triple of numbers $v=(a,b,c)$ and two $3\times 3$ matrices $A$ and $B$ and then examine the concatenation of the triples \[vA, vB, vAA, vAB, vBA, vBB, vAAA, vAAB, vABA, \ldots.\] Since there are 216 different triangle partition algorithms, we have 216 different pairs of $3\times 3$ matrices. Naively, then, we would expect there to be 216 different types of sequences, or 216 different stories. As we have seen, this is not the case. We have found clear patterns and classes among the 216 different TRIP-Stern sequences. The real question is why these patterns exist. Further, do the TRIP-Stern sequences that share, say, common sequences of maximum terms, have common number-theoretic properties? These questions strike us as hard. \section{Acknowledgments} We thank L. Pedersen and the referee for useful comments and the National Science Foundation for their support of this research via grant DMS-0850577.
\section{Introduction} It is well known that in the space of dimension higher than two the many-particle wave function is either symmetric or antisymmetric under a permutation group operation; this property leads to the division into the systems of bosons and fermions, respectively. As a consequence, the distribution function for the ideal gas is given either by the Bose-Einstein or by the Fermi-Dirac functions \cite{hei}. In low-dimensional systems ($d=1$ and $2$) the situation changes drastically because e.g., a proper symmetry group in two dimensions for the hard-core particles is the braid group, the characters of which are complex numbers \cite{l}. In such instances the distribution function has not been determined as yet. On the other hand, the distribution function can be changed by the interaction among particles. Such a situation arises for instance at the critical point when the system undergoes a phase transition. Below the critical temperature (e.g., in the superconducting phase) the distribution function changes its form from that in the normal state. So, the statistical properties of the particles are influenced by both system dimensionality and by the character of dynamical interaction between particles. In his remarkable paper \cite{hal}, Haldane noted that the distribution function can also differ from the Bose-Einstein or the Fermi-Dirac form in the normal state. He generalized the Pauli exclusion principle by introducing the concept of {\bf statistical interaction} which determines how the number of accessible orbitals changes when particles are added to the system. The paper dealt with the many-particle Hilbert space of finite dimension. The limitation turned out to be irrelevant. Namely, Murthy and Shankar showed \cite{ms} that the statistical interaction, when extended to the Hilbert space of infinite dimension, is proportional to the second virial coefficient. 
Very recently, Wu \cite{wu} solved the problem of the distribution function for Haldane's fractional statistics. He found a general form of the equations for the distribution function for an arbitrary statistical interaction and discussed the thermodynamics of such a gas. Furthermore, Bernard and Wu \cite{be} found the explicit form of the statistical interaction in the case of interacting scalar particles in one dimension. In the particular case of bosons they showed that, as the amplitude of a local delta-function interaction changes from zero to infinity, the distribution function evolves from the Bose-Einstein to the Fermi-Dirac form. In this paper we introduce the spin degrees of freedom into the problem and determine the statistical properties as well as the statistical interaction for particles in two situations. We consider first the Hubbard model in the space of one dimension and with an infinite on-site Coulomb repulsion. In this limit, we show rigorously that the charge excitations (holons) obey the Fermi-Dirac distribution, whereas the spin excitations (spinons) obey the Bose-Einstein distribution. The boson part leads to the correct entropy ($k_B \ln 2$ per carrier) in the Mott insulating limit. As a second example, we express the statistical spin liquid partition function \cite{s1} with the help of the statistical interaction concept. These two examples represent nontrivial generalizations of Haldane's fractional statistics to particles with internal symmetry such as spin. In both cases an explicit form of the multicomponent statistical interaction is required. We show that the statistical distributions evolve to some novel functions when the interaction between the particles diverges. For the spin liquid case the form of the distribution functions are also presented for intermediate values of the dynamical interaction. In both cases the nonstandard statistics is due to the interaction between the particles. 
\section{Statistical interaction for the Hubbard model in one dimension} \subsection{Thermodynamic limit for the Bethe ansatz equations \mbox{$(U \rightarrow \infty)$} } We consider first the one-dimensional system of particles with a contact interaction. One of the simplest models of interacting spin one-half particles was introduced by Hubbard \cite{hub}. The Hamiltonian in this case is \begin{equation} H = - t \sum_{<i,j> \sigma} a_{i \sigma}^{+} a_{j \sigma} + U \sum_{i} n_{i \uparrow} n_{i \downarrow}, \end{equation} where $t$ is the hopping integral between the nearest-neighboring pairs $<i,j>$ of lattice sites, and $U$ is the on-site Coulomb repulsion when two particles with spin up and down meet on the same lattice site. We set $t=1$. This model was solved in one dimension by Lieb and Wu \cite{lieb}. The solution is given by the set of the Bethe-ansatz equations determining the rapidities $\{k_i\}$, $\{ \Lambda_{\alpha}\}$; i.e., \begin{equation} \renewcommand{\arraystretch}{1.5} \everymath={\displaystyle} \left\{ \begin{array}{c} \frac{2 \pi}{L} I_{j} = k_{j} - \frac{1}{L} \sum_{\beta =1}^{M} \Theta ( 2 \sin k_{j} - 2 \Lambda_{\beta} ) \\ \frac{2 \pi}{L} J_{\alpha} = \Lambda_{\alpha} - \frac{1}{L} \sum_{j = 1}^{N} \Theta ( 2 \Lambda_{\alpha} - 2 \sin k_{j} ) + \frac{1}{L} \sum_{\beta =1 }^{M} \Theta( \Lambda _{\alpha} - \Lambda_{\beta}) - \sum_{\beta =1}^{M} \Lambda_{\beta} \delta_{\Lambda_{\alpha},\Lambda_{\beta}} , \end{array} \right. \end{equation} where $N$ is the total number of particles in the system, $M$ is the number of particles with spin down, $L$ is the length of the chain, $j=1,...,N$, and $\alpha=1,...,M$. $I_{j}$ is an integer (half-odd integer) for $M$ even (odd), and $J_{\alpha}$ is an integer (half-odd integer) for $N-M$ odd (even). The phase shift function $\Theta (p)$ is defined by \begin{equation} \Theta (p) = - 2 \tan ^{-1} \left( \frac{2 p}{U} \right).
\end{equation} The second of equations (2) has been written in a form better suited to our purposes. The basis in the Hilbert space which diagonalizes the Hamiltonian (1) is called the holon-spinon representation. The Bethe-ansatz equations can be rewritten in such a way that all dynamical interactions are transmuted into the statistical interaction \cite{be}. We determine explicitly the statistical interaction in the case of the Hubbard model. Our method is a straightforward generalization of the Bernard and Wu result \cite{be} and is valid in the case of infinite interaction only. In this limit, the charge and the spin excitations decouple and there are no bound states in the system \cite{ogata}. In other words, all bound states in the upper Hubbard subband are pushed out from the physical many-particle Hilbert space. In the large $U$ limit the Bethe ansatz equations read \cite{sok} \begin{equation} \renewcommand{\arraystretch}{1.5} \everymath={\displaystyle} \left\{ \begin{array}{c} \frac{2 \pi}{L} I_{j} = k_{j} + \frac{1}{L} \sum_{\beta =1}^{M} \Theta ( 2 \Lambda_{\beta} ) \\ \frac{2 \pi}{L} J_{\alpha} = \Lambda_{\alpha} - \frac{N}{L} \Theta ( 2 \Lambda_{\alpha} ) + \frac{1}{L} \sum_{\beta =1 }^{M} \Theta( \Lambda _{\alpha} - \Lambda_{\beta}) - \sum_{\beta =1}^{M} \Lambda_{\beta} \delta_{\Lambda_{\alpha}, \Lambda_{\beta}} . \end{array} \right. \end{equation} We rewrite these equations in the thermodynamic limit, i.e. for $N\rightarrow \infty$, $L\rightarrow \infty$, and $N/L = const$. We divide the range of the momenta $k$'s and $\Lambda$'s into intervals of equal sizes $\Delta k$ and $\Delta \Lambda$, and label each interval by its midpoint $k_i$ or $\Lambda_{\alpha}$, respectively. We treat the particles with the momenta in the $i$-th or the $\alpha$-th interval as belonging to the $i$-th or the $\alpha$-th group.
As usual, the numbers of available bare single-particle states are $ G_{i}^{0} = L \Delta k/2 \pi $ and $ G_{\alpha}^{0} = L \Delta \Lambda/ 2 \pi. $ These numbers follow from the decomposition of the Bethe-ansatz wave function in the $U \rightarrow \infty$ limit \cite{ogata}. Next, we introduce the distribution functions (the densities of states) for the roots $k_{j}$ and $\Lambda_{\alpha}$ of the Bethe-ansatz equations (4). Namely, we define $ L\rho (k_i) \Delta k \equiv N_{i}^{c} $ as the number of $k $ values in the interval $ [ k_i - \Delta k /2, k_i + \Delta k/2 ]$, and $ L\sigma(\Lambda_{\beta})\Delta\Lambda \equiv N_{\beta}^{s} $ as the number of $\Lambda$ values in the interval $[ \Lambda_{\beta} - \Delta \Lambda /2, \Lambda_{ \beta} + \Delta \Lambda /2 ]$. Hence, the two quantities $ 2 \pi \rho (k_i ) = N_{i}^{c}/G_{i}^{0} \equiv n_{i}^{c} $, and $ 2 \pi \sigma (\Lambda_{\beta}) = N_{\beta}^{s}/G_{\beta}^{0} \equiv n_{\beta}^{s} $, are respectively the occupation-number distributions for the holon and the spinon excitations in the Hubbard chain. In effect, the Bethe ansatz equations in the intervals $\Delta k$ and $\Delta \Lambda$ take the form \begin{equation} \renewcommand{\arraystretch}{1.5} \everymath={\displaystyle} \left\{ \begin{array}{c} \frac{2 \pi}{L} I(k_{j}) = k_{j} + \sum_{\beta} \Theta ( 2 \Lambda_{\beta} ) \sigma(\Lambda_{\beta}) \Delta \Lambda \\ \frac{2 \pi}{L} J(\Lambda_{\alpha}) = \Lambda_{\alpha} - \frac{N}{L} \Theta ( 2 \Lambda_{\alpha} ) + \sum_{\beta } \Theta( \Lambda _{\alpha} - \Lambda_{\beta}) \sigma(\Lambda_{\beta}) \Delta \Lambda\\ - L \sum_{\beta} \Lambda_{\beta} \delta_{\Lambda_{\alpha},\Lambda_{\beta}} \sigma( \Lambda_{\beta}) \Delta \Lambda. \end{array} \right. \end{equation} The function $\rho(k)$ does not appear explicitly in the large $U$ limit.
The numbers of accessible states in each of the $i$-th and the $\alpha$-th groups are \begin{equation} \tilde{D}_{i}^{c} (\{N_{i}^{c}\},\{N_{\beta}^{s}\}) = I(k_i + \Delta k /2) - I (k_i - \Delta k /2), \end{equation} \begin{equation} \tilde{D}_{\alpha}^{s} (\{N_{i}^{c}\},\{N_{\beta}^{s}\})= J( \Lambda _{\alpha} + \Delta \Lambda /2 ) - J (\Lambda _{\alpha} - \Delta \Lambda /2). \end{equation} Using the continuous form (5) of the Bethe ansatz equations we find that $ \tilde{D}_{i}^{c} = L \rho _{t}^{c}(k_{i}) \Delta k $ and $ \tilde{D}_{\alpha}^{s} = L \rho _{t}^{s} (\Lambda_{\alpha}) \Delta \Lambda, $ where in the thermodynamic limit ($\Delta k \rightarrow 0$ , $\Delta \Lambda \rightarrow 0$) we have respectively the total densities of states for charge and spin excitations \begin{equation} \rho_{t}^{c} (k) = \frac{1}{2 \pi}, \end{equation} and \cite{got} \begin{equation} \rho_{t}^{s} (\Lambda) = \frac{1}{2 \pi} - \frac{N}{2 \pi L} \frac{\partial \Theta(2 \Lambda)}{\partial \Lambda} + \frac{1}{2 \pi}\int d\Lambda' \sigma(\Lambda ') \frac{\partial \Theta( \Lambda - \Lambda ')}{\partial \Lambda}- \int d \Lambda ' \sigma( \Lambda ') \Lambda ' \frac{\partial \delta( \Lambda - \Lambda ')}{\partial \Lambda}. \end{equation} Substituting the form (3) for $\Theta (p)$ into the derivative $ \partial \Theta / \partial \Lambda$ one easily finds that in the $U\rightarrow \infty $ limit \begin{equation} \rho^{s}_{t}(\Lambda) = \frac{1}{2 \pi} + \sigma ( \Lambda). \end{equation} To derive (10) we utilized the fact that $\sigma (\Lambda)$ is a flat function of $\Lambda$ in the large $U$ limit \cite{car}. We see that the numbers of accessible states for the holons and the spinons in the $U\rightarrow \infty$ limit are independent of each other. This result, as we show in the following, leads to the decomposition of the partition function into the holon and the spinon parts.
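The step from Eq. (9) to Eq. (10) rests on the suppression of the phase-shift kernel at large $U$: from Eq. (3), $\partial \Theta(2\Lambda)/\partial \Lambda = -8U/(U^2 + 16\Lambda^2) \sim 1/U$. The following Python sketch (ours; the rapidity value is arbitrary) checks this numerically:

```python
import math

def theta(p, U):
    """Phase-shift function of Eq. (3): Theta(p) = -2 arctan(2p/U)."""
    return -2.0 * math.atan(2.0 * p / U)

def dtheta_dlam(lam, U, h=1e-6):
    """Numerical derivative of Theta(2*Lambda) with respect to Lambda."""
    return (theta(2.0 * (lam + h), U) - theta(2.0 * (lam - h), U)) / (2.0 * h)

# The kernel is suppressed as 1/U, so the derivative and integral terms
# in Eq. (9) drop out as U -> infinity, leaving Eq. (10):
# rho_t^s(Lambda) = 1/(2 pi) + sigma(Lambda).
for U in (10.0, 100.0, 1000.0):
    print(U, dtheta_dlam(0.3, U))
```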
\subsection{Statistical interaction for the Hubbard chain} We define the statistical interaction and the total number of states for spin one-half particles. For that purpose we work in the basis in which the Hamiltonian is diagonal, i.e., we choose the holon-spinon representation and observe that the dimensions $D_{i}^{c}$ and $D_{\alpha}^{s}$ of the one-particle Hilbert spaces for the particle in the $i$-th or the $\alpha$-th groups are functionals of both $\{N_{i}^{c}\}$ and $\{N_{\alpha}^{s}\}$, i.e. $ D_{i}^{c} = D_{i}^{c} (\{N_{i}^{c}\},\{N_{\beta}^{s}\}), $ and $ D_{\alpha}^{s}=D_{\alpha}^{s}(\{N_{i}^{c}\},\{N_{\beta}^{s}\}). $ Namely, starting from the Haldane definition \cite{hal} of the change of the number of the accessible states and adapting it to the present situation we obtain \begin{equation} \Delta D_{i}^{c} = - \sum_{j} g_{ij}^{cc} \Delta N_{j}^{c} - \sum_{\alpha} g_{i\alpha}^{cs} \Delta N_{\alpha}^{s}, \end{equation} \begin{equation} \Delta D_{\alpha}^{s} = - \sum_{j} g_{\alpha j}^{sc} \Delta N_{j}^{c} - \sum_{\beta} g_{\alpha \beta}^{ss} \Delta N_{\beta}^{s}, \end{equation} where the four $g$ parameters are called the {\bf statistical interactions}. These difference equations can be transformed to the following differential forms: \begin{equation} (-g_{ij}^{cc})^{-1} \frac{\partial D_{i}^{c}}{\partial N_{j}^{c}} + (-g_{i \alpha}^{cs})^{-1} \frac{\partial D_{i}^{c}}{\partial N_{\alpha}^{s}} =2, \end{equation} \begin{equation} (-g_{\alpha i }^{sc})^{-1} \frac{\partial D_{\alpha}^{s}}{\partial N_{i}^{c}} + (-g_{\alpha \beta}^{ss})^{-1} \frac{\partial D_{\alpha}^{s}}{\partial N_{\beta}^{s}} =2. \end{equation} This set of equations generalizes Haldane's equations for the number of accessible orbitals of a species $\alpha$, originally formulated for particles without internal symmetry.
As before \cite{hal}, statistical interactions $\{g\}$ do not depend on the occupations ${N_{\alpha}^s}$ and ${N_i^c}$, since otherwise the thermodynamic limit would not be well defined. The factors $2$ on the r.h.s. of (13) and (14) are irrelevant because they can be incorporated into the $g$ parameters. Then the solutions of equations (13) and (14) are \begin{equation} D_{i}^{c} (\{N_{i}^{c}\},\{N_{\beta}^{s}\}) = G_{i}^{0} - \sum_{j} g_{ij}^{cc} N_{j}^{c} - \sum_{\alpha} g_{i\alpha}^{cs}N_{\alpha}^{s}, \end{equation} \begin{equation} D_{\alpha}^{s}(\{N_{i}^{c}\},\{N_{\beta}^{s}\}) = G_{\alpha}^{0} - \sum_{j} g_{\alpha j}^{sc} N_{j}^{c} - \sum_{\beta} g_{\alpha \beta}^{ss} N_{\beta}^{s}. \end{equation} One should note that these solutions are well defined also in the boson limit, since then the corresponding $g$ parameter(s) vanish. The relations $ D_{i}^{c}(\{0\},\{0\}) = G_{i}^{0} $ and $ D_{\alpha}^{s}(\{0\},\{0\}) = G_{\alpha}^{0} $ express the boundary conditions for this problem; the values $G_{\alpha}^{0}$ and $G_{i}^{0}$ represent the maximal numbers of available one-particle states in the situation when the holon and the spinon bands are empty. Additionally, the total number of microscopic configurations with the numbers $\{N_{j}^{c}\}$ and $\{N_{\beta}^{s}\}$ of holon and spinon excitations is given by \begin{equation} \Omega = \prod_{i=1}^{N} \frac{ (D_{i}^{c}+N_{i}^{c} -1)!}{(N_{i}^{c})! (D_{i}^{c} -1)!} \prod_{\alpha=1}^{M} \frac{(D_{\alpha}^{s} + N_{\alpha}^{s} -1)!}{(N_{\alpha}^{s})!(D_{\alpha}^{s}- 1)!}. \end{equation} In this expression the two products are in general interconnected via the relations (15) and (16). Each of the factors is defined in the same manner as in Ref.\cite{hal}.
In the fermionic bookkeeping for $I_j$ and $J_{\alpha}$ the same $\Omega$ is obtained with the number of accessible states in the $i$-th and $\alpha$-th groups taken to be \cite{be} \begin{equation} \tilde{D}_{i}^{c} (\{N_{i}^{c}\},\{N_{\beta}^{s}\})= D_{i}^{c} (\{N_{i}^{c}\},\{N_{\beta}^{s}\}) +N_{i}^{c} -1 = G_{i}^{0}+ N_{i}^{c} -1 - \sum_{j} g_{ij}^{cc} N_{j}^{c} - \sum_{\alpha} g_{i\alpha}^{cs}N_{\alpha}^{s}, \end{equation} \begin{equation} \tilde{D}_{\alpha}^{s} (\{N_{i}^{c}\},\{N_{\beta}^{s}\})= D_{\alpha}^{s}(\{N_{i}^{c}\},\{N_{\beta}^{s}\}) +N_{\alpha}^{s} -1= G_{\alpha}^{0} +N_{\alpha}^{s} -1- \sum_{j} g_{\alpha j}^{sc} N_{j}^{c} - \sum_{\beta} g_{\alpha \beta}^{ss} N_{\beta}^{s}. \end{equation} Rewriting these equations for each of the intervals $\Delta k$ and $\Delta \Lambda$ we easily find that in the $\Delta k \rightarrow 0$ and $\Delta \Lambda \rightarrow 0$ limits these four types of statistical interactions reduce to the following form \begin{equation} g^{cc}(k,k') = \delta(k-k'), \end{equation} \begin{equation} g^{cs}(k,\Lambda)=g^{sc}(\Lambda,k)= g^{ss}(\Lambda, \Lambda ') = 0. \end{equation} Thus, the vanishing $g$ functions in (21) simplify the expression (17) for the total number of available configurations, which is then \begin{equation} \Omega = \prod_{i=1}^{N} \frac{ (G_{i}^{0})!}{(N_{i}^{c})! (G_{i}^{0} -N_{i}^{c})!} \prod_{\alpha=1}^{M} \frac{(G_{\alpha}^{0} + N_{\alpha}^{s} -1)!}{(N_{\alpha}^{s})!(G_{\alpha}^{0}- 1)!}. \end{equation} The statistical weight $\Omega$ factorizes into the holon $(\Omega^c)$ and the spinon $(\Omega^s)$ parts. This, once again, expresses the fact that the spin and the charge degrees of freedom decouple in the $U\rightarrow \infty$ limit \cite{ogata}. 
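The factorized weight (22) is elementary combinatorics: a binomial count for the fermionic holons and a "stars and bars" count for the bosonic spinons. A minimal Python check (ours; the group sizes are illustrative only):

```python
from math import comb

def omega_holon(G, N):
    """Fermionic count of Eq. (22): N holons in G orbitals,
    at most one per orbital -> binomial coefficient C(G, N)."""
    return comb(G, N)

def omega_spinon(G, N):
    """Bosonic count of Eq. (22): N spinons in G orbitals with
    unlimited occupancy ("stars and bars"): (G+N-1)! / (N! (G-1)!)."""
    return comb(G + N - 1, N)

# Eq. (22): the statistical weight factorizes, Omega = Omega^c * Omega^s,
# reflecting spin-charge separation in the U -> infinity limit.
print(omega_holon(4, 2) * omega_spinon(4, 2))  # 6 * 10 = 60
```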
As a consequence, the entropy of the system is a sum of the two parts $S=S^c+S^s= k_B \ln \Omega^c + k_B \ln \Omega^s$, where the corresponding expressions calculated per particle are \begin{equation} S^c = - k_B \frac{1}{N_a} \sum_{i=1}^{N} \left[ n_i^c \ln n_i^c + (1-n_i^c) \ln (1-n_i^c) \right], \end{equation} and \begin{equation} S^s = - k_B \frac{1}{N_a} \sum_{\alpha=1}^{M} \left[ n_{\alpha}^s \ln n_{\alpha}^s - (1+n_{\alpha}^s) \ln (1+n_{\alpha}^s) \right], \end{equation} where $N_a$ is the number of atomic sites. We recognize immediately that the holon contribution to the system entropy coincides with that for spinless fermions, whereas the spinon contribution reduces to that of localized spin moments ($k_B \ln 2$ per site) in the Mott-insulator limit and in the spin-disordered phase, i.e. when $n_{\alpha}^s =1$ and $M=N_a /2$. In general, one may say that $S^c$ provides the entropy of charge excitations (and vanishes in the Mott insulating limit $n_i^c=1$), whereas $S^s$ represents the spin part of the excitation spectrum. This demonstrates again that the holon (charge) excitations are fermions and the spinon (spin) excitations are bosons. In the $U \rightarrow \infty$ limit considered here the Heisenberg coupling constant $(J=4t^2/U)$ vanishes and the spin wave excitations do not interact with each other \cite{mat}. In other words, they are dispersionless bosons. Also, the charge excitations are spinless fermions. The total entropy of the system reduces in the Mott insulating spin-disordered limit to $S=k_B \ln 2$. This value is different from that for the Fermi liquid in the high-temperature limit, which is $2 k_B \ln 2$. This difference confirms on a statistical ground the inapplicability of the Fermi liquid concept to the Hubbard model in one dimension. \section{Statistical interaction for the spin liquid} In this section we derive the statistical interaction for the so-called statistical spin liquid.
This concept was introduced in \cite{s1} to describe the thermodynamic properties of strongly interacting electrons. The basic assumption in this approach is to exclude the doubly occupied configurations of electrons with spin up and down not only in real space but also in reciprocal space (with given ${\bf k}$). This assumption leads to a novel class of universality for electron liquids. Its thermodynamics in the normal, magnetic and superconducting states was examined in a series of papers \cite{s1,nasze}. A justification of this approach has its origin in the concept of a singularity in the forward scattering amplitude due to interparticle interactions. Namely, it was noted by Anderson \cite{and} and by Khveshchenko \cite{khv} that in two spatial dimensions this amplitude may be divergent either due to the Hubbard on-site repulsion, or due to the long-distance current-current interaction mediated by the transverse gauge fields. With the assumption that in those situations the wave vector is still a good quantum number, one can write down the phenomenological Hamiltonian describing such a liquid in the form \begin{equation} H=\sum_{\bf k \sigma} (\epsilon_{\bf k} - \sigma h)N_{\bf k \sigma} +U_s \sum_{\bf k} N_{\bf k \uparrow}N_{\bf k \downarrow}. \end{equation} In this model $\epsilon_{\bf k}$ is the dispersion relation for the particles with the wave vector ${\bf k}$ moving in the applied external magnetic field $h$, $N_{\bf k \sigma}$ is the number of electrons in the state $|\bf k \sigma>$, and $\sigma = \pm 1$ is the projected spin direction. The number of double occupancies in a given ${\bf k}$ state is $N_{{\bf k} d}=N_{\bf k \uparrow}N_{\bf k \downarrow}$. The nonvanishing $N_{{\bf k} d}$ causes an increase of the system energy by $U_s > 0 $ for each doubly occupied ${\bf k}$ state. Finally, we will put $U_s \rightarrow \infty$ because this model is to represent the situation with the singular forward scattering amplitude.
It turns out that this exactly solvable model \cite{s2} belongs to the class of models with Haldane's fractional statistics, as shown below. In this case the statistical interaction is proportional to $\delta_{\bf k k'}$ in ${\bf k}$ space, but is a nondiagonal matrix in the extended spin space. To prove this we define the total size of the Hilbert space of the many-particle states determined by the number of physically inequivalent configurations \begin{equation} \Omega = \prod_{\bf k} \frac{ ( D_{\bf k \uparrow} + N_{\bf k \uparrow} - N_{ {\bf k} d} -1 )! }{ (N_ {\bf k \uparrow} - N_{ {\bf k} d } )!(D_{ \bf k \uparrow } -1)! } \frac{ ( D_{ \bf k \downarrow } + N_{ \bf k \downarrow } - N_{ {\bf k} d } -1 )! }{ ( N_{ \bf k \downarrow } - N_{ {\bf k} d })!( D_{ \bf k \downarrow }-1)! } \frac{ ( D_{ {\bf k} d } + N_{ {\bf k } d } -1 )! }{ ( N_{ {\bf k} d })!( D_{ {\bf k} d }-1)! }. \end{equation} Due to the local nature of the interaction in ${\bf k}$ space we must treat the singly occupied states separately from those with double occupancy in reciprocal space. Then, the statistical weight $\Omega$ expresses the possible ways of distributing $N_{\bf k \sigma} - N_{{\bf k} d}$ quasiparticles over $D_{\bf k \sigma}$ states and $N_{{\bf k} d }$ quasiparticles over the $D_{{\bf k} d}$ states. In general, the dimension of the one-particle Hilbert space for the singly occupied $(D_{\bf k \sigma})$ and the doubly occupied $(D_{{\bf k}d})$ states is a function of the numbers of other quasiparticles $\{{N_{\bf k \sigma}}\}$ and $\{{N_{{\bf k}d}}\}$ \cite{s1,s2}, i.e., \begin{equation} D_{{\bf k} \alpha}(\{N_{{\bf k} \beta}\}) = G_{{\bf k}}^{0} - \sum_{\beta} g_{\alpha, \beta} ({\bf k},{\bf k}')(N_{{\bf k}' \beta} -\delta_{\alpha \beta} \delta_{{\bf k} {\bf k}'}), \end{equation} where $\alpha$ and $\beta$ label the configurations ${\uparrow,\downarrow,d}$; these states define the extended spin space.
Note that, in contrast to Eqs. (15) and (16), we define here the boundary conditions via the relations $ D_{{\bf k} \alpha}(\{ N_{{\bf k} \beta} = \delta_{\alpha \beta} \delta_{{\bf k k'}}\}) =G^{0}_{{\bf k}}$, i.e. the maximal dimension of the single-particle Hilbert space is defined for an occupied configuration in each category, not for an empty one. These new conditions are equivalent, in the thermodynamic limit, to the form appearing in Eqs. (15) and (16). Since the Hamiltonian (25) does not mix different momenta, we find the general solution for $g_{\alpha \beta}({\bf k},{\bf k}')$ in the form \begin{equation} g_{\alpha \beta}({\bf k},{\bf k}')=\delta_{{\bf k}{\bf k}'} \otimes g_{\alpha \beta}. \end{equation} Hence, the statistical interaction is diagonal in ${\bf k}$ space for this model. Next, to get the exact solution of the Hamiltonian (25) we choose \begin{equation} \renewcommand{\arraystretch}{1.5} \everymath={\displaystyle} g_{\alpha \beta}= \left( \begin{array}{lcr} 1 &1&-1\\ 0 &1& 0 \\ 0&0& 1 \ \end{array} \right), \end{equation} and, substituting (29) and (28) into (27), find that the total size of the many-particle Hilbert space is given by \begin{equation} \Omega = \prod_{\bf k} \frac{(G_{\bf k}^{0})!}{ (N_{\bf k\uparrow}-N_{{\bf k}d})! (N_{\bf k \downarrow} - N_{{\bf k}d})! (N_{{\bf k}d})! (G_{\bf k}^{0} - N_{\bf k \uparrow} - N_{\bf k \downarrow} +N_{{\bf k} d})!}. \end{equation} This result is exactly the same as that obtained in Ref.\cite{s1} (cf. Appendix B). Therefore, we conclude that this model also belongs to the class of models with Haldane's statistics. In this case, the change in the distribution functions is not due to the phase shift between different momenta but rather due to the mutual (dynamic) interaction between quasiparticles with the same ${\bf k}$ but different spin.
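Equation (30) is just the multinomial coefficient for partitioning the $G_{\bf k}^{0}$ states of a given cell among the four configurations (single up, single down, doubly occupied, empty). A short Python check (ours; the occupation numbers are illustrative):

```python
from math import factorial

def omega_k(G, Nup, Ndn, Nd):
    """Per-k statistical weight of Eq. (30): distribute G k-states among
    four configurations -- up-only, down-only, doubly occupied, empty."""
    Nempty = G - Nup - Ndn + Nd
    return factorial(G) // (
        factorial(Nup - Nd) * factorial(Ndn - Nd)
        * factorial(Nd) * factorial(Nempty)
    )

# Sanity check: labelling each of the G states as one of the four
# categories is a 4-way partition, so Eq. (30) must be the corresponding
# multinomial coefficient.
print(omega_k(G=5, Nup=3, Ndn=2, Nd=1))  # 5!/(2! 1! 1! 1!) = 60
```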
The interaction pushes some of the states upward in energy, leading to the following form of the momentum distribution functions \begin{equation} \frac{ N_{\bf k \sigma}-N_{{\bf k} d} }{ G_{\bf k}^{0} } = \frac{ e^{\beta U_s} e^{\beta (\epsilon_{\bf k}-\mu)} \cosh (\beta h)} {1+e^{\beta U_s}e^{\beta (\epsilon_{\bf k}-\mu)} [e^{\beta (\epsilon_{\bf k}-\mu)}+2 \cosh(\beta h)]} [1 +\sigma \tanh(\beta h)], \end{equation} \begin{equation} \frac{ N_{{\bf k} d} }{ G_{\bf k}^{0} }=\frac{1} {1+e^{\beta U_s}e^{\beta (\epsilon_{\bf k}-\mu)} [e^{\beta (\epsilon_{\bf k}-\mu)}+2 \cosh(\beta h)]}, \end{equation} which are easily obtained by minimizing the thermodynamic potential with respect to $N_{\bf k \sigma}$ and $N_{{\bf k} d}$ separately \cite{s2}. It is easy to show that those distributions evolve from the Fermi-Dirac function to the statistical spin liquid distribution when $U_s$ changes from zero to infinity \cite{s1}. The limit $U_s\rightarrow \infty$ represents the physical situation in which $N_{{\bf k} d} \equiv0$. In other words, there are no double occupancies in ${\bf k}$ space. All states are singly occupied by the quasiparticles with either spin up or down, or empty. In this limit, the statistical interaction (28) reduces to the $2\times 2$ matrix form \begin{equation} \renewcommand{\arraystretch}{1.5} \everymath={\displaystyle} g_{\sigma \sigma'}({\bf k},{\bf k}')= \delta_{{\bf k},{\bf k}'}\otimes \left( \begin{array}{cc} 1 & 1 \\ 0 & 1 \ \end{array} \right) {}. \end{equation} Then, the statistical weight is \cite{s1} \begin{equation} \Omega=\prod_{\bf k} \frac{(G_{\bf k}^{0})!}{ (N_{\bf k \sigma})!(G_{\bf k}^{0} -N_{\bf k \sigma} -N_{\bf k \bar{\sigma}})!}. \end{equation} Such a liquid is called the {\bf statistical spin liquid}. This is a novel class of quantum liquids, similar, in some respects, to the Bethe-Luttinger liquid discussed above.
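The two limits quoted above can be verified directly from Eqs. (31) and (32). The Python sketch below (ours; parameter values are arbitrary) evaluates the total occupation $N_{\bf k \sigma}/G_{\bf k}^{0}$, i.e. the sum of the singly and doubly occupied parts, and compares it with the Fermi-Dirac form at $U_s=0$ and with the spin-liquid form $1/(e^{\beta(\epsilon_{\bf k}-\mu)}+2)$ at large $U_s$:

```python
import math

def n_sigma(eps, mu, beta, Us, h, sigma):
    """Total occupation N_{k,sigma}/G_k from Eqs. (31)-(32):
    the singly occupied part plus the double-occupancy part."""
    x = math.exp(beta * (eps - mu))
    denom = 1.0 + math.exp(beta * Us) * x * (x + 2.0 * math.cosh(beta * h))
    single = (math.exp(beta * Us) * x * math.cosh(beta * h) / denom
              * (1.0 + sigma * math.tanh(beta * h)))
    double = 1.0 / denom
    return single + double

beta, eps, mu = 2.0, 0.3, 0.0

# U_s = 0 must recover the Fermi-Dirac function ...
fd = 1.0 / (math.exp(beta * (eps - mu)) + 1.0)
print(n_sigma(eps, mu, beta, 0.0, 0.0, +1), fd)

# ... while U_s -> infinity gives the spin-liquid form 1/(e^{beta(eps-mu)} + 2).
sl = 1.0 / (math.exp(beta * (eps - mu)) + 2.0)
print(n_sigma(eps, mu, beta, 50.0, 0.0, +1), sl)
```

At $U_s=0$ and $h=0$ the denominator is $(1+e^{\beta(\epsilon-\mu)})^2$, and the sum of the two parts collapses algebraically to the Fermi-Dirac function, consistent with the evolution described in the text.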
For example, because of the mutual interaction between spin up and down particles, half of the total number of states ($2N_a$) are pushed out of the physical space in the $U_s\rightarrow \infty$ limit. Therefore, the entropy of the statistical spin liquid in the high-temperature limit is the same as in the case of the Hubbard chain with the infinite interaction, because the entropy of the system in this temperature limit is given in terms of the degeneracy of the state only \cite{ch}. In particular, the entropy in the statistical spin liquid for $N=N_a$ equals $k_B N \ln 2$, which is the correct value for a Mott insulator. Also, the high-temperature value of the thermopower is the same for both liquids \cite{ch}. It was also shown \cite{s3} that the magnetization of the statistical spin liquid has the same form as that of the Hubbard chain with the infinite repulsion, i.e. that for localized moments \cite{sok}. However, a direct consequence of the statistical interaction is also a breakdown of the Luttinger theorem: the volume enclosed by the Fermi surface is twice that for the Fermi liquid. This arises because of the differences in the microscopic character of the single-particle excitations in these two liquids. \section{Conclusions} In this paper we considered the statistical properties of two model systems: the Hubbard chain with infinite repulsion and the statistical spin liquid. We determined the form of the Haldane statistical interaction in each case. In the one-dimensional Hubbard model a novel distribution function emerges due to the presence of the phase shift between pairs of states with different rapidities $\{\Lambda_{\alpha}\}$ and $\{k_i\}$. In the $U\rightarrow \infty$ limit, when all bound states are excluded, the charge excitations (holons) behave statistically as fermions, and the spin excitations (spinons) behave as bosons.
The holons have a simple energy dispersion $\epsilon_k$ coinciding with the bare band energy, whereas the spinons are dispersionless. An equivalent alternative approach \cite{car2} is based on the fermionic representation of both the holon and the spinon degrees of freedom; in that approach the spinons and the holons acquire complicated forms of the effective dispersion relation. The latter approach allows for a generalization of the treatment of the Hubbard chain to finite $U$, a limit unavailable to our present analysis. Nonetheless, our analysis is also applicable to other models in which the separation into the charge and the spin degrees of freedom occurs (cf. the Kondo problem \cite{andr}). In the statistical spin liquid case the mutual interaction between quasiparticles with the same ${\bf k}$ leads to the exclusion of the doubly occupied configurations in reciprocal space. In that case the statistical interaction is diagonal in ${\bf k}$ but has a nondiagonal structure in spin space, and the statistical distribution changes with growing interaction from the Fermi-Dirac form to the spin liquid form \cite{s1}. It is interesting that those two models of particles with internal symmetry can be classified as models with fractional statistics in the Haldane sense. In contrast to the case of scalar particles \cite{be}, the distribution functions in the present situation take the form of either the holon-spinon or the spin liquid distributions. Those possibilities arise only when the particles have some internal symmetry (spin, colour). One may also say that in those cases the statistical interactions have a tensorial character in the space in which the Hamiltonian is diagonal.\\ \vspace{1cm} The work was supported by the Committee of Scientific Research (KBN) of Poland, Grants Nos. 2 P302 093 05 and 2 P302 171 06. The authors are also grateful to the Midwest Superconductivity Consortium (MISCON) of U.S.A. for the support through Grant No.
DE-FG 02-90 ER 45427.
\section{HEAVY ELEMENT ABUNDANCE DETERMINATION} \subsection{Ground-based optical observations} To derive element abundances from the optical spectra, we have followed the procedure detailed in ITL94 and ITL97. We adopt a two-zone photoionized H II region model (Stasi\'nska 1990): a high-ionization zone with temperature $T_e$(O III), and a low-ionization zone with temperature $T_e$(O II). We have determined $T_e$(O III) from the [O III]$\lambda$4363/($\lambda$4959+$\lambda$5007) ratio using a five-level atom model (Aller 1984) with atomic data from Mendoza (1983). That temperature is used for the derivation of the He$^+$, He$^{2+}$, O$^{2+}$, Ne$^{2+}$ and Ar$^{3+}$ ionic abundances. To derive $T_e$(O II), we have used the relation between $T_e$(O II) and $T_e$(O III) (ITL94), based on a fit to the photoionization models of Stasi\'nska (1990). The temperature $T_e$(O II) is used to derive the O$^+$, N$^+$ and Fe$^+$ ionic abundances. For Ar$^{2+}$ and S$^{2+}$ we have used an electron temperature intermediate between $T_e$(O III) and $T_e$(O II) following the prescriptions of Garnett (1992). The electron density $N_e$(S II) is derived from the [S II] $\lambda$6717/$\lambda$6731 ratio. Total element abundances are derived after correction for unseen stages of ionization as described in ITL94 and TIL95. In the spectra of several BCGs, a strong nebular He II $\lambda$4686 emission line is present, implying the presence of a non-negligible amount of O$^{3+}$. Its abundance is derived from the relation: \begin{equation} {\rm O}^{3+}=\frac{{\rm He}^{2+}}{{\rm He}^+}({\rm O}^++{\rm O}^{2+}). \label{eq:O3+} \end{equation} Then, the total oxygen abundance is equal to \begin{equation} {\rm O}={\rm O}^++{\rm O}^{2+}+{\rm O}^{3+}. \label{eq:O} \end{equation} The electron temperatures, number densities, ionic and total element abundances along with ionization correction factors for each galaxy in our sample are given in ITL94, TIL95, ITL97, and Izotov \& Thuan (1998b). 
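The bookkeeping of Eqs. (1) and (2) is straightforward. The Python sketch below (ours) illustrates it with hypothetical ionic abundances relative to H$^+$, not values from Table 1:

```python
def oxygen_abundance(O_plus, O_2plus, He_2plus, He_plus):
    """Total O/H following Eqs. (1)-(2): the unseen O3+ stage is
    estimated from the He2+/He+ ionic ratio. All inputs here are
    hypothetical ionic abundances, chosen only to show the arithmetic."""
    O_3plus = (He_2plus / He_plus) * (O_plus + O_2plus)
    return O_plus + O_2plus + O_3plus

# Example with illustrative numbers: a 2% He2+/He+ ratio raises the
# total O/H by the same 2% over O+ + O2+.
print(oxygen_abundance(O_plus=2.0e-5, O_2plus=6.0e-5,
                       He_2plus=1.6e-3, He_plus=8.0e-2))
```

The correction is non-negligible only when the nebular He II $\lambda$4686 line, and hence He$^{2+}$, is appreciable, which is the situation described in the text.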
Because the electron temperatures $T_e$(O III) in ITL94, TIL95 and ITL97 are calculated using a three-level atom model, while Izotov \& Thuan (1998b) adopt a five-level atom model (Aller 1984), we have recalculated, for consistency, all heavy element abundances for the galaxies in ITL94, TIL95 and ITL97, also using a five-level atom model. This gives slightly higher electron temperatures and slightly lower oxygen abundances (by $\sim$ 0.04 dex). The heavy element abundance ratios are, however, nearly unchanged. Table 1 lists the oxygen abundance and the heavy element to oxygen abundance ratios for all the galaxies in our sample. \subsection{{\sl HST} UV and optical observations} Carbon is one of the elements which provide strong constraints on the chemical evolution of BCGs and the origin of elements. However, it possesses no strong optical emission line. The strongest carbon emission line is the C III] $\lambda$(1906+1909) line in the UV. With the advent of {\sl HST}, much work has been done to determine the carbon abundance in BCGs using FOS spectroscopy (Garnett et al. 1995a, 1997; Kobulnicky et al. 1997; Kobulnicky \& Skillman 1998). The same spectra can also be used to derive the silicon abundance when the Si III] $\lambda$(1883+1892) line is seen. We have decided to reanalyze some of these data, specifically the galaxies with both UV and optical FOS spectra, for the following reasons: (1) Garnett et al. (1995a) used the UV O III] $\lambda$1666 line to derive the C/O abundance ratio. While this method has the advantage of being insensitive to the adopted extinction curve, the O III] $\lambda$1666 and C III] $\lambda$(1906+1909) emission lines being close to each other in wavelength, the weakness of the O III] $\lambda$1666 line makes the C/O abundance ratio determination very uncertain in many cases.
Therefore, we have used the [O III] $\lambda$(4959 + 5007) emission lines in the FOS spectra (these are used instead of higher S/N ground-based spectra to ensure that exactly the same region is observed as in the UV) to determine the C/O abundance ratio. This approach is subject to uncertainties in the extinction curve but, fortunately, at least in some of the galaxies, the extinction is small. (2) Because of the uncertainties in the atomic data, different authors have used different analytical expressions for the determination of carbon and silicon abundances. As this may introduce systematic shifts, we have reanalyzed all spectra in the same manner, using the same expressions, to ensure the homogeneity of our data set. (3) The C/O abundance ratio is dependent on several physical parameters, and in particular on the electron temperature which needs to be determined with a relatively high precision. Because of the poorer signal-to-noise ratio of the FOS spectra in the optical range, Garnett et al. (1997) and Kobulnicky \& Skillman (1998) have used electron temperatures derived from higher signal-to-noise ratio ground-based observations. We use here our own new ground-based observations with very high signal-to-noise ratio to better constrain the electron temperature for some of the most metal-deficient galaxies in our sample. (4) In deriving silicon abundances, Kobulnicky et al. (1997) have not corrected for unseen stages of ionization. We have taken into account here the ionization correction factor for Si, which can be large. We derive the C$^{2+}$ abundance from the relation (Aller 1984): \begin{equation} \frac{{\rm C}^{2+}}{{\rm O}^{2+}}=0.093\exp\left(\frac{4.656}{t}\right) \frac{I({\rm C III]}\lambda1906+\lambda1909)} {I({\rm [O III]}\lambda4959+\lambda5007)}, \label{eq:C2+} \end{equation} where $t$ = $T_e$/10$^4$. Following Garnett et al. (1995a) we adopt for the temperature in Eq. (\ref{eq:C2+}) the $T_e$(O III) value in the O$^{++}$ zone.
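The temperature sensitivity noted above can be made explicit with Eq. (3). In the Python sketch below (ours), the line intensities are hypothetical, and the two temperatures are illustrative values near those discussed for I Zw 18; only the exponential dependence on $t$ matters here:

```python
import math

def c2_over_o2(I_CIII, I_OIII, Te):
    """Ionic ratio C2+/O2+ from Eq. (3):
    0.093 * exp(4.656/t) * I(C III] 1906+1909) / I([O III] 4959+5007),
    with t = Te / 10^4 K."""
    t = Te / 1.0e4
    return 0.093 * math.exp(4.656 / t) * (I_CIII / I_OIII)

# Hypothetical (not measured) line intensities. A ~10% change in Te
# shifts the derived ratio appreciably through the exponential, which
# is why a precise electron temperature is required.
print(c2_over_o2(I_CIII=0.5, I_OIII=6.0, Te=19600.0))
print(c2_over_o2(I_CIII=0.5, I_OIII=6.0, Te=17600.0))
```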
Then \begin{equation} \frac{\rm C}{\rm O}={\rm ICF}\left(\frac{\rm C}{\rm O}\right) \frac{{\rm C}^{2+}}{{\rm O}^{2+}}. \label{eq:CO} \end{equation} The correction factor ICF(C/O) in Eq.(\ref{eq:CO}) for unseen ionization stages of carbon is taken from Garnett et al. (1995a). In the majority of cases, the correction factor is small, i.e. ICF(C/O) = 1.0 -- 1.1. However, it is evident from Eq. (\ref{eq:C2+}) that the C/O abundance ratio is critically dependent on the electron temperature. To derive temperatures as precise as possible, we have used, when available, very high signal-to-noise ground-based observations obtained within apertures nearly matching the circular or square 0\farcs86 FOS aperture. The abundance of silicon is derived following Garnett et al. (1995b) from the relation \begin{equation} \frac{\rm Si}{\rm C}={\rm ICF}\left(\frac{\rm Si}{\rm C}\right) \frac{{\rm Si}^{2+}}{{\rm C}^{2+}}, \label{eq:Si} \end{equation} where \begin{equation} \frac{{\rm Si}^{2+}}{{\rm C}^{2+}}=0.188t^{0.2}\exp\left(\frac{0.08}{t}\right) \frac{I({\rm Si III]}\lambda1883+\lambda1892)} {I({\rm C III]}\lambda1906+\lambda1909)}. \label{eq:Si2+} \end{equation} For the determination of the silicon abundance, we again adopt the temperature $T_e$(O III). The correction factor ICF(Si/C) is given by Garnett et al. (1995b) and is larger than that for carbon. One of the main uncertainties in the determination of the silicon abundance comes from the Si III] $\lambda$1892 emission line. It is not seen in some galaxies (such as in the NW component of I Zw 18), while atomic theory predicts that this line should be present, given the signal-to-noise ratio and intensity of the neighboring Si III] $\lambda$1883 emission line. In those cases, we assume $I$(Si III] $\lambda$1883 + $\lambda$1892) = 1.67 $\times$ $I$(Si III] $\lambda$1883) as expected in the low-density limit. We discuss next in more detail the carbon and silicon abundance determinations in a few BCGs of special interest.
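As a concrete illustration, Eqs. (\ref{eq:C2+})--(\ref{eq:Si2+}) can be evaluated numerically. The sketch below is ours, not part of the original analysis; the line intensities are hypothetical, and the default ICF(C/O) merely stands in for the Garnett et al. (1995a,b) tabulations.

```python
import math

def c_over_o(t, i_ciii, i_oiii, icf_co=1.05):
    """C/O from Eqs. (1)-(2).
    t      -- T_e(O III) in units of 10^4 K
    i_ciii -- I(C III] 1906+1909); i_oiii -- I([O III] 4959+5007)
    icf_co -- ICF(C/O); the 1.05 default is a hypothetical mid-range
              value of the quoted 1.0-1.1 interval."""
    c2_over_o2 = 0.093 * math.exp(4.656 / t) * i_ciii / i_oiii
    return icf_co * c2_over_o2

def si_over_c(t, i_siiii, i_ciii, icf_sic):
    """Si/C from Eqs. (3)-(4); icf_sic must come from the Garnett
    et al. (1995b) tabulation. If Si III] 1892 is lost in absorption,
    use i_siiii = 1.67 * I(Si III] 1883), the low-density limit."""
    si2_over_c2 = (0.188 * t**0.2 * math.exp(0.08 / t)
                   * i_siiii / i_ciii)
    return icf_sic * si2_over_c2

# Hypothetical intensities (relative to H-beta), not from the sample:
t = 1.95                                    # T_e(O III) = 19,500 K
log_co = math.log10(c_over_o(t, 1.0, 6.7))  # approx -0.80
```

Note how the exponential factor makes log C/O move almost linearly with $1/t$, which is why matching apertures for the temperature determination matters so much.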
The heavy element abundance ratios derived from {\sl HST} FOS observations are given in Table 4. \subsubsection{I Zw 18} Both the NW and SE components of this BCG, the most metal-deficient galaxy known, have been observed in the UV and optical with the {\sl HST} FOS by Garnett et al. (1997). Adopting $T_e$(O III) = 19,600 K and 17,200 K respectively for the NW and SE components (Skillman \& Kennicutt 1993) and using the C III] $\lambda$(1906+1909) and [O III] $\lambda$(4959+5007) emission lines, those authors derived very high C/O abundance ratios for both components, --0.63$\pm$0.10 and --0.56$\pm$0.09, as compared to $\sim$ --0.8 in other metal-deficient galaxies and to the predictions of massive stellar nucleosynthesis theory (Weaver \& Woosley 1993; WW95). This led Garnett et al. (1997) to conclude that I Zw 18 is not a young galaxy and has experienced star formation episodes in the past which have enhanced the C/O ratio through the evolution of intermediate-mass stars. As the adopted value for the electron temperature is crucial for the determination of the C/O abundance ratio, we have redetermined it using new data by Izotov et al. (1997b). Those authors have obtained with the Multiple Mirror Telescope (MMT) a spectrum of both NW and SE components during a 3-hour exposure with excellent seeing conditions (FWHM $\sim$ 0.7 arcsec). The much higher signal-to-noise ratio of this spectrum as compared to that by Skillman \& Kennicutt (1993) has resulted in the discovery of a Wolf-Rayet stellar population in the NW component of I Zw 18 (Izotov et al. 1997b). We have extracted from this two-dimensional spectrum two one-dimensional spectra at the location of the brightest parts of the NW and SE components, within the smallest aperture allowed by the MMT observations, that of 0\farcs6$\times$1\farcs5. This provides a fairly good match to the round 0\farcs86 FOS aperture, as the ratio of the area of the former to that of the latter is $\sim$ 1.54.
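The quoted area ratio between the 0\farcs6$\times$1\farcs5 MMT extraction window and the circular 0\farcs86 FOS aperture is simple to verify:

```python
import math

mmt_area = 0.6 * 1.5                   # rectangular aperture, arcsec^2
fos_area = math.pi * (0.86 / 2.0)**2   # circular 0.86" aperture, arcsec^2
ratio = mmt_area / fos_area            # ~1.55, consistent with the quoted ~1.54
```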
The observed and corrected emission line fluxes relative to H$\beta$ for both components are shown in Table 2 together with the extinction coefficients $C$(H$\beta$), observed H$\beta$ fluxes and equivalent widths, and equivalent widths of the underlying stellar Balmer absorption lines. The errors in the line intensities listed in Table 2 take into account the noise statistics in the continuum, the errors in placing the continuum and in fitting the line profiles with Gaussians. We also retrieved the FOS spectra from the {\sl HST} archives and remeasured the emission line fluxes. Comparison of the MMT and FOS H$\beta$ fluxes in the SE component shows very good agreement: 3.5$\times$10$^{-15}$ ergs cm$^{-2}$s$^{-1}$ in the MMT spectrum as compared to 3.2$\times$10$^{-15}$ ergs cm$^{-2}$s$^{-1}$ in the {\sl HST} spectrum. The H$\beta$ emission equivalent widths are also in good agreement: 144 \AA\ in the MMT spectrum as compared to 127 \AA\ in the {\sl HST} spectrum. The agreement is not as good, however, for the NW component. The ratio of the observed H$\beta$ flux in the MMT spectrum of 2.9$\times$10$^{-15}$ ergs cm$^{-2}$s$^{-1}$ to that of 5.0$\times$10$^{-15}$ ergs cm$^{-2}$s$^{-1}$ in the {\sl HST} spectrum is $\sim$ 0.6, just the ratio of the measured emission equivalent width of H$\beta$ of 34 \AA\ from the MMT spectrum to that of 55 \AA\ from the {\sl HST} spectrum. These differences are probably due to a slight positioning shift between the MMT and {\sl HST} apertures on the NW component. The electron temperatures and oxygen abundances derived from the MMT spectra for both NW and SE components are shown in Table 3. In deriving these quantities, we have neglected temperature fluctuations.
While there is evidence for large temperature fluctuations in planetary nebulae and perhaps in high-metallicity H II regions, we believe that there is, as yet, no such convincing evidence for low-metallicity H II regions like the ones considered here (see a detailed discussion of this issue in Izotov \& Thuan 1998b). The electron temperatures $T_e$(O III) are much higher than those obtained by Skillman \& Kennicutt (1993) through a larger aperture (2\arcsec$\times$5\arcsec\ for the SE component and 2\arcsec$\times$7\farcs55 for the NW component), by 1900 K in the NW component, and by 2300 K in the SE component. They are also higher by 1800 K and 700 K respectively than the temperatures obtained by Izotov \& Thuan (1998a) within an aperture of 2\arcsec$\times$5\arcsec. There is evidently a temperature gradient at the center of both NW and SE components (see also Martin 1996), and large apertures invariably give lower electron temperatures. Matching the small {\sl HST} FOS aperture as closely as possible is essential to derive appropriate abundances. The higher electron temperatures lead to lower oxygen abundances than those derived by Skillman \& Kennicutt (1993) and Izotov \& Thuan (1998a) for larger regions. Note that the oxygen abundance derived for the NW component is lower than that derived for the SE component by a factor of $\sim$ 1.2. This is again because there is a gradient toward higher temperatures in the central part of the NW component, which leads to lower abundances. In larger apertures, the derived oxygen abundances are the same within the errors for both components, suggesting good mixing (Skillman \& Kennicutt 1993; Izotov \& Thuan 1998a). The new C/O abundance ratio derived using emission line fluxes from Garnett et al. (1997) is also lower than the value derived by those authors, implying C/O abundance gradients. We have also measured the fluxes of the Si III] $\lambda$1883, 1892 emission lines and derived the silicon abundance in both components.
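The exponential temperature dependence of Eq. (\ref{eq:C2+}) makes the stakes of this aperture matching explicit. A rough sketch of our own (it deliberately ignores the accompanying changes in the ICF and in O$^{2+}$/O):

```python
import math

def co_temp_factor(t):
    # Temperature-dependent factor of Eq. (1); t = T_e(O III)/1e4 K
    return math.exp(4.656 / t)

# For fixed line fluxes, raising the adopted T_e(O III) of the SE
# component from the Skillman & Kennicutt (1993) value of 17,200 K
# by the ~2300 K found here lowers the derived C/O by:
drop_dex = math.log10(co_temp_factor(1.72) / co_temp_factor(1.95))
# approx 0.14 dex
```

A shift of this size accounts for a large part of the difference between the Garnett et al. (1997) C/O values for I Zw 18 and those of other metal-deficient galaxies.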
\subsubsection{SBS 0335--052} Because Garnett et al. (1995a) observed SBS 0335--052 only in the UV, they used the C III] $\lambda$(1906 + 1909) to O III] $\lambda$1666 flux ratio to derive log C/O = --0.94$\pm$0.17, the lowest value for the BCGs in their sample. Although the O III] $\lambda$1666 was clearly detected, the UV spectrum is noisy, especially in the blue part, introducing uncertainties in the line strength. Garnett et al. (1995b) have also derived a low Si/O abundance: log Si/O = --1.72$\pm$0.20. This low abundance ratio is mainly due to the low value of C/O, as Si/O is derived using the Si/C ratio. We again use new high signal-to-noise ratio MMT spectral observations (Izotov et al. 1997a) in an effort to improve the situation. We extracted a one-dimensional spectrum of the brightest part of SBS 0335--052 within a 1\arcsec$\times$0\farcs6 aperture to best match the FOS aperture. We use the MMT spectrum to derive the electron temperature and the oxygen abundance, and combine it with the {\sl HST} UV spectrum to derive C/O and Si/O abundance ratios. Emission line fluxes are shown in Table 2, while the electron temperature $T_e$(O III) and oxygen abundance are given in Table 3. The observed C III] $\lambda$(1906+1909) and Si III] $\lambda$(1883+1892) emission line fluxes are corrected for extinction using the reddening law for the Small Magellanic Cloud, as parameterized by Pr\'evot et al. (1984). The carbon and silicon abundances are shown in Table 4. We derive higher log C/O = --0.83$\pm$0.08 and log Si/O = --1.60$\pm$0.21 values as compared to those of Garnett et al. (1995a,b), although they are consistent within the errors. \subsubsection{SBS 1415+437} This galaxy, with a heavy element abundance $Z_\odot$/21 (Izotov \& Thuan 1998b), has been observed with the {\sl HST} FOS in the UV and optical ranges (Thuan, Izotov \& Foltz 1998). As in I Zw 18 NW, the Si III] $\lambda$1892 emission line is not seen.
Instead, at the location of this line a deep absorption is observed. Therefore, the total flux of the Si III] emission lines has been derived by multiplying the Si III] $\lambda$1883 emission line flux by a factor of 1.67. The UV emission line fluxes have been corrected for extinction using the reddening law for the Small Magellanic Cloud (Pr\'evot et al. 1984). The electron temperature is derived from a high signal-to-noise ratio MMT spectrum in a 1\farcs5$\times$0\farcs6 aperture (Thuan, Izotov, \& Foltz 1998). \subsubsection{UM 469, NGC 4861 and T1345--420} We use for these galaxies the corrected emission line fluxes derived by Kobulnicky \& Skillman (1998) to calculate the C/O and Si/O abundance ratios with equations (\ref{eq:C2+})--(\ref{eq:Si2+}). We also correct the C$^{++}$/O$^{++}$ abundance ratio for unseen stages of ionization, while Kobulnicky \& Skillman (1998) decided not to apply a correction factor. \subsubsection{NGC 5253} Kobulnicky et al. (1997) have derived C/O and Si/O abundance ratios in three H II regions of this galaxy. Our measurements of the emission line fluxes in the same FOS spectra retrieved from the {\sl HST} archives are in fair agreement with theirs, except for the Si III] $\lambda$(1883+1892) emission line flux in the HII-2 region, for which we derive a higher value. We have also corrected Si$^{++}$/C$^{++}$ for unseen stages of ionization, which was not done by Kobulnicky et al. (1997). This correction factor can be as high as $\sim$ 1.4. \section{HEAVY ELEMENT ABUNDANCES} Our main goal here is to use the large homogeneous BCG sample described above to extend the work of TIL95 and study in more detail and with better statistics the relationship between different heavy elements in a very low-metallicity environment.
Some of the very low-metallicity galaxies are most likely young nearby dwarf galaxies undergoing their first burst of star formation not more than 100 Myr ago (Thuan, Izotov \& Lipovetsky 1997; Thuan \& Izotov 1997; Thuan, Izotov \& Foltz 1998). Therefore, the relationships between different heavy element abundances will not only put constraints on the star-formation history of BCGs, but will also be useful for understanding the early chemical evolution of galaxies. Furthermore, a precise determination of heavy element abundances in BCGs can put constraints on stellar nucleosynthesis models, as theoretical predictions for the yields of some elements are not yet very firm. The best studied and most easily observed element in BCGs is oxygen. Nucleosynthesis theory predicts it to be produced only by massive stars. We shall use it as the reference chemical element and consider the behavior of heavy element abundance ratios as a function of oxygen abundance. \subsection{Neon, Silicon, Sulfur and Argon} The elements neon, silicon, sulfur and argon are all products of $\alpha$-processes during both hydrostatic and explosive nucleosynthesis in the same massive stars which make oxygen. Therefore, the Ne/O, Si/O, S/O and Ar/O ratios should be constant and show no dependence on the oxygen abundance. In Figure 1 we show the abundance ratios for these elements as a function of 12 + log O/H (filled circles). For Si/O, we have also shown for comparison the data from Garnett et al. (1995b) for those galaxies which we have not reanalyzed for lack of the necessary data (open circles). These galaxies are not included in the computation of the mean abundance ratios listed in Table 5. The mean values of these element abundance ratios are directly related to the stellar yields and thus provide strong constraints on the theory of massive stellar nucleosynthesis.
Note that, while silicon and sulfur abundances can be measured in a wide variety of astrophysical settings (stars, H II regions, high-redshift damped Ly$\alpha$ clouds), neon and argon abundances can be measured with good precision at low metallicities only in BCGs. Table 5 gives the mean values of the Ne/O, Si/O, S/O and Ar/O abundance ratios calculated for the total sample, as well as for two subsamples: a low-metallicity subsample containing galaxies with 12 + log O/H $\leq$ 7.6, and a high-metallicity subsample containing galaxies with 12 + log O/H $>$ 7.6. For comparison, we also list the solar ratios taken from Anders \& Grevesse (1989). Generally, the dispersion about the mean of the points in the low-metallicity subsample is smaller than that in the high-metallicity subsample and the total sample, although the differences are not statistically significant. Examination of Figure 1 and Table 5 shows that, as predicted by stellar nucleosynthesis theory, no dependence on oxygen abundance is found for any of the Ne/O, Si/O, S/O and Ar/O abundance ratios. Independently of the subsample, the Ne/O, Si/O, S/O and Ar/O ratios are all very close to the corresponding solar value (dotted lines in Figure 1). There may be a hint that the mean Si/O abundance ratio in BCGs is slightly lower than the solar value, but the difference is again not statistically significant. We thus conclude that there is no significant depletion of silicon into dust grains in BCGs. By contrast, Garnett et al. (1995b) have found a weighted mean log Si/O = --1.59$\pm$0.07 for their sample of low-metallicity galaxies, lower by a factor of $\sim$ 1.6 than the solar value. This led them to conclude that about 50\% of the silicon is incorporated into dust grains. The number of galaxies with measured silicon abundances is not large, and more observations are needed for a more definite conclusion.
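The subsample statistics in Table 5 follow a simple recipe: split the sample at 12 + log O/H = 7.6 and average each log abundance ratio. A minimal sketch, with invented data points rather than the actual sample:

```python
import statistics

def subsample_stats(data, split=7.6):
    """data: (12 + log O/H, log X/O) pairs.
    Returns (mean, stdev) of log X/O for the low- (<= split) and
    high-metallicity (> split) subsamples."""
    low  = [r for z, r in data if z <= split]
    high = [r for z, r in data if z > split]
    return ((statistics.mean(low),  statistics.stdev(low)),
            (statistics.mean(high), statistics.stdev(high)))

# Invented example points, for illustration only:
sample = [(7.2, -1.60), (7.5, -1.58), (7.9, -1.40), (8.1, -1.20)]
(low_mean, low_sig), (high_mean, high_sig) = subsample_stats(sample)
# The low-metallicity subsample comes out tighter than the high one,
# mimicking the trend in dispersion noted in the text.
```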
The mean values of the abundance ratios for the other elements are in very good agreement with those derived by TIL95 for a smaller sample of BCGs. TIL95 have made detailed comparisons of their results with those of previous studies, so we shall not repeat the discussion here. We shall only mention the more recent work of van Zee, Haynes \& Salzer (1997) who have studied heavy element abundance ratios in the H II regions of 28 gas-rich, quiescent dwarf galaxies. Their mean Ne/O, S/O and Ar/O ratios are consistent within the errors with the results of TIL95 and those obtained here, although in many of their galaxies, the abundance measurements are more uncertain because the [O III] $\lambda$4363 emission line is not detected and the electron temperature is derived from an empirical method. \subsection{Carbon} Carbon is produced by both intermediate- and high-mass stars. Since C is a product of hydrostatic burning, the contributions of SNe Ia and SNe II are small. Therefore, the C/O abundance ratio is sensitive to the particular star formation history of the galaxy. It is expected that, in the earliest stages of galaxy evolution, carbon is mainly produced by massive stars, so that the C/O abundance ratio is independent of the oxygen abundance, as both C and O are primary elements. At later stages, intermediate-mass stars add their carbon production, so that an increase in the C/O ratio is expected with increasing oxygen abundance. The results of the study by Garnett et al. (1995a) did not conform to these expectations. Those authors found a continuous increase of log C/O with increasing log O/H in their sample of metal-deficient galaxies, which could be fitted by a power law with slope 0.43. Because log C/O is fairly constant at --0.9 for halo stars in the Galaxy (Tomkin et al. 1992), Garnett et al. (1995a) suggested there may be a difference between the abundance patterns seen in the Galaxy and dwarf galaxies.
Subsequent {\sl HST} FOS observations of I Zw 18 by Garnett et al. (1997) yielded abundances that complicated the situation even more. It was found that I Zw 18 bucks the trend shown by the other low-metallicity objects. Although it has the lowest metallicity known, it shows a rather high log C/O, equal to --0.63$\pm$0.10 and --0.56$\pm$0.09 in the NW and SE components respectively. These values are significantly higher than those predicted by massive stellar nucleosynthesis theory. This led Garnett et al. (1997) to conclude that carbon in I Zw 18 has been enhanced by an earlier population of lower-mass stars and, hence, despite its very low metallicity, I Zw 18 is not a ``primeval" galaxy. Garnett et al. (1997) considered and dismissed the possibility that the high C/O ratio in I Zw 18 may be due to errors in the electron temperature. We have revisited the problem here for I Zw 18 with our own data and conclude that it is indeed the adoption of too low an electron temperature that is responsible for the anomalously high C/O ratios in both components of I Zw 18. The temperatures adopted by Garnett et al. (1997) for I Zw 18 are derived from optical observations in an aperture larger than the FOS aperture, and are too low because of a temperature gradient. In Figure 2a we have plotted with filled circles log C/O against 12 + log O/H for all the galaxies which we have reanalyzed. Great care was taken to derive electron temperatures in apertures matching as closely as possible the FOS aperture. Open circles show galaxies from Garnett et al. which could not be reanalyzed for lack of the necessary data. It is clear that, in contrast to the results by Garnett et al. (1995a, 1997), we find log C/O to be constant in the extremely low-metallicity range, when 12 + log O/H varies between 7.1 and 7.6, as expected from the common origin of carbon and oxygen in massive stars. Furthermore, the dispersion of the points about the mean is very small: $<$log C/O$>$ = --0.78$\pm$0.03 (Table 5).
This mean value is in very good agreement with that of $\sim$ --0.8 predicted by massive stellar nucleosynthesis theory (Weaver \& Woosley 1993; WW95). Two models with $Z$ = 0 and $Z$ = 0.01 $Z_\odot$ are shown by horizontal lines in Figure 2a. They are in good agreement with the observations. At higher metallicities (12 + log O/H $>$ 7.6), there is an increase in log C/O with log O/H and also more scatter at a given O/H, which we attribute to the carbon contribution of intermediate-mass stars in addition to that of massive stars. \subsection{Nitrogen} The origin of nitrogen has been a subject of debate for some years. The basic nucleosynthesis process is well understood: nitrogen results from CNO processing of oxygen and carbon during hydrogen burning. However, the nature of the stars mainly responsible for the production of nitrogen remains uncertain. If oxygen and carbon are produced not in previous-generation stars, but in the same stars prior to the CNO cycle, then the amount of nitrogen produced is independent of the initial heavy element abundance of the star, and its synthesis is said to be primary. On the other hand, if the ``seed'' oxygen and carbon are produced in previous-generation stars and incorporated into a star at its formation and a constant mass fraction is processed, then the amount of nitrogen produced is proportional to the initial heavy element abundance, and the nitrogen synthesis is said to be secondary. Secondary nitrogen synthesis can occur in stars of all masses, while primary nitrogen synthesis is usually (but not universally) thought to occur mainly in intermediate-mass stars (RV81, WW95). In the case of secondary nitrogen production, it is expected that massive stars with decreasing metallicity would produce decreasing amounts of $^{14}$N (WW95).
There is, however, a caveat: there is the possibility that in some massive stars the convective helium shell penetrates into the hydrogen layer, with the consequent production of large amounts of primary nitrogen. The behavior of the N/O abundance ratio as a function of the O/H ratio has provided the main observational constraint to this debate. In low-metallicity Galactic halo stars, N/O is nearly independent of O/H, implying that nitrogen has a strong primary component which can be explained by primary nitrogen production in massive stars with a large amount of convective overshoot (Timmes, Woosley \& Weaver 1995). Several studies of low-metallicity (12 + log O/H $\leq$ 8.3) H II regions in dwarf galaxies have also revealed that N/O is independent of O/H, implying again a primary origin of nitrogen (Garnett 1990; Vila-Costas \& Edmunds 1993; TIL95; van Zee, Haynes \& Salzer 1997; van Zee, Salzer \& Haynes 1998). It is generally believed that primary nitrogen in BCGs is produced by intermediate-mass stars by carbon dredge-up (RV81). For high-metallicity H II regions in spiral galaxies (12 + log O/H $>$ 8.3), the N/O ratio increases linearly with the O abundance, indicating that, in this metallicity regime, N is mainly a secondary element. We shall not be concerned with secondary N here and shall discuss mainly primary N since all our BCGs have 12 + log O/H $<$ 8.3. A problem in interpreting the data in the vast majority of studies is the existence of a considerable scatter ($\pm$ 0.3 dex) in N/O at a given O/H. The large scatter has been attributed to the delayed release of N produced in intermediate-mass long-lived stars, compared to O produced in massive short-lived stars (Matteucci \& Tosi 1985; Matteucci 1986; Garnett 1990; Pilyugin 1992, 1993; Marconi et al. 1994; Kobulnicky \& Skillman 1996, 1998). In contrast to these studies, TIL95 found the scatter in N/O at a given O/H to be large only when the metallicity exceeds a certain value, i.e.
12 + log O/H $>$ 7.6. At lower O abundances, the scatter is extremely small. It is difficult to improve the statistics much by adding more galaxies in this very low-metallicity range to the TIL95 sample, because objects with such a low heavy element content are very scarce and extremely difficult to discover. We have added only SBS 0335--052 with $Z_\odot$/41, the second most metal-deficient BCG known after I Zw 18 (Figure 2b). As for I Zw 18, we have used our own data instead of that of Skillman \& Kennicutt (1993). The result is the same. The scatter of the points is very small: $<$log N/O$>$ = --1.60$\pm$0.02 (Table 5). We do not believe that this very small scatter is the result of some unknown selection effect which would invariably pick out low-metallicity BCGs at the same stage of their evolutionary history. This is for two reasons. First, we have plotted all the data we have, without any selection. Second, as discussed later, the scatter does increase substantially for higher-metallicity BCGs. There is no obvious reason why a selection effect would operate only on low-metallicity objects, but not on higher-metallicity ones. As discussed by TIL95, the very small dispersion of the N/O ratio puts very severe constraints on time-delay models. A time delay between the primary production of oxygen by massive stars and that of nitrogen by intermediate-mass stars can be as large as 5$\times$10$^8$ yr, the lifetime of a 2 -- 3 $M_\odot$ star. This would introduce a significant ($\geq$ $\pm$ 0.2) scatter in N/O, as chemical evolution models by Pilyugin (1993) and Marconi et al. (1994) show. We reiterate the conclusion of TIL95 that the small scatter of N/O in the most metal-deficient galaxies in our sample can be best understood if primary N in these galaxies is produced by massive stars ($M$ $>$ 9 $M_\odot$) only.
As we shall see in section 5, intermediate-mass stars have not yet returned their nucleosynthesis products to the interstellar medium in these most metal-poor galaxies, because they have not had enough time to evolve. There is a further data point which we did not include in our sample, but which strengthens our results even more. It comes from the western companion of SBS 0335--052, the BCG SBS 0335--052 W, located in a common H I envelope with the former. This BCG was observed by Lipovetsky et al. (1998) with the MMT and Keck telescopes to have oxygen abundances in its two knots of 12 + log O/H = 7.22$\pm$0.03 and 7.13$\pm$0.07 respectively, with corresponding log N/O = --1.54$\pm$0.06 and --1.53$\pm$0.19, consistent with our mean derived value of log N/O (Table 5). We have compared our results with those of other authors for BCGs in the range 12 + log O/H $\leq$ 7.6, using the compilation of Kobulnicky \& Skillman (1996) of the existing data for element abundances in BCGs. Two galaxies in their compilation show N/O ratios which are significantly lower than our mean value, by many times the dispersion of our sample. The first one is Tol 65, with 12 + log O/H = 7.56, which has a very low log N/O = --1.79$\pm$0.20. This galaxy was observed nearly two decades ago (in 1980) by Kunth \& Sargent (1983). Its N/O ratio has large errors and should be redetermined more precisely. The second galaxy, CG 1116+51, with 12 + log O/H = 7.53, has log N/O = --1.68$\pm$0.11. However, Izotov \& Foltz (1998) have recently reobserved this galaxy with the MMT and found log N/O = --1.57$\pm$0.09, consistent with the mean value and small dispersion about the mean found here and in TIL95. Studies of southern BCGs by Campbell, Terlevich \& Melnick (1986) and Masegosa et al. (1994) also show a lower envelope for log N/O at around --1.6 to --1.7. The very few galaxies with N/O below the envelope all have large observational uncertainties.
The situation changes appreciably for BCGs with 12 + log O/H $>$ 7.6. The scatter of the C/O and N/O ratios increases significantly at a given O abundance. This increase in the dispersion is best explained if, in addition to the production of carbon and primary nitrogen by massive stars during the starburst phase, there is an additional production of both these elements by intermediate-mass stars during the interburst quiescent phase. Within the framework of current nucleosynthesis theory (RV81, WW95), primary nitrogen is produced only by intermediate-mass stars, not by massive stars. In this case, the production of oxygen and nitrogen would be decoupled, and in principle very low N/O values could be observed. However, our observations do not confirm these expectations. Instead, Figure 2b shows a definite lower envelope for the N/O ratio, at the level set by primary nitrogen production in low-metallicity massive stars. To summarize, we have arrived at the following important conclusions concerning the origin of nitrogen: (1) In very low-metallicity BCGs with 12 + log O/H $\leq$ 7.6, nitrogen is produced as a primary element by massive stars only. Intermediate-mass stars have not had the time to evolve and release their nucleosynthesis products to the interstellar medium. The massive stars set the level of log N/O to be at $\sim$ --1.60. This picture is the most reasonable one to account for the extremely small dispersion in log N/O ($\pm$ 0.02 dex) at a given O abundance. (2) The value of log N/O increases above $\sim$ --1.60 along with the scatter at a given O abundance in BCGs with 7.6 $<$ 12 + log O/H $<$ 8.2. We interpret this increase in log N/O and its larger scatter as due to the additional contribution of primary nitrogen produced by intermediate-mass stars, on top of the primary nitrogen produced by massive stars. Finally, we check whether the N/O ratios observed in more quiescent dwarf galaxies with less active star formation (van Zee et al.
1997, 1998) are consistent with the scenario outlined above. The H II regions in the latter have much lower excitation than those in BCGs, and the [O III] $\lambda$4363 emission line is not seen in many galaxies. In the framework of a time-delayed nitrogen production model, we would expect lower values of N/O abundance ratios in quiescent dwarf galaxies than in BCGs as the bursts in the former are older (the H$\beta$ emission equivalent widths are smaller) and more oxygen has been released relative to nitrogen. Van Zee et al. (1997, 1998) do find some galaxies with very low log N/O $\leq$ --1.7. However, these low values are suspect and may be subject to systematic errors. There are several odd features concerning the galaxies with low N/O in the sample of van Zee et al. (1997, 1998). First, they were all observed in the same run, the mean value of their log N/O in 9 H II regions being --1.84, while that for the rest of the sample observed during 4 other runs is --1.52, in good agreement with the mean value found for BCGs. Second, the derived extinctions for the low N/O galaxies are either zero or systematically lower than that of other galaxies observed in different runs. The H II region UGC 5764-3, with the lowest log N/O = --2.02, has an H$\alpha$/H$\beta$ intensity ratio of only 2.3, much lower than the theoretical recombination value of 2.7 -- 2.8. We conclude therefore that there is no strong evidence in the quiescent dwarf galaxy data against a scenario of primary nitrogen production by high-mass stars in the galaxy metallicity range 12 + log O/H $\leq$ 7.6. \subsection{Iron} The Fe/O abundance ratio also provides a very important constraint on the chemical evolution history of galaxies. TIL95 first discussed the iron abundance in BCGs. From their small sample of 7 low-metallicity BCGs, they found that oxygen in these galaxies is overproduced relative to iron, as compared to the Sun: [O/Fe] = log (Fe/O)$_\odot$ -- log (Fe/O) = 0.34$\pm$0.10.
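The bracket notation just defined is simply a difference of logarithms relative to the Sun. As a sketch with illustrative inputs (the values below are chosen only to reproduce the TIL95 result; they are not actual solar or measured ratios):

```python
def o_over_fe(log_fe_o, log_fe_o_sun):
    """[O/Fe] = log(Fe/O)_sun - log(Fe/O); positive values mean oxygen
    is overproduced relative to iron as compared with the Sun."""
    return log_fe_o_sun - log_fe_o

# Hypothetical inputs giving the TIL95 value of [O/Fe] = 0.34:
ofe = o_over_fe(log_fe_o=-1.90, log_fe_o_sun=-1.56)
```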
This value is in very good agreement with the [O/Fe] observed for Galactic halo stars (Barbuy 1988), implying that the origin of iron in low-metallicity BCGs and in the Galaxy prior to the formation of halo stars is similar, and supporting the scenario of an early chemical enrichment of the galactic halo by massive stars. We have considerably increased the size of the sample of BCGs with iron abundance measurements. In Figure 2c we show [O/Fe] vs. 12 + log O/H for a total of 38 BCGs. It can be seen that, for all BCGs except one, [O/Fe] is above the solar value, reinforcing the conclusion of TIL95. The mean value of [O/Fe] for the whole sample is 0.40$\pm$0.14 (Table 5). The only exception is the BCG SBS 0335--052 which has a negative [O/Fe], i.e. oxygen is underabundant with respect to iron as compared to the Sun. These odd abundances are not the result of observational errors, as Izotov et al. (1997a) derived a similar result from independent MMT observations. The low value of [O/Fe] is more probably caused by the contamination and hence artificial enhancement of the nebular [Fe III] $\lambda$4658 emission line by the narrow stellar C IV $\lambda$4658 emission line produced in hot stars with stellar winds. The presence of these stars in SBS 0335--052 is demonstrated by the detection of stellar Si IV $\lambda$$\lambda$1394, 1403 lines with P Cygni profiles (Thuan \& Izotov 1997). A case in point which supports this contamination hypothesis is that of the NW component of I Zw 18. Izotov et al. (1997b) and Legrand et al. (1997) have discovered a Wolf-Rayet population in this NW component with a significant contribution from WC4 stars to the C IV $\lambda$4658 emission. The spectrum by Izotov et al. (1997b) shows a narrow emission line at $\lambda$4658 superposed on top of the broad C IV $\lambda$4658 bump produced by the WR stars.
The use of this narrow emission line to derive Fe abundance results in an artificially low [O/Fe] $\sim$ --0.4 for the NW component of I Zw 18. On the other hand, the SE component, which does not possess Wolf-Rayet stars and hence has a nebular [Fe III] $\lambda$4658 emission line uncontaminated by narrow stellar C IV $\lambda$4658 emission, has a normal [O/Fe] $\sim$ 0.3, consistent with the value derived by ITL97 and with the mean value for the BCGs in our sample. We have plotted in Figure 2c the [O/Fe] value for the SE component of I Zw 18. Another possible explanation for the abnormally low [O/Fe] in SBS 0335--052 is the enhancement of the [Fe III] lines by supernova shocks. Disregarding SBS 0335--052, Figure 2c shows that the O/Fe ratio in BCGs is nearly constant, irrespective of the oxygen abundance, at a value $\sim$ 2.5 times higher than in the solar neighborhood. \subsection{Comparison of observational and theoretical nucleosynthetic yields} \subsubsection{Heavy element enrichment by massive stars} The remarkable constancy and small scatter of the C/O and N/O abundance ratios for the BCGs with oxygen abundance 12 + log O/H $\leq$ 7.6, and the similar behavior of the Ne/O, Si/O, S/O, Ar/O and [O/Fe] ratios with respect to the O abundance for all the BCGs in our sample provide a unique opportunity to compare the observed yields of massive stars with theoretical predictions, and put stringent constraints on nucleosynthetic models of low-metallicity massive stars. This comparison has generally been made for stars in the Galaxy (Timmes et al. 1995; Samland 1998). However, the Galaxy is a complex evolved stellar system with the juxtaposition of many generations of stars. While its study gives the possibility to test models for a wide range of metallicities, the task is complex because the chemical enrichment is made not only by massive but also by intermediate and low-mass stars.
Additionally, heavy element abundance ratios in the Galaxy may be modified by dynamical effects such as gas infall or outflow. BCGs are simple systems by comparison. Since in the lowest-metallicity BCGs all heavy elements are made by massive stars only, the chemical enrichment of BCGs is insensitive to infall or outflow of material, and is dependent only on the characteristics of the Initial Mass Function (IMF) and stellar yields. A first comparison of observed stellar yields by massive stars with theoretical calculations has been made by TIL95. Since our observational sample is much larger and new calculations (WW95) have appeared which cover a wider metallicity range (TIL95 had only solar metallicity models to compare with), we revisit the problem here. Table 6 shows the heavy element-to-oxygen yield ratios as derived from the observed abundance ratios. Since intermediate-mass stars contribute significantly to the synthesis of C and N in BCGs with 12 + log O/H $>$ 7.6, the observed yield ratios $M$(C)/$M$(O) and $M$(N)/$M$(O) were derived using BCGs with 12 + log O/H $\leq$ 7.6, so that only the contribution from massive stars is taken into account. For the other elements, the derived yields are simply the mean value of the abundance ratio for the whole sample. As for the theoretical yields given in Table 6, they are taken from WW95 and averaged over an IMF with a Salpeter slope in the stellar mass range 1 -- 100 $M_\odot$, for a heavy element mass fraction ranging from $Z$ = 0 to $Z_\odot$. The two horizontal solid lines in Figures 1 and 2 show, for all elements except N and Fe, the range of theoretical yield ratios predicted by WW95 as determined by their models with metallicities 0 and 0.01 $Z_\odot$. Inspection of Table 6 and Figures 1 and 2 shows that the heavy element yield ratios calculated by WW95 are generally not far from the yield ratios inferred from the observations.
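The IMF averaging invoked above can be sketched numerically. In the toy example below, a Salpeter number weighting with slope --2.35 is applied to a hypothetical grid of per-star oxygen yields and normalized by the total mass of stars formed between 1 and 100 $M_\odot$; the model grid and yield values are made-up placeholders, not the WW95 tables.

```python
# Hedged sketch of IMF-averaged yields: per-star yields are weighted by a
# Salpeter IMF, phi(m) ~ m**-2.35, and divided by the total stellar mass
# formed over the full 1-100 Msun range.
def trapezoid(y, x):
    """Simple trapezoidal integration over an irregular grid."""
    return sum((y[i] + y[i + 1]) * (x[i + 1] - x[i]) / 2.0
               for i in range(len(x) - 1))

def imf_averaged_yield(masses, yields_per_star, slope=-2.35, m_lo=1.0, m_hi=100.0):
    """Mass of an element ejected per unit mass of stars formed."""
    weights = [m ** slope for m in masses]  # Salpeter number weighting
    ejected = trapezoid([w * y for w, y in zip(weights, yields_per_star)], masses)
    # Closed form for the total mass formed: integral of m * m**slope dm
    p = slope + 2.0                          # = -0.35 for the Salpeter slope
    mass_formed = (m_hi ** p - m_lo ** p) / p
    return ejected / mass_formed

# Placeholder oxygen yields (Msun per star) on a 12-40 Msun model grid
# (NOT the WW95 values -- purely illustrative numbers):
m_grid = [12.0, 15.0, 20.0, 25.0, 40.0]
y_oxygen = [0.2, 0.5, 1.5, 3.0, 8.0]
print(f"{imf_averaged_yield(m_grid, y_oxygen):.4f}")
```

Ratios of two such IMF-averaged yields give the theoretical yield ratios of the kind listed in Table 6.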
The model which best fits the observed Ne/O and Si/O ratios (Figures 1a and 1b) has $Z$ = 0. Models with $Z$ $>$ 0 predict a Ne abundance too low by a factor of $\sim$ 2, while the predicted Si abundance is too high by about the same factor. This anticorrelation can be explained by the fact that part of the Ne produced is consumed in the later stages of hydrostatic burning, synthesizing Si in particular. The observed S/O and Ar/O abundance ratios (Figures 1c and 1d) are best fitted by the $Z$ = 0.01 $Z_\odot$ and $Z$ = 0.1 $Z_\odot$ models which give nearly identical yields. The $Z$ = 0 model predicts yields too low by a factor of $\sim$ 1.3. As for C/O (Figure 2a), the $Z$ = 0 and $Z$ = 0.01 $Z_\odot$ models, where C is produced only by massive stars and not by intermediate-mass stars, nicely bracket the data for BCGs with 12 + log O/H $<$ 7.6. It may seem surprising that the models which best fit elements such as Ne and Si have $Z$ = 0, considering that the BCGs in our sample all have $Z$ $\geq$ 0.02 $Z_\odot$. However, these heavy element abundances characterize the ionized gas, not the stars, which can have a much lower metallicity. Because the most abundant Ne isotope, $^{20}$Ne, is synthesized during hydrostatic carbon burning, it is not sensitive to uncertainties in explosive nucleosynthesis models. Si, S and Ar yields in massive stars are, on the other hand, sensitive to the treatment of the explosion. Additionally, the production of these elements is sensitive to a variety of uncertain factors such as the rate of the $^{12}$C($\alpha$,$\gamma$)$^{16}$O process, the treatment of semiconvection, the treatment of convection and convective overshoot mixing during the last stages of shell oxygen burning, the density structure near the iron core, the initial location of the mass cut, and the amount of mass that falls back in the explosion (WW95).
Given all these uncertainties, it is not so much the small discrepancies between theoretical and observational yield ratios that should be emphasized, but the overall good agreement: it is remarkable that the abundance ratios inferred from the stellar yields by WW95 do not differ from those observed in BCGs by large factors, but are invariably in the ballpark. The calculated N and Fe yields constitute exceptions: they do not agree well with the data. TIL95 have already discussed the problem of iron, for which the theoretical yields are $\sim$ 2 times greater than those inferred from the observations. Theoretical calculations predict that iron is produced during explosive nucleosynthesis by supernovae of both types I and II in nearly equal quantities. However, the progenitors of SNe II are short-lived massive stars while the progenitors of SNe Ia are low-mass stars which explode only after 1 Gyr. Therefore, [O/Fe] is a good estimator of the galaxy's age. The constancy of [O/Fe] and its high value in BCGs as compared to the Sun (Figure 2c) implies that iron in BCGs was produced only by massive stars in type II supernovae. Because explosive nucleosynthesis models are sensitively dependent on the initial conditions of the explosion, the observed iron-to-oxygen abundance ratio can serve as a good discriminator between different models. TIL95 found that the models best fit the observations when the mass of the central collapsing core in the explosive synthesis is $\sim$ 10\% larger than the mass of the iron core. Since the mean value of [O/Fe] has not changed with the larger sample as compared to that found by TIL95, this conclusion still holds with the new data. The discrepancy between theory and observation is much more important for N. The N yield inferred from observations is 1 to more than 2 orders of magnitude larger than the theoretical yields of models with sub-solar metallicities (Table 6).
This is because conventional low-metallicity massive star models do not produce primary nitrogen. However, as noted by WW95, it is possible that in some massive stars, the convective helium shell penetrates into the hydrogen layer with the consequent production of large amounts of primary nitrogen. In fact, Timmes, Woosley \& Weaver (1995) have found that the theoretical predictions for primary N production in massive stars with a large amount of convective overshoot are much more consistent with the observed [N/Fe] in low-metallicity halo stars as compared to conventional models, in which nitrogen is produced only as a secondary element, despite the unknown details of convective overshoot. \subsubsection{The role of intermediate-mass stars in heavy element production} While a picture where all heavy elements in BCGs with 12 + log O/H $\leq$ 7.6 are produced in high-mass stars (HMS) only is consistent with the observations, the additional production of carbon and nitrogen by intermediate-mass stars (IMS) needs to be taken into account in BCGs with oxygen abundance 12 + log O/H $>$ 7.6. As already discussed, it is commonly thought that primary nitrogen in low-metallicity BCGs is produced only by intermediate-mass stars (RV81, WW95). Additionally, some nitrogen is produced as a secondary element in both intermediate and high-mass stars. However, because production of secondary nitrogen drops as metallicity decreases, it is expected that the amount of secondary nitrogen is negligible compared to that of primary nitrogen in low-metallicity BCGs. As for carbon, it is believed to be produced as a primary element in all stars more massive than 1.5 $M_\odot$ (RV81, WW95). We compare in Table 7 the observed C/O and N/O abundance ratios with theoretical predictions, taking into account the contributions of both high and intermediate-mass stars. 
As before, all theoretical ratios are IMF-averaged values with a Salpeter slope of --2.35 and lower and upper mass limits of 1 and 100 $M_\odot$ respectively. Theoretical yields for massive stars in the mass range 12 -- 40 $M_\odot$ and with a heavy element mass fraction $Z$ = 0.0002 are taken from WW95. Since production of primary nitrogen by massive stars is not considered by these authors, we adopt as the primary N yield by massive stars that which is consistent with the observed $<$log N/O$>$ = --1.60 for low-metallicity BCGs (Table 7 and solid horizontal line in Figure 2b). As for the C and N yields for intermediate-mass stars, they are taken from two different sets of models. The first set of models is from RV81. They are characterized by a stellar mass range from 3.5 to 7 $M_\odot$, a mass loss efficiency parameter on the asymptotic giant branch $\eta$ = 0.33, and a mixing length parameter $\alpha$ = 1.5. The other set of models is from HG97. They are characterized by $Z$ = 0.001 and a standard scaling parameter related to the efficiency of mass loss on the asymptotic giant branch $\eta$ = 1. We shall argue in Section 5 that the high value of [O/Fe] with respect to the Sun in the BCGs studied here implies that they are not older than $\sim$ 1 -- 2 Gyr. Therefore, we have only considered yields from HG97 for stars with a lifetime less than 1 Gyr, i.e. with mass $\geq$ 2 $M_\odot$. We assume furthermore that oxygen is produced by massive stars only. We do not give in Table 7 the value of C/O in the case of C production by intermediate-mass stars only, as this situation is not realistic: C produced by longer-lived intermediate-mass stars is always accompanied by C produced by shorter-lived massive stars. It is evident from Table 7 that there is generally good agreement between observations and theory for C/O. We have already discussed the agreement for low-metallicity BCGs with oxygen abundance 12 + log O/H $\leq$ 7.6 where C is produced by massive stars only.
The agreement is as good for higher-metallicity BCGs with 12 + log O/H $>$ 7.6, if C is produced in both massive and intermediate-mass stars. We have plotted in Figure 2a by a horizontal dashed line the theoretical C/O ratio calculated with models by WW95 for high-mass stars and by RV81 for intermediate-mass stars. This value should be considered as an upper limit as the production of oxygen by massive stars in the current burst of star formation lowers the observed C/O ratios. The latter should lie between a lower limit set by primary C production by massive stars alone, whose possible range is shown by the two horizontal solid lines in Figure 2a, and that upper limit. The data points for BCGs with 12 + log O/H $>$ 7.6 do indeed scatter between these two limits as expected. Using the HG97 models would give a lower upper limit (by a factor of 1.6), but this is still consistent with the data given the observational uncertainties. The situation for nitrogen is more complex. If nitrogen is produced only by intermediate-mass stars (and oxygen only by massive stars), then the RV81 and HG97 models predict respectively log N/O = --1.27 and --0.84. While these values are consistent with the largest N/O ratios observed for the BCGs in our sample with 12 + log O/H $>$ 7.6 (Figure 2b), N production by intermediate-mass stars alone cannot explain, as discussed before, the very small dispersion of N/O ratios in BCGs with lower metallicities, so we do not consider this scenario further. We examine therefore the picture where nitrogen is produced by both high and intermediate-mass stars. If nitrogen is produced only as a secondary element in massive stars, then the predicted log N/O is too small, $\leq$ --2.37 for massive star models with $Z$ $\leq$ 0.1 $Z_\odot$ (Table 6). We have thus to consider the situation where the nitrogen produced in massive stars is primary.
In this case, the combined (HMS + IMS) nitrogen and oxygen production gives log N/O = --1.11 and --0.77 for the RV81 and HG97 models respectively. As in the case of the C/O ratio, these values should be considered as upper limits. We have shown in Figure 2b by a dashed horizontal line the upper limit corresponding to the RV81 model. The data are completely consistent with the latter model: all the BCG points fall between the dashed line and the lower limit set by primary N production in massive stars only (solid line). It is important to stress here that in this picture, no BCG can have a N/O (or C/O) ratio below the value set by massive star evolution, as indicated by the solid lines in Figure 2a (and 2b). We have already discussed in Sections 4.2 and 4.3 that we know of no reliable data which contradict that statement. In summary, the comparison of observational and theoretical yields shows a remarkably good overall agreement in spite of the many uncertain parameters in the models. With the presently available data, the RV81 yields appear to give a slightly better fit to the data than the HG97 yields, although both sets are consistent with the data within the observational uncertainties. It is also clear that further development of massive star nucleosynthesis theory is needed, especially concerning nitrogen and iron production. Because the theoretical yields of some elements are still so uncertain, we feel it is best to use, in computing chemical evolution models, the empirical yields derived from observations of low-metallicity BCGs, as summarized in Table 6. \subsection{Evolution of the He abundance in BCGs} The analysis of the behavior of the heavy element abundance ratios as a function of oxygen abundance has shown that chemical enrichment proceeds differently in BCGs with low and high oxygen abundance.
In BCGs with 12 + log O/H $\leq$ 7.6, high-mass stars are the main agents of chemical enrichment, the very small dispersion of the abundance ratios ruling out the time-delayed production of carbon and nitrogen by intermediate-mass stars. On the other hand, in higher metallicity BCGs with 7.6 $<$ 12 + log O/H $<$ 8.2, the contribution of intermediate-mass stars to heavy element enrichment is significant, as evidenced by an increase in the values and dispersions of the C/O and N/O ratios. One might expect therefore that the He enrichment history is also different in these two ranges of oxygen abundances, as helium is produced in different proportions by high and intermediate-mass stars. In Figure 3 we show the helium mass fraction $Y$ as a function of oxygen abundance for the galaxies in our sample. All of them possess accurate He abundance determinations as the present sample is precisely the one used by Izotov \& Thuan (1998b) to derive the primordial helium abundance $Y_p$. It is usual practice to extrapolate the $Y$ versus O/H and $Y$ versus N/H linear regressions to O/H = N/H = 0 to derive $Y_p$ (Peimbert \& Torres-Peimbert 1974, 1976; Pagel, Terlevich \& Melnick 1986). The $Y$ versus O/H regression line is described by \begin{equation} Y = Y_p + \frac{dY}{d({\rm O/H})} ({\rm O/H}). \label{eq:YvsO} \end{equation} The dotted line in Figure 3 represents the best fit regression line as derived by Izotov \& Thuan (1998b), with d$Y$/d$Z$ = 2.4 which corresponds to d$Y$/d(O/H) = 45. We now compare this best fit slope with the ones predicted in various models. In the scenario of element production in high-mass stars only for BCGs with 12 + log O/H $\leq$ 7.6 or O/H $\leq$ 4$\times$10$^{-5}$, the predicted slope (WW95) is much shallower, d$Y$/d$Z$ = 0.94 (Table 7). The solid line in Figure 3 has this slope (we adopt $Y_p$ = 0.244, Izotov \& Thuan 1998b).
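The two forms of the slope quoted above (d$Y$/d$Z$ = 2.4 and d$Y$/d(O/H) = 45) are linked by a conversion factor between the heavy-element mass fraction $Z$ and the O/H number ratio. The sketch below simply adopts the factor implied by that pair of numbers; it is not an independent calibration.

```python
# The regression Y = Y_p + (dY/d(O/H)) * (O/H) is quoted in the text both as
# dY/dZ = 2.4 and dY/d(O/H) = 45, so the implied conversion is Z ≈ K * (O/H)
# with K taken from that pair of numbers (an assumption, not a calibration).
K = 45.0 / 2.4  # ≈ 18.75, implied dZ/d(O/H)

def y_of_oh(oh, yp=0.244, dydz=2.4):
    """Helium mass fraction along the regression line (Y_p from Izotov & Thuan 1998b)."""
    return yp + dydz * K * oh

# At 12 + log O/H = 7.6, i.e. O/H = 4e-5, the break point discussed in the text:
print(round(y_of_oh(4e-5), 4))  # → 0.2458
```

The same function with d$Y$/d$Z$ = 0.94 reproduces the shallower solid line expected when only high-mass stars make helium.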
It can be seen that it fits quite well the lowest-metallicity points, and is in fact nearly indistinguishable from the best fit line. At higher oxygen abundances, the slope steepens because of the additional production of helium in intermediate mass stars and takes the values d$Y$/d$Z$ = 1.66 and 1.54 for the RV81 and HG97 yields respectively (Table 7). These values are lower than the best fit slope d$Y$/d$Z$ = 2.4$\pm$1.0, but are consistent with it within the errors. The dashed line in Figure 3 has the slope derived with the RV81 yields. Taking into account the error bars, it fits the observations quite well. We shall need to supplement our BCG sample with more high-metallicity BCGs to determine d$Y$/d$Z$ better, and reduce the observational uncertainties in the individual He determinations, before we can ascertain whether there is any difference between the best fit slope and the one derived from theory. Given the present data, we conclude that our proposed scenario -- He production in high-mass stars only for galaxies with 12 + log O/H $\leq$ 7.6, and He production by both high and intermediate-mass stars for higher metallicity galaxies -- is in good agreement with the observations. If we assume that 25\% of the oxygen produced is lost by the galaxy due to supernova-driven winds, then d$Y$/d$Z$ is nearly unchanged at low metallicities because both oxygen and helium are produced in massive stars. However, when intermediate-mass stars play a role in producing He (but not O), the slope steepens, becoming 1.89 instead of 1.66. The above analysis implies that a slope change may be expected for the $Y$ versus O/H linear regression at 12 + log O/H $\sim$ 7.6, and that fitting both the low and high metallicity ranges by the same slope may introduce some systematic underestimation of $Y_p$. If this is the case, it is perhaps best to derive $Y_p$ not by a regression fit, but by taking the mean $Y$ of the most metal-deficient galaxies. 
ITL97 and Izotov \& Thuan (1998b) did indeed find that the mean $Y$ of the two most metal-deficient BCGs known, I Zw 18 and SBS 0335--052, is higher by 0.001 than the value derived from a linear regression fit to the whole sample ($<$$Y$$>$ = 0.245$\pm$0.004 instead of $Y_p$ = 0.244$\pm$0.002). However, as shown in Figure 3, the difference between the regression line derived by fitting the data with the same slope over the whole metallicity range and that expected for He production by massive stars only, is small and far below the observational uncertainties. We conclude therefore that, given the present quality of the data, the method of using a single linear regression (Eq. \ref{eq:YvsO}) with the same d$Y$/d$Z$ slope over the whole metallicity range is perfectly adequate for the determination of the primordial helium abundance. We next examine the behavior of He with respect to C in the context of our favored model: He and heavy-element production by high-mass stars only in BCGs with 12 + log O/H $<$ 7.6, and by both high-mass and intermediate-mass stars in higher metallicity BCGs. In Figure 4 we show the dependence of the helium mass fraction $Y$ on the carbon abundance C/H for those BCGs in our sample for which both of these quantities are known. Admittedly, the total number of such galaxies is very small (only 4), too small to draw any definite conclusion on the slope of the $Y$ -- C/H relation. The solid line shows the expected relation when He and C are only produced by high-mass stars, and the dashed line that expected when both high and intermediate-mass stars contribute. The theoretical IMF-averaged yields are taken from WW95 for high-mass stars and from RV81 for intermediate-mass stars. Despite the very small number of data points, it can be seen that the theoretical predictions agree well with the observational data. 
We give in Table 7 the theoretical slopes d$Y$/d(C/H) in both situations (HMS and IMS+HMS), for two sets of yields for intermediate-mass stars taken from RV81 and HG97. While the slopes in the case of HG97 yields are nearly the same in both situations, the slope with RV81 yields is shallower when both high and intermediate-mass stars produce He and C than when only high-mass stars are responsible for the nucleosynthesis. \section{CHEMICAL CONSTRAINTS ON THE AGE OF BCGs} Because O, Ne, Si, S and Ar are made in the same high-mass stars, their abundance ratios with respect to O are constant and not sensitive to the age of the galaxy. By contrast, C, N and Fe can be produced by both high and lower-mass stars, and their abundance ratios with respect to O can give important information on the evolutionary status of BCGs. The constancy of [O/Fe] for the BCGs in our sample and its high value compared to the Sun (Figure 2c) suggests that all iron was produced by massive stars, i.e. in SNe II only. Since the time delay between iron production from SNe II and SNe Ia is about 1 -- 2 Gyr, it is likely that BCGs with oxygen abundance less than 12 + log O/H $\sim$ 8.2 are younger than 1 -- 2 Gyr, assuming that abundances measured in the supergiant H II regions are representative of the whole galaxy. We cannot put more stringent constraints on the star formation history of BCGs with the [O/Fe] abundance ratio because intermediate-mass stars do not produce oxygen and iron. We next use the behavior of the C/O and N/O ratios as a function of oxygen abundance to constrain the age of BCGs. As discussed previously, this behavior is very different depending on whether the BCG has 12 + log O/H smaller or greater than 7.6. The C/O and N/O abundance ratios in BCGs with 12 + log O/H $\leq$ 7.6 are independent of the oxygen abundance and show a very small scatter about the mean value.
We have argued that this small scatter rules out any time-delay model in which O is produced first by massive stars and C and N are produced later by intermediate-mass stars, and supports a common origin of C, N and O in the same first-generation massive stars. Thus, it is very likely that the presently observed episode of star formation in BCGs with 12 + log O/H $\leq$ 7.6 is the first one in the history of the galaxy and the age of the oldest stars in it does not exceed $\sim$ 40 Myr, the lifetime of a 9 $M_\odot$ star. The conclusion that BCGs with $Z$ $\leq$ $Z_\odot$/20 are young is supported by the analysis of {\sl HST} WFPC2 images of some of these galaxies. Hunter \& Thronson (1995) have concluded that the blue colors of the underlying diffuse extended emission in I Zw 18 ($Z_\odot$/50) are consistent with those from a sea of unresolved B and early A stars, with no evidence for stars older than $\sim$ 10$^7$ yr. Izotov et al. (1997a) and Thuan, Izotov \& Lipovetsky (1997) have also found in SBS 0335--052 ($Z_\odot$/41), after removal of the gaseous emission, an extremely blue extended underlying stellar component with age less than 100 Myr. In the same manner, SBS 1415+437 ($Z_\odot$/21, Thuan, Izotov \& Foltz 1998), T1214--277 and Tol 65 (respectively $Z_\odot$/21 and $Z_\odot$/22, Izotov, Thuan \& Papaderos 1998) show, after subtraction of the gaseous emission, a very blue extended emission consistent with an underlying stellar population not older than 100 Myr. The situation changes for BCGs with $Z$ $>$ $Z_\odot$/20. The scatter of the C/O and N/O ratios increases significantly at a given O abundance, which we interpret as due to the additional production of primary N by intermediate-mass stars, on top of the primary N production by high-mass stars.
Thus, since it takes at least 500 Myr (the lifetime of a 2 -- 3 $M_\odot$ star) for C and N to be produced by intermediate-mass stars, BCGs with 12 + log O/H $>$ 7.6 must have had several episodes of star formation before the present one, and they must be at least $\sim$ 100 Myr old. This conclusion is in agreement with photometric studies of these higher metallicity BCGs which, unlike their very low-metallicity counterparts, have an old, red instead of a young, blue underlying stellar component (Loose \& Thuan 1985; Papaderos et al. 1996; Telles \& Terlevich 1997). In summary, the study of heavy element abundances in BCGs leads us to the following timeline for galaxy evolution: a) all objects with 12 + log O/H $\leq$ 7.6 began to form stars less than 40 Myr ago; b) after 40 Myr, all galaxies have evolved so that 12 + log O/H $>$ 7.6; c) by the time intermediate-mass stars have evolved and released their nucleosynthetic products (100--500 Myr), all galaxies have become enriched to 7.6 $<$ 12 + log O/H $<$ 8.2. The delayed release of primary N at these metallicities greatly increases the scatter in the N/O abundance ratio; d) later, when galaxies get enriched to 12 + log O/H $>$ 8.2, secondary N production becomes important. \section{COMPARISON WITH DAMPED LYMAN-$\alpha$ SYSTEMS} Damped Ly$\alpha$ systems are believed to be young disk galaxies in their early stages of evolution. They are extremely metal-deficient, their heavy element abundances ranging between $Z_\odot$/300 and $Z_\odot$/10 (Pettini et al. 1994; Wolfe et al. 1994; Lu et al. 1996). The large light-gathering power of the 10 m Keck telescope has made it possible to study their elemental abundance ratios (Lu et al. 1996), thus revealing many similarities between these ratios and those found in BCGs. A direct comparison, however, is not always possible.
The Ne and Ar emission lines are present in the spectra of BCGs, but the absorption lines of the same noble elements are absent in the spectra of Ly$\alpha$ galaxies. On the other hand, the abundances of several elements in the iron-peak group have been measured in damped Ly$\alpha$ galaxies, but only iron abundance measurements are available for BCGs. There exist only lower limits for the carbon and oxygen abundances in damped Ly$\alpha$ systems as the O I $\lambda$1302 and C II $\lambda$1334 absorption lines are strongly saturated. In the case of the elements for which a direct comparison can be made, Lu et al. (1996) have shown that some $\alpha$-elements such as Si and S are overabundant with respect to iron as compared to the Sun. This is true as well for BCGs (and for halo stars) since in these objects, Si and S are normal with respect to O while O is overabundant with respect to Fe, by comparison with the Sun. On the other hand, the iron-peak element abundance ratios are nearly solar as expected. The N/O ratios seem at first glance to constitute the major difference. The N/O ratios in damped Ly$\alpha$ systems appear to be significantly lower than those measured in our low-metallicity BCGs. Pettini, Lipman \& Hunstead (1995) have found in one damped Ly$\alpha$ galaxy an upper limit log N/O $\leq$ --2.12, 0.52 dex lower than the mean log N/O = --1.60 for the most metal-deficient BCGs in our sample, a value which, we have argued, is set by primary N production in massive stars and constitutes a lower limit to log N/O in BCGs. Lu, Sargent \& Barlow (1998) have measured N/Si abundance ratios in 15 damped Ly$\alpha$ galaxies, and have set upper limits for log N/O ranging from --1.2 to --2.1. Until now, nearly all log N/O measurements lower than the BCG lower limit of --1.60 in damped Ly$\alpha$ systems are upper limits. There is however one exception, the 1946+7658 system where Lu et al.
(1998) did measure an actual value, [N/Si] = --1.70, which translates to log N/O = --2.6, or 1.0 dex lower than the BCG lower limit. From the N/O and N/Si measurements, Pettini et al. (1995) concluded that both primary and secondary nitrogen production are important over the whole range of metallicity measured in damped Ly$\alpha$ systems, while Lu et al. (1998) favor the time-delayed primary nitrogen production by intermediate-mass stars. We find such a large difference between the N/O ratios measured in BCGs and those in damped Ly$\alpha$ galaxies to be very puzzling. If we believe the physics of star formation, the stellar IMF and the nucleosynthesis processes in stars at a given metallicity to be universal and independent of cosmic epoch, then there must be some systematic differences in the way in which the N/O ratios are derived in damped Ly$\alpha$ systems as compared to those in BCGs. The derivation of N/O ratios in BCGs is straightforward enough. The photoionized H II region models are simple and well defined, the nebular emission lines are strong and their intensities can be measured with precision. There is no hidden assumption in the path from line intensities to N and O abundances. On the other hand, there is a basic assumption in the derivation of N/O in damped Ly$\alpha$ systems which we would like to discuss. It is generally assumed that absorption lines in damped Ly$\alpha$ systems originate in the H I clouds in the disk of the galaxy. In this case, only low ionization species are expected and correction factors for unseen stages of ionization are close to unity (Viegas 1995). We suggest here that this basic assumption may not be valid. We may expect that in high-redshift gas-rich disks there is ongoing star formation giving birth to massive stars which ionize the H I gas and create H II regions and diffuse ionized gas.
There is thus a non-zero probability for the line of sight to cross diffuse ionized gas, which is ubiquitous and has a large covering factor in gas-rich galaxies and/or H II regions, so that absorption lines would originate not in neutral but in ionized gas. Supporting this hypothesis is the fact that many of the spectra of Lu et al. (1996) do show absorption lines of high ionization species such as Al III, C IV and Si IV, which are usually present in ionized gas regions. Lu et al. (1996) have dismissed the ionized gas hypothesis by the following two arguments. First, the absorption profile of the high-ionization species Al III is similar to those of lower ionization species, and hence both types of species must be produced in the same physical region. The latter must be mostly neutral because of the large observed H I column densities. In that case, Al III comes from an ionized shell surrounding the H I gas. Second, the Al II lines are always much stronger than the Al III lines, so that $N$(Al II) $\gg$ $N$(Al III), implying that most of the gas where the absorption arises is neutral. The last argument is not airtight. Supposing that all absorption lines originate not in H I but in H II gas, we have constructed with the radiative-collisional equilibrium code CLOUDY (Ferland 1993) a series of spherically symmetric H II region models with the ionization parameter log U in the range --0.4 to 0 which is typical for H II regions, and ionizing stars with effective temperatures between 40 000 and 50 000 K and a metallicity of one-tenth solar. For these models, typical radially-averaged column densities are $N$(H II) = 10$^{19}$--10$^{21}$ cm$^{-2}$ and $N$(Al II) = 10$^{12}$--10$^{14}$ cm$^{-2}$. The models invariably give $N$(Al II) $>$ $N$(Al III) even when the hydrogen gas is totally ionized. The profile similarities noted by Lu et al.
(1996) between Al III and Al II can indeed be explained by the formation of absorption lines in the same physical region, but the latter can be H II rather than H I gas. If the gas is ionized, the correction factors for unseen stages of ionization are dependent on the parameters of the particular H II region model and will be higher than for those in a H I cloud. The determination of abundances is more uncertain because of the lack of information on the column density of ionized hydrogen. However, the abundance ratios for some elements will not change greatly when the absorption lines are assumed to originate in the H II instead of the H I gas. This is because the abundances of these elements are derived from column densities of singly ionized ions, e.g. C$^+$, Si$^+$, S$^+$, Fe$^+$. Photoionized H II region models (Stasi\'nska 1990) predict for these ions similar correction factors, so that their ratios are close to unity and the element abundance ratios are roughly equal to the singly ionized ion abundance ratios. However, the situation changes dramatically when we compare abundances of elements derived from column densities of ions in different stages of ionization. In those cases, the ratio of the ionization correction factors for different elements can be very far from unity, and the abundance ratios very different from the ion abundance ratios. Such is the case for the N/Si abundance ratio. While the silicon column density is derived from the Si$^+$ absorption line equivalent widths, the nitrogen column density is derived from the N$^0$ absorption lines. In the H I cloud model \begin{equation} \frac{\rm N}{\rm Si} = \frac{\rm N^0}{\rm Si^+}. \label{eq:HI} \end{equation} The situation is however totally different if the H II region model is adopted. In Figure 5 we show the correction factors ICF(N/Si) as a function of the fraction of O$^+$ ions $x$(O$^+$) = O$^+$/O, for the set of H II region models of Stasi\'nska (1990). 
The correction factors ICF(N/Si) are weakly dependent on $x$(O$^+$) and have a lower envelope at a value $\sim$ 10. Hence, for the H II region model \begin{equation} \frac{\rm N}{\rm Si} = {\rm ICF}\left(\frac{\rm N}{\rm Si}\right) \frac{\rm N^0}{\rm Si^+} \geq 10\frac{\rm N^0}{\rm Si^+}. \label{eq:HII} \end{equation} This factor of $\sim$ 10 is just about the one needed to bring the N/O ratio measured in the 1946+7658 damped Ly$\alpha$ system to the mean value of N/O obtained for low-metallicity BCGs. Instead of the N/Si ratio, Pettini et al. (1994) have measured directly the N/O ratio in the damped Ly$\alpha$ systems. This method is in principle subject to fewer uncertainties because the abundances of both elements in the H II region are derived from column densities of neutral species. We have run the CLOUDY code for several spherically symmetric H II region models, varying the ionization parameter and the temperature of the ionizing radiation. These calculations show that the fractions of neutral nitrogen and neutral oxygen in the H II region along the line of sight are nearly the same, i.e. \begin{equation} \frac{\rm N}{\rm O} \approx \frac{\rm N^0}{\rm O^0}. \label{eq:NO} \end{equation} However, the O abundance is not known with good precision. The O I $\lambda$1302 absorption line is saturated and the O abundances derived by Pettini et al. (1995) by fitting the saturated O I profiles have such large errors that they span 3 orders of magnitude. Furthermore, the very method of using saturated absorption lines to derive column densities and abundances is questionable (Pettini et al. 1995; Pettini \& Lipman 1995; Lu et al. 1998). We stress therefore that there is a great uncertainty in the derived N/O abundance ratios in damped Ly$\alpha$ absorption systems. This uncertainty comes in large part from our ignorance of the nature of the absorbing medium, whether it is neutral or ionized gas. 
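The size of the effect is simple to quantify. The following back-of-the-envelope sketch (our own illustrative arithmetic, using only values quoted above, not a new measurement) shows that the lower-envelope correction factor of equation (\ref{eq:HII}) shifts the Lu et al. (1998) value by a full dex:

```python
import math

# Values quoted in the text; the calculation itself is only illustrative.
log_NO_HI = -2.6      # log N/O for 1946+7658 under the H I cloud assumption
icf_NSi = 10          # lower envelope of ICF(N/Si) for the H II region models
log_NO_HII = log_NO_HI + math.log10(icf_NSi)
print(round(log_NO_HII, 2))  # -1.6, close to the BCG mean log N/O = -1.58
```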
If the absorbing system is an H II region instead of a H I cloud, the derived N/O abundance ratio may increase by a factor of 10 or more if the Si$^{+}$ lines are used instead of neutral oxygen lines, which would bring the N/O ratios derived in damped Ly$\alpha$ systems in closer agreement with those found in low-metallicity BCGs. Another large source of uncertainty comes from the derivation of abundances from saturated O I $\lambda$1302 absorption lines. If in the future, there is evidence that the absorption lines do come from a neutral medium, and better O determinations in damped Ly$\alpha$ systems still give a N/O ratio a whole order of magnitude lower than the value found in low-metallicity BCGs, then we would have to conclude that star formation and metal dispersal histories in damped Ly$\alpha$ galaxies are very different from those in BCGs. \section {SUMMARY AND CONCLUSIONS} The present study is a continuation and extension of the one by TIL95 on heavy element abundances in very metal-deficient environments, with considerably more data. We derive here the abundances of N, O, Ne, S, Ar and Fe in a sample of 54 supergiant H II regions in 50 low-metallicity blue compact galaxies with oxygen abundance in the range 7.1 $<$ 12 + log O/H $<$ 8.3. The objects in this sample all possess very high signal-to-noise ratio spectra which have been obtained for the determination of the primordial helium abundance. This allows us to measure line intensities with a high precision, and properly correct them for interstellar extinction and underlying stellar hydrogen Balmer absorption. In addition, we redetermine the carbon and silicon abundances in some BCGs with the use of {\sl HST} UV and optical archival spectra, supplemented in a few cases by ground-based optical spectroscopic observations to derive accurate electron temperatures. We have obtained the following results: 1. 
As in TIL95, the $\alpha$-elements-to-oxygen Ne/O, Si/O, S/O and Ar/O abundance ratios show no dependence on oxygen abundance over the whole range of metallicities studied here. Furthermore, these ratios are about the same as those found in halo stars and high-redshift damped Ly$\alpha$ galaxies, and they have approximately the solar values. This result is to be expected from stellar nucleosynthesis theory as oxygen and all $\alpha$-elements are produced by the same massive stars. 2. We rederive the carbon-to-oxygen abundance ratio in both NW and SE components of I Zw 18, the most metal-deficient galaxy known. Our values of log C/O = --0.77$\pm$0.10 for the NW component and --0.74$\pm$0.09 for the SE component are in excellent agreement with those predicted by the theory of massive star nucleosynthesis, but are significantly lower (by $\sim$ 0.2 dex) than those derived by Garnett et al. (1997). The main source of the differences comes from the adopted electron temperatures. We use higher electron temperatures (by 1900 K and 2300 K respectively for the NW and SE components), as derived from recent MMT spectral observations (Izotov et al. 1997b) in apertures which match more closely those of the {\sl HST} FOS observations used to obtain C abundances. With these lower C/O ratios, I Zw 18 does not stand apart anymore from the other low-metallicity BCGs. An earlier lower-mass carbon-producing stellar population need not be invoked, and I Zw 18 can still be a ``primeval" galaxy undergoing its first burst of star formation. 3. The C/O abundance ratio is constant, independent of the O abundance in BCGs with 12 + log O/H $\leq$ 7.6 ($Z$ $\leq$ $Z_\odot$/20), with a mean value log C/O = --0.78$\pm$0.03 and a very small dispersion about the mean. This result is based on a small sample of 4 data points in 3 galaxies and needs to be strengthened by more C/O measurements in extremely metal-deficient galaxies. 
The constancy of the C/O ratio suggests that carbon in these galaxies is produced in the same stars that make O, i.e. only in massive stars ($M$ $>$ 9 $M_\odot$). By contrast, the C/O ratio in BCGs with higher oxygen abundance (12 + log O/H $>$ 7.6) is significantly higher, with a mean value log C/O = --0.52$\pm$0.08 and a larger dispersion about the mean. This enhanced C/O ratio and the larger scatter at a given O/H are likely the result of additional carbon production by intermediate-mass (3 $M_\odot$ $\leq$ $M$ $\leq$ 9 $M_\odot$) stars, on top of the carbon production by high-mass stars. 4. TIL95 showed that the N/O abundance ratio has a very small scatter in BCGs with 12 + log O/H $\leq$ 7.6, with a mean value log N/O = --1.58$\pm$0.02. We have added here a very low-metallicity object, SBS 0335--052 with $Z_\odot$/41, and the results do not change. As discussed by TIL95, such a small dispersion of the N/O ratio is not consistent with a time-delayed production of primary nitrogen by longer-lived intermediate-mass stars in these extremely metal-deficient galaxies. The small dispersion at very low metallicities can only be explained by primary nitrogen production in short-lived massive stars. As in the case of C, galaxies with 12 + log O/H $>$ 7.6 show a considerably larger scatter of N/O ratios, with a mean value log N/O = --1.46$\pm$0.14. Again, the larger scatter can be explained by the addition of primary nitrogen production in intermediate-mass stars, on top of that in massive stars. None of our galaxies has log N/O below --1.67, implying that the lower envelope of the N/O distribution is set by the production of primary nitrogen in massive stars over the whole range of metallicities studied here, 7.1 $<$ 12 + log O/H $<$ 8.2. While a detailed scenario of primary nitrogen production by massive stars is yet to be developed, the mean log N/O obtained here for the most metal-deficient BCGs will put strong constraints on any future theory. 5. 
The Fe/O abundance ratio in our BCGs is $\sim$ 2.5 times lower than in the Sun, with a mean [O/Fe] = 0.40$\pm$0.14. Again, it does not show any dependence on oxygen abundance, in agreement with TIL95. This implies that iron in BCGs was synthesised during the explosive nucleosynthesis of SNe II. Only one BCG bucks the trend: SBS 0335--052 has a Fe/O abundance ratio higher than the solar ratio. We argue that the high abundance of Fe in this BCG may not be real, and may be caused by the contamination of the nebular [Fe III] $\lambda$4658 emission line by the narrow stellar C IV $\lambda$4658 produced in expanding envelopes of hot massive stars. 6. Comparison of theoretical heavy element yields for low-metallicity massive stars (WW95) with those inferred from observations of BCGs shows that the theory of massive star nucleosynthesis is generally in good shape. The major problems are with iron and nitrogen yields. The predicted iron-to-oxygen yield ratio is a factor of $\sim$ 2 larger than the observed ratio. Models of primary nitrogen production by intermediate-mass stars cannot reproduce the N/O ratios observed in very low-metallicity BCGs, and conventional models of low-metallicity massive stars do not allow for primary nitrogen production. Until further developments of the theory of massive star nucleosynthesis resolve these disagreements, it is best to use the observed yields given in Table 6 for studies of the early chemical evolution of galaxies. 7. Helium follows the same pattern as carbon and nitrogen. The helium abundance is constant within the observational uncertainties for BCGs with 12 + log O/H $\leq$ 7.6. Again we interpret this constancy by He being produced in high-mass stars only. At higher oxygen abundances, the mean and dispersion increase, and we again interpret this increase as the addition of helium production by intermediate-mass stars to that by massive stars. 
Although the sources of helium production appear to be different in low and high-metallicity BCGs, implying slightly different dependences of the helium mass fraction $Y$ on oxygen abundance above and below 12 + log O/H = 7.6, we find that the commonly used $Y$ versus O/H linear regression over a large range of oxygen abundances to determine the primordial helium abundance is a good approximation. 8. The constancy of the C/O, N/O and [O/Fe] ratios in BCGs with 12 + log O/H $\leq$ 7.6 argues strongly for the production of these elements in the most metal-deficient galaxies by massive stars only. Intermediate-mass stars have not yet returned their nucleosynthesis products to the interstellar medium in these most metal-poor galaxies because they have not had enough time to evolve. This allows us to date these galaxies. Galaxies with 12 + log O/H $\leq$ 7.6 are young, in the sense that they have experienced their first episode of star formation not more than $\sim$ 40 Myr ago, the lifetime of a 9 $M_\odot$ star. This derived young age is in agreement with the results obtained by recent photometric studies with {\sl HST} images of some of the most metal-deficient galaxies discussed here. 9. The study of heavy element abundances in BCGs leads us to the following timeline for galaxy evolution: a) all objects with 12 + log O/H $\leq$ 7.6 began to form stars less than 40 Myr ago; b) after 40 Myr, all galaxies have evolved so that 12 + log O/H $>$ 7.6; c) by the time intermediate-mass stars have evolved and released their nucleosynthetic products (100--500 Myr), all galaxies have become enriched to 7.6 $<$ 12 + log O/H $<$ 8.2. The delayed release of primary N at these metallicities greatly increases the scatter in the N/O abundance ratio; d) later, when galaxies get enriched to 12 + log O/H $>$ 8.2, secondary N production becomes important. 10. 
The N/O abundance ratios derived for BCGs are apparently larger by a factor of up to $\sim$ 10 than those obtained for high-redshift damped Ly$\alpha$ galaxies (Lu et al. 1996). We suggest here that this discrepancy may not be real. Contrary to the situation for BCGs where the N/O ratios are very well determined, the N/O ratios derived for damped Ly$\alpha$ systems are highly uncertain because of the unknown physical conditions in the interstellar medium of high-redshift systems. It is generally assumed that the absorption lines in these systems originate in the H I gas. In that case, the ionization correction factors for the low-ionization species used for abundance determination are close to unity. However, if we assume that the absorption lines originate not in neutral but ionized gas, the correction factors can be $\geq$ 10, and the derived N/O abundance ratios in the damped Ly$\alpha$ systems can be as high as those derived in BCGs. \acknowledgements Y.I.I. thanks the warm hospitality of the Astronomy Department of the University of Virginia. This international collaboration was made possible thanks to the partial financial support of NSF grant AST-9616863. \clearpage
\section{Introduction} The problem of finding the graphs with maximal and minimal energy has been extensively studied for several matrices. For the adjacency matrix, Gutman \cite{Gutman2} proved the following. \begin{theorem} Let $T$ be a tree on $n$ vertices. Then $$E(S_{n})\leq E(T)\leq E(P_{n}).$$ \end{theorem} Here $P_{n}$ and $S_{n}$ stand for the $n$-vertex path and the $n$-vertex star. Radenkovi\'{c} and Gutman \cite{Gutman3} conjectured the following about the Laplacian energy. \begin{conjecture} Let $T$ be a tree on $n$ vertices. Then $$LE(P_{n})\leq LE(T)\leq LE(S_{n}).$$ \end{conjecture} Fritscher et al. \cite{Eliseu1} proved that among the trees the star has maximal Laplacian energy. The problem of minimal Laplacian energy is still open. In the paper of Gutman, Furtula and Bozkurt \cite{Gutman1} on the energy of the Randi\'{c} matrix, graphs called sun and double sun were defined. The authors presented the following conjecture about the connected graphs with maximal RE. \begin{conjecture} \label{conjecturerandic} Let $G$ be a connected graph on $n$ vertices. Then \begin{equation*} RE(G) \leq \begin{cases} RE(Sun)&\text{if $n$ is odd,}\\ RE(Balanced\mbox{ }double\mbox{ }sun)&\text{if $n$ is even.} \end{cases} \end{equation*} \end{conjecture} In this article we present bounds for the Randi\'{c} energy, and exhibit some families of graphs that satisfy the conjecture above. Let $G=(V,E)$ be an undirected graph with vertex set $V$ and edge set $E$. The Randi\'{c} matrix $R = [r_{ij}]$ of a graph $G$ is defined \cite{Bozkurt2010a,Das2016,Gutman1} as $$ r_{ij} = \displaystyle{ \left\{ \begin{array}{cl} \frac{1}{\sqrt{d_{i}d_{j}}} & \mbox{ if } ij\in E \\ 0 & \mbox{ otherwise } \end{array} \right.} $$ Denote the eigenvalues of $R$ by $\lambda_{1}, \ldots, \lambda_{n}$. The multiset $\sigma_{R}=\{\lambda_{1}, \ldots, \lambda_{n}\}$ will be called the $R$-spectrum of the graph $G$. 
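To make the definition concrete, the following small sketch (ours, not from the paper; it assumes NumPy and a graph with no isolated vertices) assembles $R$ from a 0/1 adjacency matrix and checks it on the path $P_{3}$:

```python
import numpy as np

def randic_matrix(A):
    """Randic matrix of a graph with no isolated vertices:
    r_ij = 1/sqrt(d_i d_j) if ij is an edge, 0 otherwise."""
    A = np.asarray(A, dtype=float)
    d = A.sum(axis=1)
    return A / np.sqrt(np.outer(d, d))

# Sanity check on the path P_3 (degrees 1, 2, 1): the nonzero entries
# are 1/sqrt(2), and the R-spectrum is {1, 0, -1}.
A = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
lam = np.linalg.eigvalsh(randic_matrix(A))
print(np.round(lam, 6))  # approximately -1, 0, 1
```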
The Randi\'{c} energy $RE(G)$ of a graph $G$ is $$\sum_{i=1}^{n} |\lambda_{i}|.$$ Historically, the $RE$ is related to a descriptor for molecular graphs used by Milan Randi\'{c} in 1975 \cite{Randic1975}. The normalized Laplacian matrix, defined by Chung \cite{Chung1997}, can be written using the Randi\'{c} matrix as $$\mathcal{L}=I_{n}-R.$$ The eigenvalues of $\mathcal{L}$ are then given by $$\mu_{i}=1-\lambda_{i}$$ for $i=1,\ldots,n$. For graphs without isolated vertices Cavers \cite{Cavers} defined the normalized Laplacian energy as $$E_{\mathcal{L}}(G)=\sum_{i=1}^{n}|\mu_{i}-1|.$$ An interesting fact about $E_{\mathcal{L}}(G)$, see \cite{Gutman1}, is that if $G$ does not have isolated vertices then $$RE(G) = E_{\mathcal{L}}(G).$$ Thus, results in this paper on Randi\'{c} energy apply also to normalized Laplacian energy. This paper is organized as follows. In Section \ref{Sun}, we present closed formulas for the Randi\'{c} energy of the sun and the double sun. In Section \ref{Spectrum}, we use some known eigenvalues to provide upper bounds for $RE$ in terms of the number of vertices and the nullity of the graph. In Section \ref{bounds}, we use bounds for the Randi\'{c} index $R_{-1}(G)$ to improve bounds for $RE$. In Section \ref{TB}, we show that some families of graphs, for example starlikes of odd order, satisfy the conjecture proposed in \cite{Gutman1}. \section{Randi\'{c} energy of sun and double sun} \label{Sun} In the work of Gutman, Furtula and Bozkurt \cite{Gutman1} on the energy of the Randi\'{c} matrix, two families of trees were defined, sun and double sun. For each $p \geq 0$, the $p$-{\em sun}, which we denote by $S^p$, is the tree of order $n = 2p + 1$ formed by taking the star on $p+1$ vertices and subdividing each edge. 
\begin{figure}[h] \centering \begin{tikzpicture} \path( 0,0)node[shape=circle,draw,fill=black] (primeiro) {} ( 1,0.5)node[shape=circle,draw,fill=black] (segundo) {} ( 1,1)node[shape=circle,draw,fill=black] (terceiro) {} (2,0.5)node[shape=circle,draw,fill=black] (quarto){} (2,1)node[shape=circle,draw,fill=black] (quinto) {} ( 1,-0.5)node[shape=circle,draw,fill=black] (sexto){} ( 2,-0.5)node[shape=circle,draw,fill=black] (setimo){}; \draw[-](primeiro)--(segundo); \draw[-](primeiro)--(terceiro); \draw[-](primeiro)--(sexto); \draw[-](terceiro)--(quinto); \draw[-](segundo)--(quarto); \draw[-](setimo)--(sexto); \draw[loosely dotted,line width=0.5mm] (1.5,0.4) -- (1.5,-0.4); \end{tikzpicture} \caption{\label{fig-sun} Sun.} \end{figure} For $p, q \geq 0$ the $(p,q)$-{\em double sun}, denoted $D^{p,q}$, is the tree of order $n = 2(p + q + 1)$ obtained by connecting the centers of $S^p$ and $S^q$ with an edge. Without loss of generality we assume $p \geq q$. When $p-q \leq 1$ the double sun is called {\em balanced}. 
\begin{figure}[h] \centering \begin{tikzpicture} \path( 0,0)node[shape=circle,draw,fill=black] (primeiro) {} ( 1,0.5)node[shape=circle,draw,fill=black] (segundo) {} ( 1,1)node[shape=circle,draw,fill=black] (terceiro) {} (2,0.5)node[shape=circle,draw,fill=black] (quarto){} (2,1)node[shape=circle,draw,fill=black] (quinto) {} ( 1,-0.5)node[shape=circle,draw,fill=black] (sexto){} ( 2,-0.5)node[shape=circle,draw,fill=black] (setimo){} ( -1,0)node[shape=circle,draw,fill=black] (oitavo) {} ( -2,0.5)node[shape=circle,draw,fill=black] (nono) {} ( -2,1)node[shape=circle,draw,fill=black] (decimo) {} (-3,1)node[shape=circle,draw,fill=black] (decprimeiro){} (-3,0.5)node[shape=circle,draw,fill=black] (decseg) {} ( -3,-0.5)node[shape=circle,draw,fill=black] (decterc){} ( -2,-0.5)node[shape=circle,draw,fill=black] (decquar){}; \draw[-](primeiro)--(oitavo); \draw[-](primeiro)--(segundo); \draw[-](primeiro)--(terceiro); \draw[-](primeiro)--(sexto); \draw[-](terceiro)--(quinto); \draw[-](segundo)--(quarto); \draw[-](setimo)--(sexto); \draw[-](oitavo)--(nono); \draw[-](oitavo)--(decimo); \draw[-](oitavo)--(decquar); \draw[-](decimo)--(decprimeiro); \draw[-](nono)--(decseg); \draw[-](decterc)--(decquar); \draw[loosely dotted,line width=0.5mm] (1.5,0.4) -- (1.5,-0.4); \draw[loosely dotted,line width=0.5mm] (-2.5,0.4) -- (-2.5,-0.4); \end{tikzpicture} \caption{\label{fig-doublesun} Double Sun.} \end{figure} In \cite{Gutman1} it was conjectured that the connected graph with maximal Randi\'{c} energy is a tree. More specifically, if $n\geq 1$ is odd, the sun is conjectured to have the greatest Randi\'{c} energy among graphs with $n$ vertices. If $n\geq 2$ is even, then the balanced double sun is conjectured to have the greatest Randi\'{c} energy among graphs with $n$ vertices. Using the algorithm developed in \cite{Braga1} for locating normalized Laplacian eigenvalues in trees, we can compute the characteristic polynomials of the sun and the balanced double sun. 
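Independently of the eigenvalue-location algorithm, the resulting closed formulas can be sanity-checked by brute force. The sketch below (our own, assuming NumPy; not code from \cite{Braga1}) builds the $p$-sun directly and compares its Randi\'{c} energy with the closed-form value $(n-3)\frac{\sqrt{2}}{2}+2$ derived next:

```python
import numpy as np

def sun_adjacency(p):
    """p-sun: star on p+1 vertices with every edge subdivided (n = 2p+1)."""
    n = 2 * p + 1
    A = np.zeros((n, n))
    for i in range(p):
        mid, leaf = 1 + 2 * i, 2 + 2 * i
        A[0, mid] = A[mid, 0] = 1        # center -- subdivision vertex
        A[mid, leaf] = A[leaf, mid] = 1  # subdivision vertex -- leaf
    return A

p = 3                                    # n = 7
A = sun_adjacency(p)
d = A.sum(axis=1)
R = A / np.sqrt(np.outer(d, d))          # Randic matrix
RE = np.abs(np.linalg.eigvalsh(R)).sum()
closed = (2 * p + 1 - 3) * np.sqrt(2) / 2 + 2   # (n-3)*sqrt(2)/2 + 2
print(round(RE, 6), round(closed, 6))    # the two values agree
```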
The characteristic polynomial of the sun with $p\geq 1$ is: $$\mbox{det}(\lambda I-\mathcal{L})=(\lambda-(\frac{2+\sqrt{2}}{2}))^{p-1}(\lambda-(\frac{2-\sqrt{2}}{2}))^{p-1}(\lambda)(\lambda-2)(\lambda-1).$$ It follows that $$E_{\mathcal{L}}(S^{p})=\sum_{i=1}^{n}|\mu_{i}-1|=2(p-1)\frac{\sqrt{2}}{2}+2=(n-3)\frac{\sqrt{2}}{2}+2.$$ Suppose that $p\geq q$ and $p+q\geq 2$. Then, the characteristic polynomial of $D^{p,q}$ is: $$\mbox{det}(\lambda I-\mathcal{L})=\lambda(\lambda-2)(\lambda-(\frac{2+\sqrt{2}}{2}))^{p+q-2}(\lambda-(\frac{2-\sqrt{2}}{2}))^{p+q-2}q(\lambda)$$ with $$q(\lambda)=\lambda^{4}-4\lambda^{3}+\frac{1}{4}\frac{(22p+20qp+22q+20)}{(q+1)(p+1)}\lambda^{2}+\frac{1}{4}\frac{(-12p-8qp-12q-8)}{(q+1)(p+1)}\lambda+\frac{1}{4}\frac{(1+2p+2q)}{(q+1)(p+1)}.$$ It is known that the graph $G$ is bipartite if and only if for each normalized Laplacian eigenvalue $\lambda$, the value $2-\lambda$ is also an eigenvalue of $G$. Using this fact, we can write $q(\lambda)$ as $$q(\lambda)=(\lambda-\lambda_{a})(\lambda-\lambda_{b})(\lambda-(2-\lambda_{a}))(\lambda-(2-\lambda_{b}))$$ with $\lambda_{a}\leq\lambda_{b}$. Now we can compute the energy of the balanced double sun in both cases as follows. If $p=q$ then $$E_{\mathcal{L}}(D^{p,p})=\frac{\sqrt{2}(n^{2}-4n-12)+4\sqrt{n^{2}+4n+20}}{2(n+2)}+2.$$ If $q=p-1$ then \begin{align*} E_{\mathcal{L}}(D^{p,p-1})=\frac{\sqrt{2}}{2n(n+4)}\Big(n^{3}-2n^{2}-24n+2\sqrt{n(n+4)(n^{2}+8+\sqrt{n^{4}-64n+64})}+ \\ 2\sqrt{n(n+4)(n^{2}+8-\sqrt{n^{4}-64n+64})}\Big)+2. \end{align*} Now we can rewrite Conjecture \ref{conjecturerandic} for the Randi\'{c} energy using these closed formulas. \begin{conjecture} Let $G$ be a connected graph on $n$ vertices. 
Then for $k\geq 3$ odd we have that \begin{equation*} RE(G) \leq \begin{cases} RE(S^{p})= E_{\mathcal{L}}(S^{p})&\text{if $n=k$,}\\ RE(D^{p,p})= E_{\mathcal{L}}(D^{p,p})&\text{if $n=2k$,}\\ RE(D^{p,p-1})=E_{\mathcal{L}}(D^{p,p-1}) &\text{if $n=2k+2$.} \end{cases} \end{equation*} \end{conjecture} \section{Upper bounds for $RE$} \label{Spectrum} In this section, we present upper bounds for $RE$ in terms of the number of vertices and the nullity of $G$. The main tool we use to study the Randi\'{c} energy of graphs is the trace of $R^2$, taking advantage of the eigenvalues we know for $G$. The next Theorem is the generalized mean inequality, which will be used in the results that follow. \begin{theorem} \label{generalizedmean} If $x_1,\ldots,x_n$ are nonnegative real numbers, $p$ and $q$ are positive integers, and $p<q$, then \[ \left(\frac{1}{n}\sum_{i=1}^n x_i^p\right)^{1/p}\leq \left(\frac{1}{n}\sum_{i=1}^n x_i^q\right)^{1/q} \] \end{theorem} The next Lemma can also be found in \cite{Cavers}, but for completeness we present a proof here. \begin{lemma} \label{oldtrbound} Let $G$ be a graph of order $n$ with no isolated vertices. Then $$RE(G)\leq \sqrt{n\tr{R^2}}.$$ \end{lemma} \begin{Prf} Applying Theorem \ref{generalizedmean} with $p=1$, $q=2$, $x_i=|\lambda_i|$ yields \begin{align*} \frac{1}{n}\sum_{i=1}^n |\lambda_i|&\leq \left(\frac{1}{n}\sum_{i=1}^n |\lambda_i|^2\right)^{1/2}\\ \sum_{i=1}^n |\lambda_i|&\leq n\frac{1}{\sqrt{n}}\left(\sum_{i=1}^n |\lambda_i|^2\right)^{1/2}\\ \sum_{i=1}^n |\lambda_i|&\leq \sqrt{n}\left(\sum_{i=1}^n |\lambda_i|^2\right)^{1/2}. \end{align*} The last inequality is exactly $RE(G)\leq \sqrt{n\tr{R^2}}$. \end{Prf} $\Box$ Notice that Lemma \ref{oldtrbound} can be improved when some of the eigenvalues of $R$ are known. Consider $\Psi$ a sub-multiset of $\sigma_R$ and denote the multiset difference by $\sigma_R\setminus\Psi$. \begin{theorem}\label{generaltrbound} Let $G$ be a graph, and let $\Psi$ be a sub-multiset of $\sigma_R$. 
Then \[ RE(G)\leq \sqrt{(n-|\Psi|)\left(\tr{R^2}-\sum_{\lambda\in\Psi}\lambda^2\right)}+\sum_{\lambda\in\Psi}|\lambda|. \] \end{theorem} \begin{Prf} Notice that \[ RE(G)=\sum_{i=1}^n |\lambda_i|=\sum_{\lambda\in\sigma_R\setminus\Psi}|\lambda|+\sum_{\lambda\in\Psi}|\lambda|. \] Applying Theorem \ref{generalizedmean} to the elements of $\sigma_R\setminus\Psi$ yields \begin{align*} \frac{1}{|\sigma_R\setminus\Psi|}\sum_{\lambda\in\sigma_R\setminus\Psi}|\lambda|\leq & \left( \frac{1}{|\sigma_R\setminus\Psi|}\sum_{\lambda\in\sigma_R\setminus\Psi}(|\lambda|)^2\right)^{1/2}\\ \sum_{\lambda\in\sigma_R\setminus\Psi}|\lambda|\leq & \left( |\sigma_R\setminus\Psi|\sum_{\lambda\in\sigma_R\setminus\Psi}(\lambda)^2\right)^{1/2}.\\ \end{align*} But $|\sigma_R\setminus\Psi|=n-|\Psi|$, and $\sum_{\lambda\in\sigma_R\setminus\Psi}(\lambda)^2=\tr{R^2}-\sum_{\lambda\in\Psi}\lambda^2$. Thus \[ \sum_{\lambda\in\sigma_R\setminus\Psi}|\lambda|\leq \left( (n-|\Psi|)\left(\tr{R^2}-\sum_{\lambda\in\Psi}\lambda^2\right)\right)^{1/2}. \] Hence \begin{align*} RE(G)\leq \sqrt{(n-|\Psi|)\left(\tr{R^2}-\sum_{\lambda\in\Psi}\lambda^2\right)}+\sum_{\lambda\in\Psi}|\lambda|. \end{align*} \end{Prf} $\Box$ We can apply Theorem \ref{generaltrbound} to graphs in general, using that $1$ is an eigenvalue of $R$ for every graph $G$, and using that $-1$ is an eigenvalue whenever the graph is bipartite. Furthermore, we can use the dimension of the null space, denoted by $\mnull{R}$, as that counts the multiplicity of $0$ as an eigenvalue. Notice that $\sum_{\lambda\in\Psi}\lambda^2=\sum_{\lambda\in\Psi}|\lambda|=1$ in the general case, and $\sum_{\lambda\in\Psi}\lambda^2=\sum_{\lambda\in\Psi}|\lambda|=2$ in the bipartite case. Hence, we obtain the following. \begin{corollary}\label{trspecificbound} If $G$ is a graph, then \[ RE(G)\leq \sqrt{(n-1-\mnull{R})(\tr{R^2}-1)}+1. \] Furthermore, if $G$ is bipartite, then \[ RE(G)\leq \sqrt{(n-2-\mnull{R})(\tr{R^2}-2)}+2. 
\] \end{corollary} Corollary \ref{trspecificbound} will be our main tool to bound $RE(G)$ in the following sections. \section{Upper bounds using $R_{-1}(G)$} \label{bounds} The Randi\'{c} index of $G$, denoted by $R_{-1}(G)$, satisfies the equality \[ R_{-1}(G)=\frac{1}{2}\tr{R^2}. \] Hence, any upper bound of $R_{-1}(G)$ may yield an upper bound for $RE(G)$. In the next Theorem we summarize some upper bounds for $R_{-1}(G)$. \begin{theorem} \label{boundonrandicindex} In \cite{Cavers}, Cavers et al. showed that if $G$ is a connected graph on $n\geq 3$ vertices, then \[ R_{-1}(G)\leq \frac{15(n+1)}{56}. \] In \cite{Clark}, Clark and Moon proved that if $T$ is a tree, then \[R_{-1}(T)\leq \frac{5n+8}{18}.\] In \cite{Li,Pav}, it was proved that if $T$ is a tree of order $n\geq 103$, then \[ R_{-1}(T)\leq \frac{15n-1}{56}.\] \end{theorem} Applying Corollary \ref{trspecificbound} together with Theorem \ref{boundonrandicindex} yields \begin{corollary}\label{boundsforallgraphs} Let $G$ be a graph, then \[ RE(G)\leq \sqrt{(n-1-\mnull{R})\frac{15n-13}{28}}+1. \] Let $G$ be a bipartite graph, then \[ RE(G)\leq \sqrt{(n-2-\mnull{R})\frac{15n-41}{28}}+2. \] Let $T$ be a tree, then \[ RE(T)\leq \sqrt{(n-2-\mnull{R})\frac{5n-10}{9}}+2. \] Let $T$ be a tree of order $n\geq 103$, then \[ RE(T)\leq \sqrt{(n-2-\mnull{R})\frac{15n-57}{28}}+2. \] \end{corollary} It is known that trees of odd order have nullity at least one. If we consider $\mnull{R}=0$ for trees of even order and $\mnull{R}=1$ for trees of odd order in Corollary \ref{boundsforallgraphs} we obtain the following result. \begin{corollary} \label{null} Let $T$ be a tree of even order $n\geq 2$, then \begin{equation} \label{eq1} RE(T)\leq \sqrt{(n-2)\frac{5n-10}{9}}+2. \end{equation} Let $T$ be a tree of odd order $n\geq 3$, then \begin{equation} \label{eq1nul1} RE(T)\leq \sqrt{(n-3)\frac{5n-10}{9}}+2. 
\end{equation} Let $T$ be a tree of even order $n\geq 103$, then \begin{equation} \label{eq2} RE(T)\leq \sqrt{(n-2)\frac{15n-57}{28}}+2. \end{equation} Let $T$ be a tree of odd order $n\geq 103$, then \begin{equation} \label{eq2nul1} RE(T)\leq \sqrt{(n-3)\frac{15n-57}{28}}+2. \end{equation} \end{corollary} In \cite{Das2016} the following bound for the energy of trees was found. \begin{theorem}\cite{Das2016} \label{TheoremDas} Let $T$ be a tree of order $n$, then \begin{equation} \label{eq3} RE(T)\leq 2\sqrt{\left\lfloor\frac{n}{2}\right\rfloor\frac{5n+8}{18}}. \end{equation} \end{theorem} With a simple calculation we can compare the bounds in Corollary \ref{null} and Theorem \ref{TheoremDas}. If $n$ is even we use $\left\lfloor n/2\right\rfloor=n/2$. Then $(\ref{eq1})<(\ref{eq3})$ if $n\geq 3$ and $(\ref{eq1})=(\ref{eq3})$ if $n=2$. If $n$ is odd we use $\left\lfloor n/2\right\rfloor=(n-1)/2$. Then $(\ref{eq1nul1})<(\ref{eq3})$ if $n\geq 3$. Corollary \ref{boundsforallgraphs} also suggests that if $\mnull{R}$ is sufficiently large, then $RE(G)\leq (n-3)\frac{\sqrt{2}}{2}+2$. The following theorem gives a lower bound for the nullity that yields the inequality. \begin{theorem}\label{nullbound} If $\mnull{R}\geq \dfrac{n^2+56n-141-28(n-3)\sqrt{2}}{15n-13}$, then $RE(G)\leq (n-3)\frac{\sqrt{2}}{2}+2$. \end{theorem} \begin{Prf} If $G$ is a graph, then writing Corollary \ref{trspecificbound} in terms of $R_{-1}(G)$ yields \[ RE(G)\leq \sqrt{n-1-\mnull{R}}\sqrt{2R_{-1}(G)-1}+1. \] Using Theorem \ref{boundonrandicindex}, \begin{align*} RE(G)\leq& \sqrt{n-1-\mnull{R}}\sqrt{\frac{15(n+1)}{28}-1}+1\\ =&\sqrt{n-1-\mnull{R}}\sqrt{\frac{15n-13}{28}}+1. 
\end{align*} Thus, we want \begin{align*} \sqrt{n-1-\mnull{R}}\sqrt{\frac{15n-13}{28}}+1&\leq (n-3)\frac{\sqrt{2}}{2}+2,\\ (n-1-\mnull{R})\frac{15n-13}{28}&\leq ((n-3)\frac{\sqrt{2}}{2}+1)^2,\\ (n-1-\mnull{R})\frac{15n-13}{28}&\leq (n^2-6n+9)\frac{1}{2}+(n-3)\sqrt{2}+1,\\ (n-1-\mnull{R})(15n-13)&\leq 14 n^2 - 84n + 126+28(n-3)\sqrt{2}+28,\\ (n-1-\mnull{R})&\leq \frac{14 n^2 - 84n + 154+28(n-3)\sqrt{2}}{15n-13}.\\ \end{align*} Hence, \begin{align*} -\mnull{R}&\leq \frac{14 n^2 - 84n + 154+28(n-3)\sqrt{2}}{15n-13}-n+1,\\ \mnull{R}&\geq -\frac{14 n^2 - 84n + 154+28(n-3)\sqrt{2}}{15n-13}+n-1,\\ \mnull{R}&\geq \frac{15n^2-13n-15n+13-14n^2+84n-154-28(n-3)\sqrt{2}}{15n-13},\\ \mnull{R}&\geq \frac{n^2+56n-141-28(n-3)\sqrt{2}}{15n-13}.\\ \end{align*} \end{Prf} $\Box$ In the case of trees, using $R_{-1}(G)\leq (15n-1)/56$ and the fact that $\pm 1$ are eigenvalues, Theorem \ref{nullbound} can be improved to show that $\mnull{T} \geq \dfrac{n^2-3n-12}{15n-57}$ implies $RE(T)\leq (n-3)\dfrac{\sqrt{2}}{2}+2$. A \textit{suspended path} is a path $uvw$, with $d_u=1$ and $d_v=2$, i.e. $u$ is a pendent vertex and its neighbor has degree $2$. The next result improves the bound on $R_{-1}(G)$ when $G$ has no suspended paths. \begin{theorem}\cite{Cavers} Let $G$ be a connected graph on $n\geq 3$ vertices. If $G$ has no suspended paths, then \[ R_{-1}(G)\leq \dfrac{n}{4}. \] \end{theorem} If $G$ is bipartite, then we can use that $\pm 1$ are eigenvalues of $R$, and, hence, $1$ is an eigenvalue of multiplicity $2$ of $R^2$ to obtain the following. \begin{theorem} Let $G$ be a connected bipartite graph on $n\geq 3$ vertices. If $G$ has no suspended paths, then \[ RE(G)\leq \sqrt{n-2}\sqrt{n-4}\dfrac{\sqrt{2}}{2}+2. \] \end{theorem} Notice that $\sqrt{n-2}\sqrt{n-4}=\sqrt{n^2-6n+8}< \sqrt{n^2-6n+9}=(n-3)$. Therefore, if $n=2p+1$ is odd and $G$ is a connected bipartite graph on $n\geq 3$ vertices that has no suspended paths, then $RE(G)\leq RE(S^{p})$. 
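As a quick numeric illustration of the last bound (our own example, not from the paper): the complete bipartite graph $K_{3,3}$ is connected, bipartite, and has no suspended paths, and its Randi\'{c} energy sits well below the bound for $n=6$:

```python
import numpy as np

# Complete bipartite K_{3,3}: no suspended paths, bipartite, n = 6.
n = 6
A = np.zeros((n, n))
A[:3, 3:] = 1
A += A.T
d = A.sum(axis=1)                         # all degrees equal 3
R = A / np.sqrt(np.outer(d, d))           # Randic matrix (= A/3 here)
RE = np.abs(np.linalg.eigvalsh(R)).sum()  # R-spectrum {1, 0, 0, 0, 0, -1}
bound = np.sqrt(n - 2) * np.sqrt(n - 4) * np.sqrt(2) / 2 + 2
print(round(RE, 6), round(bound, 6))      # RE = 2, bound = 4
```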
When $G$ is not bipartite, it is better to look at a result using $\mnull{R}$. \begin{theorem} Let $G$ be a connected graph on $n\geq 3$ vertices. If $G$ has no suspended paths, then \[ RE(G)\leq \sqrt{n-1-\mnull{R}}\sqrt{n-2}\dfrac{\sqrt{2}}{2}+1. \] \end{theorem} Notice in particular that if $\mnull{R}\geq 1$, then \begin{align*} \sqrt{n-1-\mnull{R}}\sqrt{n-2}\dfrac{\sqrt{2}}{2}+1\leq &\sqrt{n-2}\sqrt{n-2}\frac{\sqrt{2}}{2}+1\\ = & (n-2)\frac{\sqrt{2}}{2}+1\\ = & (n-3)\frac{\sqrt{2}}{2}+\frac{\sqrt{2}}{2}+1\\ < & (n-3)\frac{\sqrt{2}}{2}+2.\\ \end{align*} Hence, if $n=2p+1$ is odd and $G$ is a connected graph on $n\geq 3$ vertices that has no suspended paths, and $\mnull{R}\geq 1$, then $RE(G)<RE(S^{p})$. \section{TB graphs} \label{TB} In the previous section we showed how finding good bounds for $\tr{R^2}$ yields good bounds for $RE(G)$. In this section we study $\tr{R^2}$ for a particular family of bipartite graphs, and use it to show that their Randi\'{c} energy is bounded by the Randi\'{c} energy of the sun graph. The family that we consider is that of bipartite graphs with bipartition $A,B$ such that $\deg(b)\leq 2$ for every $b\in B$. We denote these graphs as TB graphs. Notice that the family of TB graphs includes many important subfamilies of graphs: \begin{itemize} \item starlike trees, which are trees with exactly one vertex of degree greater than 2; \item basic trees, see \cite{JMS}, which are trees with a unique maximum independent set of size $\lceil n/2 \rceil$ (the importance of these trees is due to their null space); \item a graph obtained by taking a graph $G$ and replacing each edge $e=\{v_1,v_2\}$ by two edges $e_1=\{v_1,w_e\}$ and $e_2=\{v_2,w_e\}$. \end{itemize} The graphs described above satisfy the condition $\deg(b)=2$ for every $b\in B$. In this section, we give a bound on $\tr{R^2}$ for any TB graph. Before doing so, we give a short explanation of how to find $\tr{R^2}$ for any bipartite graph. Let $G$ be a bipartite graph.
As $G$ is bipartite, the underlying graph of $R^2$ has two connected components. Consider a bipartite graph $G$ with $V(G)=A\cup B$. Let $R$ be the Randi\'{c} matrix of $G$, indexed first with the vertices in $A$ and then with those in $B$. As there are no edges between vertices in $A$ and no edges between vertices in $B$, $R$ is a block anti-diagonal matrix. That is, $R$ is of the form $$R=\begin{bmatrix} 0 & C \\ C^{t} & 0 \end{bmatrix},$$ where $C$ is a $|A|\times |B|$ matrix. Then $$R^{2}=\begin{bmatrix} R_{A}^{2} & 0 \\ 0 & R_{B}^{2} \end{bmatrix}$$ with $R_{A}^{2}=CC^{t}$ and $R_{B}^{2}=C^{t}C$. It follows that $\tr{R_{A}^{2}}=\tr{R_{B}^{2}}$, and $\tr{R^{2}}=2\tr{R_{A}^{2}}=2\tr{R_{B}^{2}}$. There is a big difference between vertices of degree $1$ and vertices of degree $2$ in $B$, hence we partition $B$ into $B_1=\{b\in B\,|\,\deg(b)=1\}$ and $B_2=\{b\in B\,|\, \deg(b)=2\}$. \begin{lemma}\label{lemmaN(a)} Let $G$ be a connected TB graph with $|G|\geq 3$. Then for every $a\in A$, \[ \frac{1}{2}\leq R^2_{a,a}\leq\frac{1}{2}+\frac{1}{4}|N(a)\cap B_1|, \] where $N(a)$ is the neighborhood of $a$. \end{lemma} \begin{Prf} If $\deg(a)\geq 2$, then \begin{align*} R^2_{a,a}=&\sum_{b\in N(a)}\frac{1}{\deg(b)\deg(a)}\\ =&\sum_{b\in N(a)\cap B_1}\frac{1}{\deg(a)}+\sum_{b\in N(a)\cap B_2}\frac{1}{2\deg(a)}\\ =& \sum_{b\in N(a)\cap B_1}\frac{2}{2\deg(a)}+\sum_{b\in N(a)\cap B_2}\frac{1}{2\deg(a)}\\ =& \sum_{b\in N(a)\cap B_1}\frac{1}{2\deg(a)}+\sum_{b\in N(a)\cap B_1}\frac{1}{2\deg(a)}+\sum_{b\in N(a)\cap B_2}\frac{1}{2\deg(a)}\\ =& \sum_{b\in N(a)\cap B_1}\frac{1}{2\deg(a)}+\sum_{b\in N(a)}\frac{1}{2\deg(a)}\\ =& |N(a)\cap B_1|\frac{1}{2\deg(a)}+\deg(a)\frac{1}{2\deg(a)}\\ =& |N(a)\cap B_1|\frac{1}{2\deg(a)}+\frac{1}{2},\\ \end{align*} thus \[ \frac{1}{2}\leq R^2_{a,a}\leq |N(a)\cap B_1|\frac{1}{4}+\frac{1}{2}, \] where the second inequality follows from $\deg(a)\geq 2$.
If $\deg(a)=1$, let $b$ be the only neighbor of $a$. Then \begin{align*} R^2_{a,a}=&\frac{1}{\deg(b)}\\ =& \frac{1}{2}\\ =& |N(a)\cap B_1|\frac{1}{4}+\frac{1}{2}, \end{align*} because $\deg(b)= 2$, as $G$ is a connected TB graph with at least 3 vertices. \end{Prf} $\Box$ Notice that if $G$ is a TB graph, then $E(G)=|B_1|+2|B_2|$. As $G$ is connected, $E(G)\geq n-1=|A|+|B_1|+|B_2|-1$. Thus $2|B_2|\geq |A|+|B_2|-1$, or $|A|\leq |B_2|+1$. We can now bound the trace. \begin{lemma}\label{lemmatraceR2} Let $G$ be a connected TB graph with $|G|\geq 3$. Then $\tr{R_A^2}\leq \frac{n+1}{4}$. \end{lemma} \begin{Prf} \[ \tr{R_A^2}=\sum_{a\in A}R_{a,a}^2 \leq \sum_{a\in A}\left(\frac{1}{2}+\frac{1}{4}|N(a)\cap B_1|\right), \] but as the vertices in $B_1$ have degree $1$, they each appear in exactly one $N(a)\cap B_1$. Hence \[ \sum_{a\in A}\frac{1}{4}|N(a)\cap B_1|=\sum_{b\in B_1}\frac{1}{4}=\frac{|B_1|}{4}. \] Then \begin{align*} \tr{R_A^2}\leq& \sum_{a\in A}\left(\frac{1}{2}\right)+\frac{1}{4}|B_1|\\ \leq& \frac{2|A|}{4}+\frac{1}{4}|B_1|\\ \leq& \frac{|A|+|B_2|+1}{4}+\frac{1}{4}|B_1|\\ \leq& \frac{|A|+|B_2|+1+|B_1|}{4}\\ \leq& \frac{n+1}{4}. \end{align*} \end{Prf} $\Box$ As a TB graph is bipartite, $\tr{R^2}=2\tr{R^2_A}$. \begin{lemma}\label{cotatrTB} Let $G$ be a connected TB graph with $|G|\geq 3$. Then $\tr{R^2}\leq \frac{n+1}{2}$. \end{lemma} \begin{lemma}\label{lemmacotaconnnul} Let $G$ be a bipartite graph. If $\tr{R^2}\leq \frac{n+1}{2}$, then \[ RE(G)\leq \sqrt{n-2-\mnull{R}}\sqrt{n-3}\frac{\sqrt{2}}{2}+2.\] \end{lemma} \begin{Prf} As $G$ is bipartite, Corollary \ref{trspecificbound} yields \begin{align*} RE(G)\leq \sqrt{(n-2-\mnull{R})(\tr{R^2}-2)}+2. \end{align*} But $\tr{R^2}-2\leq \frac{n+1}{2}-2=\frac{n-3}{2}$. Thus \begin{align*} RE(G)\leq \sqrt{n-2-\mnull{R}}\sqrt{n-3}\frac{\sqrt{2}}{2}+2. \end{align*} \end{Prf} $\Box$ We can now combine Lemma \ref{cotatrTB} with Lemma \ref{lemmacotaconnnul} to obtain the following. \begin{theorem} Let $G$ be a connected TB graph with $|G|\geq 3$.
Then \[ RE(G)\leq \sqrt{n-2}\sqrt{n-3}\frac{\sqrt{2}}{2}+2.\] Moreover, if $\mnull{R}\geq 1$, then \[RE(G)\leq (n-3)\frac{\sqrt{2}}{2}+2.\] \end{theorem} Notice that bipartite graphs with an odd number of vertices have $\mnull{R}\geq 1$. This shows that the Randi\'{c} energy of TB graphs of odd order is at most the Randi\'{c} energy of the sun graph of the same order. \section*{Acknowledgments} This work was partially supported by the Universidad Nacional de San Luis, grant PROICO 03-0918, and MATH AmSud, grant 18-MATH-01.
\section{Optimal Offline Algorithm} In this section, we propose an offline algorithm $\mathsf{OFF}$, and show that it satisfies the sufficiency conditions of Theorem \ref{th_algo1_1}. Algorithm $\mathsf{OFF}$ first finds an initial feasible solution via INIT\_POLICY, and then iteratively improves upon it via PULL\_BACK. Finally, QUIT produces the output. \subsection{INIT\_POLICY} We find a simple constant power policy that is feasible and starts as early as possible. We also try to make it satisfy most of the sufficient conditions of Theorem \ref{th_algo1_1}. \textit{Step1:} Identify the first energy arrival instant $\tau_n$ such that, using $\mathcal{E}(\tau_n)$ energy and $\TRx_0$ time, $B_0$ or more bits can be transmitted with a constant power (say $p_c$), i.e. $\TRx_0\, g\left(\dfrac{\mathcal{E}(\tau_n)}{\TRx_0}\right)\ge B_0$. Then solve for $\widetilde{\TRx}_0$, \begin{small} \begin{equation} \widetilde{\TRx}_0\, g\left(\dfrac{\mathcal{E}(\tau_n)}{\widetilde{\TRx}_0}\right)= B_0,\ p_c = \dfrac{\mathcal{E}({\tau_n})}{\widetilde{\TRx}_0}. \label{INIT_POLICY_time} \end{equation} \end{small} \begin{figure} \centering \centerline{\includegraphics[width=8cm]{straight.eps}} \caption{Figure showing point $\tau_q$.}\label{straight} \end{figure} \textit{Step2:} Find the earliest time $T_{start}$ such that transmission with power $p_c$ from $T_{start}$ for $\widetilde{\TRx}_0$ time is feasible with the energy constraint \eqref{pb1_constraint_energy}. Set $T_{stop} = T_{start} + \widetilde{\TRx}_0$. Let $\tau_q$ be the \textit{first epoch} where $U(\tau_q) = \mathcal{E}(\tau_q^-)$ (Fig. \ref{straight}). The next lemma shows that the point $\tau_q$ thus found is a `good' starting solution. \begin{lemma} In every optimal solution, at the energy arrival epoch $\tau_q$ defined in INIT\_POLICY, $U(\tau_q)=\mathcal{E}(\tau_q^-)$. \label{lemma_Q} \end{lemma} Continuing with INIT\_POLICY, if $U(T_{stop}) = \mathcal{E}(T_{stop}^-)$ as shown in Fig.
\ref{straight}(a), then terminate INIT\_POLICY with the constant power policy $p_c$. Otherwise, if $U(T_{stop}) < \mathcal{E}(T_{stop}^-)$, then modify the transmission after $\tau_q$ as follows. Set $\widetilde{B}_0 = (T_{stop} - \tau_q)g(p_c)$, which denotes the number of bits left to be sent after time $\tau_q$. Then apply Algorithm 1 of \cite{UlukusEH2011b} \textit{from time $\tau_q$} to transmit $\widetilde{B}_0$ bits in the minimum possible time, without considering the receiver {\it on} time constraint. Update $T_{stop}$ to where this policy ends. So, $U(T_{stop}) = \mathcal{E}(T_{stop}^-)$ from \cite{UlukusEH2011b}. Since Algorithm 1 of \cite{UlukusEH2011b} is optimal, it takes the minimum time ($=T_{stop}-\tau_q$) to transmit $\widetilde{B}_0$ starting at time $\tau_q$. However, using power $p_c$ to transmit $\widetilde{B}_0$ takes $(T_{start}+\widetilde{\TRx}_0 - \tau_q)$ time. Hence, $T_{stop}\le (T_{start}+\widetilde{\TRx}_0)$. As $\widetilde{\TRx}_0\le \TRx_0$ from \eqref{INIT_POLICY_time}, $(T_{stop}- T_{start})\le \TRx_0$. This shows that the solution thus found using Algorithm 1 of \cite{UlukusEH2011b} is indeed feasible with the receiver time constraint \eqref{pb1_constraint_time}. Now, the output of INIT\_POLICY is a policy that transmits at power $p_c$ from $T_{start}$ to $\tau_q$, and after $\tau_q$ uses Algorithm 1 of \cite{UlukusEH2011b}. \begin{figure} \centering \centerline{\includegraphics[width=8cm]{Algorithm1.eps}} \caption{Figures showing possible configurations in any iteration of PULL\_BACK. The solid line represents the transmission policy in the previous iteration and dash dotted lines are for the current iteration.}\label{figure_Algorithm1} \end{figure} \subsection{PULL\_BACK} Now, we describe the iterative subroutine PULL\_BACK, whose input is the policy $\{\bm{p},\bm{s},N\}$ output by INIT\_POLICY. Clearly, $\{\bm{p},\bm{s},N\}$ satisfies all but structure \eqref{claim2} of Theorem \ref{th_algo1_1}.
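As a computational aside on INIT\_POLICY above: for an increasing concave rate function $g$, the map $t\mapsto t\,g(\mathcal{E}(\tau_n)/t)$ is increasing, so \eqref{INIT_POLICY_time} is a one-dimensional root-finding problem and can be solved by bisection. A minimal sketch, assuming purely for illustration the rate $g(p)=\log_2(1+p)$ (the paper keeps $g$ generic) and arbitrary numbers $\mathcal{E}(\tau_n)=10$, $B_0=5$:

```python
import math

def g(p):
    # illustrative rate function (the paper keeps g generic):
    # AWGN-style g(p) = log2(1 + p), increasing and concave
    return math.log2(1 + p)

def solve_T0(E, B0, tol=1e-10):
    """Solve T * g(E / T) = B0 for T by bisection.

    T * g(E/T) is increasing in T, so a unique root exists whenever
    B0 is below the large-T limit E / ln 2."""
    f = lambda T: T * g(E / T) - B0
    lo, hi = 1e-9, 1.0
    while f(hi) < 0:          # grow the bracket until f(hi) >= 0
        hi *= 2
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

T0 = solve_T0(E=10.0, B0=5.0)
p_c = 10.0 / T0               # corresponding constant power
assert abs(T0 * g(p_c) - 5.0) < 1e-6
```

Any monotone root finder works here; bisection is chosen only for simplicity.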
So, the main idea of PULL\_BACK is to increase the transmission duration from $(s_{N+1}-s_1)\le \widetilde{\TRx}_0$ in INIT\_POLICY to $\TRx_0$ in order to satisfy \eqref{claim2}, while decreasing the finish time so as to reach the optimal solution. To achieve this, we utilize the structure presented in Lemma \ref{lemma_increase_time} and iteratively increase the last transmission power $p_N$ and decrease the first transmission power $p_1$. Initialize $\tau_l=s_2,\tau_r=s_N,p_l=p_1,p_r=p_N,T_{start}=s_1,T_{stop}=s_{N+1}$. In any iteration, $\tau_{l}$ and $\tau_{r}$ are assigned to the first and last energy arrival epochs where $U(\tau_l)=\mathcal{E}(\tau_l^-)$ and $U(\tau_r)=\mathcal{E}(\tau_r^-)$. $p_l$ and $p_r$ are the constant transmission powers before $\tau_l$ and after $\tau_r$, respectively. We reuse the notation $\tau$ here because $\tau_l$ and $\tau_r$ occur at energy arrival epochs, by Lemma \ref{lemma_energy_consumed}. $T_{start}$ and $T_{stop}$ are the start and finish times of the policy found in any iteration. $\tau_l, \tau_r, p_l, p_r, T_{start}, T_{stop}$ get updated to $\tau_l', \tau_r', p_l', p_r', T'_{start}, T'_{stop}$ over an iteration. In any iteration, only one of $\tau_l$ or $\tau_r$ gets updated, i.e., either $\tau_l'=\tau_l$ or $\tau_r'=\tau_r$. Further, PULL\_BACK ensures that \textit{transmission powers between $\tau_l$ and $\tau_r$ do not get changed} over an iteration. Fig. \ref{figure_Algorithm1} shows the possible updates in an iteration of PULL\_BACK. \textit{Step1, updating $\tau_r$, $p_r$:} Initialize $p_r'=p_r$ and increase $p_r'$ till it hits the boundary of the energy constraint \eqref{pb1_constraint_energy}, say at $(t_r',\mathcal{E}(t_r'^-))$ as shown in Fig. \ref{figure_Algorithm1}(a). The last epoch where $p_r'$ hits \eqref{pb1_constraint_energy} is set to $\tau_r'$. So, $U(\tau_r') = \mathcal{E}(\tau_r'^-)$. Set $T_{stop}'$ to where power $p_r'$ ends.
Calculate $p_l'$ such that the decrease in bits transmitted due to the change from $p_r$ to $p_r'$ is compensated by increasing $p_l$ to $p_l'$, via \begin{align} \nonumber g(p_r)(T_{stop}&-\tau_r)-g(p_r')(T_{stop}'-\tau_r') \\ &=g(p_l')\frac{\mathcal{E}(\tau_l'^-)}{p_l'}-g(p_l)(\tau_l-T_{start}). \label{eq_example1} \end{align} Suppose $p_r$ can be increased to infinity without violating \eqref{pb1_constraint_energy}, as shown in Fig. \ref{figure_Algorithm1}(b). This happens when there is no energy arrival between $\tau_r$ and $T_{stop}$. In this case, set $p_r'$ to the transmission power at $\tau_r^-$. Set $\tau_r'$ as the epoch where $p_r'$ starts, and $T_{stop}'$ to $\tau_r$. Calculate $p_l'$ similarly to \eqref{eq_example1}. \textit{Step2, updating $\tau_l$, $p_l$:} If $p_l'$ obtained from \textit{Step1} is feasible, as shown in Fig. \ref{figure_Algorithm1}(a), set $T_{start}'=\tau_l-\frac{\mathcal{E}(\tau_l'^-)}{p_l'}$, $\tau_l'=\tau_l$. Proceed to \textit{Step3}. Otherwise, if $p_l'$ is not feasible, as shown in Fig. \ref{figure_Algorithm1}(c), the changes made to $\tau_r',p_r'$ in \textit{Step1} are discarded. As shown in Fig. \ref{figure_Algorithm1}(d), $p_l'$ is increased from its value in \textit{Step1} until it becomes feasible. $\tau_l'$ is set to the first epoch where $U(\tau_l') = \mathcal{E}(\tau_l'^-)$. Similarly to \textit{Step1}, calculate $p_r'$ such that the increase in bits transmitted due to the change of $p_l$ to $p_l'$ is compensated, and update $T_{stop}'$ accordingly. Set $\tau_r'=\tau_r$. Proceed to \textit{Step3}. \textit{Step3, termination condition:} If $T_{stop}' - T_{start}' \ge \TRx_0$ or $T_{start}' = 0$, then terminate PULL\_BACK. Otherwise, update $\tau_l, \tau_r, p_l, p_r, T_{start}, T_{stop}$ to $\tau_l', \tau_r', p_l', p_r', T'_{start}, T'_{stop}$, respectively, and GOTO \textit{Step1}. \begin{lemma} The transmission time $(T_{stop}-T_{start})$ monotonically increases over each iteration of PULL\_BACK.
\label{lemma_PULL_BACK_power} \end{lemma} \begin{theorem} The worst-case running time of PULL\_BACK is linear in the number of energy harvests before the finish time of INIT\_POLICY. \end{theorem} \begin{proof} Since, in an iteration of PULL\_BACK, either $\tau_r$ or $\tau_l$ updates, the number of iterations is bounded by the number of values attained by $\tau_l$ plus those attained by $\tau_r$. Initially, $\tau_l\le \tau_q$ and $\tau_r \ge \tau_q$. As $\tau_l$ is non-increasing across iterations, $\tau_l\le \tau_q$ throughout. Assume that $\tau_r$ remains $\ge \tau_q$ across PULL\_BACK. Then, both $\tau_l$ and $\tau_r$ can attain at most all $\tau_i$'s less than the finish time of the initial feasible policy. Hence, we are done. Now, it remains to show that $\tau_r\ge \tau_q$. $\tau_n$ is defined as the first energy arrival epoch with which $B_0$ or more bits can be transmitted in $\TRx_0$ time, and $\tau_q\le \tau_n$ by definition. So, if $T_{stop}$ were to become $\le \tau_n$ or $\tau_q$, then the transmission time $(T_{stop}-T_{start})$ would have to be $>\TRx_0$. But, in the initial iteration, $(T_{stop}-T_{start})\le \TRx_0$, and $(T_{stop}-T_{start})$ increases monotonically by Lemma \ref{lemma_PULL_BACK_power}. Hence, PULL\_BACK will terminate before $T_{stop}$ (and therefore $\tau_r$) decreases beyond $\tau_q$. \end{proof} \subsection{QUIT} If $T_{start}' = 0$ and $T_{stop}' - T_{start}' \le \TRx_0$ upon PULL\_BACK's termination, then PULL\_BACK's policy at termination is output. Note that structure \eqref{claim2} holds for this policy. Otherwise, if $T_{stop}' - T_{start}' > \TRx_0$ (which happens for the first time), then we know that in the penultimate step $T_{stop} - T_{start} < \TRx_0$. Hence, we are looking for a policy that starts in $[T_{start}',\ T_{start}]$ and ends in $[T_{stop}',\ T_{stop}]$, whose transmission time is equal to $\TRx_0$.
Hence, we solve for $x,y$ (let the solution be $\hat{x},\hat{y}$), \begin{align} \nonumber (\tau_l-x)& \; g\left(\frac{\mathcal{E}(\tau_l^-)}{\tau_l-x}\right)+(y-\tau_r)\; g\left(\frac{\mathcal{E}(T_{stop}^-)}{y-\tau_r}\right)\\ &=g(p_l)(\tau_l-T_{start})+g(p_r)(T_{stop}-\tau_r), \label{eq_termination_0} \\ y-x&=\TRx_0. \label{eq_termination} \end{align} At the penultimate iteration, with $(x,y)=(T_{start},T_{stop})$, \eqref{eq_termination_0} is satisfied and $y-x<\TRx_0$. At $(x,y)=(T_{start}',T_{stop}')$, as $\mathcal{E}(T_{stop}^-)=\mathcal{E}(T_{stop}'^-)$, \eqref{eq_termination_0} is satisfied and $y-x>\TRx_0$. So, there must exist a solution $(\hat{x},\hat{y})$ to \eqref{eq_termination_0}, where $\hat{x}\in [T_{start}',T_{start}]$, $\hat{y}\in [T_{stop}',T_{stop}]$ and $\hat{y}-\hat{x}=\TRx_0$, for which \eqref{claim2} holds. Output this policy, which starts at $\hat{x}$ and ends at $\hat{y}$. \begin{theorem} The transmission policy proposed by Algorithm $\mathsf{OFF}$ is an optimal solution to Problem \eqref{pb1}. \label{th_algo1_2} \end{theorem} \begin{proof} We show that Algorithm $\mathsf{OFF}$ satisfies the sufficiency conditions of Theorem \ref{th_algo1_1}. To begin with, we prove by induction that the power allocations satisfy \eqref{claim3}. First, we establish the base case that INIT\_POLICY's output satisfies \eqref{claim3}. If INIT\_POLICY returns the constant power policy $p_c$ from time $T_{start}$ to $T_{stop}$, then clearly the claim holds. Otherwise, INIT\_POLICY applies Algorithm 1 from \cite{UlukusEH2011b} with $\widetilde{B}_0=B_0-g(p_c)(\tau_q-T_{start})$ bits to transmit after time $\tau_q$. Algorithm 1 from \cite{UlukusEH2011b} ensures that transmission powers are non-decreasing after $\tau_q$. So we only prove, via contradiction, that the transmission power $p_c$ between time $T_{start}$ and $\tau_q$ is no greater than the transmission power just after $\tau_q$ (say $p_q$). Assume that $p_q<p_c$.
Let transmission with $p_q$ end at an epoch $\tau_{q'}$, where $U(\tau_{q'})=\mathcal{E}(\tau_{q'}^-)$ from \cite{UlukusEH2011b}. The energy consumed between time $\tau_q$ and $\tau_{q'}$ with power $p_c$ is \begin{equation} p_c(\tau_{q'}-\tau_q)>p_q(\tau_{q'}-\tau_q)\stackrel{(a)}{=}(\mathcal{E}(\tau_{q'}^-)-\mathcal{E}(\tau_q^-)),\label{eq_1_algo1_modified} \end{equation} where $(a)$ follows from $U(\tau_q)=\mathcal{E}(\tau_q^-)$. Further, the maximum amount of energy available for transmission between $\tau_q$ and $\tau_{q'}$ is $\left(\mathcal{E}(\tau_{q'}^-)-\mathcal{E}(\tau_q^-)\right)$. By \eqref{eq_1_algo1_modified}, transmission with $p_c$ uses more than this energy and therefore is infeasible between time $\tau_q$ and $\tau_{q'}$. But, by the definition of $p_c$, transmission with power $p_c$ is feasible till time $(T_{start}+\widetilde{\TRx}_0)$. Also, $\tau_{q'}\le T_{stop}$ by the definition of $\tau_{q'}$, and $T_{stop}\le (T_{start}+\widetilde{\TRx}_0)$. So, power $p_c$ must be feasible till $\tau_{q'}$, and we reach a contradiction. Now, we assume that the transmission powers from PULL\_BACK are non-decreasing till its $n^{th}$ iteration. Therefore, as the transmission powers between $\tau_l$ and $\tau_r$ do not change over an iteration, the powers would remain non-decreasing in the $(n+1)^{th}$ iteration if we show that $p_l'\le p_l$ and $p_r'\ge p_r$. In any iteration, by definition, either $\tau_l$ or $\tau_{r}$ updates. Assume $\tau_l$ gets updated to $\tau_{l}'$, $p_l$ to $p_l'$, $p_r$ to $p_r'$, and $\tau_r$ remains the same, as shown in Fig. \ref{figure_Algorithm1}(d) (when $\tau_r$ updates, the proof follows similarly). Then, by the algorithmic steps, we are certain that $p_{r}'>p_r$. So, from the $n^{th}$ to the $(n+1)^{th}$ iteration, the number of bits transmitted after $\tau_r$ should decrease. Thus, the number of bits transmitted before $\tau_l$ must be increasing. This implies $p_l'\le p_l$.
Hence, the transmission powers output by $\mathsf{OFF}$ are non-decreasing, and it satisfies \eqref{claim3}. Now consider structure \eqref{claim5}. As $\tau_q$ is present in INIT\_POLICY, the only way it cannot be part of the policy in an iteration of PULL\_BACK is when $\tau_r$ decreases beyond $\tau_q$. But $\tau_r\ge \tau_q$, as shown in Theorem 2. So, the policy output by $\mathsf{OFF}$ includes $\tau_q$. By the arguments presented at the end of QUIT, we know that $\mathsf{OFF}$ satisfies \eqref{claim2}. To conclude, $\mathsf{OFF}$ satisfies \eqref{claim1}-\eqref{claim5}, and hence is an optimal algorithm. \end{proof} \section{Introduction} Extracting energy from nature to power communication devices has been an emerging area of research. Starting with \cite{ozel2012achieving, SharmaEH2014}, a lot of work has been reported on finding the capacity, approximate capacity \cite{dong2014near}, structure of optimal policies \cite{sinha2012optimal}, optimal power transmission profile \cite{UlukusEH2011b, UlukusEH2011c, michelusi2012optimal, VazeEHICASSP14}, competitive online algorithms \cite{VazeEH2011}, etc. One thing that is common to almost all the prior work is the assumption that energy is harvested only at the transmitter while the receiver has some conventional power source. This is clearly a limitation; however, it helped to gain some critical insights into the problem. In this paper, we broaden the horizon and study the more general problem where energy harvesting is employed both at the transmitter and the receiver. The joint (tx-rx) energy harvesting model has not been studied in detail and only some preliminary results are available, e.g., a constant approximation to the maximum throughput has been derived in \cite{VazeEH2014}. This problem is fundamentally different from using energy harvesting only at the transmitter, where the receiver is always assumed to have energy to receive.
The receiver energy consumption model is binary: the receiver uses a fixed amount of energy to stay {\it on}, and is {\it off} otherwise. Since useful transmission happens only when the receiver is \textit{on}, the problem is to find jointly optimal decisions about the transmit power and the receiver ON-OFF schedule. Under this model, there is an issue of coordination between the transmitter and receiver to implement the joint decisions; however, we ignore it here in the interest of making analytical progress. We study the canonical problem of finding the optimal transmission power and receiver ON-OFF schedule to minimize the time required for transmitting a fixed number of bits. We first consider the offline case, where the energy arrivals both at the transmitter and the receiver are assumed to be known non-causally. Even though the offline scenario is unrealistic, it still gives some design insights. Then we consider the more useful online scenario, where both the transmitter and receiver have only causal information about the energy arrivals. To characterize the performance of an online algorithm, the metric of competitive ratio is typically used, defined as the maximum ratio of the profits of the online and the offline algorithms over all possible inputs. In prior work \cite{UlukusEH2011b}, an optimal offline algorithm has been derived for the case when energy is harvested only at the transmitter; it cannot be generalized to energy harvesting at the receiver together with the transmitter. To understand the difficulty, assume that the receiver can be \textit{on} for a maximum time of $T$. The policy of \cite{UlukusEH2011b} starts transmission at time $0$, and its power transmission profile is the one that yields the tightest piecewise linear energy consumption curve that lies under the energy harvesting curve at all times and touches it at the end time.
With the receiver {\it on} time constraint, however, the policy of \cite{UlukusEH2011b} may take more than $T$ time and hence may not be feasible. So, we may have to delay the start of transmission and/or stop in between to accumulate more energy and transmit with higher power in shorter bursts, such that the total time for which the transmitter and receiver are \textit{on} is less than $T$. The contributions of this paper are: \begin{itemize} \item For the offline scenario, we derive the structure of the optimal algorithm, and then propose an algorithm that is shown to satisfy the optimal structure. The power profile of the proposed algorithm is fundamentally different from that of the optimal offline algorithm of \cite{UlukusEH2011b}; however, the two algorithms have some common structural properties. \item For the online scenario, we propose an online algorithm and show that its competitive ratio is strictly less than $2$ for any energy arrival input. With energy harvesting only at the transmitter, a $2$-competitive online algorithm has been derived in \cite{VazeEH2011}. Our result is more general, with a different proof technique, and allows energy harvesting at the receiver. \end{itemize} \section{System Model} \input{NotationsICC} \section{OPTIMAL OFFLINE ALGORITHM} \input{OptimalOfflineICC} \section{ONLINE ALGORITHM} \input{onlineICC} \bibliographystyle{IEEEtran}
\section{Introduction} Common supervised machine learning approaches to extractive summarisation attempt to label individual text extracts (usually sentences or phrases; in this paper we will use sentences). In a subsequent stage, a summary is generated based on the predicted labels of the individual sentences and other factors such as redundancy of information. The process of obtaining the annotated data can be complex. Data sets often contain complete summaries written manually. Well-known examples of data sets of this type are the DUC and TAC data sets \cite{Dang:2006,Dang:2008b}. In such cases, the task of labelling individual sentences is not straightforward and needs to be derived from the full summaries. Alternatively, annotations can be obtained through highlights made by the annotators \cite[for example]{Woodsend2010}. Regardless of the means used to annotate individual sentences, the final evaluation of the system compares the output summary with a set of target summaries, either by using human judges or automatically by using packages such as ROUGE \cite{Lin:2004}. However, machine learning approaches designed to minimise the prediction error of individual sentences would not necessarily optimise the evaluation metric of the final summary. In this paper we propose a proof-of-concept method that uses reinforcement learning with a global policy as a means to use the ROUGE\_L evaluation of the final summary directly in the training process. Section~\ref{sec:rl} introduces reinforcement learning and mentions past work on the use of reinforcement learning for summarisation. Section~\ref{sec:sum} describes our proposal for the use of reinforcement learning for query-based summarisation. Section~\ref{sec:results} presents the results of our experiments, and Section~\ref{sec:conclusions} concludes this paper.
\section{Reinforcement Learning}\label{sec:rl} Reinforcement Learning (RL) is a machine learning approach that is designed to train systems that aim to maximise a long-term goal, even when there is no knowledge (or little knowledge) of the impact of the individual decisions that are made to achieve the goal. An RL task (Figure~\ref{fig:rl}) consists of an environment that can be observed and can be acted on, and an agent that makes a sequence of actions. Undertaking an action ($a$) on the environment results in an observed state ($s$) and a reward ($r$). The agent then needs to learn the sequence of actions that maximises the cumulative reward. \begin{figure} \centering \begin{tikzpicture}[>=latex] \path (0,0) node [draw,rounded corners] (agent) {Agent} (0,-1.5) node [draw,rounded corners] (env) {Environment}; \draw[->] (env) .. controls (-0.5,-0.75) .. node[left] {$s$, $r$} (agent); \draw[->] (agent) .. controls (0.5,-0.75) .. node[right] {$a$} (env); \end{tikzpicture} \caption{The reinforcement learning process.} \label{fig:rl} \end{figure} The task of query-based summarisation can be reduced to an RL task by assigning a null reward $r=0$ to the decision of selecting each individual sentence or not, until the point at which a final summary has been extracted. At that moment, the reward $r$ is the actual evaluation score of the full summary. The RL approach should learn a policy $\pi$ such that the agent can determine how the individual decisions made at the time of selecting (or not) a sentence would impact the evaluation score of the final summary. \newcite{Ryang2012} and \newcite{Rioux2014} propose the learning of a local policy $\pi$ that is specific to each summary. For this purpose, the reward $r$ of the entire summary is calculated based on measures of similarity between the summary and the source document. Thus, \newcite{Ryang2012} uses information such as coverage, redundancy, length and position.
\newcite{Rioux2014} uses a reward system that is more similar to the ROUGE set of metrics, but again using only information from the source text and the generated summary. Effectively, these approaches use RL as a means to search the space of possible selections of sentences by training a local policy that needs to be re-trained each time a new summary needs to be generated. \newcite{Ryang2012} mentions the possibility of training a global policy in the further work section, provided that there is a means to obtain a feature representation of a summary. In this paper we show a simple way to represent the state of the environment, including the summary, such that the system can train a global policy. We use a training set annotated with target summaries to train a global policy that uses the direct ROUGE\_L score as the reward. Once a global policy has been learnt, it is applied to unseen text for evaluation. By using a global policy instead of a local policy, the system can use the direct ROUGE\_L score instead of an approximation, and the computational cost shifts to the training stage, enabling a faster generation of summaries after the system has been trained. There is also research that uses other mechanisms in order to train a summarisation system using the direct ROUGE score \cite{Aker2010} or an approximation \cite{Peyrard2016}. \section{Reinforcement Learning for Query-based Extractive Summarisation}\label{sec:sum} This section describes our proposal for the adaptation of query-based summarisation to RL with a global policy. \subsection{Environment} After applying a decision whether sentence $i$ is to be selected as part of the summary or not, the environment records the decision and issues a reward $r=0$. After all decisions have been made, the environment builds the summary by concatenating all selected sentences in linear order. Then, the environment returns the ROUGE\_L score of the summary as the reward.
More formally, and assuming that the total number of sentences in the input text is $n$, the reward is computed as follows: $$ r = \left\{ \begin{array}{ccc} 0 & \hbox{if} & i<n\\ \hbox{ROUGE\_L} & \hbox{if} & i=n\\ \end{array} \right. $$ This process is inspired by \newcite{Ryang2012}'s framework, the difference being that, in our work, the reward returned when $i=n$ is the actual ROUGE\_L score of the summary instead of an approximation. For the purposes of this paper, the environment is implemented as an object $\mathtt{env}$ that allows the following operations: \begin{itemize} \item $s \leftarrow \mathtt{env.reset}(\mathtt{sample})$: reset to sample $\mathtt{sample}$ and return an initial state $s$. \item $s,r,\mathtt{done} \leftarrow \mathtt{env.step}(a)$: perform action $a$ and return state $s$, reward $r$, and a Boolean value that is $True$ if all input sentences have been processed. \end{itemize} \subsection{Action Space} At each step of the RL process, the agent will decide whether a particular sentence is to be selected (1) or not (0). \subsection{State}\label{sec:state} The RL framework is greedy in the sense that, once a decision is made about sentence $i$, it cannot be undone. The agent should therefore have the information necessary to make the right decision, including information about which sentences are yet to be processed. Since the agent uses a global policy, the state should be able to encode information about any number of input sentences, and any number of remaining sentences. We resolved this by building vectors that represent sequences of sentences. In this paper we use \emph{tf.idf}, but other methods could be used, such as sentence embeddings learnt by training deep neural networks. Concretely, the environment provides the following state: \begin{enumerate} \item \emph{tf.idf} of the candidate sentence $i$. \item \emph{tf.idf} of the entire input text to summarise. \item \emph{tf.idf} of the summary generated so far.
\item \emph{tf.idf} of the candidate sentences that are yet to be processed. \item \emph{tf.idf} of the question. \end{enumerate} Items 2 and 3 would be useful to determine whether the current summary is representative of the input text. Item 4 would be useful to determine whether there is still important information that could be added to the summary in future steps. The agent could then, in principle, contrast item 1 with items 2, 3, 4 and 5 to determine whether sentence $i$ should be selected or not. \subsection{Global Policy} The global policy is implemented as a neural network that predicts the probability of each action $a$ available in the action space $\{0,1\}$. In practice, the system only needs to predict $Pr(a=0)$. As a proof of concept, the neural network implemented in this paper is simply a network with one hidden layer that uses a ReLU activation, and the output unit is a Bernoulli logistic unit. Thus, given a state $s$ formed by concatenating all the items listed in Section~\ref{sec:state}, the network predicts $Pr(a=0)$ as follows. $$ \begin{array}{rcl} Pr(a=0) & = & \sigma(h\cdot W_h + b_h)\\ h & = & \max(0, s\cdot W_s + b_s) \end{array} $$ In our experiments, the size of the hidden layer is 200. \subsection{Learning Algorithm} The learning algorithm for the global policy is a variant of the REINFORCE algorithm \cite{Williams1992} that uses gradient descent with cross-entropy gradients that are multiplied with the reward \cite[Chapter 16]{Geron2017}. This is shown in Algorithm~\ref{fig:learning}.
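With the state $s$ built by concatenating the five \emph{tf.idf} vectors of Section~\ref{sec:state}, the forward pass of this policy network is a single ReLU hidden layer followed by a logistic output unit. A minimal numpy sketch (the dimension $d$ of each \emph{tf.idf} vector is illustrative; the hidden size of 200 matches the text):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 50                       # dimension of each tf.idf vector (illustrative)
state_dim, hidden = 5 * d, 200

# Trainable parameters theta = (W_s, b_s, W_h, b_h).
W_s = rng.normal(scale=0.01, size=(state_dim, hidden))
b_s = np.zeros(hidden)
W_h = rng.normal(scale=0.01, size=hidden)
b_h = 0.0

def pr_a0(state):
    # h = relu(s . W_s + b_s); Pr(a=0) = sigmoid(h . W_h + b_h).
    h = np.maximum(0.0, state @ W_s + b_s)
    return 1.0 / (1.0 + np.exp(-(h @ W_h + b_h)))

state = np.concatenate([rng.random(d) for _ in range(5)])  # the five tf.idf parts
p0 = pr_a0(state)            # a single probability in (0, 1)
```

The network outputs one scalar, so the probability of selecting the sentence is simply $1 - Pr(a=0)$.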
\begin{algorithm} \KwData{$\mathtt{train\_data}$} \KwResult{$\theta$} $\mathtt{sample} \sim Uniform(\mathtt{train\_data})$\; $s \leftarrow \mathtt{env.reset}(\mathtt{sample})$\; $\mathtt{all\_gradients} \leftarrow \emptyset$\; $\mathtt{episode} \leftarrow 0$\; \While{True}{ $\xi \sim Bernoulli\left(\frac{Pr(a=0)+p}{1+2\times p}\right)$\; $y \leftarrow 1-\xi$\; $\mathtt{gradient} \leftarrow \frac{\nabla\,\hbox{cross\_entropy}(y,Pr(a=0))}{\nabla \theta}$\; $\mathtt{all\_gradients.append}(\mathtt{gradient})$\; $s, r, done \leftarrow \mathtt{env.step}(\xi)$\; $\mathtt{episode} \leftarrow \mathtt{episode} + 1$\; \If{done}{ $\theta \leftarrow \theta - \alpha\times r \times \hbox{mean}(\mathtt{all\_gradients})$\; $\mathtt{sample} \sim Uniform(\mathtt{train\_data})$\; $s \leftarrow \mathtt{env.reset}(\mathtt{sample})$\; $\mathtt{all\_gradients} \leftarrow \emptyset$\; } } \caption{Training by Policy Gradient, where $\theta = (W_h, b_h, W_s, b_s)$.} \label{fig:learning} \end{algorithm} \begin{figure*} \centering \includegraphics[width=\textwidth]{results.png} \caption[Results of the system]{Results of the system. The results of training (black line) are the average ROUGE\_L of the last 1000 chosen training samples at every point. The results of testing (red line) are the average ROUGE\_L of the test set.} \label{fig:results} \end{figure*} In Algorithm~\ref{fig:learning}, the neural net predicts $Pr(a=0)$. The action chosen during training is sampled from a Bernoulli distribution whose probability is $Pr(a=0)$ perturbed by $p$, where $p$ slowly decreases at each training episode. By adding this perturbation the system explores the two possible actions in the early stages of training and delays locking into possible local minima. In our implementation, $p$ starts at 0.2 and decreases according to the formula: $$ p = 0.2 \times 3000 / (3000 + \hbox{episode}). $$ Thus, $p=0.1$ after 3000 episodes, and so on.
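The perturbed sampling step of Algorithm~\ref{fig:learning} keeps both actions reachable early in training: the Bernoulli parameter $(Pr(a=0)+p)/(1+2p)$ lies in the interval $[p/(1+2p),\,(1+p)/(1+2p)]$, which excludes 0 and 1 while $p>0$. A small sketch of the schedule and the sampling:

```python
import random

def perturbation(episode, p0=0.2, decay=3000.0):
    # p starts at p0 and decays with the episode count: p0 * decay / (decay + episode).
    return p0 * decay / (decay + episode)

def sample_xi(pr_a0, episode, rng=random):
    # xi ~ Bernoulli((Pr(a=0) + p) / (1 + 2p)): the perturbation p flattens the
    # distribution towards 0.5, so both actions keep being explored early on.
    p = perturbation(episode)
    prob = (pr_a0 + p) / (1.0 + 2.0 * p)
    return 1 if rng.random() < prob else 0
```

Here `perturbation(0)` is 0.2 and `perturbation(3000)` is 0.1, matching the schedule in the text; even with $Pr(a=0)=1$ the sampled $\xi$ can still be 0 while $p>0$.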
When a full summary has been produced, the mean of all cross-entropy gradients used in all the steps that led to the summary is computed and multiplied by the summary reward to update the neural network trainable parameters. Using RL terminology, the method uses undiscounted reward. At run time, the action $a$ chosen is simply the action $a$ with highest probability. \section{Experiments and Results}\label{sec:results} We have used the data provided by BioASQ 5b Phase B \cite{Tsatsaronis:2015}. The dataset has 1799 questions together with input text and ideal answers. These ideal answers form the target summaries. We have split the data into a training and a test set. Algorithm~\ref{fig:learning} updates the parameters $\theta$ by applying standard gradient descent. In our experiments, we have used the Adam optimiser instead, which has been shown to converge rapidly in many applications \cite{Kingma2015}. Also, due to computing limitations, our implementation only processes the first 30 sentences of the input text. Figure~\ref{fig:results} shows the progress of training and evaluation. We can observe that the neural net learns a global policy that improves the ROUGE\_L results on the training data (black line). More importantly, it also improves the ROUGE\_L results when presented with the test data (red line). It appears that the system starts overfitting after about 200,000 training steps. Considering that the state does not have direct information about the sentence position or the length of the summary, and given the relatively small training data, these results are encouraging. It is well known that sentence position carries important information for the task of summarisation. Also, preliminary experiments adding summary length to the state showed quicker convergence to better values. In this paper we chose not to incorporate any of this information, in order to test the capabilities of reinforcement learning on its own.
\section{Conclusions}\label{sec:conclusions} We have presented a reinforcement learning approach that learns a global policy for the task of query-based summarisation. Our experiments used fairly simple features to represent the state of the environment, and the neural network implemented to model the global policy is likewise simple. Yet, the system was able to learn an effective global policy. In further work we will explore the use of more sophisticated features such as word or sentence embeddings, and more sophisticated neural networks. Further work will also explore the use of variants of reinforcement learning algorithms in order to speed up the learning process.
\section{INTRODUCTION} In past decades there has been a large amount of discussion regarding the secular evolution of bars in spiral galaxies. Numerical simulations show that gas inflow, star formation and the subsequent buildup of a central mass concentration (CMC) can considerably alter the shape of a bar and even result in the dissolution of the bar itself (Norman, Sellwood \& Hasan 1996; Shen \& Sellwood 2004; Kormendy \& Kennicutt 2004; Hozumi \& Hernquist 2005; Athanassoula, Lambert \& Dehnen 2005; Bournaud, Combes \& Semelin 2005). However, the fraction of bars remains fairly high out to a redshift of z$\sim$0.7 to 1 (Sheth et al. 2003; Elmegreen, Elmegreen \& Hirst 2004; Jogee et al. 2004; Sheth et al. 2007), which suggests that bars are either long-lived or can reform in galaxy disks. Simulations suggest that bars can dissolve but form again once the disk is cooled through gas accretion (Bournaud \& Combes 2002) or through tidal interactions (Berentzen et al. 2004). Although there has been a considerable amount of numerical work on bar formation and dissolution, there is not much observational evidence. Molecular gas has a higher central concentration in barred galaxies compared to non-barred galaxies, which suggests that bars are important for building up a CMC (Sakamoto et al. 1999; Sheth et al. 2005). The bar length has been found to correlate with bar intensity contrast and central luminosity densities (Elmegreen et al. 2007). A more direct indicator of how bars change with mass concentration is the correlation of CMC and bar ellipticity (Das et al. 2003), which shows that bar ellipticity declines with increasing dynamical CMC. An enhanced CMC also results in the deepening of the central potential and gives rise to a dynamically hotter galaxy center. This will lead to a higher observed central stellar velocity dispersion in a galaxy ($\sigma_{v}$).
This effect has been seen in numerical simulations where a dynamically hotter center results when a bar dissolves (Friedli \& Benz 1993; Hasan, Pfenniger \& Norman 1993) or weakens (Athanassoula, Lambert \& Dehnen 2005). In this paper we investigate whether there is any observational evidence for this heating effect by examining the correlation between the central velocity dispersion in bars and the bar strength. \section{Central Velocity Dispersion} The $\sigma_{v}$ values used in this paper are from published integral-field spectroscopy (IFS) observations of the nuclear regions of galaxies. We have thirty-one galaxies in our sample (Table~1). The majority of $\sigma_{v}$ values are from the SAURON survey of nearby galaxies (de Zeeuw et al. 2002; Ganda et al. 2006; Falcon-Barroso et al. 2006; Peletier et al. 2007); a smaller number are from the INTEGRAL and SPIRAL instruments on the William Herschel and Anglo-Australian Telescopes respectively (Batcheldor et al. 2005). The $\sigma_{v}$ values for the remaining four galaxies are from observations using the GMOS instrument at the Gemini North Telescope (Barbosa et al. 2002). The velocity dispersion values that we have used are derived from two-dimensional stellar velocity fields and not gas kinematics. One drawback of using these $\sigma_{v}$ values is that the aperture size is not uniform across the sample. Hence we have tried to bring the apertures to a common system using the aperture correction formula of Jorgensen, Franx \& Kjaergaard (1995). The velocity dispersions were corrected to a radius $R_{e}/8$, where $R_{e}$ is the effective bulge radius of a galaxy. Thus $\sigma_{v}$ was assumed equal to the average velocity dispersion within a radius $R_{e}/8$ of the bulge of a galaxy. For five galaxies it was not possible to determine $R_{e}$, either because the bulge was not distinguishable or because the image quality was poor.
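The aperture correction treats the measured dispersion as a shallow power law in aperture radius. A sketch of the rescaling to the standard $R_e/8$ aperture (the exponent 0.04 is the value commonly quoted from Jorgensen, Franx \& Kjaergaard 1995, and should be treated as an assumption here):

```python
def aperture_correct(sigma_ap, r_ap, r_e, alpha=0.04):
    # Rescale a dispersion measured within radius r_ap to the R_e/8 aperture,
    # assuming sigma(r) scales as r**(-alpha): larger apertures see lower sigma.
    return sigma_ap * (r_ap / (r_e / 8.0)) ** alpha
```

A dispersion measured in an aperture larger than $R_e/8$ is corrected upwards, and a measurement already at $r_{\rm ap}=R_e/8$ is left unchanged.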
Also, since the galaxies span a wide variety of sizes and masses, we normalised the new aperture-corrected velocity dispersions ($\sigma_{e}$) by the HI gas rotation velocity $v_{g}$ for each galaxy. This also takes out some of the luminosity/size effects for the galaxies. The $v_{g}$ values were derived from the maximum HI rotation velocities ($v_{H}$) using the galaxy axes ratios ($q$) to correct for inclination (i.e. $v_{g}=v_{H}/\sqrt{1-q^{2}}$~); $v_{H}$ was obtained from the Hyperleda database. The ratio $\sigma_{e}/v_{g}$ is thus an indicator of how kinematically hot the center of each galaxy is relative to its disk rotation speed. \section{Bar Strength Derived from Near-IR Images} There are several ways to quantify the strength of a bar in a galaxy. In this paper we have derived the bar strength in two ways. The first takes the bar strength to be the maximum relative bar torque ($Q_g$) derived from the gravitational potential of a galaxy; this is probably the most robust estimate of bar strength, but it is sensitive to the bulge mass or luminosity. The second estimates the bar strength using the maximum of the relative intensity amplitude of the bar in the near-IR image ($A_2$). Both methods require that the images be deprojected before analysis. This assumes the disks are thin, but the presence of a less flattened bulge component could lead to artificial stretching of bulge isophotes. To minimize this effect, we decomposed the bulge from the disk, subtracted it from the image before deprojection, and added it back (assuming that bulge light is spherically symmetric) after deprojecting the disk/bar light. The bulge components were separated from the disks using a two-dimensional multicomponent decomposition code which uses a Sersic model for the bulge, an exponential function for the disk, and either Ferrers' or Sersic's functions for the bar (Laurikainen, Salo, and Buta 2005).
Effects of seeing were taken into account, using the values of the full width at half maximum stored in image headers or provided in articles. The effective radius of the bulge ($R_e$) was estimated by integrating the flux of the fitted bulge model. The images were obtained mainly from the 2MASS survey and some from previous studies. The filters used were either the K or H band. The bar parameters were derived as follows. \noindent {\bf (i)~$Q_g$ :}~The gravitational potentials ($\Phi$) were inferred from near-IR light distributions assuming that the light traces the mass. Bar-induced tangential forces were calculated using a polar method, as described in Laurikainen \& Salo (2002) and Laurikainen, Salo \& Buta (2004). In particular, the calculation applies an azimuthal Fourier decomposition of intensity, including the even components up to m=20, which are then converted to the corresponding potential components (Salo et al. 1999). Two-dimensional maps of the radial force ($F_R$) and tangential force ($F_T$) were calculated. The radial profile of the maximum tangential force at each distance is given by \begin{equation} Q_T(r) = \frac{|F_{T}(r,\phi)|_{max}}{\langle|F_{R}(r,\phi)|\rangle} \end{equation} \noindent where $\langle|F_{R}(r,\phi)|\rangle$ denotes the azimuthally averaged axisymmetric force at each radius. The maximum of the $Q_T$ profile in the region of the bar then gives a single measure of bar strength, $Q_g$. The main assumptions are that the mass-to-luminosity ratio is constant in the bar region, and that the vertical light distribution can be approximated by an exponential. The scale height, $h_z$, was estimated from an empirical relation between $h_r$/$h_z$ and the de Vaucouleurs type index T (de Grijs 1998), where $h_r$ is the radial scale length of the disk. The images were taken from the literature and are generally not very deep. Therefore, instead of estimating $h_r$ from the new decompositions, we used mainly the $h_r$ values from Baggett et al.
(1998); if it was not available, we used other sources in the literature where deep optical images had been used to derive $h_r$. For two galaxies $h_r$ was taken to be the mean value for our sample of 31 galaxies. \noindent {\bf (ii)~$A_2$ :}~The same Fourier method also gives us the m=2 amplitudes of bar intensity contrast in the bar region. For some of the galaxies, Table 1 gives $Q_g$ and $A_2$ but not the effective bulge radius. This is because application of the 2D decomposition method requires deeper images than the methods used to calculate $Q_g$ and $A_2$. \vspace{-2mm} \section{Estimating the Correlation} Figure~1 shows bar strength $Q_g$ plotted against the normalized velocity dispersion $\sigma_{e}/v_g$ for 26 galaxies. Although there are 31 galaxies in the sample, we could use only 26 because the effective radius $R_e$ could not be calculated for a few cases. The errors have been calculated using the standard error propagation equation and include the uncertainties in the observed quantities. The majority of galaxies seem to follow a trend of decreasing bar strength with increasing central velocity dispersion. We quantified the correlation in two ways. The linear correlation coefficient for this sample of 26 galaxies is $r=-0.50$, and the probability $P_r$ that they are from a random sample is $P_r~<~1$\%. This method does not include the errors on both axes. A more accurate estimate would be a weighted correlation coefficient, but this is difficult to obtain in practice (Feigelson \& Babu 1992). Instead we used a simple Monte Carlo simulation that randomly samples the errors on both axes and determines a mean weighted correlation coefficient $<r>$. We used 50,000 linear fits and obtained a value of $<r>=-0.46$. The second method that we used to quantify the correlation was the Kendall-Tau coefficient, which assigns relative ranks to the different values.
This is perhaps a more robust way of examining the correlation, especially when the sample size is relatively small, as in our case. The Kendall-Tau coefficient for the 26 galaxies in Figure~1 is $<r_{KT}>=-0.35$ and the probability that they are from a random sample is $P_{KT}\sim1.3$\%. Figure~2 shows the plot of $A_{2}$ against the normalized velocity dispersion $\sigma_{e}/v_g$ for 25 galaxies. Here again we calculated the linear correlation coefficient, which is $r=-0.54$ with $P_r~<~0.5$\%. When the errors are sampled using a Monte Carlo simulation we obtain a value of $<r>=-0.51$. The Kendall-Tau coefficient for the 25 galaxies is $<r_{KT}>=-0.33$ and $P_{KT}~<~2$\%. Thus both Figures~1 and 2 suggest that there is a correlation between the bar strength in galaxies and their central velocity dispersions. \section{DISCUSSION} The main results of this paper are shown in Figures 1 \& 2. There are 26 galaxies in the plots, of which about half are intermediate type spirals and the remainder a mixture of early and late type spirals. Since the number of galaxies in each Hubble type is not very large, we cannot investigate trends within the different Hubble types. But from Figures 1 \& 2, it appears that early type spirals have relatively lower central dispersions; this may be because they have larger bulges where the rotational velocity is comparatively higher and the central velocity dispersion lower compared to the later Hubble types. Since the correlation is significant but not very strong, we examined the two galaxies that define the higher and lower limits of $Q_g$ and $A_2$. We looked at them closely to see if they are odd in some way, and not characteristic of the rest of the sample. (i)~NGC~3162 ($\sigma_{e}/v_g$=1.13)~:~This is an intermediate type spiral galaxy with a weak bar and prominent bulge. There may also be a ring in the center. Though the spiral arms are somewhat asymmetric, the nucleus is fairly undisturbed.
(ii)~NGC~4314 ($\sigma_{e}/v_g$=0.18)~:~This is a bright, early type galaxy with a strong bar and large bulge with a LINER type nucleus. There may be significant rotation in the nuclear region, which may lower $\sigma_{e}$. Also, the mass may be more widely distributed over the bulge and hence not as centrally concentrated as in NGC~3162. There appears to be a ring of star formation in the nucleus as well (Gonzalez Delgado et al. 1997). Both galaxies are thus fairly normal and not unusually different from the rest of the sample galaxies. Figures~1 \& 2 suggest that galaxies with dynamically hotter nuclei have weaker bars. It is now well established that a galaxy's central black hole mass and bulge velocity dispersion are correlated (Ferrarese \& Merritt 2000; Gebhardt et al. 2000). Later results show that the nuclear mass also correlates with the overall mass of a galaxy (Ferrarese et al. 2006; Hopkins et al. 2007). These results all indicate that the nuclear mass in a galaxy is intimately connected to the dynamics of its disk and halo. If so, then it is not surprising that $\sigma_{e}$ in our sample of barred galaxies is correlated with $Q_g$ or $A_{2}$. It suggests that the growth of a central mass and the evolution of the bar in spiral galaxies may be closely linked. Our results have important implications for the secular evolution of barred galaxies. Simulations suggest that there may be several factors responsible for the dissolution of a bar. One is CMC growth (Hasan \& Norman 1990; Friedli \& Pfenniger 1991), which weakens the bar-supporting $x_1$ orbits and increases the fraction of chaotic orbits in the galaxy center. Second is the inflow of gas towards the galaxy center, which results in the transfer of angular momentum to the bar wave, which then weakens the bar itself (Bournaud, Combes \& Semelin 2005). The buckling instability is an important bar thickening mechanism that results in a boxy/peanut bulge which can temporarily weaken a bar (Raha et al.
1991; Berentzen et al. 1998; Athanassoula \& Misiriotis 2002; Martinez-Valpuesta, Shlosman \& Heller 2006; Debattista et al. 2006). All these effects result in a more massive and dynamically hotter central component and a weaker bar. The correlations that we see in Figures 1 and 2 may be observational indications of this ongoing evolution. In particular, while the apparent drop of $Q_g$ with central velocity dispersion might be an artifact caused by the bulge dilution effect (see Laurikainen, Salo \& Buta 2004; a bias could follow since nuclear velocity dispersion and bulge mass are strongly correlated), the similar correlation for the bar intensity contrast $A_2$ suggests that the effect is real. The correlation found between $A_2$ and $\sigma_e/v_g$ is also consistent with Das et al. (2003), who found that the bar ellipticity (closely related to $A_2$) drops with central mass concentration. The evolution of bars by secular processes in galaxies is an issue which is expected to gain more attention in the near future. Recent observational evidence shows that the fraction of strong bars in bright galaxies increases from under 10\% at redshift z=0.84 to about 30\% in the local universe (Sheth et al. 2007). Also, it has been shown by Laurikainen et al. (2007) that among early-type barred galaxies the bulge-to-total flux ratios are on average smaller than in non-barred galaxies. These results, together with ours, may indicate that bars evolve with their parent galaxies. \section{ACKNOWLEDGMENTS} This research has made use of the NASA/IPAC Infrared Science Archive (NED), which is operated by the JPL, California Institute of Technology, under contract with NASA. We also acknowledge the usage of the HyperLeda database (http://leda.univ-lyon1.fr). R.B. is supported by NSF grant AST 05-07140. E.L. and H.S. acknowledge support from the Academy of Finland. \nocite{*} \bibliographystyle{spr-mp-nameyear-cnd}
\section{Introduction} Let $C$ be a hyperelliptic curve defined over the rational field ${\mathbb Q}$ by a hyperelliptic equation of the form $y^2=f(x)$, $\deg f(x)\ge3$. One may construct a sequence of ${\mathbb Q}$-rational points in $C({\mathbb Q})$ such that the $x$-coordinates of these rational points form a sequence of rational numbers which enjoys a certain arithmetic pattern. For instance, an arithmetic progression sequence on $C$ is a sequence $(x_i,y_i)\in C({\mathbb Q}),\;i=1,2,\ldots,$ where $x_i=a+ib$ for some $a,b\in{\mathbb Q}$. In a similar fashion one may define a geometric progression sequence on $C$. In \cite{Bremner} Bremner discussed arithmetic progression sequences on elliptic curves over ${\mathbb Q}$. He investigated the size of these sequences and produced elliptic curves with long arithmetic progression sequences. His techniques were improved and used to generate infinitely many elliptic curves with long arithmetic progression sequences of rational points, see \cite{Campbell, Macleod, Ulas1}. In \cite{Ulas2} arithmetic progression sequences on genus 2 curves were considered. A certain family of algebraic curves was studied by Bremner and Ulas in \cite{BremnerUlas}. They proved the existence of an infinite family of algebraic curves defined by $y^2 = ax^n + b,\,n\ge 1,\, a, b \in \mathbb Q,$ with geometric progression sequences of rational points of length at least 4. They remarked that their method can be exploited in order to increase the length of these sequences to 5. In this note we examine geometric progression sequences on hyperelliptic curves. We begin by proving that, unlike geometric progressions on the rational line, geometric progression sequences on hyperelliptic curves are finite. We then display a certain family of hyperelliptic curves defined by an equation of the form $y^{2}=ax^{2n}+bx^{n}+a,\;n \in \mathbb{N},\; a,b \in \mathbb{Q}$.
Each hyperelliptic curve in this family possesses a geometric progression sequence of rational points whose length is at least 8. In fact, these hyperelliptic curves are parametrized by an elliptic surface $\mathcal H$ with positive rank. In particular, to each point of infinite order on $\mathcal H$ one can associate a hyperelliptic curve with a geometric progression sequence of length at least $8$. It is worth mentioning that other types of sequences of rational points on algebraic curves are being studied. For example, in \cite{SadekKamel} an infinite family of elliptic curves is constructed such that every elliptic curve in the family has a sequence of rational points whose $x$-coordinates form a sequence of consecutive rational squares. The length of the latter sequence is at least $5$. \section{Geometric progression sequences on hyperelliptic curves} Let $C$ be a hyperelliptic curve defined over a number field $K$ by the equation $y^2=f(x)$ where $f(x)\in K[x]$ is of degree $n\ge 3$, and $f(x)$ has no double roots. The set $C(K)$ of $K$-rational points on $C$ is defined by $\displaystyle C(K)=\{(x,y):y^2=f(x),\,x,y\in K\}$. \begin{Definition} Let $C:y^2=f(x)$ be a hyperelliptic curve over a number field $K$. The sequence $P_i=(x_i,y_i)\in C(K),\,i=1,2,\ldots,$ is said to be a {\em geometric progression sequence} in $C(K)$ if there are $p,t\in K^{\times}$ such that $x_i=pt^i$. In other words, the $x$-coordinates of the rational points $P_i$ form a geometric progression sequence in $K$. \end{Definition} We assume throughout that our geometric progression sequences contain distinct rational points; in particular, $t\not\in\{\pm1\}$. We will show that, unlike geometric progressions in $K$, geometric progression sequences in $C(K)$ are finite. \begin{Theorem} \label{thm:length} Let $C:y^2=f(x)$ be a hyperelliptic curve over a number field $K$ with $\deg f(x)\ge 3$. Let $(x_i,y_i)$ form a geometric progression sequence in $C(K)$.
Then the sequence $(x_i,y_i)$ is finite. \end{Theorem} \begin{Proof} If $\deg f(x)\ge 5$, then the genus $g$ of $C$ satisfies $g\ge 2$. In view of Faltings' Theorem, \cite{Falting}, $C(K)$ is finite. If $\deg f(x)=3$ or $4$, then $C$ is an elliptic curve. Assume that $f(x)=a_0x^4+a_1x^3+a_2x^2+a_3x+a_4$. Assume on the contrary that there is an infinite sequence $(x_i,y_i)\in C(K)$, $x_i=pt^i,\,i=1,2,\ldots,$ for some $p,t\in K^{\times}$. Considering the subsequence $(x_{2i},y_{2i}),$ $i=1,2,\ldots$, one obtains \[y_{2i}^2=a_0p^4t^{8i}+a_1p^3t^{6i}+a_2p^2t^{4i}+a_3p t^{2i}+a_4,\,i=1,2,\ldots.\] In particular, the rational points $(t^i,y_{2i}),\,i=1,2,\ldots,$ form an infinite sequence of rational points on the new hyperelliptic curve \[C':y^2=a_0p^4x^8+a_1p^3x^6+a_2p^2x^4+a_3px^2+a_4.\] This contradicts Faltings' Theorem, since the genus of $C'$ is $2$ if $a_0=0$, and $3$ if $a_0\ne 0$. \end{Proof} The theorem above motivates the following definition. Given a geometric progression sequence $(x_i,y_i)$, $i=1,2,\ldots,k,$ in $C(K)$, the positive integer $k$ will be called the {\em length} of the sequence. \section{Hyperelliptic curves with long geometric progressions} In this note, we consider the family of hyperelliptic curves over ${\mathbb Q}$ described by the equation $y^2=ax^{2n}+bx^n+a$ where $a,b\in{\mathbb Q}$, and $n\ge 2$. We introduce an infinite family of these hyperelliptic curves with geometric progression sequences of length at least $8$. We remark that the existence of a sequence of rational points $(t^i,y_i)$, $i=1,2,\ldots,k$, in geometric progression on one of these hyperelliptic curves is equivalent to the existence of the following geometric progression sequence of rational points $(t^{ni},y_i)$ on the conic $y^2=ax^2+bx+a$.
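As a quick symbolic sanity check of the constructions that follow, one can verify with sympy that two points $(t,U)$ and $(t^{3},V)$ on the conic $y^{2}=ax^{2}+bx+a$ determine the coefficients $a,b$, and that any further point $(t^{5},R)$ must then satisfy a fixed quadratic relation (this is illustrative code, not part of the proofs):

```python
import sympy as sp

t, U, V, a, b = sp.symbols('t U V a b')

# Impose that (t, U) and (t**3, V) lie on the conic y**2 = a*x**2 + b*x + a.
sol = sp.solve([a * (t**2 + 1) + b * t - U**2,
                a * (t**6 + 1) + b * t**3 - V**2], [a, b], dict=True)[0]

# Closed forms for the coefficients of the conic through the two points.
a_closed = (V**2 - t**2 * U**2) / ((t**2 - 1)**2 * (t**2 + 1))
b_closed = ((t**4 - t**2 + 1) * U**2 - V**2) / (t * (t**2 - 1)**2)

assert sp.simplify(sol[a] - a_closed) == 0
assert sp.simplify(sol[b] - b_closed) == 0

# A fifth point (t**5, R) gives R**2 = a*t**10 + b*t**5 + a, which simplifies to
# the quadratic relation -t**2*(t**4 + 1)*U**2 + (1 + t**2 + t**4)*V**2.
R2 = sp.simplify(sol[a] * (t**10 + 1) + sol[b] * t**5)
assert sp.simplify(R2 - (-t**2 * (t**4 + 1) * U**2 + (1 + t**2 + t**4) * V**2)) == 0
```

The assertions confirm both the closed forms of $a,b$ and the relation that the fifth point of the progression must satisfy.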
In fact, we will establish the existence of such an infinite family of conics on which there exist geometric progression sequences of rational points whose $x$-coordinates are $t^{-7},t^{-5},t^{-3},t^{-1},t,t^3,t^5,t^7$ for some $t\in{\mathbb Q}\setminus\{-1,0,1\}$. We start by assuming that the points $(t,U)$ and $(t^3,V)$ are two rational points in $C({\mathbb Q})$ where $C$ is given by $y^2=f(x)=ax^2+bx+a$. This implies that \begin{eqnarray*} U^{2}&=&at^{2}+bt+a,\\ V^{2}&=&at^{6}+bt^{3}+a, \end{eqnarray*} hence \begin{eqnarray} \label{eq1} a&=&\frac{V^{2}-t^{2}U^{2}}{(t^{2}-1)^{2}(t^{2}+1)},\nonumber\\ b&=&\frac{(t^{4}-t^{2}+1)U^{2}-V^{2}}{t(t^{2}-1)^{2}}. \end{eqnarray} From the symmetry of the polynomial $f(x)$, one observes that if the points $(t,U)$ and $(t^{3},V)$ are in $C({\mathbb Q})$, then so are the points $\displaystyle(t^{-1},Ut^{-1})$ and $(t^{-3},Vt^{-3})$. So we already have four points in geometric progression in $C({\mathbb Q})$. In order to increase the length of the progression, we assume that $(t^{5},R)$ is in $C({\mathbb Q})$, hence $(t^{-5},Rt^{-5})$ is in $C({\mathbb Q})$ as well. Given the description of $a$ and $b$, (\ref{eq1}), one obtains $$R^{2}=-t^{2}(t^{4}+1)U^{2}+(1+t^{2}+t^{4})V^{2}.$$ \begin{Theorem} \label{Thm2} The conic $\mathcal{C}:R^{2}=-t^{2}(t^{4}+1)U^{2}+(1+t^{2}+t^{4})V^{2}$ defined over ${\mathbb Q}(t)$ has infinitely many rational points given by the following parametrization \begin{eqnarray} \label{eq2} U&=&t^{2}(1+t^{4})p^{2}+(1+t^{2}+t^{4})q^{2}-2t(1+t^{2}+t^{4})pq,\nonumber\\ V&=&t^2(1+t^4)p^2+(1+t^{2}+t^{4})q^{2}-2t(1+t^{4})pq,\nonumber\\ R&=&t^{3}(1+t^{4})p^{2}-t(1+t^{2}+t^{4})q^{2}. \end{eqnarray} \end{Theorem} \begin{Proof} The point $(U:V:R)=(1:t:t^2)$ lies in $\mathcal{C}({\mathbb Q}(t))$. This implies the existence of infinitely many rational points on the conic $\mathcal{C}$. These rational points are given by a parametric description, and the parametrization can be found in \cite[p.
69]{Mordell}. \end{Proof} \begin{Corollary} There exists an infinite family of conics $y^2=ax^2+bx+a$, $a,b\in{\mathbb Q}$, containing 6 rational points in geometric progression. In particular, there exist infinitely many hyperelliptic curves described by the equation $y^2=ax^{2n}+bx^n+a$ with 6 rational points in geometric progression. \end{Corollary} In what follows we parametrize the family of conics $C:y^2=ax^2+bx+a$ containing a seventh rational point $(t^7,S)$. We recall that the existence of this seventh rational point implies the existence of an eighth point $(t^{-7},St^{-7})$ on the conic $C$. The point $(t^7,S)$ satisfies the equation of the conic where $a,b$ are described as in (\ref{eq1}) and (\ref{eq2}). This gives rise to the following curve over ${\mathbb Q}(t)$ \begin{multline} \label{eq3} \mathcal{H}:S^2=H_t(p,q):=t^8(1+t^4)^2p^4+4t^5(1+2t^4+2t^8+t^{12})p^3q-2t^4(4+3t^2+9t^4+4t^6+9t^8+3t^{10}+4t^{12})p^2q^2\\ +4t^3(1-t^2+t^4)(1+t^2+t^4)^2pq^3+t^4(1+t^2+t^4)^2q^4. \end{multline} \begin{Theorem} \label{thm3} The curve $\mathcal H$ defined over ${\mathbb Q}(t)$ is an elliptic curve with $\operatorname{rank} \mathcal{H}({\mathbb Q}(t))\ge 1$. \end{Theorem} \begin{Proof} The following point lies in $\mathcal H({\mathbb Q}(t))$ {\footnotesize$$(p:q:S)=\left(\frac{-t}{1-t^2+t^4}:1-\frac{3+2t^2+4t^4+2t^6+3t^8}{2(1+t^4+t^8)}:\frac{t^2(3+4t^2+8t^4+8t^6+10t^8+8t^{10}+8t^{12}+4t^{14}+3 t^{16})}{4(1-t^2+t^4)^2(1+t^2+t^4)}\right).$$} The existence of the latter rational point in $\mathcal H({\mathbb Q}(t))$ implies that the curve $\mathcal H$ is birationally equivalent over ${\mathbb Q}(t)$ to its Jacobian $\mathcal E$ described by $Y^2=4X^3-g_2X-g_3$ where \begin{eqnarray*} g_2&=&\frac{4}{3}t^8(1+t^2+t^4)^2(1+t^2+4t^4+t^6+7t^8+t^{10}+4t^{12}+t^{14}+t^{16}),\\ g_3&=&-\frac{4}{27}t^{12}(1+t^2+t^4)^4(2+t^2+3t^4+15t^6-9t^8+30t^{10}-9t^{12}+15t^{14}+3t^{16}+t^{18}+2t^{20}), \end{eqnarray*} see \cite{Mordell}.
The point $P=(X_P,Y_P)$ where {\footnotesize\begin{eqnarray*}X_P&=&-\frac{t^4(1+t^2+t^4)^2(2-5t^2-2t^4-2t^6-2t^8-5t^{10}+2t^{12})}{3(1+t^2)^4},\\ Y_P&=&\frac{4t^7(1+t^2+t^4)^2}{(1+t^2)^6}(1+t^2+2t^4+2t^6+3t^8+2t^{10}+3t^{12}+2t^{14}+2t^{16}+t^{18}+t^{20})\end{eqnarray*}} is a point in $\mathcal E({\mathbb Q}(t))$. In fact, specializing $t=2$ and using {\sf MAGMA}, \cite{MAGMA}, we find that the specialization of the point $P$ is a point of infinite order on the specialization of ${\mathcal E}$ when $t=2$. It follows that the point $P$ itself is a point of infinite order in ${\mathcal E}({\mathbb Q}(t))$. \end{Proof} \begin{Corollary} \label{cor:hyperelliptic} Fix $t_0\in{\mathbb Q}$. For any nontrivial geometric progression sequence of the form $t_0^{\pm1},t_0^{\pm3},t_0^{\pm5},t_0^{\pm 7}$, there exist infinitely many hyperelliptic curves $C_m:y^2=a_mx^{2n}+b_mx^n+a_m,\;m\in{\mathbb Z}\setminus\{0\},n\ge2,$ such that the numbers $t_0^{\pm i},i=1,3,5,7,$ are the $x$-coordinates of rational points on $C_m$. \end{Corollary} \begin{Proof} The point $P=(p:q:S)$ described by {\footnotesize$$\left(\frac{-t_0}{1-t_0^2+t_0^4}:1-\frac{3+2t_0^2+4t_0^4+2t_0^6+3t_0^8}{2(1+t_0^4+t_0^8)}:\frac{t_0^2(3+4t_0^2+8t_0^4+8t_0^6+10t_0^8+8t_0^{10}+8t_0^{12}+4t_0^{14}+3 t_0^{16})}{4(1-t_0^2+t_0^4)^2(1+t_0^2+t_0^4)}\right)$$} is a point of infinite order on the curve $\mathcal H$ over ${\mathbb Q}(t_0)$, see Theorem \ref{thm3}. For any nonzero $m$, we write $mP=(p_m:q_m:S_m)$ for the $m$-th multiple of $P$. Substituting these values of $p_m,q_m\in{\mathbb Q}(t_0)$ into (\ref{eq2}), one obtains a parametric solution $U_m,V_m,R_m$ of the quadratic $R^2=-t_0^2(t_0^4+1)U^2+(1+t_0^2+t_0^4)V^2$. Hence, one obtains $a_m$ and $b_m$ by substituting $U_m$ and $V_m$ into the formulas of $a,b$ in (\ref{eq1}). We get an infinite family of hyperelliptic curves $C_m:y^2=a_mx^{2n}+b_mx^n+a_m$, where $m$ is nonzero.
This family satisfies the property that the points $(t_0^i,u_i),(t_0^{-i},u_it_0^{-i})$, $i=1,3,5,7$, are lying in $C_m({\mathbb Q})$ for some $u_i\in{\mathbb Q}$. Thus, one obtains an infinite family of hyperelliptic curves with an $8$-term geometric progression sequence of rational points. \end{Proof} \section{A numerical example} The curve $C: y^{2}=a(T)x^{2n}+b(T)x^{n}+a(T),n \in \mathbb N$, where $a(T)$ is given by \\ {\footnotesize $\displaystyle \frac{T^{4n}(1+T^{2n})(1+T^{8n})}{2(1+T^{4n})(-1+T^{2n}-T^{4n}+T^{6n}-T^{8n}+T^{10n})^2}$} and $b(T)$ is defined by {\footnotesize$$\frac{1-2T^{2n}-T^{4n}-12T^{6n}-3T^{8n}-14T^{10n}-13T^{12n}-40T^{14n}-13T^{16n}-14T^{18n}-3T^{20n}-12T^{22n}-T^{24n}-2T^{26n}+T^{28n}}{16T^{3n} (-1+T^{2n})^2(1+T^{4n})^2(1-T^{2n}+T^{4n})^2(1+T^{2n}+T^{4n})^2},$$} has the following $8$-term geometric progression sequence {\footnotesize \begin{gather*} \left(T^{-7},\frac{3+4T^{2n}+5T^{4n}+4T^{6n}+5T^{8n}+4T^{10n}+3T^{12n}}{4T^{5n}(1+2T^{4n}+2T^{8n}+T^{12n})}\right),\\ \left(T^{-5},\frac{1+4T^{2n}+3T^{4n}+4T^{6n}+3T^{8n}+4T^{10n}+T^{12n}}{4T^{4n}(1+2T^{4n}+2T^{8n}+T^{12n})}\right), \left(T^{-3},\frac{1+3T^{4n}+4T^{6n}+3T^{8n}+T^{12n}}{4T^{3n}(1+2T^{4n}+2T^{8n}+T^{12n})}\right),\\ \left(T^{-1},\frac{-1+T^{4n}+4T^{6n}+T^{8n}-T^{12n}}{4T^{2n}(1+2T^{4n}+2T^{8n}+T^{12n})}\right), \left(T,\frac{-1+T^{4n}+4T^{6n}+T^{8n}-T^{12n}}{4T^n(1+2T^{4n}+2T^{8n}+T^{12n})}\right),\\ \left(T^3,\frac{1+3T^{4n}+4T^{6n}+3T^{8n}+T^{12n}}{4(1+2T^{4n}+2T^{8n}+T^{12n})}\right), \left(T^5,\frac{T^n(1+4T^{2n}+3T^{4n}+4T^{6n}+3T^{8n}+4T^{10n}+T^{12n})}{4(1+2T^{4n}+2T^{8n}+T^{12n})}\right),\\ \left(T^7,\frac{T^{2n}(3+4T^{2n}+5T^{4n}+4T^{6n}+5T^{8n}+4T^{10n}+3T^{12n})}{4(1+2T^{4n}+2T^{8n}+T^{12n})}\right). 
\end{gather*}} For example, when $n=2$ and $t=2$, one has the elliptic curve \[y^2=\frac{142608512}{250308167443425}x^4+\frac{62553486161362657 }{65873099809751270400}x^2+\frac{142608512}{250308167443425}\] which contains the following $8$-term geometric progression sequence $$(2^{-7},\frac{54871363}{69258448896}),(2^{-5},\frac{21185345}{17314612224}),(2^{-3},\frac{5663659}{1442884352}),(2^{-1},\frac{16695041}{1082163264}),$$ $$(2,\frac{16695041}{270540816}),(2^3,\frac{5663659}{22545068}),(2^5,\frac{21185345}{16908801}),(2^7,\frac{219485452}{16908801}).$$ \section{A remark on geometric progressions of length 10} In order to extend the length of the $8$-term geometric progression sequence we constructed in Corollary \ref{cor:hyperelliptic} to a geometric progression of length $10$, one assumes that a point of the form $(t^9,S')$, and consequently the point $(t^{-9},S't^{-9})$, exists on the hyperelliptic curve $y^2=a(t)x^{2n}+b(t)x^n+a(t)$. This yields the existence of a rational point $(p:q:S')$ on the elliptic curve $\mathcal L$ defined by {\footnotesize\begin{multline*} S'^2=H_t'(p,q):=t^{10}(1+t^4)^2p^4+4t^5(1+t^2+2t^4+2t^6+2t^8+2t^{10}+2t^{12}+t^{14}+t^{16})p^3q\\ -2t^4(4+6t^2+11t^4+11t^6+12t^8+11t^{10}+11t^{12}+6t^{14}+4t^{16})p^2q^2\\ +4t^3(1+2t^2+3t^4+3t^6+3t^8+3t^{10}+3t^{12}+2t^{14}+t^{16})pq^3 +t^6(1+t^2+t^4)^2q^4. \end{multline*}} One recalls that the pair $(p,q)$ makes up the first two coordinates of a point $(p:q:S)$ on the elliptic curve $\mathcal H:S^2=H_t(p,q)$ defined over ${\mathbb Q}(t)$. This implies that one needs to find a solution $(p,S,S')$ on the genus $5$ curve $\mathcal C$ defined by the affine equation \[S^2=H_t(p,1),\;S'^2=H'_t(p,1).\] In view of Faltings' Theorem, a genus five curve possesses finitely many rational points. Therefore, one reaches the following result. \begin{Proposition} Fix $t_0\in{\mathbb Q}$.
For any nontrivial $10$-term geometric progression sequence of the form $t_0^{\pm1},t_0^{\pm3},t_0^{\pm5},t_0^{\pm 7},t_0^{\pm 9}$, there exist finitely many hyperelliptic curves of the form $C:y^2=ax^{2n}+bx^n+a,\,a,b\in{\mathbb Q},$ such that the numbers $t_0^{\pm i},i=1,3,5,7,9,$ are the $x$-coordinates of rational points in $C({\mathbb Q}).$ \end{Proposition} \hskip-12pt\emph{\bf{Acknowledgements.}} We would like to thank Professor Nabil Youssef, Cairo University, for his support, careful reading of an earlier draft of the paper, and several useful suggestions that improved the manuscript.
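As a closing computational remark, the identities used above lend themselves to a quick machine check with exact rational arithmetic. The following Python sketch (an illustration only, not part of the proofs; the function names are ours) verifies that the point of Theorem \ref{thm3} satisfies $S^2=H_t(p,q)$ at sample rational values of $t$, and evaluates the coefficient formulas $a(T),b(T)$ of the numerical example in terms of $s=T^n$.

```python
from fractions import Fraction as F

def on_curve_H(t):
    """Check that the point of Theorem 3 satisfies S^2 = H_t(p, q) exactly."""
    t = F(t)
    p = -t / (1 - t**2 + t**4)
    q = 1 - (3 + 2*t**2 + 4*t**4 + 2*t**6 + 3*t**8) / (2 * (1 + t**4 + t**8))
    S = (t**2 * (3 + 4*t**2 + 8*t**4 + 8*t**6 + 10*t**8
                 + 8*t**10 + 8*t**12 + 4*t**14 + 3*t**16)
         / (4 * (1 - t**2 + t**4)**2 * (1 + t**2 + t**4)))
    H = (t**8 * (1 + t**4)**2 * p**4
         + 4*t**5 * (1 + 2*t**4 + 2*t**8 + t**12) * p**3 * q
         - 2*t**4 * (4 + 3*t**2 + 9*t**4 + 4*t**6 + 9*t**8
                     + 3*t**10 + 4*t**12) * p**2 * q**2
         + 4*t**3 * (1 - t**2 + t**4) * (1 + t**2 + t**4)**2 * p * q**3
         + t**4 * (1 + t**2 + t**4)**2 * q**4)
    return S**2 == H

def coeffs(T, n):
    """a(T), b(T) of the numerical example, written in terms of s = T^n."""
    s = F(T)**n
    a = (s**4 * (1 + s**2) * (1 + s**8)
         / (2 * (1 + s**4) * (-1 + s**2 - s**4 + s**6 - s**8 + s**10)**2))
    b = ((1 - 2*s**2 - s**4 - 12*s**6 - 3*s**8 - 14*s**10 - 13*s**12
          - 40*s**14 - 13*s**16 - 14*s**18 - 3*s**20 - 12*s**22
          - s**24 - 2*s**26 + s**28)
         / (16 * s**3 * (-1 + s**2)**2 * (1 + s**4)**2
            * (1 - s**2 + s**4)**2 * (1 + s**2 + s**4)**2))
    return a, b

def point_with_x_T(T, n):
    """Check the listed point with x = T against y^2 = a x^{2n} + b x^n + a."""
    a, b = coeffs(T, n)
    s = F(T)**n
    y = (-1 + s**4 + 4*s**6 + s**8 - s**12) / (4*s*(1 + 2*s**4 + 2*s**8 + s**12))
    return y**2 == a * F(T)**(2*n) + b * F(T)**n + a
```

At $t=1$, for instance, the point of Theorem \ref{thm3} is $(-1,-4/3,14/3)$ and both sides of the quartic equal $196/9$; at $T=2$, $n=2$ the coefficients reproduce the explicit curve displayed above.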
\section{Introduction} The Major Atmospheric Gamma-ray Imaging Cherenkov (MAGIC) telescope, located on the Canary Island of La Palma at 2200\,m~a.s.l., is capable of extending very high energy observations to energies previously unreachable and detecting new sources at energies down to 50\,GeV. One of these sources is the BL Lac type object PG\,1553+113, observed for the first time in this energy range in April and May 2005 by the MAGIC telescope and the High Energy Stereoscopic System (H.E.S.S.). From these observations a faint signal was measured, and the detection was confirmed by further observations \citep{hess1553,magic1553}. In the subsequent years, additional VHE data were taken. From the combined data sets, strong signals were found: for H.E.S.S.\ at 10.2\,$\sigma$ significance \citep{hess1553-2} and for MAGIC at 15.0\,$\sigma$ significance \citep{phd}. In addition, the VHE measurements allowed the unknown redshift of the source to be constrained. Until now the redshift of PG\,1553+113 has been unknown, since neither emission nor absorption lines could be found, nor the host galaxy resolved. Several lower limits were determined \citep{carangelo,Sbarufatti2005,Sbarufatti2006,Scarpa2000,Treves2007}. With the MAGIC and H.E.S.S.\ measurements, upper limits could be determined \citep{hess1553,mazin,phd}. To study the spectral energy distribution of a source, simultaneous data from different wavelengths are mandatory. Therefore, a multi\-wave\-length (MWL) campaign was performed in July 2006 to observe PG\,1553+113. This paper concentrates on the data taken by MAGIC during this MWL campaign. \section{Observations and Data Quality} Between April 2005 and April 2007, MAGIC observed PG\,1553+113 for a total of 78~hours. Part of these data were acquired during a MWL campaign in 2006 July with the H.E.S.S.\ array of IACTs, the X-ray satellite Suzaku and the optical telescope KVA.
Suzaku observed the source between 24 July, 14:26~UTC and 25 July, 19:17~UTC, and H.E.S.S. between 22 July and 27 July. From the KVA, data between 21 July and 27 July are available. The MAGIC telescope observed PG\,1553+113 between 14 July and 27 July for 9.5~hours at zenith distances between 18$^\circ$ and 35$^\circ$. The data were acquired in wobble mode, where the source was tracked with an offset of $\pm0.4^\circ$ from the center of the camera, which enabled simultaneous measurement of the source and the background. One hour of data was excluded due to technical problems. The quality of the entire data set acquired during the MWL campaign was affected by calima, i.e.\ sand-dust from the Sahara in the atmosphere. For the affected data, the nightly values of atmospheric absorption ranged between 5\% and 40\%. To account for the absorption of the Cherenkov light, correction factors were calculated and applied to the data (see Sect.~\ref{corr}). \section{Analysis} The data were processed by an automated analysis pipeline \citep{autom,mars} at the data center in W\"urzburg. The analysis includes an absolute calibration with muons \citep{muoncal}, an absolute mispointing correction \citep{starg}, and it uses the arrival time information of the pulses of neighboring pixels for noise subtraction and background suppression. In determining the background, three OFF regions were used, providing a scale factor of 1/3 for the background measurement. To suppress the background, a dynamical cut in Area (Area=$\pi\cdot$WIDTH$\cdot$LENGTH) versus SIZE and a cut in $\vartheta$ were applied. More details of the cuts can be found in \citet{cuts}, and the aforementioned image parameters are described by \citet{hillas}.
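The extinction-to-transmission step behind the calima correction of Sect.~\ref{corr} can be sketched as follows. The conversion from excess extinction in magnitudes to atmospheric transmission is standard; the reference extinction value and the interface below are purely illustrative and are not the actual pipeline of the analysis.

```python
CLEAR_SKY_EXTINCTION = 0.09  # illustrative clear-night extinction [mag/airmass]

def transmission(nightly_extinction, airmass=1.0):
    """Atmospheric transmission inferred from the excess extinction (in
    magnitudes) measured, e.g., by the Carlsberg Meridian Telescope."""
    excess = (nightly_extinction - CLEAR_SKY_EXTINCTION) * airmass
    return 10.0 ** (-0.4 * excess)

def corrected_calibration(calibration_factor, nightly_extinction, airmass=1.0):
    """Scale a calibration factor so the light absorbed by calima is compensated."""
    return calibration_factor / transmission(nightly_extinction, airmass)
```

An excess extinction of 0.4\,mag corresponds to a transmission of $10^{-0.16}\approx0.69$, i.e.\ roughly the upper end of the 5\%--40\% absorption range quoted above.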
To account for the steeper spectrum of PG\,1553+113, the analysis presented here applied a cut in Area that was less restrictive at lower energies compared to the standard cut used by the automated analysis, which has been optimized for Crab Nebula data over several years. In generating the spectrum, the cut in Area was selected to ensure that more than 90\% of the simulated gammas survived. To study the dependence of the spectral shape on the cut efficiency, a different cut in Area, with cut efficiencies between 50\% and 95\% for the entire energy range, was applied. \section{Correction of the Effect of Calima} \label{corr} Calima, also known as Saharan Air Layer (SAL), is a layer in the atmosphere that transports sand-dust from the Sahara in a westerly direction over the Atlantic Ocean. The SAL is usually situated between 1.5\,km and 5.5\,km~a.s.l.\ \citep{sal}. Since the Canary Islands are close to the North African coast, the MAGIC observations are probably affected by additional light absorption in the atmosphere when calima occurs. From extinction measurements of the Carlsberg Meridian Telescope \citep{cmt,cmtweb}, which are available for each night, the loss of light due to calima was calculated. To correct the data for absorption by the atmosphere, the calibration factors were adapted for each night separately and the data were reprocessed. More details of the method are provided by \citet{calima}. \section{Results} The 8.5~hours of data from PG\,1553+113 provided a signal with a significance of 5.0\,$\sigma$ according to \citet{lima}. The $\vartheta^2$ distributions for the ON- and normalized OFF-source measurement are shown in Fig.~\ref{thetasq}. \begin{figure}[ht] \begin{center} \resizebox{\hsize}{!}{\includegraphics{9048fig1.eps}} \caption{Distributions of $\vartheta^2$ for ON-source events (black crosses) and normalized OFF-source events (gray shaded area) from 8.5~hours of data. The dashed line represents the cut applied in $\vartheta^2$.
\label{thetasq}} \end{center} \end{figure} The differential spectrum measured by MAGIC is shown in Fig.~\ref{spectrum}. In this plot, a set of spectra obtained with different cut efficiencies and different simulated spectra is indicated by a gray band. The data points of the spectrum are given in Table~\ref{tablespectrum} including the statistical errors. The systematic errors of the analysis are discussed in \citet{crab}. Fitting the differential spectrum with a power law yields a flux of $(1.4\pm0.3)\cdot10^{-6}~{\rm ph\,TeV^{-1}s^{-1}m^{-2}}$ at 200\,GeV and a spectral index of $-4.1\pm0.3$. This result is consistent with the data (fit probability 45\%) and in good agreement with previous measurements in 2005 and 2006 \citep{hess1553,magic1553}. Within the errors, the simultaneous measurements of the H.E.S.S. telescopes in the energy range above 225\,GeV are in agreement with the MAGIC results, although the fit of the differential spectrum yields a spectral index of $-5.0\pm0.7$ \citep{hess1553-2}. The integral flux above 150\,GeV obtained from this analysis is $(2.6\pm0.9)\cdot10^{-7}~{\rm ph\,s^{-1}m^{-2}}$. \begin{figure} \begin{center} \resizebox{\hsize}{!}{\includegraphics{9048fig2.eps}} \caption{Differential energy spectrum of PG\,1553+113. The horizontal error bars indicate the width of the energy bins. The vertical error bars illustrate the statistical errors. The gray band corresponds to a set of spectra obtained with different cut efficiencies and simulated spectra.
\label{spectrum}} \end{center} \end{figure} \begin{table} \caption{Flux for the spectral points in Fig.~\ref{spectrum} including the statistical errors.} \label{tablespectrum} \centering \begin{tabular}{c c c} \hline\hline E & F & Statistical Error \\ $[$GeV$]$ & [ph\,${\rm TeV^{-1}s^{-1}m^{-2}}$] & [ph\,${\rm TeV^{-1}s^{-1}m^{-2}}$]\\ \hline 91 & 3.36$\cdot10^{-5}$ & 8.11$\cdot10^{-6}$ \\ 139 & 7.37$\cdot10^{-6}$ & 2.68$\cdot10^{-6}$ \\ 211 & 1.67$\cdot10^{-6}$ & 6.04$\cdot10^{-7}$ \\ 320 & 9.61$\cdot10^{-8}$ & 9.61$\cdot10^{-8}$ \\ 486 & 6.18$\cdot10^{-8}$ & 4.00$\cdot10^{-8}$ \\ \hline \end{tabular} \end{table} \begin{figure} \begin{center} \resizebox{\hsize}{!}{\includegraphics{9048fig3.eps}} \caption{Night-by-night light curve for the integral flux above 150\,GeV in ${\rm ph\,s^{-1}m^{-2}}$ (upper part) and the flux in the optical R-band (lower part) between July 15${\rm ^{th}}$ and 27${\rm^{th}}$ 2006. For the first nights of the MAGIC observations, no data by the optical telescope KVA were obtained. \label{lc}} \end{center} \end{figure} \begin{table} \centering \caption{Date, start time, duration, and flux above 150\,GeV for the MAGIC observations carried out between July 15${\rm ^{th}}$ and 27${\rm ^{th}}$ 2006. 
For one day, the flux level could not be determined because no signal for energies above 150\,GeV was detectable in this data set.} \label{tablelc} \begin{tabular}{l c} \hline\hline Observation [UTC] & F (\textgreater 150\,GeV) \\ Start Time (Duration) & [${\rm ph\,s^{-1}m^{-2}}$]\\ \hline 15.7.2006 21:45 (1.4\,h) & $(1.55\pm1.46)\cdot10^{-7}$\\%21:45-23:15 16.7.2006 21:45 (1.37\,h) & $(4.79\pm2.31)\cdot10^{-7}$\\%21:45-23:15 17.7.2006 22:15 (0.85\,h) & $(3.47\pm2.14)\cdot10^{-7}$\\%22:15-23:10 21.7.2006 22:13 (0.62\,h) & $(2.99\pm2.04)\cdot10^{-7}$\\%22:13-22:53 23.7.2006 21:38 (0.88\,h) & -\\%21:38-22:46 24.7.2006 21:50 (1.09\,h) & $(3.34\pm2.38)\cdot10^{-7}$\\%21:50-23:06 25.7.2006 21:38 (1.31\,h) & $(1.50\pm2.18)\cdot10^{-7}$\\%21:38-23:06 26.7.2006 21:37 (0.92\,h) & $(1.27\pm1.87)\cdot10^{-7}$\\%21:37-22:36 \hline \end{tabular} \end{table} To check whether a flare occurred during the eight nights of observation, the flux above 150\,GeV was calculated on a night-by-night basis (see Table~\ref{tablelc}). The corresponding light curve is shown in the upper part of Fig.~\ref{lc}. Since the data sets for single days have durations shorter than one and a half hours (0.62\,h\,-\,1.4\,h), the data points represent only signals of a significance level between 0.7\,$\sigma$ and 2.1\,$\sigma$. Consequently, no strong conclusions about the night-to-night variability in the flux can be drawn. Within the errors, the measurement is consistent with a constant flux (fit probability 82\%). Contemporaneously with the MAGIC observations, the optical telescope KVA acquired data in the R-band. For the first nights, no data were available. The flux for the remaining nights is shown in the lower part of Fig.~\ref{lc}. \section{Conclusion and Outlook} PG\,1553+113 was observed in July 2006 as part of a MWL campaign with the MAGIC telescope. After correcting the data for the effect of calima, the analysis detected a gamma-ray signal of 5.0 standard deviations.
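The constant-flux statement above can be cross-checked from the rounded values in Table~\ref{tablelc} with a one-parameter weighted least-squares fit; the sketch below uses the closed form of the $\chi^2$ survival function for even degrees of freedom and gives a fit probability close to the quoted 82\% (small differences from rounding of the tabulated values are expected).

```python
import math

# Nightly fluxes above 150 GeV from Table 2, in units of 1e-7 ph s^-1 m^-2
# (the night without a detectable signal is omitted).
flux = [1.55, 4.79, 3.47, 2.99, 3.34, 1.50, 1.27]
err = [1.46, 2.31, 2.14, 2.04, 2.38, 2.18, 1.87]

def constant_fit(f, e):
    """Weighted mean and chi^2 of a fit with a constant."""
    w = [1.0 / s**2 for s in e]
    mean = sum(wi * fi for wi, fi in zip(w, f)) / sum(w)
    chi2 = sum(wi * (fi - mean) ** 2 for wi, fi in zip(w, f))
    return mean, chi2

def chi2_sf(chi2, dof):
    """P(X > chi2) for a chi^2 distribution, in closed form for even dof."""
    assert dof % 2 == 0
    x = chi2 / 2.0
    return math.exp(-x) * sum(x**k / math.factorial(k) for k in range(dof // 2))

mean, chi2 = constant_fit(flux, err)
prob = chi2_sf(chi2, dof=len(flux) - 1)
```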
Within the statistical errors, the differential energy spectrum is compatible with those derived from previous measurements, including the spectrum obtained by H.E.S.S.\ during this campaign. The inter-night light curve shows no significant variability. The measured flux and reconstructed spectrum will be used in studies of the spectral energy distribution, which will include other data acquired during the MWL campaign \citep{mwl}. \section*{Acknowledgements} We would like to thank the IAC for the excellent working conditions at the Observatorio del Roque de los Muchachos in La Palma. The support of the German BMBF and MPG, the Italian INFN and the Spanish CICYT is gratefully acknowledged. This work was also supported by ETH research grant TH-34/04-3 and by Polish grant MNiI 1P03D01028. \bibliographystyle{aa}
\section{Introduction} \subsection{Overview of Euclidean field theory} \label{sec:overview} A Euclidean field theory of a scalar field on a domain $\Lambda \subset \mathbb{R}^d$ is specified by a formal probability measure on a space of fields\footnote{Rigorously, the space of fields is the Schwartz distribution space $\cal S'(\Lambda, \mathbb{R}^N)$.} $\phi \vcentcolon}%{\mathrel{\vcenter{\baselineskip0.75ex \lineskiplimit0pt \hbox{.}\hbox{.}}} \Lambda \to \mathbb{R}^N$ given by \begin{equation} \label{mu_def} \mu(\r d \phi) = \frac{1}{c} \, \r e^{-S(\phi)} \, \r D \phi\,, \end{equation} where $\r D \phi = \prod_{x \in \Lambda} \r d \phi(x)$ is the formal uniform measure on the space of fields, and $S$ is the action. The latter is typically the integral over $\Lambda$ of a local function of the field $\phi$ and its gradient. One of the simplest field theories with nontrivial interaction is the $N$-component \emph{Euclidean $\phi^4_d$ theory}, whose action is given by \begin{equation} \label{S_def} S(\phi) \mathrel{\vcenter{\baselineskip0.65ex \lineskiplimit0pt \hbox{.}\hbox{.}}}= - \int_\Lambda \r d x \, \phi(x) \cdot (\theta + \Delta / 2) \phi(x) + \frac{\lambda}{2} \int_\Lambda \r d x \, \abs{\phi(x)}^4\,, \end{equation} where $\theta$ is a constant, $\lambda$ is a coupling constant, $\Delta$ is the Laplacian on $\Lambda$ with appropriate boundary conditions, and $\abs{\cdot}$ denotes the Euclidean norm on $\mathbb{R}^N$. Euclidean field theories originally arose in high-energy physics in $d = 4$ space-time dimensions, through an analytic continuation of the time variable of the quantum field $\phi$, which replaces the Minkowski space-time metric with a Euclidean one \cite{schwinger1958euclidean, nakano1959quantum}. Subsequently, Euclidean field theories have proven of great importance in statistical mechanics in $d \leq 3$ dimensions, in particular through their connection with the theory of phase transitions and critical phenomena. 
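The action \eqref{S_def} is straightforward to discretize. The following numerical sketch (one real field component on an $n\times n$ periodic grid over the unit torus; all parameter values illustrative) evaluates it; for a constant field $\phi\equiv c$ the Laplacian term vanishes and $S=-\theta c^2+\frac{\lambda}{2}c^4$, which gives a simple consistency check.

```python
import numpy as np

def action(phi, theta, lam, h):
    """Discretization of S(phi) = -int phi (theta + Delta/2) phi
    + (lam/2) int |phi|^4 for a real field phi on a 2-d periodic grid
    with lattice spacing h (5-point Laplacian)."""
    lap = (np.roll(phi, 1, axis=0) + np.roll(phi, -1, axis=0)
           + np.roll(phi, 1, axis=1) + np.roll(phi, -1, axis=1)
           - 4.0 * phi) / h**2
    quadratic = -np.sum(phi * (theta * phi + 0.5 * lap)) * h**2
    quartic = 0.5 * lam * np.sum(phi**4) * h**2
    return float(quadratic + quartic)
```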
The works \cite{symanzik1966euclidean, symanzik1969euclidean} recognized the analogy between Euclidean field theories and classical statistical mechanics, which was followed by a purely probabilistic formulation of Euclidean field theories in \cite{nelson1973free, nelson1973probability}. The rigorous study of field theories of the form \eqref{mu_def} has been a major topic in mathematical physics since the late sixties; see e.g. \cite{glimm2012quantum, Simon74} for reviews. Euclidean field theories also play a central role in the theory of stochastic nonlinear partial differential equations. Formally, \eqref{mu_def} is the stationary measure of the stochastic nonlinear heat equation \begin{equation*} \partial_t \phi = - \frac{1}{2} \nabla S(\phi) + \xi = (\theta + \Delta / 2) \phi - \lambda \abs{\phi}^2 \phi + \xi \end{equation*} with space-time white noise $\xi$, which can be regarded as the Langevin equation for a time-dependent field $\phi$ with potential given by the action $S$ in \eqref{S_def}. Constructing measures of the form \eqref{mu_def} by exhibiting them as stationary measures of stochastic nonlinear partial differential equations is the goal of stochastic quantization developed in \cite{nelson1966derivation, faris1982large, parisi1981perturbation, lebowitz1988statistical}. See for instance \cite{hairer2014theory,gubinelli2015paracontrolled, kupiainen2016renormalization,da2003strong} for recent developments. In addition, Euclidean field theories are of great importance in the probabilistic Cauchy theory of nonlinear dispersive equations. For $N = 2$ and identifying $\mathbb{R}^2 \equiv \mathbb{C}$, the measure \eqref{mu_def} is formally invariant under the nonlinear Schrödinger (NLS) equation \begin{equation} \label{NLS} \r i \partial_t \phi = \frac{1}{2} \nabla S(\phi) = -(\theta + \Delta / 2) \phi + \lambda \abs{\phi}^2 \phi \,. 
\end{equation} Gibbs measures \eqref{mu_def} for the NLS \eqref{NLS} have proven a powerful tool for constructing almost sure global solutions with random initial data of low regularity. One considers the flow of the NLS \eqref{NLS} with random initial data distributed according to \eqref{mu_def}. The invariance of the measure \eqref{mu_def} under the NLS flow (in low dimensions) serves as a substitute for energy conservation, which is not available owing to the low regularity of the solutions. See for instance the seminal works \cite{bourgain1994periodic,bourgain1994_z,bourgain1996_2d,bourgain1997invariant,bourgain2000_infinite_volume,lebowitz1988statistical} as well as \cite{Carlen_Froehlich_Lebowitz_2016, Carlen_Froehlich_Lebowitz_Wang_2019,GLV1,GLV2,McKean_Vaninsky2,NORBS,NRBSS,BourgainBulut4,BrydgesSlade,BurqThomannTzvetkov,FOSW,Thomann_Tzvetkov} and references given there for later developments. The main difficulty in all of the works cited above is that, in dimensions larger than one, under the measure \eqref{mu_def} the field $\phi$ is almost surely a distribution of negative regularity, and hence the interaction term \begin{equation*} V(\phi) \mathrel{\vcenter{\baselineskip0.65ex \lineskiplimit0pt \hbox{.}\hbox{.}}}= \frac{\lambda}{2} \int_\Lambda \r d x \, \abs{\phi(x)}^4 \end{equation*} in \eqref{S_def} is ill-defined. This is an \emph{ultraviolet} problem: a divergence for large wave vectors (i.e.\ spatial frequencies) producing small-scale singularities in the field. As the dimension $d = 1,2,3$ increases, the difficulty of making sense of the measure in \eqref{mu_def} increases significantly. To outline the rigorous construction of the measure in \eqref{mu_def}, we introduce an ($\mathbb{R}^N$-valued) Gaussian free field on $\Lambda$ whose law $\P$ is the Gaussian measure on the space of fields with mean zero and covariance $(2 \kappa - \Delta)^{-1}$, where $\kappa > 0$ is some positive constant. 
Then we write \normalcolor \begin{equation} \label{mu_P} \mu(\r d \phi) = \frac{1}{\zeta} \r e^{-V(\phi)} \, \P(\r d \phi) \end{equation} for some normalization constant $\zeta > 0$. For $d = 1$, the right-hand side of \eqref{mu_P} makes sense as is, since, under $\P$, the field $\phi$ is almost surely a continuous function and hence $V(\phi)$ is almost surely nonnegative and finite. This provides a simple construction of \eqref{mu_def} for $d = 1$ and $\kappa = -\theta$. \normalcolor For $d > 1$, the simple approach just sketched no longer works, since $\phi$ is almost surely of negative regularity, and the interaction term $V(\phi)$ has to be renormalized by subtracting suitably chosen infinite counterterms. The most elementary renormalization is Wick ordering of $V(\phi)$ with respect to the Gaussian measure $\P$, denoted by $\wick{\cdot}$ (see Appendix \ref{sec:wick}). After Wick ordering, the interaction term becomes \begin{align} V(\phi) &= \frac{\lambda}{2} \int_\Lambda \r d x \, \wick{\abs{\phi(x)}^4} \notag \\ \label{Wick_intro} &= \frac{\lambda}{2} \int_\Lambda \r d x \pbb{\abs{\phi(x)}^4 - \frac{4 + 2N}{N} \, \mathbb{E} \qb{\abs{\phi(x)}^2}\, \abs{\phi(x)}^2 + \frac{N+2}{N} \, \mathbb{E} \qb{\abs{\phi(x)}^2}^2}\,, \end{align} where $\mathbb{E}$ denotes expectation with respect to $\P$. The second and third terms on the right-hand side of \eqref{Wick_intro} are infinite counterterms, which may be regarded as mass and energy renormalizations, respectively. Hence, for $d > 1$, the constant $\theta$ in \eqref{S_def} is formally $-\infty$. \normalcolor To make rigorous sense of \eqref{Wick_intro} in dimension $d=2$, one has to mollify $\phi$ by convolving it with an approximate delta function, and then show that as the mollifier is removed, the right-hand side of \eqref{Wick_intro} converges in $L^2(\P)$ (see Section \ref{sec:classical_field} below for more details). 
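The counterterm coefficients in \eqref{Wick_intro} are exactly those that give the Wick-ordered monomial mean zero. At a single point this is plain Gaussian moment algebra: for an $N$-component Gaussian vector with i.i.d.\ components of variance $\sigma^2$, Isserlis' theorem gives $\mathbb{E}\abs{\phi}^4=N(N+2)\sigma^4$, and the combination in \eqref{Wick_intro} then has vanishing expectation. A short exact-arithmetic check of this one-site toy computation:

```python
from fractions import Fraction as F

def wick_quartic_mean(N, sigma2=F(1)):
    """E[:|phi|^4:] for an N-component Gaussian vector with iid components of
    variance sigma2, using E[phi_i^4] = 3 sigma2^2 and
    E[phi_i^2 phi_j^2] = sigma2^2 for i != j (Isserlis)."""
    m2 = N * sigma2                                    # E[|phi|^2]
    m4 = 3 * N * sigma2**2 + N * (N - 1) * sigma2**2   # E[|phi|^4]
    return m4 - F(4 + 2 * N, N) * m2 * m2 + F(N + 2, N) * m2**2
```

For $N=2$, for instance, $\mathbb{E}\abs{\phi}^4=8\sigma^4$ and the two counterterms contribute $-16\sigma^4$ and $+8\sigma^4$, so the mean cancels exactly.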
It is not hard to show that for $d = 2$ the renormalization on the right-hand side of \eqref{Wick_intro} yields a well-defined interaction term $V(\phi) \in L^2(\P)$. However, owing to the mass renormalization in \eqref{Wick_intro}, after Wick ordering, $V(\phi)$ is unbounded from below, and the integrability of $\r e^{-V(\phi)}$ with respect to $\P$ represents a nontrivial problem, which was successfully solved in the landmark work of Nelson \cite{nelson1973free, nelson1973probability}. For $d = 3$, it is easy to see that, even after Wick ordering, $V(\phi)$ almost surely does not exist in $L^2(\P)$. Further, a simple expansion of the exponential $\r e^{-V(\phi)}$ in the two-point correlation function, $\frac{1}{\zeta} \int \phi(x) \, \phi(y) \, \r e^{-V(\phi)} \, \P(\r d \phi)$, yields a divergent term already at second order, associated with the so-called sunset diagram of quantum field theory. Hence, a further mass renormalization of $V(\phi)$ is required, which results in a measure $\mu$ that is mutually singular with respect to the free-field Gaussian measure $\P$. The mathematically rigorous construction of the Euclidean $\phi^4_3$ theory, first achieved in the seminal work of Glimm and Jaffe \cite{glimm1973positivity}, is one of the major successes of the constructive field theory programme started in the sixties. By now, several different constructions of this theory have been developed, based on, first, phase cell expansions \cite{glimm1973positivity,feldman1976wightman,park1977convergence,glimm2012quantum}, then on renormalization group methods \cite{brydges1995short,gawedzki1985asymptotic,benfatto1978some}, later on correlation inequalities \cite{brydges1983new}, and, most recently, on paracontrolled calculus \cite{catellier2018paracontrolled, gubinelli2018pde}, as well as variational methods \cite{barashkov2020variational}. 
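The stochastic quantization dynamics mentioned in the overview, on which some of the constructions just cited are based, can be illustrated in zero space dimensions, where the field is a single real variable and the Langevin equation reads $\r d \phi = -\frac{1}{2} S'(\phi) \, \r d t + \r d W$ with stationary density proportional to $\r e^{-S}$. The sketch below (toy action $S(\phi)=\phi^2+\phi^4$; step size, sample count and seed are illustrative choices) compares a long Euler--Maruyama run with the exact second moment obtained by quadrature.

```python
import numpy as np

def langevin_samples(steps=200_000, dt=0.01, seed=0):
    """Euler-Maruyama for d(phi) = -S'(phi)/2 dt + dW with S = phi^2 + phi^4."""
    rng = np.random.default_rng(seed)
    noise = np.sqrt(dt) * rng.standard_normal(steps)
    phi, out = 0.0, np.empty(steps)
    for i in range(steps):
        phi += -(phi + 2.0 * phi**3) * dt + noise[i]  # drift = -S'(phi)/2
        out[i] = phi
    return out

def exact_second_moment():
    """E[phi^2] under Z^{-1} e^{-S} by a Riemann sum (the density decays fast)."""
    x = np.linspace(-3.0, 3.0, 20_001)
    w = np.exp(-(x**2 + x**4))
    return float(np.sum(x**2 * w) / np.sum(w))
```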
For $d \geq 4$, it is expected, and indeed proven in some cases, that the $\phi^4_d$ theory is trivial: any renormalization of $V(\phi)$ resulting in a well-defined measure $\mu$ yields a (generalized free-field) Gaussian measure. For $d \geq 5$, this triviality was proven in \cite{aizenman1982geometric, frohlich1982triviality}. Recently, the triviality of $\phi^4_4$ for $N = 1$ was established in \cite{aizenman2021marginal}. \subsection{The $\phi^4_2$ theory as a limit of a Bose gas} In this paper we establish a relationship between Euclidean field theories and interacting Bose gases with repulsive two-body interactions in two spatial dimensions, by showing that the Euclidean $\phi^4_2$ theory describes the limiting behaviour of an interacting Bose gas at positive temperature. The limiting regime is a high-density limit in a box\footnote{For conciseness, in this paper we assume that $\Lambda$ is the unit torus, although the actual shape of $\Lambda$ and the boundary conditions are not essential for our proof; see Remark \ref{rem:general_Lambda} below.} of fixed size, where the range of the interaction is much smaller than the diameter of the box. The emerging field is complex-valued, corresponding to $N = 2$. This result provides a rigorous derivation of the $\phi^4_2$ theory starting from a realistic model of statistical mechanics. Viewed differently, we introduce a new regularization of the $\phi^4_2$ theory in terms of an interacting Bose gas, in addition to the commonly used smooth mollifiers or lattice approximations.
To explain our result more precisely, we recall that a quantum system of $n$ spinless non-relativistic bosons of mass $m$ in $\Lambda$ is described by the Hamiltonian \begin{equation*} \bb H_n \mathrel{\vcenter{\baselineskip0.65ex \lineskiplimit0pt \hbox{.}\hbox{.}}}= - \sum_{i = 1}^n \frac{\Delta_i}{2m} + \frac{g}{2} \sum_{i,j = 1}^n v(x_i - x_j) \end{equation*} acting on the space $\cal H_n$ of square-integrable wave functions that are symmetric in their arguments $x_1, \dots, x_n$ and supported in $\Lambda^n$. Here $\Delta_i$ is the Laplacian in the variable $x_i$, $g$ is a coupling constant, and $v$ is a repulsive (i.e.\ of positive type) two-body interaction potential. We consider a system in the grand canonical ensemble at positive temperature, characterized by the density matrix \begin{equation} \label{gc_density} \frac{1}{Z} \bigoplus_{n \in \mathbb{N}} \r e^{-\beta(\bb H_n - \theta n)} \end{equation} acting on Fock space $\cal F = \bigoplus_{n \in \mathbb{N}} \cal H_n$, where $\beta < \infty$ is the inverse temperature, $\theta$ is the chemical potential, and $Z$ is a normalization factor. The limiting regime of this paper is obtained by introducing two parameters, $\nu, \epsilon > 0$, where $\nu = \frac{\beta}{m} = \sqrt{\beta g}$, and the potential $v$ is taken to be an approximate delta function of range $\epsilon$. We suppose that $\nu,\epsilon \to 0$ under the technical constraint $\epsilon \geq \exp\pb{- (\log \nu^{-1})^{1/2 - c}}$, for some constant $c > 0$. We show that there exists a suitable renormalization of the chemical potential $\theta \equiv \theta_\nu^\epsilon$ such that the reduced density matrices of the quantum state \eqref{gc_density} converge to the correlation functions of the field theory \eqref{mu_def}, \eqref{S_def}. Previously, this result was obtained for $d = 1$ in \cite{lewin2015derivation}, where, as explained in Section \ref{sec:overview}, no renormalization is required. 
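For orientation on the scaling, the non-interacting ($g=0$) version of \eqref{gc_density} is explicit: each Fourier mode carries a Bose--Einstein occupation, and on the unit torus $\beta \epsilon_k = 2\pi^2 \nu \abs{k}^2$ with $\nu = \beta/m$. The sketch below (ideal gas only, $d=2$, illustrative parameter values) shows the expected particle number growing as $\nu \to 0$, the high-density regime described above.

```python
import numpy as np

def ideal_gas_particle_number(nu, beta_theta, K=60):
    """Grand-canonical <N> of the ideal Bose gas on the unit 2-d torus:
    sum over modes k in Z^2 of 1/(exp(2 pi^2 nu |k|^2 - beta*theta) - 1).
    Requires beta_theta < 0 (chemical potential below the lowest level)."""
    assert beta_theta < 0
    k = np.arange(-K, K + 1)
    kx, ky = np.meshgrid(k, k)
    x = 2.0 * np.pi**2 * nu * (kx**2 + ky**2) - beta_theta
    x = np.minimum(x, 700.0)  # overflow guard; such modes contribute ~0 anyway
    return float(np.sum(1.0 / np.expm1(x)))
```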
In higher dimensions $d = 2,3$, the \emph{mean-field limit} was investigated in \cite{lewin2018classical, LNR3, FKSS_2020, frohlich2017gibbs, frohlich2019microscopic}, where the parameter $\epsilon$ was fixed as $\nu \to 0$. The resulting limiting field theory differs from $\phi^4_d$ in that the interaction term $V(\phi)$ is nonlocal, given by a convolution with a bounded two-body interaction potential $v$. This nonlocal interaction term is considerably less singular than the local one of $\phi^4_2$ theory. The current paper is the first to establish the convergence of an interacting Bose gas to a local Euclidean field theory in dimensions larger than one. The stronger singularity of $V(\phi)$ requires additional renormalization as compared to the nonlocal potential. One consequence of this is that the renormalized interaction term $V(\phi)$ is unbounded from below, whereas in the nonlocal regime it is almost surely nonnegative. Using our methods, we also extend the results on the mean-field limit for a nonlocal interaction term $V(\phi)$ in \cite{LNR3, FKSS_2020} from bounded two-body interaction potentials, $v$, to unbounded ones. Our integrability assumptions on the function $v$ are optimal, as given in \cite{bourgain1997invariant}. We refer to Section \ref{Extension of results to unbounded nonlocal interactions} below for details. \subsection{Outlook} The close relationship between Euclidean field theory and interacting Bose gases established in this paper leads to a web of conjectures concerning properties of $\phi^4_d$ theories inspired by results on Bose gases and, conversely and perhaps more interestingly, properties of interacting Bose gases inspired by known results on $\phi^4_d$ theories. In the following, we outline some of these conjectures. 
We remark that an analysis very similar to the one in this paper yields an analogous relationship between the $\phi^4_2$ theory with $N$ complex components (that is, with $2N$ real components) and an interacting Bose gas with $N$ species of identical bosons; see Remark \ref{rem:species} below. \begin{enumerate} \item It is known (see \cite{berezin1961remark, albeverio2012solvable, geiler1995potentials}) that systems of non-relativistic quantum particles moving in $d$-dimensional Euclidean space and interacting through delta function potentials are equivalent to systems of \textit{free} (i.e.\ non-interacting) particles, provided that $d\geq 4$. Given the connection between interacting Bose gases and $\phi^4_d$ theories exhibited in this paper, this suggests that the latter theories are equivalent to free (i.e.\ Gaussian) field theories in dimensions $d\geq 4$, for a field $\phi$ with an arbitrary number of complex components. \item In $d=3$ dimensions, $\phi^4_d$ theories with $N$ complex components are known to undergo a phase transition accompanied by spontaneous $O(2N)$-symmetry breaking and the emergence of Goldstone bosons \cite{frohlich1976infrared} (see also \cite{frohlich1983berezinskii}, as well as \cite{garban2021continuous} for recent results on related lattice models with disorder). Given our results for $d = 2$, as well as analogous results for $d = 3$ to appear in a future paper, the existence of a phase transition in the Euclidean field theory strongly suggests that translation-invariant Bose gases with repulsive two-body interactions in three dimensions exhibit Bose-Einstein condensation accompanied by the appearance of massless quasi-particles with approximately relativistic dispersion at small wave vectors. In two dimensions, the Mermin-Wagner theorem implies that such phase transitions do not exist, and the $O(2N)$-symmetry remains unbroken for arbitrary values of the coupling constant $\lambda$.
A similar result is also known for two-dimensional Bose gases; see \cite{ruelle1999statistical} and references given there. \item The \textit{one-component} complex $\phi^4_d$-theory in $d = 2$ dimensions is expected to exhibit a \textit{Berezinskii-Kosterlitz-Thouless transition.} This is rigorously known for the classical $XY$-model on a square lattice, which is the limiting theory of lattice $\phi^4_2$-theory, as $\lambda$ tends to $\infty$, with $\kappa=2\lambda$; see \cite{frohlich1981kosterlitz, frohlich1983berezinskii}. In view of the results proven in this paper, this suggests that two-dimensional Bose gases of \textit{one species} of particles might exhibit a transition to a low-temperature phase where reduced density matrices exhibit \textit{slow decay}, analogous to the Berezinskii-Kosterlitz-Thouless transition. In contrast, for a two-dimensional $\phi^4_d$ theory with two or more complex components, with an $O(2N)$-symmetry, it is expected that connected correlations exhibit exponential decay for arbitrary values of the coupling constant $\lambda$; see \cite{polyakov1975interaction}. This suggests that two-dimensional Bose gases of several species of identical particles exhibit rapidly decaying correlations at all temperatures and densities. \item For $\phi^4_d$ theories with $N$ complex components, there exists a systematic $1/N$-expansion; see \cite{itzykson1991statistical1, itzykson1991statistical2, zinn2021quantum}. The model obtained in the limit, as $N\rightarrow \infty$, is the spherical model, which is exactly solved. It is tempting to extend the method of the $1/N$-expansion to Bose gases of $N$ species of identical particles interacting through two-body interactions of strength $O(1/N)$. The model obtained in the limit, as $N \rightarrow \infty$, appears to be equivalent to an ideal Bose gas, but with a renormalized chemical potential.
In attempting to prove Bose-Einstein condensation for translation-invariant interacting Bose gases, therefore, it seems judicious to begin by studying Bose gases with a large number of species of identical particles. \end{enumerate} \normalcolor \section{Setup and results} \subsection{Classical field theory} \label{sec:classical_field} In this subsection we define the Euclidean field theory and its correlation functions. We note that the measure $\mu$ from \eqref{mu_def} can be formally viewed as the thermal equilibrium measure of a \textit{classical} field theory with Hamilton function given by $S(\phi)$ from \eqref{S_def}. We work on the $d$-dimensional torus $\Lambda \mathrel{\vcenter{\baselineskip0.65ex \lineskiplimit0pt \hbox{.}\hbox{.}}}= [-1/2,1/2)^d$. We use the Euclidean norm $\abs{\cdot}$ for elements of $\Lambda$ regarded as a subset of $\mathbb{R}^d$. We use the shorthand $\int \r d x \, (\cdot)\mathrel{\vcenter{\baselineskip0.65ex \lineskiplimit0pt \hbox{.}\hbox{.}}}= \int_{\Lambda} \r d x \, (\cdot)$ to denote integration over $\Lambda$ with respect to Lebesgue measure. We abbreviate $\cal H \mathrel{\vcenter{\baselineskip0.65ex \lineskiplimit0pt \hbox{.}\hbox{.}}}= L^2(\Lambda; \mathbb{C})$ and denote by $\scalar{\cdot}{\cdot}$ the inner product of the space $\cal H$, which is by definition linear in the second argument. On $\cal H$ we use the standard Laplacian $\Delta$ with periodic boundary conditions. The classical free field $\phi$ is by definition the complex-valued Gaussian field with covariance $(\kappa - \Delta/2)^{-1}$, where $\kappa > 0$ is a constant. Explicitly, the free field may be constructed as follows. We use the spectral decomposition $\kappa - \Delta/2 = \sum_{k \in \mathbb{Z}^d} \lambda_k u_k u_k^*$, with eigenvalues $\lambda_k > 0$ and normalized eigenfunctions $u_k \in \cal H$ (see also \eqref{lambda_k} below). 
Let $X = (X_k)_{k \in \mathbb{Z}^d}$ be a family of independent standard complex Gaussian random variables\footnote{We recall that $Z$ is a standard complex Gaussian if it is Gaussian and satisfies $\mathbb{E} Z = 0$, $\mathbb{E} Z^2 = 0$, and $\mathbb{E} \abs{Z}^2 = 1$, or, equivalently, if it has law $\pi^{-1} \r e^{- \abs{z}^2} \r d z$ on $\mathbb{C}$, where $\r d z$ denotes Lebesgue measure.}, whose law and associated expectation are denoted by $\P$ and $\mathbb{E}$, respectively. The \emph{classical free field} is then given by \begin{equation*} \phi = \sum_{k \in \mathbb{Z}^d} \frac{X_k}{\sqrt{\lambda_k}} \, u_k\,, \end{equation*} which is easily seen to converge\footnote{In fact, an application of Wick's rule shows that the convergence holds in $L^m$ for any $m < \infty$.} in $L^2(\P)$ with values in the $L^2$-Sobolev space $H^{1 - d/2 - c}$ for any $c > 0$. In order to define the interacting theory, it is necessary to regularize the field $\phi$ by convolving it with a smooth mollifier. To that end, choose a nonnegative function $\vartheta \colon \mathbb{R}^d \to \mathbb{R}_+$ of rapid decay satisfying $\vartheta(0) = 1$, and for $0 < N < \infty$ define the regularized field \begin{equation} \label{def_phi_N} \phi_N \mathrel{\vcenter{\baselineskip0.65ex \lineskiplimit0pt \hbox{.}\hbox{.}}}= \sum_{k \in \mathbb{Z}^d} \frac{X_k}{\sqrt{\lambda_k}} \sqrt{\vartheta(k/N)}\, u_k\,, \end{equation} which is almost surely a smooth function on $\Lambda$. We define the regularized interaction \begin{equation*} V_N \mathrel{\vcenter{\baselineskip0.65ex \lineskiplimit0pt \hbox{.}\hbox{.}}}= \frac{1}{2} \int \r d x \, \wick{\abs{\phi_N(x)}^4}\,, \end{equation*} where $\wick{\cdot}$ denotes Wick ordering with respect to the Gaussian measure $\P$ (see Appendix \ref{sec:Wick}).
Explicitly, \begin{equation*} \wick{\abs{\phi_N(x)}^4} = \abs{\phi_N(x)}^4 - 4 \mathbb{E} \qb{\abs{\phi_N(x)}^2} \, \abs{\phi_N(x)}^2 + 2 \mathbb{E} \qb{\abs{\phi_N(x)}^2}^2\,. \end{equation*} Here, the deterministic factor $\mathbb{E} \qb{\abs{\phi_N(x)}^2} = \sum_{k \in \mathbb{Z}^d} \frac{\vartheta(k/N)}{\lambda_k}$ diverges as $N \to \infty$ for $d > 1$. For $d = 2$, using Wick's theorem, it is easy to see that $V_N$ converges as $N \to \infty$ in $L^2(\P)$ to a random variable, denoted by $V$, which does not depend on the choice of $\vartheta$. See e.g.\ \cite[Lemma 1.5]{frohlich2017gibbs} for details. The interacting field theory is given as the probability measure \begin{equation} \label{field theory} \frac{1}{\zeta} \r e^{-V} \, \r d \P \,, \qquad \zeta \mathrel{\vcenter{\baselineskip0.65ex \lineskiplimit0pt \hbox{.}\hbox{.}}}= \mathbb{E}[\r e^{-V}]\,. \end{equation} By the well-known Nelson bounds \cite{nelson1973probability, nelson1973free} mentioned in Section \ref{sec:overview}, $\r e^{-V}$ is integrable with respect to $\P$. We characterize the interacting field theory through its correlation functions, defined as follows. For $p \in \mathbb{N}$ and $\f x,\tilde{\f x} \in \Lambda^p$, we define the \emph{$p$-point correlation function} as \begin{equation} \label{gamma_p} (\gamma_p)_{\f x,\tilde{\f x}} \mathrel{\vcenter{\baselineskip0.65ex \lineskiplimit0pt \hbox{.}\hbox{.}}}= \frac{1}{\zeta} \, \mathbb{E} \qB{\bar{\phi}(\tilde x_1) \cdots \bar{\phi}(\tilde x_p)\,\phi(x_1)\cdots \phi(x_p) \, \r e^{-V}}\,, \end{equation} which is the $2p$-th moment of the field $\phi$ under the probability measure \eqref{field theory}. This measure is sub-Gaussian, and is hence determined by its moments $(\gamma_p)_{p \in \mathbb{N}^*}$. (Note that any moment containing a different number of $\bar \phi$s and $\phi$s vanishes by invariance of the measure \eqref{field theory} under the gauge transformation $\phi \mapsto \alpha \phi$, where $\abs{\alpha} = 1$.)
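As a quick numerical illustration of the Wick-ordered quartic (not part of the analysis), the following sketch checks that the pointwise Wick ordering above has mean zero under the Gaussian measure: for a single standard complex Gaussian mode $Z$ with $\mathbb{E} \abs{Z}^2 = g$, one has $\mathbb{E} \qb{\abs{Z}^4 - 4 g \abs{Z}^2 + 2 g^2} = 2g^2 - 4g^2 + 2g^2 = 0$, which is the scalar analogue of the displayed formula with $g = \mathbb{E} \qb{\abs{\phi_N(x)}^2}$.

```python
import numpy as np

# Sanity check (illustration only): for a complex Gaussian mode Z with
# E|Z|^2 = g, the Wick-ordered quartic
#   :|Z|^4: = |Z|^4 - 4 g |Z|^2 + 2 g^2
# has mean zero, mirroring the displayed formula pointwise in x.
rng = np.random.default_rng(0)
g = 1.0        # plays the role of E[|phi_N(x)|^2]
n = 10**6
Z = np.sqrt(g / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
wick4 = np.abs(Z)**4 - 4 * g * np.abs(Z)**2 + 2 * g**2
print(abs(wick4.mean()))   # close to 0 (Monte Carlo error ~ 2e-3)
```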
As explained in \cite[Section 1.5]{FKSS_2020}, the correlation function $\gamma_p$ is divergent on the diagonal, even for the free field. Hence, for instance, it cannot be used to analyse the distribution of the mass density $\abs{\phi(x)}^2$. As in \cite[Section 1.5]{FKSS_2020}, we remedy this issue by introducing the \emph{Wick-ordered $p$-point correlation function} \begin{equation} \label{hat_gamma_p} (\widehat{\gamma}_p)_{\f x,\tilde{\f x}} \mathrel{\vcenter{\baselineskip0.65ex \lineskiplimit0pt \hbox{.}\hbox{.}}}= \frac{1}{\zeta}\, \mathbb{E} \qB{\wick{\bar{\phi}(\tilde x_1) \cdots \bar{\phi}(\tilde x_p)\,\phi(x_1)\cdots \phi(x_p)} \, \r e^{-V}}\,, \end{equation} which has a regular behaviour on the diagonal. The Wick-ordered correlation function \eqref{hat_gamma_p} can be expressed explicitly in terms of the correlation functions \eqref{gamma_p} and the correlation functions of the free field; see \eqref{hat_gamma_p_2} below. \subsection{Quantum many-body system} In this subsection we define the quantum many-body system and its reduced density matrices. For $n \in \mathbb{N}$, we denote by $P_n$ the orthogonal projection onto the symmetric subspace of $\cal H^{\otimes n}$; explicitly, for $\Psi_n \in \cal H^{\otimes n}$, \begin{equation} \label{def_Pn} P_n \Psi_n(x_1, \dots, x_n) \mathrel{\vcenter{\baselineskip0.65ex \lineskiplimit0pt \hbox{.}\hbox{.}}}= \frac{1}{n!} \sum_{\pi \in S_n} \Psi_n(x_{\pi(1)}, \dots, x_{\pi(n)})\,, \end{equation} where $S_n$ is the group of permutations on $\{1, \dots, n\}$. For $n \in \mathbb{N}^*$, we define the $n$-particle space as $\cal H_n \mathrel{\vcenter{\baselineskip0.65ex \lineskiplimit0pt \hbox{.}\hbox{.}}}= P_n \cal H^{\otimes n}$. We define Fock space as the Hilbert space $\cal F \equiv \cal F (\cal H) \mathrel{\vcenter{\baselineskip0.65ex \lineskiplimit0pt \hbox{.}\hbox{.}}}= \bigoplus_{n \in \mathbb{N}} \cal H_n$. We denote by $\tr_{\cal F}(X)$ the trace of an operator $X$ acting on $\cal F$.
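For readers who prefer a concrete finite-dimensional picture, the following sketch (an illustration only, with a toy one-particle space $\mathbb{C}^2$ in place of $\cal H$) realizes the symmetrizer $P_2$ of \eqref{def_Pn} as the matrix $(\mathrm{id} + \mathrm{SWAP})/2$ and checks that it is an orthogonal projection onto a subspace of the expected dimension $d(d+1)/2 = 3$.

```python
import numpy as np

# Toy realization of the symmetrizer P_2 from eq. (def_Pn) on C^2 (x) C^2:
# P_2 = (I + SWAP)/2, averaging over the two permutations in S_2.
d = 2
I4 = np.eye(d * d)
# SWAP exchanges the tensor factors: SWAP (e_i (x) e_j) = e_j (x) e_i.
SWAP = np.zeros((d * d, d * d))
for i in range(d):
    for j in range(d):
        SWAP[j * d + i, i * d + j] = 1.0
P2 = (I4 + SWAP) / 2
assert np.allclose(P2 @ P2, P2)   # idempotent ...
assert np.allclose(P2, P2.T)      # ... and self-adjoint: orthogonal projection
print(int(round(np.trace(P2))))   # rank = dim of symmetric subspace = 3
```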
For $f \in \cal H$ we define the bosonic annihilation and creation operators $a(f)$ and $a^*(f)$ on $\cal F$ through their action on a dense set of vectors $\Psi = (\Psi_n)_{n \in \mathbb{N}} \in \mathcal{F}$ as \begin{align} \label{def_b2} \pb{a(f) \Psi}_n(x_1, \dots, x_n) &= \sqrt{n+1} \int \r d x \, \bar f(x) \, \Psi_{n+1} (x,x_1, \dots, x_n)\,, \\ \label{def_b1} \pb{a^*(f) \Psi}_n(x_1, \dots, x_n) &= \frac{1}{\sqrt{n}} \sum_{i = 1}^n f(x_i) \Psi_{n - 1}(x_1, \dots, x_{i - 1}, x_{i+1}, \dots, x_n) \,. \end{align} The operators $a(f)$ and $a^*(f)$ are unbounded closed operators on $\cal F$, and are each other's adjoints. They satisfy the canonical commutation relations \begin{equation} \label{CCR_b} [a(f), a^*(g)] = \scalar{f}{g} \, 1 \,, \qquad [a(f), a(g)] = [a^*(f), a^*(g)] =0\,, \end{equation} where $[X,Y] \mathrel{\vcenter{\baselineskip0.65ex \lineskiplimit0pt \hbox{.}\hbox{.}}}= XY - YX$ denotes the commutator. We regard $a$ and $a^*$ as operator-valued distributions and use the notations \begin{equation} \label{phi_tau f} a(f) = \int \r d x \, \bar f(x) \, a(x)\,, \qquad a^*(f) = \int \r d x \, f(x) \, a^*(x)\,. \end{equation} The distribution kernels $a^*(x)$ and $a(x)$ satisfy the canonical commutation relations \begin{equation} \label{CCR} [a(x),a^*(\tilde x)] = \delta(x - \tilde x) \,, \qquad [a(x),a(\tilde x)] = [a^*(x),a^*(\tilde x)] = 0\,. \end{equation} For $\nu > 0$, we define the free quantum Hamiltonian $H^{(0)} \equiv H^{(0)}_\nu$ through \begin{equation} \label{free_Hamiltonian_H} H^{(0)} \mathrel{\vcenter{\baselineskip0.65ex \lineskiplimit0pt \hbox{.}\hbox{.}}}= \nu \int \r d x \, a^*(x) ((\kappa - \Delta / 2) a) (x)\,. 
\end{equation} To describe the interaction potential of the Bose gas, we choose $v \colon \mathbb{R}^d \to \mathbb{R}$ to be an even, smooth, compactly supported function of positive type\footnote{This means that the Fourier transform of $v$ is a positive measure. Note that we do not assume $v$ to be pointwise nonnegative.} whose integral is equal to one. For $\epsilon > 0$ we define the rescaled interaction potential on $\Lambda$ as \begin{equation} \label{v_epsilon} v^\epsilon(x) = \sum_{n \in \mathbb{Z}^d} \frac{1}{\epsilon^d} \, v \pbb{\frac{x - n}{\epsilon}}\,. \end{equation} For $\epsilon, \nu > 0$ we define the interacting quantum Hamiltonian $H \equiv H^{\epsilon}_{\nu}$ through \begin{equation} \label{def_H} H \mathrel{\vcenter{\baselineskip0.65ex \lineskiplimit0pt \hbox{.}\hbox{.}}}= H^{(0)} + \frac{\nu^2}{2} \int \r d x \, \r d \tilde x \, a^*(x) a(x) \, v^\epsilon(x - \tilde x) \, a^*(\tilde x) a (\tilde x) - \nu \alpha^\epsilon_\nu \int \r d x \, a^*(x) a(x)+ \theta^\epsilon_\nu \,, \end{equation} where $\alpha^\epsilon_\nu, \theta^\epsilon_\nu \in \mathbb{R}$ are appropriately chosen real parameters; see \eqref{alpha_beta_def} below for their definitions. The parameter $\alpha^\epsilon_\nu$ governs a renormalization of the chemical potential, and $\theta^\epsilon_\nu$ an energy renormalization. As $\epsilon ,\nu \to 0$, we have $\alpha_\nu^\epsilon \to +\infty$. The quantum grand canonical density matrix, as given in \eqref{gc_density}, can be expressed as the operator $\r e^{-H} / Z$, where $Z \mathrel{\vcenter{\baselineskip0.65ex \lineskiplimit0pt \hbox{.}\hbox{.}}}= \tr_{\cal F} (\r e^{-H})$ is the grand canonical partition function. Analogously, the free grand canonical partition function is $Z^{(0)} = \tr_{\cal F}(\r e^{-H^{(0)}})$.
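The canonical commutation relations \eqref{CCR_b} can be checked concretely for a single mode on a truncated Fock space. The sketch below (an illustration, not used in the proofs) builds the matrix of the annihilation operator $a$, with matrix elements $\sqrt{n}$, and verifies that $[a, a^*]$ equals the identity except in the top corner, where the truncation at $n_{\max}$ particles intervenes.

```python
import numpy as np

# Single-mode check of the CCR [a, a*] = 1 from eq. (CCR_b), on a Fock space
# truncated at n_max particles. The truncation spoils the relation only in
# the top diagonal entry.
n_max = 6
a = np.zeros((n_max + 1, n_max + 1))
for n in range(1, n_max + 1):
    a[n - 1, n] = np.sqrt(n)   # a maps the n-particle state to sqrt(n) times
                               # the (n-1)-particle state
adag = a.T                     # the creation operator is the adjoint
comm = a @ adag - adag @ a
# Identity except the (n_max, n_max) entry, which equals -n_max.
assert np.allclose(comm[:-1, :-1], np.eye(n_max))
assert np.isclose(comm[-1, -1], -n_max)
```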
We introduce the relative partition function \begin{equation} \label{Z^epsilon_quantum} \cal Z \mathrel{\vcenter{\baselineskip0.65ex \lineskiplimit0pt \hbox{.}\hbox{.}}}= \frac{Z}{Z^{(0)}}\,. \end{equation} We define the \emph{$p$-particle reduced density matrix} as \begin{equation} \label{Gamma_p} (\Gamma_p)_{\f x,\tilde{\f x}} \mathrel{\vcenter{\baselineskip0.65ex \lineskiplimit0pt \hbox{.}\hbox{.}}}= \tr_{\cal F} \Biggl(a^{*}(\tilde{x}_1) \cdots a^{*}(\tilde{x}_p)\,a(x_1)\cdots a(x_p)\,\frac{\r e^{-H}}{Z}\Biggr)\,. \end{equation} As for the correlation function \eqref{gamma_p} and its Wick-ordered version \eqref{hat_gamma_p}, we would like to replace \eqref{Gamma_p} with its Wick-ordered version. To that end, we regard the expressions \eqref{gamma_p} and \eqref{hat_gamma_p} as integral kernels of operators acting on $\cal H_p$, and observe (see \cite[Lemma A.4]{FKSS_2020}) that \begin{equation} \label{hat_gamma_p_2} \widehat{\gamma}_p =\sum_{k=0}^{p}\binom{p}{k}^2\,(-1)^{p-k}\,P_p(\gamma_k \otimes \gamma^{(0)}_{p-k})P_p\,, \end{equation} where $\gamma^{(0)}_{m}$ denotes the $m$-point correlation function from \eqref{gamma_p} with $V=0$. In analogy with \eqref{hat_gamma_p_2}, we therefore define the \emph{Wick-ordered $p$-particle reduced density matrix} as \begin{equation} \label{hat_Gamma_p_2} (\widehat{\Gamma}_p)_{\f x,\tilde{\f x}} \mathrel{\vcenter{\baselineskip0.65ex \lineskiplimit0pt \hbox{.}\hbox{.}}}= \sum_{k=0}^{p}\binom{p}{k}^2\,(-1)^{p-k}\,P_p(\Gamma_k \otimes \Gamma^{(0)}_{p-k})P_p\,, \end{equation} where $\Gamma^{(0)}_{m}$ denotes the $m$-particle reduced density matrix of the free grand canonical density matrix $\r e^{-H^{(0)}} / Z^{(0)}$. (For an interpretation of \eqref{hat_Gamma_p_2} as a result of Wick ordering \eqref{Gamma_p} with respect to the free field in the functional integral representation of quantum many-body theory, we refer the reader to the discussion in \cite[Section 1.7]{FKSS_2020}.)
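For orientation, we record the lowest-order instances of \eqref{hat_Gamma_p_2}. They follow by direct substitution, using that $\Gamma_0 = \Gamma^{(0)}_0 = 1$ and that $\Gamma_p$ and $\Gamma^{(0)}_p$ already act on the symmetric subspace $\cal H_p$, so that $P_p \Gamma_p P_p = \Gamma_p$.

```latex
% p = 1 and p = 2 cases of \eqref{hat_Gamma_p_2}:
\begin{align*}
\widehat{\Gamma}_1 &= \Gamma_1 - \Gamma^{(0)}_1\,, \\
\widehat{\Gamma}_2 &= \Gamma_2 - 4\, P_2 \pb{\Gamma_1 \otimes \Gamma^{(0)}_1} P_2 + \Gamma^{(0)}_2\,.
\end{align*}
```

The same formulas hold for $\widehat{\gamma}_1, \widehat{\gamma}_2$ in terms of $\gamma_k$ and $\gamma^{(0)}_k$, by \eqref{hat_gamma_p_2}.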
\subsection{Results} We may now state our main result. \begin{theorem} \label{thm:main} Suppose that $d = 2$ and $\epsilon \equiv \epsilon(\nu)$ satisfies \begin{equation} \label{epsilon_assumption} \epsilon \geq \exp\pb{- (\log \nu^{-1})^{1/2 - c}} \end{equation} for some constant $c > 0$. Then as $\epsilon, \nu \to 0$ we have the convergence of the partition function \begin{equation} \cal Z \to \zeta \end{equation} and of the Wick-ordered correlation functions \begin{equation} \nu^p \, \widehat \Gamma_p \overset{C}{\longrightarrow} \widehat \gamma_p \end{equation} for all $p \in \mathbb{N}$, where $\overset{C}{\longrightarrow}$ denotes convergence in the space of continuous functions on $\Lambda^p \times \Lambda^p$ with respect to the supremum norm. \end{theorem} We refer to \cite[Section 1.5]{FKSS_2020} for an in-depth discussion of applications of Theorem \ref{thm:main}. In particular, Theorem \ref{thm:main} yields the following result for unrenormalized correlation functions. \begin{corollary} Under the assumptions of Theorem \ref{thm:main}, \begin{equation*} \nu^p \Gamma_p \overset{L^r}{\longrightarrow} \gamma_p \end{equation*} for all $p \in \mathbb{N}$ and $r < \infty$, where $\overset{L^r}{\longrightarrow}$ denotes convergence in the $L^r(\Lambda^p \times \Lambda^p)$-norm. \end{corollary} Another application of Theorem \ref{thm:main} is the convergence of the joint distribution of the Wick-ordered quantum particle densities $a^*(x) a(x)$ to that of the Wick-ordered mass densities $\abs{\phi(x)}^2$; see \cite[Theorem 1.4]{FKSS_2020}. \begin{remark} \label{rem:general_Lambda} In this paper we set $\Lambda$ to be the unit torus for definiteness, but our methods extend without complications to more general domains and boundary conditions. In particular, they also apply to the full space $\mathbb{R}^2$ when the particles are confined by a suitable external potential, as in \cite[Sections 1.6 and 7]{FKSS_2020}. We omit further details.
\end{remark} \begin{remark} \label{rem:species} The proof of Theorem \ref{thm:main} can be extended to establish the convergence of the interacting Bose gas of $N$ species of identical Bosons to the $\phi^4_2$ theory with $N$ complex components. (Theorem \ref{thm:main} corresponds to $N = 1$.) More precisely, we introduce the species index $i = 1, \dots, N$, and augment the creation and annihilation operators to $a_i^*(x), a_i(x)$ satisfying the canonical commutation relations \begin{equation} \label{CCR_species} [a_i(x),a_j^*(\tilde x)] = \delta_{ij} \delta(x - \tilde x) \,, \qquad [a_i(x),a_j(\tilde x)] = [a_i^*(x),a_j^*(\tilde x)] = 0\,, \end{equation} which generalize \eqref{CCR}. The Hamiltonian from \eqref{free_Hamiltonian_H} and \eqref{def_H} is generalized to \begin{equation*} H^{(0)} \mathrel{\vcenter{\baselineskip0.65ex \lineskiplimit0pt \hbox{.}\hbox{.}}}= \nu \sum_{i = 1}^N \int \r d x \, a_i^*(x) ((\kappa - \Delta / 2) a_i) (x) \end{equation*} and \begin{equation*} H \mathrel{\vcenter{\baselineskip0.65ex \lineskiplimit0pt \hbox{.}\hbox{.}}}= H^{(0)} + \frac{\nu^2}{2} \sum_{i = 1}^N \int \r d x \, \r d \tilde x \, a_i^*(x) a_i(x) \, v^\epsilon(x - \tilde x) \, a_i^*(\tilde x) a_i (\tilde x) - \nu \alpha^\epsilon_\nu \sum_{i = 1}^N \int \r d x \, a_i^*(x) a_i(x)+ \theta^\epsilon_\nu \,. \end{equation*} Then we find that the reduced density matrices of the $N$-species quantum Bose gas converge to the correlation functions of $\phi^4_2$ theory with $N$ complex components, in the sense of Theorem \ref{thm:main}. \end{remark} \section{Structure of the proof} The rest of this paper is devoted to the proof of Theorem \ref{thm:main}. We begin with a short section that lays out the general strategy. We use $c,C$ to denote generic positive constants, which may change from one expression to the next, and may depend on fixed parameters. We write $x \lesssim y$ or $x=O(y)$ to mean $x \leq C y$. 
If $C$ depends on a parameter $\alpha$, we write $x \lesssim_{\alpha} y$, $x \leq C_{\alpha} y$, or $x=O_{\alpha}(y)$. We abbreviate $[n] = \{1,\dots,n\}$. Let $G$ be the Green function of the free field $\phi$, i.e.\ the integral kernel of the operator $(\kappa - \Delta / 2)^{-1}$. Since $\kappa - \Delta/2$ is invariant under translations, we can write $G_{x,y} = G(x - y)$. Explicitly, in the sense of distributions, \begin{equation*} G(x - y) = \mathbb{E} [\phi(x) \bar \phi(y)]\,. \end{equation*} We denote by \begin{equation*} \varrho_\nu \mathrel{\vcenter{\baselineskip0.65ex \lineskiplimit0pt \hbox{.}\hbox{.}}}= \nu \tr_{\cal F} \pbb{a^*(0) a(0) \, \frac{\r e^{-H^{(0)}}}{Z^{(0)}}} \end{equation*} the expected rescaled particle density in the free quantum state. Define the real parameters \begin{equation} \label{tau_epsilon} \tau^\epsilon \mathrel{\vcenter{\baselineskip0.65ex \lineskiplimit0pt \hbox{.}\hbox{.}}}= \int \r d x \, v^\epsilon(x) \, G(x)\,, \qquad E^\epsilon \mathrel{\vcenter{\baselineskip0.65ex \lineskiplimit0pt \hbox{.}\hbox{.}}}= \frac{1}{2} \int \r d x \, v^\epsilon (x) \, G(x)^2\,. \end{equation} With the choice \begin{equation} \label{alpha_beta_def} \alpha^\epsilon_\nu \mathrel{\vcenter{\baselineskip0.65ex \lineskiplimit0pt \hbox{.}\hbox{.}}}= \varrho_\nu + \tau^\epsilon\,, \qquad \theta^\epsilon_\nu \mathrel{\vcenter{\baselineskip0.65ex \lineskiplimit0pt \hbox{.}\hbox{.}}}= \frac{1}{2} \varrho_\nu^2 + \tau^\epsilon \varrho_\nu - E^\epsilon\,, \end{equation} we can rewrite the Hamiltonian \eqref{def_H} in the form \begin{multline} \label{Hamiltonian_H} H = H^{(0)} + \frac{1}{2} \int \r d x \, \r d \tilde x \, \pb{\nu a^*(x) a(x) - \varrho_\nu} \, v^\epsilon(x - \tilde x) \, \pb{ \nu a^*(\tilde x) a (\tilde x) - \varrho_\nu} \\ - \tau^\epsilon \, \int \r d x \, \pb{\nu a^*(x) a(x) - \varrho_\nu} - E^\epsilon\,. 
\end{multline} We shall need two different interacting field theories approximating \eqref{field theory}, obtained by replacing the interaction $V$ with regularized variants, denoted by $W^\epsilon$ and $V^\epsilon$, respectively. They are defined by \begin{align} \label{W^epsilon} W^\epsilon &\mathrel{\vcenter{\baselineskip0.65ex \lineskiplimit0pt \hbox{.}\hbox{.}}}= \frac{1}{2} \int \r d x \, \r d \tilde x \, \wick{\abs{\phi(x)}^2} \, v^\epsilon(x - \tilde x) \, \wick{\abs{\phi(\tilde x)}^2} - \tau^\epsilon \int \r d x \, \wick{\abs{\phi(x)}^2} - E^\epsilon\,, \\ \label{V^epsilon} V^\epsilon &\mathrel{\vcenter{\baselineskip0.65ex \lineskiplimit0pt \hbox{.}\hbox{.}}}= \frac{1}{2} \int \r d x \, \r d \tilde x \, v^\epsilon(x - \tilde x) \, \wick{ \abs{\phi(x)}^2 \, \abs{\phi(\tilde x)}^2}\,. \end{align} The rigorous construction of the random variables $W^\epsilon, V^\epsilon$ proceeds exactly like that of $V$ explained in Section \ref{sec:classical_field}: one introduces truncated versions $W^\epsilon_N, V^\epsilon_N$ defined in terms of the truncated free field $\phi_N$ (see e.g.\ \eqref{def_V_e_M} below), and proves using Wick's theorem that as $N \to \infty$ they converge in $L^2(\P)$ to their respective limits $W^\epsilon, V^\epsilon$. Throughout the following, we shall make use of such constructions of Wick-ordered functions of the free field without further comment. The integrability of $\r e^{-W^\epsilon}$ and $\r e^{-V^\epsilon}$ is established in Section \ref{sec:field_theory} below. To emphasize the dependence of the quantities \eqref{field theory} and \eqref{hat_gamma_p} on the interaction $V$, we sometimes include the interaction $V$ in our notation as a superscript, writing $\zeta^V$ and $\widehat \gamma_p^V$, respectively. The proof consists of two main steps. \begin{description} \item[Step 1.] 
We compare $\cal Z$ and $\nu^p \, \widehat \Gamma_p$ with $\zeta^{W^\epsilon}$ and $\widehat \gamma_p^{W^\epsilon}$, respectively, in the limit $\nu,\epsilon \to 0$ under the condition \eqref{epsilon_assumption}. \item[Step 2.] We compare $\zeta^{W^\epsilon}$ and $\widehat \gamma_p^{W^\epsilon}$ with $\zeta^{V}$ and $\widehat \gamma_p^{V}$, respectively, in the limit $\epsilon \to 0$. This step is done by passing via the further intermediate interaction $V^\epsilon$. \end{description} Step 1 relies on a quantitative analysis of the infinite-dimensional saddle point argument for the functional integral introduced in \cite{FKSS_2020}. Step 2 relies on three main ingredients. First, we show integrability of $\r e^{-V^\epsilon}$, uniformly in $\epsilon$. Second, we use that $V^\epsilon - W^\epsilon$ is small in $L^2(\P)$ and lies in the second polynomial chaos (see Section \ref{sec:hypercontr} below), which allows us to deduce integrability of $\r e^{-W^\epsilon}$ by expansion in $V^\epsilon - W^\epsilon$ and hypercontractive moment bounds. Third, to obtain uniform control on the Wick-ordered correlation functions, we use Gaussian integration by parts, analogous to Malliavin calculus, to derive a representation of the correlation functions in terms of expectations of derivatives of the interaction potential. The results of these two steps are summarized in the following two propositions. \begin{proposition} \label{prop:step1} Suppose that $d = 2$ and that $\nu,\epsilon \to 0$ under the constraint \eqref{epsilon_assumption}. Then $\cal Z - \zeta^{W^\epsilon} \to 0$. Moreover, for all $p \in \mathbb{N}^*$, \begin{equation} \normB{\nu^p \, \widehat \Gamma_p - \widehat \gamma_p^{W^\epsilon}}_C \to 0\,. \end{equation} \end{proposition} \begin{proposition} \label{prop:step2} Suppose that $d = 2$ and that $\epsilon \to 0$. Then $\zeta^{W^\epsilon} \to \zeta^V$.
Moreover, for all $p \in \mathbb{N}^*$, \begin{equation} \label{convergence_gamma_field} \normB{\widehat \gamma_p^{W^\epsilon} - \widehat \gamma_p^V}_C \to 0\,. \end{equation} \end{proposition} We remark that Proposition \ref{prop:step1} holds also for $d = 3$, with the same proof, provided that the condition \eqref{epsilon_assumption} is suitably adjusted. We refer to Section \ref{The rate of convergence} for more details and for the proof. \section{Proof of Proposition \ref{prop:step2}} \label{sec:field_theory} In this section we prove Proposition \ref{prop:step2}. We set $d = 2$ throughout. \subsection{$L^2$-estimates} \label{sec_L2estimates} In this subsection we derive $L^2$-estimates for the differences $V^\epsilon - V$ and $V^\epsilon - W^\epsilon$. \begin{lemma} \label{lem:R_variance} We have $\norm{V^\epsilon - W^\epsilon}_{L^2(\P)} \to 0$ as $\epsilon \to 0$. \end{lemma} \begin{proof} A straightforward calculation using \eqref{wick_expanded} below yields \begin{align} V^\epsilon &= \frac{1}{2} \int \r d x \, \r d \tilde x \, \wick{\abs{\phi(x)}^2} \, v^\epsilon(x - \tilde x) \, \wick{\abs{\phi(\tilde x)}^2} - \int \r d x \, \r d \tilde x \, v^\epsilon(x - \tilde x) \, G(x - \tilde x) \, \bar \phi(x) \phi(\tilde x) + E^\epsilon \notag \\ &= \frac{1}{2} \int \r d x \, \r d \tilde x \, \wick{\abs{\phi(x)}^2} \, v^\epsilon(x - \tilde x) \, \wick{\abs{\phi(\tilde x)}^2} - \int \r d x \, \r d \tilde x \, v^\epsilon(x - \tilde x) \, G(x - \tilde x) \, \wick{\bar \phi(x) \phi(\tilde x)} - E^\epsilon \notag \\ \label{W_V_estimate} &= W^\epsilon - \int \r d x \, \r d \tilde x \, v^\epsilon(x - \tilde x) \, G(x - \tilde x) \, \pb{\wick{\bar \phi(x) \phi(\tilde x)} - \wick{\abs{\phi(x)}^2}}\,. 
\end{align} By Lemma \ref{lem:Wick} below, we find \begin{align} \norm{V^\epsilon - W^\epsilon}_{L^2(\P)}^2 &= \int \r d x \, \r d \tilde x \, \r d y \, \r d \tilde y \, v^\epsilon(x - \tilde x) \, v^\epsilon(y - \tilde y) \, G(\tilde x - x) \, G(\tilde y - y) \notag \\ \label{R_e_square} &\qquad \times \pb{G(x - \tilde y) - G(x - y)} \pb{G(\tilde x - y) - G(x - y)}\,. \end{align} From Lemma \ref{lem:G_smoothness}, we find \[ |G (x- \tilde{y}) - G(x-y)| \lesssim |y-\tilde{y}| + \absbb{ \log \frac{|x-\tilde{y}|}{|x-y|}} \] and similarly for $| G(\tilde{x} - y) - G (x-y)|$. Switching to new integration variables $h = (x-\tilde{x})/\epsilon$, $k = (y - \tilde{y})/\epsilon$ and $z = x-y$, we obtain \[ \| V^\epsilon - W^\epsilon \|_{L^2 (\mathbb{P})}^2 \lesssim \int \r d h \, \r d k \, \r d z \, v(h) \, v(k) \, G(\epsilon h) \, G(\epsilon k) \pbb{ \epsilon |h| + \absbb{\log \frac{|z + \epsilon h|}{|z|}}} \pbb{ \epsilon |k| + \absbb{ \log \frac{|z +\epsilon k|}{|z|} } } \] Estimating $\absb{ \log \frac{|z + \epsilon h|}{|z|}} \leq | \log |z+ \epsilon h| | + | \log |z| |$ for $|z| \leq 2 \epsilon |h|$ and \[ \absbb{ \log \frac{|z + \epsilon h|}{|z|} } \lesssim \left( \frac{\epsilon |h|}{|z|} \right)^\alpha \] for $|z| > 2 \epsilon |h|$ and any $\alpha \in (0,1)$, we conclude that $\| V^\epsilon - W^\epsilon \|_{L^2 (\mathbb{P})} \leq C \epsilon^{\alpha}$ for any $\alpha \in (0,1)$. \end{proof} \begin{lemma} \label{lem:conv_V} We have $\norm{V^{\epsilon} - V}_{L^2(\P)} \to 0$ as $\epsilon \to 0$. \end{lemma} \begin{proof} Using Lemma \ref{lem:Wick} we find \begin{multline*} \mathbb{E} (V^\epsilon - V)^2 = \frac{1}{2} \int \r d x \, \r d \tilde x \, \r d y \, \r d \tilde y \, \pb{v^\epsilon (x - \tilde x) - \delta(x - \tilde x)} \, \pb{v^\epsilon(y - \tilde y) - \delta(y - \tilde y)} \\ \times G(x - y) \, G(\tilde x - \tilde y) \, \pb{G(x - y) \, G(\tilde x - \tilde y) + G(\tilde x - y) \, G(x - \tilde y)}\,. 
\end{multline*} This can be estimated, similarly to the proof of Lemma \ref{lem:R_variance}, with the change of variables $\tilde x - x = h$ and $\tilde y - y = k$. We omit the details. \end{proof} \subsection{Integrability of $\r e^{-V^\epsilon}$} \label{sec:Nelson} In this subsection we establish the integrability of $\r e^{-V^\epsilon}$, uniformly in $\epsilon$. This is an adaptation of Nelson's argument \cite{nelson1973free} to a nonlocal interaction. \begin{proposition} \label{prop:Nelson} There is a constant $c > 0$ such that for all $\epsilon > 0$ and $t \geq 1$ we have \begin{equation*} \P(\r e^{-V^{\epsilon}} > t) \lesssim \exp(-\r e^{c \sqrt{\log t}})\,. \end{equation*} The same estimate holds for $V^\epsilon$ replaced with $V$. \end{proposition} In particular, $\r e^{-V^{\epsilon}}$ is uniformly integrable in $\epsilon > 0$. The rest of this subsection is devoted to the proof of Proposition \ref{prop:Nelson}. We start by noting that $\kappa - \Delta / 2$ has eigenfunctions $u_k \in \cal H$ and eigenvalues $\lambda_k$ indexed by $k \in \mathbb{Z}^d$ and given by \begin{equation} \label{lambda_k} \lambda_k = \kappa + 2 \pi^2 \abs{k}^2 \,, \qquad u_k = \r e^{2 \pi \r i k \cdot x}\,. \end{equation} We shall use the truncated field $\phi_N$ from \eqref{def_phi_N} with a suitable truncation $\vartheta$, which is smooth in Fourier space. To that end, we fix $\rho$ to be a smooth, nonnegative, rotation-invariant function that has integral $1$ and is supported in the unit ball. We suppose that its Fourier transform \begin{equation} \label{Fourier} \vartheta(\xi) \mathrel{\vcenter{\baselineskip0.65ex \lineskiplimit0pt \hbox{.}\hbox{.}}}= \fra F \rho (\xi) \mathrel{\vcenter{\baselineskip0.65ex \lineskiplimit0pt \hbox{.}\hbox{.}}}= \int_{\mathbb{R}^2} \r d x \, \r e^{-2 \pi \r i \xi \cdot x} \, \rho(x) \end{equation} is nonnegative and radially nonincreasing (this can always be achieved by taking $\rho$ as a convolution of two nonnegative functions).
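To illustrate the logarithmic divergence of $\mathbb{E} \qb{\abs{\phi_N(x)}^2} = \sum_{k} \vartheta(k/N) / \lambda_k$ noted in Section \ref{sec:classical_field}, the following numerical sketch (an illustration only; it uses a Gaussian cutoff $\vartheta$ for simplicity, rather than the Fourier transform of a compactly supported $\rho$) evaluates the sum for $d = 2$ with $\lambda_k = \kappa + 2\pi^2 \abs{k}^2$ and exhibits increments of roughly $(\log 2)/\pi \approx 0.22$ per doubling of $N$, in agreement with a Frullani-integral computation of the $|k| \sim N$ contribution.

```python
import numpy as np

# Illustration: in d = 2 the regularized covariance at coinciding points,
# sum_k theta(k/N) / lambda_k with lambda_k = kappa + 2 pi^2 |k|^2, diverges
# logarithmically in N. We use a Gaussian cutoff theta for simplicity.
kappa = 1.0

def variance_at_0(N, kmax_factor=4):
    K = kmax_factor * N                      # Gaussian tail beyond 4N is tiny
    k = np.arange(-K, K + 1)
    kx, ky = np.meshgrid(k, k)
    k2 = kx**2 + ky**2
    theta = np.exp(-k2 / N**2)
    return float(np.sum(theta / (kappa + 2 * np.pi**2 * k2)))

vals = {N: variance_at_0(N) for N in (8, 16, 32)}
# Each doubling of N adds roughly (log 2)/pi ~ 0.22.
print(vals[16] - vals[8], vals[32] - vals[16])
```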
We define the truncated version of $V^\epsilon$ from \eqref{V^epsilon} through \begin{equation} \label{def_V_e_M} V^\epsilon_N \mathrel{\vcenter{\baselineskip0.65ex \lineskiplimit0pt \hbox{.}\hbox{.}}}= \frac{1}{2} \int \r d x \, \r d \tilde x \, v^\epsilon(x - \tilde x) \, \wick{ \abs{\phi_N(x)}^2 \, \abs{\phi_N(\tilde x)}^2}\,, \end{equation} which converges in $L^2(\P)$ to $V^\epsilon$ as $N \to \infty$. Next, let $(Y_k)_{k \in \mathbb{Z}^d}$ be a family of i.i.d.\ standard complex Gaussian random variables, which is independent of the family $(X_k)_{k \in \mathbb{Z}^d}$. For $0 < N \leq M \leq \infty$ we define the field \begin{equation*} \psi_{N,M} \mathrel{\vcenter{\baselineskip0.65ex \lineskiplimit0pt \hbox{.}\hbox{.}}}= \sum_{k \in \mathbb{Z}^d} \frac{Y_k}{\sqrt{\lambda_k}} \, \sqrt{\vartheta(k / M) - \vartheta(k / N)} \, u_k\,. \end{equation*} By construction, $\phi_N$ and $\psi_{N,M}$ are independent. For $M < \infty$, they are almost surely smooth on ${\Lambda}$. We define the truncated Green function \begin{equation} \label{def_G_N} G_N \mathrel{\vcenter{\baselineskip0.65ex \lineskiplimit0pt \hbox{.}\hbox{.}}}= G * \rho_N\,, \qquad \rho_N(x) \mathrel{\vcenter{\baselineskip0.65ex \lineskiplimit0pt \hbox{.}\hbox{.}}}= \sum_{n \in \mathbb{Z}^d} N^2 \rho(N(x + n))\,, \end{equation} and find by Poisson summation that, for $N \leq M$, \begin{align} \label{G_N_phi} \mathbb{E} \qb{\phi_N(x) \bar \phi_N(y)} &= \sum_{k \in \mathbb{Z}^d} \frac{1}{\lambda_k} \r e^{2 \pi \r i k \cdot (x - y)} \vartheta(k / N) = G_N(x-y) \\ \label{G_N_M_phi} \mathbb{E} \qb{\psi_{N,M}(x) \bar \psi_{N,M}(y)} &= \sum_{k \in \mathbb{Z}^d} \frac{1}{\lambda_k} \r e^{2 \pi \r i k \cdot (x - y)} \pb{\vartheta(k / M) - \vartheta(k / N)} = G_M(x-y) - G_N(x - y)\,. 
\end{align} By independence of $\phi_N$ and $\psi_{N,M}$, we therefore find that for any $N \leq M$ we have the decomposition into low and high frequencies \begin{equation} \label{phi_decomposition} \phi_N + \psi_{N,M} \overset{\r d}{=} \phi_{M}\,, \end{equation} and in particular setting $M = \infty$ we get \begin{equation*} \phi_N + \psi_{N,\infty} \overset{\r d}{=} \phi\,. \end{equation*} Here $\overset{\r d}{=}$ denotes equality in law. By \eqref{phi_decomposition} we have, for any $N \leq M$, \begin{equation*} V^{\epsilon}_M \overset{\r d}{=} \frac{1}{2} \int \r d x \, \r d \tilde x \, v^\epsilon(x - \tilde x) \, \wick{ \abs{\phi_N(x) + \psi_{N,M}(x)}^2 \, \abs{\phi_N(\tilde x) + \psi_{N,M}(\tilde x)}^2}\,. \end{equation*} For any $N \leq M$ we therefore have \begin{equation*} V^{\epsilon}_M \overset{\r d}{=} \sum_{a , \tilde a , b, \tilde b \in \{0,1\}} V^{\epsilon}_{ N,M}(a,\tilde a, b, \tilde b)\,, \end{equation*} where \begin{multline} \label{def_V_epsilon} V^{\epsilon}_{N,M}(a,\tilde a, b, \tilde b) \mathrel{\vcenter{\baselineskip0.65ex \lineskiplimit0pt \hbox{.}\hbox{.}}}= \frac{1}{2} \int \r d x \, \r d \tilde x \, v^\epsilon(x - \tilde x) \\ \times \wick{\phi_N(x)^{1 - a} \phi_N(\tilde x)^{1 - \tilde a} \bar \phi_N(x)^{1 - b} \bar \phi_N(\tilde x)^{1 - \tilde b} \,\psi_{N,M}(x)^{a} \psi_{N,M}(\tilde x)^{\tilde a} \bar \psi_{N,M}(x)^{b} \bar \psi_{N,M}(\tilde x)^{\tilde b}}\,. \end{multline} Hence, for $N \leq M$ we have \begin{equation} \label{V_M-V_N} V^{\epsilon}_M - V^{\epsilon}_N = \sum_{a , \tilde a , b, \tilde b \in \{0,1\}} \ind{a + \tilde a + b + \tilde b > 0} \, V^{\epsilon}_{N,M}(a,\tilde a, b, \tilde b)\,. \end{equation} \begin{lemma} \label{lem:lbound_V} There is a constant $C$ depending on $v$ such that almost surely \begin{equation*} V^{\epsilon}_{N} \geq - C (\log N)^2 \end{equation*} for all $\epsilon > 0$. \end{lemma} \begin{proof} Abbreviate $S = 1 + \int_{\mathbb{R}^2} \r d x \, \abs{v(x)}$. 
Using the explicit form \eqref{wick_expanded} of the Wick power in \eqref{def_V_e_M} as well as \eqref{G_N_phi}, we find \begin{align*} V^{\epsilon}_N &= \frac{1}{2} \int \r d x \, \r d \tilde x \, v^\epsilon(x - \tilde x) \, \pB{\abs{\phi_N(x)}^2 \abs{\phi_N(\tilde x)}^2 - G_N(0) \abs{\phi_N(x)}^2 - G_N(0) \abs{\phi_N(\tilde x)}^2 \\ &\qquad - 2 \re G_N(x - \tilde x) \phi_N(x) \bar \phi_N(\tilde x) + G_N(0)^2 + G_N(x - \tilde x)^2} \\ &= \frac{1}{2} \int \r d x \, \r d \tilde x \, v^\epsilon(x - \tilde x) \, \qB{\pb{\abs{\phi_N(x)}^2 - S G_N(0)} \pb{\abs{\phi_N(\tilde x)}^2 - S G_N(0)} \\ &\qquad + (S - 1) G_N(0) \pb{\abs{\phi_N(x)}^2 + \abs{\phi_N(\tilde x)}^2 } - 2 \re G_N(x - \tilde x) \phi_N(x) \bar \phi_N(\tilde x) \\ &\qquad - (S^2 - 1) G_N(0)^2 + G_N(x - \tilde x)^2 } \\ &\geq (S - 1) G_N(0) \int \r d x \, \abs{\phi_N(x)}^2 - \re \int \r d x \, \r d \tilde x \, v^\epsilon(x - \tilde x) G_N(x - \tilde x) \phi_N(x) \bar \phi_N(\tilde x) - \frac{S^2}{2} G_N(0)^2\,, \end{align*} where in the last step we used that $v$ (and hence also $v^\epsilon$) is of positive type with integral one. Using $\abs{G_N(x)} \leq G_N(0)$ by \eqref{G_N_phi} and Young's inequality, we find \begin{equation*} \absbb{ \int \r d x \, \r d \tilde x \, v^\epsilon(x - \tilde x) G_N(x - \tilde x) \phi_N(x) \bar \phi_N(\tilde x)} \leq (S - 1) \, G_N(0) \int \r d x \, \abs{\phi_N(x)}^2\,, \end{equation*} and the claim follows from Lemma \ref{lem:prop_G_N}. \end{proof} Next, we derive an estimate for the $L^2$-norm of $V^{\epsilon}_M - V^{\epsilon}_N$. 
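Before doing so, we record a quick single-mode sanity check of the Wick ordering entering \eqref{def_V_e_M}: specializing the expansion \eqref{wick_expanded} to a single complex Gaussian $Z$ with $\mathbb{E} \abs{Z}^2 = c$ gives $\wick{\abs{Z}^4} = \abs{Z}^4 - 4c \abs{Z}^2 + 2c^2$, which is centered. The following Monte Carlo snippet (purely illustrative; not part of the argument) confirms this numerically.

```python
import numpy as np

rng = np.random.default_rng(0)
c = 1.0                 # c = E|Z|^2, the single-mode analogue of G_N(0)
n = 10**6

# standard complex Gaussian with E|Z|^2 = c: real/imag parts ~ N(0, c/2)
Z = rng.normal(0.0, np.sqrt(c / 2), n) + 1j * rng.normal(0.0, np.sqrt(c / 2), n)
A = np.abs(Z) ** 2

# Wick-ordered quartic :|Z|^4: obtained from (wick_expanded) at x = x~
wick4 = A**2 - 4 * c * A + 2 * c**2
print(wick4.mean())     # ~ 0 up to Monte Carlo error
```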
\begin{lemma} \label{lem:V_variance} For any fixed $\delta > 0$ and for any $0 < N \leq M < \infty$ we have \begin{equation*} \normb{V^{\epsilon}_M - V^{\epsilon}_N}_{L^2(\P)} \lesssim N^{-1+\delta} \end{equation*} \end{lemma} \begin{proof} By \eqref{V_M-V_N} and Minkowski's inequality, it suffices to estimate \begin{equation} \label{variance_V} \cal R \mathrel{\vcenter{\baselineskip0.65ex \lineskiplimit0pt \hbox{.}\hbox{.}}}= \mathbb{E} \qB{\abs{V^{\epsilon}_{N,M}(a,\tilde a, b, \tilde b)}^2} = \mathbb{E} \qB{V^{\epsilon}_{N,M}(a,\tilde a, b, \tilde b) \, \ol{V^{\epsilon}_{N,M}(a,\tilde a, b, \tilde b)}\,} \end{equation} for any fixed $a, \tilde a, b, \tilde b \in \{0,1\}$ satisfying $a + \tilde a + b + \tilde b > 0$. Using Lemma \ref{lem:prop_G_N} below we find the bounds $G_N(x) \lesssim p(x)$ and $\abs{G_M(x) - G_N(x)} \lesssim q(x)$, where \begin{equation*} p(x) \mathrel{\vcenter{\baselineskip0.65ex \lineskiplimit0pt \hbox{.}\hbox{.}}}= 1 + \abs{\log \abs{x}}\,, \qquad q(x) \equiv q_N(x) \mathrel{\vcenter{\baselineskip0.65ex \lineskiplimit0pt \hbox{.}\hbox{.}}}= (1 + \abs{\log (N \abs{x})}) \wedge \frac{1}{N^2 \abs{x}^2}\,. \end{equation*} Note that $q(x) \lesssim p(x)$. 
Using Wick's theorem, Lemma \ref{lem:Wick} below, and Young's inequality, we find \begin{align*} \cal R &\lesssim \int \r d x \, \r d \tilde x \, \r d y \, \r d \tilde y \, \abs{v^\epsilon(x - \tilde x)} \, \abs{v^\epsilon(y - \tilde y)} \, q(x - y) \, p(x - y) \, p(\tilde x - \tilde y)^2 \\ &\quad + \int \r d x \, \r d \tilde x \, \r d y \, \r d \tilde y \, \abs{v^\epsilon(x - \tilde x)} \, \abs{v^\epsilon(y - \tilde y)} \, q(x - y) \, p(\tilde x - \tilde y) \, p(x - \tilde y) \, p(\tilde x - y) \\ &\lesssim \sup_{y \in \Lambda} \int \, \r d x \, q(x) \, p(x-y)^3 + \int \r d x \, \r d \tilde x \, q(x) \, p(\tilde x)^3 \\ &\lesssim \sup_{y \in \Lambda} \int \, \r d x \, q(x) \, p(x-y)^3 \\ &\lesssim \sup_{y \in \Lambda} \left[ \int_{|x| \leq C/N} \r d x \, \pb{ 1 + |\log |x| | }^4 + \frac{1}{N^2} \int_{|x| > C/N} \r d x \, \pb{ 1 + |\log |x-y| | }^3 \frac{1}{|x|^2} \right] \lesssim N^{-2+\delta} \end{align*} for any $\delta > 0$, where in the third step we used that $\int \r d \tilde x \, p(\tilde x)^3 \lesssim 1 \leq p(x)^3$. \end{proof} \begin{proof}[Proof of Proposition \ref{prop:Nelson}] For any $N \geq 1$ we have, by Lemma \ref{lem:lbound_V}, \begin{equation*} \P(\r e^{-V^{\epsilon}} > t) = \P\pb{V^{\epsilon} - V^{\epsilon}_N < - \log t - V^{\epsilon}_N} \leq \P\pb{V^{\epsilon} - V^{\epsilon}_N < - (\log t - C (\log N)^2)}\,. \end{equation*} Now choose $N \geq 1$ such that \begin{equation*} \log t - C (\log N)^2 = 1\,, \end{equation*} which is always possible for $t$ large enough. Next, we find that $V^{\epsilon}_M$ (or more precisely its real and imaginary parts) is in the $4$th polynomial chaos (see Section \ref{sec:hypercontr}), by using Lemma \ref{lem:chaos_decomp} and the easy fact that $V^{\epsilon}_M$ is orthogonal to the $n$th chaos for $n \neq 4$, which is a consequence of Wick's theorem in Lemma \ref{lem:Wick}. 
Hence, from Remark \ref{rem:hypercontr_complex} and Lemma \ref{lem:V_variance} we deduce that for any $0 < N \leq M < \infty$ and $p \in 2 \mathbb{N}$ we have \begin{equation} \label{V_M-V_N2} \normb{V^{\epsilon}_M - V^{\epsilon}_N}_{L^p(\P)} \lesssim \frac{p^2}{N^{2/3}}\,. \end{equation} Since $V_N^\epsilon \to V^\epsilon$ in $L^2(\P)$ as $N \to \infty$, by Lemma \ref{lem:hyper} we find that \eqref{V_M-V_N2} holds also for $M = \infty$ (i.e.\ replacing $V^\epsilon_M$ with $V^\epsilon$). Hence we get from Chebyshev's inequality, for any $p \in 2\mathbb{N}$, \begin{equation*} \P(\r e^{-V^{\epsilon}} > t) \leq \mathbb{E} \qb{\abs{V^{\epsilon} - V^{\epsilon}_N}^p} \leq \pbb{\frac{C p^2}{N^{2/3}}}^p \leq \pbb{\frac{p^2}{\sqrt{N}}}^p \leq \pb{p^2 \r e^{-c \sqrt{\log t}}}^p\,, \end{equation*} for large enough $t$ (and hence $N$). Choosing $p$ to be the largest element of $2 \mathbb{N}$ smaller than $\r e^{c/2 \sqrt{\log t} - 1/2}$ yields the claim for $V^\epsilon$. Finally, the claim for $V$ easily follows from the one for $V^\epsilon$ and Lemma \ref{lem:conv_V}. \end{proof} \subsection{Convergence of the partition function} \label{sec:field_part_funct} The convergence $\zeta^{W^\epsilon} \to \zeta^{V}$ follows immediately from the following result. \begin{lemma} \label{lem:V_E_L2} For any $1 \leq p < \infty$ we have $\normb{\r e^{-V} - \r e^{-W^\epsilon}}_{L^p(\P)} \to 0$ as $\epsilon \to 0$. \end{lemma} \begin{proof} We begin by estimating $\norm{\r e^{-W^\epsilon}}_{L^p(\P)}$ by comparing it to $\norm{\r e^{-V^\epsilon}}_{L^p(\P)}$ and recalling Proposition \ref{prop:Nelson}. To that end, we note that $V^\epsilon - W^\epsilon$ is in the second polynomial chaos, as can be easily verified by showing using Lemma \ref{lem:Wick} that $V^\epsilon - W^\epsilon$ is orthogonal to the $n$th polynomial chaos for $n \neq 2$, and recalling Lemma \ref{lem:chaos_decomp}. 
Hence, by the hypercontractive bound from Lemma \ref{lem:hyper} we obtain \begin{multline*} \normb{\r e^{V^\epsilon - W^\epsilon} - 1}_{L^p(\P)} \leq \sum_{k \geq 1} \frac{1}{k!}\, \norm{(V^\epsilon - W^\epsilon)^k}_{L^p(\P)} \\ = \sum_{k \geq 1} \frac{1}{k!}\, \norm{V^\epsilon - W^\epsilon}_{L^{pk}(\P)}^k \lesssim \sum_{k \geq 1} \frac{1}{k!} \, (pk)^k \norm{V^\epsilon - W^\epsilon}_{L^2(\P)}^k\,. \end{multline*} Using Lemma \ref{lem:R_variance} and Proposition \ref{prop:Nelson}, we conclude that for small enough $\epsilon$ (depending on $p$), $\norm{\r e^{-W^\epsilon}}_{L^p(\P)}$ is uniformly bounded in $\epsilon$. The claim now follows by writing \begin{equation*} \normb{\r e^{-V} - \r e^{-W^\epsilon}}_{L^p(\P)} \leq \int_0^1 \r d t \, \normB{(V - W^\epsilon) \, \r e^{-t W^\epsilon - (1 - t) V}}_{L^p(\P)}\,, \end{equation*} applying Hölder's inequality to the right-hand side, and combining Lemmas \ref{lem:R_variance} and \ref{lem:conv_V} with Lemma \ref{lem:hyper} and the observation that $W^\epsilon - V$ lies in the span of the polynomial chaoses up to order four. \end{proof} \subsection{Convergence of correlation functions} In order to obtain the uniform convergence of the Wick-ordered correlation functions $(\widehat \gamma^{W^\epsilon}_p)_{\f x, \tilde {\f x}}$ to $(\widehat \gamma^V_p)_{\f x, \tilde {\f x}}$, we use a representation obtained by repeated Gaussian integration by parts. To that end, we shall introduce a differential operator, denoted by $L_{N,x}$, such that $L_{N,x} \bar \phi(y) = G_N(x-y)$ and hence, formally, $L_{N,x} = \int \r d y \, G_N(x - y) \frac{\delta}{\delta \bar \phi(y)}$. Our argument may be viewed as an instance of Malliavin calculus, with $L_{N,x}$ playing the role of the Malliavin derivative. We choose the regularizing function $\vartheta$ to have compact support. Recall (see \eqref{def_phi_N}) that the underlying probability space consists of elements $X = (X_k)_{k \in \mathbb{Z}^d}$ with $X_k \in \mathbb{C}$. 
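The operators introduced next rest on the elementary complex Gaussian integration by parts identity $\mathbb{E}[Z f(Z, \bar Z)] = \mathbb{E}[\partial_{\bar Z} f(Z, \bar Z)]$, invoked in the proof of Lemma \ref{lem:IBP} below. As a purely illustrative check, the following snippet verifies it by Monte Carlo for the test function $f = Z \bar Z^2$, for which both sides equal $2 \, (\mathbb{E} \abs{Z}^2)^2 = 2$.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10**6
# standard complex Gaussian, E|Z|^2 = 1
Z = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)

f = Z * np.conj(Z) ** 2             # test function f(Z, Zbar) = Z Zbar^2
lhs = (Z * f).mean()                # E[Z f] = E[|Z|^4]
rhs = (2 * Z * np.conj(Z)).mean()   # E[d_{Zbar} f] = E[2 |Z|^2]

print(lhs.real, rhs.real)           # both ~ 2
```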
Define $\cal T$ to be the space of random variables of the form $f(X)$, where $f$ is smooth in the sense that all of its partial derivatives exist. We denote by $\partial_{X_k}$ and $\partial_{\bar X_k}$ the usual holomorphic and antiholomorphic partial derivatives in the complex variable $X_k$. On the space $\cal T$ we define the first order differential operators \begin{equation*} L_{N,x} \mathrel{\vcenter{\baselineskip0.65ex \lineskiplimit0pt \hbox{.}\hbox{.}}}= \sum_{k \in \mathbb{Z}^d} \frac{1}{\sqrt{\lambda_k}}\, \sqrt{\vartheta(k / N)} \, u_k(x) \, \partial_{\bar X_k}\,, \qquad \bar L_{N,x} \mathrel{\vcenter{\baselineskip0.65ex \lineskiplimit0pt \hbox{.}\hbox{.}}}= \sum_{k \in \mathbb{Z}^d} \frac{1}{\sqrt{\lambda_k}}\, \sqrt{\vartheta(k / N)} \, \bar u_k(x) \, \partial_{X_k}\,, \end{equation*} where $x \in \Lambda$ and $0 < N < \infty$. Here we recall the definitions \eqref{lambda_k}. Note that, owing to our choice of $\vartheta$, each sum is finite. We record a few simple properties of $L_{N,x}$. \begin{lemma} \label{lem:IBP} Let $f(X) \in \cal T \cap L^1(\P)$. Then \begin{equation*} \mathbb{E}[L_{N,x} f(X)] = \mathbb{E} [\phi_N(x) f(X)] \,, \qquad \mathbb{E}[\bar L_{N,x} f(X)] = \mathbb{E} [\bar \phi_N(x) f(X)]\,. \end{equation*} \end{lemma} \begin{proof} We only prove the first identity. We use that if $Z$ is a standard complex Gaussian random variable, then $\mathbb{E}[Z f(Z)] = \mathbb{E}[ \partial_{\bar Z} f(Z)]$, as can be seen by integration by parts. Thus, using that each $X_k$ is a standard complex Gaussian random variable independent of the others, we get \begin{align*} \mathbb{E}[L_{N,x} f(X)] &= \sum_{k \in \mathbb{Z}^d} \frac{1}{\sqrt{\lambda_k}}\, \sqrt{\vartheta(k / N)} \, u_k(x) \, \mathbb{E} [\partial_{\bar X_k} f(X)] \\ &= \sum_{k \in \mathbb{Z}^d} \frac{1}{\sqrt{\lambda_k}}\, \sqrt{\vartheta(k / N)} \, u_k(x) \, \mathbb{E} [X_k f(X)] \\ &= \mathbb{E}[\phi_N(x) f(X)]\,.
\qedhere \end{align*} \end{proof} \begin{lemma} \label{lem:L_phi} For $x,y \in \Lambda$, we have \begin{equation*} L_{N,x} \, \phi(y) = 0\,, \qquad L_{N,x}\, \bar \phi(y) = G_N(x - y) \end{equation*} in the sense of distributions in the variable $y$. Similar identities hold for $\bar L_{N,x}$. \end{lemma} \begin{proof} We only prove the second identity. We compute \begin{equation*} L_{N,x}\, \bar \phi(y) = \sum_{k \in \mathbb{Z}^d} \frac{1}{\sqrt{\lambda_k}}\, \vartheta(k / N) \, u_k(x) \, \frac{1}{\sqrt{\lambda_k}} \, \bar u_k(y) = G_N(x-y)\,, \end{equation*} where the last step follows from \eqref{G_N_phi}\,. \end{proof} We may now prove the representation of \eqref{hat_gamma_p} underlying our proof. We denote the regularized Wick-ordered correlation function by \begin{equation*} (\widehat{\gamma}_{N,p}^V)_{\f x,\tilde{\f x}} \mathrel{\vcenter{\baselineskip0.65ex \lineskiplimit0pt \hbox{.}\hbox{.}}}= \frac{1}{\zeta^V}\, \mathbb{E} \qB{\r e^{-V} \wick{\bar{\phi}_N(\tilde x_1) \cdots \bar{\phi}_N(\tilde x_p)\,\phi_N(x_1)\cdots \phi_N(x_p)}}\,. \end{equation*} \begin{lemma} \label{lem:IBP_representation} We have \begin{equation*} (\widehat{\gamma}_{N,p}^V)_{\f x,\tilde{\f x}} = \frac{1}{\zeta^V} \mathbb{E} \qB{\bar L_{N, \tilde x_1} \cdots \bar L_{N, \tilde x_p} L_{N, x_1} \cdots L_{N, x_p} \r e^{-V}}\,. \end{equation*} \end{lemma} \begin{proof} Using the recursive characterization of Wick ordering from \eqref{Wick_recursion}, we find \begin{multline*} \mathbb{E} \qBB{\r e^{-V} \wick{ \prod_{i \in [p]} \bar{\phi}_N(\tilde x_i) \prod_{i \in [p]} \phi_N(x_i)}} = \mathbb{E} \qBB{\r e^{-V} \wick{ \prod_{i \in [p]} \bar{\phi}_N(\tilde x_i) \prod_{i \in [p-1]} \phi_N(x_i)} \, \phi_N(x_p)} \\ - \sum_{j \in [p]} G_N(\tilde x_j - x_p) \, \mathbb{E} \qBB{\r e^{-V} \wick{ \prod_{i \in [p] \setminus \{j\}}\bar{\phi}_N(\tilde x_i) \prod_{i \in [p-1]} \phi_N(x_i)}}\,. \end{multline*} Here we used \eqref{G_N_phi} and that $\mathbb{E}[\phi_N(x) \phi_N(y)] = 0$. 
Using Lemmas \ref{lem:IBP} and \ref{lem:L_phi} as well as the Leibniz rule for the operator $L_{N,x}$, we write the first term on the right-hand side as \begin{multline*} \mathbb{E} \qBB{L_{N,x_p} \pBB{\r e^{-V} \wick{ \prod_{i \in [p]} \bar{\phi}_N(\tilde x_i) \prod_{i \in [p-1]} \phi_N(x_i)}}} = \mathbb{E} \qBB{L_{N,x_p} (\r e^{-V}) \, \wick{ \prod_{i \in [p]} \bar{\phi}_N(\tilde x_i) \prod_{i \in [p-1]} \phi_N(x_i)}} \\ + \sum_{j \in [p]} G_N(\tilde x_j - x_p) \, \mathbb{E} \qBB{\r e^{-V} \wick{ \prod_{i \in [p] \setminus \{j\}}\bar{\phi}_N(\tilde x_i) \prod_{i \in [p-1]} \phi_N(x_i)}}\,. \end{multline*} We conclude that \begin{equation*} \mathbb{E} \qBB{\r e^{-V} \wick{ \prod_{i \in [p]} \bar{\phi}_N(\tilde x_i) \prod_{i \in [p]} \phi_N(x_i)}} = \mathbb{E} \qBB{L_{N,x_p} (\r e^{-V}) \, \wick{ \prod_{i \in [p]} \bar{\phi}_N(\tilde x_i) \prod_{i \in [p-1]} \phi_N(x_i)}}\,. \end{equation*} Repeating this argument $2p$ times yields the claim. \end{proof} Since $\zeta^{W^\epsilon} \to \zeta^V$, by the chain rule for the differential operator $L_{N,x}$, Hölder's inequality, and telescoping, we find that \eqref{convergence_gamma_field} follows from Lemmas \ref{lem:IBP_representation} and \ref{lem:V_E_L2} combined with the following result. \begin{lemma} \label{lem:derivatives-convergence} Let $0 \leq \ell \leq 4$ and $\f z = (z_1, \dots, z_\ell) \in \Lambda^\ell$. Abbreviate $\cal L_{N, \f z} \mathrel{\vcenter{\baselineskip0.65ex \lineskiplimit0pt \hbox{.}\hbox{.}}}= L_{N, z_1}^\# \cdots L_{N, z_\ell}^\#$, where each $L^\#$ stands for either $L$ or $\bar L$. Then the following holds for any $r \geq 1$. \begin{enumerate}[label=(\roman*)] \item \label{lab:der1} $\sup_N \sup_{\f z} \norm{\cal L_{N, \f z}V}_{L^r(\P)} < \infty$. \item \label{lab:der2} As $\epsilon \to 0$ we have $\sup_N \sup_{\f z} \norm{\cal L_{N, \f z} W^\epsilon - \cal L_{N, \f z} V}_{L^r(\P)} \to 0$. 
\end{enumerate} \end{lemma} \begin{proof} We only prove \ref{lab:der2}; the proof of \ref{lab:der1} is similar, in fact easier. Since $\cal L_{N, \f z} W^\epsilon - \cal L_{N, \f z} V$ is a superposition of random variables in polynomial chaoses of order at most four (see Section \ref{sec:hypercontr}), from the hypercontractivity estimate of Remark \ref{rem:hypercontr_complex} we find that it suffices to consider $r = 2$. We proceed by telescoping via $V^\epsilon$, writing $W^\epsilon - V = (W^\epsilon - V^\epsilon) + (V^\epsilon - V)$. Thus, we have to differentiate the random variables \begin{align*} W^\epsilon - V^\epsilon &= \int \r d x \, \r d \tilde x \, v^\epsilon(x - \tilde x) \, G(x - \tilde x) \, \pb{\wick{\bar \phi(x) \phi(\tilde x)} - \wick{\abs{\phi(x)}^2}}\,, \\ V^\epsilon - V &= \frac{1}{2} \int \r d x \, \r d \tilde x \, \pb{v^\epsilon(x - \tilde x) - \delta(x - \tilde x)} \,\wick{ \abs{\phi(x)}^2 \, \abs{\phi(\tilde x)}^2} \end{align*} by $\cal L_{N, \f z}$. The zeroth order derivatives, $\ell = 0$, were estimated in Lemmas \ref{lem:R_variance} and \ref{lem:conv_V}. For the higher-order derivatives, let us start with $W^\epsilon - V^\epsilon$. From \eqref{Wick_derivative} and Lemma \ref{lem:L_phi}, we find \begin{equation*} L_{N,z} (W^\epsilon - V^\epsilon) = \int \r d x \, \r d \tilde x \, v^\epsilon(x - \tilde x) \, G(x - \tilde x) \, G_N(z - x) \, \p{\phi(\tilde x) - \phi(x)}\,, \end{equation*} so that Lemma \ref{lem:Wick} yields \begin{multline*} \normb{L_{N,z} (W^\epsilon - V^\epsilon)}_{L^2(\P)}^2 = \int \r d x \, \r d \tilde x \, \r d y \, \r d \tilde y \, v^\epsilon(x - \tilde x) \, G(x - \tilde x) \, G_N(z - x) \, v^\epsilon(y - \tilde y) \, G(y - \tilde y) \, G_N(z - y) \\ \times \pb{G(x-y) + G(\tilde x - \tilde y) - G(x - \tilde y) - G(\tilde x - y)}\,.
\end{multline*} The right-hand side is estimated similarly to the proof of Lemma \ref{lem:R_variance}, uniformly in $N$ and $z \in \Lambda$, using that $G_N$ is uniformly bounded in $L^2(\Lambda)$, and the bound \begin{equation*} \absb{G(x-y) + G(\tilde x - \tilde y) - G(x - \tilde y) - G(\tilde x - y)} \leq \abs{G(x - y) - G(x - \tilde y)} + \abs{G(\tilde x - \tilde y) - G(\tilde x - y)}\,. \end{equation*} Next, \begin{equation*} \bar L_{N, \tilde z} L_{N,z} (W^\epsilon - V^\epsilon) = \int \r d x \, \r d \tilde x \, v^\epsilon(x - \tilde x) \, G(x - \tilde x) \, G_N(z - x) \, \pb{G_N(\tilde z - \tilde x) - G_N(\tilde z - x)}\,, \end{equation*} which can again be estimated as in the proof of Lemma \ref{lem:R_variance}, uniformly in $N$ and $z, \tilde z \in \Lambda$. This concludes the estimate of $\norm{\cal L_{N, \f z} W^\epsilon - \cal L_{N, \f z} V^\epsilon}_{L^2(\P)}$. To estimate $\norm{\cal L_{N, \f z} V^\epsilon - \cal L_{N, \f z} V}_{L^2(\P)}$ we compute using \eqref{Wick_derivative} and Lemma \ref{lem:L_phi} \begin{equation*} L_{N,z} (V^\epsilon - V) = \int \r d x \, \r d \tilde x \, \pb{v^\epsilon(x - \tilde x) - \delta(x - \tilde x)} \, G_N(z - x) \,\wick{\phi(x) \, \abs{\phi(\tilde x)}^2}\,, \end{equation*} so that Lemma \ref{lem:Wick} yields \begin{multline*} \normb{L_{N,z} (V^\epsilon - V)}_{L^2(\P)}^2 = \int \r d x \, \r d \tilde x \, \r d y \, \r d \tilde y \, \pb{v^\epsilon(x - \tilde x) - \delta(x - \tilde x)} \, \pb{v^\epsilon(y - \tilde y) - \delta(y - \tilde y)} \, G_N(z - x) \, G_N(z - y) \\ \times \pb{G(x-y) G(\tilde x - \tilde y)^2 + G(x - \tilde y) G(\tilde x - y) G(\tilde x - \tilde y)}\,. \end{multline*} The right-hand side is estimated as in the proof of Lemma \ref{lem:R_variance}, uniformly in $N$ and $z \in \Lambda$. The higher derivatives are estimated analogously. This concludes the proof. 
\end{proof} \section{Proof of Proposition \ref{prop:step1}} \label{The rate of convergence} We study the rate of convergence of the relative partition function and the correlation functions in the mean-field limit, while keeping track of the parameter $\epsilon$. This amounts to a quantitative analysis of the infinite-dimensional saddle point argument for the functional integral introduced in \cite{FKSS_2020}. Note that, without the $\tau^\epsilon$ correction term in \eqref{Hamiltonian_H}, \eqref{W^epsilon}, and with $\epsilon=1$, this convergence was obtained in a qualitative way in \cite[Section 5]{FKSS_2020}. The main ingredients that enable a quantitative analysis are: (a) the Lipschitz continuity of the interaction potential $v^{\epsilon}$, with Lipschitz constant depending on $\epsilon$ (see Lemma \ref{v^{epsilon}_properties} (ii) below), and (b) quantitative $L^p$-H\"{o}lder continuity properties of Brownian motion (see Lemma \ref{Heat_kernel_estimates} (ii) below). As a result, we can find a suitable choice of $\epsilon$ as a function of $\nu$ such that we obtain the desired convergence as $\nu \rightarrow 0$; see \eqref{epsilon_lower_bound} below. The setup is based on \cite{FKSS_2020}, and we refer the reader to this work for many of the proofs and explanations. Our methods work for $d = 2,3$, and hence all results of this section are stated for both dimensions. We now state the explicit lower bound on $\epsilon \equiv \epsilon(\nu)$. Namely, throughout the sequel we assume that $\epsilon$ satisfies \begin{equation} \label{epsilon_lower_bound} \epsilon(\nu) \gtrsim \begin{cases} \exp\Bigl(-(\log \nu^{-1})^{\frac{1-a}{2}}\Bigr) & \text{if } d=2\\ (\log \nu^{-1})^{-\frac{1-a}{2}}& \text{if } d=3\,, \end{cases} \end{equation} for some $a \in (0,1)$.
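For orientation, note that with this choice both cases give $\chi(\epsilon)^2 \asymp (\log \nu^{-1})^{1-a}$, where $\chi$ is defined in \eqref{sigma(epsilon)} below, so that any factor $\r e^{C \chi(\epsilon)^2}$ is eventually dominated by any power $\nu^b$. The following snippet (with arbitrary illustrative values of $C$, $b$, $a$) traces this competition in the variable $L = \log \nu^{-1}$.

```python
def log_factor(L, C=1.0, b=0.1, a=0.5):
    """Log of e^{C chi(eps)^2} * nu^b, with L = log(1/nu) and chi(eps)^2 = L^{1-a}."""
    return C * L ** (1 - a) - b * L

# as nu -> 0 (L -> infinity), the power nu^b wins and the product tends to 0
vals = [log_factor(L) for L in (1e2, 1e4, 1e6)]
print(vals)   # decreasing, eventually very negative
```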
Let us define $\chi \colon \mathbb{R}^+ \rightarrow \mathbb{R}$ by \begin{equation} \label{sigma(epsilon)} \chi(t) \mathrel{\vcenter{\baselineskip0.65ex \lineskiplimit0pt \hbox{.}\hbox{.}}}= \begin{cases} \log t^{-1} & \text{if } d=2 \\ t^{-1} & \text{if } d=3\,. \end{cases} \end{equation} By \eqref{v_epsilon}, \eqref{tau_epsilon}, \eqref{sigma(epsilon)}, Lemma \ref{lem:G_smoothness} (when $d=2$), and Remark \ref{G_smoothness_3D_remark} (when $d=3$), we note that\footnote{Throughout the sequel, we do not emphasize the dependence of the implied constants on $d=2,3$.} \begin{equation} \label{tau(epsilon)_bound} |\tau^\epsilon| \lesssim_{\kappa,v} \chi(\epsilon)\,. \end{equation} Furthermore, by \eqref{sigma(epsilon)} and \eqref{epsilon_lower_bound}, it follows that for all $C,b>0$, we have \begin{equation} \label{epsilon_lower_bound_2} \lim_{\nu \rightarrow 0} \r e^{C \chi(\epsilon)^2}\,\nu^{b}=0\,. \end{equation} We now state the main results which, in light of \eqref{epsilon_lower_bound_2}, imply Proposition \ref{prop:step1}. \begin{proposition} \label{Partition_function_rate_of_convergence} There exists $C_1>0$ depending on $\kappa,v$ such that \begin{equation*} \Bigl|\cal Z-\zeta^{W^\epsilon}\Bigr| \lesssim_{\kappa,v} \begin{cases} \r e^{C_1 \chi(\epsilon)^2}\, \nu^{1/4} & \text{if } d=2 \\ \r e^{C_1 \chi(\epsilon)^2}\, \nu^{1/4}\,\log \nu^{-1} & \text{if } d=3\,. \end{cases} \end{equation*} \end{proposition} In statements of results, we use the notation $b-$ to mean that the statement holds for $b - c$ for any constant $c > 0$. \begin{proposition} \label{Correlation_functions_rate_of_convergence} For $p \in \mathbb{N}$, we define \begin{equation} \label{theta_definition} \theta(d,p) \mathrel{\vcenter{\baselineskip0.65ex \lineskiplimit0pt \hbox{.}\hbox{.}}}= \begin{cases} \frac{1}{4p+4} & \text{if } d=2 \\ \frac{1}{12p+8}-& \text{if } d=3\,.
\end{cases} \end{equation} There exists $C_2>0$ depending on $\kappa,v$ such that \begin{equation*} \Bigl\|\nu^p \, \widehat{\Gamma}_{p}-\widehat{\gamma}_{p}^{W^\epsilon}\Bigr\|_{C} \lesssim_{p,\kappa,v} \r e^{C_2 \chi(\epsilon)^2}\,\nu^{\theta(d,p)}\,. \end{equation*} \end{proposition} \begin{remark} \label{v_assumptions} We note that, in order to obtain Propositions \ref{Partition_function_rate_of_convergence} and \ref{Correlation_functions_rate_of_convergence} above, and hence Proposition \ref{prop:step1}, we only need to use the bounds from Lemma \ref{v^{epsilon}_properties} below (see Appendix \ref{Appendix C} for details). In light of this observation, we can consider more general $v$, which are not smooth. We always assume that $v :\mathbb{R}^d \rightarrow \mathbb{R}$ is even, $L^1$ with integral 1, and of positive type. Since $v$ is not smooth, we need to consider suitable regularizations $v_{\eta}$ of $v$. For a motivation and detailed description of the regularization, we refer the reader to \cite[Section 3.1]{FKSS_2020} and \cite[Section 4.1]{FKSS_2020}. A summary is also given in the study of the nonlocal problem in Section \ref{Extension of results to unbounded nonlocal interactions} below. In the first generalization, we consider $v$ Lipschitz and compactly supported. Then the result of Lemma \ref{v^{epsilon}_properties} holds for $v^{\epsilon}_{\eta}$ (given by \eqref{v_epsilon} with $v$ replaced by $v_{\eta}$), uniformly in $\eta$. This follows immediately from the proof of Lemma \ref{v^{epsilon}_properties} below. In the second generalization, we assume that $v$ is differentiable and that uniformly in $\epsilon \in (0,1)$, we have \begin{equation} \label{v_epsilon_assumptions_2} \sup_{x \in \Lambda}\,\sum_{n \in \mathbb{Z}^d} \Biggl|v \pbb{\frac{x - n}{\epsilon}}\Biggr|+\sup_{x \in \Lambda}\,\sum_{n \in \mathbb{Z}^d} \Biggl|\nabla v \pbb{\frac{x - n}{\epsilon}}\Biggr| \lesssim 1\,. 
\end{equation} Let us note that \eqref{v_epsilon_assumptions_2} holds if we assume that there exists $a>d$ such that for all $x \in \mathbb{R}^d$ \begin{equation*} |v(x)|+|\nabla v(x)| \lesssim \frac{1}{(1+|x|)^{a}}\,. \end{equation*} In particular, under the latter conditions, we do not need to assume that $v$ is compactly supported. \end{remark} \subsection{The partition function} In this subsection, we prove Proposition \ref{Partition_function_rate_of_convergence}. Before proceeding with the proof, we make several observations and review the functional integral representation from \cite{FKSS_2020}. We first recall some basic notions for Brownian paths. Given $0 \leq \tilde{\tau} < \tau$, we denote by $\Omega^{\tau,\tilde{\tau}}$ the space of continuous paths $\omega \colon [\tilde{\tau},\tau] \rightarrow \Lambda$. Given $\tilde{x} \in \Lambda$ and $0 \leq \tilde{\tau} <\tau$, $\mathbb{P}^{\tau,\tilde{\tau}}_{\tilde{x}}(\r d \omega)$ denotes the law on $\Omega^{\tau,\tilde{\tau}}$ of standard Brownian motion with periodic boundary conditions on $\Lambda$ that equals $\tilde{x}$ at time $\tilde{\tau}$. Given $x,\tilde{x} \in \Lambda$ and $0 \leq \tilde{\tau} <\tau$, $\mathbb{P}^{\tau,\tilde{\tau}}_{x,\tilde{x}}(\r d \omega)$ denotes the law on $\Omega^{\tau,\tilde{\tau}}$ of the Brownian bridge with periodic boundary conditions on $\Lambda$ that equals $\tilde{x}$ at time $\tilde{\tau}$ and $x$ at time $\tau$. For $t>0$, we write the heat kernel on $\Lambda$ as \begin{equation*} \psi^t(x)\mathrel{\vcenter{\baselineskip0.65ex \lineskiplimit0pt \hbox{.}\hbox{.}}}= \r e^{t \Delta/2}(x)=\sum_{n \in \mathbb{Z}^d} \frac{1}{(2\pi t)^{d/2}} \,\r e^{-\frac{|x-n|^2}{2t}}\,.
\end{equation*} For $x,\tilde{x} \in \Lambda$ and $0 \leq \tilde{\tau} <\tau$, we define the positive measure \begin{equation} \label{W_measure} \mathbb{W}^{\tau,\tilde{\tau}}_{x,\tilde{x}}(\r d \omega) \mathrel{\vcenter{\baselineskip0.65ex \lineskiplimit0pt \hbox{.}\hbox{.}}}= \psi^{\tau-\tilde{\tau}}(x-\tilde{x})\,\mathbb{P}^{\tau,\tilde{\tau}}_{x,\tilde{x}}(\r d \omega)\,. \end{equation} Given $n \in \mathbb{N}^*$, $\tilde{\tau}<t_{1}<\cdots<t_{n}<\tau$, and $f \colon \Lambda^n \rightarrow \mathbb{R}$ continuous, the measure \eqref{W_measure} satisfies \begin{align*} &\int \mathbb{W}^{\tau,\tilde{\tau}}_{x,\tilde{x}}(\r d \omega)\,f(\omega(t_{1}),\ldots,\omega(t_{n}))= \\ &\int \r d x_{1}\,\cdots\,\r d x_{n}\,\psi^{t_{1}-\tilde{\tau}}(x_{1}-\tilde{x})\,\psi^{t_{2}-t_{1}}(x_{2}-x_{1})\,\cdots\,\psi^{t_{n}-t_{n-1}}(x_{n}-x_{n-1})\,\psi^{\tau-t_{n}}(x-x_{n})\,f(x_{1},\ldots,x_{n})\,. \end{align*} We note several useful estimates for the above quantities. \begin{lemma} \label{Heat_kernel_estimates} The following estimates hold. \begin{itemize} \item[(i)] There exists a constant $C>0$ such that for all $0 \leq \tilde{\tau} <\tau$, we have \begin{equation*} \sup_{x,\tilde{x}} \int \mathbb{W}^{\tau,\tilde{\tau}}_{x,\tilde{x}}(\r d \omega)=\sup_{x,\tilde{x}} \psi^{\tau-\tilde{\tau}}(x-\tilde{x}) \leq C\,\biggl(1+\frac{1}{(\tau-\tilde{\tau})^{d/2}}\biggr)\,. \end{equation*} \item[(ii)] There exists a constant $C>0$ such that for all $\tilde{\tau} \leq s \leq t \leq \tau$, we have \begin{equation} \label{Heat_kernel_estimates_ii_1} \int \mathbb{W}^{\tau,\tilde{\tau}}_{x,\tilde{x}}(\r d \omega)\,|\omega(t)-\omega(s)|^2_{\Lambda} \leq C \,\biggl(1+\frac{1}{(\tau-\tilde{\tau})^{d/2}}\biggr)\,\biggl((t-s)+ |x-\tilde{x}|_{\Lambda}^2\,\frac{(t-s)^2}{(\tau-\tilde{\tau})^2}\biggr)
\end{equation} and \begin{equation} \label{Heat_kernel_estimates_ii_2} \int \mathbb{W}^{\tau,\tilde{\tau}}_{x,\tilde{x}}(\r d \omega)\,|\omega(t)-\omega(s)|_{\Lambda} \leq C \,\biggl(1+\frac{1}{(\tau-\tilde{\tau})^{d/2}}\biggr)\,\biggl((t-s)^{1/2}+ |x-\tilde{x}|_{\Lambda}\,\frac{t-s}{\tau-\tilde{\tau}}\biggr)\,. \end{equation} Here $|x|_{\Lambda} \mathrel{\vcenter{\baselineskip0.65ex \lineskiplimit0pt \hbox{.}\hbox{.}}}= \min_{n\in \mathbb{Z}^d} |x - n|$ denotes the periodic Euclidean norm of $x \in \Lambda$. \item[(iii)] For $0<s\leq t$, we have \begin{equation*} \|\psi^t-\psi^s\|_{L^1(\Lambda)} \leq d \log(t/s)\,. \end{equation*} \end{itemize} \end{lemma} The results of Lemma \ref{Heat_kernel_estimates} are contained in \cite{FKSS_2020}. Part (i) is given in \cite[Lemma 2.2]{FKSS_2020}. Estimate \eqref{Heat_kernel_estimates_ii_1} is proved in \cite[Lemma 2.3]{FKSS_2020}. Estimate \eqref{Heat_kernel_estimates_ii_2} then follows from the Cauchy-Schwarz inequality and part (i). Part (iii) follows since \begin{equation*} \|\psi^{t}-\psi^{s}\|_{L^1} \leq \int_{s}^{t} \r d u\, \|\partial_{u} \psi^{u}\|_{L^1} \leq \int_{s}^{t} \r d u\, \int_{\mathbb{R}^d} \r d x\, \biggl(\frac{d}{2u}+\frac{|x|^2}{2u^2}\biggr)\,\widetilde{\psi}^{u}(x)=d \log(t/s)\,, \end{equation*} where $\widetilde{\psi}^{u}(x)=\frac{1}{(2\pi u)^{d/2}}\,\r e^{-\frac{|x|^2}{2u}}$ is the heat kernel on $\mathbb{R}^d$, as was noted in the proof of \cite[Lemma 5.18]{FKSS_2020}. Let us fix a function $\varphi \in C_c^{\infty}(\mathbb{R})$ which is even, nonnegative, of positive type, and which satisfies $\varphi(0)=1$. 
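Returning for a moment to Lemma \ref{Heat_kernel_estimates} (iii): the bound is easy to probe numerically. In the $d=1$ analogue on $\mathbb{R}$ (the proof is identical) it reads $\|\widetilde{\psi}^{t}-\widetilde{\psi}^{s}\|_{L^1(\mathbb{R})} \leq \log(t/s)$. A quadrature sanity check, illustrative only:

```python
import numpy as np

def heat_kernel_R(x, t):
    # heat kernel e^{t d^2/dx^2 / 2} on the real line (d = 1 analogue)
    return np.exp(-x**2 / (2 * t)) / np.sqrt(2 * np.pi * t)

s, t = 0.5, 1.0
x = np.linspace(-30.0, 30.0, 200001)
dx = x[1] - x[0]

# Riemann-sum approximation of the L^1 distance
l1 = np.abs(heat_kernel_R(x, t) - heat_kernel_R(x, s)).sum() * dx
bound = np.log(t / s)        # d * log(t/s) with d = 1
print(l1, bound)             # the computed distance sits below the bound
```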
For fixed $L>0$, given $\eta>0$, we define the $\nu$-periodic function \begin{equation} \label{approximate_delta_function} \delta_{\eta}(\tau) \mathrel{\vcenter{\baselineskip0.65ex \lineskiplimit0pt \hbox{.}\hbox{.}}}= \frac{1}{\eta} \sum_{y \in \mathbb{Z}} (\fra F^{-1} \varphi) \biggl(\frac{\tau- \nu y}{\eta}\biggr)\,, \end{equation} where $\fra F$ denotes the Fourier transform (see \eqref{Fourier}). Here, \eqref{approximate_delta_function} can be interpreted as an approximate delta function on $[-\nu/2,\nu/2)$. By construction, we have \begin{equation} \label{delta_nu_integral} \int_{0}^{\nu} \r d \tau\,\delta_{\eta}(\tau)=1\,,\qquad \delta_{\eta} \geq 0\,. \end{equation} For simplicity of notation, we suppress the dependence on $\epsilon$ and $\nu$ in the quantum objects. We only emphasize the $\eta$ dependence through a subscript when appropriate. We write the $\epsilon$ dependence as a superscript in the classical objects. Let us note several properties of $v^{\epsilon}$ that follow from \eqref{v_epsilon}. \begin{lemma} \label{v^{epsilon}_properties} There exists $C>0$, depending only on $v$, such that the following properties hold. \begin{itemize} \item[(i)] $\|v^{\epsilon}\|_{L^{\infty}(\Lambda)} \leq \frac{C}{\epsilon^d}$. \item[(ii)] We have that \begin{equation*} |v^{\epsilon}(x)-v^{\epsilon}(y)|\leq \frac{C}{\epsilon^{d+1}}\,|x-y|_{\Lambda} \end{equation*} for all $x,y \in \Lambda$. \end{itemize} \end{lemma} For $\eta>0$, recalling \eqref{approximate_delta_function}, we let $(\cal{C}_{\eta})^{\tau,\tilde{\tau}}_{x,\tilde{x}} \mathrel{\vcenter{\baselineskip0.65ex \lineskiplimit0pt \hbox{.}\hbox{.}}}= \nu \,\delta_{\eta}(\tau-\tilde{\tau})\,v^{\epsilon}(x-\tilde{x})$ and define $\mu_{\cal C_{\eta}}(\r d \sigma)$ to be the real Gaussian measure with mean zero and covariance \begin{equation} \label{Gaussian_measure_quantum} \int \mu_{\cal C_{\eta}} (\r d \sigma) \, \sigma(\tau,x)\,\sigma(\tilde{\tau},\tilde{x})=(\cal{C}_{\eta})^{\tau,\tilde{\tau}}_{x,\tilde{x}}\,.
\end{equation} Since $v \in C_c^{\infty}(\mathbb{R}^d)$, under the law $\mu_{\cal C_{\eta}}$, $\sigma$ is almost surely a smooth periodic function on $[0,\nu] \times \Lambda$. By a direct calculation, we can rewrite \eqref{Hamiltonian_H} as \begin{equation} \label{H^epsilon_2} H=\bigoplus_{n \in \mathbb{N}} \biggl(H_{n,\varrho^{\epsilon}}-\frac{(\tau^\epsilon)^2}{2}\biggr)\,, \end{equation} where \begin{equation} \label{H^epsilon_3} H_{n,\varrho} \mathrel{\vcenter{\baselineskip0.65ex \lineskiplimit0pt \hbox{.}\hbox{.}}}= \Biggl[\nu \sum_{i=1}^{n} (\kappa-\Delta/2)_i + \frac{\nu^2}{2} \sum_{i,j=1}^{n}v^{\epsilon}(x_i-x_j)\Biggr]-\varrho \nu^2 n+\frac{\varrho^2\nu^2}{2} \end{equation} and $\varrho^{\epsilon}\mathrel{\vcenter{\baselineskip0.65ex \lineskiplimit0pt \hbox{.}\hbox{.}}}= \frac{\varrho_\nu}{\nu}+\frac{\tau^\epsilon}{\nu}$. In particular, by arguing as in \cite[Appendix B]{FKSS_2020} and recalling \cite[Proposition 3.12 (iii)]{FKSS_2020}, \cite[Lemma 5.4]{FKSS_2020}, we obtain the functional integral representation for the quantum relative partition function \eqref{Z^epsilon_quantum}. 
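In the covariance $(\cal C_{\eta})^{\tau,\tilde{\tau}}_{x,\tilde{x}}$ above, the normalization \eqref{delta_nu_integral} amounts to unfolding the periodization: $\int_0^\nu \r d \tau \, \delta_\eta(\tau) = \frac{1}{\eta} \int_{\mathbb{R}} \r d \tau \, (\fra F^{-1} \varphi)(\tau / \eta) = \varphi(0) = 1$. The snippet below checks this numerically with a Gaussian standing in for $\fra F^{-1} \varphi$ (an illustrative assumption; a true $\fra F^{-1}\varphi$ with $\varphi \in C_c^{\infty}$ is Schwartz but not Gaussian).

```python
import numpy as np

nu, eta = 0.3, 0.05
g = lambda u: np.exp(-u**2 / 2) / np.sqrt(2 * np.pi)   # stand-in for F^{-1} phi

def delta_eta(tau):
    # nu-periodization of g(./eta)/eta, cf. (approximate_delta_function)
    ys = np.arange(-50, 51)
    return np.sum(g((tau[..., None] - nu * ys) / eta), axis=-1) / eta

m = 20000
tau = (np.arange(m) + 0.5) * nu / m      # midpoint rule on [0, nu)
integral = delta_eta(tau).sum() * nu / m
print(integral)   # ~ 1, in line with (delta_nu_integral)
```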
\begin{lemma} \label{Partition_function_rate_of_convergence_1} We have $\cal Z=\lim_{\eta \rightarrow 0} \cal Z_{\eta}$, where \begin{equation} \label{Z^{epsilon}_{eta}} \cal Z_{\eta} \mathrel{\vcenter{\baselineskip0.65ex \lineskiplimit0pt \hbox{.}\hbox{.}}}= \int \mu_{\cal C_{\eta}}(\r d \sigma)\,\r e^{\frac{(\tau^\epsilon)^2}{2}-\frac{\r i \tau^\epsilon[\sigma]}{\nu}}\,\r e^{F_2(\sigma)}\,, \end{equation} for \begin{multline} \label{F_2} F_2(\sigma) = - \sum_{\mathbf{r} \in (\nu \mathbb{N})^3} \frac{\ind{\abs{\mathbf{r}} > 0} \, \r e^{-\kappa \abs{\mathbf{r}}}}{\abs{\mathbf{r}}} \int_{[0,\nu]^3} \r d \mathbf{\tau} \int \r d \mathbf{x} \, \sigma(\tau_2, x_2) \, \sigma(\tau_3, x_3) \\ \times \int \bb W_{x_1, x_3}^{\tau_1 + r_3, \tau_3}(\r d \omega_3) \, \bb W_{x_3, x_2}^{\tau_3 + r_2, \tau_2}(\r d \omega_2) \, \bb W_{x_2, x_1}^{\tau_2 + r_1, \tau_1}(\r d \omega_1) \, \r e^{\r i \int \r d s \, \sigma([s]_\nu, \omega_1(s))}\,, \end{multline} which satisfies \begin{equation} \label{Re_F2} \re F_2 \leq 0\,. \end{equation} In \eqref{Z^{epsilon}_{eta}}, we write \begin{equation} \label{bracket_sigma} [\sigma] \mathrel{\vcenter{\baselineskip0.65ex \lineskiplimit0pt \hbox{.}\hbox{.}}}= \int_{0}^{\nu} \r d \tau \int \r d x\, \sigma(\tau,x) \end{equation} and in \eqref{F_2}, we write \begin{equation} \label{bracket_t} [t]_{\nu} \mathrel{\vcenter{\baselineskip0.65ex \lineskiplimit0pt \hbox{.}\hbox{.}}}= (t \,\mathrm{mod}\,\nu) \in [0,\nu)\,. \end{equation} \end{lemma} We now explain how to obtain the functional integral representation for the classical partition function in our setting. We define $\mu_{v^{\epsilon}}(\r d \xi)$ to be the real Gaussian measure with covariance \begin{equation} \label{Gaussian_measure_classical} \int \mu_{v^{\epsilon}}(\r d \xi) \,\xi(x)\,\xi(\tilde x)=v^{\epsilon}(x-\tilde x)\,. \end{equation} Since $v \in C_c^{\infty}(\mathbb{R}^d)$, under the law $\mu_{v^{\epsilon}}$, $\xi$ is almost surely a smooth periodic function on $\Lambda$. 
By applying similar arguments as in \cite[Appendix B]{FKSS_2020}, and recalling \cite[Proposition 4.1]{FKSS_2020}, \cite[Lemma 5.4]{FKSS_2020}, we obtain the following representation. \begin{lemma} \label{Partition_function_rate_of_convergence_2} We have \begin{equation} \label{zeta^{epsilon}} \zeta^{W^\epsilon} = \int \mu_{v^{\epsilon}}(\r d \xi)\,\r e^{\frac{(\tau^\epsilon)^2}{2}-\r i \tau^\epsilon \langle \xi,1\rangle_{L^2}}\,\r e^{f_2(\xi)}\,, \end{equation} for \begin{equation} \label{f_2} f_2(\xi)=-\int_{[0,\infty)^{3}} \r d \mathbf{r}\, \frac{\r e^{-\kappa|\mathbf{r}|}}{|\mathbf{r}|}\,\int\r d \mathbf{x}\,\xi(x_2)\,\xi(x_3) \int \mathbb{W}_{x_{1},x_{3}}^{r_{3},0}(\r d \omega_{3})\,\mathbb{W}_{x_{3},x_{2}}^{r_{2},0}(\r d \omega_{2})\,\mathbb{W}_{x_{2},x_{1}}^{r_{1},0}(\r d \omega_{1})\,\r e^{\r i \int \r d s\,\xi(\omega_{1}(s))}\,, \end{equation} which satisfies \begin{equation} \label{Re_f2} \re f_2 \leq 0\,. \end{equation} Note that in \eqref{zeta^{epsilon}}, $\langle \xi,1 \rangle_{L^2}=\int \r d x\,\xi(x)$ denotes the $L^2$ inner product. \end{lemma} Recalling \eqref{delta_nu_integral}, let us note the following useful result. \begin{lemma} \label{Lemma 5.2'} If $\sigma=\sigma(\tau,x)$ has law $\mu_{\cal C_{\eta}}$ with covariance \eqref{Gaussian_measure_quantum}, then its time average \begin{equation} \label{sigma_time_average} \langle \sigma \rangle (x) \mathrel{\vcenter{\baselineskip0.65ex \lineskiplimit0pt \hbox{.}\hbox{.}}}= \frac{1}{\nu}\,\int_{0}^{\nu} \r d \tau\, \sigma(\tau,x) \end{equation} has law $\mu_{v^{\epsilon}}$ with covariance \eqref{Gaussian_measure_classical}. \end{lemma} We observe that \begin{equation} \label{sigma_time_average_observation} \big \langle \langle \sigma \rangle,1 \big \rangle_{L^2}=\frac{1}{\nu}\,[\sigma]\,, \end{equation} for $[\sigma]$ as in \eqref{bracket_sigma}.
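For the reader's convenience, we record the covariance computation behind Lemma \ref{Lemma 5.2'}. Since $\langle \sigma \rangle$ is a linear functional of the Gaussian field $\sigma$, it is itself Gaussian with mean zero, and by \eqref{Gaussian_measure_quantum}, Fubini's theorem, the $\nu$-periodicity of $\delta_{\eta}$, and the normalization \eqref{delta_nu_integral}, we have \begin{equation*} \int \mu_{\cal C_{\eta}}(\r d \sigma)\,\langle \sigma \rangle(x)\,\langle \sigma \rangle(\tilde x)=\frac{1}{\nu^2}\,\int_{0}^{\nu} \r d \tau\,\int_{0}^{\nu} \r d \tilde{\tau}\;\nu\,\delta_{\eta}(\tau-\tilde{\tau})\,v^{\epsilon}(x-\tilde x)=v^{\epsilon}(x-\tilde x)\,, \end{equation*} which is precisely the covariance \eqref{Gaussian_measure_classical}.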
Therefore, we can rewrite \eqref{zeta^{epsilon}} as \begin{equation} \label{zeta^{epsilon}_{eta}_2} \zeta^{W^\epsilon} = \int \mu_{\cal C_{\eta}}(\r d \sigma)\,\r e^{\frac{(\tau^\epsilon)^2}{2}-\frac{\r i \tau^\epsilon [\sigma]}{\nu}}\,\r e^{f_2(\langle \sigma \rangle)}\,. \end{equation} Combining \eqref{Z^{epsilon}_{eta}} and \eqref{zeta^{epsilon}_{eta}_2} and using the Cauchy-Schwarz inequality, we obtain the following result. \begin{lemma} \label{Partition_function_rate_of_convergence_3} Uniformly in $\eta>0$, we have \begin{equation*} \Bigl|\cal Z_{\eta}-\zeta^{W^\epsilon}\Bigr|\leq \r e^{\frac{(\tau^\epsilon)^2}{2}}\, \biggl(\int\mu_{\cal C_{\eta}}(\r d \sigma)\bigl|F_2(\sigma)-f_2(\langle \sigma \rangle)\bigr|^2\biggr)^{1/2}\,. \end{equation*} \end{lemma} In order to simplify notation in the sequel, we define the function $\Theta \colon \mathbb{R}^+ \rightarrow \mathbb{R}$ by \begin{equation} \label{Theta_function} \Theta(t) \mathrel{\vcenter{\baselineskip0.65ex \lineskiplimit0pt \hbox{.}\hbox{.}}}= \begin{cases} \sqrt{t} & \text{if } d=2 \\ \sqrt{t}\log t^{-1}& \text{if } d=3\,. \end{cases} \end{equation} Note that the upper bound in Proposition \ref{Partition_function_rate_of_convergence} can then be rewritten as $\r e^{C_1 \chi(\epsilon)^2}\Theta^{1/2}(\nu)$, with $\chi$ given by \eqref{sigma(epsilon)}. We prove the following two estimates, which correspond to quantitative versions of \cite[Lemma 5.9]{FKSS_2020} and \cite[Lemma 5.10]{FKSS_2020} respectively, and in turn let us use the bound from Lemma \ref{Partition_function_rate_of_convergence_3}. \begin{lemma} \label{Partition_function_rate_of_convergence_4} Uniformly in $\eta>0$, we have \begin{equation*} \biggl|\int\mu_{\cal C_{\eta}}(\r d \sigma)\,\overline{F_2(\sigma)}F_2(\sigma)-\int\mu_{v^{\epsilon}}(\r d \xi)\,\overline{f_2(\xi)}f_2(\xi)\biggr|\lesssim_{\kappa,v} \frac{\Theta(\nu)}{\epsilon^{5d+1}}\,.
\end{equation*} \end{lemma} \begin{lemma} \label{Partition_function_rate_of_convergence_5} Uniformly in $\eta>0$, we have \begin{equation*} \biggl|\int\mu_{\cal C_{\eta}}(\r d \sigma)\,\overline{f_2(\langle \sigma \rangle)}F_2(\sigma)-\int\mu_{v^{\epsilon}}(\r d \xi)\,\overline{f_2(\xi)}f_2(\xi)\biggr|\lesssim_{\kappa,v}\frac{\Theta(\nu)}{\epsilon^{5d+1}}\,. \end{equation*} \end{lemma} Assuming Lemmas \ref{Partition_function_rate_of_convergence_4} and \ref{Partition_function_rate_of_convergence_5} for now, we can prove the convergence rate given in Proposition \ref{Partition_function_rate_of_convergence}. \begin{proof}[Proof of Proposition \ref{Partition_function_rate_of_convergence}] The claim follows from Lemmas \ref{Partition_function_rate_of_convergence_1}--\ref{Partition_function_rate_of_convergence_5} by recalling \eqref{tau(epsilon)_bound}. \end{proof} The rest of this section is devoted to showing Lemmas \ref{Partition_function_rate_of_convergence_4} and \ref{Partition_function_rate_of_convergence_5}. Some of the steps in the discussion that follows are worked out in Appendix \ref{Appendix C}. Before proceeding with the proofs, we need to introduce some notation and definitions. Throughout we use the convention that, given a path $\omega \in \Omega^{\tau,\tilde{\tau}}$ and a function $f$, we write $\int \r d s f(\omega(s)) \equiv \int_{\tilde{\tau}}^{\tau} \r d s\,f(\omega(s))$. We define the following quantities that will allow us to rewrite the terms that arise in the sequel. \begin{definition}[Classical interactions] \label{V_classical} Let $x ,\tilde x \in \Lambda$ and $\omega \in \Omega^{\tau_1,\tilde \tau_1}, \tilde \omega \in \Omega^{\tau_2, \tilde \tau_2} $ be continuous paths. 
We then define the \emph{point-point interaction} \begin{equation*} (\mathbb{V}^{\epsilon})_{x,\tilde x} \mathrel{\vcenter{\baselineskip0.65ex \lineskiplimit0pt \hbox{.}\hbox{.}}}= \int \mu_{v^{\epsilon}}(\r d \xi)\,\xi(x)\,\xi(\tilde x)=v^{\epsilon}(x-\tilde x)\,, \end{equation*} the \emph{point-path interaction} \begin{equation*} (\mathbb{V}^{\epsilon})_{x}(\omega) \mathrel{\vcenter{\baselineskip0.65ex \lineskiplimit0pt \hbox{.}\hbox{.}}}= \int \mu_{v^{\epsilon}}(\r d \xi)\,\xi(x)\,\int \r d s\,\xi(\omega(s))=\int \r d s\,v^{\epsilon}(x-\omega(s))\,, \end{equation*} and the \emph{path-path interaction} \begin{equation*} \mathbb{V}^{\epsilon}(\omega,\tilde \omega) \mathrel{\vcenter{\baselineskip0.65ex \lineskiplimit0pt \hbox{.}\hbox{.}}}= \int \mu_{v^{\epsilon}}(\r d \xi)\,\int \r d s\,\xi(\omega(s))\,\int \r d \tilde{s}\,\xi(\tilde{\omega}(\tilde{s}))=\int \r d s\,\int \r d \tilde{s}\,v^{\epsilon}(\omega(s)-\tilde{\omega}(\tilde{s}))\,. \end{equation*} \end{definition} In what follows, we use the notation $x_{i,0} \equiv x_i,\, x_{i,1} \equiv \tilde{x}_i$ for $i=2,3$ and we write \begin{equation} \label{set_A} A \mathrel{\vcenter{\baselineskip0.65ex \lineskiplimit0pt \hbox{.}\hbox{.}}}= \{2,3\} \times \{0,1\}\,.
\end{equation} Arguing analogously as for \cite[(5.8)--(5.9)]{FKSS_2020}, we get that \begin{equation} \label{5.8'} \int \mu_{v^{\epsilon}}(\r d \xi)\,\overline{f_2(\xi)}\,f_2(\xi)=\int_{[0,\infty)^3} \r d \mathbf{r} \,\frac{\r e^{-\kappa |\mathbf{r}|}}{|\mathbf{r}|}\,\int_{[0,\infty)^3} \r d \mathbf{\tilde r}\, \frac{\r e^{-\kappa |\mathbf{\tilde r}|}}{|\mathbf{\tilde r}|}\,I^{\epsilon}(\f r, \tilde{\f r})\,, \end{equation} where \begin{multline} \label{5.9'} I^{\epsilon}(\f r, \tilde{\f r}) \mathrel{\vcenter{\baselineskip0.65ex \lineskiplimit0pt \hbox{.}\hbox{.}}}= \int \r d \mathbf{x} \, \int \r d \mathbf{\tilde x}\,\int \mathbb{W}^{r_3,0}_{x_1,x_3}(\r d \omega_3)\,\mathbb{W}^{r_2,0}_{x_3,x_2}(\r d \omega_2)\,\mathbb{W}^{r_1,0}_{x_2,x_1}(\r d \omega_1)\, \\ \times \mathbb{W}^{\tilde{r}_3,0}_{\tilde{x}_1,\tilde{x}_3}(\r d \tilde{\omega}_3)\,\mathbb{W}^{\tilde{r}_2,0}_{\tilde{x}_3,\tilde{x}_2}(\r d \tilde{\omega}_2)\,\mathbb{W}^{\tilde{r}_1,0}_{\tilde{x}_2,\tilde{x}_1}(\r d \tilde{\omega}_1)\, \r e^{-\frac{1}{2}\bigl(\mathbb{V}^{\epsilon}(\omega_1,\omega_1)+\mathbb{V}^{\epsilon}(\tilde \omega_1,\tilde \omega_1)-2\mathbb{V}^{\epsilon}(\omega_1,\tilde \omega_1)\bigr)}\, \\ \times \sum_{\Pi \in \fra{M}(A)} \prod_{\{a,b\} \in \Pi} (\mathbb{V}^{\epsilon})_{x_a,x_b}\,\prod_{a \in A \setminus [\Pi]} \r i \Bigl((\mathbb{V}^{\epsilon})_{x_a}(\omega_1)- (\mathbb{V}^{\epsilon})_{x_a}(\tilde{\omega}_1)\Bigr)\,. \end{multline} Here, we write $|\mathbf{r}|=r_1+r_2+r_3, |\mathbf{\tilde r}|=\tilde r_1+\tilde r_2+\tilde r_3$. Moreover, we denote by $\fra{M}(A)$ the set of partial pairings on the set $A$. \begin{definition}[Quantum interactions] \label{V_quantum} Let $(\tau,x),(\tilde{\tau},\tilde{x}) \in [0,\nu] \times \Lambda$ and let $\omega \in \Omega^{\tau_{1},\tilde{\tau}_{1}},\tilde{\omega} \in \Omega^{\tau_{2},\tilde{\tau}_{2}}$ be continuous paths. 
With $\delta_{\eta}$ given by \eqref{approximate_delta_function}, we define the \emph{point-point interaction} \begin{equation*} (\mathbb{V}_{\eta})_{x,\tilde{x}}^{\tau,\tilde{\tau}} \mathrel{\vcenter{\baselineskip0.65ex \lineskiplimit0pt \hbox{.}\hbox{.}}}= \int \mu_{\cal{C}_{\eta}}(\r d \sigma)\,\sigma(\tau,x)\,\sigma(\tilde{\tau},\tilde{x})=\nu\,\delta_{\eta}(\tau-\tilde{\tau})\,v^{\epsilon}(x-\tilde{x})\,, \end{equation*} the \emph{point-path interaction} \begin{equation*} (\mathbb{V}_{\eta})_{x}^{\tau}(\omega) \mathrel{\vcenter{\baselineskip0.65ex \lineskiplimit0pt \hbox{.}\hbox{.}}}= \int \mu_{\cal{C}_{\eta}}(\r d \sigma)\,\sigma(\tau,x)\, \int_{0}^{\nu} \r d t\,\int \r d s\,\delta(t-[s]_{\nu})\,\sigma(t,\omega(s)) =\nu\,\int \r d s\, \delta_{\eta}(\tau-[s]_{\nu})\,v^{\epsilon}(x-\omega(s))\,, \end{equation*} and the \emph{path-path interaction} \begin{multline*} \mathbb{V}_{\eta}(\omega,\tilde{\omega}) \mathrel{\vcenter{\baselineskip0.65ex \lineskiplimit0pt \hbox{.}\hbox{.}}}= \int \mu_{\cal{C}_{\eta}}(\r d \sigma)\,\int_{0}^{\nu} \r d t\,\int \r d s\, \delta(t-[s]_{\nu})\,\sigma(t,\omega(s))\,\int_{0}^{\nu}\r d \tilde{t}\,\int \r d \tilde{s}\,\delta(\tilde{t}-[\tilde{s}]_{\nu})\,\sigma(\tilde{t},\tilde{\omega}(\tilde{s})) \\ =\nu\,\int \r d s\,\int \r d \tilde{s}\, \delta_{\eta}([s]_{\nu}-[\tilde{s}]_{\nu})\,v^{\epsilon}(\omega(s)-\tilde{\omega}(\tilde{s}))\,. \end{multline*} Here, we recall \eqref{bracket_t}.
\end{definition} Arguing analogously as for \cite[(5.17)--(5.18), (5.21)]{FKSS_2020}, we have that \begin{equation} \label{5.17'} \int \mu_{\cal{C}_{\eta}}(\r d \sigma) \, \overline{F_2(\sigma)} F_2(\sigma) = \sum_{\mathbf{r} \in (\nu \mathbb{N})^3} \frac{\ind{|\mathbf{r}|>0}\,\r e^{-\kappa|\mathbf{r}|}}{|\mathbf{r}|} \sum_{\tilde{\mathbf{r}} \in (\nu \mathbb{N})^3} \frac{\ind{|\tilde{\mathbf{r}}|>0}\,\r e^{-\kappa|\tilde{\mathbf{r}}|}}{|\tilde{\mathbf{r}}|}\, J(\mathbf{r},\tilde{\mathbf{r}})\,, \end{equation} where \begin{multline} \label{5.21'} J(\mathbf{r},\tilde{\mathbf{r}}) \mathrel{\vcenter{\baselineskip0.65ex \lineskiplimit0pt \hbox{.}\hbox{.}}}= \int_{[0,\nu]^3} \r d \f \tau \,\int_{[0,\nu]^3} \r d \tilde{\f \tau}\, \int \r d \mathbf{x}\,\int\r d \tilde{\mathbf{x}}\,\int \mathbb{W}^{\tau_1+r_3,\tau_3}_{x_1,x_3}(\r d \omega_3)\,\mathbb{W}^{\tau_3+r_2,\tau_2}_{x_3,x_2}(\r d \omega_2)\,\mathbb{W}^{\tau_2+r_1,\tau_1}_{x_2,x_1}(\r d \omega_1) \\ \times \mathbb{W}^{\tilde{\tau}_1+\tilde{r}_3,\tilde{\tau}_3}_{\tilde{x}_1,\tilde{x}_3}(\r d \tilde{\omega}_3)\,\mathbb{W}^{\tilde{\tau}_3+\tilde{r}_2,\tilde{\tau}_2}_{\tilde{x}_3,\tilde{x}_2}(\r d \tilde{\omega}_2)\,\mathbb{W}^{\tilde{\tau}_2+\tilde{r}_1,\tilde{\tau}_1}_{\tilde{x}_2,\tilde{x}_1}(\r d \tilde{\omega}_1)\, \r e^{-\frac{1}{2}\bigl(\mathbb{V}_{\eta}(\omega_1,\omega_1)+\mathbb{V}_{\eta}(\tilde \omega_1,\tilde \omega_1)-2\mathbb{V}_{\eta}(\omega_1,\tilde \omega_1)\bigr)}\, \\ \times \sum_{\Pi \in \fra{M}(A)} \prod_{\{a,b\} \in \Pi} (\mathbb{V}_{\eta})_{x_a,x_b}^{\tau_a,\tau_b}\,\prod_{a \in A \setminus [\Pi]} \r i \Bigl((\mathbb{V}_{\eta})_{x_a}^{\tau_a}(\omega_1)- (\mathbb{V}_{\eta})_{x_a}^{\tau_a}(\tilde{\omega}_1)\Bigr)\,. \end{multline} The first step in the proof of Lemma \ref{Partition_function_rate_of_convergence_4} is to compare \eqref{5.21'} with $\nu^6 I^{\epsilon}(\mathbf{r},\tilde{\mathbf{r}})$, which appears in a Riemann sum of mesh size $\nu$ for \eqref{5.8'}. 
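Let us briefly indicate the heuristic behind this comparison. For a (sufficiently regular and decaying) integrand $g$, a Riemann sum of mesh size $\nu$ satisfies \begin{equation*} \int_{[0,\infty)^3} \r d \mathbf{r}\,\int_{[0,\infty)^3} \r d \tilde{\mathbf{r}}\; g(\mathbf{r},\tilde{\mathbf{r}}) \approx \nu^6 \sum_{\mathbf{r} \in (\nu \mathbb{N})^3}\,\sum_{\tilde{\mathbf{r}} \in (\nu \mathbb{N})^3} g(\mathbf{r},\tilde{\mathbf{r}})\,, \end{equation*} so once we know that $J(\mathbf{r},\tilde{\mathbf{r}}) \approx \nu^6\, I^{\epsilon}(\mathbf{r},\tilde{\mathbf{r}})$, the right-hand side of \eqref{5.17'} is an approximation of \eqref{5.8'}. The two resulting sources of error are quantified in Lemmas \ref{J_approximation} and \ref{Riemann_sum_I} below.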
We show the following quantitative estimate. \begin{lemma}[Approximation of $J(\mathbf{r},\tilde{\mathbf{r}})$] \label{J_approximation} For all $\f r,\tilde{\f r} \in (\nu \mathbb{N})^3$ with $|\f r|, |\tilde{\f r}|>0$, we have that uniformly in $\eta>0$ \begin{equation} \label{J_approximation_claim} J(\mathbf{r},\tilde{\mathbf{r}})=\nu^6 I^{\epsilon}(\mathbf{r},\tilde{\mathbf{r}})+O\Biggl(\frac{\nu^{13/2}}{\epsilon^{5d+1}}\,\biggl(1+\frac{1}{|\f r|^{d/2}}\biggr)\,\biggl(1+\frac{1}{|\tilde{\f r}|^{d/2}}\biggr)\,(1+|\f r|+|\tilde{\f r}|)^6\Biggr)\,. \end{equation} \end{lemma} The second step in the proof of Lemma \ref{Partition_function_rate_of_convergence_4} consists in giving a quantitative estimate on the error obtained by approximating the integral \eqref{5.8'} with the above Riemann sum. To this end, we prove the following estimate. \begin{lemma}[Quantitative Riemann sum approximation for \eqref{5.8'}] \label{Riemann_sum_I} Recalling \eqref{Theta_function}, we have that uniformly in $\eta>0$ \begin{multline*} \Biggl|\int_{[0,\infty)^3} \r d \mathbf{r} \,\frac{\r e^{-\kappa |\mathbf{r}|}}{|\mathbf{r}|}\,\int_{[0,\infty)^3} \r d \mathbf{\tilde r}\, \frac{\r e^{-\kappa |\mathbf{\tilde r}|}}{|\mathbf{\tilde r}|}\,I^{\epsilon}(\f r, \tilde{\f r})-\nu^6 \sum_{\mathbf{r} \in (\nu \mathbb{N})^3}\,\frac{\ind{|\mathbf{r}|>0}\,\r e^{-\kappa |\mathbf{r}|}}{|\mathbf{r}|}\, \sum_{\mathbf{\tilde r} \in (\nu \mathbb{N})^3}\, \frac{\ind{|\mathbf{\tilde r}|>0}\,\r e^{-\kappa |\mathbf{\tilde r}|}}{|\mathbf{\tilde r}|}\,I^{\epsilon}(\f r, \tilde{\f r})\Biggr| \\ \lesssim_{\kappa,v} \frac{\Theta(\nu)}{\epsilon^{5d+1}}\,. \end{multline*} \end{lemma} We give the proofs of Lemmas \ref{J_approximation} and \ref{Riemann_sum_I} in Appendix \ref{Appendix C}. With these results, we have all of the necessary tools to prove Lemma \ref{Partition_function_rate_of_convergence_4}. 
\begin{proof}[Proof of Lemma \ref{Partition_function_rate_of_convergence_4}] The claim follows from Lemmas \ref{J_approximation} and \ref{Riemann_sum_I} by using \eqref{5.8'} and \eqref{5.17'}. \end{proof} In order to prove Lemma \ref{Partition_function_rate_of_convergence_5}, we need to make some minor modifications. Arguing analogously as for \cite[(5.31)--(5.33)]{FKSS_2020}, we have that \begin{equation} \label{5.31'} \int \mu_{\cal{C}_{\eta}}(\r d \sigma) \, \overline{f_2(\langle \sigma \rangle)} F_2(\sigma) =\frac{1}{\nu^3}\, \sum_{\mathbf{r} \in (\nu \mathbb{N})^3} \frac{\ind{|\mathbf{r}|>0}\,\r e^{-\kappa|\mathbf{r}|}}{|\mathbf{r}|} \int_{[0,\infty)^3} \r d \tilde{\mathbf{r}} \,\frac{\r e^{-\kappa|\tilde{\mathbf{r}}|}}{|\tilde{\mathbf{r}}|}\, \tilde{J}^{\epsilon}(\mathbf{r},\tilde{\mathbf{r}})\,, \end{equation} where \begin{multline} \label{5.33'} \tilde{J}^{\epsilon}(\mathbf{r},\tilde{\mathbf{r}}) \mathrel{\vcenter{\baselineskip0.65ex \lineskiplimit0pt \hbox{.}\hbox{.}}}= \int_{[0,\nu]^3} \r d \f \tau \,\int_{[0,\nu]^3} \r d \tilde{\f \tau}\, \int \r d \mathbf{x}\,\int\r d \tilde{\mathbf{x}}\,\int \mathbb{W}^{\tau_1+r_3,\tau_3}_{x_1,x_3}(\r d \omega_3)\,\mathbb{W}^{\tau_3+r_2,\tau_2}_{x_3,x_2}(\r d \omega_2)\,\mathbb{W}^{\tau_2+r_1,\tau_1}_{x_2,x_1}(\r d \omega_1) \\ \times \mathbb{W}^{\tilde{r}_3,0}_{\tilde{x}_1,\tilde{x}_3}(\r d \tilde{\omega}_3)\,\mathbb{W}^{\tilde{r}_2,0}_{\tilde{x}_3,\tilde{x}_2}(\r d \tilde{\omega}_2)\,\mathbb{W}^{\tilde{r}_1,0}_{\tilde{x}_2,\tilde{x}_1}(\r d \tilde{\omega}_1)\, \r e^{-\frac{1}{2}\bigl(\mathbb{V}_{\eta}(\omega_1,\omega_1)+\mathbb{V}^{\epsilon}(\tilde \omega_1,\tilde \omega_1)-2\mathbb{V}^{\epsilon}(\omega_1,\tilde \omega_1)\bigr)}\, \\ \times \sum_{\Pi \in \fra{M}(A)} \prod_{\{a,b\} \in \Pi} (\mathbb{V}_{\eta})_{x_a,x_b}^{\tau_a,\tau_b}\,\prod_{a \in A \setminus [\Pi]} \r i \Bigl((\mathbb{V}_{\eta})_{x_a}^{\tau_a}(\omega_1)- (\mathbb{V}^{\epsilon})_{x_a}(\tilde{\omega}_1)\Bigr)\,. 
\end{multline} The following analogues of Lemmas \ref{J_approximation} and \ref{Riemann_sum_I} hold. \begin{lemma} \label{J_approximation2} For all $\f r \in (\nu \mathbb{N})^3$ and $\tilde{\mathbf{r}} \in [0,\infty)^3$ with $|\f r|, |\tilde{\mathbf{r}}|>0$, we have that uniformly in $\eta>0$ \begin{equation*} \tilde{J}^{\epsilon}(\mathbf{r},\tilde{\mathbf{r}})=\nu^6 I^{\epsilon}(\mathbf{r},\tilde{\mathbf{r}})+O\Biggl(\frac{\nu^{13/2}}{\epsilon^{5d+1}}\,\biggl(1+\frac{1}{|\f r|^{d/2}}\biggr)\,\biggl(1+\frac{1}{|\tilde{\f r}|^{d/2}}\biggr)\,(1+|\f r|+|\tilde{\f r}|)^6\Biggr)\,. \end{equation*} \end{lemma} \begin{lemma} \label{Riemann_sum_I2} Recalling \eqref{Theta_function}, we have that uniformly in $\eta>0$ \begin{multline*} \Biggl|\int_{[0,\infty)^3} \r d \mathbf{r} \,\frac{\r e^{-\kappa |\mathbf{r}|}}{|\mathbf{r}|}\,\int_{[0,\infty)^3} \r d \mathbf{\tilde r}\, \frac{\r e^{-\kappa |\mathbf{\tilde r}|}}{|\mathbf{\tilde r}|}\,I^{\epsilon}(\f r, \tilde{\f r})-\nu^3 \sum_{\mathbf{r} \in (\nu \mathbb{N})^3}\,\frac{\ind{|\mathbf{r}|>0}\,\r e^{-\kappa |\mathbf{r}|}}{|\mathbf{r}|}\, \int_{[0,\infty)^3}\r d \mathbf{\tilde r}\, \frac{\r e^{-\kappa |\mathbf{\tilde r}|}}{|\mathbf{\tilde r}|}\,I^{\epsilon}(\f r, \tilde{\f r})\Biggr| \\ \lesssim_{\kappa,v} \frac{\Theta(\nu)}{\epsilon^{5d+1}}\,. \end{multline*} \end{lemma} The proofs of Lemmas \ref{J_approximation2} and \ref{Riemann_sum_I2} are very similar to those of Lemmas \ref{J_approximation} and \ref{Riemann_sum_I}; see Appendix \ref{Appendix C}. \begin{proof}[Proof of Lemma \ref{Partition_function_rate_of_convergence_5}] The claim follows from Lemmas \ref{J_approximation2} and \ref{Riemann_sum_I2} by using \eqref{5.8'} and \eqref{5.31'}. \end{proof} \subsection{Correlation functions} We note the following analogues of Lemmas \ref{Partition_function_rate_of_convergence_1} and \ref{Partition_function_rate_of_convergence_2} that give us functional integral representations for \eqref{hat_Gamma_p_2} and \eqref{hat_gamma_p_2} respectively.
\begin{lemma} \label{hat_Gamma_fr} For all $p \in \mathbb{N}$, we have that as $\eta \rightarrow 0$, $\widehat{\Gamma}_{p,\eta}\overset{C}{\longrightarrow} \widehat{\Gamma}_{p}$, where $\widehat{\Gamma}_{p,\eta}$ satisfies \begin{equation} \label{Gamma_hat_quantum} \nu^p \, \widehat{\Gamma}_{p,\eta}=\frac{p!}{\cal Z_{\eta}}P_p \,Q_{p,\eta} \end{equation} for \begin{equation} \label{Q_quantum} (Q_{p,\eta})_{\f x,\tilde{\f x}} \mathrel{\vcenter{\baselineskip0.65ex \lineskiplimit0pt \hbox{.}\hbox{.}}}= \int \mu_{\cal C_{\eta}} (\r d \sigma)\, \r e^{\frac{(\tau^\epsilon)^2}{2}-\frac{\r i \tau^\epsilon [\sigma]}{\nu}}\,\r e^{F_2(\sigma)}\, \prod_{i=1}^{p} \Biggl[\nu\sum_{r_i \in \nu \mathbb{N}^*} \r e^{-\kappa r_i} \, \int \mathbb{W}^{r_i,0}_{x_i,\tilde{x}_i}(\r d \omega_i)\, \Bigl(\r e^{\r i \int_{0}^{r_i} \r d t\, \sigma([t]_{\nu},\omega_i(t))}-1\Bigr)\Biggr]\,. \end{equation} Here, $F_2$ is given as in \eqref{F_2}. \end{lemma} \begin{lemma} \label{hat_gamma_fr} For all $p \in \mathbb{N}$, we have that \begin{equation} \label{gamma_hat_classical} \widehat{\gamma}_{p}^{W^\epsilon}=\frac{p!}{\zeta^{W^\epsilon}}P_p \,q_{p}^{\epsilon} \end{equation} for \begin{equation} \label{q_classical} (q_{p}^{\epsilon})_{\f x,\tilde{\f x}} \mathrel{\vcenter{\baselineskip0.65ex \lineskiplimit0pt \hbox{.}\hbox{.}}}= \int \mu_{v^{\epsilon}} (\r d \xi)\, \r e^{\frac{(\tau^\epsilon)^2}{2}-\r i \tau^\epsilon \langle \xi,1 \rangle_{L^2}}\,\r e^{f_2(\xi)}\, \prod_{i=1}^{p} \Biggl[\int_{0}^{\infty} \r d r_i\,\r e^{-\kappa r_i} \, \int \mathbb{W}^{r_i,0}_{x_i,\tilde{x}_i}(\r d \omega_i)\, \Bigl(\r e^{\r i \int_{0}^{r_i} \r d t\, \xi(\omega_i(t))}-1\Bigr)\Biggr]\,. \end{equation} Here, $f_2$ is given as in \eqref{f_2}. \end{lemma} Lemma \ref{hat_Gamma_fr} follows from \eqref{H^epsilon_2}, \eqref{H^epsilon_3}, \cite[Appendix B]{FKSS_2020}, \cite[Proposition 3.15]{FKSS_2020}, combined with \cite[(5.36)--(5.37)]{FKSS_2020}.
Furthermore, in order to deduce Lemma \ref{hat_gamma_fr}, we argue as in \cite[Appendix B]{FKSS_2020}, \cite[Proposition 4.3]{FKSS_2020} and \cite[(5.38)]{FKSS_2020}. We omit the details. Recalling \eqref{sigma(epsilon)}, we now compare the quantities \eqref{Q_quantum} and \eqref{q_classical}. \begin{lemma} \label{Q_and_q_estimates} There exists $c_3>0$ depending on $\kappa,v$ such that the following results hold for all $p \in \mathbb{N}$. \begin{itemize} \item[(i)] We have \begin{equation*} \|Q_{p,\eta}\|_{C} \leq C_{\kappa,v}^p \, \r e^{c_3 \chi(\epsilon)^2}\, \biggl(\frac{1}{\epsilon^d}\biggr)^{p/2}\,. \end{equation*} \item[(ii)] For $\theta(d,p)$ as in \eqref{theta_definition}, we have uniformly in $\eta>0$ \begin{equation} \label{Q_and_q_estimates_ii} \|Q_{p,\eta}-q^{\epsilon}_{p}\|_{C} \leq C_{\kappa,v}^p \, \r e^{c_3 \chi(\epsilon)^2}\, \Biggl(\biggl(\frac{1}{\epsilon^d}\biggr)^{p/2}+\biggl(\frac{1}{\epsilon^{5d+1}}\biggr)^{1/2}\Biggr)\,\nu^{\theta(d,p)}\,. \end{equation} \end{itemize} \end{lemma} Furthermore, we show the following lower bound on the classical and quantum relative partition functions. \begin{lemma}[Lower bound on the relative partition function] \label{lower_bound} The following estimates hold for some constant $c_4>0$ depending on $\kappa$. \begin{itemize} \item[(i)] $\zeta^{W^{\epsilon}} \geq \exp[-c_4 \chi(\epsilon)^2]$. \item[(ii)] $\cal Z \geq \exp[-c_4 \chi(\epsilon)^2]$. \end{itemize} \end{lemma} We prove Lemmas \ref{Q_and_q_estimates} and \ref{lower_bound} in Appendix \ref{Appendix C}. Using Lemmas \ref{Q_and_q_estimates} and \ref{lower_bound}, we now prove Proposition \ref{Correlation_functions_rate_of_convergence}. 
\begin{proof}[Proof of Proposition \ref{Correlation_functions_rate_of_convergence}] By Lemmas \ref{Partition_function_rate_of_convergence_1}, \ref{hat_Gamma_fr}, and \ref{hat_gamma_fr} it suffices to estimate \begin{equation} \label{ratio_estimate} \lim_{\eta \rightarrow 0}\, \biggl\|\frac{Q_{p,\eta}}{\cal Z_{\eta}}-\frac{q^{\epsilon}_{p}}{\zeta^{W^\epsilon}}\biggr\|_{C} \leq \frac{\bigl|\cal Z-\zeta^{W^\epsilon}\bigr|}{\cal Z\,\zeta^{W^\epsilon}}\,\lim_{\eta \rightarrow 0}\,\|Q_{p,\eta}\|_{C}+\frac{1}{\zeta^{W^\epsilon}}\,\lim_{\eta \rightarrow 0}\,\|Q_{p,\eta}-q^{\epsilon}_{p}\|_{C}\,. \end{equation} The claim now follows from \eqref{ratio_estimate} by using Proposition \ref{Partition_function_rate_of_convergence}, Lemma \ref{Q_and_q_estimates} (i) and Lemma \ref{lower_bound} to estimate the first term and Lemma \ref{Q_and_q_estimates} (ii) combined with Lemma \ref{lower_bound} (i) to estimate the second term. Throughout we recall \eqref{epsilon_lower_bound}, and we obtain the claim if we take $C_2>C_1+c_3+2c_4$. \end{proof} \subsection{The mean-field limit for unbounded nonlocal interactions in dimensions $d=2,3$} \label{Extension of results to unbounded nonlocal interactions} We conclude this section by using the techniques developed above to extend the mean-field limit of \cite{LNR3, FKSS_2020}, with nonlocal interaction, from bounded interaction potentials to unbounded interaction potentials. Our assumptions on the potential are the same as in the seminal work \cite{bourgain1997invariant}. Previously, the mean-field limit with unbounded interaction potentials was considered in \cite{sohinger2019microscopic}, however with a modified, regularized, quantum many-body state instead of the grand canonical state \eqref{gc_density}. We remark that the results in \cite{bourgain1997invariant} are originally stated in a setting that does not assume any positivity of the interaction, and hence require a truncation in the Wick-ordered mass of the field. 
However, when restricted to positive (defocusing) interactions, the truncation can be removed. \begin{assumption} \label{v_assumption_L^q} In the classical setting, we consider $v \in L^q(\Lambda)$ which is even, real-valued, and of positive type, such that \begin{equation} \label{v_assumption_L^q_eq} \begin{cases} q>1 & \text{if } d=2 \\ q>3 & \text{if } d=3\,. \end{cases} \end{equation} \end{assumption} Note that, in terms of $L^q$ integrability, \eqref{v_assumption_L^q_eq} is the optimal range for $q$. We refer the reader to \cite{bourgain1997invariant} and \cite[Section 1.4]{sohinger2019microscopic} for a further discussion. In particular, for $d = 2$ we can take $v$ to be the Coulomb potential. With $v$ as above, we study the interacting field theory \eqref{field theory}, where now \begin{equation} \label{zeta_nonlocal} V=\frac{1}{2} \int \r d x\,\r d y\, \wick{|\phi(x)|^2}\,v(x-y)\,\wick{|\phi(y)|^2}\,. \end{equation} The interaction $V$ is rigorously defined by using a frequency truncation, as in Section \ref{sec:classical_field}. We refer the reader to \cite[Lemma 1.4]{sohinger2019microscopic} for a precise summary. We also make the appropriate modifications in the definition of the correlation functions. Analogously to \eqref{approximate_delta_function}, we consider $\psi \in C_c^{\infty}(\mathbb{R}^d)$ even, nonnegative, and satisfying $\psi(0)=1$, and for $\epsilon>0$ we define $\delta_{\epsilon,\Lambda} \colon \Lambda \rightarrow \mathbb{C}$ by \begin{equation} \label{delta_{epsilon,lambda}} \delta_{\epsilon,\Lambda}(x) \mathrel{\vcenter{\baselineskip0.65ex \lineskiplimit0pt \hbox{.}\hbox{.}}}= \frac{1}{\epsilon^d} \sum_{y \in \mathbb{Z}^d} (\fra F^{-1} \psi) \biggl(\frac{x-y}{\epsilon}\biggr)\,, \end{equation} where $\fra F$ denotes Fourier transform (see \eqref{Fourier}).
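In analogy with \eqref{delta_nu_integral}, the normalization $\psi(0)=1$ ensures that $\delta_{\epsilon,\Lambda}$ is an approximate identity on $\Lambda$: unfolding the periodization and changing variables, with the conventions of \eqref{Fourier} we find \begin{equation*} \int_{\Lambda} \r d x\, \delta_{\epsilon,\Lambda}(x)=\int_{\mathbb{R}^d} \r d x\, \frac{1}{\epsilon^d}\,(\fra F^{-1} \psi)\Bigl(\frac{x}{\epsilon}\Bigr)=\fra F\bigl(\fra F^{-1} \psi\bigr)(0)=\psi(0)=1\,. \end{equation*}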
With notation as in \eqref{delta_{epsilon,lambda}} and $v$ as in Assumption \ref{v_assumption_L^q}, we now write \begin{equation} \label{v_regularized} v^{\epsilon} \mathrel{\vcenter{\baselineskip0.65ex \lineskiplimit0pt \hbox{.}\hbox{.}}}= v*\delta_{\epsilon,\Lambda} \in C^{\infty}(\Lambda)\,. \end{equation} When working in the quantum setting, we hence set $\tau^\epsilon = 0$ and $E^{\epsilon}=0$ in \eqref{Hamiltonian_H}. In this section, instead of \eqref{epsilon_lower_bound}, we take \begin{equation} \label{epsilon_bound_nonlocal} \epsilon(\nu) \gtrsim \frac{1}{\log \nu^{-1}}\,. \end{equation} \begin{theorem} \label{theorem_non_local} Let $d \leq 3$. With $V$ as in \eqref{zeta_nonlocal}, $v$ as in Assumption \ref{v_assumption_L^q}, and $\epsilon>0$ as in \eqref{epsilon_bound_nonlocal}, we have the following results as $\epsilon, \nu \to 0$. \begin{itemize} \item[(i)] $\cal Z \to \zeta^V$. \item[(ii)] $\nu^p \, \widehat \Gamma_p \overset{C}{\longrightarrow} \widehat \gamma_p^{V}$. \end{itemize} \end{theorem} \begin{proof} We first prove (i). If $v \in L^q(\Lambda)$, we use \eqref{v_regularized} and Young's inequality, to deduce that for $\epsilon>0$ sufficiently small, we have \begin{equation} \label{v_regularized_bound} \|v^{\epsilon}\|_{L^{\infty}(\Lambda)} \lesssim_{\psi,q,v} \epsilon^{-\frac{d}{q}}\,,\qquad \|\nabla v^{\epsilon}\|_{L^{\infty}(\Lambda)} \lesssim_{\psi,q,v} \epsilon^{-\frac{d}{q}-1}\,. \end{equation} Furthermore, $v^{\epsilon}$ is of positive type and has compactly supported Fourier transform. We use the functional integral setup as before and start from appropriate analogues of Lemmas \ref{Partition_function_rate_of_convergence_1} and \ref{Partition_function_rate_of_convergence_2}. 
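Let us spell out the application of Young's inequality behind the first bound in \eqref{v_regularized_bound}; the gradient bound is analogous, with the derivative falling on the mollifier. With $q'$ the H\"older conjugate of $q$, and using that $\fra F^{-1}\psi$ is a Schwartz function, so that the periodization \eqref{delta_{epsilon,lambda}} satisfies $\|\delta_{\epsilon,\Lambda}\|_{L^{q'}(\Lambda)} \lesssim_{\psi} \epsilon^{-d}\,\epsilon^{d/q'}=\epsilon^{-d/q}$ for $\epsilon>0$ sufficiently small, we obtain \begin{equation*} \|v^{\epsilon}\|_{L^{\infty}(\Lambda)}=\|v*\delta_{\epsilon,\Lambda}\|_{L^{\infty}(\Lambda)} \leq \|v\|_{L^{q}(\Lambda)}\,\|\delta_{\epsilon,\Lambda}\|_{L^{q'}(\Lambda)} \lesssim_{\psi,q,v} \epsilon^{-\frac{d}{q}}\,. \end{equation*}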
Using \eqref{v_regularized_bound} instead of Lemma \ref{v^{epsilon}_properties}, and arguing as for Lemmas \ref{Partition_function_rate_of_convergence_3}--\ref{Partition_function_rate_of_convergence_5}, we deduce that for $\epsilon>0$ sufficiently small and all $\nu>0$, we have \begin{equation} \label{nonlocal_bound_1} \Bigl|\cal Z-\zeta^{V^\epsilon}\Bigr| \lesssim \Biggl(\frac{\Theta(\nu)}{\epsilon^{\frac{5d}{q}+1}}\Biggr)^{1/2}\,, \end{equation} where $V^{\epsilon}$ denotes the interaction as in \eqref{zeta_nonlocal} with $v$ replaced by $v^{\epsilon}$, and $\Theta(\nu)$ is as in \eqref{Theta_function}. By \eqref{epsilon_bound_nonlocal}, this is an acceptable upper bound and we reduce the claim to showing that \begin{equation} \label{zeta_epsilon_convergence} \lim_{\epsilon \rightarrow 0} \zeta^{V^\epsilon}=\zeta^V\,. \end{equation} Using the assumption that $v,v^{\epsilon}$ are of positive type and the Cauchy-Schwarz inequality, we obtain \begin{equation} \label{zeta_difference} |\zeta^{V^\epsilon}-\zeta^{V}| \leq \|V^{\epsilon}-V\|_{L^2(\P)}\,. \end{equation} We now show that \begin{equation} \label{nonlocal_bound_2} \|V^{\epsilon}-V\|_{L^2(\P)} \lesssim \|v^{\epsilon}-v\|_{L^q}\,. \end{equation} By \eqref{v_regularized}, we have that \eqref{nonlocal_bound_2} indeed implies \eqref{zeta_epsilon_convergence}. By using Fubini's theorem followed by Wick's theorem, we can rewrite the right-hand side of \eqref{zeta_difference} as \begin{equation} \label{nonlocal_bound_3} \frac{1}{2} \, \biggl(\int \r d x\, \r d \tilde x\, \r d y\, \r d \tilde y \,F^{\epsilon}(x,\tilde x,y, \tilde y) \biggr)^{1/2}\,, \end{equation} where \begin{equation} \label{F^epsilon} F^{\epsilon}(x,\tilde x,y, \tilde y) \mathrel{\vcenter{\baselineskip0.65ex \lineskiplimit0pt \hbox{.}\hbox{.}}}= \sum_{\Pi \in \cal M_c^{\mathrm{Wick}}(B)} \prod_{\{i,j\} \in \Pi} G(x_i-x_j)\, \bigl[v^{\epsilon}(x-y)-v(x-y)\bigr]\,\bigl[v^{\epsilon}(\tilde x- \tilde y)-v(\tilde x- \tilde y)\bigr]\,.
\end{equation} In \eqref{F^epsilon}, we let $B \mathrel{\vcenter{\baselineskip0.65ex \lineskiplimit0pt \hbox{.}\hbox{.}}}= \{1,2\} \times \{1,2\} \times \{+,-\}$. Furthermore, we use the variables $x_{(1,1,\pm)} \equiv x,\, x_{(2,1,\pm)} \equiv \tilde x,\, x_{(1,2,\pm)} \equiv y,\, x_{(2,2,\pm)} \equiv \tilde y$ and denote by $\cal M_c^{\mathrm{Wick}}(B)$ the set of all complete pairings $\Pi$ of $B$ where $\{(a,b,+),(a,b,-)\} \notin \Pi$ for all $a,b \in \{1,2\}$. We note that each integration variable in \eqref{nonlocal_bound_3} appears exactly once as part of the argument of $v^{\epsilon}-v$ and exactly twice as part of an argument of a Green function. We can therefore apply H\"{o}lder's inequality and deduce \eqref{nonlocal_bound_2}. Here, we use that, uniformly in $c,d \in \Lambda$ \begin{equation} \label{Holder_estimate} \int \r d x\, |v^{\sharp}(x)|\,G(x-c)\,G(x-d) \lesssim 1\,, \end{equation} where $v^{\sharp}=v$ or $v^{\epsilon}$. For \eqref{Holder_estimate}, we use the fact that $G \in L^{2q'}(\Lambda)$ (where $q'$ denotes the H\"{o}lder conjugate of $q$). We now prove (ii). With notation defined analogously as in \eqref{Gamma_hat_quantum}--\eqref{q_classical}, we use \eqref{v_regularized_bound} and argue as in the proof of Lemma \ref{Q_and_q_estimates} to deduce that for all $\epsilon>0$ and uniformly in $\eta>0$ \begin{equation} \label{nonlocal_bound_ii_1} \|Q_{p,\eta}\|_{C} \leq C_{\kappa,v}^p \,\biggl(\frac{1}{\epsilon^{d/q}}\biggr)^{p/2}\,, \quad \|Q_{p,\eta}-q^{\epsilon}_{p}\|_{C} \leq C_{\kappa,v}^p \, \Biggl(\biggl(\frac{1}{\epsilon^{d/q}}\biggr)^{p/2}+\biggl(\frac{1}{\epsilon^{5d/q+1}}\biggr)^{1/2}\Biggr)\,\nu^{\theta(d,p)}\,, \end{equation} with $\theta(d,p)$ given as in \eqref{theta_definition}. 
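For later reference, let us record how \eqref{Holder_estimate} follows. By H\"older's inequality with exponents $q,q'$, followed by the Cauchy-Schwarz inequality in $L^{q'}$, we have \begin{equation*} \int \r d x\, |v^{\sharp}(x)|\,G(x-c)\,G(x-d) \leq \|v^{\sharp}\|_{L^q(\Lambda)}\,\bigl\|G(\cdot-c)\,G(\cdot-d)\bigr\|_{L^{q'}(\Lambda)} \leq \|v^{\sharp}\|_{L^q(\Lambda)}\,\|G\|_{L^{2q'}(\Lambda)}^2 \lesssim 1\,, \end{equation*} uniformly in $c,d \in \Lambda$, where we also use that $\|v^{\epsilon}\|_{L^q(\Lambda)} \leq \|v\|_{L^q(\Lambda)}\,\|\delta_{\epsilon,\Lambda}\|_{L^1(\Lambda)} \lesssim_{\psi} 1$ by Young's inequality.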
Similarly, arguing as in the proof of Lemma \ref{lower_bound}, using Wick's theorem we have that for all $\epsilon>0$ \begin{equation} \label{nonlocal_bound_ii_2} \zeta^{V^{\epsilon}} \gtrsim 1\,,\qquad \cal Z \geq \exp \biggl[-c\biggl(1+\frac{\nu \chi(\sqrt{\nu})}{\epsilon^{d/q}}\biggr)\biggr]\,, \end{equation} with $\chi$ as in \eqref{sigma(epsilon)}. In order to obtain \eqref{nonlocal_bound_ii_2} we used \eqref{Holder_estimate}. Using \eqref{nonlocal_bound_ii_1}--\eqref{nonlocal_bound_ii_2}, arguing as in the proof of Proposition \ref{Correlation_functions_rate_of_convergence}, and recalling \eqref{epsilon_bound_nonlocal}, we deduce that \begin{equation} \label{nonlocal_bound_ii_3_A} \Bigl\|\nu^p \, \widehat{\Gamma}_{p}-\widehat{\gamma}_{p}^{V^{\epsilon}}\Bigr\|_{C} \lesssim \nu^{\alpha}\,, \end{equation} for $0<\alpha<\theta(d,p)$. Hence, we reduce to showing \begin{equation} \label{nonlocal_bound_ii_3} \lim_{\epsilon \rightarrow 0} \Bigl\|\widehat{\gamma}_{p}^{V^{\epsilon}}-\widehat{\gamma}_{p}^V\Bigr\|_{C}=0\,. \end{equation} By \eqref{nonlocal_bound_2}, we have that $\normb{\r e^{-V^{\epsilon}} - \r e^{-V}}_{L^2(\P)} \to 0$ as $\epsilon \to 0$. Therefore, we obtain \eqref{nonlocal_bound_ii_3} if we prove bounds analogous to those in Lemma \ref{lem:derivatives-convergence} (with $W^{\epsilon}$ replaced by $V^{\epsilon}$). 
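The convergence $\normb{\r e^{-V^{\epsilon}} - \r e^{-V}}_{L^2(\P)} \to 0$ used above can be spelled out as follows. Since $v$ and $v^{\epsilon}$ are of positive type, the interactions satisfy $V, V^{\epsilon} \geq 0$, and since $t \mapsto \r e^{-t}$ is $1$-Lipschitz on $[0,\infty)$, we have the pointwise bound

```latex
\begin{equation*}
\bigl| \r e^{-V^{\epsilon}} - \r e^{-V} \bigr| \leq |V^{\epsilon} - V|\,.
\end{equation*}
```

Taking $L^2(\P)$ norms and combining \eqref{nonlocal_bound_2} with \eqref{v_regularized} then yields the claim.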
Fix $0<N<\infty$ and $z \in \Lambda$, and note that by Lemma \ref{lem:L_phi} \begin{equation*} L_{N,z} (V^\epsilon - V) = \int \r d x \, \r d \tilde x \, \bigl[v^\epsilon(x - \tilde x) - v(x - \tilde x)\bigr] \, G_N(z - x) \,\phi(x)\,\wick{|\phi(\tilde x)|^2}\,, \end{equation*} and so by Lemma \ref{lem:Wick}, we deduce that \begin{multline*} \normb{L_{N,z} (V^\epsilon - V)}_{L^2(\P)}^2 = \int \r d x \, \r d \tilde x \, \r d y \, \r d \tilde y \,\bigl[v^\epsilon(x - \tilde x) - v(x - \tilde x)\bigr] \,\bigl[v^\epsilon(y - \tilde y) - v(y - \tilde y)\bigr] \,G_N(z-x)\,G_N(z-y) \\ \times \bigl[G(x-y)\,G^2(\tilde x - \tilde y) + G(x-\tilde x)\,G(\tilde x - \tilde y)\,G(y-\tilde y) + G(x - \tilde y)\, G(\tilde x-\tilde y)\, G(y-\tilde x)\bigr]\,, \end{multline*} which, by arguing analogously as for \eqref{nonlocal_bound_2}, is $\lesssim \|v^{\epsilon}-v\|_{L^q}^2$. The other terms are treated similarly. We omit the details. We hence obtain \eqref{nonlocal_bound_ii_3}. \end{proof} \begin{remark} \label{weak_operator_topology} We note that if we relax the topology of convergence in \eqref{nonlocal_bound_ii_3} (and hence in Theorem \ref{theorem_non_local} (ii)) to the weak operator topology, we can obtain the result by using the first bound in \eqref{nonlocal_bound_ii_2} and the Cauchy-Schwarz inequality, similarly as in \eqref{zeta_difference}. This applies to the case $(d,q)=1$ as well. We refer the reader to the proof of \cite[Proposition 4.4]{FKSS_2020} for details. \end{remark} \begin{remark} \label{endpoint_admissible} One can also consider $v \in L^1(\Lambda)$ which is even, real-valued, and of positive type with suitable decay of its Fourier coefficients, see \cite[(16)-(17)]{bourgain1997invariant}.
For $d=3$, the assumption in \cite{bourgain1997invariant} is that $\hat{v}(k) \leq \frac{C}{\langle k \rangle^{2+\delta}}$ for some $\delta>0$, which is covered by Theorem \ref{theorem_non_local} above by the Hausdorff-Young inequality (in the classical setting, the decay assumption was recently relaxed in \cite{deng2021invariant}). For $d=2$, the assumption in \cite{bourgain1997invariant} is that $\hat{v}(k) \leq \frac{C}{\langle k \rangle^{\delta}}$ for some $\delta>0$. Note that this corresponds to the \emph{endpoint admissible} regime in the terminology of \cite[Definition 1.2 and Section 4]{sohinger2019microscopic}, except that we do not assume pointwise nonnegativity of $v$. Here, it is possible to prove convergence of the partition function (and consequently convergence of the correlation functions in the weak operator topology, as in Remark \ref{weak_operator_topology} above). We present the details in Appendix \ref{Appendix C}. \end{remark}
\subsection{$U=0$ limit} In the case of $U=0$, we have $T_U (\mathbf{ k} , \mathbf{ p} ) \rightarrow 0$, and the unitarity relation of the physical $S$-matrix in Eq.(\ref{phyunit}) is thus reduced to the simple form \begin{equation} \sum_{k_i=\pm p_i} | s_V (\mathbf{ k} , \mathbf{ p} )|^2 =1, \label{sVoptic} \end{equation} which is exactly what we expect for two non-interacting electrons. Assuming \mbox{$(p_1 >0, p_2>0) $}, the transmission and reflection coefficients may be introduced by \begin{align} \mathcal{T} & = \left | s_V( p_1, p_2 ; p_1, p_2) \right |^2 , \nonumber \\ \mathcal{R} & = \left | s_V( -p_1, p_2 ; p_1, p_2) \right |^2 + \left | s_V( p_1,- p_2 ; p_1, p_2) \right |^2 \nonumber \\ & + \left | s_V( -p_1, -p_2 ; p_1, p_2) \right |^2. \end{align} Hence, $\mathcal{T} $ indeed describes the probability of finding both electrons in the forward direction after scattering. The complication in the reflection coefficient of two electrons is due to the fact that the two-electron wave function now has four independent plane waves, \mbox{$e^{ \pm i p_1 x_1} e^{ \pm i p_2 x_2} $}; in addition to both electrons moving in the forward direction, this creates three other scenarios: (i) \mbox{$s_V( -p_1, p_2 ; p_1, p_2) $} describes particle-2 moving forward while particle-1 is scattered backward; (ii) similarly, \mbox{$s_V( p_1,- p_2 ; p_1, p_2) $} corresponds to particle-1 moving forward while particle-2 is scattered backward; (iii) and \mbox{$s_V( -p_1, -p_2 ; p_1, p_2) $} is associated with both particles being scattered backward. \subsection{$V=0$ limit} At the other extreme limit, as \mbox{$V \rightarrow 0$}, \mbox{$s_V (\mathbf{ k} , \mathbf{ p} ) \rightarrow \delta_{\hat{k}_1, \hat{p}_1}\delta_{\hat{k}_2, \hat{p}_2}$}, \mbox{$\phi (\mathbf{ x} , \mathbf{ p}) \rightarrow e^{i \mathbf{ p} \cdot \mathbf{ x}}$}, and \begin{equation} G (\mathbf{ x}, \mathbf{ x}') \rightarrow - \frac{ m }{2} i H_0^{(1)} ( p | \mathbf{ x}- \mathbf{ x}'|).
\end{equation} The physical $S$-matrix now takes the form \begin{align} \mathcal{S} &( \theta_k , \theta_{p_0}) = \Theta (\theta_k, \theta_{p_0}) + 2 i \oint \frac{d\theta_p}{2\pi} T_U( \mathbf{ k}, \mathbf{ p})\Theta (\theta_p, \theta_{p_0}) , \end{align} where \begin{equation} T_U( \mathbf{ k}, \mathbf{ p}) = - \frac{m}{2} \sum_{\alpha,\beta=0}^{N-1} e^{- i\mathbf{ k} \cdot \mathbf{ a}_\alpha } \left [ \mathcal{D}^{-1} \right ]_{\alpha, \beta} e^{ i\mathbf{ p} \cdot \mathbf{ a}_\beta } , \end{equation} and \begin{equation} \mathcal{D}_{\alpha,\beta} = \frac{1}{U_0} \delta_{\alpha,\beta}+ \frac{m}{2} i H_0^{(1)} ( p | \mathbf{ a}_\alpha - \mathbf{ a}_\beta|). \end{equation} The diagonal matrix elements $\mathcal{D}_{\alpha,\alpha} $ present another difficulty due to the ultraviolet divergence of the Hankel function at the origin, \begin{equation} H_0^{(1)} (p r ) \stackrel{r \rightarrow 0}{\rightarrow} 1+ \frac{2 i}{\pi} \left (\gamma_E + \ln \frac{ p }{\Lambda} \right ) , \ \ \ \ \Lambda = \frac{2}{r}, \end{equation} where $\Lambda$ serves as the ultraviolet regulator. Ultimately, the physical result should not depend on the choice of regulator, and the regulator will be sent to infinity, \mbox{$\Lambda \rightarrow \infty$}. The ultraviolet divergence may be dealt with by the standard renormalization procedure \cite{Cavalcanti:1998jx,Mitra:1998vr}. The ultraviolet divergence of the Hankel function at the origin may be absorbed into the bare coupling strength $U_0$; a scale-dependent running renormalized coupling strength is hence introduced by \begin{equation} \frac{1}{U_R(\mu)} = \frac{1}{U_0} - \frac{ m}{\pi} (\gamma_E + \ln \frac{ \mu }{\Lambda}) , \end{equation} where $\mu$ stands for the renormalization scale, and $U_R(\mu)$ is the physical coupling strength measured at scale $\mu$. The diagonal matrix element of $\mathcal{D}$ is now given by \begin{equation} \mathcal{D}_{\alpha,\alpha} = \frac{m}{2} i + \frac{1}{U_R(\mu)} - \frac{m}{\pi} \ln \frac{p }{ \mu }.
\label{Dscaledep} \end{equation} The physical observable, $\mathcal{D}$, should not depend on the renormalization scale $\mu$, \begin{equation} \frac{d }{ d \mu } \mathcal{D}_{\alpha,\alpha} =0. \end{equation} This yields an equation for the running coupling strength, \begin{equation} \frac{d U_R (\mu)}{d \ln \mu} = \frac{m U_R^2(\mu)}{\pi} , \end{equation} whose solution is given by \begin{equation} \frac{1}{m U_R(\mu)} = \frac{1}{m U_R^B } - \frac{1 }{\pi} \ln \frac{\mu}{\mu_B} , \label{runUR} \end{equation} where the initial condition \mbox{$U^B_R = U_R (\mu_B) $} is the physical coupling strength measured at scale $\mu_B$. The scale dependence of $U_R(\mu)$ and the \mbox{$\frac{m}{\pi} \ln \frac{p }{ \mu }$} term in Eq.(\ref{Dscaledep}) cancel out, so ultimately the physical observable, $\mathcal{D}$, indeed does not depend on the choice of renormalization scale $\mu$: \begin{equation} \mathcal{D}_{\alpha,\alpha} = \frac{m}{2} i +\frac{1}{U_R^B} - \frac{m}{\pi} \ln \frac{p }{\mu_B } . \end{equation} For weak coupling (\mbox{$U^B_R \ll m^{-1}$}), the $\mathcal{D}$ matrix may be approximated by its diagonal elements only, \mbox{$\mathcal{D}_{\alpha,\beta} \sim \delta_{\alpha , \beta} \frac{1}{U_R^B}$}; hence, \begin{align} T_U( \mathbf{ k}, \mathbf{ p}) & \rightarrow - \frac{\frac{1}{2}}{ \frac{i}{2} + \frac{1}{m U_R^B} - \frac{1}{\pi} \ln \frac{p }{ \mu_B } } \frac{e^{i\frac{ p L\Omega}{\sqrt 2 }}}{e^{i\frac{p L\Omega}{ \sqrt 2 N}}} \frac{\sin{\frac{ p L \Omega}{\sqrt 2}}}{\sin{\frac{p L \Omega}{ \sqrt 2 N}}}, \end{align} where $\Omega\equiv \cos(\theta_p-\frac{\pi}{4})-\cos(\theta_k-\frac{\pi}{4})$. We have also assumed that all atoms are equally spaced, \mbox{$a_{\alpha} = \frac{L}{N} \alpha$}, $\alpha=0,\cdots, N-1$, where $L$ stands for the length of the crystal.
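As a quick numerical sanity check of Eq.(\ref{runUR}), the following sketch (with units in which $m=1$, and illustrative values for $U_R^B$, $\mu_B$, and $p$) verifies that the running of $U_R(\mu)$ cancels the explicit $\ln(p/\mu)$ term, so that $\mathcal{D}_{\alpha,\alpha}$ is independent of the renormalization scale $\mu$.

```python
import numpy as np

M = 1.0          # electron mass (units with m = 1)
U_R_B = 0.3      # coupling measured at the reference scale mu_B (illustrative)
MU_B = 1.0       # reference renormalization scale (illustrative)

def U_R(mu):
    # Solution of the running-coupling equation:
    # 1/(m U_R(mu)) = 1/(m U_R^B) - (1/pi) ln(mu/mu_B)
    return 1.0 / (M * (1.0 / (M * U_R_B) - np.log(mu / MU_B) / np.pi))

def D_diag(p, mu):
    # D_aa = i m/2 + 1/U_R(mu) - (m/pi) ln(p/mu)
    return 1j * M / 2 + 1.0 / U_R(mu) - (M / np.pi) * np.log(p / mu)

# The mu-dependence of 1/U_R(mu) cancels the explicit ln(p/mu) term:
p = 2.0
for mu in (0.5, 1.0, 4.0):
    print(D_diag(p, mu))   # same value for every mu
```

Each printed value equals $i/2 + 1/U_R^B - (1/\pi)\ln(p/\mu_B)$, i.e., the scale-independent expression above.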
The transmission coefficient, in the case of a sharply peaked wave packet ($\tau \rightarrow 0$), is now given by \begin{equation} \mathcal{T} \rightarrow 1-\frac{{2\tau}}{{(2\pi)}^{\frac{3}{2}}}\frac{1}{ \frac{1}{4} + (\frac{1}{m U_R^B} - \frac{1}{\pi} \ln \frac{p }{ \mu_B })^2 }\int^{2\pi}_{\frac{\pi}{2}}\frac{\sin^2{\frac{p L \Omega}{\sqrt 2}}}{\sin^2{\frac{p L \Omega}{ \sqrt 2 N}}}d\theta_k. \label{T} \end{equation} One interesting feature of the case of two interacting electrons is that, due to the $U$-type three-body interaction, the integrand in Eq.(\ref{T}), $\frac{\sin^2{\frac{ p L \Omega}{\sqrt 2}}}{\sin^2{\frac{p L \Omega}{ \sqrt 2 N}}}$, shows an interference pattern and resembles the intensity distribution of an ideal grating with $N$ slits in optics, or the resistance of one-dimensional chains in Kronig-Penney-like models (see, e.g. \cite{vg88}). In contrast, in the case of a single electron interacting with $N$ contact interactions, even in the weak coupling limit, the phase factors in the forward scattering amplitude all cancel out. The transmission coefficient for a single electron is independent of phase factors, $ \mathcal{T}=1- N^2 \frac{m^2 U_0^2}{p^2}$, and shows no interference pattern. {\it Discussion and Summary.}---In order to see the resemblance between the multiple-channel Landauer-B\"uttiker formula and the multiple-particle $S$-matrix formalism, let us consider the case of a single electron traveling in a quasi-1D wave guide along the $z$-direction. The potential barrier is placed at the center of the wave guide, and the motion of the electron in the transverse direction is confined to a narrow tube. Hence, the energy spectrum in the transverse direction is discretized, and the wave function is given by the product of a plane wave in the $z$-direction, $e^{i p_n z}$, and a bound-state wave function in the transverse direction, $ \Phi_{n} (x,y) $, where $n$ refers to the $n$-th energy state, $\epsilon_n$, in the transverse direction, and $p_n = \sqrt{2m (E-\epsilon_n)}$.
Assuming the incident electron is initially in the $n$-th eigenstate, the scattered wave function of the electron in the forward direction is given by \begin{equation} \Psi_n(\mathbf{ x},E) \rightarrow \sum_{n'} S_{n,n'} \Phi_{n'} (x,y) e^{i p_{n'} z}, \end{equation} where $ S_{n,n'}$ is the scattering $S$-matrix element between the $n$-th and $n'$-th channels, and satisfies the unitarity relation $\sum_{n'} |S_{n, n'}|^2 =1$. Since the transverse wave function, $ \Phi_{n} (x,y) $, is also well normalized according to \begin{equation} \int d x d y \Phi^*_{n'} (x,y) \Phi_{n} (x,y) = \delta_{n,n'}, \end{equation} the coefficient of the plane wave in the $z$-direction, $S_{n,n'} \Phi_{n'} (x,y)$, may still be used to describe the probability of a physical transition process. Hence, the transmission coefficient in the initial channel-$n$ may be defined as the net result of the squared coefficients, \begin{equation} \mathcal{T}_n = \int d x d y | \sum_{n'} S_{n,n'} \Phi_{n'} (x,y) |^2 = \sum_{n'} | S_{n,n'} |^2 . \end{equation} In the case of two electrons, the situation is somewhat similar: the two-electron wave function in the forward direction is now described by outgoing radial waves, $\frac{e^{ i (px -\frac{\pi}{4})}}{\sqrt{2\pi px}}$, propagating in the radial direction, and the angular-dependent physical $S$-matrix element, \begin{equation} \Phi(\mathbf{ x}, \mathbf{ p}_0) \stackrel{\theta_x \rightarrow \theta_{p_0}}{ \longrightarrow } \mathcal{S} (\theta_x, \theta_{p_0} ) \frac{e^{i(px - \frac{\pi}{4})}}{\sqrt{2\pi p x}} . \label{wavepaket} \end{equation} If each possible configuration of the momenta distributed among the particles is labelled as a single channel, then in the multiple-particle case there are infinitely many channels, and the scattering of multiple particles may be treated as a continuously distributed multiple-channel problem. The squared physical $S$-matrix element, $| \mathcal{S} (\theta_x, \theta_{p_0} )|^2$, hence describes the transition probability between channel-$\theta_x$ and channel-$\theta_{p_0} $.
The transmission coefficient in the initial channel-$\theta_{p_0}$ is thus given by the net result of all forward transitions, \begin{equation} \mathcal{T}_{ \theta_{p_0}} = \int_0^{\frac{\pi}{2}} \frac{d\theta_x}{2\pi} | \mathcal{S} (\theta_x, \theta_{p_0} ) |^2 . \end{equation} In summary, the transport properties of a few-electron system are complicated by new features arising from multiple-particle interaction effects, such as interference and diffraction. A proper approach to introducing the transmission and reflection coefficients of a few-electron system is discussed in the present work, based on the probability interpretation of the physical unitarity relation of the scattering $S$-matrix. The normalization paradox of the unitarity relation is remedied by a wave-packet description of the incident physical states. {\it Acknowledgement.}---We thank B.~Altshuler, M.~Ortu{\~n}o and E.~Cuevas for their useful comments. P.G. acknowledges partial support by the National Science Foundation under Grant No. NSF PHY-1748958.
\section{Introduction\label{sec:intro}} As one of the most important topics in document processing systems, signature verification has become an indispensable issue in modern society~\cite{dey2017signet,zheng2019ranksvm}. It plays an important role in enhancing security and privacy in various fields, such as finance, medicine, and forensics. As innumerable significant documents are signed almost every moment throughout the world, automatically examining the genuineness of signed signatures has become a crucial subject. Since misjudgment is hardly tolerable, especially in serious and formal situations such as forensic use, obtaining ``highly reliable" signatures is of great importance.\par Fig.~\ref{fig:dependentandindependent} illustrates two scenarios of signature verification. In the writer-dependent scenario~(a), it is possible to prepare verifiers specialized for individuals. In contrast, in the writer-independent scenario~(b), we can prepare only a single, universal classifier that judges whether a pair consisting of a questioned signature (i.e., a query) and a genuine reference signature was written by the same person or not. Consequently, for reliable writer-independent verification, the classifier needs (i)~to deal with various signatures of various individuals and (ii)~not to accept unreliable pairs with some confidence.\par \setlength{\belowcaptionskip}{-0.5cm} \begin{figure}[t!] \begin{center} \includegraphics[width=0.8\textwidth]{figures/intro1_cropped.pdf}\\[-4mm] \caption{Two scenarios of signature verification. `Q' and `G' are the query signature and a genuine reference signature, respectively. } \label{fig:dependentandindependent} \end{center} \medskip \begin{center} \includegraphics[width=\columnwidth]{figures/intro2_cropped.pdf}\\[-2mm] \caption{(a), (b) Two ranking methods and (c)~their corresponding ROCs.
In (a) and (b), a thick arrow is a ranking function that gives higher rank scores to positive samples (purple circles) than negative samples (pink circles). Each dotted line is an ``equi-distance'' line.} \label{fig:rankingmethods} \end{center} \end{figure} In this paper, we propose a new method to learn {\em top-rank pairs} for highly-reliable writer-independent signature verification. The proposed method is inspired by top-rank learning~\cite{li2014top,frery2017efficient,boyd2012accuracy}. Top-rank learning is a ranking task, but it differs from the standard ranking task. Fig.~\ref{fig:rankingmethods} shows how top-rank learning differs from standard learning to rank. The objective of the standard ranking task~(a) is to determine a ranking function that ranks positive samples as high above negative samples as possible. This objective is equivalent to maximizing the AUC. In contrast, top-rank learning~(b) has a different objective: to maximize the number of {\em absolute top positives}, which are highly-reliable positive samples in the sense that no negative sample has a higher rank than them. In (c), the very beginning of the ROC of top-rank learning is a vertical segment; this means that there are several positive samples that have no negative samples ranked higher than them. Consequently, top-rank learning can derive absolute top positives as highly reliable positive samples. The ratio of absolute top positives over all positives is called ``{\em pos@top}.'' \par The proposed method to learn top-rank pairs accepts a paired feature vector in order to utilize this promising property of top-rank learning for writer-independent signature verification. As shown in Fig.~\ref{fig:dependentandindependent}~(b), the writer-independent scenario is based on the pairwise evaluation between a query and a genuine reference.
To integrate this pairwise evaluation into top-rank learning, we first concatenate $g$ and $q$ into a single paired feature vector $\boldsymbol{x}=g\oplus q$, where $g$ and $q$ denote the feature vectors of a genuine reference and a query, respectively. The paired vector $\boldsymbol{x}$ is treated as positive when $q$ is written by the genuine writer and denoted as $\boldsymbol{x}^+$; similarly, when $q$ is written by a forger, $\boldsymbol{x}$ is negative and denoted as $\boldsymbol{x}^-$. \par We train the ranking function $r$ with the positive paired vectors $\{\boldsymbol{x}^+\}$ and the negative paired vectors $\{\boldsymbol{x}^-\}$ by top-rank learning. As a result, we obtain highly reliable positives as absolute top positives; more specifically, a paired signature among the absolute top positives is ``a more genuine pair'' than the most genuine-like negative pair (i.e., the most-skilled forgery), called the {\em top-ranked negative}, and is therefore highly reliable. Although being an absolute top positive is much harder than just being ranked higher, we can make a highly reliable verification by using the absolute top positives and the trained ranking function $r$.\par To prove the reliability of the proposed method in terms of pos@top, we conduct writer-independent offline signature verification experiments with two publicly-available datasets: BHSig-B and BHSig-H. We use SigNet~\cite{dey2017signet} not only as the extractor of deep feature vectors ($q$ and $g$) from individual signatures but also as a comparative method. SigNet is the current state-of-the-art model for offline signature verification. The experimental results prove that the proposed method outperforms SigNet not only in pos@top but also in other conventional evaluation metrics, such as accuracy, AUC, FAR, and FRR.\par Our contributions are summarized as follows: \begin{itemize}[itemsep=0pt,topsep=5pt] \item We propose a novel method to learn top-rank pairs.
To the best knowledge of the authors, this is the first application of top-rank learning to a writer-independent signature verification task, even though the concepts of top-rank learning and absolute top positives are particularly appropriate for highly reliable signature verification tasks. \item Experiments on two signature datasets have been conducted to evaluate the effect of the proposed method, including a comparison with SigNet. In particular, the fact that the proposed method achieves a higher pos@top proves that the trained ranking function gives a more reliable score that guarantees ``absolutely genuine'' signatures. \end{itemize} \par \section{Related Work} The signature verification task has attracted great attention from researchers since it was first proposed~\cite{impedovo2008automatic,hafemann2017offline}. Generally, signature verification is divided into online~\cite{lee1996reliable} and offline~\cite{kalera2004offline,ferrer2012robustness} settings. Online signatures offer pressure and stroke-order information that is favorable to time-series analysis methods~\cite{lai2018recurrent} like Dynamic Time Warping (DTW)~\cite{okawa2019template}. On the other hand, offline signature verification must be carried out using only image feature information~\cite{banerjee2021new}. As a result, acquiring efficacious features from offline signatures~\cite{hafemann2017offline,okawa2018bovw,ruiz2008offline} has become a highly anticipated challenge. \par In recent years, CNNs have been widely used in signature verification tasks thanks to their excellent representation learning ability~\cite{hafemann2017learning,souza2018writer}. Among CNN-based models, the Siamese network~\cite{melekhov2016siamese,guo2017learning} is one of the common choices when it comes to signature verification tasks.
Specifically, a Siamese network is composed of two identically structured CNNs with shared weights, and is particularly powerful for similarity learning, which is a preferable learning objective in signature verification. For example, Dey et al. proposed a Siamese network-based model that optimizes the distances between two inputted signatures and shows outstanding performance on several famous signature datasets~\cite{dey2017signet}. Wei et al. also employed the Siamese network and, by utilizing inverse gray information with multiple attention modules, their work showed encouraging results as well~\cite{wei2019inverse}. However, none of those approaches target revealing highly reliable genuine signatures. To acquire highly reliable genuine signatures, learning to rank~\cite{trotman2005learning,burges2005learning} is a more reasonable approach than learning to classify. This is because learning to rank allows us to rank the signatures in order of genuineness. Bipartite ranking~\cite{agarwal2005generalization} is one of the most standard learning-to-rank approaches. The goal of bipartite ranking is to find a scoring function that gives a higher value to positive samples than to negative samples. This goal corresponds to the maximization of the AUC (area under the ROC curve), and thus bipartite ranking has been used in various domains~\cite{usunier2011multiview,charoenphakdee2019learning,mehta2013efficient}. As a special form of bipartite ranking, the top-rank learning strategy~\cite{li2014top,frery2017efficient,boyd2012accuracy} possesses characteristics that are more suitable for finding absolute top positives. In contrast to standard bipartite ranking, top-rank learning aims at maximizing the absolute top positives, that is, maximizing the number of positive samples ranked higher than any negative sample. Therefore, top-rank learning is suitable for tough tasks that require high reliability (e.g., medical diagnosis~\cite{zheng2021top}).
To the best of our knowledge, this is the first application of top-rank learning to the signature verification task. TopRank CNN~\cite{zheng2021top} is a representation learning version of the conventional top-rank learning scheme, which combines the favorable characteristics of both CNNs and the top-rank learning scheme. To be more specific, when faced with entangled features in which positive samples are chaotically mixed with negative ones, conventional top-rank learning methods without representation learning capability, like TopPush~\cite{li2014top}, can hardly achieve a high pos@top. That is to say, the representation learning ability of the CNN structure makes TopRank CNN a more powerful top-rank learning method. Moreover, to avoid the over-fitting that otherwise easily occurs, TopRank CNN relaxes the max operation with a $p$-norm. Despite the superiority of ranking schemes, studies that apply ranking strategies to signature verification tasks are still in great demand. Chhabra et al. in~\cite{chhabra2019siamese} proposed a Deep Triplet Ranking CNN, aiming at ranking the input signatures in genuine-like order by minimizing the distance between genuine samples and anchors. In the same year, Zheng et al. proposed to utilize RankSVM for writer-dependent classification, to ensure the generalization performance on imbalanced data~\cite{zheng2019ranksvm}. However, even though these studies care about ranking results to some extent, no existing study has been dedicated to the absolute top genuine signatures yet. To address this issue, this work focuses on obtaining the absolute top genuine signatures, implemented by learning top-rank pairs for writer-independent offline signature verification tasks. \setlength{\belowcaptionskip}{-0.5cm} \begin{figure}[t!]
\begin{center} \includegraphics[width=\columnwidth]{figures/overall1_cropped.pdf}\\[-2mm] \caption{The overall structure of the proposed method to learn {\em top-rank pairs} for writer-independent signature verification. (a)~Feature representation of paired samples. (b)~Learning top-rank pairs with their representation. (c)~Top-rank loss function $\mathcal{L}_{\mathrm{TopRank}}$.} \label{fig:overall} \end{center} \end{figure} \section{Learning Top-Rank Pairs} Fig.~\ref{fig:overall} shows the overview of the proposed method to learn top-rank pairs for writer-independent offline signature verification. The proposed method consists of two steps: a representation learning step and a top-rank learning step. Fig.~\ref{fig:overall}~(a) shows the former step and (b) and (c) show the latter step. \subsection{Feature representation of paired samples\label{sec:concat-feature}} As shown in Fig.~\ref{fig:dependentandindependent}~(b), each input is a pair of a genuine reference sample $g$ and a query sample $q$ for writer-independent signature verification. Then the paired samples $(g,q)$ are fed to some function to evaluate their discrepancy. If the evaluation result shows a large discrepancy, the query is supposed to be a forgery; otherwise, the query is genuine.\par Now we concatenate the two $d$-dimensional feature vectors ($g$ and $q$) into a $2d$-dimensional single vector as shown in Fig.~\ref{fig:overall}~(a). Although the concatenation doubles the feature dimensionality, it allows us to treat the paired samples in a simple way. Specifically, we consider a (Genuine $g$, Genuine $q^+$)-pair as a positive sample with the feature vector $\boldsymbol{x}^+=g\oplus q^+$ and a (Genuine $g$, Forgery $q^-$)-pair as a negative sample with $\boldsymbol{x}^-=g\oplus q^-$. 
If we have $m$ (Genuine, Genuine)-pairs and $n$ (Genuine, Forgery)-pairs, we have two sets $\mathbf{\Omega}^+ = \{\boldsymbol{x}_i^+\ |\ i=1,\ldots,m\}$ and $\mathbf{\Omega}^-=\{\boldsymbol{x}_j^-\ |\ j=1,\ldots,n\}$. \par Under this representation, the writer-independent signature verification task is simply formulated as the problem of finding a function $r(\boldsymbol{x})$ that gives a large value for $\boldsymbol{x}_i^+$ and a small value for $\boldsymbol{x}_j^-$. Ideally, we want to have $r(\boldsymbol{x})$ that satisfies $r(\boldsymbol{x}_i^+) > r(\boldsymbol{x}_j^-)$ for arbitrary $\boldsymbol{x}_i^+$ and $\boldsymbol{x}_j^-$. In this case, we have a constant threshold $\theta$ that satisfies $\max_j r(\boldsymbol{x}_j^-)< \theta < \min_i r(\boldsymbol{x}_i^+)$. If $r(\boldsymbol{x})> \theta$, $\boldsymbol{x}$ is simply determined as a (Genuine, Genuine)-pair. However, in reality, we do not have the ideal $r$ in advance; therefore we need to optimize (i.e., train) $r$ so that it becomes closer to the ideal case under some criterion. In \ref{sec:optimize}, pos@top is used as the criterion so that the trained $r$ gives more absolute tops.\par As indicated in Fig.~\ref{fig:overall}~(a), each signature is initially represented as a $d$-dimensional vector $g$ (or $q$) by SigNet~\cite{dey2017signet}, which is still a state-of-the-art signature verification model realized by metric learning with the contrastive loss. Although it is possible to use another model to obtain the initial feature vector, we use SigNet throughout this paper. The details of SigNet will be described in \ref{sec:signet}. \subsection{Optimization to learn top-rank pairs\label{sec:optimize}} We then use a top-rank learning model for optimizing the ranking function $r(\boldsymbol{x})$.
As noted in Section~\ref{sec:intro}, top-rank learning aims to maximize pos@top, which is formulated as: \begin{align} \label{eq:posatop} \mathrm{pos@top}=\frac{1}{m}\sum_{i=1}^m I\left(r(\boldsymbol{x}_i^+) > \max_{1\leq j\leq n}r(\boldsymbol{x}_j^-) \right), \end{align} where $I(z)$ is the indicator function. pos@top evaluates the ratio of positive samples with a higher value than any negative sample. The positive samples that satisfy the condition in Eq.~(\ref{eq:posatop}) are called absolute top positives or simply absolute tops. Absolute tops are very ``reliable'' positive samples because they are more positive than the top-ranked negative, that is, the ``hardest'' negative $\max_{1\leq j\leq n}r(\boldsymbol{x}_j^-).$ \par Among various optimization criteria, pos@top has promising properties for the writer-independent signature verification task. Maximization of pos@top is equivalent to maximization of the number of absolute tops --- this means we can have reliable positive samples to the utmost extent. In a very strict signature verification task, the query sample $q$ is verified as genuine only when the concatenated vector $\boldsymbol{x}=g\oplus q$ becomes one of the absolute tops. Therefore, having more absolute tops by maximizing pos@top gives a greater chance that the query sample is completely trusted as genuine. \par Note that we call $r(\boldsymbol{x})$ a ``ranking'' function, instead of just a scoring function. In Eq.~(\ref{eq:posatop}), the value of the function $r$ is used just for the comparison of samples. This suggests that the value of $r$ has no absolute meaning. In fact, if a certain $r(\boldsymbol{x})$ attains the maximum pos@top, $\phi(r(\boldsymbol{x}))$ also achieves the same pos@top, where $\phi$ is a monotonically-increasing function. Consequently, the ranking function $r$ only specifies the order (i.e., the rank) among samples. \par We will optimize $r$ to maximize pos@top.
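A minimal NumPy sketch of Eq.~(\ref{eq:posatop}); the scores below are toy values, not outputs of a trained $r$:

```python
import numpy as np

def pos_at_top(pos_scores, neg_scores):
    # pos@top = (1/m) * #{ i : r(x_i^+) > max_j r(x_j^-) }
    top_ranked_negative = np.max(neg_scores)   # the "hardest" negative
    return np.mean(pos_scores > top_ranked_negative)

# Toy ranking scores: three of the five positives beat every negative,
# so they are the "absolute tops" and pos@top = 3/5.
r_pos = np.array([0.9, 0.8, 0.7, 0.4, 0.2])
r_neg = np.array([0.5, 0.3, 0.1])
print(pos_at_top(r_pos, r_neg))  # 0.6
```

Note that only the order of the scores matters here: any monotone transformation of the scores leaves pos@top unchanged.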
Conventional top-rank learning maximizes pos@top for a training set whose samples are individual (i.e., unpaired) vectors. In contrast, our problem of learning top-rank pairs is a new ranking problem for paired samples, and it is applicable to various ranking problems where the relative relations between two vectors are important~\footnote{Theoretically, learning top-rank pairs can be extended to handle vectors obtained by concatenating three or more individual vectors. With this extension, our method can rank the mutual relationship among multiple vectors.}.\par As noted in \ref{sec:DNN} and shown in Fig.~\ref{fig:overall}~(b), we train $r$ along with a deep neural network (DNN) to obtain a feature space that yields more pos@top. However, there are some risks in maximizing Eq.~(\ref{eq:posatop}) directly with a DNN, which has high representational flexibility. The most realistic risk is that the DNN overfits outliers or noise. For example, if a negative outlier is distributed among the positive training samples, achieving a perfect pos@top is not a reasonable goal.\par To avoid such risks, we employ the $p$-norm relaxation technique~\cite{JMLR:v10:rudin09b,zheng2021top}. More specifically, we convert the maximization of pos@top into the minimization of the following loss: \begin{align} \label{eq:toprankloss} \mathcal{L}_{\mathrm{TopRank}}\left(\mathbf{\Omega^+},\mathbf{\Omega^-}\right)= \frac{1}{m}\sum_{i=1}^m\left( \sum_{j=1}^n\left( l(r(\boldsymbol{x}_i^+) - r(\boldsymbol{x}_j^-) )\right)^p\right)^{\frac{1}{p}}, \end{align} where $l(z) = \log(1+e^{-z})$ is a surrogate loss. Fig.~\ref{fig:overall}~(c) illustrates $\mathcal{L}_{\mathrm{TopRank}}$. When $p=\infty$, Eq.~(\ref{eq:toprankloss}) is reduced to $\mathcal{L}_{\mathrm{TopRank}}= \frac{1}{m}\sum_{i=1}^m \max_{1\leq j \leq n} l(r(\boldsymbol{x}_i^+) - r(\boldsymbol{x}_j^-))$, which corresponds to the original pos@top objective of Eq.~(\ref{eq:posatop}).
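The relaxed loss of Eq.~(\ref{eq:toprankloss}) is straightforward to implement; the sketch below uses precomputed ranking scores (in practice, $r$ is the output of the DNN described in \ref{sec:DNN}):

```python
import numpy as np

def surrogate(z):
    # l(z) = log(1 + exp(-z)), computed stably via logaddexp
    return np.logaddexp(0.0, -z)

def toprank_loss(r_pos, r_neg, p=32):
    # L = (1/m) sum_i ( sum_j l(r_i^+ - r_j^-)^p )^(1/p)
    gaps = r_pos[:, None] - r_neg[None, :]            # m x n score differences
    per_pos = np.sum(surrogate(gaps) ** p, axis=1) ** (1.0 / p)
    return np.mean(per_pos)                           # p-norm acts as a soft max over negatives

r_pos = np.array([2.0, 1.5, 0.5])   # toy positive scores
r_neg = np.array([0.0, -1.0])       # toy negative scores
print(toprank_loss(r_pos, r_neg, p=32))
```

For large $p$, the inner $p$-norm approaches $\max_j l(\cdot)$, recovering the $p=\infty$ form above; smaller $p$ smooths the loss, which is the relaxation used in \cite{zheng2021top}.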
If we set $p$ to a large value (e.g., 32), Eq.~(\ref{eq:toprankloss}) approaches the original pos@top loss. In \cite{zheng2021top}, it is noted that $p$ should not be chosen too large, because of the risk of over-fitting and of overflow errors in the implementation. \par \setlength{\belowcaptionskip}{-0.5cm} \begin{figure}[t!] \begin{center} \includegraphics[width=\columnwidth]{figures/overall2_cropped.pdf}\\[-2mm] \caption{The learning mechanisms of SigNet (top) and learning top-rank pairs (bottom).} \label{fig:signettoprank} \end{center} \end{figure} \subsection{Learning top-rank pairs with their representation\label{sec:DNN}} To obtain a final feature representation that yields a higher pos@top, we apply a DNN that converts $\boldsymbol{x}$ non-linearly during the training of $r$. Fig.~\ref{fig:overall}~(b) shows the process with the DNN. Each of the paired feature vectors in a minibatch is fed to the DNN, which converts the vectors to another feature space in which their ranking score $r$ is calculated. The parameters of the DNN are trained with the loss function $\mathcal{L}_{\mathrm{TopRank}}$. \subsection{Initial features by SigNet\label{sec:signet}} As noted in Section~\ref{sec:concat-feature}, we need initial vectors ($g$ and $q$) for individual signatures, obtained by an arbitrary signature image representation method. SigNet~\cite{dey2017signet} is the current state-of-the-art signature verification method and achieves high performance on standard accuracy measures. As shown in Fig.~\ref{fig:overall}~(a), SigNet is based on metric learning with a contrastive loss and takes a pair of a reference signature and a query signature as its input images. For all pairs of reference and query signatures, SigNet is optimized to decrease the distance between (Genuine, Genuine)-pairs and increase the distance between (Genuine, Forgery)-pairs. The trained network converts a reference image into $g$ and a query image into $q$.
Then a positive sample $\boldsymbol{x}_i^+$ or a negative sample $\boldsymbol{x}_j^-$ is obtained by the concatenation $g\oplus q$, as described in Section~\ref{sec:concat-feature}. \par Theoretically, we could conduct end-to-end training of SigNet in Fig.~\ref{fig:overall}~(a) and the DNN in (b). In this paper, however, we fix the SigNet model after its independent training with the contrastive loss. This is simply to make the comparison between the proposed method and SigNet as fair as possible. (In other words, we want to observe the pure effect of pos@top maximization and thus minimize the extra effect of further representation learning in SigNet.) \par One might assume that the metric-learning result of SigNet and the ranking result of top-rank learning in the proposed method are almost identical; however, as shown in Fig.~\ref{fig:signettoprank}, they are very different. As emphasized so far, the proposed method aims to obtain more pos@top; this means we have a clear boundary between the absolute tops and the others. In contrast, SigNet has no such function. Consequently, SigNet carries the risk that a forgery has a very small distance to a genuine signature. Such a forgery would then be wrongly considered one of the reliable positives, which are determined by applying a threshold $\lambda$ to the distance given by SigNet. \section{Experiments} In this section, we demonstrate the effectiveness of the proposed method on signature verification tasks. Specifically, we conduct a comparative experiment with SigNet, which is known as an outstanding method for signature verification tasks. \subsection{Datasets} In this work, the BHSig260 offline signature dataset\footnote{Available at http://www.gpds.ulpgc.es/download} is used for the experiments\footnote{The CEDAR dataset, which is also used in~\cite{dey2017signet}, was not used in this work because 100\% accuracy has already been achieved on its test set.
Moreover, the GPDS 300 and GPDS Synthetic Signature Corpus datasets could not be obtained due to access restrictions}, which comprises two subsets: one with signatures in Bengali (named BHSig-B) and the other in Hindi (named BHSig-H). The BHSig-B dataset includes 100 writers in total, each possessing 24 genuine signatures and 30 skillfully forged signatures. The BHSig-H dataset contains 160 writers, each with the same numbers of genuine and forged signatures as in BHSig-B. In the experiments, both datasets are divided into training, validation, and test sets at a ratio of 8:1:1.\par Following the writer-independent setting, we evaluate the verification performance using pairs of signatures. That is, the task is to verify whether a given pair is (Genuine, Genuine) or (Genuine, Forgery). We prepare a total of 276 (Genuine, Genuine)-pairs and 720 (Genuine, Forgery)-pairs for each writer. \subsection{Experimental Settings} \subsubsection{Setting of the SigNet} \label{subsec:setting_signet} SigNet is also based on a Siamese network architecture, whose optimization objective is similarity measurement. In the experiment, we followed the training setting noted in~\cite{dey2017signet}, except for the modified data division. \subsubsection{Setting of the proposed method} As introduced in Section~\ref{sec:signet}, we used the features extracted by the trained SigNet (124G floating point operations (FLOPs)) as the initial features. For learning top-rank pairs with their representation, we used a simple architecture of 4 fully-connected layers (2048, 1024, 512, and 128 nodes, respectively) with ReLU activations (1G FLOPs). The hyper-parameter $p$ of the loss function of Eq.~(\ref{eq:toprankloss}) is chosen from $\{2, 4, 8, 16, 32\}$ based on the validation pos@top. As a result, we obtained $p=4$ for BHSig-B and $p=16$ for BHSig-H. The following results and visualizations are obtained using these hyper-parameters.
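The validation-based choice of $p$ described above amounts to a small grid search: train one model per candidate $p$ and keep the one with the highest validation pos@top. A hypothetical sketch (the \texttt{scores} dictionary stands in for the validation scores produced by actual training runs):

```python
import numpy as np

def pos_at_top(pos_scores, neg_scores):
    """Fraction of positives ranked above the highest-scoring negative."""
    return np.mean(np.asarray(pos_scores) > np.max(neg_scores))

def select_p(val_scores_by_p):
    """Pick the candidate p whose trained model maximizes validation pos@top.

    val_scores_by_p maps each candidate p to the (pos_scores, neg_scores)
    of the model trained with that p -- stand-ins for real training runs.
    """
    return max(val_scores_by_p,
               key=lambda p: pos_at_top(*val_scores_by_p[p]))

# hypothetical validation outcomes for p in {2, 4, 8, 16, 32}
scores = {2: ([0.7, 0.4], [0.6]), 4: ([0.8, 0.7], [0.6]),
          8: ([0.7, 0.5], [0.6]), 16: ([0.65, 0.5], [0.6]),
          32: ([0.61, 0.5], [0.6])}
print(select_p(scores))  # 4
```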
\subsection{Evaluation metrics} In the experiment, pos@top, accuracy, AUC, False Rejection Rate (FRR), and False Acceptance Rate (FAR) are used to comprehensively evaluate the proposed method and SigNet. \begin{itemize}[itemsep=0pt,topsep=5pt] \item \textbf{pos@top}: The ratio of the absolute top (Genuine, Genuine) signature pairs to the number of all of the (Genuine, Genuine) signature pairs (see also Eq.~(\ref{eq:posatop})). \item Accuracy: The maximum result of the average between True Positive Rate (TPR) and True Negative Rate (TNR), following the definition in~\cite{dey2017signet}. \item AUC: Area under the ROC curve. \item FAR: The ratio of the number of falsely accepted (Genuine, Forgery) signature pairs divided by the number of all (Genuine, Forgery) signature pairs. \item FRR: The ratio of the number of falsely rejected (Genuine, Genuine) signature pairs divided by the number of all (Genuine, Genuine) signature pairs. \end{itemize} \subsection{Quantitative and qualitative evaluations} \begin{table}[t!] \centering \caption{The comparison between the proposed method and SigNet on BHSig-B and BHSig-H datasets.}\label{tab1} \setlength{\tabcolsep}{1mm} \begin{tabular}{ccccccc} \toprule Dataset & Approaches & {\bfseries pos@top ($\uparrow$)} & Accuracy ($\uparrow$) & AUC ($\uparrow$)& FAR ($\downarrow$)& FRR ($\downarrow$)\\ \hline \specialrule{0em}{1pt}{1pt} \multirow{2}{*}{BHSig-B} & {\bfseries proposed } & {\bfseries 0.283} & {\bfseries 0.806} & {\bfseries 0.889} & {\bfseries 0.222} & {\bfseries0.222} \\ & SigNet & 0.000 & 0.756 & 0.847 & 0.246 & 0.247 \\ \hline \specialrule{0em}{1pt}{1pt} \multirow{2}{*}{BHSig-H} & {\bfseries proposed } & {\bfseries 0.114} & {\bfseries 0.836} & {\bfseries 0.908} & {\bfseries 0.179} & {\bfseries 0.178} \\ & SigNet & 0.000 & 0.817 & 0.891 & 0.192 & 0.192 \\ \bottomrule \end{tabular} \end{table} \setlength{\belowcaptionskip}{-0.1cm} \begin{figure}[t!] 
\centering \begin{subfigure}{0.495\columnwidth} \centering \includegraphics[width=6cm]{figures/Bengali_ROC_p=4_with_zoom_cropped.pdf} \caption{ROC curves (full and zoom) on BHSig-B} \label{fig:rocB} \end{subfigure} \centering \begin{subfigure}{0.495\columnwidth} \centering \includegraphics[width=6cm]{figures/Hindi_ROC_p=16_with_zoom_cropped.pdf} \caption{ROC curves (full and zoom) on BHSig-H} \label{fig:rocH} \end{subfigure} \caption{The comparison of ROC curves between the proposed method and SigNet on BHSig-B and BHSig-H datasets.} \label{fig:roc} \end{figure} \setlength{\belowcaptionskip}{-0.0cm} \begin{figure}[t!] \centering \begin{subfigure}{0.495\columnwidth} \centering \includegraphics[width=6cm]{figures/Bengali_fc_testing_toprankcnn_p=4_cropped.png} \caption{PCA result of the proposed method on BHSig-B} \label{fig:bengaliPCA} \end{subfigure} \centering \begin{subfigure}{0.495\columnwidth} \centering \includegraphics[width=6cm]{figures/Hindi_fc_testing_toprankcnn_p=16_cropped.png} \caption{PCA result of the proposed method on BHSig-H} \label{fig:hindiPCA} \end{subfigure}\\ \medskip \centering \begin{subfigure}{0.495\columnwidth} \centering \includegraphics[width=6cm]{figures/Bengali_top_hist_cropped.pdf} \captionsetup{justification=centering} \caption{Histogram of the ranking scores of the proposed method with a zoomed view on BHSig-B} \label{fig:bengalitophist} \end{subfigure} \centering \begin{subfigure}{0.495\columnwidth} \centering \includegraphics[width=6cm]{figures/Hindi_top_hist_cropped.pdf} \captionsetup{justification=centering} \caption{Histogram of the ranking scores of the proposed method with a zoomed view on BHSig-H} \label{fig:hinditophist} \end{subfigure}\\ \medskip \centering \begin{subfigure}{0.495\columnwidth} \centering \includegraphics[width=6cm]{figures/Bengali_sig_hist_cropped.pdf} \captionsetup{justification=centering} \caption{Histogram of the ranking scores of SigNet with a zoomed view on BHSig-B} \label{fig:bengalisighist} \end{subfigure}
\centering \begin{subfigure}{0.495\columnwidth} \centering \includegraphics[width=6cm]{figures/Hindi_sig_hist_cropped.pdf} \captionsetup{justification=centering} \caption{Histogram of the ranking scores of SigNet with a zoomed view on BHSig-H} \label{fig:hindisighist} \end{subfigure} \centering \caption{(a) and (b)~PCA visualizations of feature distribution for the proposed method. The top-ranked negative and the absolute top positives are highlighted. (c)-(f)~Distributions of the ranking scores as histograms. The horizontal and vertical axes represent the ranking score and \#samples, respectively. } \label{fig:hist} \end{figure} The quantitative evaluations of the proposed method and SigNet on the two datasets are shown in Table~\ref{tab1}. Remarkably, the proposed method achieved an overwhelmingly better pos@top, while the pos@top of SigNet is 0 for both datasets. This demonstrates that the proposed method can reveal absolute top positive signature pairs (i.e., highly reliable signature pairs). Furthermore, the proposed method also outperformed the comparison method on all other evaluation criteria: higher accuracy and AUC, and lower FAR and FRR.\par The ROC curves of the proposed method and SigNet on the two datasets are shown in Fig.~\ref{fig:roc}, each accompanied by a zoomed view of the beginning of the curve. As a more intuitive demonstration of Table~\ref{tab1}, besides the larger AUC, it is evident that the proposed method achieved a higher pos@top, which is the True Positive Rate (TPR) at $x=0$.\par Figs.~\ref{fig:hist}~(a) and (b) show the distributions of features from the proposed method mapped by Principal Component Analysis (PCA) on the two datasets, colored by ranking scores (normalized within the range [0,1]). From the visualizations, we can clearly see the absolute top positive sample pairs, separated from the rest by the top-ranked negative.
\par Following the feature distributions, Figs.~\ref{fig:hist}~(c) and (d)~give a more intuitive representation of the ranking order produced by learning top-rank pairs. These graphs show (1)~where the top-ranked negative appears and (2)~how many (Genuine, Genuine)- and (Genuine, Forgery)-pairs receive similar scores. From these two graphs, it can be observed that the first negative appears only after a portion of the positives. On the other hand, Figs.~\ref{fig:hist}~(e) and (f) show the ranking conditions of SigNet. Since the first negative pair appears at the top of the ranking, pos@top is 0 for both datasets. \par \setlength{\belowcaptionskip}{-0.5cm} \begin{figure}[t!] \begin{center} \includegraphics[width=\columnwidth]{figures/examples_cropped.pdf} \caption{Examples of (a)(b)~absolute top (Genuine, Genuine)-pairs, (c)(d)~non-absolute top (Genuine, Genuine)-pairs, and (e)(f)~(Genuine, Forgery)-pairs from BHSig-B and BHSig-H respectively.} \label{fig:example} \end{center} \end{figure} As shown in Fig.~\ref{fig:example}, the absolute top (Genuine, Genuine)-pairs in (a) and (b) show great similarity to their counterparts. In particular, the consistency of their strokes and slant preference can easily be noticed even with the naked eye. On the other hand, both the non-absolute top (Genuine, Genuine)-pairs in (c) and (d) and the (Genuine, Forgery)-pairs in (e) and (f) show less similarity to their corresponding signatures, regardless of whether they are written by the same writer or not. Although they are all assigned low ranking scores by learning top-rank pairs, such resemblance between the two classes could easily cause misclassification in conventional methods. Thus, our results support the claimed effectiveness of the proposed method in maximizing pos@top.
\section{Conclusion} As a critical application, especially in formal scenarios such as forensic handwriting analysis, signature verification has long played an important role. In this work, we proposed a writer-independent signature verification model based on learning top-rank pairs. What is novel about this model is that the optimization objective of top-rank learning is to maximize pos@top, that is, the ratio of highly reliable signature pairs. This objective matches the intuitive need of signature verification tasks to acquire reliable genuine signatures, rather than merely separating positives from negatives. Through experiments on the BHSig-B and BHSig-H datasets, the effectiveness of pos@top maximization was demonstrated in comparison with a metric-learning-based network, SigNet. The proposed model also showed encouraging results on AUC, accuracy, and other evaluation criteria frequently used in signature verification. \bibliographystyle{IEEEtran}
\section{INTRODUCTION}\label{sec:intro} \par \IEEEPARstart{O}{ccupational} stress is well-researched \cite{Carneiro2019} \cite{Can2019a} \cite{Schmidt2018}\cite{Carneiro2017}\cite{Alberdi2016}, not least due to its pernicious effect on people's health but also due to the economic benefits of keeping the stress level of employees in check. Admittedly, although a small amount of stress is benign and even auspicious because it provides the necessary gumption to survive the tribulations of the modern workplace \cite{Kirby2013} \cite{Dhabhar2012}, chronic stress (i.e., enduring stress) has detrimental repercussions. Physiological and psychological disorders \cite{Tennant2001} \cite{Colligan2005a}, job-related tensions \cite{Ganster2013}, and a general deterioration of health are just a few examples of its adverse outcomes. Furthermore, stress is responsible for significant economic losses because stressed-out workers have suboptimal productivity, are prone to higher job absenteeism and presenteeism, and are disproportionately predisposed to sickness \cite{EU-OSHA2017, Colligan2005a}. \par Consequently, overcoming stress at work is primordial both to the well-being of workers and to the bottom line of any business. Nevertheless, at the moment, there exists no mainstream real-world stress monitoring system \cite{Peake2018}. The most reliable stress monitoring strategies rely on directly measuring the level of stress-inducing hormones (e.g., cortisol concentration in saliva and sweat \cite{Marques2010} \cite{Hellhammer1994}) and on psychological evaluations performed by psychologists. However, these procedures are neither suitable nor feasible for continuously monitoring stress in the workplace because they are obtrusive and are carried out sporadically. Moreover, in the case of psychological evaluations, people are reluctant to reveal their work stress honestly \cite{Eisen2008}.
Luckily, stress spawns detectable physiological, psychological, and behavioral changes that can be used for automatic stress recognition \cite{Carneiro2019} \cite{Alberdi2016}. For example, acute stress decreases a person's Heart Rate Variability (HRV) and parasympathetic activation \cite{JARVELIN-PASANEN2018}. Besides, there is plentiful research showing that it is plausible to indirectly monitor stress using physiological signals such as the Electrodermal Activity (EDA) \cite{Adams2014}, the HRV \cite{Melillo2011}\cite{Cinaz2013}, the Electroencephalogram (EEG)\cite{Rahnuma2011}, and the Electromyography (EMG)\cite{Wei2013}. \par Although there is a surfeit of publications \cite{Carneiro2019} \cite{Schmidt2018} \cite{Poria2017} \cite{Alberdi2016} on automatic stress prediction, at the moment, aside from a few niche and non-scientifically proven consumer products, there exists no effective system that automatically and unobtrusively monitors people's stress in real-world environments \cite{Peake2018}. On the one hand, some of the proposed approaches (e.g., EEG-based stress monitoring) are outright impractical because they are too obtrusive. On the other hand, the most precise approaches (e.g., \cite{Koldijk2018}, \cite{Poria2017} and \cite{Mozos2017}) predict stress using a fusion of multiple sensor data (e.g., audio, video, computer logging, posture, facial expression, and physiological features). These methods, however, raise technical, privacy, and security challenges (e.g., the implications of logging users' computer keystrokes and recording video and speech), and are therefore inconvenient to deploy in real-world settings because of company-wide computer security policies or international workplace privacy regulations.
Finally, the most practical and unobtrusive stress monitoring methods (e.g., \cite{Gjoreski2016}\cite{Kocielnik2013} \cite{Zhai2006}\cite{Healey2005})\textemdash which are mostly based on physiological signals that are recordable on a person's wrist (e.g., Photoplethysmography (PPG) and EDA)\textemdash have not yet reached mainstream consumers despite their potential economic and health benefits. The lack of viable stress monitoring products, despite the extensive research on occupational stress, the availability of enabling technology (e.g., smartwatches with on-wrist HRV and EDA sensors), and the immense economic and health benefits such products would bring, begs the question of why this is the case. \par A recent review article on affect and stress recognition \cite{Schmidt2018} scrutinized the published literature and noted the striking discrepancy between the accuracy of person-specific stress prediction Machine Learning (ML) models (i.e., ML models that predict the stress of a specific person) and person-independent ML models (i.e., generic ML models that predict the stress of any person). The article underscores that person-specific ML models (e.g., \cite{Koldijk2018}, \cite{Nakashima2016}, \cite{Picard2001},\cite{Healey2005},\cite{Haag2010},\cite{Melillo2011},\cite{Valenza2014a} and \cite{Alberdi2018}) achieved excellent prediction accuracy. Nevertheless, their predictions are person-specific\textemdash that is, the ML models would not generalize well in predicting the stress of yet unseen people\textemdash and therefore they cannot be used to create mass-market stress monitoring products. On the contrary, the pragmatic person-independent solutions (e.g., \cite{Schmidt2018a},\cite{Koldijk2018},\cite{Zenonos2016},\cite{Nakashima2016}, \cite{Gjoreski2016}, \cite{Kolodyazhniy2011}, and \cite{Andre2008}) generally have a much lower stress prediction accuracy; accordingly, they are an equally poor choice for creating mass-market stress monitoring products.
For example, \cite{Andre2008} achieved a 95.0\% emotion recognition accuracy using person-specific ML models; however, the same approach resulted in a mere 70\% accuracy when applied to a person-independent classification model. In a like manner, the authors in \cite{Nakashima2016} conducted experiments to monitor stress in daily work and found that ML models that use people's physiology to predict stress are highly person-dependent. Their person-specific ML models achieved a 97\% accuracy, but the generic ones dwindled to a mere 42\% accuracy. Their results resemble those in \cite{Koldijk2018}, which achieved a 90.0\% accuracy when using person-specific stress classification models. However, when the same approach was applied to predict the stress of new subjects, its performance ebbed to a meager 58.8$\pm$11.6\% accuracy. \par \label{sec:intro:underperformance} These mediocre outcomes are expected; indeed, the authors in \cite{Lamichhane2017} argued that, when people's physiological differences are not accounted for, ML stress prediction models perform no better than a model with no learning capability. First, stress is intrinsically idiosyncratic and depends on a person's uniqueness (e.g., genetics) and coping ability \cite{Kogler2015}. Second, there is incontrovertible evidence that there exist gender differences in how people respond to stress \cite{Wang2007a} and that men and women feel stress differently, as women tend to report a higher level of stress on self-report questionnaires \cite{Liapis2015} \cite{Matud2004}. Third, a stressor that produces stress in one person will not necessarily trigger the same stress response in a different person \cite{Hernandez2011}\cite{Childs2014}\cite{Johnstone2015} \cite{Sapolsky1994}. Finally, for the same person, there exists significant day-to-day variability in the cortisol awakening response, which may affect how that person responds to stress \cite{Almeida2009}.
As a result, a practical stress monitoring scheme needs to take into account inter-individual and intra-individual differences, people's gender, the temporal variability of human stress, and many other factors that influence how humans react to stress. The state-of-the-art stress monitoring strategies (e.g., \cite{Attaran2018}) use person-specific ML models. Unfortunately, this method is not realistic for creating a real-world product. A stress monitoring system that uses this approach would be costly (e.g., collecting data and training ML stress prediction models for every user of the system) and would require expensive recurrent updates because stress is innately dynamic. \par Recent research has proposed diverse methods to improve the performance of generic stress prediction models. The most straightforward methods use normalization techniques (e.g., range normalization, standardization, baseline comparison, and Box-Cox transformation) to reduce the impact of inter-individual variability while preserving the differences between the stress classes \cite{Lamichhane2017}\cite{Aigrain2016}. Normalization improves the performance of generic models, but they still underperform compared to person-specific ones. Furthermore, as \cite[Chap.~5]{Aigrain2016} noted, the normalization process is multifaceted and depends on trial-and-error methods. An alternative strategy is to predict stress based on clusters of similar users \cite{Koldijk2018}\cite{Xu2015}\cite{Ramos2014}. These techniques are important contributions towards producing an effective stress monitoring system. However, they also perform inadequately compared to person-specific models. Moreover, these methods would likely prove too complex to use in real-world settings because they are sensitive to the number of clusters \cite{Xu2015} and, given that many factors influence a person's stress \cite{Schneiderman2008}, it is not clear which similarity criteria should be used to form the clusters.
\par In this paper, we propose a hybrid, cheaper-to-deploy stress prediction method that incorporates tiny person-specific physiological calibration samples into a much larger generic sample collected from a large group of people. The proposed method hinges on the premise that all humans share a hormonal response to stress \cite{Charmandari2005}, but that a person's unique factors such as gender \cite{Wang2007a}, genetics \cite{Wust2004}, personality \cite{Childs2014}, weight \cite{Jayasinghe2014}, and coping ability \cite{Kogler2015} differentiate how the person reacts to stress. Hence, we hypothesize that it could be possible to reuse generic samples collected from many people as a starting point for creating a personalized and more effective model. To confirm these assumptions, we tested this strategy on two major stress datasets. Our results show a substantial improvement in the stress prediction models' performance even when we used only 100 calibration samples. In summary, in this paper: \begin{enumerate}[label=(\roman*)] \setlength\itemsep{1em} \item For each subject in the datasets, we train and validate \textit{n} person-specific regression and classification stress prediction models using a 10-fold cross-validation approach. The results show that, for all subjects, the classification models achieved a greater than 95\% classification accuracy and that the regression models had a near-zero mean absolute error (MAE). \item We used a Leave-One-Subject-Out Cross-Validation (LOSO-CV) to assess the performance of generic stress prediction models. All models performed poorly (e.g., $42.5\%\pm 19.9\%$ accuracy and $14.0\pm7.9$ MAE on one dataset) compared to the person-specific models, with a wide performance variation between the subjects.
\item We devise a hybrid technique that derives a personalized person-specific-like stress prediction model from samples collected from a large population and discussed how it could be used to develop a real-world continuous stress monitoring system in, e.g., intelligent buildings. \end{enumerate} \section{METHODS} \begin{table*}[htbp] \centering \caption{Selected heart rate variability (HRV) and electrodermal activity (EDA) features} \vspace{-2mm} \begin{tabular}{@{}rllr@{}} \toprule \multicolumn{1}{c}{\multirow{9}[4]{*}{\begin{sideways}HRV Features\end{sideways}}} & \multicolumn{1}{p{6.335em}}{Time domain} & Mean, median, standard deviation, skewness and kurtosis of all RR intervals & \\ & \multicolumn{1}{p{6.335em}}{RMSSD} & Root mean square of the successive differences & \\ & \multicolumn{1}{p{6.335em}}{SDSD} & Standard deviation of all interval of differences between adjacent RR intervals & \\ & \multicolumn{1}{p{6.335em}}{SDRR\_RMSSD} & Ratio of SDRR over RMSSD & \\ & \multicolumn{1}{p{6.335em}}{pNNx} & Percentage of number of adjacent RR intervals differing by more than 25 and 50 ms & ref. 
\cite{TaskForce1996} \\ & \multicolumn{1}{p{6.335em}}{SD1, SD2} & Short- and long-term Poincar\'e plot descriptors of the heart rate variability & \\ & \multicolumn{1}{p{6.335em}}{RELATIVE\_RR} & Time-domain features (e.g., mean, median, SDRR, RMSSD) of the relative RR & see note a \\ & \multicolumn{1}{p{6.335em}}{VLF, LF, HF} & Very low (VLF), low (LF), and high (HF) frequency bands in the HRV power spectrum & \\ & \multicolumn{1}{p{6.335em}}{LF/HF} & Ratio of the low (LF) and high (HF) HRV frequency powers & \\ \cmidrule{1-3} \multicolumn{1}{c}{\multirow{8}[2]{*}{\begin{sideways}EDA Features\end{sideways}}} & Time domain & Mean, max, min, range, kurtosis, skewness of the SCR & \\ & Derivatives & Mean and standard deviation of the first and second derivatives of the SCR & \\ & \multicolumn{1}{p{6.335em}}{Peaks} & Mean, max, min, standard deviation of the peaks & \\ & \multicolumn{1}{p{6.335em}}{Onset} & Mean, max, min, standard deviation of the onsets & ref. \cite{Zangroniz2017} \\ & ALSC & Arc length of the SCR & see note b\\ & INSC & Integral of the SCR & see note c \\ & APSC & Normalized average power of the SCR & see note d \\ & RMSC & Normalized root mean square of the SCR & see note e \\ \cmidrule{1-3} \multicolumn{3}{l}{ $^a REL_{{RR}_{i}}=2\Big[\frac{RR_{i} -RR_{i-1}}{RR_{i} +RR_{i-1}}\Big], \quad i=2, ..., N \quad $ } & \\ \multicolumn{3}{l}{$^b ALSC=\sum_{n=2}^{N}\sqrt{1+\big(r[n]-r[n-1]\big)^{2}}$} & \\ \multicolumn{3}{l}{$^c INSC=\sum_{n=1}^{N}\big|r[n]\big|$} & \\ \multicolumn{3}{l}{$^d APSC=\frac{1}{N}\sum_{n=1}^{N}r[n]^{2}$} & \\ \multicolumn{3}{l}{$^e RMSC=\sqrt{\frac{1}{N}\sum_{n=1}^{N}r[n]^{2}}$} & \\ \bottomrule \end{tabular}% \label{tab:hrv-eda-features}% \vspace{-4mm} \end{table*}% \subsection{Stress datasets} \label{sec:dataset} \par We used two stress datasets to conduct this study. The first dataset \textemdash the SWELL dataset \cite{Koldijk2014} \textemdash was collected at Radboud University.
This dataset is a result of experiments conducted on 25 subjects doing office work (for example writing reports, making presentations, reading e-mail and searching for information) who were exposed to quintessential work stressors (e.g., being unexpectedly interrupted by an urgent e-mail and pressure to complete work in a limited time). During the experiment, the researchers recorded the subjects' computer usage patterns, their facial expressions, their body postures, their electrocardiogram (ECG) signal, and their electrodermal activity (EDA) signal. The participants went through three different working conditions: \begin{enumerate} \setlength\itemsep{1em} \item \textit{no stress} \textemdash the participants performed the assigned tasks for a maximum of 45 minutes. \item\textit{time pressure} \textemdash each participant's time to finish the task was reduced to two-thirds of the duration that he/she took in the no-stress condition. \item \textit{interruption} \textemdash the participants received interrupting e-mails in the middle of their assigned tasks. Some e-mails were relevant to their tasks, and the participants were requested to take specific actions. Other e-mails were immaterial, and the participants did not need to take any action. \end{enumerate} At the end of each experiment condition, each participant's perceived stress was assessed using a variety of self-report questionnaires, including the NASA Task Load Index (NASA-TLX) \cite{Hart1988}. In this study, we focus on the NASA-TLX because it indicates a person's mental load based on a weighted average of multi-dimensional rating (in terms of mental demand, physical demand, temporal demand, effort, performance, and frustration) and is the standard method in assessing subjective workload. \par The second dataset \textemdash the WESAD dataset \cite{Schmidt2018a}\textemdash was collected by researchers from the Robert Bosch GmbH and the University of Siegen in Germany. 
The dataset includes physiological (EDA, ECG, EMG, respiration, and skin temperature) and acceleration signals that the researchers collected from 15 subjects, whom they exposed to three affective conditions as follows: \begin{enumerate} \setlength\itemsep{1em} \item \textit{baseline condition}\textemdash the baseline condition aimed to induce a neutral affective state in the participants and lasted for 20 minutes. \item \textit{amusement condition} \textemdash the subjects watched funny video clips. Each video clip was followed by a brief (5-second) neutral period. The amusement condition lasted 392 seconds. \item \textit{stress condition} \textemdash the participants were subjected to the Trier Social Stress Test (TSST) \cite{Kirschbaum1993} and asked to give a five-minute public speech and to count down from 2023 in steps of 17. If a subject made an error, he/she was requested to start over. \end{enumerate} The amusement and the stress conditions were each followed by a meditation period to ``de-excite'' the participants back to the baseline condition. Throughout the experiment, the participants provided five self-reports, including the Short Stress State Questionnaire (SSSQ) \cite{Helton2015}, which was used to determine the type of stress (i.e., worry, engagement, or distress) that was prevalent in the participants. \subsection{Feature extraction} \label{sec:feature-computation} \par We extracted HRV and EDA features from the two datasets. We computed the HRV features according to the standards and algorithms proposed by the Task Force of the European Society of Cardiology \cite{TaskForce1996}. Each HRV feature (\cref{tab:hrv-eda-features}) was computed on a five-minute moving window as follows: first, we extracted an Inter-Beat Interval (IBI) signal from the peaks of the Electrocardiogram (ECG) signal of each subject. Then, we computed each HRV index on a 5-minute IBI window.
Finally, a new IBI sample was appended to the end of the IBI array while the oldest sample was removed from its beginning, and the resulting array was used to compute the next HRV index. We repeated this process until the end of the entire IBI signal. Likewise, for the EDA signals, the raw EDA signal was first filtered with a 4 Hz fourth-order Butterworth low-pass filter and then smoothed with a moving average filter. Next, we computed the EDA features (\cref{tab:hrv-eda-features}) on a 10-minute moving window from various attributes of the skin conductance response (SCR). \par All the resulting datasets\textemdash especially the WESAD datasets\textemdash are inherently unbalanced because their experimental protocols dictated conditions of different durations. We downsampled the datasets by randomly discarding samples from the majority classes to balance the datasets and thereby prevent the majority classes from overshadowing the minority classes. Furthermore, for the WESAD dataset, we altogether removed all samples corresponding to the \textit{amusement condition} because it is almost as short as the sliding window we used for computing the features. \subsection{Feature engineering} \label{sec:feature-engineering} An inspection of the histogram plots of the features computed in section \ref{sec:feature-computation} revealed that most features' distributions are skewed. While this may not be an issue for some machine learning algorithms, for others the distribution of the features is critical. For example, linear regression models expect Gaussian-distributed data. We mitigated this risk by applying a logarithmic transformation, a square root transformation, and a Yeo and Johnson \cite{Yeo2004} transformation to the skewed features. The application of the three transformations aimed to produce a dataset that can be used with most machine learning algorithms.
The logarithmic transform shrinks the long heavy tail of a feature's distribution and bolsters its smaller values. Therefore, it roughly normalizes the data distribution and reduces the effect of outliers. Likewise, we applied a square root transform to all positive features to magnify their small values and counterweight larger ones. However, it is not possible to apply either the logarithmic or the square root transform to negative values; therefore, we applied the Yeo and Johnson transformation (\cref{eq:yeo-johnson}) to the negatively skewed features. \begin{equation}\label{eq:yeo-johnson} y(\lambda)= \begin{cases} \frac{(y+1)^{\lambda}-1}{\lambda},& \text{when } \lambda \neq0, \quad y\geq 0\\ \log(y+1), & \text{when } \lambda =0, \quad y\geq 0\\ \frac{(1-y)^{2-\lambda}-1}{\lambda-2},& \text{when } \lambda \neq2, \quad y< 0\\ -\log(1-y),& \text{when } \lambda =2, \quad y< 0\\ \end{cases} \end{equation} Additionally, as suggested in \cite{Aigrain2016}\cite{Lamichhane2017}, to minimize the influence of outliers and of the inter-individual physiological variation in adapting to a stressor, we scaled the datasets by applying a scaler $S_c(X)$ to every data point $X_i$ of each feature $X$ (\cref{eq:scaler}). $S_c(X)$ removes the feature's median and uses its $25^{th}$ and $75^{th}$ quantiles to re-adjust the data points. \begin{equation}\label{eq:scaler} S_c(X)=\frac{X_{i}-\operatorname{median}(X)}{Q_{3}(X)-Q_{1}(X)} \end{equation} \par The feature engineering resulted in as many as 94 features. It is possible that some of these features are correlated with others and that some are not very relevant to stress prediction. There might thus be a need to decrease the number of the datasets' attributes \textemdash not least because this will reduce the computational requirements of the resulting predictive models \textemdash but most importantly because it could improve the models' generalization.
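The transformations and the scaler above can be realized, for example, with scikit-learn, whose \texttt{PowerTransformer} implements the Yeo--Johnson transformation of \cref{eq:yeo-johnson} and whose \texttt{RobustScaler} matches the median/inter-quartile scaling of \cref{eq:scaler}. The sketch below is illustrative only; the toy data and the shift applied before the Yeo--Johnson step are assumptions.

```python
import numpy as np
from sklearn.preprocessing import PowerTransformer, RobustScaler

rng = np.random.default_rng(42)
# toy skewed "feature matrix" standing in for the engineered features
X = rng.exponential(scale=2.0, size=(500, 3))

# log and square-root transforms for strictly positive features
X_log = np.log(X)
X_sqrt = np.sqrt(X)

# Yeo-Johnson also handles zero and negative values (cf. the equation above)
yj = PowerTransformer(method="yeo-johnson", standardize=False)
X_yj = yj.fit_transform(X - 1.0)      # shifted so some values are negative

# robust scaling: subtract the median, divide by the inter-quartile range
scaler = RobustScaler(quantile_range=(25.0, 75.0))
X_scaled = scaler.fit_transform(X_yj)
```

After this step, each feature has zero median and unit inter-quartile range, which limits the leverage of outliers and of subject-specific baselines.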
We computed the mean decrease impurity (MDI) of each feature, i.e., the mean loss in the Gini impurity index (\cref{gini}) over all trees of a random forest when that particular feature is used during tree splitting. \begin{equation}\label{gini} G =\sum_{k=1}^{K}p_{k}\big(1-p_{k}\big) \end{equation} where $K$ is the number of classes and $p_k$ is the proportion of samples of class $k$ at the node being split. We ranked all the features, heuristically kept only the features with a high MDI, and removed those with very small ones. \cref{tab:datasets-summary} summarizes the resulting datasets\myFooterTex{The dataset is available at \url{https://www.kaggle.com/qiriro/ieee-tac}}. \begin{table}[htbp] \centering \caption{Summary of the downsampled datasets} \resizebox{\columnwidth}{!}{% \begin{tabular}{@{}rrrrr@{}} \toprule & signal & \# of samples & \# of features & \# of classes \\ \cmidrule{2-5}\multirow{2}[2]{*}{SWELL} & HRV & 204885 & 75 & 3 \\ & EDA & 51741 & 46 & 3 \\ \midrule \multirow{2}[2]{*}{WESAD} & HRV & 81892 & 40 & 2 \\ & EDA & 20496 & 45 & 2 \\ \bottomrule \end{tabular}% \label{tab:datasets-summary}% } \end{table}% \begin{table}[htbp] \centering \caption{Hyperparameters of the Random Forest models} \resizebox{\columnwidth}{!}{% \begin{tabular}{@{}lrr@{}} \toprule Hyperparameters & Classification & Regression \\ \midrule number of trees & 1000 & 1000 \\ maximum depth of the trees & 2 & 2 \\ best split max features & $\sqrt{\text{number of features}}$& $\frac{1}{3}(\text{number of features})$ \\ \bottomrule \end{tabular}% } \label{tab:rf-parameters}% \end{table}% \begin{table}[htbp] \centering \caption{Hyperparameters of the ExtraTrees models} \resizebox{\columnwidth}{!}{% \begin{tabular}{@{}lrr@{}} \toprule Hyperparameters & Classification & Regression \\ \midrule number of trees & 1000 & 1000 \\ maximum depth of the trees & 16 & 16 \\ best split max features &$\sqrt{\text{number of features}}$ &$\frac{1}{3}(\text{number of features})$ \\ \bottomrule \end{tabular}% }
\label{tab:extra-trees-parameters}% \end{table}% \subsection{Stress prediction} \label{sec:classification} We developed regression stress prediction models based on each participant's self-reported stress and mental load scores (in terms of the NASA-TLX and the SSSQ for the SWELL and WESAD datasets, respectively) and on the subtle changes in the participants' EDA and HRV signals. We also classified stress based on the experimental conditions discussed in \cref{sec:dataset}. We trained and evaluated three types of stress prediction models: \begin{enumerate} \setlength\itemsep{0em} \item \textit{Person-specific models}\textemdash these were developed using Random Forest (RF) models (\cref{tab:rf-parameters}). All person-specific models were trained and tested exclusively on the physiological samples of the same person and validated using a 10-fold cross-validation. \item \textit{Generic models}\textemdash these were also developed using Random Forest (RF) models (\cref{tab:rf-parameters}). We used a Leave-One-Subject-Out Cross-Validation (LOSO-CV) to assess how a generic model would perform in predicting the stress of unseen people (i.e., people whose samples were not part of the training set) as follows: in a dataset of $n$ subjects, for each subject $S_i$, we trained the ML model on the data of the remaining $(n-1)$ subjects and validated its performance on the left-out subject $S_i$. \item \textit{Hybrid calibrated models}\textemdash as we expected (see discussion in \cref{sec:intro} on page~\pageref{sec:intro:underperformance} and \cref{sec:result}), the generic models performed poorly compared to the person-specific models. To mitigate this discrepancy, we devised a hybrid technique that derives a personalized stress prediction model from samples collected from a large population.
The technique (\cref{algo:model-calibration}) consists of incorporating a few person-specific samples (the calibration samples) into a generic pool of physiological samples collected from a large group of people and training a new model on this heterogeneous data. In this paper, for a dataset with $N$ subjects, we used the calibration algorithm with $q=4$ and $n=N-q$, i.e., we reserved the physiological samples of four randomly selected subjects as \enquote{unseen subjects} and used the data of the remaining $n=N-q$ subjects as the \enquote{generic samples}. All calibrated models were trained using Extremely Randomized Trees (ExtraTrees) models whose key hyperparameters are summarized in \cref{tab:extra-trees-parameters}. \begin{algorithm}[htbp] \DontPrintSemicolon \SetAlgoLined \KwIn{machine learning algorithm $h_m$} \KwData{ \begin{itemize} \item Samples $sample_{generic}$ collected from $n$ persons \item Calibration samples $sample_{calibration}$ that belong to $q$ unseen persons such that $q\ll n$ \end{itemize} } \KwOut{trained calibrated model $h_m\prime$} \tcc{mix the calibration samples and the generic samples} $D^\prime \longleftarrow \emptyset$\; $D^\prime \longleftarrow shuffle(sample_{generic}\cup sample_{calibration})$\; \tcc{train the model $h_m$ on dataset $D^\prime$} $h_m\prime \longleftarrow h_m(D^\prime)$\; \Return{$h_m\prime$}\; \caption{{\sc MODEL CALIBRATION} } \label{algo:model-calibration} \end{algorithm} \end{enumerate} We evaluated the classification models by computing their accuracy, precision, recall, and $F_{1}$ score when tested on the test datasets. As for the regression models, their performance was evaluated by calculating their mean absolute error (MAE) and their root mean squared error (RMSE).
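\cref{algo:model-calibration} translates directly into a few lines of Python. The sketch below is illustrative: the function name and the synthetic inputs are assumptions, while the ExtraTrees hyperparameters follow \cref{tab:extra-trees-parameters} (with the number of trees exposed as a parameter for convenience).

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier

def calibrate_model(X_generic, y_generic, X_calib, y_calib,
                    n_trees=1000, seed=0):
    """Mix a few person-specific calibration samples into the generic pool,
    shuffle the combined data, and train a fresh ExtraTrees model on it."""
    X = np.vstack([X_generic, X_calib])
    y = np.concatenate([y_generic, y_calib])
    idx = np.random.default_rng(seed).permutation(len(y))   # shuffle step
    model = ExtraTreesClassifier(n_estimators=n_trees, max_depth=16,
                                 max_features="sqrt", random_state=seed)
    return model.fit(X[idx], y[idx])
```

A regression variant would simply swap in \texttt{ExtraTreesRegressor} with the regression hyperparameters of \cref{tab:extra-trees-parameters}.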
\section{RESULTS AND DISCUSSION} \label{sec:result} \begin{figure*}[tbph] \centering \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=1.0\linewidth]{swell-hrv-generic-vs-pers-spect} \caption{SWELL HRV dataset} \label{fig:swell-hrv-generic-vs-personal} \end{subfigure} \hfill \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=0.9\linewidth]{swell-eda-generic-vs-pers-spect} \caption{SWELL EDA dataset} \label{fig:swell-eda-generic-vs-personal} \end{subfigure} \caption{\textbf{Performance comparison between the person-specific and the generic models trained on the SWELL datasets} \newline For all subjects, the person-specific classification models (three-class classification) achieved a high accuracy, and the regression models (based on the NASA-TLX ($max=55.5, min=26.1, std=14.8$)) have a small RMSE (e.g., $95.2\%\pm 0.5\%$ accuracy and $2.3\pm 0.1$ RMSE for the HRV dataset). However, because of the inter-individual differences in reacting to stress, all the generic models performed poorly (e.g., $42.5\%\pm 19.9\%$ accuracy and $15.3\pm 7.9$ RMSE for the HRV signal), and there is a vast performance variation between the subjects.} \label{fig:swell-generic-vs-personal-performance} \end{figure*} \begin{figure*} \centering \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=0.9\linewidth]{wesad-hrv-generic-vs-pers-spect} \caption{WESAD HRV dataset} \label{fig:wesad-hrv-generic-vs-personal} \end{subfigure} \hfill \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=0.9\linewidth]{wesad-eda-generic-vs-pers-spect} \caption{WESAD EDA dataset} \label{fig:wesad-generic-vs-personal-performance} \end{subfigure} \caption{\textbf{Performance comparison between the person-specific and the generic models trained on the WESAD datasets} \newline For all subjects, the person-specific classification models (two-class classification) achieved a high accuracy, and the regression models (based on the SSSQ ($max=3.9, min=3.0, std=0.8$)) have
a low RMSE (e.g., $98.9\%\pm2.4\%$ accuracy and $0.002\pm 0.001$ RMSE for the HRV signal). However, because of the differences in how different subjects react to stress, all the generic models performed poorly, and there is a vast performance variation between the subjects (e.g., $83.9\%\pm13.2\%$ accuracy and $0.8\pm 0.3$ RMSE for the HRV signal). Also note that, compared to the SWELL datasets (\cref{fig:swell-generic-vs-personal-performance}), the classification models achieved a seemingly better performance because the dataset contains only two classes.} \label{fig:wesad-generic-vs-personal-comparision} \vspace{-3mm} \end{figure*} \subsection{Individual differences in stress prediction} \label{sec:result-differences} All the person-specific models (i.e., the models that predict the stress of a predetermined person) achieved an unrivaled performance. This high performance is, however, deceptive in that it does not generalize to yet unseen people. Indeed, the generic models (i.e., the models that predict the stress of any person) performed very poorly, as shown in \cref{fig:swell-generic-vs-personal-performance,fig:wesad-generic-vs-personal-comparision}. \par It is, of course, reasonable to assume that the models over-fitted. However, there is no indication that this was the case. First, we validated all the person-specific models using a 10-fold cross-validation (CV) strategy, and it produced consistent predictions with a very low standard deviation between the 10 folds. K-fold cross-validation provides an unbiased estimate of the performance of a model because it tests the model on each of the k held-out parts of the training data. Therefore, if the models were over-fitted, they would under-perform when tested on some folds. In our case, all folds achieved similar performance\myFooterTex{The interested readers are referred to the detailed tables in the supplementary material (see \cref{sec::supplement} for more details)}.
Secondly, all the models use a very simple RF configuration (\cref{tab:rf-parameters}) that is less likely to overfit. We believe the models do not overfit because they consist of a large number of shallow trees (1000 trees with a maximum depth of 2) and use a small number of \textit{best split features}. A low number of \textit{best split features} allows the model to create more diverse and less correlated trees; therefore, the aggregation of the different trees results in a model with a low generalization error variance and high stability \cite{Probst2019}. Moreover, the trees are shallow (maximum depth of 2) to reduce the model's complexity and thus minimize overfitting. Finally, our results are consistent with the published literature: in general, person-specific models achieve accuracies greater than 90\% \cite{Attaran2018,Nakashima2016,Liapis2015,Rigas2012,Melillo2011,Healey2005,Andre2008} while generic models always under-perform \cite{Nakashima2016,Koldijk2018,Andre2008}. \par \begin{figure*}[tbph!] \centering \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=1.0\linewidth]{swell-hrv-calibration} \caption{SWELL HRV dataset} \label{fig:swell-hrv-calibration} \end{subfigure} \hfill \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=1.0\linewidth]{swell-eda-calibration} \caption{SWELL EDA dataset} \label{fig:swell-eda-calibration} \end{subfigure} \hfill \caption{\textbf{Performance of the hybrid model trained on the SWELL dataset} \newline Without the calibration samples, both the regression and classification models performed poorly.
However, when a few person-specific calibration samples were used for calibration, their performance steadily improved.} \label{fig:swell-hibrid-performance} \end{figure*} \begin{figure*}[tbph] \centering \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=1.0\linewidth]{wesad-hrv-calibration} \caption{WESAD HRV dataset} \label{fig:wesad-hrv-calibration} \end{subfigure} \hfill \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=1.0\linewidth]{wesad-eda-calibration} \caption{WESAD EDA dataset} \label{fig:wesad-eda-calibration} \end{subfigure} \caption{\textbf{Performance of the hybrid model trained on the WESAD dataset} \newline Without the calibration samples, both the regression and classification models performed poorly. However, when a few person-specific calibration samples were used for calibration, their performance steadily improved.} \label{fig:wesad-hibrid-performance} \end{figure*} The drop in accuracy when tested on unseen subjects is also nothing out of the ordinary, as already explained (\cref{sec:intro} on page~\pageref{sec:intro:underperformance}). Indeed, the models cannot learn the inter-subject physiological differences in how people respond to stressors. To double-check this verdict, we added a \textit{subject id} as a control prediction feature to the datasets. The \textit{subject id} identifies the subject to whom each sample in the datasets belongs and probes how much each model is influenced by knowing the origin of each sample. The influence of the \textit{subject id} on a model is assessed by comparing its importance (in terms of the mean decrease in impurity (MDI)) to that of the other attributes of the dataset. The MDI score of an attribute reveals how much that attribute contributes to the final prediction of a model.
We found that, in all datasets, the \textit{subject id} has the highest MDI and is thus the most critical attribute for stress prediction. Additionally, as shown in \cref{fig:swell-generic-vs-personal-performance,fig:wesad-generic-vs-personal-comparision}, unlike the person-specific models, the generic models' performance varies widely between the different subjects because each subject has a unique response to stress. Accordingly, using generic stress prediction models would lead to unpredictable and low performance compared to using person-specific models. \par This discrepancy in performance highlights the far-reaching importance of the inter-individual physiological differences that make it hard for a generic stress prediction model to generalize to new unseen people. As already discussed by other researchers, one-size-fits-all stress prediction models cannot work well because people express stress differently. Furthermore, there is a wide gap in how the generic models performed on different subjects. This wide gap implies that, if a system used a generic model for stress prediction, its predictions would in practice seem virtually arbitrary, and it would be very laborious to troubleshoot the system when it has bugs. Therefore, an effective system would need to rely on economically non-viable person-specific models. \subsection{Generic stress model calibration} \label{sec:result-calibration} While it was possible to slightly increase the performance of the generic models (e.g., by using complex stacked models), it was clear that the performance of the person-specific models always dwarfs that of the person-independent models (\cref{fig:swell-generic-vs-personal-performance,fig:wesad-generic-vs-personal-comparision}). Furthermore, it was not possible to rely on hyperparameter optimization.
Hyperparameter tuning proved an erratic, guesswork-driven process given that each subject's data distribution is somewhat unique; for that reason, finding hyperparameters that work well for all subjects is a futile endeavor. \par In an attempt to improve the models' generalization to unseen people, we investigated how each model would perform if it knew a little information about the previously unseen subjects. Consequently, we devised a technique that derives a personalized model from the data collected from a large group of people (see \cref{algo:model-calibration} on page~\pageref{algo:model-calibration}). In this paper, we used half of the data of $q=4$ randomly selected subjects as the \textit{calibration samples}, and the remaining half was used to test the performance of the calibrated models. The data of the remaining $n=N-q$ subjects were used as the \textit{generic samples}. In one sense, the calibration samples serve as \enquote{the fingerprints of a person}, i.e., they encode the \enquote{uniqueness} of an individual using a tiny number of that person's physiological samples. \par When we applied this technique to stress prediction on the two datasets, the performance of all the models significantly increased, even when we used only a few calibration samples (see \cref{fig:swell-hibrid-performance,fig:wesad-hibrid-performance} for more details): \begin{itemize} \item The root-mean-square error (RMSE) and the mean absolute error (MAE) sharply dropped when we used a few calibration samples, both for the models trained on the EDA datasets and for those trained on the HRV datasets. For instance, for the model trained on the HRV signal of the SWELL dataset, the MAE decreased from 10.1 to 7.6 when we used only 10 calibration samples per unseen subject. Likewise, this error dropped even further when we used 100 calibration samples ($\text{MAE}=4.7$, $\text{RMSE}=6.6$).
\item In a like manner, the performance of the classification models noticeably increased when we used a few calibration samples. For instance, for the model trained on the HRV signal of the SWELL dataset, the accuracy, precision, and recall respectively increased from 37.5\%, 44.0\%, and 37.5\% to 61.6\%, 69.0\%, and 61.6\% when we used only 10 calibration samples per unseen subject, and culminated in a 93.9\% accuracy, 94.4\% precision, and 93.9\% recall with 100 calibration samples per subject. \end{itemize} \par The increase in performance due to the few person-specific calibration samples highlights the influence of person-specific biometrics in predicting stress. In \cite{Lamichhane2017}, the authors showed that, when inter-individual physiological differences are not accounted for, a stress prediction model may perform no better than a model with no learning capability. Our results corroborate their findings. Nevertheless, all humans share a common hormonal response to stress \cite{Charmandari2005}, albeit a person's unique factors such as gender \cite{Wang2007a}, genetics \cite{Wust2004}, personality \cite{Childs2014}, weight \cite{Jayasinghe2014}, and coping ability \cite{Kogler2015} differentiate how each person reacts to stress. Previous researchers (e.g., \cite{Koldijk2018, Xu2015, Ramos2014}) have achieved notable improvements in generic stress prediction models by clustering the subjects based on their physiological or physical similarity. Their methods are, however, not practical for mass-market stress monitoring products because they rely on heuristic clustering methods, and there is no authoritative subject clustering criterion. Our proposed method is simpler and much cheaper for real-world deployment (see discussion in \cref{sec:monitoring}) and performs much better than any previously proposed generic model improvement technique.
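The benefit of calibration can also be reproduced qualitatively on synthetic data. The sketch below is a toy illustration built on assumptions, not the study's data: each simulated subject has a personal baseline offset, so a generic model trained on the seen subjects struggles on an unseen subject whose baseline lies outside the pool, whereas mixing in a few of that subject's calibration samples recovers most of the attainable accuracy.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier

rng = np.random.default_rng(7)

def make_subject(offset, n=150):
    """Toy subject: stress (y=1) shifts feature 0 by 2, on top of a
    subject-specific baseline offset."""
    y = rng.integers(0, 2, n)
    X = rng.normal(0.0, 1.0, (n, 4))
    X[:, 0] += 2.0 * y + offset
    return X, y

# a generic pool of eight "seen" subjects with different baseline offsets
pool = [make_subject(o) for o in rng.normal(0.0, 3.0, 8)]
X_gen = np.vstack([X for X, _ in pool])
y_gen = np.concatenate([y for _, y in pool])

# one unseen subject whose baseline lies outside the pool:
# half of the data for calibration, half for testing
X_u, y_u = make_subject(offset=10.0)
X_cal, y_cal = X_u[:75], y_u[:75]
X_test, y_test = X_u[75:], y_u[75:]

generic = ExtraTreesClassifier(n_estimators=200, max_depth=16,
                               random_state=0).fit(X_gen, y_gen)
calibrated = ExtraTreesClassifier(n_estimators=200, max_depth=16,
                                  random_state=0).fit(
    np.vstack([X_gen, X_cal]), np.concatenate([y_gen, y_cal]))

print("generic accuracy   :", generic.score(X_test, y_test))
print("calibrated accuracy:", calibrated.score(X_test, y_test))
```

In this toy setting the generic model must extrapolate to a feature region it never saw, while the calibrated model learns the unseen subject's baseline from the 75 calibration samples.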
\section{STRESS MONITORING IN OFFICES} \label{sec:monitoring} \par The above results suggest that, in order to design a real-world stress monitoring system, it would be beneficial to rethink the trade-off between spending effort on collecting data and training high-performing but costly person-specific models, versus using a hybrid model derived by mixing a few person-specific physiological samples with physiological samples collected from a large population. The latter approach is less expensive, more flexible to deploy, and delivers performance comparable to that of person-specific models. \par The architecture and deployment of a stress monitoring system that uses this technique will undoubtedly involve many technical challenges that are beyond the scope of this paper. We encourage the interested reader to examine \cite{Can2019a}\cite{Alberdi2016} for an exhaustive overview of these challenges. One of the biggest challenges is perhaps how to collect the required physiological signals unobtrusively. Indeed, the system should not interfere with a person's routine. At the same time, it should record the physiological signals meticulously, accurately, and at an adequate sampling frequency because the quality of the physiological data affects the performance of the stress prediction models \cite{Chowdhury2018}. These stringent requirements necessitate conflicting compromises. For instance, while an HRV signal recorded using chest leads is of the highest quality, its recording would hinder the person's normal life. Alternatively, the HRV signal could be obtained from a lower-quality but less invasive PPG signal recorded at the person's wrist. There exist many wearable devices (e.g., smartwatches and fitness trackers) with built-in PPG sensors. For example, the Empatica E4 wristband\footnote{\url{https://www.empatica.com/research/e4/}} might serve this purpose.
The device boasts a high-resolution EDA sensor with a sturdy steel electrode that can continuously record both the tonic and phasic changes in skin conductance. As discussed in a recent article \cite{Menghini2019}, the Empatica E4 wristband records HRV with adequate accuracy in seated rest, paced breathing, and recovery conditions. However, it is not very reliable when its wearer makes wrist movements. \par \begin{figure*}[htbp] \centering \includegraphics[width=1.0\linewidth]{incremental-stress-prediction} \caption{\textbf{A simplified pipeline for a continuous stress monitoring model} \newline A person's photoplethysmogram (PPG) and EDA signals are recorded using a wristband device. The signals are sent to a computing device where appropriate features (e.g., \cref{tab:hrv-eda-features} on page~\pageref{tab:hrv-eda-features}) are computed and preprocessed (e.g., data cleaning and rebalancing), and then sent to a remote server where they are used to predict the person's stress. For calibration purposes, the person also periodically provides a self-assessment of his/her stress (e.g., via a web survey after the completion of his/her work). This feedback is used to train a personalized stress prediction model, which is published and consumed as a RESTful API. When the model deteriorates, it is automatically updated based on the periodic self-evaluations the system receives from its users.} \label{fig:incrementalstressprediction} \end{figure*} Another challenge is how to deploy the stress prediction models. The recent reviews on stress recognition \cite{Carneiro2019}\cite{Schmidt2018} unanimously concluded that, due to the physiological differences in how people react to stress, a stress monitoring system should adapt to every individual's physiological needs.
The simplest, and likely the most accurate, approach is to deploy each person's stress prediction model as a web service (e.g., a Representational State Transfer (REST) web service) that can be consumed to predict the person's stress. Regrettably, such an approach is daunting, time-consuming, and expensive because, in, e.g., an office environment, it would require collecting, cleaning, and labeling new data and training a new model for each office employee. Moreover, once deployed, the resulting stress monitoring system would not perform as expected because its performance would deteriorate with time, considering that a person's stress is dynamic and affected by many factors \cite{Schneiderman2008, Johnson1992}. Consequently, with this approach, a real-world system would need to periodically start over and collect, label, and train new models for each user to prevent the anticipated performance degradation. \par As implied by the results of this paper (see \cref{sec:result-calibration}), an alternative and cost-effective method would be to derive a high-performing model from a combination of generic samples collected from a large population and a few person-specific calibration samples. It would also be beneficial to automate this process entirely.
As an illustration, after training and testing a generic stress prediction model, it could be possible to create an automatic, self-updating stress prediction pipeline, depicted in \cref{fig:incrementalstressprediction}, as follows: \begin{enumerate}[leftmargin=1.2cm, label=\textbf{STEP \Roman*}] \setlength\itemsep{0em} \item \textit{calibration sample collection} \textemdash once the stress monitoring system is deployed (at this point, it uses only a generic model), it is essential that its users take several self-evaluation surveys in different working conditions to allow the collection of self-evaluation ground truths that reflect the broad range of stressors that the users will likely go through. At the same time, each user's physiological signals are recorded using an unobtrusive wearable device (e.g., an Empatica E4 wristband) and saved in a database. Once the system has collected enough calibration samples from a user, it would automatically create the user's personalized model by training a new model on a combination of the new user-specific data and the data that was used to train the generic model, as shown in \cref{algo:model-calibration}. \item \textit{continuous machine learning}\label{step-cont-learning} \textemdash after these personalized models are deployed, the system would periodically remind its users to provide additional calibration samples by taking short self-report surveys (e.g., via a web survey every time a task is finished) to give more feedback data to improve each user's personalized model. Indeed, with time, the models will be prone to the effect of concept drift \cite{Gama2014}, i.e., they will become stale because their input data unpredictably change over time. In stress prediction, model drift is particularly inevitable because stress is inherently dynamic \cite{Johnson1992}. The models thus need to adapt to the new changes.
For example, when the system has received a specific number of new calibration samples from a user, it would automatically test them against the existing model. If this test indicates a deterioration of the model, the system would need to update the model to reverse the drift. There are many ways to achieve this. One approach would be to train a new model on a combination of the data of the generic model and the new calibration samples. This approach would, however, be computationally expensive and require significant time to retrain each user's model. Depending on the system, it could be more appropriate to incrementally train the existing model as the new data are received \cite{Gepperth2016}. This approach is faster because it does not require retraining the whole model when new data come in. Instead, it extends the existing model by, e.g., combining the new data with a subset of the old data \cite{Castro2018}. Nevertheless, it is important to note that many machine learning algorithms do not support incremental learning and that, unless the system is rigorously monitored, incremental learning may introduce serious problems \cite{Gepperth2016}. \item \textit{calibrated model deployment} \textemdash the model is published as, e.g., a REST Application Program Interface (REST API) and periodically updated depending on its performance, as discussed in \ref{step-cont-learning} above. \end{enumerate} \par Although there is a need to validate our assumptions, we believe that developing a continuous stress monitoring system based on this strategy would present the following benefits over existing approaches: \begin{itemize} \item \textit{lower cost} \textemdash for practicality, the existing approaches would require collecting and labeling training data for each user. This process is costly and would entail expensive installation, support, and maintenance costs.
Our approach would likely be less expensive because there would be no need to collect large quantities of new data from each user. Instead, only a few user-specific samples are required. \item\textit{practicality} \textemdash the high-accuracy stress prediction methods rely on person-specific models because, as already discussed, generic models are suboptimal when applied to new unseen people. While person-specific models perform excellently in predicting stress, they are not practical in real-world settings because they are not scalable to many users, would be very costly to implement, and, most importantly, are rigid in the face of the expected dynamic changes in each user's stress. The proposed approach achieves a stress prediction accuracy that is comparable to that of subject-dependent models and yet presents enticing large-scale deployment benefits. \item \textit{straightforward deployment} \textemdash once deployed, each user's person-specific model can be generated using a small number of user-specific samples that can be unobtrusively collected using, e.g., the approach proposed in \cite{Bush2014}, in which each user self-evaluates his/her stress level (in terms of the NASA-TLX and SSSQ) via a smartphone application. The self-evaluations serve as a person-specific calibration of the generic model. Over time, when the model degrades due to the dynamics of the person's stress, a few new physiological samples would be collected and used to periodically update the person's model. \end{itemize} \par Although the results of this study are encouraging, there are still many limitations. Notably, the study did not validate the proposed approach in real-world settings, and it reached its conclusions using only two datasets with small, homogeneous groups of subjects.
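Another limitation concerns the incremental updates of STEP II in \cref{fig:incrementalstressprediction}: tree ensembles such as ExtraTrees do not support incremental learning, so a deployment would have to either retrain periodically or switch to an incrementally trainable learner. The sketch below illustrates the check-and-update idea with scikit-learn's \texttt{SGDClassifier} and its \texttt{partial\_fit} method; the accuracy threshold and the simulated drift are assumptions for illustration only.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(3)
classes = np.array([0, 1])

# initial "generic" model trained on a first batch of labeled samples
X0 = rng.normal(0.0, 1.0, (300, 4))
y0 = (X0[:, 0] > 0.0).astype(int)
model = SGDClassifier(random_state=0)
model.partial_fit(X0, y0, classes=classes)

def maybe_update(model, X_new, y_new, threshold=0.8):
    """Test new calibration samples against the live model and update it
    incrementally (no full retraining) when performance has deteriorated."""
    acc = accuracy_score(y_new, model.predict(X_new))
    if acc < threshold:                      # deterioration detected
        model.partial_fit(X_new, y_new)      # extend the existing model
    return acc

# simulated concept drift: the decision boundary has shifted
X1 = rng.normal(0.0, 1.0, (100, 4))
y1 = (X1[:, 0] > 1.0).astype(int)
acc_before = maybe_update(model, X1, y1)
```

A production system would additionally guard against the pitfalls of incremental learning noted above, e.g., by mixing replayed old samples into each update \cite{Castro2018}.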
Further, designing a continuous stress monitoring system using the proposed approach requires extraordinary care because external factors can influence both the EDA and the HRV. In particular, the EDA signal, while often heralded as one of the best indicators of stress \cite{Alberdi2016, Setz2010}, has significant drawbacks. The EDA signal results from electrical changes that happen when the skin receives signals from the nervous system. Under stress, the skin's conductance changes due to a subtle increase in sweat that leads to a decrease in the skin's electrical resistance. The variation in skin conductivity is, however, influenced by other unrelated factors such as the person's hydration, the ambient temperature, and the ambient humidity. Moreover, for the same person, an EDA signal may fluctuate from one day to another \cite{Bakker2011}. Additionally, because stress is intrinsically multifaceted (it consists of physiological, behavioral, and affective responses), as highlighted in \cite{Panagakis2018}, it is imperative to take its context (i.e., where, what, when, who, why, and how) into consideration. This approach, as shown in \cite{Gjoreski2017}, may yield better and more predictable results even when tested in real-life conditions. \par It is also important to highlight that the deployment of a stress monitoring system based on our approach still poses technical and cost challenges. The system would require considerable upfront investments and would undoubtedly be beyond the budget of a small business. However, the investment might be well worth it for a large business. In our previous studies, we showed that it is possible to predict people's thermal comfort using the variations in their HRV \cite{Nkurikiyeyezu2019a}\cite{Nkurikiyeyezu2018b} \cite{Nkurikiyeyezu2017a}, and highlighted the energy-saving potential of this approach \cite{Nkurikiyeyezu2018a}.
Therefore, the positive spillovers that might result from using the system may outstrip the initial investment because, in a responsive smart office, the system can be used as part of a multipurpose system that uses the office occupants' physiological signals for preventive medicine, stress management, and efficient thermal comfort at low energy cost. Additionally, there exist enabling technologies that would make these challenges easier to address. For example, IBM's Watson Studio\footnote{\url{https://www.ibm.com/cloud/machine-learning}} offers tools that simplify developing and deploying predictive models. In our proposed stress monitoring system, Watson Studio could be used \textemdash with little or no programming experience required \textemdash to automate steps 1 and 2 (see \cref{fig:incrementalstressprediction}), including model deterioration monitoring and deployment as a REST API. \section{CONCLUSION} Despite an extensive body of literature on stress recognition, and notwithstanding the potential economic and health benefits of stress monitoring, there exists no robust real-world stress recognition system. The most reliable methods use a fusion of multi-modal signals (e.g., physiological signals (such as HRV, EDA, EEG, EMG, skin temperature, respiration, pupil diameter, and eye gaze), behavioral signals (keystroke and mouse dynamics, and sitting posture), facial expressions, speech patterns, and mobile phone use patterns). This approach, however, raises both practical challenges (e.g., real-time multi-modal data acquisition, data fusion, and data integration) and user privacy concerns (e.g., the implications of recording a person's computer keystrokes, video, and speech), and is often not feasible in real-world settings because of company-wide computer security policies or international workplace privacy laws.
\par In contrast, the most practical stress monitoring methods, which use physiological signals alone, are idiosyncratic because stress is inherently subjective and is felt differently depending on the person. Therefore, machine learning models that use physiological signals fail to generalize well when predicting the stress of new, unseen people, and are thus not suitable for a real-world stress monitoring system. Only person-specific models are accurate enough for this task. Unfortunately, unlike generic models, person-specific models are inflexible and costly to deploy in real-world settings because they require collecting new data and training a new model for every user of the system. In an office environment, this entails spending precious resources to collect data and train a new model for every employee. Moreover, because stress is inherently dynamic, these models will need expensive periodic updates \textemdash collecting new data and retraining every model \textemdash to prevent the system from deteriorating due to concept drift. \par In this paper, we proposed a cost-effective hybrid stress prediction approach. Our method builds on the fact that humans share similar hormonal responses to stress, while every person possesses unique factors (e.g., gender, age, weight, and coping ability) that differentiate him/her from others. Therefore, we hypothesized that it could be possible to improve the generalization performance of a generic stress prediction model trained on a large population by deriving a personalized model from a combination of the samples collected from the large group and a few person-specific samples. In a sense, the calibration samples serve as the \say{fingerprint} of a person, and they introduce his/her \say{uniqueness} into the new model. \par We tested our method on two stress datasets and found that our approach performed much better than the generic models.
Furthermore, we argued that this approach would be cost-effective and practical to deploy in a real-world stress monitoring system, and we discussed some of its technical limitations. \section*{SUPPLEMENTARY MATERIAL} \label{sec::supplement} \noindent Additional supporting information is available online\myFooterTex{Available at \url{https://www.kaggle.com/qiriro/ieee-tac}} in our public repository. The repository contains more detailed information and the source code to replicate our findings: \begin{itemize} \item Source code we developed for this research \item Dataset of the computed HRV and EDA features \item HRV and EDA feature importance with and without the \textit{subject\_id} added to the datasets (see \cref{sec:feature-engineering}) \item Tables of the performance of the person-specific and generic models (refer to \cref{sec:result-differences}) \item Tables of the performance of the calibrated models (see details in \cref{sec:result-calibration}) \end{itemize} {\balance \microtypesetup{protrusion=false} \bibliographystyle{IEEEtran}
\section{Introduction} Let $G=(V,E)$ be a graph. For a vertex $v\in V$, let $N_G(v)=\{u\in V |uv \in E\}$ and $N_G[v] = N_G(v) \cup \{v\}$ denote the open and closed neighborhoods of $v$, respectively. A set $D\subseteq V$ is called a \emph{dominating set} of a graph $G=(V,E)$ if $|N_G[v] \cap D| \geq 1$ for all $v \in V$. The \emph{domination number} of a graph $G$, denoted by $\gamma(G)$, is the minimum cardinality of a dominating set of $G$. The concept of domination has been well studied. Depending upon various applications, different variations of domination have appeared in the literature \cite{HaynesHedetniemiSlater1998,HaynesHedetniemiSlater11998}. Among these variations, $k$-tuple domination and liar's domination are two important and well-studied types of domination \cite{Harary2000,LiaoChang2002,LiaoChang2003,RodenSlater2009,Slater2009}. A set $D\subseteq V$ is called a \emph{$k$-tuple dominating set} of a graph $G=(V,E)$ if each vertex $v \in V$ is dominated by at least $k$ vertices in $D$, that is, $|N_G[v] \cap D| \geq k$ for all $v \in V$. The concept of $k$-tuple domination in graphs was introduced in~\cite{Harary2000}. For $k=2$ and $3$, it is called \emph{double domination} and \emph{triple domination}, respectively. The \emph{$k$-tuple domination number} of a graph $G$, denoted by $\gamma_{k}(G)$, is the minimum cardinality of a $k$-tuple dominating set of $G$. It is a simple observation that a $k$-tuple dominating set exists only if $\delta(G)\geq k-1$, where $\delta(G)$ is the minimum degree of $G$. On the other hand, liar's domination is a newer variation of domination, introduced in 2009 by Slater~\cite{Slater2009}.
A set $D\subseteq V$ is called a \emph{liar's dominating set} of a graph $G=(V,E)$ if the following two conditions are met: \begin{description} \item[condition (i)] $|N_G[v] \cap D| \geq 2$ for all $v\in V$, and \item[condition (ii)] $|(N_G[u]\cup N_G[v])\cap D|\geq 3$ for every pair of distinct vertices $u, v\in V$. \end{description} In a network guarding scenario, if sentinels are placed at the vertices of a dominating set, then the graph (network) is guarded. Consider the situation where a single sentinel may be unreliable or may lie, and we do not know which sentinel it is. We then need a liar's dominating set to guard the network. The \emph{liar's domination number} of a graph $G$, denoted by $\gamma_{LR}(G)$, is the minimum cardinality of a liar's dominating set of $G$. Formally, the decision versions of the \textsc{$k$-Tuple Domination Problem} and the \textsc{Liar's Domination Problem} are defined as follows. \underline{\textsc{$k$-Tuple Domination Problem}} \begin{description} \item{\emph{Instance:}} A graph $G=(V,E)$ and a nonnegative integer $p$. \item{\emph{Question:}} Does there exist a $k$-tuple dominating set of cardinality at most $p$? \end{description} \underline{\textsc{Liar's Domination Problem}} \begin{description} \item{\emph{Instance:}} A graph $G=(V,E)$ and a nonnegative integer $p$. \item{\emph{Question:}} Does there exist a liar's dominating set of cardinality at most $p$? \end{description} Note that every liar's dominating set is a double dominating set and every triple dominating set is a liar's dominating set. Hence, the liar's domination number lies between the double and triple domination numbers, that is, $\gamma_{2}(G)\leq \gamma_{LR}(G)\leq \gamma_{3}(G)$. The rest of the paper is organized as follows. Section $2$ introduces some pertinent definitions and preliminary results that are used in the rest of the paper, along with a brief review of progress in the study of parameterized domination problems.
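For concreteness, the defining conditions of $k$-tuple and liar's domination above can be checked directly by brute force; the following sketch (ours, for illustration only) verifies both on an adjacency-set representation of a graph:

```python
from itertools import combinations

def closed_nbhd(adj, v):
    """Closed neighborhood N_G[v] for an adjacency-set representation."""
    return adj[v] | {v}

def is_k_tuple_dominating(adj, D, k):
    """|N_G[v] ∩ D| >= k for every vertex v."""
    return all(len(closed_nbhd(adj, v) & D) >= k for v in adj)

def is_liars_dominating(adj, D):
    """Conditions (i) and (ii) of a liar's dominating set."""
    if not is_k_tuple_dominating(adj, D, 2):            # condition (i)
        return False
    return all(len((closed_nbhd(adj, u) | closed_nbhd(adj, v)) & D) >= 3
               for u, v in combinations(adj, 2))        # condition (ii)

# Example: the complete graph K4.
k4 = {0: {1, 2, 3}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2}}
print(is_liars_dominating(k4, {0, 1, 2}))  # True: also a triple dominating set
print(is_liars_dominating(k4, {0, 1}))     # False: condition (ii) fails
```

The two calls illustrate the sandwich $\gamma_{2}(G)\leq \gamma_{LR}(G)\leq \gamma_{3}(G)$: in $K_4$ the pair $\{0,1\}$ is double dominating but not liar's dominating, while the triple dominating set $\{0,1,2\}$ is.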
Section $3$ deals with the hardness results for both the $k$-tuple domination problem and the liar's domination problem. In Section $4$, we show that both the \textsc{$k$-Tuple Domination Problem} and the \textsc{Liar's Domination Problem} admit linear kernels in planar graphs. In Section $5$, we extend these results to bounded genus graphs. Finally, Section $6$ concludes the paper. \section{Preliminaries} Let $G=(V,E)$ be a graph. For $S \subseteq V$, let $G[S]$ denote the subgraph of $G$ induced by the vertex set $S$. The \emph{distance} between two vertices $u$ and $v$ in a graph $G$ is the number of edges in a shortest path connecting them and is denoted by $d_G(u,v)$. The degree of a vertex $v \in V(G)$, denoted by $deg_G(v)$, is the number of neighbors of $v$. \subsection{Graphs on surfaces} \label{subsec:graphsurfacenotation} In this subsection, we recall some basic facts about graphs on surfaces, following the discussion in~\cite{Fominkernelgenus}. The reader is referred to~\cite{Mohar2001} for more details. A \emph{surface} $\Sigma$ is a compact $2$-manifold without boundary. Let $\Sigma_0$ denote the sphere $\{(x,y,z)| ~x^2+y^2+z^2=1\}$. A \emph{line} and an \emph{O-arc} are subsets of $\Sigma$ that are homeomorphic to $[0,1]$ and to a circle, respectively. A subset of $\Sigma$ meeting the drawing only in vertices of $G$ is called \emph{$G$-normal}. If an O-arc is $G$-normal, then it is called a \emph{noose}. The length of a noose is the number of its vertices. The \emph{representativity} of $G$ embedded in $\Sigma\neq \Sigma_0$ is the smallest length of a non-contractible noose in $\Sigma$, and it is denoted by \emph{rep$(G)$}. The classification theorem for surfaces states that any surface $\Sigma$ is homeomorphic either to a surface $\Sigma^h$ obtained from a sphere by adding $h$ handles (an orientable surface), or to a surface $\Sigma^k$ obtained from a sphere by adding $k$ crosscaps (a non-orientable surface)~\cite{Mohar2001}.
The \emph{Euler genus} of a non-orientable surface $\Sigma$, denoted by \emph{eg$(\Sigma)$}, is the number of crosscaps $k$ such that $\Sigma \cong \Sigma^k$; for an orientable surface, \emph{eg$(\Sigma)$} is twice the number of handles $h$ such that $\Sigma \cong \Sigma^h$. Given a graph $G$, the Euler genus of $G$, denoted by eg$(G)$, is the minimum eg$(\Sigma)$ over all surfaces $\Sigma$ in which $G$ can be embedded. The \emph{Euler characteristic} of a surface $\Sigma$ is defined as $\chi(\Sigma)=2-\textup{eg}(\Sigma)$. For a graph $G$, $\chi(G)$ denotes the largest number $t$ for which $G$ can be embedded on a surface $\Sigma$ with $\chi(\Sigma)=t$. Let $G=(V,E)$ be a $2$-cell embedded graph in $\Sigma$, that is, all the faces of $G$ are homeomorphic to an open disk. If $F$ is the set of all faces, then \emph{Euler's formula} states that $|V|-|E|+|F|=\chi(\Sigma)=2-\textup{eg}(\Sigma)$. Next we define a process called \emph{cutting along a noose $N$}. Although the formal definition is given in~\cite{Mohar2001}, we follow the more intuitive description given in~\cite{Fominkernelgenus}. Let $N$ be a noose in a $\Sigma$-embedded graph $G=(V,E)$. Suppose that for any $v\in N\cap V$, there exists an open disk $\Delta$ such that $\Delta$ contains $v$ and, for every edge $e$ adjacent to $v$, $e\cap \Delta$ is connected. We also assume that $\Delta-N$ has two connected components $\Delta_1$ and $\Delta_2$. Thus we can define a partition $N_G(v)= N_G^1(v)\cup N_G^2(v)$, where $N_G^1(v)= \{u\in N_G(v)| uv\cap \Delta_1\neq \emptyset\}$ and $N_G^2(v)= \{u\in N_G(v)| uv\cap \Delta_2\neq \emptyset\}$. Now for each $v\in N\cap V$ we do the following: \begin{enumerate} \item remove $v$ and its incident edges, \item introduce two new vertices $v^1, v^2$, and \item connect $v^i$ with the vertices in $N_G^i(v)$, $i = 1, 2$. \end{enumerate} The resulting graph $\mathcal{G}$ is obtained from the $\Sigma$-embedded graph $G$ by cutting along $N$.
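The three steps above are plain graph surgery once the neighbor partitions are known; the following illustrative sketch (ours; it assumes, for simplicity, that no two noose vertices are adjacent, so a partition never refers to an already-deleted vertex) cuts a graph along the vertices of a noose:

```python
def cut_along_noose(adj, noose_vertices, partition):
    """adj: dict mapping each vertex to its set of neighbors (undirected).
    noose_vertices: the vertices in N ∩ V, in any order.
    partition: dict v -> (N1, N2), the split of N_G(v) induced by the two
    components Delta_1, Delta_2 of the disk around v.
    Returns the adjacency of the cut graph, where each noose vertex v is
    replaced by the copies (v, 1) and (v, 2)."""
    new_adj = {u: set(nbrs) for u, nbrs in adj.items()}
    for v in noose_vertices:
        n1, n2 = partition[v]
        # Step 1: remove v and its incident edges.
        for u in new_adj.pop(v):
            new_adj[u].discard(v)
        # Step 2: introduce the two new vertices v^1 and v^2.
        # Step 3: connect v^i with the vertices in N_G^i(v).
        for i, side in ((1, n1), (2, n2)):
            copy = (v, i)
            new_adj[copy] = set(side)
            for u in side:
                new_adj[u].add(copy)
    return new_adj

# Example: a star with center 0; the noose passes through 0 and separates
# neighbor 1 from neighbors 2 and 3.
star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
cut = cut_along_noose(star, [0], {0: ({1}, {2, 3})})
```

In the example, vertex $0$ disappears and the copies $(0,1)$ and $(0,2)$ inherit the two sides of the partition, exactly as in steps 1--3.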
The following lemma is very useful in proofs by induction on the genus. \begin{lemma}\textup{\cite{Fominkernelgenus}} \label{lemcutting} Let $G$ be a $\Sigma$-embedded graph and let $\mathcal{G}$ be a graph obtained from $G$ by cutting along a non-contractible noose $N$. Then one of the following holds: \begin{itemize} \item $\mathcal{G}$ is the disjoint union of graphs $G_1$ and $G_2$ that can be embedded in surfaces $\Sigma_1$ and $\Sigma_2$ such that $eg(\Sigma) = eg(\Sigma_1) + eg(\Sigma_2)$ and $eg(\Sigma_i) > 0, i = 1, 2$; \item $\mathcal{G}$ can be embedded in a surface with Euler genus strictly smaller than eg$(\Sigma)$. \end{itemize} \end{lemma} A \emph{planar graph} $G=(V,E)$ is a graph that can be embedded in the plane. We call such an embedding a \emph{plane graph}. \subsection{Parameterization and domination} A \emph{parameterized problem} is a language $L\subseteq \Sigma^* \times \mathds{N}$, where $\Sigma^*$ denotes the set of all finite strings over a finite alphabet $\Sigma$. A parameterized problem $L$ is \emph{fixed-parameter tractable} if the question ``$(x,p)\in L$'' can be decided in time $f(p)\cdot |x|^{O(1)}$, where $f$ is a computable function on nonnegative integers, $x$ is the instance of the problem, and $p$ is the parameter. The corresponding complexity class is called \textsf{FPT}. Next we define a reducibility concept between two parameterized problems. \begin{defi} \textup{\cite{DowneyFellows,Niedermeier}} Let $L, L'\subseteq \Sigma^*\times \mathds{N}$ be two parameterized problems. We say that $L$ reduces to $L'$ by a \emph{standard parameterized m-reduction} if there are functions $p\mapsto p'$ and $p\mapsto p''$ from $\mathds{N}$ to $\mathds{N}$ and a function $(x,p)\mapsto x'$ from $\Sigma^*\times \mathds{N}$ to $\Sigma^*$ such that \begin{enumerate} \item $(x,p)\mapsto x'$ is computable in time $p''|x|^c$ for some constant $c$, and \item $(x,p)\in L$ if and only if $(x',p')\in L'$.
\end{enumerate} \end{defi} A parameterized problem is in the class \textsf{W[i]} if every instance $(x, p)$ can be transformed (in fpt-time) into a combinatorial circuit of height at most $i$ such that $(x, p)\in L$ if and only if there is a satisfying assignment to the inputs which assigns $1$ to at most $p$ inputs. A problem $L$ is said to be \textsf{W[i]}\emph{-hard} if there exists a standard parameterized m-reduction from every problem in \textsf{W[i]} to $L$; if, in addition, the problem is in \textsf{W[i]}, then it is called \textsf{W[i]}\emph{-complete}. \remove{ The height is the largest number of logical units with unbounded fan-in on any path from an input to the output. The number of logical units with bounded fan-in on the paths must be limited by a constant that holds for all instances of the problem.} Next we define the reduction to a problem kernel, also simply referred to as \emph{kernelization}. \begin{defi} \textup{\cite{Niedermeier}} Let $L$ be a parameterized problem. By reduction to a problem kernel, we mean replacing the instance $I$ and the parameter $p$ of $L$ by a ``reduced'' instance $I'$ and another parameter $p'$ in polynomial time such that \begin{itemize} \item $p'\leq c\cdot p$, where $c$ is a constant, \item $|I'|\leq g(p)$, where $g$ is a function that depends only on $p$, and \item $(I,p)\in L$ if and only if $(I',p')\in L$. \end{itemize} The reduced instance $I'$ is called the \emph{problem kernel} and the size of the problem kernel is said to be bounded by $g(p)$. \end{defi} In parameterized complexity, domination and its variations are well-studied problems. The decision version of the domination problem is \textsf{W[2]}-complete for general graphs~\cite{DowneyFellows}. However, this problem is \textsf{FPT} when restricted to planar graphs~\cite{Alberkernel}, though it is still \textsf{NP}-complete for this graph class~\cite{GareyJohnson}.
Furthermore, for bounded genus graphs, which form a superclass of planar graphs, the domination problem remains FPT~\cite{Fominkernelgenus}. It has been proved that the dominating set problem possesses a linear kernel in planar graphs~\cite{Alberkernel} and in bounded genus graphs~\cite{Fominkernelgenus}. The domination problem also admits a polynomial kernel on graphs excluding a fixed graph $H$ as a minor~\cite{Gutner2009} and on $d$-degenerated graphs~\cite{kerneldomdegenerate}. A search-tree-based algorithm for the domination problem on planar graphs, which runs in $O(8^p n)$ time, is proposed in~\cite{Albersearchtree}. For bounded genus graphs, a similar search-tree-based algorithm is proposed in~\cite{Ellis2004}; it has a time complexity of $O((4g+40)^p n^2)$, where $g$ is the genus of the graph. Algorithms with running time $O(c^{\sqrt{p}}n)$ for the domination problem on planar graphs have been devised in~\cite{Alber2002,Alber2004,Fomin2003,Fominkernelgenus}. Like the domination problem, the \textsc{$k$-Tuple Domination Problem} and the \textsc{Liar's Domination Problem} are both \textsf{NP}-complete~\cite{LiaoChang2003,Slater2009} for general graphs. However, these problems have been solved in polynomial time for several graph classes~\cite{LiaoChang2002, LiaoChang2003, PandaPaultree, PandaPaulpig}. For planar graphs, and hence for graphs of bounded genus, the \textsc{$k$-Tuple Domination Problem} remains \textsf{NP}-complete~\cite{Lee2008}. Although the \textsf{NP}-completeness proof in~\cite{Slater2009} is given for general graphs, it can be verified that the same construction establishes the \textsf{NP}-completeness of the \textsc{Liar's Domination Problem} in planar graphs; see Lemma~\ref{lem-liar-dom-planar-NP} in Appendix~\ref{sec-appendix}. Some generalizations of the classical domination problem have been studied in the literature from a parameterized point of view.
Among them, the $k$-dominating threshold set problem and the $[\sigma, \rho]$-dominating set problem (also known as generalized domination) are generalized versions of the $k$-tuple dominating set problem. In~\cite{Golovachthresholddom2008}, it is proved that the $k$-dominating threshold set problem is FPT in $d$-degenerated graphs. $[\sigma, \rho]$-domination is studied in~\cite{sigmarhohardness,Telle1997,Rooijsigmarhodom2009}. Given two sets of nonnegative integers $\sigma$ and $\rho$, a set $D$ of vertices of a graph $G$ is a $[\sigma,\rho]$-dominating set if $|N(v)\cap D| \in \sigma$ for every $v \in D$ and $|N(v)\cap D| \in \rho$ for every $v\notin D$. It is known that $[\sigma, \rho]$-domination is FPT when parameterized by treewidth~\cite{Rooijsigmarhodom2009}. By Theorem 32 of~\cite{Alber2002}, it follows that $k$-tuple domination is FPT on planar graphs. However, there is no explicit kernel for either the $k$-tuple domination problem or the liar's domination problem in the literature. There have been successful efforts in developing meta-theorems, like the celebrated Courcelle's theorem~\cite{Courcelle92}, which states that all graph properties definable in monadic second order logic can be decided in linear time on graphs of bounded tree-width. This also implies FPT algorithms on bounded tree-width graphs for these problems. In the case of kernelization in bounded genus graphs, Bodlaender et al. give two meta-theorems~\cite{Bodlaender2009}. The first theorem says that all problems expressible in counting monadic second order (CMSO) logic and satisfying a coverability property admit a polynomial kernel on graphs of bounded genus, and the second theorem says that all problems that have finite integer index and satisfy a weaker coverability property admit a linear kernel on graphs of bounded genus. It is easy to see that both the $k$-tuple and liar's domination problems can be expressed in CMSO logic. Let $G=(V,E)$ be an instance of a graph problem $\Pi$ such that $G$ is embeddable in a surface of Euler genus at most $r$.
The basic idea of the quasi-coverability property for $\Pi$ is that there exists a set $S\subseteq V$ satisfying the conditions of $\Pi$ such that the tree-width of $G\setminus R^r_G(S)$ is at most $r$, where $R^r_G(S)$ is a special type of reachability set from $S$. In domination-type problems, this reachability set is actually the whole graph, and hence these problems satisfy the quasi-coverability property. The basic idea of strong monotonicity for a graph problem $\Pi$ is roughly as follows. Let $\mathcal{F}_i$ be a class of graphs $G$ having a specific set of vertices $S$, termed the boundary of $G$, such that $|S|=i$. The glued graph $G_1\oplus G_2$ of $G_1$ and $G_2$ is the graph obtained by taking the disjoint union of $G_1$ and $G_2$ and joining $i$ edges between the vertices of the boundary sets. A problem $\Pi$ is said to satisfy strong monotonicity if for every boundaried graph $G=(V,E)\in \mathcal{F}_i$, there exists a set $W\subseteq V$ of a specific cardinality satisfying the property of $\Pi$ such that for every boundaried graph $G'=(V',E')\in \mathcal{F}_i$ with a set $W' \subseteq V'$ satisfying the property of $\Pi$, the vertex set $W\cup W'$ satisfies the property of $\Pi$ for the glued graph $G \oplus G'$. It can easily be verified that both the $k$-tuple and liar's domination problems satisfy the strong monotonicity property, which implies that these problems have finite integer index. Hence, by the second meta-theorem in~\cite{Bodlaender2009}, both the $k$-tuple and liar's domination problems admit linear kernels for graphs of bounded genus. Though these meta-theorems provide simple criteria to decide whether a problem admits a linear or polynomial kernel, finding a linear kernel with reasonably small constants for a specific problem is a worthy topic of further research~\cite{Bodlaender2009}. In this paper, we have obtained linear kernels with small constants for both problems on bounded genus graphs.
We have also proved the \textsf{W[2]}-hardness of $k$-tuple and liar's domination in general graphs. \section{Hardness results in general graphs} In this section, we show that the \textsc{$k$-Tuple Domination Problem} and the \textsc{Liar's Domination Problem} are \textsf{W[2]}-hard. In~\cite{sigmarhohardness}, it is proved that the $[\sigma,\rho]$-domination problem is \textsf{W[2]}-hard for any recursive sets $\sigma$ and $\rho$, which implies the hardness of $k$-tuple domination in general graphs. In this paper, however, we give a simple direct \textsf{W[2]}-hardness proof for $k$-tuple domination in general graphs. To prove this, we show standard parameterized m-reductions from the \textsc{Domination Problem}, which is known to be \textsf{W[2]}-complete~\cite{DowneyFellows}, to the \textsc{$k$-Tuple Domination Problem} and the \textsc{Liar's Domination Problem}, respectively. \begin{theo} \textsc{$k$-Tuple Domination Problem} is $\mathsf{W[2]}$-hard. \end{theo} \begin{proof} We show a standard parameterized m-reduction from \textsc{Domination Problem} to \textsc{$k$-Tuple Domination Problem}. Let $<G=(V,E),p>$ be an instance of \textsc{Domination Problem}. We construct an instance $<G'=(V',E'), p'>$ of the \textsc{$k$-Tuple Domination Problem} as follows: $V' = V \cup V_k$, where $V_k = \{u_1, u_2, \ldots, u_k\}$, and $ E' = E\cup \{v_iu_j| v_i \in V \mbox{ and } u_j \in V_k \setminus \{u_k\} \} \cup \{u_iu_j| u_i, u_j \in V_k, i \not= j\}$. Also set $p'=p+k$. The construction of $G'$ from $G$ in the case of triple domination is illustrated in Figure \ref{figk-dom}. \begin{figure}[h] \centering \includegraphics[width=.5\textwidth]{k-dom} \caption{Construction of $G'$ from $G$ for triple domination} \label{figk-dom} \end{figure} \begin{claim} $G$ has a dominating set of size at most $p$ if and only if $G'$ has a $k$-tuple dominating set of size at most $p'$. \end{claim} \begin{proof} Let $D$ be a dominating set of $G$ of cardinality at most $p$ and $D'= D \cup V_k$.
Each $v_i \in V$ is dominated by at least one vertex from $D$ and by $k-1$ vertices from $V_k$. Each $u_i \in V_k$ is dominated by $k$ vertices of $V_k$. Thus, $D'$ is a $k$-tuple dominating set of $G'$ of cardinality at most $p'$. \remove{It is easy to verify that for each vertex $x\in V'$, $|N_{G'}[x]\cap D'|\geq k$, i.e., $D'$ is a $k$-tuple dominating set of $G'$ of cardinality $p+k=p'$. } Conversely, let $D'$ be a $k$-tuple dominating set of $G'$ of cardinality at most $p'$. Note that each $k$-tuple dominating set contains the set $V_k$ because to dominate $u_k$ by $k$ vertices we must select all the vertices of $V_k$. Let $D = D'\setminus V_k$. Clearly $D \subseteq V$ and $|D| \leq p$. Now for each $v\in V$, $|N_G[v]\cap D|\geq 1$ because otherwise, there exists a vertex $v\in V$ such that $|N_{G'}[v]\cap D'|=k-1$. This is a contradiction because $D'$ is a $k$-tuple dominating set of $G'$. Thus $D$ is a dominating set of $G$ of cardinality at most $p$. Hence, $G$ has a dominating set of size at most $p$ if and only if $G'$ has a $k$-tuple dominating set of size at most $p'$. \end{proof} Thus, \textsc{$k$-Tuple Domination Problem} is \textsf{W[2]}-hard. \end{proof} Next we show the \textsf{W[2]}-hardness of \textsc{Liar's Domination Problem}. \begin{theo} \textsc{Liar's Domination Problem} is $\mathsf{W[2]}$-hard. \end{theo} \begin{proof} We show a standard parameterized m-reduction from \textsc{Domination Problem} to \textsc{Liar's Domination Problem}. Let $<G=(V,E),p>$ be an instance of \textsc{Domination Problem}. We construct an instance $<G'=(V',E'), p'>$ of the \textsc{Liar's Domination Problem} as follows: $V'=V\cup \{u, u', v, v', w\}$ and $E'=E\cup \{v_iu|v_i \in V\}\cup \{v_iv|v_i \in V \} \cup \{uu', vv', wu, wv\}$. Also $p'=p+4$. The construction of $G'$ from $G$ is illustrated in Figure \ref{fig liar}. 
\begin{figure}[h] \centering \includegraphics[width=.5\textwidth]{liar} \caption{Construction of $G'$ from $G$} \label{fig liar} \end{figure} \begin{claim} $G$ has a dominating set of size at most $p$ if and only if $G'$ has a liar's dominating set of size at most $p'$. \end{claim} \begin{proof} Let $D$ be a dominating set of $G$ of cardinality at most $p$ and $D'=D\cup \{u, u', v, v'\}$. It is easy to verify that for each vertex $x\in V'$, $|N_{G'}[x]\cap D'|\geq 2$ and for every pair of vertices $x,y\in V'$, $|(N_{G'}[x]\cup N_{G'}[y])\cap D'|\geq 3$. Hence $D'$ is a liar's dominating set of $G'$ of cardinality at most $p+4=p'$. Conversely, let $D'$ be a liar's dominating set of $G'$ of cardinality at most $p'$. Each liar's dominating set must contain the set $\{u, u', v, v'\}$ because to doubly dominate $u'$ and $v'$ we must select the vertices $\{u, u', v, v'\}$. Let $X \subseteq V$ denote the set of vertices that are dominated by exactly two vertices ($u$ and $v$) from $D'$. We claim $|X| \le 1$. If there existed two such vertices $x,y \in V$, then $|(N_{G'}[x] \cup N_{G'}[y]) \cap D'| = 2$, which would violate condition (ii) of liar's domination. We now deal with two cases: \begin{description} \item[$|X| = 1 $:] Let $X=\{x\}$. Here $|N_{G'}[x]\cap D'|=2$. This implies $w \in D'$, since otherwise the pair $x$ and $w$ would violate condition (ii) of liar's domination. We set $D''=(D' \setminus \{w\})\cup \{x\}$. $D''$ is also a liar's dominating set of $G'$ of cardinality at most $p'$. Note that all vertices in $V$ are triply dominated by $D''$, and $D''$ does not contain $w$. \item[$|X| = 0 $:] In this case each vertex of $V$ is triply dominated by $D'$. Now if $w\notin D'$, we are done. Otherwise the set $D'\setminus \{w\}$ forms a liar's dominating set of $G'$ of cardinality at most $p'$ such that each vertex of $V$ is triply dominated by $D'\setminus \{w\}$.
\end{description} Hence, without loss of generality, we may assume that there is a liar's dominating set $D'$ of $G'$ of cardinality at most $p'$ such that every vertex in $V$ is triply dominated by $D'$ and $w\notin D'$. Let $D=D'\setminus \{u, u', v, v'\}$. Clearly $D \subseteq V$ and $|D| \leq p$. Now for each $x\in V$, $|N_G[x]\cap D|\geq 1$, because otherwise there would exist a vertex $x\in V$ such that the pair $x$ and $w$ violates condition (ii) of liar's domination, a contradiction because $D'$ is a liar's dominating set of $G'$. Thus $D$ is a dominating set of $G$ of cardinality at most $p$. Hence, $G$ has a dominating set of size at most $p$ if and only if $G'$ has a liar's dominating set of size at most $p'$. \end{proof} Thus, \textsc{Liar's Domination Problem} is \textsf{W[2]}-hard. \end{proof} \section{Linear kernels for planar graphs} Having seen that $k$-tuple and liar's domination are \textsf{W[2]}-hard in general graphs, we focus on planar graphs in this section and show that both problems admit linear kernels there. \subsection{Double domination} In this subsection we show that the \textsc{Double Domination Problem} in planar graphs possesses a linear kernel. Our proof technique uses the region decomposition idea of Alber et al.~\cite{Alberkernel}. First we describe the reduction rules for kernelization. \subsubsection{Reduction rule} Let $G=(V,E)$ be an instance of the \textsc{Double Domination Problem}. Consider a pair of vertices $u,v\in V$ and let $N_G(u,v)=N_G(u)\cap N_G(v)$. We partition the vertices of $N_G(u,v)$ into three parts as follows: \begin{eqnarray*} N^1_G(u,v)&=&\{x\in N_G(u,v)| N_G(x)\setminus (N_G(u,v) \cup \{u,v\})\neq \emptyset\};\\ N^2_G(u,v)&=&\{x\in N_G(u,v)\setminus N^1_G(u,v) | N_G(x)\cap N^1_G(u,v) \neq \emptyset\};\\ N^3_G(u,v)&=& N_G(u,v)\setminus (N^1_G(u,v)\cup N^2_G(u,v)).
\end{eqnarray*} \noindent{\textbf{Reduction Rule:}} For every pair of distinct vertices $u,v\in V$, if $N^3_G(u,v) \neq \emptyset$, then \begin{itemize} \item delete all the vertices of $N^2_G(u,v)$ and \item delete all vertices of $N^3_G(u,v)$ except one vertex. \end{itemize} \begin{lemma} Let $G=(V,E)$ be a graph and $G'=(V', E')$ be the resulting graph after having applied the reduction rule to $G$. Then $\gamma_2(G)=\gamma_2(G')$. \end{lemma} \begin{proof} Let $u, v\in V$ be such that $N^3_G(u,v) \neq \emptyset$. If $N^2_G(u,v)= \emptyset$ and $|N^3_G(u,v)|=1$, then $G'$ is the same as $G$. So, without loss of generality, assume that $\left| N^3_G(u,v) \right| > 1$ or $N^2_G(u,v)\neq \emptyset$. Note that a vertex $x$ of $N^3_G(u,v)$ can be doubly dominated only by vertices from $N_G[x]\subseteq N^2_G(u,v)\cup N^3_G(u,v) \cup \{u,v\}$. Moreover, for any two vertices $x, y \in N^2_G(u,v)\cup N^3_G(u,v) \cup \{u,v\}$, we have $N_G[x]\cup N_G[y] \subseteq N_G[u]\cup N_G[v]$. This shows that we can double dominate each vertex of $N^3_G(u,v)$ in an optimal way by selecting $u$ and $v$ only. In $G'$, this selection of $u$ and $v$ is forced by the single vertex $w\in N^3_G(u,v)$ that remains. We claim that $G$ contains a minimum double dominating set $D$ which does not contain any vertex from $N^2_G(u,v)\cup N^3_G(u,v)$. First observe that $D$ cannot contain three or more vertices from $N^2_G(u,v)\cup N^3_G(u,v)$: if it did, we could replace those vertices by $u$ and $v$, contradicting the minimality of $D$. If $D$ contains one or two vertices from $N^2_G(u,v)\cup N^3_G(u,v)$, we can replace them by $u$ and/or $v$. Therefore, $G$ contains a minimum double dominating set $D$ which does not contain any vertex from $N^2_G(u,v)\cup N^3_G(u,v)$. Clearly, this set $D$ also forms a minimum double dominating set of $G'$. Hence, $\gamma_2(G)=\gamma_2(G')$.
\end{proof} In this reduction, for each pair of distinct vertices $u, v\in V$, we delete at most $\min\{deg_G(u), deg_G(v)\}$ vertices. So, the whole reduction process takes $\sum_{u,v\in V} \min\{deg_G(u), deg_G(v)\}$ time. Since for a planar graph $\sum_{v\in V} deg_G(v)=O(n)$, where $n$ is the number of vertices, we have the following lemma. \begin{lemma} For a planar graph having $n$ vertices, the reduction rule can be carried out in $O(n^3)$ time. \end{lemma} \subsubsection{A linear kernel} \label{sssec:linearkernel} In this subsection, we show that the reduction rule given above yields a linear kernel for \textsc{Double Domination Problem} in planar graphs. For this proof, we first find a ``maximal region decomposition'' of the vertices $V'$ of the reduced graph $G'=(V',E')$ and then show that $|V'|=O(\gamma_2(G'))$. We start with some definitions regarding maximal region decompositions, following Alber et al.~\cite{Alberkernel}. \begin{defi}\label{def_region} Let $G=(V,E)$ be a plane graph. A closed subset of the plane is called a region $R(u,v)$ between two vertices $u,v$ if the following properties are met: \begin{enumerate} \item the boundary of $R(u,v)$ is formed by two simple paths $P$ and $Q$ between $u$ and $v$ of length at most two edges, and \item all the vertices which are strictly inside the region $R(u,v)$ are from $N_G(u)\cap N_G(v)$. \end{enumerate} \end{defi} This definition of a region differs slightly from the one given in~\cite{Alberkernel}, where all the vertices strictly inside the region $R(u,v)$ are from $N_G(u)\cup N_G(v)$. Note that by the above definition, paths of length one or two between $u$ and $v$ can form a region $R(u,v)$. For a region $R=R(u,v)$, let $\partial(R)$ denote the boundary of $R$ and $V(R)$ denote the set of vertices inside or on the boundary of $R$, i.e., $V(R)= \{x\in V |~ x~ \mbox{ is inside } R \mbox{ or on } \partial(R)\}$.
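For concreteness, the reduction rule above can be sketched in a few lines of code. The following is an illustrative Python sketch, not part of the formal development; the dict-of-sets adjacency representation and the single pass over sorted vertex pairs are our own choices.

```python
import itertools

def reduce_double_domination(adj):
    """One pass of the reduction rule for Double Domination.

    adj: dict mapping each vertex to the set of its neighbours
    (an undirected graph). For every pair u, v with N^3(u, v)
    nonempty, delete all of N^2(u, v) and all but one vertex of
    N^3(u, v). Returns a new, reduced adjacency dict.
    """
    adj = {x: set(nbrs) for x, nbrs in adj.items()}  # work on a copy
    for u, v in itertools.combinations(sorted(adj), 2):
        if u not in adj or v not in adj:
            continue  # vertex deleted by an earlier pair
        common = adj[u] & adj[v]                       # N(u, v)
        # N^1: common neighbours with a neighbour outside N(u,v) and {u,v}
        n1 = {x for x in common if adj[x] - common - {u, v}}
        # N^2: remaining common neighbours adjacent to some N^1 vertex
        n2 = {x for x in (common - n1) if adj[x] & n1}
        # N^3: the rest of the common neighbourhood
        n3 = common - n1 - n2
        if n3:
            keep = min(n3)                             # keep one representative
            for x in n2 | (n3 - {keep}):
                for y in adj.pop(x):                   # delete x everywhere
                    adj[y].discard(x)
    return adj
```

On the graph with $N_G(u,v)=\{a,b,c\}$ and no other edges, all three common neighbours fall into $N^3_G(u,v)$ and the sketch keeps exactly one of them, as the rule prescribes.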
\begin{defi}\label{def_regiondecom} Let $G=(V,E)$ be a plane graph and $D\subseteq V$. A $D$-region decomposition of $G$ is a set $\mathcal{R}$ of regions between pairs of vertices in $D$ such that \begin{enumerate} \item for $R(u,v)\in \mathcal{R}$, no vertex of $D$ (except $u$ and $v$) lies in $V(R(u,v))$, and \item for two regions $R_1, R_2\in \mathcal{R}$, $(R_1\cap R_2)\subseteq (\partial(R_1)\cup \partial(R_2))$, i.e., they can intersect only at the vertices on their boundaries. \end{enumerate} For a $D$-region decomposition $\mathcal{R}$, we define $V(\mathcal{R})= \cup_{R\in \mathcal{R}} V(R)$. A $D$-region decomposition $\mathcal{R}$ is called maximal if there is no region $R$ such that $\mathcal{R'}=\mathcal{R}\cup \{R\}$ is a $D$-region decomposition with $V(\mathcal{R})$ a strict subset of $V(\mathcal{R'})$. \end{defi} First we observe an important property of a maximal $D$-region decomposition. \begin{lemma}\label{lem V=V(R)} Let $G=(V,E)$ be a plane graph with a double dominating set $D$ and let $\mathcal{R}$ be a maximal $D$-region decomposition. Then $V=V(\mathcal{R})$. \end{lemma} \begin{proof} Let $v\in V$ be a vertex such that $v\notin V(\mathcal{R})$. There are two cases: $v \in D$ and $v \notin D$. First, assume that $v\in D$. Since $D$ is a double dominating set of $G$, there exists another vertex $x\in D$ such that $vx \in E$. Now, the path $P=(x,v)$ forms a region $R$. Clearly $\mathcal{R}\cup \{R\}$ forms a $D$-region decomposition of $G$, which contradicts the maximality of $\mathcal{R}$. Let us now consider the other case, $v\notin D$. Since $D$ is a double dominating set of $G$, there exist $x, y\in D$ such that $vx, vy\in E$. In this case, the path $P=(x,v,y)$ forms a region $R$. Here also, $\mathcal{R}\cup \{R\}$ forms a $D$-region decomposition of $G$, which contradicts the maximality of $\mathcal{R}$. Thus each vertex of $V$ is in $V(\mathcal{R})$, that is, $V\subseteq V(\mathcal{R})$. Hence $V=V(\mathcal{R})$.
\end{proof} It is obvious that, for a plane graph $G=(V,E)$ with a double dominating set $D$, there exists a maximal $D$-region decomposition $\mathcal{R}$. Based on Lemma \ref{lem V=V(R)}, we propose a greedy algorithm to compute a maximal $D$-region decomposition, which is given in Algorithm \ref{maxregiondecomp}. The algorithm simply enforces the properties of a region decomposition stated in Definitions \ref{def_region} and \ref{def_regiondecom}. \begin{algorithm} \relsize{-1}{ \KwIn{A plane graph $G=(V,E)$ and a double dominating set $D\subseteq V$.} \KwOut{A maximal $D$-region decomposition $\mathcal{R}$ of $G$.} \Begin{ $V_{used}\leftarrow \emptyset$, $\mathcal{R}\leftarrow \emptyset$\; \While {$V_{used}\neq V$}{ Select a vertex $x$ from $V\setminus V_{used}$\; Consider the set $\mathcal{R}_x$ of all regions $S$ with the following properties: \begin{enumerate} \item $S$ is a region between $u$ and $v$, where $u,v \in D$. \item $S$ contains $x$. \item no vertex from $D\setminus \{u,v\}$ is in $V(S)$. \item $(S\cap R)\subseteq (\partial(S)\cup \partial(R))$ for all $R\in \mathcal{R}$. \end{enumerate} Choose a region $S_x\in \mathcal{R}_x$ which is maximal in terms of vertices\; $\mathcal{R}\leftarrow \mathcal{R}\cup \{S_x\}$\; $V_{used}\leftarrow V_{used} \cup V(S_x)$\; } $\Return(\mathcal{R})$\; } \caption{REGION$\_$DECOMPOSITION$(G, D)$} \label{maxregiondecomp} } \end{algorithm} Clearly, Algorithm \ref{maxregiondecomp} outputs a maximal $D$-region decomposition in polynomial time. Next, we show that for a given plane graph $G$ with a double dominating set $D$, every maximal $D$-region decomposition contains at most $O(|D|)$ regions. For that purpose, we observe that a $D$-region decomposition induces a graph in a very natural way.
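Before turning to induced graphs, note that the two domination notions compared in this paper are straightforward to verify on small instances. The following Python sketch is illustrative only (vertices are dictionary keys; the helper names are our own) and checks double domination together with conditions (i) and (ii) of liar's domination.

```python
import itertools

def closed_nbhd(adj, v):
    """Closed neighbourhood N[v] = {v} union N(v)."""
    return {v} | adj[v]

def is_double_dominating(adj, D):
    """|N[v] ∩ D| >= 2 for every vertex v."""
    D = set(D)
    return all(len(closed_nbhd(adj, v) & D) >= 2 for v in adj)

def is_liars_dominating(adj, D):
    """Condition (i): every vertex is doubly dominated by D; condition
    (ii): |(N[u] ∪ N[v]) ∩ D| >= 3 for every pair of distinct u, v."""
    D = set(D)
    return (is_double_dominating(adj, D) and
            all(len((closed_nbhd(adj, u) | closed_nbhd(adj, v)) & D) >= 3
                for u, v in itertools.combinations(adj, 2)))
```

On the cycle $C_4$, for instance, the full vertex set is liar's dominating, while a pair of opposite vertices is not even double dominating.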
\begin{defi}\label{def_indced graph} The induced graph $G_{\mathcal{R}}=(V_{\mathcal{R}}, E_{\mathcal{R}})$ of a $D$-region decomposition $\mathcal{R}$ of $G$ is the graph, possibly with multiple edges, defined as follows: $V_{\mathcal{R}}= D$ and $E_{\mathcal{R}}=\{(u,v)|$ there is a region $R(u,v) \in \mathcal{R}$ between $u,v\in D \}$. \end{defi} Note that, since by Definition \ref{def_regiondecom} the regions of a $D$-region decomposition intersect only at boundary vertices, the induced graph $G_{\mathcal{R}}$ of a $D$-region decomposition $\mathcal{R}$ is a planar graph with multiple edges. Next we bound the number of regions in a maximal $D$-region decomposition using the concept of a \emph{thin planar graph}, following Alber et al.~\cite{Alberkernel}. \begin{defi} A planar graph $G=(V,E)$ with multiple edges is thin if there exists a planar embedding such that if there are two edges $e_1, e_2$ between a pair of distinct vertices $v,w\in V$, then there must be two further vertices $u_1, u_2\in V$ which sit inside the two disjoint areas of the plane that are enclosed by $e_1$ and $e_2$. \end{defi} \begin{lemma} Let $D$ be a double dominating set of a planar graph $G=(V,E)$. Then the induced graph $G_{\mathcal{R}}=(V_{\mathcal{R}}, E_{\mathcal{R}})$ of a maximal $D$-region decomposition $\mathcal{R}$ of $G$ is a thin planar graph. \end{lemma} \begin{proof} Let $R_1$ and $R_2$ be two regions between two vertices $v,w\in D$ and let $e_1$ and $e_2$ be the corresponding multiple edges between $v, w\in V_{\mathcal{R}}$. Let $A$ be an area enclosed by $e_1$ and $e_2$. If $A$ contains a vertex $u\in D$, we are done. Suppose there is no vertex of $D$ in $A$. Now consider the following cases: \begin{description} \item[There is no vertex from $V\setminus D$ in $A$:] In this case, by combining the regions $R_1$ and $R_2$, we can form a bigger region, which contradicts the maximality of $\mathcal{R}$.
\item[There is a vertex $x\in (V\setminus D)$ in $A$:] In this case, if $x$ is double dominated by $v$ and $w$, then again we can combine the two regions $R_1$ and $R_2$ to get a bigger region. So, assume that $x$ is dominated by some vertex $u$ other than $v$ and $w$. Since $G$ is planar, $u$ must be in $A$, which contradicts the fact that $A$ does not contain any vertex from $D$. \end{description} Hence, combining both cases, we see that $G_{\mathcal{R}}$ is a thin planar graph. \end{proof} In~\cite{Alberkernel}, it is proved that for a thin planar graph $G=(V,E)$, we have $|E|\leq 3|V|-6$. Hence we have the following lemma. \begin{lemma}\label{lem R less 3D} For a plane graph $G$ with a double dominating set $D$, every maximal $D$-region decomposition $\mathcal{R}$ contains at most $3|D|$ regions. \end{lemma} Now, if we can bound the number of vertices that belong to any region $R(u,v)$ of a maximal $D$-region decomposition $\mathcal{R}$ by a constant, we are done. Achieving this constant bound is not possible for an arbitrary plane graph $G$, but in a reduced plane graph we can obtain it, as shown in the following lemma. \begin{lemma}\label{lem constnt vertex} A region $R$ of a plane reduced graph contains at most $6$ vertices, that is, $|V(R)|\leq 6$. \end{lemma} \begin{proof} Let $R$ be the region between $u$ and $v$ and $\partial(R)=\{u, x, v, y\}$. First note that $R$ contains at most two vertices from $N_{G'}^1(u,v)$, and the only possible such vertices are $x$ and $y$. If there existed a vertex $w \in N_{G'}^1(u,v)$ apart from $x$ and $y$, then $w$ would have a neighbor $z \notin N_{G'}(u,v)$; $z$ would have to lie inside the region $R$ and hence could not be double dominated. Now, because of the reduction rule, we can say that $\left| N_{G'}^3 (u,v) \right| \le 1$.
We consider two cases: \begin{description} \item[Case I ($\left| N_{G'}^3 (u,v) \right| = 1$):] In this case, $N_{G'}^2 (u,v) = \emptyset$ by the reduction rule. Hence, $|V(R)| \le 5$. \item[Case II ($\left| N_{G'}^3 (u,v) \right| = 0$):] In this case, we claim that there can be at most two vertices from $N_{G'}^2 (u,v)$. If possible, let $p,q,r \in N_{G'}^2 (u,v)$. All three vertices must be adjacent to either $x$ or $y$, which is not possible because of planarity. Hence, in this case $|V(R)| \le 6$. \end{description} \end{proof} First observe that, for a reduced graph $G'$ with a minimum double dominating set $D$, by Lemma \ref{lem R less 3D}, there exists a maximal $D$-region decomposition $\mathcal{R}$ with at most $3\cdot \gamma_2(G')$ regions. Also, by Lemma \ref{lem V=V(R)}, we have $V'=V(\mathcal{R})$ and, by Lemma \ref{lem constnt vertex}, we have $|V(R)|\leq 6$ for each region. Thus we have $|V'|=|V(\mathcal{R})|= |\cup_{R\in \mathcal{R}} V(R)| \leq \sum_{R\in \mathcal{R}} \left|V(R) \right| \leq 6\cdot |\mathcal{R}| \leq 18\cdot \gamma_{2}(G')$. Hence, we have the following theorem. \begin{theo}\label{theodoublekernel} For a reduced planar graph $G'=(V',E')$, we have $|V'|\leq 18\cdot \gamma_{2}(G')$, that is, \textsc{Double Domination Problem} on planar graphs admits a linear kernel.
\end{theo} \subsection{Liar's and $k$-tuple domination} We first show that the number of vertices in a plane graph satisfies $|V| = O(\gamma_{LR}(G))$. In this respect, first note that the results of Lemma \ref{lem V=V(R)} and Lemma \ref{lem R less 3D} are valid for any plane graph $G$ and any double dominating set $D$. Since every liar's dominating set is also a double dominating set, similar results hold for any plane graph $G$ and any liar's dominating set $L$. We claim that the number of vertices in a region $R$ of an $L$-region decomposition is bounded above by a constant. Let $R$ be a region between $u$ and $v$ and $\partial(R) = \{u,x,v,y\}$. Note that $V(R)$ contains two vertices ($u$ and $v$) from $L$. Now, if there existed two vertices $p, q \in V(R) \setminus \partial(R)$, then condition (ii) of liar's domination would be violated for the pair $p$ and $q$. Hence, there is at most one vertex in $V(R) \setminus \partial(R)$. Therefore, $|V(R)| \le 5$. Thus we have $|V|=|V(\mathcal{R})|= |\cup_{R\in \mathcal{R}} V(R)| \leq \sum_{R\in \mathcal{R}} |V(R)| \leq 5\cdot |\mathcal{R}| \leq 15\cdot |L|\leq 15\cdot \gamma_{LR}(G).$ Hence, we have the following theorem. \begin{theo}\label{theoliarker} For a planar graph $G=(V,E)$, $|V|\leq 15\cdot \gamma_{LR}(G)$.
\end{theo} Since every $k$-tuple dominating set for $k\geq 3$ is a liar's dominating set, we can use Theorem \ref{theoliarker}; however, we can slightly improve the constant. \begin{theo}\label{theotupker} For a planar graph $G=(V,E)$, $|V|\leq 12\cdot \gamma_{k}(G)$, where $k\geq 3$. \end{theo} \begin{proof} Let $D$ be a minimum $k$-tuple dominating set of $G=(V,E)$. Since every $k$-tuple dominating set is a double dominating set, by Lemma \ref{lem R less 3D} we can form a maximal $D$-region decomposition $\mathcal{R}$ of $G$ containing at most $3\cdot |D|$ regions. Again, by Lemma \ref{lem V=V(R)}, we have $V=V(\mathcal{R})$. Since each region contains only two vertices of $D$, we have $|V(R)|\leq 4$; otherwise, there would exist a vertex in $V(R)$ which is not dominated by $k$ vertices of $D$. Hence $|V|\leq 4 \cdot |\mathcal{R}|\leq 12\cdot |D|\leq 12\cdot \gamma_{k}(G)$. \end{proof} \section{Linear kernels for bounded genus graphs} In this section, we extend our results to bounded genus graphs and show that \textsc{$k$-Tuple Domination Problem} and \textsc{Liar's Domination Problem} admit linear kernels. The notation in this section follows Section \ref{subsec:graphsurfacenotation}. For the double domination problem, we apply the same reduction rule on a graph $G$ with bounded genus $g$ to obtain the reduced graph $G'$. Note that the reduced graph $G'$ is also of bounded genus $g$. Let $G=(V,E)$ be an $n$-vertex $\Sigma$-embedded graph.
It is easy to observe that, since $\sum_{v\in V}deg_G(v)=O(n+\textup{eg}(\Sigma))$, the reduced graph $G'=(V',E')$ can be computed in $O(n^3 + n^2\cdot \textup{eg}(\Sigma))$ time, where $|V|=n$. Next we show that $|V'|=O(\gamma_2(G')+g)$, which implies that \textsc{Double Domination Problem} admits a linear kernel in bounded genus graphs. To prove the above, we consider two cases. In the first case, we assume that the reduced $\Sigma$-embedded graph has representativity strictly greater than $4$. In the case when $\textup{rep}(G')\leq 4$, we proceed by induction on the Euler genus of the surface $\Sigma$. In the first case, the graph is locally planar, i.e., every noose of length at most $4$ is contractible. Since the boundary of a region consists of at most $4$ edges, the boundary $\partial(R)$ of any region $R$ of a $D$-region decomposition $\mathcal{R}$ is contractible. Hence the proof for the planar case extends to this case, and we have the following lemma. \begin{lemma}\label{lemdoublerep4} Let $G'=(V',E')$ be a reduced $\Sigma$-embedded graph where $\textup{rep}(G')>4$. Then $|V'|\leq 18(\gamma_2(G')+ \textup{eg}(\Sigma))$. \end{lemma} \begin{proof} Let $D$ be a double dominating set of $G'$ and let $\mathcal{R}$ be a maximal $D$-region decomposition of $G'$. Forming an induced graph $G_\mathcal{R}$ as in the planar case (Section \ref{sssec:linearkernel}), we have $|\mathcal{R}|\leq 3\cdot(|D|+ \textup{eg}(\Sigma))$. Also, in this case, every vertex of $V'$ belongs to at least one region of $\mathcal{R}$ and, for a region $R$, $|V(R)| \le 6$. Hence, we have $|V'|\leq 18(\gamma_2(G')+ \textup{eg}(\Sigma))$. \end{proof} Next consider the case where $3\leq \textup{rep}(G')\leq 4$. For a noose $N$ in $\Sigma$, we define the graph $G_N=(V_N,E_N)$ as follows. First we consider the graph $\mathcal{G}$ obtained from $G'=(V',E')$ by cutting along $N$.
Then, for every $v\in N\cap V'$, if $v^i$, $i = 1, 2$, is not adjacent to a pendant vertex, we add a pendant vertex $u^i$ adjacent to $v^i$; this yields $G_N$. Clearly $G_N$ has genus less than that of $G'$. If we add all the vertices of $V_N \setminus V'$ to a double dominating set $D$ of $G'$, then we clearly obtain a double dominating set of $G_N$ and, since $\textup{rep}(G')\leq 4$, $|V_N \setminus V'| \le 16$. Hence, $\gamma_2(G_N)\leq \gamma_2(G')+|V_N \setminus V'|\leq \gamma_2(G')+ 16$. Also note that if $G'$ is a reduced graph, then so is $G_N$. Using these facts, we prove that \textsc{Double Domination Problem} possesses a linear kernel when restricted to graphs with bounded genus. \begin{lemma}\label{lemdoublegenus} For any reduced $\Sigma$-embedded graph $G'=(V',E')$ with \textup{eg}$(\Sigma)\geq 1$, $|V'|\leq 18(\gamma_2(G')+ 32\cdot \textup{eg}(\Sigma)-16)$. \end{lemma} \begin{proof} We prove this result by induction on $\textup{eg}(\Sigma)$. Suppose $\textup{eg}(\Sigma)=1$. If $\textup{rep}(G')>4$, then the result follows from Lemma \ref{lemdoublerep4}. Otherwise, Lemma \ref{lemcutting} implies that the graph $G_N$, described above, is planar. Hence, by Theorem \ref{theodoublekernel}, we have $|V_N|\leq 18\cdot \gamma_2(G_N)$. Thus $|V'|\leq |V_N|\leq 18(\gamma_2(G')+16)$. Assume that $|V'|\leq 18(\gamma_2(G')+ 32\cdot \textup{eg}(\Sigma)-16)$ for any $\Sigma$-embedded reduced graph $G'$ with $\textup{eg}(\Sigma)\leq g-1$. Consider a reduced $\Sigma$-embedded graph $G'$ with $\textup{eg}(\Sigma)= g$. Now if $\textup{rep}(G')>4$, then again by Lemma \ref{lemdoublerep4} we are done. Hence assume that $\textup{rep}(G')\leq 4$.
By Lemma \ref{lemcutting}, either $G_N$ is the disjoint union of graphs $G_1$ and $G_2$ that can be embedded in surfaces $\Sigma_1$ and $\Sigma_2$ such that $\textup{eg}(\Sigma)= \textup{eg}(\Sigma_1)+ \textup{eg}(\Sigma_2)$ and $\textup{eg}(\Sigma_i) > 0$, $i = 1, 2$ (this is the case when $N$ is a surface-separating curve), or $G_N$ can be embedded in a surface with Euler genus strictly smaller than $\textup{eg}(\Sigma)$ (this holds when $N$ is not surface separating). Let us consider the case where $G_N$ is the disjoint union of graphs $G_1=(V_1,E_1)$ and $G_2=(V_2,E_2)$ that can be embedded in surfaces $\Sigma_1$ and $\Sigma_2$. Since $\textup{eg}(\Sigma_i)\leq g-1$ for $i=1,2$, we can apply the induction hypothesis to $G_i$. Thus we have, \begin{align*} |V'|\leq |V_N|&= |V_1|+ |V_2|& \\ &\leq \sum_{i=1}^{2} 18(\gamma_2(G_i)+ 32\cdot \textup{eg}(\Sigma_i)-16)&\\ &\leq 18(\gamma_2(G_N)+ 32\cdot \textup{eg}(\Sigma)-32)& \text{as $G_1$ and $G_2$ are disjoint}\\ &\leq 18(\gamma_2(G')+ 32\cdot \textup{eg}(\Sigma)-16).& \end{align*} Next we consider the case where $G_N$ can be embedded in a surface $\Sigma'$ with Euler genus strictly smaller than $g$. In this case too, we can apply the induction hypothesis to $G_N$. Thus we have, \begin{align*} |V'|\leq |V_N| &\leq 18(\gamma_2(G_N)+ 32\cdot \textup{eg}(\Sigma')-16)&\\ &\leq 18(\gamma_2(G_N)+ 32\cdot (\textup{eg}(\Sigma)-1)- 16)&\\ &\leq 18(\gamma_2(G')+ 32\cdot \textup{eg}(\Sigma) -32)& \text{as $\gamma_2(G_N) \leq \gamma_2(G')+ 16$}\\ &\leq 18(\gamma_2(G')+ 32\cdot \textup{eg}(\Sigma)-16).& \end{align*} Thus we have proved that $|V'|\leq 18(\gamma_2(G')+ 32\cdot \textup{eg}(\Sigma)-16)$ for every reduced $\Sigma$-embedded graph $G'=(V',E')$. \end{proof} Hence, by Lemma \ref{lemdoublerep4} and Lemma \ref{lemdoublegenus}, we have the main result of this section. \begin{theo} \textsc{Double Domination Problem} admits a linear kernel for bounded genus graphs.
\end{theo} For the liar's domination problem, by Theorem \ref{theoliarker}, we have $|V|\leq 15\cdot \gamma_{LR}(G)$ in the case of a planar graph $G=(V,E)$. Proceeding exactly as in the case of double domination, we obtain the following theorem for a $\Sigma$-embedded graph $G$. \begin{theo}\label{theoliargenus} Let $G=(V,E)$ be a $\Sigma$-embedded graph. Then $|V|\leq 15(\gamma_{LR}(G)+ 32\cdot \textup{eg}(\Sigma))$. \end{theo} Since $\gamma_{LR}(G)\leq \gamma_{k}(G)$ for any graph that admits a $k$-tuple dominating set ($k\geq 3$), we have the following corollary of Theorem \ref{theoliargenus}. \begin{coro} For a $\Sigma$-embedded graph $G=(V,E)$, $|V|\leq 15(\gamma_{k}(G)+ 32\cdot \textup{eg}(\Sigma))$. \end{coro} \section{Conclusion} \label{sec-conclusion} In this paper, we have first proved that \textsc{$k$-Tuple Domination Problem} and \textsc{Liar's Domination Problem} are \textsf{W[2]}-hard for general graphs. Then we have shown that these two problems admit linear kernels for planar graphs and also for bounded genus graphs. It would be interesting to look for other graph classes where these problems admit efficient parameterized algorithms. \section*{Acknowledgements} The authors want to thank Venkatesh Raman and Saket Saurabh for some of their nice introductory expositions on parameterization. \section{Appendix} \label{sec-appendix} \begin{lemma}\label{lem-liar-dom-planar-NP} \textsc{Liar's Domination Problem} is NP-complete for planar graphs. \end{lemma} \begin{proof} The reduction is from \textsc{Domination Problem} in planar graphs, which is known to be NP-complete~\cite{GareyJohnson}. Let $G=(V,E)$ be a planar graph with $V=\{v_1, v_2, \ldots, v_n\}$ and $k$ be an integer.
We construct an instance $G'=(V',E')$ and $k'$ of \textsc{Liar's Domination Problem} as follows: we add a set of $3n$ new vertices $S=\{x_i, y_i, z_i|1\leq i\leq n\}$ to the vertex set $V$, i.e., $V'=V\cup S$, and the edge set of $G'$ is given by $E'=E\cup \{v_ix_i, x_iy_i, y_iz_i|1\leq i\leq n\}$. Note that, since $G$ is planar, so is $G'$. Also let $k'=k+3n$. In~\cite{Slater2009}, it is proved that $G$ has a dominating set of cardinality at most $k$ if and only if $G'$ has a liar's dominating set of cardinality at most $k'=k+3n$. Thus, \textsc{Liar's Domination Problem} is NP-complete for planar graphs. \end{proof} \phantomsection \bibliographystyle{alpha} \addcontentsline{toc}{section}{Bibliography}
\section{Introduction} Fast radio bursts (FRBs) are intense radio transients with extreme brightness temperatures that show dispersion relations consistent with propagation through cold plasma \citep{Lorimer2007Sci...318..777L, Thornton2013Sci...341...53T,Petroff2016PASA...33...45P}. To date, more than seventy FRBs have been discovered. Only FRB 121102 and FRB 180814 show repeating bursts \citep{Spitler2014ApJ...790..101S,Spitler2016Natur.531..202S,Amiri2019a}. The sub-arcsecond localization of FRB 121102 using the VLA confirmed its cosmological origin (at redshift 0.193) \citep{Chatterjee2017Natur.541...58C}. The combined redshift and DM information of FRBs can be used as a cosmological probe if a large sample of FRBs has redshift measurements \citep{Deng2014ApJ...783L..35D,Gao2014, McQuinn2014,Zheng2014, Zhou2014,Wei2015,Yu2017A&A...606A...3Y,Macquart2018, Li2018NatCo...9.3833L, Wang2018, Walters2018}. However, the physical origin of FRBs is still a mystery \citep{Pen2018NatAs...2..842P}. Many models have been proposed \citep{Totani2013PASJ...65L..12T, Falcke2014A&A...562A.137F, Zhang2014ApJ...780L..21Z, Cordes2016MNRAS.457..232C, Dai2016ApJ...829...27D, Wang2016ApJ...822L...7W, Zhang2016ApJ...827L..31Z, Katz2016, Metzger2017, Zhang2017ApJ...836L..32Z, Pen2018NatAs...2..842P, Platts2018arXiv181005836P}. On the other hand, it is well known that solar type III radio bursts, identified by high brightness temperatures and rapid frequency drift, are a common signature of fast electron beams in the solar corona \citep{Bastian1998}. The emission arises from the nonlinear conversion of Langmuir waves generated by the two-stream instability of electron beams \citep{Ginzburg1958SvA.....2..653G, Melrose2017RvMPP...1....5M, Kliem2000}. Repeating FRBs and solar type III radio bursts share at least three common properties. First, both have high brightness temperatures, 10$^6$ K--10$^{12}$ K for solar radio bursts and as high as 10$^{35}$ K for FRBs.
Second, the frequency drift (high-to-low temporal evolution) is found in the repeating FRB 121102 \citep{Hessels2018arXiv181110748H} and FRB 180814 \citep{Amiri2019a}, as well as in solar type III radio bursts \citep{Bastian1998}. Third, both types of radio bursts show similar temporal evolution of intensity \citep{Amiri2019a,Hessels2018arXiv181110748H,Fainberg1974}. Although radio bursts are common phenomena in repeating FRBs and the Sun, the burst energies span more than 15 orders of magnitude, which raises an outstanding question: do repeating FRBs and solar type III radio bursts have a similar physical mechanism? Interestingly, some theoretical models have suggested that repeating FRBs could be magnetically dominated explosive events \citep{Lu2018MNRAS.477.2470L,Lyutikov2019arXiv190103260L}, similar to solar type III radio bursts. However, a physical analogy between FRBs and solar type III radio bursts has not yet been established. In this paper, we investigate the physical connection between FRB 121102 and solar type III radio bursts. \section{Data and Method} We collect the bursts of FRB 121102 from the observation by the Green Bank Telescope at 4-8 GHz. Recent work identified 93 pulses of FRB 121102 in 6 hours of observation \citep{Gajjar2018ApJ...863....2G,Zhang2018ApJ...866..149Z}. This constitutes the largest sample of FRB 121102 bursts from a single observation. Using this sample, we can avoid the complex selection effects caused by combining different telescopes at different frequencies.
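The burst energies analysed below follow $E=4\pi d_L^2 S\Delta \nu/(1+z)$. A unit-conscious Python sketch of this conversion is given here for illustration; the luminosity distance, fluence and bandwidth are assumed round numbers, not values taken from this paper.

```python
import math

MPC_M = 3.086e22              # metres per megaparsec
d_L = 949.0 * MPC_M           # assumed luminosity distance for z = 0.193, in m
z = 0.193
fluence_jy_ms = 1.0           # assumed fluence S, in Jy ms
bandwidth_hz = 1.0e9          # assumed bandwidth of 1 GHz

# 1 Jy ms = 1e-26 W m^-2 Hz^-1 x 1e-3 s = 1e-29 J m^-2 Hz^-1
S = fluence_jy_ms * 1e-29
E_joule = 4.0 * math.pi * d_L**2 * S * bandwidth_hz / (1.0 + z)
E_erg = E_joule * 1e7         # of order 1e39 erg for these inputs
```

For these assumed inputs the isotropic-equivalent energy comes out near $10^{39}$ erg, the order of magnitude typical of FRB 121102 bursts.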
As for solar type III radio bursts, we select the data from the National Centers for Environmental Information (NCEI) observed by the United States Air Force Radio Solar Telescope Network (RSTN) \footnote{\href{ftp://ftp.ngdc.noaa.gov/STP/space-weather/solar-data/solar-features/solar-radio/radio-bursts/reports/fixed-frequency-listings/}{ftp://ftp.ngdc.noaa.gov/STP/space-weather/solar-data/solar-features/solar-radio/radio-bursts/reports/fixed-frequency-listings/}}, which has been observing for many years and has accumulated a large amount of data. RSTN consists of four sites: Learmonth, Palehua, Sagamore Hill and San Vito. The instruments and analysis methods at all sites are identical, so we can simply combine their data to study the statistical properties. The RSTN data contain solar radio bursts at 8 frequencies (245 MHz, 410 MHz, 610 MHz, 1415 MHz, 2695 MHz, 4995 MHz, 8800 MHz, 15400 MHz). We divide the data into multiple subsamples by frequency and calculate the statistical properties at each frequency. In order to obtain high-quality data, we filter the data according to the criteria given by \citet{Giersch2017SpWea..15.1511G}. Based on these criteria, we select a large sample of solar type III radio bursts. The number of bursts at each frequency is listed in Table \ref{tab:1}. The number of bursts of FRB 121102 is small, so it is preferable to consider cumulative distributions rather than differential distributions. We derive the distributions of energy, peak flux and duration for FRB 121102 and show the results in the right panels of Figure \ref{fig:frbE}, Figure \ref{fig:frbf} and Figure \ref{fig:frbwidth}. The energy is calculated from $E=4\pi d_L^2 S\Delta \nu/(1+z)$, where $d_L$ is the luminosity distance, $S$ is the fluence, $z$ is the redshift and $\Delta \nu$ is the bandwidth. It should be noted that we must consider deviations from an ideal power-law distribution.
There are many effects that cause such deviations, such as the detection threshold of the telescope or a physical threshold of an instability. Thus we adopt a threshold power-law distribution to fit the cumulative distribution, which is \begin{equation} \label{eq:Fculdis} N(>E)=a+b(E^{1-\alpha_E}-E_{\rm max}^{1-\alpha_E}), \end{equation} where $E_{\rm max}$ is the maximum burst energy and $\alpha_E$ is the power-law index of the differential energy distribution. We also consider the waiting times of FRB 121102. Taking the difference between the start times of two successive bursts as the waiting time, we compute the differential distribution of waiting times and show it in the right panel of Figure \ref{fig:frbwaiting} as blue points. Below, we use a Poisson process to explain the waiting time distributions. For a constant burst rate, the waiting time follows the Poisson interval distribution \citep{Wheatland1998ApJ...509..448W} \begin{equation} \label{eq:cPdt} P(\Delta t) = \lambda e^{-\lambda \Delta t}, \end{equation} where $\lambda$ is the burst rate. If the rate is time dependent, the process can be treated as a piecewise constant Poisson process consisting of $N$ intervals with rates $\lambda_i$ and durations $t_i$. The waiting time distribution can then be written as \citep{Aschwanden2011soca.book.....A} \begin{equation}\label{eq:addPdt} P(\Delta t) \simeq \frac{1}{\bar{\lambda}} \sum_{i=1}^N \frac{t_i}{T} \lambda_i^2 e^{-\lambda_i \Delta t}, \end{equation} where $\bar{\lambda}$ is the average burst rate and $T$ is the duration of the observing period \citep{Wheatland1998ApJ...509..448W}. Equation (\ref{eq:addPdt}) can be transformed into the continuous form \begin{equation} \label{eq:conPdt} P(\Delta t) = \frac{\int_0^T \lambda(t)^2 e^{-\lambda(t)\Delta t } dt}{\int_0^T \lambda(t) dt}. \end{equation} In this case, we adopt an exponentially growing occurrence rate \citep{Aschwanden2011soca.book.....A} and obtain $ P(\Delta t) = \lambda_0/(1 + \lambda_0 \Delta t)^2$.
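Equation (\ref{eq:conPdt}) is easy to evaluate numerically. The sketch below (illustrative Python, not code used in the paper) applies a midpoint rule; for a constant rate it reduces to the Poisson interval distribution of equation (\ref{eq:cPdt}).

```python
import math

def waiting_time_pdf(rate, T, dt, n=20000):
    """Evaluate P(dt) = int_0^T rate(t)^2 e^(-rate(t)*dt) dt /
    int_0^T rate(t) dt with a simple midpoint rule;
    `rate` is the rate function lambda(t)."""
    h = T / n
    ts = [(i + 0.5) * h for i in range(n)]
    num = sum(rate(t) ** 2 * math.exp(-rate(t) * dt) for t in ts) * h
    den = sum(rate(t) for t in ts) * h
    return num / den

# constant rate lambda_0 = 0.5: recovers the Poisson interval
# distribution lambda * exp(-lambda * dt)
p = waiting_time_pdf(lambda t: 0.5, T=10.0, dt=1.0)
```

With a time-varying `rate`, the same routine gives the nonstationary waiting time distribution used in the text.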
Using the Markov chain Monte Carlo (MCMC) method, we derive $ \lambda_0 = 1.23^{+0.80}_{-0.38} \times 10^{-5}~ \rm ms^{-1}$. The differential distributions of energy, peak flux, duration and waiting time for solar type III radio bursts are also derived. Unlike for FRB 121102, the number of solar radio bursts is large enough to obtain differential distributions. The RSTN data comprise eight frequencies. Among them, 4995 MHz is of particular interest because it is close to the observing frequency of the FRB 121102 data. We use an ideal power-law function to fit the differential distributions of energy, peak flux and duration. The functional form is \begin{equation}\label{energydis} dN/dE\propto E^{-\alpha_E}, \end{equation} where $\alpha_E$ is the power-law index. Assuming the number of solar radio bursts in a given bin satisfies a Poisson distribution, the best-fitting results are derived using MCMC methods. The power-law index of energy is $1.52 \pm 0.05$, the power-law index of peak flux is $1.84 \pm 0.04 $ and the power-law index of duration is $ 1.69 \pm 0.02 $. The fitting results are shown in the left panels of Figure \ref{fig:frbE}, Figure \ref{fig:frbf} and Figure \ref{fig:frbwidth}. As for the waiting times, the differential distribution is derived, and we again use $P(\Delta t) = \lambda_0/(1 + \lambda_0 \Delta t)^2$ to fit it. For the solar radio bursts at the other seven frequencies, we also derive the distributions and fit them with the same functions. We give the best-fitting values of $ \alpha_E $, $ \alpha_F $, $ \alpha_T $ and $ \lambda_0 $ in Table \ref{tab:1}. The best-fitting values are consistent with each other across the eight frequencies.
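The fits above use MCMC. As a lightweight cross-check (an illustrative alternative, not the method used in this paper), the maximum-likelihood estimator for the index of $dN/dE \propto E^{-\alpha_E}$ above a threshold $E_{\rm min}$ is $\hat{\alpha}_E = 1 + n\,[\sum_i \ln (E_i/E_{\rm min})]^{-1}$:

```python
import math
import random

def mle_powerlaw_index(samples, e_min):
    """Maximum-likelihood index for dN/dE ~ E^(-alpha), E >= e_min."""
    logs = [math.log(e / e_min) for e in samples if e >= e_min]
    return 1.0 + len(logs) / sum(logs)

# inverse-transform sampling from a pure power law with alpha_true = 1.8:
# P(>E) = (E / e_min)^(1 - alpha)  =>  E = e_min * (1 - u)^(-1/(alpha - 1))
random.seed(42)
alpha_true = 1.8
sample = [(1.0 - random.random()) ** (-1.0 / (alpha_true - 1.0))
          for _ in range(100000)]
alpha_hat = mle_powerlaw_index(sample, 1.0)   # close to 1.8
```

On synthetic power-law samples this estimator recovers the input index to within its statistical uncertainty, which makes it a convenient sanity check on binned fits.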
We collect 21 bursts \citep{Gajjar2018ApJ...863....2G} and 72 bursts \citep{Zhang2018ApJ...866..149Z} of FRB 121102 recorded with the C-band receiver at the Green Bank Telescope (GBT). Because of the small number of bursts, we use the cumulative distribution. The energy distribution shows a flat part in the low-energy regime, which could be due to incomplete sampling and a selection bias toward bright bursts \citep{2015ApJ...814...19A}. Therefore, to avoid this selection effect, only the distribution above the break is fitted. The right panel of Figure \ref{fig:frbE} shows the cumulative energy distribution of FRB 121102. The red line is the best fit with $\alpha_E= 1.63 \pm 0.06$, consistent with the values found by \citet{Wang2017} and \citet{Law2017}. The energy of solar type III radio bursts can be derived from $E=4\pi D^2 S \Delta \nu$, where $D=1$ AU, $S\simeq T\times F$, and $\Delta \nu=14$ MHz is the bandwidth; here $T$ is the duration and $F$ is the flux of the radio burst. We derive the differential distribution of energy at 4995 MHz, which is closest to the frequency of the FRB data. From equation (\ref{energydis}), the power-law index of energy is $ \alpha_E= 1.52 \pm 0.05$, shown as the red line in the left panel of Figure \ref{fig:frbE}. Similar indices are found at the other frequencies (see Table \ref{tab:1}). The value of $\alpha_E$ is consistent with previous works \citep{SaintHilarieApJ...762...60S}. From Figure \ref{fig:frbE}, the energy distributions of FRB 121102 and solar type III radio bursts are similar. Figure \ref{fig:frbf} shows the differential distribution of peak flux $F$ for solar type III radio bursts at 4995 MHz (left panel) and the cumulative distribution of peak flux for FRB 121102 (right panel), respectively.
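The energy formula $E=4\pi D^2 S \Delta\nu$ can be evaluated directly in CGS units. In this sketch the burst duration and flux are made-up illustrative numbers (not measurements from the data); only $D=1$ AU and $\Delta\nu=14$ MHz come from the text:

```python
import numpy as np

# Illustrative evaluation of E = 4*pi*D^2 * S * dnu with S = T*F (CGS).
AU_CM = 1.496e13                 # 1 AU in cm
SFU_CGS = 1.0e-19                # 1 SFU in erg s^-1 cm^-2 Hz^-1

T = 10.0                         # burst duration [s] (hypothetical)
F = 100.0 * SFU_CGS              # flux, 100 SFU (hypothetical)
dnu = 14.0e6                     # bandwidth [Hz], as quoted in the text

S = T * F                        # fluence density [erg cm^-2 Hz^-1]
E = 4.0 * np.pi * AU_CM**2 * S * dnu   # burst energy [erg]
```

A 100 SFU, 10 s burst thus carries of order $10^{18}$ erg under these assumptions.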
Using the same fitting method as in Figure \ref{fig:frbE}, we find power-law indices of $\alpha_F= 1.84 \pm 0.04 $ and $\alpha_F= 1.94 \pm 0.03$ for solar radio bursts and FRB 121102, respectively. Solar type III radio bursts at different frequencies show similar distributions (see Table \ref{tab:1}). The value of $\alpha_F$ for FRBs is consistent with that found by \citet{Wang2017}. Figure \ref{fig:frbwidth} shows the differential distribution of duration $T$ for solar type III radio bursts at 4995 MHz (left panel) and the cumulative distribution of duration for FRB 121102 (right panel), respectively. With the same fitting method, we find power-law indices of $\alpha_T= 1.69 \pm 0.02 $ and $\alpha_T=1.57 \pm 0.13 $ for solar radio bursts and FRB 121102, respectively; again, solar type III radio bursts at different frequencies show similar distributions (see Table \ref{tab:1}). From Figures \ref{fig:frbE}, \ref{fig:frbf} and \ref{fig:frbwidth}, we find a similar power-law dependence of the occurrence rate for both kinds of radio bursts, much like what is found in solar flares \citep{Crosby1993SoPh..143..275C,Aschwanden2011soca.book.....A,Wang2013NatPh...9..465W}. These three properties are predicted by avalanche models of self-organized criticality (SOC) systems \citep{Bak1988PhRvA..38..364B,Bak1987PhRvL..59..381B,Lu1991ApJ...380L..89L}. For example, from numerical simulations, \citet{Lu1991ApJ...380L..89L} found that the power-law indices of solar flares are $\alpha_E\sim 1.5$, $\alpha_F\sim 1.8$ and $\alpha_T\sim 1.6$ for the energy, peak flux and duration distributions, respectively. Therefore, both kinds of radio bursts are SOC events. The concept of SOC was proposed to explain power-law, scale-invariant correlations extending over many orders of magnitude in complex systems \citep{Bak1988PhRvA..38..364B,Bak1987PhRvL..59..381B}, and it governs many nonlinear dissipative systems in our universe \citep{Aschwanden2011soca.book.....A}.
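The avalanche picture behind SOC can be illustrated with a toy Bak-Tang-Wiesenfeld sandpile. This is a generic sketch of SOC dynamics on a small lattice (not the simulations of the cited works, and all sizes below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy BTW sandpile: drop grains on an open-boundary lattice; sites at or
# above the threshold topple, shedding one grain to each neighbour; the
# number of topplings per drive is the avalanche size.
L, THRESHOLD, N_DRIVES = 12, 4, 5000
z = np.zeros((L, L), dtype=int)
sizes = []

for _ in range(N_DRIVES):
    i, j = rng.integers(0, L, size=2)
    z[i, j] += 1
    size = 0
    while True:
        unstable = np.argwhere(z >= THRESHOLD)
        if len(unstable) == 0:
            break
        for a, b in unstable:
            z[a, b] -= 4
            size += 1
            for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                na, nb = a + da, b + db
                if 0 <= na < L and 0 <= nb < L:
                    z[na, nb] += 1   # grains crossing the edge are lost
    sizes.append(size)

sizes = np.array(sizes)
```

In the SOC steady state the histogram of `sizes` develops the power-law tail that underlies the flux and energy distributions discussed above.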
What can we learn from the similar distributions of solar type III radio bursts and FRBs? It is generally believed that type III bursts arise from the nonlinear conversion of Langmuir waves at the local plasma frequency by energetic electron beams accelerated during solar flares \citep{Ginzburg1958SvA.....2..653G,Robinson2000GMS...119...37R}. Numerical simulations have revealed that solar radio bursts are caused by particle acceleration episodes that result from bursty magnetic reconnection \citep{Kliem2000}. Observationally, direct evidence has been found that energetic electrons are accelerated by magnetic reconnection, which also produces X-ray flares \citep{Cairns2018NatSR...8.1676C}. Thus, type III radio bursts are triggered by magnetic reconnection. As for FRB 121102, the radio emission may be coherent radiation from bunches of relativistic electrons produced by magnetic reconnection in the magnetosphere of a magnetar \citep{Katz2016,Metzger2017,Lu2018MNRAS.477.2470L,Lyutikov2019arXiv190103260L}. The similar flux and duration distributions support the idea that both kinds of radio bursts are triggered by magnetic reconnection. The fourth statistical property is the waiting time distribution, which has been studied for solar X-ray flares \citep{Wheatland1998ApJ...509..448W} and for X-ray flares in black hole systems \citep{Wang2013NatPh...9..465W,Wang2015ApJS..216....8W}. The waiting time $\Delta t$ is defined as the time interval between two successive bursts. This distribution provides extra constraints on theoretical models; for example, avalanche models predict that bursts occur independently \citep{Aschwanden2011soca.book.....A,Lu1993ApJ...412..841L}. Figure \ref{fig:frbwaiting} shows the occurrence rates as a function of waiting time for solar type III radio bursts at 4995 MHz (left panel) and FRB 121102 (right panel), respectively.
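Computing waiting times from burst start times, and fitting the rate $\lambda_0$ of $P(\Delta t)=\lambda_0/(1+\lambda_0\Delta t)^2$, can be sketched as follows. The paper uses MCMC; this is a simpler maximum-likelihood fit on synthetic data with an assumed rate:

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(1)

# Synthetic waiting times drawn from P(dt) = lam0/(1+lam0*dt)^2 via the
# inverse transform dt = u / (lam0*(1-u)); lam0 below is hypothetical.
lam0_true = 0.01                       # rate [1/ms] (assumed)
u = rng.random(50000)
dt = u / (lam0_true * (1.0 - u))
start_times = np.concatenate([[0.0], np.cumsum(dt)])
waiting = np.diff(start_times)         # interval between successive bursts

# MLE: d(log L)/d(lam0) = n/lam0 - 2*sum(dt_i/(1+lam0*dt_i)) = 0
def score(lam0):
    return len(waiting) / lam0 - 2.0 * np.sum(waiting / (1.0 + lam0 * waiting))

lam0_hat = brentq(score, 1e-6, 1.0)
```

The estimator recovers the input rate to within a few per cent for this sample size.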
A Poissonian random process with a time-dependent rate has a power-law-like waiting time distribution, which is the prediction of SOC theory \citep{Aschwanden2011soca.book.....A}. We fit the waiting time distribution with \begin{equation} \label{eq:Pdt} P(\Delta t) = \frac{\lambda_0}{(1 + \lambda_0 \Delta t)^2}. \end{equation} For large waiting times ($\Delta t\gg 1/\lambda_0$), it gives the power-law limit $P(\Delta t)\approx \Delta t^{-2}$. The fitting results obtained with the MCMC method using equation (\ref{eq:Pdt}) are shown as solid lines in Figure \ref{fig:frbwaiting}. The mean rates are $\lambda_0=1.23^{+0.80}_{-0.38} \times 10^{-5} \rm ~ms^{-1}$ for FRB 121102 and $ 1.10^{+0.11}_{-0.01} \times 10^{-5} ~(6s)^{-1}$ for solar radio bursts. The waiting times at the other frequencies can also be well fitted by equation (\ref{eq:Pdt}) (see Table \ref{tab:1}). \section{Discussion} In this paper, we find that the repeating FRB 121102 and solar type III radio bursts have similar statistical properties, which are predicted for SOC systems. These similarities, together with the fact that type III radio bursts are triggered by magnetic reconnection, indicate that repeating FRBs are powered by magnetic energy within magnetar magnetospheres. Many facilities have joined the FRB searches, such as Parkes \citep{Petroff2016PASA...33...45P}, the Australian Square Kilometre Array Pathfinder (ASKAP) \citep{Johnston2009ASPC..407..446J}, UTMOST \citep{Bailes2017PASA...34...45B}, the Canadian Hydrogen Intensity Mapping Experiment (CHIME) \citep{Amiri2019a}, the Five-hundred-meter Aperture Spherical radio Telescope (FAST) \citep{Li2018IMMag..19..112L}, and MeerKAT \citep{Sanias2018IAUS..337..406S}. In the future, large samples of FRBs may unveil the underlying physics. \acknowledgements We would like to acknowledge the National Centers for Environmental Information (NCEI) and their past and present staff who work to make space weather data freely available. We also thank X. Cheng and Y. C. Zou for discussions.
This work is supported by the National Natural Science Foundation of China (grant Nos. U1831207, 11573014 and 11833003) and the National Key Research and Development Program of China (grant No. 2017YFA0402600).
\section{Introduction} The optical response of periodically driven topological systems is important for the study of gain and loss mechanisms, especially in many-electron systems. For a non-Hermitian system under laser driving, the conductivity, absorption and scattering are all related to the polarization vector of the driving field even in the ${\bf q}\rightarrow 0$ limit, where ${\bf q}$ is the in-plane Bloch wave vector along the direction of the in-plane light polarization; in an impurity-scattering system (with a considerable Coulomb effect), ${\bf q}$ is instead the scattering wave vector (due to the charged impurities), which enters the transition matrix element of the dynamical polarization. The direction of the polarization is therefore important for the optical and scattering properties of the two-dimensional systems discussed in this article, and we discuss the anisotropic polarization induced by both the warping structure and the polarized incident light. The hexagonal Dirac-cone warping leads to anisotropic decay of the quasiparticle interference and of the backscattering, and it helps to preserve the quasiparticle chirality, while anisotropically polarized light also leads to anisotropic conductivity or mobility\cite{Liu H,Liu Y}.
In this article, we mainly discuss silicene and related group-V and group-VI materials under an in-plane-polarized driving field. \section{Model of silicene in an optical system} The low-energy Dirac Hamiltonian of silicene in the tight-binding model reads\cite{Wu C HX,Wu C H3,Wu C H1,Wu C H4,Wu C H7} \begin{equation} \begin{aligned} H(t)=&\hbar v_{F}(\eta\tau_{x}P_{x}(t)+\tau_{y}P_{y}(t))+\eta\lambda_{{\rm SOC}}\tau_{z}\sigma_{z}+a\lambda_{R_{2}}\eta\tau_{z}(P_{y}\sigma_{x}-P_{x}\sigma_{y})\\ &-\frac{\overline{\Delta}}{2}E_{\perp}\tau_{z}+\frac{\lambda_{R_{1}}}{2}(\eta\sigma_{y}\tau_{x}-\sigma_{x}\tau_{y})+M_{s}s_{z}+M_{c} -\eta\tau_{z}\hbar v_{F}^{2}\frac{\mathcal{A}}{\Omega}+\mu, \end{aligned} \end{equation} where $P_{x}(t)=k_{x}-\frac{e}{c}A_{x}(t)=k_{x}-\frac{e}{c}A{\rm sin}\,\Omega t$, with $A$ the amplitude of the vector potential. $E_{\perp}$ is the perpendicularly applied electric field, $a=3.86$ \AA\ is the lattice constant, $\mu$ is the chemical potential, $\overline{\Delta}$ is the buckled distance between the upper and lower sublattices, and $\sigma_{z}$ and $\tau_{z}$ are the spin and sublattice (pseudospin) degrees of freedom, respectively. $\eta=\pm 1$ labels the K and K' valleys. $M_{s}$ is the spin-dependent exchange field and $M_{c}$ is the charge-dependent exchange field. $\lambda_{SOC}=3.9$ meV is the strength of the intrinsic spin-orbit coupling (SOC), and $\lambda_{R_{2}}=0.7$ meV is the intrinsic Rashba coupling, a next-nearest-neighbor (NNN) hopping term that breaks the lattice inversion symmetry. $\lambda_{R_{1}}$ is the electric-field-induced nearest-neighbor (NN) Rashba coupling, which was found to be linear in the applied electric field in our previous works\cite{Wu C H1,Wu C H2,Wu C H3,Wu C H4,Wu C H5}, $\lambda_{R_{1}}=0.012E_{\perp}$. We ignore the effects of the high-energy bands on the low-energy bands here.
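A minimal numerical sketch of the model keeps only the Dirac term and the mass-like terms (SOC and perpendicular electric field) of Eq. (1), dropping both Rashba couplings, the exchange fields, and the light-induced shift. All numerical values below are illustrative assumptions, not fitted parameters:

```python
import numpy as np

# Reduced 2x2 sublattice Hamiltonian for valley eta and spin s:
#   H = hbar*vF*(eta*kx*tau_x + ky*tau_y) + m_D*tau_z,
# with m_D = eta*lam_soc*s - 0.5*Delta*E_perp (all other terms dropped).
hbar_vF = 3.6         # eV*Angstrom (illustrative for silicene)
lam_soc = 3.9e-3      # eV
delta_E = 2.0e-3      # 0.5*Delta*E_perp, illustrative [eV]

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0]).astype(complex)

def bands(kx, ky, eta=1, s=1):
    m_D = eta * lam_soc * s - delta_E
    H = hbar_vF * (eta * kx * sx + ky * sy) + m_D * sz
    return np.linalg.eigvalsh(H), m_D

k = np.array([0.01, 0.02])            # 1/Angstrom
eps, m_D = bands(*k)
analytic = np.sqrt(hbar_vF**2 * (k @ k) + m_D**2)
```

Diagonalization reproduces the gapped Dirac dispersion $\varepsilon=\pm\sqrt{\hbar^2 v_F^2 k^2 + m_D^2}$, the reduced limit of the full quasienergy spectrum given below.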
The Dirac mass and the corresponding quasienergy spectrum (obtained through the diagonalization procedure) are\cite{Wu C HX} \begin{equation} \begin{aligned} &m_{D}^{\eta\sigma_{z}\tau_{z}}=|\eta\sqrt{\lambda_{{\rm SOC}}^{2}+a^{2}\lambda^{2}_{R_{2}}k^{2}}s_{z}\tau_{z}-\frac{\overline{\Delta}}{2}E_{\perp}\tau_{z}+M_{s}s_{z}-\eta\hbar v_{F}^{2}\frac{\mathcal{A}}{\Omega}|,\\ &\varepsilon=s\sqrt{a^{2}\lambda^{2}_{R_{2}}k^{2}+(\sqrt{\hbar^{2}v_{F}^{2}{\bf k}^{2} +(\eta\lambda_{{\rm SOC}}s_{z}\tau_{z}-\frac{\overline{\Delta}}{2}E_{\perp}\tau_{z}-\eta\hbar v_{F}^{2}\frac{\mathcal{A}}{\Omega} )^{2}}+M_{s}s_{z}+s\mu)^{2}}, \end{aligned} \end{equation} respectively, where $\mathcal{A}=eAa/\hbar$ is the dimensionless intensity of the vector potential and $s=\pm 1$ is the electron/hole index. The optical response of periodically driven topological systems (time-Floquet systems), which are non-Hermitian, has been widely studied\cite{Koutserimpas T T}; the existence of $\mathcal{PT}$-symmetry relies on the frequency of the applied light. It is believed that related events in a quantum system can be connected through a certain operator $\hat{f}$ in the form $B=\hat{f}A$\cite{Li C C}; e.g., through the SU(2) evolution operator, the effective Floquet Hamiltonian of the time-Floquet system, taking the Berry connection (Berry gauge potential) into account, reads \begin{equation} \begin{aligned} H_{{\rm eff}}=\frac{i\hbar}{T}{\rm log}[\hat{\mathcal{T}}e^{-\frac{i}{\hbar}\int^{T}_{0}H(t)dt}] -\frac{i\hbar}{T}{\rm log}[\hat{\mathcal{P}}e^{\frac{i}{\hbar}\int_{\mathcal{C}}A({\bf k})d{\bf k}}], \end{aligned} \end{equation} where $\hat{\mathcal{T}}$ is the time-ordering operator and $\hat{\mathcal{P}}$ is the path-ordering operator along the electron contour $\mathcal{C}$ in phase space. Note that the contour is encircled by the quasiparticle in momentum space and carries the time dependence of the Bloch bands, which is required by the anomalous velocity term due to the Berry curvature.
For the case of a light-driven contour in momentum space, $\mathcal{C}_{k}=\eta N\hbar\Omega$, where $\mathcal{C}_{k}$ is the projection of $\mathcal{C}$ on momentum space and $N$ is the number of Fourier modes. The eigenfunction of the system satisfies $\Psi(t)=e^{iN\hbar\Omega t}\psi(t)$, with the eigenvalues in the diagonal blocks of the Floquet Hamiltonian shifted by $\eta N\hbar\Omega$, while the time-dependent monochromatic harmonic perturbation enters the off-diagonal blocks of the Floquet Hamiltonian\cite{Wu C H2,Wu C H5,Perez-Piskunow P M}. The total Hamiltonian considering the effect of circularly polarized light is $H=\frac{1}{T}\int^{T}_{0}H(t)e^{-iN\hbar\Omega t}dt+V(t)$, where $V(t)$ is the electron-radiation interaction \begin{equation} \begin{aligned} V(t)=e^{-i\eta N\hbar\Omega t}(-ie{\bf v}A)+e^{i\eta N\hbar\Omega t}(-ie{\bf v}A)^{\dag}, \end{aligned} \end{equation} where $A=E/\hbar\Omega$ is the vector-potential amplitude with $E$ the complex amplitude of the electric field, and ${\bf v}$ is the interband transition velocity matrix element, which, when the Berry correction is taken into consideration, reads\cite{Wu C HX} \begin{equation} \begin{aligned} {\bf v}=&\langle \psi|\frac{\partial H}{\hbar\partial {\bf k}}|\psi' \rangle +\hbar\partial_{t}{\bf k}\cdot\Omega({\bf k})\\ =& (v_{F}+\frac{i\eta a\lambda_{R_{2}}}{\hbar})(1+\eta\frac{m_{D}^{\eta s_{z}\tau_{z}}}{\varepsilon})\\ &+[\partial_{r}V({\bf r})+\frac{e}{\hbar}\partial_{\mu}\Phi({\bf k})e^{\mu}-\frac{e}{\hbar}{\bf v}_{g}\times{\bf B}({\bf k})] \cdot\frac{1}{2}\frac{2\eta \hbar^{2}v_{F}^{2}m^{\eta s_{z}\tau_{z}}_{D}}{2\varepsilon(4(m^{\eta s_{z}\tau_{z}}_{D})^{2}+\hbar^{2}v_{F}^{2}k^{2})}, \end{aligned} \end{equation} where $\psi$ is the Bloch wave function in the conduction band while $\psi'$ is the Bloch wave function in the valence band, and $\Omega({\bf k})$ is the Berry curvature, which is nonzero when the inversion symmetry is broken; it reads \begin{equation} \begin{aligned} \Omega({\bf
k})=-{\rm Im}\left[ \sum_{\psi'\neq\psi}\frac{\langle\psi|\partial_{k}H_{k}|\psi'\rangle\times\langle\psi'|\partial_{k}H_{k}|\psi\rangle}{(\varepsilon_{\psi}-\varepsilon_{\psi'})^{2}}\right]. \end{aligned} \end{equation} Although the Berry curvature has been ignored in most computations of the velocity operator, it is very important for the non-adiabatic correction in systems with broken inversion symmetry or broken time-reversal symmetry (due to the off-resonance light or the competition between the Zeeman coupling and the Rashba coupling), and it is closely related to the quantum anomalous Hall effect\cite{Zhao A,Jungwirth T}, especially for nonrelativistic particles in the semiclassical limit. In the absence of the Berry correction, it has been found that $\partial H_{cv}/\hbar\partial k=2\partial H_{cc}/\hbar\partial k$, where $H_{cv}$ describes the interband transition and $H_{cc}$ describes the intraband transition. With the Berry correction, the Berry term vanishes for the intraband velocity according to the above expression (since $\psi=\psi'$), so that $\partial H_{cv}/\hbar\partial k\neq 2\partial H_{cc}/\hbar\partial k$. \section{Optical absorption in the presence of Dirac-cone warping} The unpolarized optical absorption coefficient for the spin- and valley-degenerate case reads\cite{Huang R} \begin{equation} \begin{aligned} \alpha(\Omega)=\frac{g_{s}g_{v}}{2n_{t}}\alpha_{0}\int_{\mathcal{C}=\hbar\Omega}d\phi=\frac{4\pi}{n_{t}}\alpha_{0}, \end{aligned} \end{equation} where $\alpha_{0}=\frac{e^{2}}{2\epsilon_{0}h c}\approx 1/137.036$ is the Sommerfeld fine-structure constant, $g_{s}g_{v}$ denotes the spin and valley degeneracy, and $n_{t}$ is the refractive index.
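The Berry-curvature formula above can be checked numerically for a gapped two-band Dirac model, for which the conduction-band result is known in closed form. The parameters below are illustrative ($\hbar=1$ units), not silicene values:

```python
import numpy as np

# Check Omega = -2 Im[<c|dH/dkx|v><v|dH/dky|c>] / (e_c - e_v)^2 for
# H = v*(kx*sx + ky*sy) + m*sz, against the analytic conduction-band
# result Omega_z = -v^2*m / (2*(v^2*k^2 + m^2)^(3/2)).
v, m = 1.0, 0.2
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0]).astype(complex)

def berry_curvature(kx, ky):
    H = v * (kx * sx + ky * sy) + m * sz
    eps, U = np.linalg.eigh(H)          # eps[0] valence, eps[1] conduction
    val, con = U[:, 0], U[:, 1]
    num = (con.conj() @ (v * sx) @ val) * (val.conj() @ (v * sy) @ con)
    return -2.0 * np.imag(num) / (eps[1] - eps[0]) ** 2

kx, ky = 0.3, -0.1
k2 = kx**2 + ky**2
analytic = -v**2 * m / (2.0 * (v**2 * k2 + m**2) ** 1.5)
numeric = berry_curvature(kx, ky)
```

The product of matrix elements is gauge invariant, so the arbitrary eigenvector phases returned by `eigh` drop out.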
For the special case of the Dirac point (${\bf k}=0$) with a gapless band structure, vanishing reflectivity and vanishing SOC\cite{Matthes L}, where the transition is dominated by the intraband (mainly conduction-band) transition, the well-known optical absorption coefficient is obtained as \begin{equation} \begin{aligned} \alpha(0)=\frac{g_{s}g_{v}}{2n_{t}}\alpha_{0}\pi=\frac{\pi}{n_{t}}\alpha_{0}=\frac{e^{2}}{4n_{t}\hbar\varepsilon_{0}c}, \end{aligned} \end{equation} where $\varepsilon_{0}$ is the permittivity of vacuum and $c$ is the speed of light. Note that this ideal optical absorption coefficient is only correct in the absence of interband transitions and SOC, i.e., both the frequency and the chemical potential need to be zero (undoped), and for chiral fermions with a gapless band structure. In the semiclassical limit, the above result becomes \begin{equation} \begin{aligned} \alpha^{*}(0)=\frac{g_{s}g_{v}}{2n_{t}}\alpha_{0}\int_{\mathcal{C}=\hbar\Omega} [1+(\partial_{t}{\bf k}\times\Omega({\bf k}))^{2}\hbar^{2}(\partial_{{\bf k}}\varepsilon)^{-2} +2\hbar(\partial_{{\bf k}}\varepsilon)^{-1}{\bf k}\times\Omega({\bf k})]d\phi. \end{aligned} \end{equation} In fact, in the presence of symmetry breaking (of the inversion symmetry or the time-reversal invariance, as required for a nonzero Berry curvature), the gap is generally nonzero; the ideal absorption coefficient $\frac{\pi}{n_{t}}\alpha_{0}$ is then hard to realize, since the interband transition dominates in the gapped case. The optical absorption between the $\pi$ and $\pi^{*}$ bands can also be probed by high-harmonic spectroscopy or a synchrotron radiation source\cite{Kobayashi K}.
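The universal constants quoted above are easy to verify numerically:

```python
from scipy.constants import e, epsilon_0, h, c, pi

# alpha_0 = e^2/(2*eps0*h*c) is the fine-structure constant, and
# pi*alpha_0 ~ 2.3% is the single-layer Dirac absorbance for n_t = 1
# and g_s*g_v = 4.
alpha0 = e**2 / (2.0 * epsilon_0 * h * c)
absorbance = pi * alpha0
```

This reproduces $\alpha_0\approx 1/137.036$ and $\pi\alpha_0\approx 2.3\%$.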
It is also found that, at small frequencies, the two-dimensional universal absorbance becomes\cite{Matthes L2} \begin{equation} \begin{aligned} \alpha(0)=\frac{g_{s}g_{v}}{n_{t}}[\alpha_{0}\pi(1+\frac{1}{16t^{2}}\hbar^{2}\omega^{2})], \end{aligned} \end{equation} where $t$ is the nearest-neighbor hopping modified by the light, which ranges from $1.09$ eV to $0.4$ eV ($t=1.6$ eV in the $\omega=0$ case). In the low-energy limit, we usually consider only the direct interband transition; however, indirect interband transitions mediated by the remote band exist\cite{Huang R}, and these give rise to the warping effect. Unlike graphene or MoS$_{2}$\cite{Kormányos A}, which have a trigonally warped (three-fold) constant-energy contour (Dirac cone) at high-energy states\cite{Tikhonenko F V}, silicene on Ag(111)\cite{Feng B} or other substrates shows hexagonal warping with six-fold symmetry of the Dirac cone and of the local density of states (LDOS), in both momentum space and real space, like Bi$_{2}$Te$_{3}$\cite{Fu L}. The plots are presented in Fig.1, where we can see that the hexagonal warping is more obvious in the LDOS for large $t$. We also see that the lower the energy, the higher the degree of isotropy of the constant-energy contour in the Brillouin zone (BZ), and thus the higher the isotropy of the quasiparticle scattering. The hexagonal warping strongly damps the quasiparticle interference\cite{Feng B} as well as the backscattering; it thus retards the decay of the LDOS (the Friedel oscillation) and breaks the weak intravalley localization\cite{Tikhonenko F V}. The quasiparticle interference here is due to the broken quasiparticle chirality. The intervalley scattering, which is suppressed in the clean limit, corresponds to short-wavelength interference (large ${\bf k}$) and competes with the quasiparticle chirality, while the intravalley scattering corresponds to long-wavelength interference (small ${\bf k}$).
For a time-reversal-invariant system, the opposite spin (or pseudospin) also suppresses the intervalley backscattering in a warped system. The saddle-point Van Hove singularities, shown in the LDOS plot of Fig.1, are located at the $M$ points of the BZ boundary, and the Fermi surface is generated by quasiparticle scattering along the Fermi patches between two opposite edges\cite{Feng B}. The DOS is not differentiable at these points due to the saddle-point singularity\cite{John R}. However, for a large gap and in the presence of remote-band coupling\cite{Huang R}, minimum-point Van Hove singularities\cite{Souslov A} appear, which we do not discuss in this article. For bilayer silicene or bilayer graphene, another source of trigonal warping\cite{Ezawa M,McCann E,Morell E S} is the interlayer hopping, which has three directions for hopping from a site in the bottom layer to the upper layer; it is not negligible, especially under a laser in the terahertz range. \section{Dynamical polarization at low frequency within the random phase approximation} Since the low-frequency (infrared or visible) absorbance is similar among the two-dimensional group-IV crystals\cite{Matthes L2}, like graphene, silicene, and germanene, we take only silicene as an example here.
First, within the random phase approximation (RPA) and in the presence of strong SOC, the dielectric function in the one-loop approximation reads \begin{equation} \begin{aligned} \varepsilon(\omega,{\bf q})=1-\frac{2\pi e^{2}}{\epsilon_{0}\epsilon{\bf q}}\Pi(\omega,{\bf q}), \end{aligned} \end{equation} where $\epsilon=2.45$ is the static background dielectric constant for the air/SiO$_{2}$ substrate, and $\Pi(\omega,{\bf q})$ is the dynamical polarization function\cite{Wu C H3} \begin{equation} \begin{aligned} \Pi(\omega,{\bf q})=g_{s}g_{v}\frac{2\pi e^{2}}{\epsilon_{0}\epsilon}\sum_{m_{D}} \int_{BZ}\frac{d^{2}k}{4\pi^{2}}\sum_{{\bf q};s,s'=\pm 1}\frac{f_{s({\bf k}+{\bf q})}-f_{s'{\bf k}}}{s\varepsilon_{{\bf k}+{\bf q}} -s'\varepsilon_{{\bf k}}-\omega-i\delta}{\bf F}_{ss'}({\bf k},({\bf k}+{\bf q})), \end{aligned} \end{equation} where $s,s'$ are the band indices ($ss'=1$ for the intraband case and $ss'=-1$ for the interband case), and the Coulomb-interaction-induced transition matrix element is \begin{equation} \begin{aligned} {\bf F}_{ss'}({\bf k},({\bf k}+{\bf q}))=\frac{1}{2}(1+ss'{\rm cos}\theta_{\sigma\eta})=\frac{1}{2}\left[1+ss'(\frac{{\bf k}({\bf k}+{\bf q})}{E_{{\bf k}}E_{{\bf k}+{\bf q}}} +\frac{m_{D}^{2}}{E_{{\bf k}}E_{{\bf k}+{\bf q}}})\right], \end{aligned} \end{equation} where $\theta_{\sigma\eta}$ is the scattering angle in the scattering phase space, in which anisotropic intervalley scattering is possible through the edge states. The transition matrix element ${\bf F}_{ss'}({\bf k},({\bf k}+{\bf q}))$ includes both the interband ($ss'=-1$) and intraband ($ss'=1$) transitions.
The scattering angle in momentum space between ${\bf k}$ and ${\bf k}+{\bf q}$ is $\theta$ and obeys ${\rm cos}\,\theta=\langle\chi({\bf k})|\chi({\bf k}+{\bf q})\rangle=(k+q{\rm cos}\,\phi)/\sqrt{k^{2}+q^{2}+2kq{\rm cos}\,\phi}$, where $\phi$ is the angle between ${\bf k}$ and ${\bf q}$, and $|\chi({\bf k})\rangle=\psi_{s}^{*}({\bf k})\psi_{s'}({\bf k})$ and $|\chi({\bf k}+{\bf q})\rangle=\psi_{s}({\bf k}+{\bf q})\psi_{s'}^{*}({\bf k}+{\bf q})$ are built from the eigenvectors $\psi$ of the Hamiltonian\cite{Wu C H3}. \subsection{$\mu=2$ eV$>\Delta$} First, we discuss the case of a chemical potential $\mu=2$ eV, which is larger than the band gap. In the ${\bf q}\sim\omega$ space formed by the scattering wave vector ${\bf q}$ and the frequency $\omega$, the single-particle excitation (electron-hole continuum) regime can be divided into two parts, the intraband part and the interband part, as shown in Fig.2, where the blue line surrounds the interband part and the red line surrounds the low-energy intraband part. The regions surrounded by the blue line and by the red line can be expressed analytically as\cite{Wu C H3,Pyatkovskiy P K} \begin{equation} \begin{aligned} \mu+\sqrt{({\bf q}-{\bf k}_{F})^{2}+ (m_{D}^{\eta \sigma_{z}\tau_{z}})^{2}}<\omega<\mu+\sqrt{({\bf q}+{\bf k}_{F})^{2}+ (m_{D}^{\eta \sigma_{z}\tau_{z}})^{2}}\\ \omega<\mu-\sqrt{({\bf q}-{\bf k}_{F})^{2}+ (m_{D}^{\eta \sigma_{z}\tau_{z}})^{2}}, \end{aligned} \end{equation} respectively, where ${\bf k}_{F}=\sqrt{\mu^{2}-(m_{D}^{\eta \sigma_{z}\tau_{z}})^{2}}$ is the Fermi wave vector. The above expressions are valid for any value of the chemical potential. We can see that, for a band gap $\Delta=2$ eV (i.e., equal to the chemical potential), the single-particle excitation regime vanishes in the region shown in the last panel of Fig.2, and thus the plasmon mode remains undamped in this region.
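The boundary expressions quoted above are simple to evaluate. In this sketch $\mu$ and $m_D$ are illustrative numbers and $\hbar v_F$ is absorbed into the units, so that $q$ is measured in energy units:

```python
import numpy as np

# Single-particle-excitation boundaries as quoted in the text,
# for illustrative mu = 2 eV and m_D = 0.5 eV.
mu, m_D = 2.0, 0.5
k_F = np.sqrt(mu**2 - m_D**2)

def interband_bounds(q):
    lower = mu + np.sqrt((q - k_F) ** 2 + m_D**2)
    upper = mu + np.sqrt((q + k_F) ** 2 + m_D**2)
    return lower, upper

def intraband_upper(q):
    return mu - np.sqrt((q - k_F) ** 2 + m_D**2)

q = np.linspace(0.0, 2.0 * k_F, 101)
lo, up = interband_bounds(q)
```

Plotting `lo`, `up`, and `intraband_upper(q)` against `q` reproduces the blue- and red-bounded regions of Fig.2.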
The value of ${\bf q}=2{\bf k}_{F}$ decreases with increasing band gap, as indicated by the red arrow. In Fig.3, we show the dynamical polarization in the ${\bf q}\sim\omega$ space. We note that the imaginary part of the polarization function is not always negative, as shown in the figure (see also Refs.\cite{Wu C H3,Kotov V N}). Fig.4 shows the dielectric function obtained above; the real part of the dielectric function is much larger than the imaginary part, and there is a peak in the real part along the line ${\bf q}=2{\bf k}_{F}$, as indicated by the red arrow. Fig.5 shows the absorption of the radiation (not the optical absorbance), which reads \begin{equation} \begin{aligned} \alpha({\bf q},\omega)=-{\rm Im}\frac{1}{\varepsilon({\bf q},\omega)}=-L({\bf q},\omega), \end{aligned} \end{equation} where $L({\bf q},\omega)$ is the energy loss function. Thus the energy loss function is simply the negative of the absorption, and both depend on the band gap and on the spin-valley coupled selection rule\cite{Wu C HX}. From Fig.5, negative optical absorption appears for small band gaps (e.g., $\Delta=0.02$ eV), which has no practical physical meaning. We note that the energy loss function provides the electron-hole spectral density in the single-particle excitation regime, and the damping in the single-particle excitation regime also leads to resonances of the energy loss function\cite{Wu C H3}. The energy-loss process exists as long as the frequency $\omega$ is nonzero, and it also results in a loss of DOS, just like that caused by chiral quasiparticle scattering, although the latter is suppressed by the hexagonal warping. The absorption vanishes in the last panel of Fig.5, since the single-particle excitation regime vanishes there (see Fig.2).
For the case of a large band gap (in the adiabatic approximation), the electron-ion plasmon arises at the plasmon frequency in the presence of the long-range Coulomb interaction\cite{Wunsch B}. In this case, the dispersion of the acoustic phonon can be obtained as\cite{Wunsch B} \begin{equation} \begin{aligned} \omega_{{\rm ph}}=\sqrt{\frac{\alpha E_{ion}}{\epsilon_{0}\epsilon{\bf q}+g_{s}g_{v}\alpha\mu/4\pi}}{\bf q}, \end{aligned} \end{equation} where $E_{ion}$ is the ion confinement energy and $\alpha=e^{2}/\epsilon_{0}\epsilon\hbar v_{F}$ is the fine-structure constant. From this expression, the phonon dispersion follows the $\sqrt{{\bf q}}$-behavior, as in most two-dimensional materials. The phonon dispersion is shown as the dashed line in the first panel of Fig.5; it is mainly distributed in the low-frequency regime (while the optical-phonon dispersion is mainly distributed in the high-frequency regime) and is independent of the band gap. \subsection{$\mu=0.01$ eV$<\Delta$} We then discuss the case in which the chemical potential is smaller than the band gap, setting $\mu=0.01$ eV. In this case, the imaginary part of the dielectric function reads\cite{Tabert C J,Pyatkovskiy P K,Scholz A} \begin{equation} \begin{aligned} {\rm Im}[\varepsilon({\bf q},\omega)]=\frac{2\pi e^{2}}{\epsilon_{0}\epsilon{\bf q}}\frac{{\rm q}^{2}}{16}\theta(\omega^{2}-{\bf q}^{2} -4(m_{D}^{\eta\sigma_{z}\tau_{z}})^{2})(\frac{1}{\sqrt{\omega^{2}-{\bf q}^{2}}}+\frac{4(m_{D}^{\eta\sigma_{z}\tau_{z}})^{2}}{(\omega^{2}-{\bf q}^{2})^{3/2}}).
\end{aligned} \end{equation} Through the Kramers-Kronig relation \begin{equation} \begin{aligned} {\rm Re}[\Pi({\bf q},\Omega)]=\frac{2}{\pi}\mathcal{P}\int^{\infty}_{0}d\omega\frac{\omega{\rm Im}[\Pi({\bf q},\omega)]}{\omega^{2}-\Omega^{2}}, \end{aligned} \end{equation} the real part of the dielectric function can be obtained as \begin{equation} \begin{aligned} {\rm Re}[\varepsilon({\bf q},\omega)]=&\frac{2\pi e^{2}}{\epsilon_{0}\epsilon{\bf q}}\frac{{\rm q}^{2}}{4\pi} [\frac{m_{D}^{\eta\sigma_{z}\tau_{z}}}{{\bf q}^{2}-\omega^{2}}+\frac{{\bf q}^{2}-\omega^{2}-4(m_{D}^{\eta\sigma_{z}\tau_{z}})^{2}}{4|{\bf q}^{2}-\omega^{2}|^{3/2}}\\ &(\theta({\bf q}-\omega){\rm arccos}\frac{{\bf q}^{2}-\omega^{2}-4(m_{D}^{\eta\sigma_{z}\tau_{z}})^{2}}{\omega^{2}-{\bf q}^{2}-4(m_{D}^{\eta\sigma_{z}\tau_{z}})^{2}} -\theta(\omega-{\bf q}){\rm ln}\frac{(2m_{D}^{\eta\sigma_{z}\tau_{z}}+\sqrt{\omega^{2}-{\bf q}^{2}})^{2}}{|\omega^{2}-{\bf q}^{2}-4(m_{D}^{\eta\sigma_{z}\tau_{z}})^{2}|})], \end{aligned} \end{equation} where $\theta$ is the step function. In this case, the dynamical polarization, dielectric function, and absorption are shown in Figs.6-8, respectively. We obtain a purely negative dynamical polarization and a purely positive dielectric function, as shown in Figs.6-7. In contrast to Fig.5, the absorption for $\mu<\Delta$ is purely positive and mainly located in the interband single-particle excitation regime, as shown in Fig.8. This reveals the importance of the value of the chemical potential relative to the band gap (which is also important for estimating the effect of the Berry correction, as presented in Sec.2).
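The Kramers-Kronig transform above can be checked numerically on a model response whose real and imaginary parts are both known in closed form. Here a damped-oscillator susceptibility $\chi(\omega)=1/(\omega_0^2-\omega^2-i\gamma\omega)$ is used as a stand-in (illustrative parameters, not the silicene polarization):

```python
import numpy as np

# Principal-value evaluation of
#   Re[chi](W) = (2/pi) P int_0^inf  w*Im[chi](w) / (w^2 - W^2) dw
# on a midpoint grid, so the pole at w = W falls symmetrically
# between grid points and the singular contributions cancel.
w0, g = 3.0, 0.5

def im_chi(w):
    return g * w / ((w0**2 - w**2) ** 2 + g**2 * w**2)

def re_chi_exact(w):
    return (w0**2 - w**2) / ((w0**2 - w**2) ** 2 + g**2 * w**2)

h = 1e-3
w = np.arange(h / 2.0, 50.0, h)       # midpoint grid
W = 2.0                                # lies between two grid points
integrand = w * im_chi(w) / (w**2 - W**2)
re_chi_kk = (2.0 / np.pi) * np.sum(integrand) * h
```

The discretized principal-value sum reproduces the closed-form real part at $\Omega=2$ to better than one per cent.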
\section{Optical properties of silicene, MoS$_{2}$, and black phosphorus} The scattering momentum here is mainly contributed by charged-impurity scattering (with Coulomb interaction), while for the optical transition the photon wave vector almost vanishes, i.e., we work in the limit ${\bf q}\rightarrow 0$, which is also called the head (or wings) of the polarizability\cite{Gajdos M}. We focus only on the interband optical transition through a finite gap; the transition matrix then becomes\cite{Matthes L2} \begin{equation} \begin{aligned} \sqrt{{\bf F}({\bf k})} =\frac{e\hbar}{i\sqrt{4\pi\epsilon_{0}}m_{0}}\frac{\langle\psi;{\bf k}|m_{0}{\bf v}\cdot{\bf e}_{\bf q}|\psi';{\bf k}\rangle}{\varepsilon_{\psi}-\varepsilon_{\psi'}}, \end{aligned} \end{equation} where ${\bf v}$ is the velocity operator and ${\bf e}_{\bf q}$ denotes the direction of the scattering wave vector, which is also the direction of the dielectric function. At the Dirac cone (i.e., ${\bf k}=0$), the velocity operator above can be written in the same form as Eq.(5) in the nonadiabatic approximation. We carry out density functional theory (DFT) calculations based on the generalized gradient approximation (GGA) in the Quantum-ESPRESSO package\cite{Giannozzi P}. The plane-wave energy cutoff is set to 400 eV for the ultrasoft pseudopotential in our calculations, and the structures are relaxed until the Hellmann-Feynman forces on the atoms are below 0.01 eV/\AA\ . The optical properties, including the optical absorption, have been studied for group-IV materials like silicene, graphene, germanene and tinene\cite{Matthes L,Singh N}; in this section, we focus on several typical materials in groups IV-VI: monolayer silicene, monolayer MoS$_{2}$, and monolayer black phosphorus (phosphorene), which all have strong intrinsic SOC and hexagonal layered structure, and all exhibit rich optical response characteristics\cite{Çakır D}.
The results of the DFT calculations are shown in Fig.6, where we present the in-plane component of the absorption, the energy loss function, the optical parameters, and the dielectric function of these three materials. A clear splitting of the main peak in the optical absorption is present, which cannot be found in the energy loss function; we think this difference is partly due to the local-field effect (the off-diagonal elements of the dielectric function), which is ignored in the approximation ${\bf G}+{\bf q}\rightarrow 0$\cite{Gajdo? M}, where ${\bf G}$ is a reciprocal-lattice vector. The neglect of the local-field effect also reduces the number of plasmon branches owing to the suppression of the interband transition\cite{Wu C H3}. For systems with a quasi-one-dimensional band structure, like black phosphorus\cite{Tran V}, the many-electron effect sometimes needs to be taken into account except in the low-temperature limit; this is similar to the case we discussed\cite{Wu C H2,Wu C H8} with considerable long- and short-range Coulomb effects. Due to the many-electron effect, the band gap as well as the related excitonic effect are also affected by the self-energy matrix $\Sigma$\cite{Tran V}; a large nontrivial band gap may be opened by the many-electron effect. The variable self-energy also leads to a nonzero vertex contribution, which is important especially in the variational cluster approximation. It is also found that, for a superlattice arrangement of the layered structure, when ${\rm Re}\varepsilon({\bf q}\rightarrow 0,\omega)\approx 1$, the shapes of the absorption and the energy loss function may be more similar\cite{Matthes L}. A small broadened peak near 10 eV in the absorption is observed both for silicene and black phosphorus; it is due to the transition between the $s$-hybridized orbital and the $\pi^{*}$ band in the parallel-band region of group-IV materials. 
We find that in the ${\bf q}\rightarrow 0$ limit, the initial refractive index $n_{t}$ decreases from monolayer silicene to monolayer black phosphorus: 4.05 eV, 18 eV, and 2.15 eV for monolayer silicene, monolayer MoS$_{2}$, and monolayer black phosphorus, respectively. This means that, in this order, the coupling between the $\pi$-band and $\sigma$-band decreases while the electron mobility increases\cite{John R}. Except for monolayer black phosphorus, the maximum refractive indices almost all appear at zero photon energy. For the dielectric function, we see that the real parts for all three materials have a negative region, which implies the existence of a plasmon frequency, since for small damping the plasmon frequency can be approximately obtained by solving ${\rm Re}[\varepsilon({\bf q},\omega_{p})]=0$. Thus, from the plots of the dielectric function we obtain the plasmon frequencies of monolayer silicene, monolayer MoS$_{2}$, and monolayer black phosphorus as 8 eV, 18 eV, and 11 eV, respectively. These results also agree with the plots of the energy loss function, whose peaks likewise reveal the plasmon frequency; the peaks are located at 8 eV, 17.5 eV, and 11 eV for monolayer silicene, monolayer MoS$_{2}$, and monolayer black phosphorus, respectively, very close to the values read off from the dielectric function. That is to say, both the peaks of the energy loss function and the dips of the real part of the (in-plane) dielectric function are due to coherent collective excitation modes. Note that the plasmon frequency does not exist in graphene, which has a purely positive real dielectric function as presented in Ref.\cite{John R}, and thus exhibits purely metallic behavior, with the optical response mainly in the ultraviolet regime. 
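The small-damping prescription ${\rm Re}[\varepsilon({\bf q},\omega_{p})]=0$ used above can be illustrated with a simple root-finder. The sketch below uses a toy Drude-type dielectric function rather than the full expression of this paper; the parameters $\omega_{pl}=8$ eV (chosen to echo the silicene value) and $\gamma=0.5$ eV are arbitrary assumptions for the illustration.

```python
import math

def re_eps(w, wp=8.0, gamma=0.5):
    """Re part of a toy Drude dielectric function eps(w) = 1 - wp^2/(w(w + i*gamma))."""
    return 1.0 - wp**2 / (w**2 + gamma**2)

def bisect_root(f, a, b, tol=1e-10):
    """Plain bisection; assumes f changes sign on [a, b]."""
    fa = f(a)
    assert fa * f(b) < 0, "root must be bracketed"
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m          # sign change lies in [a, m]
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

# Re[eps] is negative below and positive above the plasmon frequency,
# so the zero crossing is bracketed by [1, 20] eV
w_plasmon = bisect_root(re_eps, 1.0, 20.0)
w_exact = math.sqrt(8.0**2 - 0.5**2)  # analytic zero crossing of this model
```

For a tabulated ${\rm Re}[\varepsilon]$ curve (as in Fig.6), the same bracketing step can be applied between neighboring grid points of opposite sign.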
We can also see that the imaginary part of the dielectric function is similar in shape to the absorption, which means that a positive imaginary part of the dielectric function corresponds to absorption gain. In the energy loss function of silicene, there are also two small peaks, at 2 eV and 5 eV, in front of the main peak at 8 eV, contributed by the $\pi$-plasmon and the $\sigma$-plasmon, respectively. Note that here the $\pi$-plasmon peak is rather weak (lower than 0.3 eV); however, in the doped case it is possible to produce a high-energy $\pi$-plasmon (about 5$\sim$6 eV) due to the Van Hove singularities, as observed in doped graphene\cite{Yuan S}. Such a high-energy $\pi$-plasmon corresponds to a large collective excitation which decays into electron-hole pairs; unlike the acoustic plasmon mentioned above, which follows the $\sqrt{{\bf q}}$-behavior, it follows a linear behavior in ${\bf q}$, just like the classical bilayer silicene discussed in our previous report\cite{Wu C H3}. The reason for the linear behavior of the high-energy $\pi$-plasmon is the fast damping in the small-${\bf q}$ region, or even in the ${\bf q}\rightarrow 0$ limit\cite{Yuan S}, which makes it hard to find stable plasmons in this case. Note that all the optical parameters obtained here are smaller than those of the host materials (bulk form). We mainly focus on the infrared and visible regions of the photon energy; at negative photon energy, however, some interesting phenomena have been found, like the multi-photon resonance due to the transition between subbands\cite{Yin X}. \section{Conclusion} The optical response is of special interest for the intriguing materials of groups IV, V, and VI. We discussed the optical properties of silicene, MoS$_{2}$, and black phosphorus, which is important for their exciting potential applications. 
In the semiclassical case with a small band gap (i.e., the non-adiabatic case), the Berry correction is also important for the interband and intraband transition matrix elements. In addition, the opposite Berry curvature and spin/orbital magnetic moment between neighboring valleys give rise to the topological spin/valley Hall effect. For silicene, the dynamical polarization, the dielectric function, and the absorption of the radiation are discussed in the absence of the many-electron effect. The many-electron effects on the optical response need to be considered in the presence of segments of quasi-one-dimensional bands, as in black phosphorus; due to the many-electron effect, the self-energy and the related excitonic effect need to be taken into account\cite{Yang L}. The Coulomb interaction (electron-electron interaction) also leads to damping in the self-energy of the Dirac quasiparticle (due to electron-phonon coupling, electron collisions, or acoustic-phonon scattering) in the low-energy limit, and it follows the Kramers-Kronig relation; the energy then becomes $\varepsilon\rightarrow \varepsilon+\Sigma, \ {\rm Re}\Sigma \sim g_{c}e^{1/g_{c}},\ {\rm Im}\Sigma \sim g^{2}_{c}e^{1/g_{c}}$\cite{González J,Khveshchenko D V}, with the first term the linear dispersion term and the second term $\Sigma$ the nonlinear dispersion term due to the screening effect. The anisotropic effects induced by the hexagonal warping structure of silicene or by charged impurities, and the anisotropic polarization induced by polarized incident light, are also discussed. Our results highlight the great potential of the discussed materials for optoelectronic applications. \end{large} \renewcommand\refname{References}
\section{Interacting Hopf Algebras: a Complete Axiomatisation of Affine Circuits} \label{sec:axiomatisation} This appendix contains the equational theory of affine relations over a field $\field$, called the theory of Affine Interacting Hopf algebras ($\AIH{\field}$), as it appears in~\cite{interactinghopf} (for the linear fragment) and~\cite{BonchiPSZ19} (for the affine extension). The axioms are in Figure~\ref{fig:ih}; we briefly explain them below. \begin{figure*}[htbp] \begin{align*} \tikzfig{ax/add-associative}\;\myeq{$\circ$-as}\;\; \tikzfig{ax/add-associative-1}&\qquad \tikzfig{ax/add-commutative}\myeq{$\circ$-co}\;\;\, \tikzfig{ax/add}\qquad \tikzfig{ax/add-unital-left}\myeq{$\circ$-unl}\;\tikzfig{ax/id} \\ \tikzfig{ax/co-add-associative-1}\;\;\myeq{$\circ$-coas}\; \tikzfig{ax/co-add-associative}&\qquad \tikzfig{ax/co-add-commutative}\;\;\,\myeq{$\circ$-coco}\;\; \tikzfig{ax/co-add}\qquad \tikzfig{ax/co-add-unital-left}\;\myeq{$\circ$-counl}\;\tikzfig{ax/id} \\ \tikzfig{ax/copy-associative}\;\;\,\myeq{$\bullet$-coas}\;\; \tikzfig{ax/copy-associative-1}&\qquad \tikzfig{ax/copy-commutative}\;\;\,\myeq{$\bullet$-coco}\;\; \tikzfig{ax/copy}\qquad \tikzfig{ax/copy-unital-left}\;\myeq{$\bullet$-counl}\;\tikzfig{ax/id} \\ \tikzfig{ax/co-copy-associative}\;\myeq{$\bullet$-as}\;\; \tikzfig{ax/co-copy-associative-1}&\qquad \tikzfig{ax/co-copy-commutative}\,\myeq{$\bullet$-co}\;\, \tikzfig{ax/co-copy}\qquad \tikzfig{ax/co-copy-unital-left}\myeq{$\bullet$-unl}\;\tikzfig{ax/id} \end{align*} \hdashrule{\linewidth}{1pt}{1pt} \begin{equation*} \tikzfig{ax/add-copy-bimonoid}\;\;\myeq{$\circ\bullet$-bi}\;\;\tikzfig{ax/add-copy-bimonoid-1} \qquad \tikzfig{ax/add-copy-bimonoid-unit} \,\;\;\,\myeq{$\circ\bullet$-biun}\;\;\;\, \tikzfig{ax/add-bimonoid-unit-1} \qquad \tikzfig{ax/add-copy-bimonoid-counit} \;\;\;\myeq{$\bullet\circ$-biun}\;\;\;\, \tikzfig{ax/add-copy-bimonoid-counit-1}\qquad\tikzfig{ax/bone-white-black}\;\;\myeq{$\circ\bullet$-bo}\;\;\;\tikzfig{empty-diag} 
\end{equation*} \hdashrule{\linewidth}{1pt}{1pt} \vspace{1pt} \begin{equation*} \tikzfig{ax/copy-Frobenius-left}\;\;\myeq{$\bullet$-fr1}\;\; \tikzfig{ax/copy-Frobenius}\;\;\myeq{$\bullet$-fr2}\;\; \tikzfig{ax/copy-Frobenius-right} \qquad \tikzfig{ax/copy-special}\myeq{$\bullet$-sp}\tikzfig{ax/id}\qquad \tikzfig{ax/bone-black}\;\;\myeq{$\bullet$-bo}\;\;\tikzfig{empty-diag} \end{equation*} \vspace{3pt} \begin{equation*} \tikzfig{ax/add-Frobenius-left}\;\;\myeq{$\circ$-fr1}\;\; \tikzfig{ax/add-Frobenius}\;\;\myeq{$\circ$-fr2}\;\; \tikzfig{ax/add-Frobenius-right} \qquad \tikzfig{ax/add-special}\myeq{$\circ$-sp}\tikzfig{ax/id}\qquad\tikzfig{ax/bone-white}\;\;\myeq{$\circ$-bo}\;\;\tikzfig{empty-diag} \end{equation*} \vspace{2pt} \hdashrule{\linewidth}{1pt}{1pt} \begin{align*} \tikzfig{ax/reals-add}\;\myeq{add}\;\tikzfig{ax/reals-add-1} \qquad \tikzfig{ax/zero}\;\myeq{zer}\;\tikzfig{ax/reals-zero}\\ \tikzfig{ax/reals-copy}\;\myeq{dup}\; \tikzfig{ax/reals-copy-1} \qquad \tikzfig{ax/reals-delete}\;\myeq{del}\;\tikzfig{ax/delete} \end{align*} \begin{equation*} \tikzfig{ax/reals-multiplication}\;\myeq{$\times$}\;\tikzfig{ax/reals-multiplication-1} \qquad \tikzfig{ax/reals-sum}\;\myeq{$+$}\;\tikzfig{ax/reals-sum-1}\qquad \tikzfig{ax/reals-scalar-zero}\;\myeq{$0$}\;\tikzfig{ax/reals-scalar-zero-1} \end{equation*} \hdashrule{\linewidth}{1pt}{1pt} \begin{equation*} \tikzfig{ax/scalar-division}\;\myeq{$r$-inv}\; \tikzfig{ax/id} \qquad \tikzfig{ax/id}\;\myeq{$r$-coinv}\;\tikzfig{ax/scalar-co-division}\quad \text{ for } r\neq 0, r\in \field \end{equation*} \vspace{3pt} \hdashrule{\linewidth}{1pt}{1pt} \begin{equation*} \tikzfig{ax/one-copy}\quad \myeq{1-dup}\quad \tikzfig{ax/one2}\qquad\qquad\qquad\qquad \tikzfig{ax/one-delete}\quad \myeq{1-del}\quad \tikzfig{empty-diag} \end{equation*} \begin{equation*} \tikzfig{ax/one-false}\quad \myeq{$\varnothing$}\quad\tikzfig{ax/one-false-disconnect} \end{equation*} \vspace{3pt} \hdashrule{\linewidth}{1pt}{1pt} \begin{equation*} 
\tikzfig{generators/co-one}\;\;\myeq{co1}\;\;\tikzfig{ax/co-one-def}\qquad\quad \tikzfig{generators/co-scalar-r}\;\;\myeq{coreg}\;\;\tikzfig{ax/co-scalar-def} \end{equation*} \caption{Axioms of Affine Interacting Hopf Algebras ($\AIH{\field}$).\label{fig:ih}} \end{figure*} \begin{itemize} \item In the first block, both the black and white structures are commutative monoids and comonoids, expressing fundamental properties of addition and copying. \item In the second block, the white monoid and black comonoid interact as a bimonoid. Bimonoids are one of the two canonical ways in which monoids and comonoids interact, as shown in~\cite{Lack2004a}. \item In the third block, both the black and the white monoid/comonoid pair form an extraspecial Frobenius monoid. The Frobenius equations (\textsf{fr 1}) and (\textsf{fr 2}) are a famous algebraic pattern which establishes a bridge between algebraic and topological phenomena, see~\cite{Carboni1987,Kock2003,Coecke2017}. ``Extraspecial'' refers to the two additional equations, the \emph{special} equation ($\bullet$\textsf{-sp}) and the \emph{bone} equation ($\bullet$-\textsf{bo}). The Frobenius equations, together with the special equation, are the other canonical pattern of interaction between monoids and comonoids identified in~\cite{Lack2004a}. Together with the bone equation, this set of four equations characterises \emph{corelations}, see~\cite{Bruni01somealgebraic,Zanasi16,CF}. \item The equations in the fourth block are parametrised over $r \in \field$ and describe the commutativity of $\tscalar{r}$ with respect to the other operations, as well as multiplication and addition of scalars. \item The fifth block encodes multiplicative inverses of the field, guaranteeing that $\tcoscalar{r}$ behaves as division by $r$. \item The sixth block deals with the truly affine part of the calculus, the constant $\tikzfig{one}$ and its relationship to the other generators. 
The first two equations just say that $\tikzfig{one}$ can be copied and deleted by the black structure, in other words that it denotes a single value. More interestingly, the third equation of this block is justified by the possibility of expressing the empty set, by, for example, \begin{equation} \label{eq:empty-set} \dsem{\tikzfig{ax/empty}}\,=\{(\bullet,1)\}\,\ensuremath{;}\,\{(0,\bullet)\} = \varnothing. \end{equation} The last equation thus guarantees that this diagram behaves like logical false, since for any $c$ and $d$, $\varnothing \oplus c = \varnothing \oplus d = \varnothing$; composing or taking the monoidal product of $\varnothing$ with any relation results in $\varnothing$. \item Finally, the last block constrains the mirror generators for $\tikzfig{one}$ and $\tscalar{r}$ to be obtained from them by bending the wires around, using the black Frobenius structure. Note that there is some redundancy in the presentation, as we could have used only $\tikzfig{one}$ and $\tscalar{r}$ as generators, and taken these equations to be definitions of $\tikzfig{co-one}$ and $\tcoscalar{r}$. \end{itemize} \section{From Matrices to Circuits and Back} \label{sec:mat} Several proofs in Section~\ref{sec:realisability} exploit the ability to represent matrices and vectors in the graphical syntax. Details can be found in~\cite[Sec. 3.2]{ZanasiThesis}, but we recall the basics below. Roughly speaking, for any field $\field$ the theory of matrices lives inside $\IH{\field}$ as the subprop generated by $\tikzfig{./generators/copy},\tikzfig{./generators/delete},\tikzfig{./generators/add},\tikzfig{./generators/zero}$ along with the scalars $\tikzfig{./generators/scalar}$. This means that, using only these generators, we can represent any matrix with coefficients in $\field$. Moreover, reasoning about matrices can be done entirely graphically, as the corresponding equational theory is complete. 
To develop some intuition for this correspondence, let us demonstrate how matrices are represented diagrammatically. Vectors can just be seen as $m\times 1$ matrices. An $m\times n$ matrix $\mathcal{M}_{\scriptscriptstyle d}$ corresponds to a diagram $d$ with $n$ wires on the left and $m$ wires on the right---the left ports can be interpreted as the columns and the right ports as the rows of $\mathcal{M}_{\scriptscriptstyle d}$. The $j$th port on the left is connected to the $i$th port on the right through an $r$-weighted wire whenever the coefficient $({\mathcal{M}_{\scriptscriptstyle d}})_{ij}$ is a nonzero scalar $r\in \field$. When the entry $({\mathcal{M}_{\scriptscriptstyle d}})_{ij}$ is $0$, they are disconnected. Since composition along a wire carries the multiplicative structure of $\field$, we simply draw the connection as a plain wire when $({\mathcal{M}_{\scriptscriptstyle d}})_{ij}=1$. For example, \begin{equation*} \text{the matrix } \label{eq:matrix-diagram} \mathcal{M}_{\scriptscriptstyle d} = \begin{pmatrix} a & 0 & 0 \\ b & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} \end{equation*} is represented by the following diagram: \[d = \tikzfig{ex-matrix}\] Conversely, given a diagram, we recover the matrix by counting weighted paths from left to right ports. We then have $\dsem{d} = \{ (v,\mathcal{M}_{\scriptscriptstyle d}\cdot v) \mid v \in \field^n\}$. \section{Missing Proofs}\label{app:proofs} \begin{proof}[Proof of Theorem~\ref{thm:opsem-morphism}] We need to prove that \begin{enumerate} \item $\osem{c \,\ensuremath{;}\, d}=\osem{c}\,\ensuremath{;}\, \osem{d}$, \item $\osem{c\oplus d}=\osem{c} \oplus \osem{d}$. \end{enumerate} We prove 1 here; 2 is completely analogous. 
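The path-counting reading of this example can be made executable. The sketch below (with arbitrary numeric stand-ins for the field scalars $a$ and $b$, here over $\mathbb{Q}$ via exact fractions) builds $\mathcal{M}_{\scriptscriptstyle d}$ from the weighted left-to-right connections of the diagram $d$ and applies it to a vector, realising $\dsem{d} = \{ (v,\mathcal{M}_{\scriptscriptstyle d}\cdot v) \mid v \in \field^n\}$.

```python
from fractions import Fraction as F

# stand-ins for the field scalars a and b of the example (chosen arbitrarily)
a, b = F(2), F(3)

# weighted left-to-right connections of the diagram d: (row i, column j) -> weight
paths = {(0, 0): a, (1, 0): b, (1, 2): F(1), (2, 0): F(1)}

# absent paths correspond to zero entries of the matrix
M = [[paths.get((i, j), F(0)) for j in range(3)] for i in range(4)]

def apply_matrix(M, v):
    """The denotation of d relates each left vector v to M . v on the right."""
    return [sum(row[j] * v[j] for j in range(len(v))) for row in M]

v = [F(1), F(5), F(7)]
image = apply_matrix(M, v)  # equals [a, b + 7, 1, 0], i.e. [2, 10, 1, 0]
```

Note that the middle column of zeros corresponds to the second left port being disconnected, exactly as in the drawn diagram.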
\medskip By the rule for $\,\ensuremath{;}\,$ in \figref{fig:operationalSemantics}, for each $\zeta \in \osem{c;d}$ there exist two infinite computations starting at time $t$, $$\context{t} c_0 \dtrans{u_t}{v_t}c_1\dtrans{u_{t+1}}{v_{t+1}}c_2\dtrans{u_{t+2}}{v_{t+2}} \dots \quad\text{ and }\quad \context{t} d_0 \dtrans{v_t}{w_t}d_1\dtrans{v_{t+1}}{w_{t+1}}d_2\dtrans{v_{t+2}}{w_{t+2}}\dots $$ for $c_0$ and $d_0$ the initial states of $c$ and $d$ respectively, and such that $\zeta(i)=(u_i,w_i)$. Then, by defining $\sigma(i)=(u_i,v_i)$ and $\tau(i)=(v_i,w_i)$ for all $i\in \mathbb{N}$, we have that $\sigma \in \osem{c}$ and $\tau \in \osem{d}$. By construction, $\sigma$ is compatible with $\tau$ and $\zeta=\sigma ; \tau$. Therefore $\zeta \in \osem{c}\,\ensuremath{;}\,\osem{d}$. Conversely, suppose that $\zeta \in \osem{c}\,\ensuremath{;}\,\osem{d}$. Then there exist $\sigma\in \osem{c}$ and $\tau \in \osem{d}$ such that $\sigma$ and $\tau$ are compatible and $\zeta=\sigma\,\ensuremath{;}\,\tau$. This means that there exist two infinite computations (starting at potentially different times), $$\context{t} c_0 \dtrans{u_t}{v_t}c_1\dtrans{u_{t+1}}{v_{t+1}}c_2\dtrans{u_{t+2}}{v_{t+2}} \dots \quad\text{ and }\quad \context{s} d_0 \dtrans{v_s}{w_s}d_1\dtrans{v_{s+1}}{w_{s+1}}d_2\dtrans{v_{s+2}}{w_{s+2}}\dots $$ for $c_0$ and $d_0$ the initial states of $c$ and $d$ respectively, such that $\sigma(i)=(u_{i},v_{i})$ for $i \geq t$ and $\tau(i)=(v_{i},w_{i})$ for $i \geq s$. Without loss of generality, we can assume that $t\leq s$. We now have two cases: either $t < s$ or $t=s$. \begin{itemize} \item If $t<s$, since $s\leq 0$ by assumption (cf. 
Definition \ref{def:trajectories}) we can apply Lemma \ref{lemma:idle} iteratively to extend the computation $$\context{s} d_0 \dtrans{v_s}{w_s}d_1\dtrans{v_{s+1}}{w_{s+1}}d_2\dtrans{v_{s+2}}{w_{s+2}}\dots $$ by $s-t$ $\dtrans{0}{0}$ transitions into the past, to obtain $$\context{t} d_0 \dtrans{0}{0} d_0 \dtrans{0}{0} \dots d_0 \dtrans{v_s}{w_s}d_1\dtrans{v_{s+1}}{w_{s+1}}d_2\dtrans{v_{s+2}}{w_{s+2}}\dots $$ Clearly, the trajectory associated to this computation is still $\tau$ since $\tau(i) = 0$ for $i<s$. We have now reduced the problem to the next case. \item If $t=s$, since $\zeta=\sigma\,\ensuremath{;}\,\tau$, then $\zeta(i)=(u_i,w_i)$. By the rule for $\,\ensuremath{;}\,$ in \figref{fig:operationalSemantics}, there exists an infinite computation $$\context{t} e_0 \dtrans{u_t}{w_t}e_1\dtrans{u_{t+1}}{w_{t+1}}e_2\dtrans{u_{t+2}}{w_{t+2}} \dots$$ for $e_0$ the initial state of $c\,\ensuremath{;}\, d$. \end{itemize} Therefore $\zeta\in \osem{c \,\ensuremath{;}\, d}$. \end{proof} \begin{proof}[Proof of Proposition~\ref{prop:rational-map}] Suppose that $A$ is rational. Since rational fractions of polynomials form a ring, products and sums of rational fractions are still rational, and therefore $A\cdot r\in\ratio^m$ for all $r\in\ratio^n$. Conversely, suppose that $A\cdot r\in\ratio^m$ for all $r\in\ratio^n$. Suppose, for a contradiction, that $A$ has a non-rational coefficient, say in column $i$. Let $e_i = \begin{pmatrix}0 \dots 0 & 1 & 0 \dots 0\end{pmatrix}^T$ be the element of the canonical basis of $\frpoly^n$ whose single $1$ is at position $i$. Then $A\cdot e_i$ returns the $i$th column of $A$, which contains a non-rational coefficient, contradicting the hypothesis. As a result, all coefficients of $A$ have to be rational, as required. \end{proof} \begin{proof}[Proof of Proposition~\ref{prop:affine-rational}] Suppose that $f\colon p\mapsto A\cdot p+b$ is rational. 
Since rational fractions of polynomials form a ring, products and sums of rational fractions are still rational, and therefore $f(r)\in\ratio^m$ for all $r\in\ratio^n$. Conversely, suppose that $f(r)\in\ratio^m$ for all $r\in\ratio^n$. Then $f(0) = b\in\ratio^m$. We can reason as in the proof of Proposition~\ref{prop:rational-map} to prove that $A$ must be rational and conclude that $f$ is a rational affine map. \end{proof} \begin{proof}[Proof of Proposition~\ref{prop:single-foot}] First, to see that we can eliminate all $\tikzfig{co-one}$, it is sufficient to notice that \[\tikzfig{co-one}\quad \stackrel{\tiny \AIH }{=}\quad \tikzfig{ax/co-one-def}\] Now that we are left with only $\tikzfig{one}$, we can prove the statement by induction on the number of occurrences of $\tikzfig{one}$. If $c$ contains no $\tikzfig{one}$, then let \[ c' =\;\; \tikzfig{c-after-del}\] By (1-\textsf{del}), $\dsem{c}=\dsem{c'}$. Assume that the proposition is true for some nonnegative integer $n$. Then, given $c$ with $n+1$ occurrences of $\tikzfig{one}$, we can use the symmetric monoidal structure to pull one $\tikzfig{one}$ through and write $c$ as follows: \[\tikzfig{c-n-ones}\] for some circuit $d$ with $n$ occurrences of $\tikzfig{one}$. We can apply the induction hypothesis to $d$ and get $d'$ with a single $\tikzfig{one}$ such that $\dsem{d}=\dsem{d'}$. Thus, by the same reasoning as before, there exists $d''$ with no $\tikzfig{one}$ such that \[ c \stackrel{\tiny \AIH }{=}\;\; \tikzfig{two-ones}\quad\myeq{1-dup}\quad\tikzfig{single-one}\] Since the last diagram contains only a single $\tikzfig{one}$, we are done. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:affine-realisability}] Assume that $\tikzfig{one}$ is connected to an input port. Then $\hat{c}$ can be rewired to a signal flow graph of which that port is an input, and we have thus obtained a rewiring of $c$ as a circuit of $\mathsf{ASF}$. Conversely, assume that the circuit $c$ is realisable. 
Then it can be rewired to an equivalent affine signal flow graph $f$. By definition of $\mathsf{ASF}$, the constant $\tikzfig{one}$ can only appear as an input of $\hat{f}$. Finally, $\hat{f}$ is a signal flow graph that is a rewiring of $\hat{c}$, so this same port is also an input of $\hat{c}$. \end{proof} \subsection{Syntax}\label{sec:syntax} \vspace{-.2cm} \begin{align} c \; ::= \; & \tikzfig{./generators/delete} \;\; | \;\; \tikzfig{./generators/copy} \;\; | \;\; \tikzfig{./generators/scalar} \;\; | \;\; \tikzfig{./generators/register} \;\; | \;\; \tikzfig{./generators/add} \;\; | \;\; \tikzfig{./generators/zero} \;\; | \;\; \tikzfig{one} \;\; | \;\; \label{eq:SFcalculusSyntax1} \\ & \,\tikzfig{./generators/co-delete} \!\;\; | \;\; \tikzfig{./generators/co-copy} \;\; | \;\; \tikzfig{./generators/co-scalar} \;\; | \;\; \!\tikzfig{./generators/co-register} \;\; | \;\; \!\tikzfig{./generators/co-add} \;\; | \;\; \tikzfig{./generators/co-zero} \;\; | \;\; \tikzfig{co-one} \;\; | \;\; \label{eq:SFcalculusSyntax2}\\ & \,\tikzfig{./generators/empty-diag} \;\; | \;\; \tikzfig{./generators/id} \;\; | \;\; \!\!\tikzfig{./generators/sym} \;\; | \;\; c\oplus c \;\; | \;\; c \,\ensuremath{;}\, c \label{eq:SFcalculusSyntax3} \end{align} The syntax of the calculus, generated by the grammar above, is parametrised over a given field $\field$, with $k$ ranging over $\field$. We refer to the constants in rows~\eqref{eq:SFcalculusSyntax1}-\eqref{eq:SFcalculusSyntax2} as \emph{generators}. Terms are constructed from generators, $\tikzfig{./generators/empty-diag}$, $\tikzfig{./generators/id}$, $\tikzfig{./generators/sym}$, and the two binary operations in~\eqref{eq:SFcalculusSyntax3}. We will only consider those terms that are \emph{sortable}, i.e., those that can be associated with a pair $\sort{n}{m}$, with $n,m\in \mathbb{N}$. Sortable terms are called \emph{circuits}: intuitively, a circuit with sort $\sort{n}{m}$ has $n$ ports on the left and $m$ on the right. 
The sorting discipline is given in Fig.~\ref{fig:sortInferenceRules}. We delay discussion of computational intuitions to Section~\ref{sec:opsem} but, for the time being, we observe that the generators of row \eqref{eq:SFcalculusSyntax2} are those of row \eqref{eq:SFcalculusSyntax1} ``reflected about the $y$-axis''. \vspace{-.2cm} \subsection{String Diagrams} \vspace{-.2cm} It is convenient to consider circuits as the arrows of a symmetric monoidal category $\mathsf{ACirc}$ (for Affine Circuits). Objects of $\mathsf{ACirc}$ are natural numbers (thus $\mathsf{ACirc}$ is a \emph{prop}~\cite{MacLane1965}) and morphisms $n \to m$ are the circuits of sort $\sort{n}{m}$, quotiented by the laws of symmetric monoidal categories \cite{mclane,Selinger2009}\footnote{This quotient is harmless: both the denotational semantics from \cite{BonchiPSZ19} and the operational semantics we introduce in this paper satisfy those axioms on the nose.}. The circuit grammar yields the symmetric monoidal structure of $\mathsf{ACirc}$: sequential composition is given by $c \,\ensuremath{;}\, d$, the monoidal product is given by $c \oplus d$, and identities and symmetries are built by pasting together $\tikzfig{./generators/id}$ and $\tikzfig{./generators/sym}$ in the obvious way. We will adopt the usual convention of writing morphisms of $\mathsf{ACirc}$ as \emph{string diagrams}, meaning that $ c \,\ensuremath{;}\, c' \text{ is drawn } \lower9pt\hbox{$\includegraphics[height=.8cm]{graffles/seqcomp.pdf}$} \; \text{ and } c \oplus c' \text{ is drawn } \lower15pt\hbox{$\includegraphics[height=1.2cm]{graffles/tensor.pdf}$}. $ More succinctly, $\mathsf{ACirc}$ is the free prop on generators \eqref{eq:SFcalculusSyntax1}-\eqref{eq:SFcalculusSyntax2}. The free prop on \eqref{eq:SFcalculusSyntax1}-\eqref{eq:SFcalculusSyntax2} sans $\tikzfig{one}$ and $\tikzfig{co-one}$, hereafter called $\mathsf{Circ}$, is the signal flow calculus from \cite{Bonchi2015}. 
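The sorting discipline can be prototyped directly. In the sketch below, the term representation (strings for generators, tagged tuples for the two binary operations) and the generator table are our own illustrative encoding; the sorts themselves follow the intended arities of the generators, with each mirrored generator of row (2) getting the reversed sort of its row-(1) counterpart.

```python
# generator sorts (n, m): n ports on the left, m on the right
GEN = {
    "copy": (1, 2), "delete": (1, 0), "add": (2, 1), "zero": (0, 1),
    "scalar": (1, 1), "register": (1, 1), "one": (0, 1),
    "co-copy": (2, 1), "co-delete": (0, 1), "co-add": (1, 2), "co-zero": (1, 0),
    "co-scalar": (1, 1), "co-register": (1, 1), "co-one": (1, 0),
    "id": (1, 1), "sym": (2, 2), "empty": (0, 0),
}

def sort(term):
    """Return the sort (n, m) of a term, or None if the term is not sortable."""
    if isinstance(term, str):
        return GEN.get(term)
    op, c, d = term
    sc, sd = sort(c), sort(d)
    if sc is None or sd is None:
        return None
    if op == ";":   # sequential composition: middle sorts must agree
        return (sc[0], sd[1]) if sc[1] == sd[0] else None
    if op == "+":   # monoidal product: sorts add componentwise
        return (sc[0] + sd[0], sc[1] + sd[1])
    return None

sort((";", "co-delete", "copy"))  # the first composite of Example circuits
```

On this encoding, `(";", "co-delete", "copy")` is sortable with sort $\sort{0}{2}$, while composing `copy` with `delete` directly is rejected because the middle sorts $2$ and $1$ disagree.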
\begin{example}\label{ex:loop} The diagram {\lower12pt\hbox{$\includegraphics[height=1.2cm]{graffles/111imp.pdf}$}} represents the circuit \[ ( (\tikzfig{./generators/co-delete}\,\ensuremath{;}\, \tikzfig{./generators/copy}) \oplus \tikzfig{./generators/id}) \,\ensuremath{;}\, (\tikzfig{./generators/id} \oplus (\tikzfig{./generators/add} \,\ensuremath{;}\, \tikzfig{./generators/copy})) \,\ensuremath{;}\, (((\tikzfig{./generators/id} \oplus \tikzfig{./generators/register}) \oplus \tikzfig{./generators/id}) \,\ensuremath{;}\, ( (\tikzfig{./generators/co-copy} \,\ensuremath{;}\, \tikzfig{./generators/delete}) \oplus \tikzfig{./generators/id})).\] \end{example} \subsection{Denotational Semantics and Axiomatisation}\label{sec:denotational} The semantics of circuits can be given denotationally by means of affine relations. \begin{definition} Let $\field$ be a field. An affine subspace of $\field^d$ is a subset $V\subseteq \field^d$ that is either empty or for which there exist a vector $a\in\field^d$ and a linear subspace $L$ of $\field^d$ such that $V = \{a+v \mid v\in L\}$. A \emph{$\field$-affine relation} of type $n \to m$ is an affine subspace of $\field^n\times\field^m$, considered as a $\field$-vector space. \end{definition} Note that every linear subspace is affine, taking $a$ above to be the zero vector. Affine relations can be organised into a prop: \begin{definition}\label{DEF_SV} Let $\field$ be a field. Let $\ARel{\field}$ be the following prop: \begin{itemize} \item arrows $n\to m$ are $\field$-affine relations. \item composition is relational: given $G\subseteq \field^n\times\field^m$ and $H\subseteq \field^m\times\field^l$, their composition is $G\,\ensuremath{;}\, H \ \ensuremath{:\!\!=}\ \{(u,w)\,|\,\exists v. (u,v)\in G \wedge (v,w)\in H\}$. \item monoidal product given by $G\oplus H = \left\{ \left(\left(\begin{array}{c}\!\!u\!\! \\ \!\!u'\!\!\end{array}\right), \left(\begin{array}{c}\!\!v\!\!
\\ \!\!v'\!\!\end{array}\right)\right) \,|\, (u,v)\in G, (u',v')\in H\right\}$. \end{itemize} \end{definition} In order to give semantics to $\mathsf{ACirc}$, we use the prop of affine relations over the field $\frpoly$ of fractions of polynomials in $x$ with coefficients from $\field$. Elements $q\in \frpoly$ are fractions $\frac{k_0+k_1\cdot x^1 + k_2 \cdot x^2 + \dots + k_n \cdot x^n}{l_0+l_1\cdot x^1 + l_2 \cdot x^2 + \dots + l_m \cdot x^m}$ for some $n,m\in \mathbb{N}$ and $k_i,l_i\in \field$. Sum, product, $0$ and $1$ in $\frpoly$ are defined as usual. \begin{definition}\label{def:SemIB} The prop morphism $\dsem{\cdot} \: \mathsf{ACirc} \to \ARel{\frpoly}$ is inductively defined on circuits as follows. For the generators in \eqref{eq:SFcalculusSyntax1} \[ \tikzfig{./generators/copy} \ \longmapsto \ \left\{ \left( p ,\left(\begin{array}{c} \! p\! \\ \! p\! \end{array}\right)\right) \mid p \in \frpoly \right\} \qquad \tikzfig{./generators/add} \ \longmapsto \ \left\{ \left( \left(\begin{array}{c} \!p\! \\ \!q\! \end{array}\right)\!\!, p+q\right) \mid p,q \in \frpoly \right\} \] \[ \tikzfig{./generators/delete} \ \longmapsto \ \{ (p, \bullet ) \mid p\in\frpoly \}\qquad\quad \tikzfig{./generators/zero} \ \longmapsto \ \{( \bullet ,0) \}\qquad\qquad\tikzfig{one} \; \longmapsto \{ (\bullet, 1 )\} \] \vspace{-5pt} \[ \tscalar{r} \ \longmapsto \ \{ (p , p \cdot r) \mid p \in \frpoly \} \qquad \tikzfig{./generators/register} \ \longmapsto \ \{ (p , p \cdot x) \mid p \in \frpoly \}\] where $\bullet$ is the only element of $\frpoly^0$. The semantics of the components in \eqref{eq:SFcalculusSyntax2} is symmetric, e.g. $\tikzfig{./generators/co-delete}$ is mapped to $\{ (\bullet, p ) \mid p\in\frpoly \}$. For \eqref{eq:SFcalculusSyntax3} \begin{align*} \tikzfig{./generators/id}\! \ \longmapsto \ \{(p,p) \mid p \in \frpoly \} \qquad \tikzfig{./generators/sym} \ \longmapsto \ \left\{\left( \left(\begin{array}{c} \!p \!\\ \!q\!
\end{array}\right), \left(\begin{array}{c} \!q\!\\ \!p\! \end{array}\right)\right) \mid p, q \in \frpoly \right\} \\ \tikzfig{./generators/empty-diag} \ \longmapsto \ \{ (\bullet , \bullet)\} \qquad c_1\oplus c_2\ \longmapsto\ \dsem{c_1} \oplus\dsem{c_2} \qquad c_1\,\ensuremath{;}\, c_2\ \longmapsto\ \dsem{c_1} \,\ensuremath{;}\,\dsem{c_2} \end{align*} \end{definition} The reader can easily check that the pair of $1$-dimensional vectors $\left(1, \frac{1}{1-x}\right)\in \frpoly^1\times \frpoly^1$ belongs to the denotation of the circuit in Example \ref{ex:loop}. The denotational semantics enjoys a sound and complete axiomatisation. The axioms involve only basic interactions between the generators~\eqref{eq:SFcalculusSyntax1}-\eqref{eq:SFcalculusSyntax2}. The resulting theory is that of \emph{Affine Interacting Hopf Algebras} ($\AIH$). The generators in \eqref{eq:SFcalculusSyntax1} form a Hopf algebra, those in \eqref{eq:SFcalculusSyntax2} form another Hopf algebra, and the interaction of the two gives rise to two Frobenius algebras. We recall the full set of equations in Appendix~\ref{sec:axiomatisation} and refer the reader to \cite{BonchiPSZ19} for all further details. \begin{proposition} For all $c,d$ in $\mathsf{ACirc}$, $\dsem{c} = \dsem{d}$ if and only if $c \stackrel{\tiny \AIH }{=} d$.\end{proposition} \subsection{Affine vs Linear Circuits} It is important to highlight the differences between $\mathsf{ACirc}$ and $\mathsf{Circ}$. The latter is the purely linear fragment: circuit diagrams of $\mathsf{Circ}$ denote exactly the \emph{linear} relations over $\frpoly$ \cite{Bonchi2014b}, while those of $\mathsf{ACirc}$ denote the \emph{affine} relations over $\frpoly$. The additional expressivity afforded by affine circuits is essential for our development. One crucial property is that every polynomial fraction can be expressed as an affine circuit of sort $\sort{0}{1}$. 
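To make the relational operations of Definition~\ref{DEF_SV} concrete, they can be prototyped on a finite carrier. The sketch below is an illustration over the two-element field $\mathbb{Z}_2$ (where relations are finite sets) rather than over $\frpoly$: relations are represented as Python sets of pairs of tuples, and we check in particular that the composite $\{(\bullet,1)\}\,;\,\{(0,\bullet)\}$ is the empty relation, and that $\varnothing \oplus c = \varnothing$.

```python
def compose(G, H):
    """Relational composition: {(u, w) | exists v. (u, v) in G and (v, w) in H}."""
    return {(u, w) for (u, v) in G for (v2, w) in H if v == v2}

def oplus(G, H):
    """Monoidal product: stack domains and codomains componentwise."""
    return {(u + u2, v + v2) for (u, v) in G for (u2, v2) in H}

# over Z_2: the 'one' constant of sort 0 -> 1 and the mirrored 'zero' of sort 1 -> 0
one    = {((), (1,))}    # denotes {(•, 1)}
cozero = {((0,), ())}    # denotes {(0, •)}

empty = compose(one, cozero)  # the empty relation of sort 0 -> 0

# black comonoid (copy) and white monoid (addition mod 2)
copy = {((x,), (x, x)) for x in (0, 1)}
add  = {((x, y), ((x + y) % 2,)) for x in (0, 1) for y in (0, 1)}
```

Since `1` and `0` never meet on the middle wire, `empty` is `set()`, and taking `oplus` of it with any relation again yields the empty set, mirroring the "logical false" behaviour of the ($\varnothing$) axiom.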
\begin{lemma}\label{prop:laurentCircuit} For all $p \in \frpoly$, there is $c_{p}\in \mathsf{ACirc}[0,1]$ with $\dsem{c_{p}}=\{(\bullet, p)\}$. \end{lemma} \begin{proof} For each $p \in \frpoly$, let $P$ be the linear subspace generated by the pair of $1$-dimensional vectors $(1,p)$. By fullness of the denotational semantics of $\mathsf{Circ}$ \cite{Bonchi2014b}, there exists a circuit $c$ in $\mathsf{Circ}$ such that $\dsem{c}=P$. Then, $\dsem{\tikzfig{one} \,\ensuremath{;}\, c} = \{(\bullet, p)\}$. \end{proof} The above observation yields the following: \begin{proposition}\label{prop:denotation-context} Let $(u,v)\in \frpoly^n\times \frpoly^m$. There exist circuits $c_u\in \mathsf{ACirc}[0,n]$ and $c_v\in \mathsf{ACirc}[m,0]$ such that $\dsem{c_u}=\{(\bullet, u)\}$ and $\dsem{c_v}=\{(v,\bullet)\}$. \end{proposition} \begin{proof} Let $u={\tiny\left(% \begin{array}{c} \!p_1\! \\ \!\vdots\! \\ \!p_n\! \end{array}\right)} \text{ and } v={\tiny\left(% \begin{array}{c} \!q_1\! \\ \!\vdots\! \\ \!q_m\! \end{array}\right)}\text{.}$ By Lemma \ref{prop:laurentCircuit}, for each $p_i$, there exists a circuit $c_{p_i}$ such that $\dsem{c_{p_i}}=\{(\bullet, p_i)\}$. Let $c_u = c_{p_1} \oplus \dots \oplus c_{p_n}$. Then $\dsem{c_u}=\{(\bullet, u)\}$. For $c_v$, it is enough to see that Lemma \ref{prop:laurentCircuit} also holds with $0$ and $1$ switched, then use the argument above. \end{proof} Proposition~\ref{prop:denotation-context} asserts that any behaviour $(u,v)$ occurring in the denotation of some circuit $c$, i.e., such that $(u,v)\in\dsem{c}$, can be expressed by a pair of circuits $(c_u,c_v)$. We will, in due course, think of such a pair as a \emph{context}, namely an environment with which a circuit can interact. Observe that this is not possible with the linear fragment $\mathsf{Circ}$, since the only singleton linear subspace is $0$. Another difference between linear and affine concerns circuits of sort $\sort{0}{0}$.
Indeed $\frpoly^0=\{\bullet\}$, and the only linear relation over $\frpoly^0\times \frpoly^0$ is the singleton $\{(\bullet, \bullet)\}$, which is $id_0$ in $\ARel{\frpoly}$. But there is another affine relation, namely the \emph{empty relation} $\emptyset \subseteq \frpoly^0\times \frpoly^0$. This can be represented by $\tikzfig{empty}$, for instance, since $\dsem{\tikzfig{ax/empty}}\,=\{(\bullet,1)\}\,\ensuremath{;}\,\{(0,\bullet)\} = \emptyset$. \begin{proposition}\label{prop:twopossibility} Let $c\in \mathsf{ACirc}[0,0]$. Then $\dsem{c}$ is either $id_0$ or $\emptyset$. \end{proposition} \section{Contextual Equivalence and Full Abstraction}\label{sec:fullabs} This section contains the main contribution of the paper: a traditional full abstraction result asserting that contextual equivalence agrees with denotational equivalence. It is not a coincidence that we prove this result in the affine setting: affinity plays a crucial role, both in its statement and proof. In particular, Proposition~\ref{prop:twopossibility} gives us two possibilities for the denotation of $\sort{0}{0}$ circuits: \textit{(i)} $\emptyset$---which, roughly speaking, means that there is a problem (see e.g. Example~\ref{example:empty}) and no infinite computation is possible---or \textit{(ii)} $id_0$, in which case infinite computations are possible. This provides us with a basic notion of observation, akin to observing termination vs non-termination in the $\lambda$-calculus. \begin{definition} For a circuit $c\in\mathsf{ACirc}[0,0]$ we write $c\mathrel{\uparrow}$ if $c$ can perform an infinite computation and $c \mathrel{\mathrlap{/}{\uparrow}}$ otherwise. For instance $\tikzfig{./generators/empty-diag} \mathrel{\uparrow}$, while $\tikzfig{empty}\mathrel{\mathrlap{/}{\uparrow}}$. \end{definition} To be able to make observations about arbitrary circuits we need to introduce an appropriate notion of context.
Roughly speaking, contexts for us are $\sort{0}{0}$-circuits with a hole into which we can plug another circuit. Since ours is a variable-free presentation, ``dangling wires'' assume the role of free variables~\cite{GhicaL17}: restricting to $\sort{0}{0}$ contexts is therefore analogous to considering \emph{ground} contexts---i.e. contexts with no free variables---a standard concept of programming language theory. To define contexts formally, we extend the syntax of~Section \ref{sec:syntax} with an extra generator ``$-$'' of sort $\sort{n}{m}$. A $\sort{0}{0}$-circuit of this extended syntax is a \emph{context} when ``$-$'' occurs exactly once. Given an $\sort{n}{m}$-circuit $c$ and a context $C[-]$, we write $C[c]$ for the circuit obtained by replacing the unique occurrence of ``$-$'' by $c$. With this setup, given an $\sort{n}{m}$-circuit $c$, we can insert it into a context $C[-]$ and observe the possible outcome: either $C[c]\mathrel{\uparrow}$ or $C[c]\mathrel{\mathrlap{/}{\uparrow}}$. This naturally leads us to contextual equivalence and the statement of our main result. \begin{definition} Given $c,d\in \mathsf{ACirc}[n,m]$, we say that they are \emph{contextually equivalent}, written $c \equiv d$, if for all contexts $C[-]$, $$C[c]\mathrel{\uparrow} \text{ iff } C[d]\mathrel{\uparrow}\text{.}$$ \end{definition} \begin{example} Recall from Example~\ref{example:spancospan}, the circuits $\tikzfig{./generators/id}$ and $\cospanregs{}{}$. Take the context $C[-]=c_\sigma \,\ensuremath{;}\, \; - \; \,\ensuremath{;}\, c_\tau$ for $c_\sigma \in \mathsf{ACirc}[0,1]$ and $c_\tau \in \mathsf{ACirc}[1,0]$. Assume that $c_\sigma$ and $c_\tau$ have a single infinite computation. Call $\sigma$ and $\tau$ the corresponding trajectories. If $\sigma = \tau$, both $C[\tikzfig{./generators/id}]$ and $C[\cospanregs{}{}]$ would be able to perform an infinite computation. 
If instead $\sigma \neq \tau$, neither of them would perform an infinite computation: $C[\tikzfig{./generators/id}]$ would stop at time $t$, for $t$ the first moment such that $\sigma(t)\neq \tau(t)$, while $C[\cospanregs{}{}]$ would stop at time $t+1$. Now take as context $C[-] = \tikzfig{./generators/co-delete}\,\ensuremath{;}\, -\,\ensuremath{;}\, \tikzfig{./generators/delete}$. In contrast to $c_\sigma$ and $c_\tau$, $\tikzfig{./generators/co-delete}$ and $\tikzfig{./generators/delete}$ can perform more than one computation: at any time they can nondeterministically emit any value. Thus every computation of $C[\tikzfig{./generators/id}] = \tikzfig{ax/bone-black}$ can \emph{always} be extended to an infinite one, forcing synchronisation of $\tikzfig{./generators/co-delete}$ and $\tikzfig{./generators/delete}$ at each step. For $C[\cospanregs{}{}]=\tikzfig{delete-cospan-regs}$, $\tikzfig{./generators/co-delete}$ and $\tikzfig{./generators/delete}$ may emit different values at time $t$, but the computation will get stuck at $t+1$. However, our definition of $\mathrel{\uparrow}$ only cares about whether $C[\cospanregs{}{}]$ \emph{can} perform an infinite computation. Indeed it can, as long as $\tikzfig{./generators/co-delete}$ and $\tikzfig{./generators/delete}$ consistently emit the same value at each time step. If we think of contexts as tests, and say that a circuit $c$ passes test $C[-]$ if $C[c]$ performs an infinite computation, then our notion of contextual equivalence is \emph{may-testing} equivalence~\cite{de1984testing}. From this perspective, $\tikzfig{./generators/id}$ and $\cospanregs{}{}$ are not \emph{must equivalent}, since the former must pass the test $\tikzfig{./generators/co-delete}\,\ensuremath{;}\, -\,\ensuremath{;}\, \tikzfig{./generators/delete}$ while $\cospanregs{}{}$ may not.
It is worth remarking here that the distinction between may and must testing will cease to make sense in Section \ref{sec:sfg}, where we identify a certain class of circuits equipped with a proper flow directionality and thus a deterministic, input-output, behaviour. \end{example} \begin{theorem}[Full abstraction]\label{thm:fullabstraction} $c \equiv d$ iff $c \stackrel{\tiny \AIH }{=} d$ \end{theorem} The remainder of this section is devoted to the proof of Theorem~\ref{thm:fullabstraction}. We will start by clarifying the relationship between fractions of polynomials (the denotational domain) and trajectories (the operational domain). \subsection{From Polynomial Fractions to Trajectories} \label{sec:frpoly-traj} The missing link between polynomial fractions and trajectories is given by \emph{(formal) Laurent series}: we now recall this notion. Formally, a Laurent series is a function $\sigma\colon \mathbb{Z} \to \field$ for which there exists $j \in \mathbb{Z}$ such that $\sigma(i) = 0$ for all $i < j$. We write $\sigma$ as $\dots,\sigma(-1),\underline{\sigma(0)}, \sigma(1), \dots$ with position $0$ underlined, or as the formal sum $\LSum{d}\sigma(i)x^i$. Each non-zero Laurent series $\sigma$ then has a \emph{degree} $d \in \mathbb{Z}$, namely the position of its first non-zero coefficient. Laurent series form a field $\laur$: sum is pointwise, product is by convolution, and the inverse $\sigma^{-1}$ of $\sigma$ with degree $d$ is defined as: \begin{eqnarray}\label{eq:inverse} \sigma^{-1}(i) = \, \begin{cases} 0 & \text{ if } i < -d \\ \sigma(d)^{-1} &\text{ if } i=-d \\ \frac{\sum_{j=1}^{n} \big( \sigma(d+j) \cdot \sigma^{-1}(-d+n-j)\big)}{-\sigma(d)} & \text{ if } i\!=\!-d\!+\!n \text{ for } n\!>\!0 \end{cases}\hspace{-0.7cm} \end{eqnarray} Note that (formal) power series, which form `just' a ring $\fps$, are a particular case of Laurent series, namely those $\sigma$s for which $d \geq 0$.
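To make the recurrence \eqref{eq:inverse} concrete, here is a small Python sketch (an illustration of ours, not part of the formal development; the function name and the representation of a series as a list of coefficients starting at degree $d$ are our own choices) that computes the first coefficients of $\sigma^{-1}$:

```python
from fractions import Fraction

def laurent_inverse(sigma, d, n_terms):
    """Coefficients of sigma^{-1}, starting at degree -d, for a Laurent
    series given by its coefficients sigma[0..] from degree d onwards
    (sigma[0] = sigma(d) must be non-zero), following the recurrence."""
    inv = [Fraction(1) / Fraction(sigma[0])]  # sigma^{-1}(-d) = sigma(d)^{-1}
    for n in range(1, n_terms):
        # sigma^{-1}(-d+n) = (sum_{j=1..n} sigma(d+j)*sigma^{-1}(-d+n-j)) / -sigma(d)
        s = sum(Fraction(sigma[j] if j < len(sigma) else 0) * inv[n - j]
                for j in range(1, n + 1))
        inv.append(-s / Fraction(sigma[0]))
    return inv

# Example: sigma = 1 - x (degree 0); its inverse is the geometric series
print(laurent_inverse([1, -1], 0, 5))  # -> five coefficients equal to 1
```

The first example recovers $\frac{1}{1-x} = 1 + x + x^2 + \dots$, the fraction appearing in the denotation of the circuit of Example \ref{ex:loop}.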
What is most interesting for our purposes is how polynomials and fractions of polynomials relate to $\laur$ and $\fps$. First, the ring $\poly$ of polynomials embeds into $\fps$, and thus into $\laur$: a polynomial $p_0+p_1x+\dots +p_nx^n$ can also be regarded as the power series $\LSum{0}p_ix^i$ with $p_i=0$ for all $i>n$. Because Laurent series are closed under division, this immediately also gives an embedding of the field of polynomial fractions $\frpoly$ into $\laur$. Note that the full expressiveness of $\laur$ is required: for instance, the fraction $\frac{1}{x}$ is represented as the Laurent series $\dots ,0,1,\underline{0},0, \dots$, which is not a power series, because a non-zero value appears before position $0$. In fact, fractions that are expressible as power series are precisely the \emph{rational} fractions, i.e. of the form $\frac{k_0+k_1x+k_2x^2 + \dots + k_nx^n}{l_0+l_1x+l_2x^2 + \dots + l_nx^n}$ where $l_0\neq 0$. \noindent \begin{minipage}[c]{.60\linewidth} Rational fractions form a ring $\ratio$ which, unlike the full field $\frpoly$, embeds into $\fps$. Indeed, whenever $l_0\neq 0$, the inverse of $l_0+l_1x+l_2x^2 + \dots + l_nx^n$ is, by \eqref{eq:inverse}, a \emph{bona fide} power series. The commutative diagram on the right is a summary. \end{minipage} \begin{minipage}[c]{.40\linewidth} \[ \xymatrix@R=1pt@C=1pt{ \fps \ar@{^{(}->}[rrrr] & & & & \laur\\ \\ \\ & & \ratio \ar@{_{(}->}[lluuu] \ar@{^{(}->}[rrd] \\ \poly \ar@{^{(}->}[urr] \ar@{_{(}->}[rrrr] \ar@{^{(}->}[uuuu] & & & & \frpoly \ar@{_{(}->}[uuuu] } \] \end{minipage} \medskip Relations between $\laur$-vectors organise themselves into a prop $\ARel{\laur}$ (see Definition \ref{DEF_SV}). There is an evident prop morphism $\iota \colon \ARel{\frpoly} \to \ARel{\laur}$: it maps the empty affine relation on $\frpoly$ to the one on $\laur$, and otherwise applies pointwise the embedding of $\frpoly$ into $\laur$.
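The embedding of $\ratio$ into $\fps$ can likewise be illustrated by a short Python sketch (ours; the function name is hypothetical) that expands a rational fraction into its power-series coefficients, by solving the convolution equation $\mathit{den} \cdot \mathit{coeffs} = \mathit{num}$ degree by degree:

```python
from fractions import Fraction

def fraction_to_power_series(num, den, n_terms):
    """First n_terms coefficients of the power series of num(x)/den(x),
    where num and den are coefficient lists and den[0] != 0."""
    assert den[0] != 0, "only rational fractions embed into power series"
    coeffs = []
    for n in range(n_terms):
        # convolution: sum_{j=0..n} den[j]*coeffs[n-j] must equal num[n]
        acc = Fraction(num[n] if n < len(num) else 0)
        for j in range(1, n + 1):
            acc -= Fraction(den[j] if j < len(den) else 0) * coeffs[n - j]
        coeffs.append(acc / Fraction(den[0]))
    return coeffs

# 1/(1-x) expands to 1 + x + x^2 + ...,
# while x/(1-x-x^2) generates the Fibonacci numbers
print(fraction_to_power_series([1], [1, -1], 5))         # [1, 1, 1, 1, 1]
print(fraction_to_power_series([0, 1], [1, -1, -1], 7))  # [0, 1, 1, 2, 3, 5, 8]
```

Running the same function with a denominator where $l_0 = 0$, such as $\frac{1}{x}$, fails the assertion, matching the observation above that such fractions are not power series.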
For the next step, observe that trajectories are in fact rearrangements of Laurent series: each pair of vectors $(u,v)\in\laur^n \times \laur^m$, as on the left below, yields the trajectory $\kappa(u,v)$ defined for all $i\in \mathbb{Z}$ as on the right below. $$(u,v) = \left(\!{\small\left( \begin{array}{c} \!\alpha^1\! \\ \vdots\\ \!\alpha^n\! \end{array}\right)}, \, {\small\left(% \begin{array}{c} \!\beta^1\! \\ \vdots\\ \!\beta^m\! \end{array}\right)}\!\right) \qquad \qquad \kappa(u,v)(i)=\left(\!{\small\left( \begin{array}{c} \!\alpha^1(i)\! \\ \vdots\\ \!\alpha^n(i)\! \end{array}\right)}, \, {\small\left(% \begin{array}{c} \!\beta^1(i)\! \\ \vdots\\ \!\beta^m(i)\! \end{array}\right)}\!\right)$$ Similarly to $\iota$, the assignment $\kappa$ extends to sets of vectors, and also to a prop morphism from $\ARel{\laur}$ to $\mathsf{Traj}$. Together, $\kappa$ and $\iota$ provide the desired link between operational and denotational semantics. \begin{theorem}\label{thm:main} $\osem{\cdot}=\kappa \circ \iota \circ \dsem{\cdot}$ \end{theorem} \begin{proof} Since both are symmetric monoidal functors from a free prop, it is enough to check the statement for the generators of $\mathsf{ACirc}$. We show, as an example, the case of $\tikzfig{./generators/copy}$. By Definition \ref{def:SemIB}, $\dsem{\tikzfig{./generators/copy}} = \left\{\left( p ,\left(\begin{array}{c} \! p\! \\ \! p\! \end{array}\right) \right) \mid p\in \frpoly \right\} $. This is mapped by $\iota$ to $\left\{\left( \alpha ,\left(\begin{array}{c} \! \alpha\! \\ \! \alpha\! \end{array}\right) \right) \mid \alpha\in \laur \right\}$. Now, to see that $\kappa(\iota(\dsem{\tikzfig{./generators/copy}}))= \osem{\tikzfig{./generators/copy}}$, it is enough to observe that a trajectory $\sigma$ is in $\kappa(\iota(\dsem{\tikzfig{./generators/copy}}))$ precisely when, for all $i$, there exists some $k_i\in \field$ such that $\sigma(i)=\left( k_i ,\left(\begin{array}{c} \! k_i\! \\ \!k_i\! \end{array}\right) \right)$. 
\end{proof} \subsection{Proof of Full Abstraction} We now have the ingredients to prove Theorem \ref{thm:fullabstraction}. First, we prove an adequacy result for $\sort{0}{0}$ circuits. \begin{proposition}\label{cor:dsemexperiments} Let $c\in \mathsf{ACirc}[0,0]$. Then $\dsem{c}=id_0$ if and only if $c\mathrel{\uparrow}$. \end{proposition} \begin{proof} By Proposition \ref{prop:twopossibility}, either $\dsem{c}=id_0$ or $\dsem{c}=\emptyset$, which, combined with Theorem \ref{thm:main}, means that $ \osem{c}=\kappa \circ \iota (id_0)$ or $\osem{c}=\kappa \circ \iota(\emptyset)$. By definition of $\iota$ this implies that either $\osem{c}$ contains a trajectory or not. In the first case $c\mathrel{\uparrow}$; in the second $c \mathrel{\mathrlap{/}{\uparrow}}$. \end{proof} Next we obtain a result that relates denotational equality in all contexts to equality in $\AIH$. Note that it is not trivial: since we consider ground contexts it does not make sense to merely consider ``identity'' contexts. Instead, it is at this point that we make another crucial use of affinity, taking advantage of the increased expressivity of affine circuits, as showcased by Proposition \ref{prop:denotation-context}. \begin{proposition}\label{lemma:contextequivimpliesdenequiv} If $\dsem{C[c]}=\dsem{C[d]}$ for all contexts $C[-]$, then $c \stackrel{\tiny \AIH }{=} d$. \end{proposition} \begin{proof} Suppose that $c \stackrel{\AIH}{\neq} d$. Then $\dsem{c}\neq \dsem{d}$. Since both $\dsem{c}$ and $\dsem{d}$ are affine relations over $\frpoly$, there exists a pair of vectors $(u,v)\in \frpoly^n \times \frpoly^m$ that is in one of $\dsem{c}$ and $\dsem{d}$, but not both. Assume wlog that $(u,v)\in\dsem{c}$ and $(u,v)\notin\dsem{d}$. 
By Proposition \ref{prop:denotation-context}, there exist circuits $c_u$ and $c_v$ such that $\dsem{c_u \,\ensuremath{;}\, c \,\ensuremath{;}\, c_v} = \dsem{c_u} \,\ensuremath{;}\, \dsem{c} \,\ensuremath{;}\, \dsem{c_v}=\{(\bullet, u)\}\,\ensuremath{;}\,\dsem{c} \,\ensuremath{;}\, \{(v,\bullet)\}$. Since $(u,v)\in \dsem{c}$, it follows that $\dsem{c_u \,\ensuremath{;}\, c \,\ensuremath{;}\, c_v}=\{(\bullet, \bullet)\}$. Instead, since $(u,v)\notin \dsem{d}$, we have that $\dsem{c_u \,\ensuremath{;}\, d \,\ensuremath{;}\, c_v}=\emptyset$. Therefore, for the context $C[-]=c_u \,\ensuremath{;}\, - \,\ensuremath{;}\, c_v\text{,}$ we have that $\dsem{C[c]} \neq \dsem{C[d]}$. \end{proof} The proof of our main result is now straightforward. \begin{proof}[Proof of Theorem \ref{thm:fullabstraction}] Let us first suppose that $c \stackrel{\tiny \AIH }{=} d$. Then $\dsem{C[c]}=\dsem{C[d]}$ for all contexts $C[-]$, since $\dsem{\cdot}$ is a morphism of props. By Proposition \ref{cor:dsemexperiments}, it follows immediately that $C[c]\mathrel{\uparrow}$ if and only if $C[d]\mathrel{\uparrow}$, namely $c \equiv d$. Conversely, suppose that, for all $C[-]$, $C[c]\mathrel{\uparrow}$ iff $C[d]\mathrel{\uparrow}$. Again by Proposition \ref{cor:dsemexperiments}, we have that $\dsem{C[c]}=\dsem{C[d]}$. We conclude by invoking Proposition~\ref{lemma:contextequivimpliesdenequiv}. \end{proof} \section{Introduction} Compositional accounts of models of computation often lead one to consider \emph{relational} models because a decomposition of an input-output system might consist of internal parts where flow and causality are not always easy to assign. These insights led Willems~\cite{Willems2007} to introduce a new current of control theory, called \emph{behavioural} control: roughly speaking, behaviours and observations are of prime concern, notions such as state, inputs or outputs are secondary.
Independently, programming language theory converged on similar ideas, with \emph{contextual equivalence}~\cite{morris1969lambda,plotkin1975call} often considered as \emph{the} equivalence: programs are judged to be different if we can find some context in which one behaves differently from the other, and what is observed about ``behaviour'' is often something quite canonical and simple, such as termination. Hoare~\cite{Hoare1985} and Milner~\cite{Milner1980} discovered that these programming language theory innovations also bore fruit in the non-deterministic context of concurrency. Here again, research converged on studying simple and canonical contextual equivalences~\cite{Milner1992a,Honda1995}. This paper brings together all of the above threads. The model of computation of interest for us is that of signal flow graphs~\cite{Shannon1942,mason1953feedback}, which are feedback systems well known in control theory~\cite{mason1953feedback} and widely used in the modelling of linear dynamical systems (in continuous time) and signal processing circuits (in discrete time). The \emph{signal flow calculus}~\cite{BonchiSZ17,Bonchi2015} is a syntactic presentation with an underlying compositional denotational semantics in terms of linear relations. Armed with \emph{string diagrams}~\cite{Selinger2009} as a syntax, the tools and concepts of programming language theory and concurrency theory can be put to work and the calculus can be equipped with a structural operational semantics. However, while in previous work~\cite{Bonchi2015} a connection was made between operational equivalence (essentially trace equivalence) and denotational equality, the signal flow calculus was not quite expressive enough for contextual equivalence to be a useful notion. The crucial step turns out to be moving from \emph{linear} relations to \emph{affine} relations, i.e. linear subspaces translated by a vector. 
In recent work~\cite{BonchiPSZ19}, we showed that they can be used to study important physical phenomena, such as current and voltage sources in electrical engineering, as well as fundamental synchronisation primitives in concurrency, such as mutual exclusion. Here we show that, in addition to yielding compelling mathematical domains, affinity proves to be the magic ingredient that ties the different components of the story of signal flow graphs together: it provides us with a canonical and simple notion of observation to use for the \emph{definition} of contextual equivalence, and gives us the expressive power to prove a bona fide full abstraction result that relates contextual equivalence with denotational equality. To obtain the above result, we extend the signal flow calculus to handle affine behaviour. While the denotational semantics and axiomatic theory appeared in~\cite{BonchiPSZ19}, the operational account appears here for the first time and requires some technical innovations: instead of traces, we consider \emph{trajectories}, which are infinite traces that may start in the past. To record the time, states of our transition system have a runtime environment that keeps track of the global clock. Because the affine signal flow calculus is oblivious to flow directionality, some terms exhibit pathological operational behaviour. We illustrate these phenomena with several examples. Nevertheless, for the linear sub-calculus, it is known~\cite{Bonchi2015} that every term is denotationally equal to an executable realisation: one that is in a form where a consistent flow can be identified, like the classical notion of signal flow graph. We show that the question has a more subtle answer in the affine extension: not all terms are realisable as (affine) signal flow graphs. However, we are able to characterise the class of diagrams for which this is true. 
\vspace{-.1cm} \paragraph{Related work.} Several authors studied signal flow graphs by exploiting concepts and techniques of programming language semantics, see e.g.~\cite{Prak2014,DBLP:conf/lics/Milius10,DBLP:journals/tcs/Rutten05,BaezErbele-CategoriesInControl}. The most relevant for this paper is \cite{BaezErbele-CategoriesInControl}, which, independently from~\cite{BonchiSZ17}, proposed the same syntax and axiomatisation for the ordinary signal flow calculus and shares with our contribution the same methodology: the use of \emph{string diagrams} as a mathematical playground for the compositional study of different sorts of systems. The idea is common to diverse, cross-disciplinary research programmes, including Categorical Quantum Mechanics~\cite{Abramsky2004,Coecke2008,Coecke2017}, Categorical Network Theory~\cite{Baez2014}, Monoidal Computer~\cite{Pavlovic13,Pavlovic14a} and the analysis of (a)synchronous circuits~\cite{Ghica2013,Ghica2016}. \vspace{-.1cm} \paragraph{Outline.} In Section~\ref{sec:SFcalculus} we recall the affine signal flow calculus. Section~\ref{sec:opsem} introduces the operational semantics for the calculus. Section~\ref{sec:fullabs} defines contextual equivalence and proves full abstraction. Section~\ref{sec:sfg} introduces a well-behaved class of circuits that denote functional input-output systems, laying the groundwork for Section~\ref{sec:realisability}, in which the concept of realisability is introduced before a characterisation of which circuit diagrams are realisable. Missing proofs are presented in Appendix~\ref{app:proofs}.
\section{Background: the Affine Signal Flow Calculus}\label{sec:SFcalculus} \input{background2} \input{opsem} \input{fullAbstraction} \input{realisability.tex} \section{Conclusion and Future Work}\label{sec:conclusion} \input{conclusion} \bibliographystyle{splncs04} \section{Operational Semantics for Affine Circuits}\label{sec:opsem} \begin{figure*}[ht] \[ \fullcontext{t}{\tikzfig{./generators/copy}} \dtrans{\,k\,}{k \, k} \fullcontext{t+1}{\tikzfig{./generators/copy}}\quad\quad \fullcontext{t}{\tikzfig{./generators/delete}} \dtrans{k}{\bullet} \fullcontext{t+1}{\tikzfig{./generators/delete}} \] \[ \fullcontext{t}{\tikzfig{./generators/add}} \dtrans{k \,\, l }{k+l} \fullcontext{t+1}{\tikzfig{./generators/add}} \quad\quad \fullcontext{t}{\tikzfig{./generators/zero}} \dtrans{\bullet}{0} \fullcontext{t+1}{\tikzfig{./generators/zero}} \] \[ \fullcontext{t}{\stregister{l}}\! \dtrans{k}{l} \fullcontext{t+1}{\stregister{k}} \qquad \fullcontext{t}{\tscalar{r}} \dtrans{\,\,l\,\,}{rl} \fullcontext{t+1}{\tscalar{r}} \] \[ \fullcontext{0}{\tikzfig{one}} \dtrans{\bullet}{1} \fullcontext{1}{\tikzfig{one}} \qquad \qquad \fullcontext{t}{\tikzfig{one}} \dtrans{\bullet}{0} \fullcontext{t+1}{\tikzfig{one}} \quad (t \neq 0) \] \hdashrule{\linewidth}{1pt}{1pt} \[ \fullcontext{t}{\tikzfig{./generators/co-copy}}\! \dtrans{k \, k}{k} \fullcontext{t+1}{\!\tikzfig{./generators/co-copy}} \quad\quad \fullcontext{t}{\tikzfig{./generators/co-delete}} \dtrans{\bullet}{k} {\fullcontext{t+1}{\tikzfig{./generators/co-delete}}} \] \[ \fullcontext{t}{\tikzfig{./generators/co-add}}\dtrans{ k+l}{k \, l }\! \fullcontext{t+1}{\tikzfig{./generators/co-add}}\quad\quad \fullcontext{t}{\tikzfig{./generators/co-zero}} \dtrans{0}{\bullet} \fullcontext{t+1}{\tikzfig{./generators/co-zero}} \] \[ \fullcontext{t}{\stcoregister{l}}\! 
\dtrans{l}{k} \fullcontext{t+1}{\stcoregister{k}} \qquad \fullcontext{t}{\tcoscalar{r}} \dtrans{rl}{\,l\,} \fullcontext{t+1}{\tcoscalar{r}} \] \[ \fullcontext{0}{\tikzfig{co-one}} \dtrans{1}{\bullet} \fullcontext{1}{\tikzfig{co-one}} \qquad \qquad \fullcontext{t} {\tikzfig{co-one}} \dtrans{0}{\bullet} \fullcontext{t+1}{\tikzfig{co-one}} \quad (t \neq 0) \] \hdashrule{\linewidth}{1pt}{1pt} \[\mathclap{ \fullcontext{t}{\tikzfig{./generators/id}} \dtrans{k}{k} \fullcontext{t+1}{\tikzfig{./generators/id}} \; \fullcontext{t}{\tikzfig{./generators/sym}} \dtrans{k \, l}{l \, k} \fullcontext{t+1}{\tikzfig{./generators/sym}}\; \fullcontext{t}{\tikzfig{./generators/empty-diag}}\dtrans{\bullet}{\bullet}\fullcontext{t+1}{\tikzfig{./generators/empty-diag}} }\] \hdashrule{\linewidth}{1pt}{1pt} \[ \derivationRule{\fullcontext{t}{c}\dtrans{u}{v} \fullcontext{t+1}{c'}\quad \fullcontext{t}{d}\dtrans{v}{w} \fullcontext{t+1}{d'} } {\fullcontext{t}{c\,\ensuremath{;}\, d} \dtrans{u}{w} \fullcontext{t+1}{c'\,\ensuremath{;}\, d'}}{} \] \[ \derivationRule{\fullcontext{t}{c}\dtrans{u_1}{v_1} \fullcontext{t+1}{c'}\quad \fullcontext{t}{d}\dtrans{u_2}{v_2} \fullcontext{t+1}{d'} } {\fullcontext{t}{c\oplus d} \dtrans{u_1 \, u_2}{v_1 \, v_2} \fullcontext{t+1}{c'\oplus d'}}{} \] \caption{Structural rules for operational semantics, with $t\in\mathbb{Z}$, $k,l$ ranging over $\field$ and $u,v,w$ vectors of elements of $\field$ of the appropriate size. The only vector of $\field^0$ is written as $\bullet$ (as in Definition \ref{def:SemIB}), while a vector $(k_1 \; \dots \; k_n)^T \in \field^n$ is written as $k_1 \dots k_n$\label{fig:operationalSemantics}.} \end{figure*} Here we give the structural operational semantics of affine circuits, building on previous work~\cite{Bonchi2015} that considered only the core linear fragment, $\mathsf{Circ}$. We consider circuits to be \emph{programs} that have an observable behaviour. Observations are possible interactions at the circuit's interface.
Since there are two interfaces, a left and a right one, each transition has two labels. In a transition $\fullcontext{t}{c}\dtrans{v}{w} \fullcontext{t'}{c'}$, $c$ and $c'$ are \emph{states}, that is, circuits augmented with information about which values $k \in \field$ are stored in each register ($\tikzfig{./generators/register}$ and $\tikzfig{./generators/co-register}$) at that instant of the computation. When transitioning to $c'$, the $v$ above the arrow is a vector of values with which $c$ synchronises on the left, and the $w$ below the arrow accounts for the synchronisation on the right. States are decorated with runtime contexts: $t$ and $t'$ are (possibly negative) integers that---intuitively---indicate the time when the transition happens. Indeed, in~\figref{fig:operationalSemantics}, every rule advances time by $1$ unit. ``Negative time'' is important: as we shall see in Example~\ref{example:1overX}, some executions must start in the past. The rules in the top section of \figref{fig:operationalSemantics} provide the semantics for the generators in \eqref{eq:SFcalculusSyntax1}: $\tikzfig{./generators/copy}$ is a \emph{copier}, duplicating the signal arriving on the left; $\tikzfig{./generators/delete}$ accepts any signal on the left and discards it, producing nothing on the right; $\tikzfig{./generators/add}$ is an \emph{adder} that takes two signals on the left and emits their sum on the right, $\tikzfig{./generators/zero}$ emits the constant $0$ signal on the right; $\tikzfig{./generators/scalar}$ is an \emph{amplifier}, multiplying the signal on the left by the scalar $r\in \field$. All the generators described so far are stateless. State is provided by $\stregister{l}$, which is a \emph{register}: a synchronous one-place buffer with the value $l$ stored. When it receives some value $k$ on the left, it emits $l$ on the right and stores $k$.
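As an informal illustration (ours, not part of the formal development), the stateless generators and the register can be simulated in Python under a fixed left-to-right reading of the rules; this flow-directed reading is only an intuition, since the calculus itself is relational, and all names below are our own. Wiring an adder and a copier through a register in a feedback loop reproduces the accumulator behaviour of Example \ref{exm:opsem}:

```python
class Register:
    """The register generator: a synchronous one-place buffer that, on
    each tick, emits its stored value and stores the new input."""
    def __init__(self, init=0):
        self.value = init

    def step(self, k):
        out, self.value = self.value, k
        return out

# Stateless generators, read with a left-to-right flow:
copy = lambda k: (k, k)   # copier: duplicates the signal
add = lambda k, l: k + l  # adder: emits the sum of its two inputs

# Feedback of an adder through a register computes running sums of the
# input stream: feeding 1, 0, 0, ... yields the constant stream 1, 1, 1, ...
reg = Register()
outputs = []
for k in [1, 0, 0, 0]:
    s = add(k, reg.value)  # combine fresh input with the register content
    out, back = copy(s)    # one copy goes out, one is fed back
    reg.step(back)         # the register stores the fed-back sum
    outputs.append(out)
print(outputs)  # prints [1, 1, 1, 1]
```

The printed stream matches the computation shown in Example \ref{exm:opsem}, whose denotation involves the fraction $\frac{1}{1-x}$.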
The behaviour of the affine generator $\tikzfig{one}$ depends on the time: when $t=0$, it emits $1$, otherwise it emits $0$. Observe that the behaviour of all other generators is time-independent. So far, we have described the behaviour of the components in \eqref{eq:SFcalculusSyntax1} using the intuition that signal flows from left to right: in a transition $\dtrans{v}{w}$, the signal $v$ on the left is thought of as a trigger and $w$ as its effect. For the generators in \eqref{eq:SFcalculusSyntax2}, whose behaviour is defined by the rules in the second section of \figref{fig:operationalSemantics}, the behaviour is symmetric---indeed, here it is helpful to think of signals as flowing from right to left. The next section of~\figref{fig:operationalSemantics} specifies the behaviours of the structural connectors of~\eqref{eq:SFcalculusSyntax3}: $\tikzfig{./generators/sym}$ is a \emph{twist}, swapping two signals, $\tikzfig{./generators/empty-diag}$ is the empty circuit and $\tikzfig{./generators/id}$ is the \emph{identity} wire: the signals on the left and on the right ports are equal. Finally, the rule for sequential $\,\ensuremath{;}\,$ composition forces the two components to have the same value $v$ on the shared interface, while for parallel $\oplus$ composition, components can proceed independently. Observe that both forms of composition require component transitions to happen at the same time. \begin{definition} Let $c\in \mathsf{ACirc}$. The \emph{initial state} $c_0$ of $c$ is the one where all the registers store $0$. A \emph{computation} of $c$ starting at time $t\leq 0$ is a (possibly infinite) sequence of transitions \begin{equation}\label{eq:computation} \fullcontext{t}{c_0} \dtrans{v_t}{w_t} \fullcontext{t+1}{c_{1}} \dtrans{v_{t+1}}{w_{t+1}} \fullcontext{t+2}{c_{2}} \dtrans{v_{t+2}}{w_{t+2}} \dots \end{equation} \end{definition} Since all transitions increment the time by $1$, it suffices to record the time at which a computation starts.
As a result, to simplify notation, we will omit the runtime context after the first transition and, instead of~\eqref{eq:computation}, write \[\context{t}c_0 \dtrans{v_t}{w_t} c_{1} \dtrans{v_{t+1}}{w_{t+1}} c_{2} \dtrans{v_{t+2}}{w_{t+2}} \dots\] \begin{example}\label{exm:opsem} The circuit in Example \ref{ex:loop} can perform the following computation. \begin{multline*} \context{0} \!{\tikzfig{accu-buffer-0}}\!\!\! \dtrans{1}{1} \!{\tikzfig{accu-buffer-1}}\!\!\! \dtrans{0}{1} \!{\tikzfig{accu-buffer-1}}\!\!\! \dtrans{0}{1} \cdots \end{multline*} \end{example} In the example above, the flow has a clear left-to-right orientation, albeit with a feedback loop. For arbitrary circuits of $\mathsf{ACirc}$ this is not always the case, which sometimes results in unexpected operational behaviour. \begin{example}\label{example:1overX} In $\tikzfig{one-coreg}$ it is not possible to identify a consistent flow: $\tikzfig{one}$ goes from left to right, while $\tikzfig{./generators/co-register}$ goes from right to left. Observe that there is no computation starting at $t=0$, since in the initial state the register contains $0$ while $\tikzfig{one}$ must emit $1$. There is, however, a (unique!) computation starting at time $t=-1$, which loads the register with $1$ so that $\tikzfig{one}$ can emit $1$ at time $t=0$. \[\context{-1} \onecoreg{0} \dtrans{\bullet}{1} \onecoreg{1} \dtrans{\bullet}{0} \onecoreg{0} \dtrans{\bullet}{0} \onecoreg{0} \dtrans{\bullet}{0} \dots\] Similarly, $\onetwocoregs{}{}$ features a unique computation starting at time $t=-2$. \[ \context{-2} \onetwocoregs{0}{0} \dtrans{\bullet}{1} \onetwocoregs{0}{1} \dtrans{\bullet}{0} \onetwocoregs{1}{0} \dtrans{\bullet}{0} \onetwocoregs{0}{0} \dtrans{\bullet}{0} \dots \] \end{example} It is worthwhile clarifying the reason why, in the affine calculus, some computations start in the past. As we have already mentioned, in the linear fragment the semantics of all generators is time-independent.
It follows easily that time-independence is a property enjoyed by all purely linear circuits. The behaviour of $\tikzfig{one}$, however, enforces a particular action to occur at time $0$. Considering this in conjunction with a right-to-left register results in $\onecoreg{}$, and the effect is to anticipate that action by one step to time $-1$, as shown in Example~\ref{example:1overX}. It is obvious that this construction can be iterated, and it follows that the presence of a single time-dependent generator results in a calculus in which the computation of some terms must start at a finite, but unbounded time in the past. \begin{example}\label{example:empty} Another circuit with conflicting flow is $\tikzfig{empty}$. Here there is no possible transition at $t=0$, since at that time $\tikzfig{one}$ must emit a $1$ and $\tikzfig{./generators/co-zero}$ can only synchronise on a $0$. Instead, the circuit $\tikzfig{./generators/empty-diag}$ can always perform an infinite computation $\context{t} \tikzfig{./generators/empty-diag} \dtrans{\bullet}{\bullet} \tikzfig{./generators/empty-diag} \dtrans{\bullet}{\bullet} \dots$, for any $t\leq 0$. Roughly speaking, the computations of these two $\sort{0}{0}$ circuits are operational mirror images of the two possible denotations of Proposition~\ref{prop:twopossibility}. This intuition will be made formal in Section \ref{sec:fullabs}. For now, it is worth observing that for all $c$, $\tikzfig{./generators/empty-diag} \oplus c$ can perform the same computations as $c$, while $\tikzfig{empty} \oplus c$ cannot ever make a transition at time $0$. \end{example} \begin{example}\label{example:spancospan} Consider the circuit $\spanregs{}$, which again features conflicting flow. Our equational theory equates it with $\tikzfig{./generators/id}$, but the computations involved are subtly different.
Indeed, for any sequence $a_i\in \field$, it is obvious that $\tikzfig{./generators/id}$ admits the computation \begin{equation}\label{eq:idcomp} \context{0} \tikzfig{./generators/id} \dtrans{a_0}{a_0} \tikzfig{./generators/id} \dtrans{a_1}{a_1} \tikzfig{./generators/id} \dtrans{a_2}{a_2} \dots \end{equation} The circuit $\spanregs{}$ admits a similar computation, but we must begin at time $t=-1$ in order to first ``load'' the registers with $a_0$: \begin{equation}\label{eq:spanregscomp} \context{-1} \spanregs{0} \dtrans{0}{0} \spanregs{a_0} \dtrans{a_0}{a_0} \spanregs{a_1} \dtrans{a_1}{a_1} \spanregs{a_2} \dtrans{a_2}{a_2} \dots \end{equation} The circuit $\cospanregs{}{}$, which again is equated with $\tikzfig{./generators/id}$ by the equational theory, is more tricky. Although every computation of $\tikzfig{./generators/id}$ can be reproduced, $\cospanregs{}{}$ admits additional, problematic computations. Indeed, consider \begin{equation}\label{eq:cospanregscomp} \context{0} \cospanregs{0}{0} \dtrans{0}{1} \cospanregs{0}{1} \end{equation} at which point no further transition is possible---the circuit can deadlock. \end{example} The following lemma follows from the rules of \figref{fig:operationalSemantics} by an easy structural induction. It states that all circuits can stay idle \emph{in the past}. \begin{lemma}\label{lemma:idle} Let $c\in \mathsf{ACirc}[n,m]$ with initial state $c_0$. Then $\context{t} c_0 \dtrans{0}{0} c_0$ if $t < 0$. \end{lemma} \subsection{Trajectories} For the non-affine version of the signal flow calculus, we studied in \cite{Bonchi2015} \emph{traces} arising from computations. For the affine extension, this is not possible since, as explained above, we must also consider computations that start in the past. In this paper, rather than traces we adopt a common control-theoretic notion.
\begin{definition} An $(n,m)$-\emph{trajectory} $\sigma$ is a $\mathbb{Z}$-indexed sequence $\sigma \mathrel{:} \mathbb{Z} \to \field^n\times\field^m$ that is finite in the past, i.e., for which $\exists j\in \mathbb{Z}$ such that $\sigma(i) = (0,0)$ for $i\leq j$. \end{definition} By the universal property of the product we can identify $\sigma \mathrel{:} \mathbb{Z} \to \field^n\times\field^m$ with the pairing $\langle\sigma_l, \sigma_r\rangle$ of $\sigma_l \mathrel{:} \mathbb{Z} \to \field^n$ and $\sigma_r \mathrel{:} \mathbb{Z} \to\field^m$. A $(k,m)$-trajectory $\sigma$ and $(m,n)$-trajectory $\tau$ are \emph{compatible} if $\sigma_r = \tau_l$. In this case, we can define their composite, a $(k,n)$-trajectory $\sigma\,\ensuremath{;}\, \tau$ by $\sigma\,\ensuremath{;}\, \tau \ \ensuremath{:\!\!=}\ \langle \sigma_l, \tau_r \rangle$. Given an $(n_1,m_1)$-trajectory $\sigma_1$, and an $(n_2,m_2)$-trajectory $\sigma_2$, their product, an $(n_1+n_2,m_1+m_2)$-trajectory $\sigma_1\oplus \sigma_2$, is defined $(\sigma_1\oplus \sigma_2)(i)\ \ensuremath{:\!\!=}\ \left(\begin{array}{c} \!\sigma_1(i)\! \\ \!\sigma_2(i)\! \end{array}\right)$. Using these two operations we can organise \emph{sets} of trajectories into a prop. \begin{definition}\label{def:opsettraj} The composition of two sets of trajectories is defined as $ S\,\ensuremath{;}\, T \ \ensuremath{:\!\!=}\ \{\sigma\,\ensuremath{;}\, \tau\mid \sigma\in S, \tau\in T \text{ are compatible}\}. $ The product of sets of trajectories is defined as $ S_1\oplus S_2\ \ensuremath{:\!\!=}\ \{ \sigma_1\oplus \sigma_2 \mid \sigma_1\in S_1, \sigma_2\in S_2\}. $ \end{definition} Clearly both operations are strictly associative. The unit for $\oplus$ is the singleton with the unique $(0,0)$-trajectory. Also $\,\ensuremath{;}\,$ has a two-sided identity, given by sets of ``copycat'' $(n,n)$-trajectories.
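These operations on single trajectories can be sketched concretely; the Python sketch below is purely illustrative (all names are hypothetical), representing a trajectory as a finite-support dict from times to (left, right) tuples, with the zero signal everywhere else. The set-level operations of Definition~\ref{def:opsettraj} lift these pointwise.

```python
def tval(sigma, i, nl, nr):
    # value of a trajectory at time i; trajectories are finite in
    # the past, so times outside the support carry the zero signal
    return sigma.get(i, ((0,) * nl, (0,) * nr))

def compose(sigma, tau, k, m, n):
    # sigma ; tau for a (k,m)- and an (m,n)-trajectory: defined only
    # when they are compatible, i.e. sigma_r = tau_l at every time
    times = set(sigma) | set(tau)
    if any(tval(sigma, i, k, m)[1] != tval(tau, i, m, n)[0]
           for i in times):
        return None
    return {i: (tval(sigma, i, k, m)[0], tval(tau, i, m, n)[1])
            for i in times}

def oplus(sigma, tau, n1, m1, n2, m2):
    # product: stack the left signals and the right signals
    times = set(sigma) | set(tau)
    return {i: (tval(sigma, i, n1, m1)[0] + tval(tau, i, n2, m2)[0],
                tval(sigma, i, n1, m1)[1] + tval(tau, i, n2, m2)[1])
            for i in times}

# a "copycat" (1,1)-trajectory: left and right signals coincide
cc = {0: ((1,), (1,)), 1: ((0,), (0,)), 2: ((1,), (1,))}
```

Composing a copycat with itself returns it unchanged, matching its role as the identity for $\,\ensuremath{;}\,$.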
Indeed, we have that: \begin{proposition} Sets of $(n,m)$-trajectories are the arrows $n\rightarrow m$ of a prop $\mathsf{Traj}$ with composition and monoidal product given as in Definition \ref{def:opsettraj}. \end{proposition} $\mathsf{Traj}$ serves as the domain for our operational semantics: given a circuit $c$ and an \emph{infinite} computation $$\context{t} c_0 \dtrans{u_t}{v_t}c_1\dtrans{u_{t+1}}{v_{t+1}}c_2\dtrans{u_{t+2}}{v_{t+2}} \dots $$ its associated trajectory $\sigma$ is \begin{equation}\label{eq:compt-traj} \sigma(i)= \begin{cases} (u_{i},v_{i}) & \text{ if } i \geq t,\\ (0 ,0) & \text{ otherwise.}\\ \end{cases} \end{equation} \vspace{-.2cm} \begin{definition}\label{def:trajectories} For a circuit $c$, $\osem{c}$ is the set of trajectories given by its infinite computations, following the translation~\eqref{eq:compt-traj} above. \end{definition} The assignment $c\mapsto\osem{c}$ is compositional, that is: \begin{theorem}\label{thm:opsem-morphism} $\osem{\cdot}\colon \mathsf{ACirc} \to \mathsf{Traj}$ is a morphism of props. \end{theorem} \begin{proof}In Appendix~\ref{app:proofs}.\end{proof} \begin{example} Consider the computations \eqref{eq:idcomp} and \eqref{eq:spanregscomp} from Example \ref{example:spancospan}. According to~\eqref{eq:compt-traj} both are translated into the trajectory $\sigma$ mapping $i\geq 0$ into $(a_i,a_i)$ and $i < 0$ into $(0,0)$. The reader can easily verify that, more generally, it holds that $\osem{\tikzfig{./generators/id}}=\osem{\spanregs{}}$. At this point it is worth remarking that the two circuits would be distinguished when looking at their traces: the trace of computation \eqref{eq:idcomp} is different from the trace of \eqref{eq:spanregscomp}. Indeed, the full abstraction result in \cite{Bonchi2015} does not hold for all circuits, but only for those of a certain kind.
The affine extension obliges us to consider computations that start in the past and, in turn, this drives us toward a stronger full abstraction result, shown in the next section. Before concluding, it is important to emphasise that $\osem{\tikzfig{./generators/id}}=\osem{\cospanregs{}{}}$ also holds. Indeed, problematic computations, like \eqref{eq:cospanregscomp}, are all finite and, by definition, do not give rise to any trajectory. The reader should note that the use of trajectories is not a semantic device to get rid of problematic computations. In fact, trajectories do not appear in the statement of our full abstraction result; they are merely a convenient tool to prove it. Another result (Proposition \ref{prop:sfg-infinite}) independently takes care of ruling out problematic computations. \end{example} \section{Functional Behaviour and Signal Flow Graphs}\label{sec:sfg} There is a sub-prop $\mathsf{SF}$ of $\mathsf{Circ}$ of classical \emph{signal flow graphs} (see \emph{e.g.} \cite{mason1953feedback}). Here signal flows left-to-right, possibly featuring \emph{feedback loops}, provided that these go through at least one register. Feedback can be captured algebraically via an operation $\Tr{}(\cdot) \: \mathsf{Circ}[n+1,m+1] \to \mathsf{Circ}[n,m]$ taking $c \: n+1 \to m+1$ to: \ctikzfig{trace-form} Following~\cite{Bonchi2015}, let us call $\mathsf{f}\mathsf{Circ}$ the free sub-prop of $\mathsf{Circ}$ of circuits built from~\eqref{eq:SFcalculusSyntax3} and the generators of \eqref{eq:SFcalculusSyntax1}, without $\tikzfig{one}$. Then $\mathsf{SF}$ is defined as the closure of $\mathsf{f}\mathsf{Circ}$ under $\Tr{}(\cdot)$. For instance, the circuit of Example~\ref{exm:opsem} is in $\mathsf{SF}$. Signal flow graphs are intimately connected to the executability of circuits. In general, the rules of Figure~\ref{fig:operationalSemantics} do not assume a fixed flow orientation.
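As an illustration of $\Tr{}(\cdot)$, here is a hedged Python sketch on stream transducers (all names hypothetical): a circuit is modelled as a step function from (state, inputs) to (outputs, new state), and feedback bends the first output back to the first input. To keep the sketch executable, the register guarding the loop is made explicit, so the fed-back value arrives with a one-step delay. Applied to a component that adds its two inputs and emits the sum on both outputs, the sketch reproduces the accumulator behaviour of Example~\ref{exm:opsem}: inputs $1,0,0,\dots$ yield outputs $1,1,1,\dots$.

```python
def run(step, state, inputs):
    # drive a transducer step: (state, ins) -> (outs, state')
    # over a finite input stream, collecting the output stream
    outs = []
    for x in inputs:
        y, state = step(state, x)
        outs.append(y)
    return outs

def trace(step):
    # feedback: bend the first output back to the first input;
    # the loop is guarded by an explicit register, so the value
    # fed back at time t is the one produced at time t - 1
    def looped(state, x):
        reg, inner = state
        (y0, *ys), inner = step(inner, (reg, *x))
        return tuple(ys), (y0, inner)
    return looped

def add_copy(state, xs):
    # stateless component of sort (2, 2): adds its two inputs and
    # emits the sum on both outputs (here over the integers)
    s = xs[0] + xs[1]
    return (s, s), state

acc = trace(add_copy)  # sort (1, 1): a running-sum accumulator
```

For example, `run(acc, (0, None), [(1,), (0,), (0,)])` produces the output stream $(1,),(1,),(1,)$ of Example~\ref{exm:opsem}.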
As a result, some circuits in $\mathsf{Circ}$ are not executable as \emph{functional input-output} systems, as we have demonstrated with $\onecoreg{}$, $\tikzfig{empty}$ and $\cospanregs{}{}$ of Examples~\ref{example:1overX}-\ref{example:spancospan}. Notice that none of these are signal flow graphs. In fact, the circuits of $\mathsf{SF}$ do not have pathological behaviour, as we shall state more precisely in Proposition~\ref{prop:sfg-infinite}. At the denotational level, signal flow graphs correspond precisely to \emph{rational} functional behaviours, that is, matrices whose coefficients are in the ring $\ratio$ of \emph{rational fractions} (see Section~\ref{sec:frpoly-traj}). We call such matrices rational matrices. One may check that the semantics of a signal flow graph $c \colon\sort{n}{m}$ is always of the form $\dsem{c} = \{(v, A \cdot v) \mid v \in \frpoly^{n} \}$, for some $m \times n$ rational matrix $A$. Conversely, all relations that are graphs of rational matrices can be expressed as signal flow graphs. \begin{proposition}\label{prop:sfg-rational} Given $c\colon\sort{n}{m}$, we have $\dsem{c}=\{(p, A\cdot p)\mid p\in\frpoly^n\}$ for some rational $m\times n$ matrix $A$ iff there exists a signal flow graph $f$, i.e., a circuit $f\colon\sort{n}{m}$ of $\mathsf{SF}$, such that $\dsem{f}=\dsem{c}$. \end{proposition} \begin{proof} This is a folklore result in control theory which can be found in~\cite{Rutten08_rationalstreamscoalgebraically}. The details of the translation between rational matrices and circuits of $\mathsf{SF}$ can be found in~\cite[Section 7]{BonchiSZ17}. \end{proof} The following gives an alternative characterisation of rational matrices---and therefore, by Proposition~\ref{prop:sfg-rational}, of the behaviour of signal flow graphs---that clarifies their role as realisations of circuits. \begin{proposition}\label{prop:rational-map} An $m\times n$ matrix $A$ is rational iff $A\cdot r\in\ratio^m$ for all $r\in\ratio^n$.
\end{proposition} \begin{proof}In Appendix~\ref{app:proofs}.\end{proof} Proposition~\ref{prop:rational-map} is another guarantee of good behaviour---it justifies the name of inputs (resp. outputs) for the left (resp. right) ports of signal flow graphs. Recall from Section~\ref{sec:frpoly-traj} that rational fractions can be mapped to Laurent series of nonnegative degree, i.e., to plain power series. Operationally, these correspond to trajectories that start after $t=0$. Proposition~\ref{prop:rational-map} guarantees that any trajectory of a signal flow graph whose first nonzero value on the left appears at time $t=0$ will not have nonzero values on the right starting before time $t=0$. In other words, signal flow graphs can be seen as processing a stream of values from left to right. As a result, their ports can be clearly partitioned into inputs and outputs. But the circuits of $\mathsf{SF}$ are too restrictive for our purposes. For example, $\tikzfig{x+1}$ can also be seen to realise a functional behaviour transforming inputs on the left into outputs on the right, yet it is not in $\mathsf{SF}$. Its behaviour is no longer linear, but affine. Hence, we need to extend signal flow graphs to include functional affine behaviour. The following definition does just that. \begin{definition}\label{def:ASFG} Let $\mathsf{ASF}$ be the sub-prop of $\mathsf{ACirc}$ obtained from \emph{all} the generators in \eqref{eq:SFcalculusSyntax1}, closed under $\Tr{}(\cdot)$. Its circuits are called \emph{affine signal flow graphs}. \end{definition} As before, none of $\onecoreg{}$, $\tikzfig{empty}$ and $\cospanregs{}{}$ from Examples~\ref{example:1overX}-\ref{example:spancospan} are affine signal flow graphs. In fact, $\mathsf{ASF}$ rules out pathological behaviour: all computations can be extended to be infinite, or in other words, do not get stuck.
\begin{proposition}\label{prop:sfg-infinite} Given an affine signal flow graph $f$, for every computation $$\context{t} f_0 \dtrans{u_t}{v_t}f_{1}\dtrans{u_{t+1}}{v_{t+1}}\dots f_{n} $$ there exists a trajectory $\sigma\in \osem{f}$ such that $\sigma(i) = (u_i,v_i)$ for $t\leq i< t+n$. \end{proposition} \begin{proof} By induction on the structure of affine signal flow graphs. \end{proof} If $\mathsf{SF}$ circuits correspond precisely to $\ratio$-matrices, those of $\mathsf{ASF}$ correspond precisely to $\ratio$-affine transformations. \begin{definition} A map $f:\;\frpoly^n\to \frpoly^m$ is an \emph{affine map} if there exists an $m\times n$ matrix $A$ and $b\in\frpoly^m$ such that $f(p) = A\cdot p+b$ for all $p\in\frpoly^n$. We call the pair $(A,b)$ the representation of $f$. \end{definition} The notion of rational affine map is a straightforward extension of the linear case and so is the characterisation in terms of rational input-output behaviour. \begin{definition} An affine map $f:\; p\mapsto A\cdot p+b$ is \emph{rational} if $A$ and $b$ have coefficients in $\ratio$. \end{definition} \begin{proposition}\label{prop:affine-rational} An affine map $f:\;\frpoly^n\to \frpoly^m$ is rational iff $f(r)\in\ratio^m$ for all $r\in\ratio^n$. \end{proposition} \begin{proof}In Appendix~\ref{app:proofs}.\end{proof} The following extends the correspondence of Proposition~\ref{prop:sfg-rational}, showing that $\mathsf{ASF}$ is the rightful affine heir of $\mathsf{SF}$. \begin{proposition}\label{prop:asfg-rational} Given $c\colon\sort{n}{m}$, we have $\dsem{c}=\{(p, f(p))\mid p\in\frpoly^n\}$ for some rational affine map $f$ iff there exists an affine signal flow graph $g$, i.e., a circuit $g\colon\sort{n}{m}$ of $\mathsf{ASF}$, such that $\dsem{g}=\dsem{c}$. \end{proposition} \begin{proof} Let $f$ be given by $p\mapsto Ap+b$ for some rational $m\times n$ matrix $A$ and vector $b\in\ratio^m$.
By Proposition~\ref{prop:sfg-rational}, we can find a circuit $c_A$ of $\mathsf{SF}$ such that \smallskip \noindent \begin{minipage}{.65\textwidth} $\dsem{c_A}=\{(p, A\cdot p)\mid p\in\frpoly^n\}$. Similarly, we can represent $b$ as a signal flow graph $c_b$ of sort $\sort{1}{m}$. Then, the circuit on the right is clearly in $\mathsf{ASF}$ and verifies $\dsem{c} = \{(p,Ap+b)\mid p\in\frpoly^n\}$ as required. \end{minipage} \begin{minipage}{.35\textwidth} \begin{center} $c\;:=\;\tikzfig{affine-sfg}$ \end{center} \end{minipage} For the converse direction it is straightforward to check by structural induction that the denotation of affine signal flow graphs is the graph (in the set-theoretic sense of pairs of values) of some rational affine map. \end{proof} \section{Realisability}\label{sec:realisability} In the previous section we gave a restricted class of morphisms with good behavioural properties. We may wonder how much of $\mathsf{ACirc}$ we can capture with this restricted class. The answer is, in a precise sense: most of it. Surprisingly, the behaviours realisable in $\mathsf{Circ}$---the purely linear fragment---are no more expressive than those of $\mathsf{SF}$. In fact, from an operational (or denotational, by full abstraction) point of view, $\mathsf{Circ}$ is nothing more than a jumbled-up version of $\mathsf{SF}$. Indeed, it turns out that $\mathsf{Circ}$ enjoys a \emph{realisability} theorem: any circuit $c$ of $\mathsf{Circ}$ can be associated with one of $\mathsf{SF}$ that implements or realises the behaviour of $c$ in an executable form. \smallskip \noindent \begin{minipage}{.7\textwidth} But the corresponding realisation may not flow neatly from left to right like signal flow graphs do---its inputs and outputs may have been moved from one side to the other.
Consider, for example, the circuit on the right \end{minipage} \begin{minipage}{.3\textwidth} $\quad\tikzfig{sfg-jumbled}$ \end{minipage} \smallskip \noindent It does not belong to $\mathsf{SF}$ but it can be read as a signal flow graph with an input that has been bent and moved to the bottom right. The behaviour it realises can therefore be executed by rewiring this port to obtain a signal flow graph: \[\tikzfig{sfg-rewired} \quad\stackrel{\tiny \AIH }{=}\quad \tikzfig{sfg-rewired-1}\] We will not make this notion of rewiring precise here but refer the reader to~\cite{Bonchi2015} for the details. The intuition is simply that a rewiring partitions the ports of a circuit into two sets---that we call inputs and outputs---and uses $\tikzfig{cup}$ or $\tikzfig{cap}$ to bend input ports to the left and output ports to the right. The realisability theorem then states that we can always recover a (not necessarily unique) signal flow graph from any circuit by performing these operations. \begin{theorem}{\cite[Theorem~5]{Bonchi2015}}\label{thm:realisability} Every circuit in $\mathsf{Circ}$ is equivalent to the rewiring of a signal flow graph, called its \emph{realisation}. \end{theorem} This theorem allows us to extend the notion of inputs and outputs to all circuits of $\mathsf{Circ}$. \begin{definition} A port of a circuit $c$ of $\mathsf{Circ}$ is an \emph{input} (resp. \emph{output}) port, if there exists a realisation for which it is an input (resp. output). \end{definition} Note that, since realisations are not necessarily unique, the same port can be both an input and an output. Then, the realisability theorem (Theorem~\ref{thm:realisability}) says that every port is always an input, an output or both (but never neither). An output-only port is an output port that is not an input port. Similarly, an input-only port is an input port that is not an output port.
\begin{example} The left port of the register $\tikzfig{./generators/register}$ is input-only whereas its right port is output-only. In the identity wire, both ports are input and output ports. The single port of $\tikzfig{./generators/zero}$ is output-only; that of $\tikzfig{./generators/delete}$ is input-only. \end{example} While in the purely linear case all behaviours are realisable, the general case of $\mathsf{ACirc}$ is a bit more subtle. To make this precise, we can extend our definition of realisability to include affine signal flow graphs. \begin{definition} A circuit of $\mathsf{ACirc}$ is \emph{realisable} if its ports can be rewired so that it is equivalent to a circuit of $\mathsf{ASF}$. \end{definition} \begin{example} $\tikzfig{one}$ is realisable; $\boxtikzfig{one-coreg}{r}\,$ is not. \end{example} Notice that Proposition~\ref{prop:asfg-rational} gives the following equivalent semantic criterion for realisability. Realisable behaviours are precisely those that map rationals to rationals. \begin{theorem}\label{thm:Filippo-realisability} A circuit $c$ is realisable iff its ports can be partitioned into two sets, that we call inputs and outputs, such that the corresponding rewiring of $c$ is an affine rational map from inputs to outputs. \end{theorem} We offer another perspective on realisability below: realisable behaviours correspond precisely to those for which the $\tikzfig{one}$ constants are connected to inputs of the underlying $\mathsf{Circ}$-circuit. First, notice that, since \[\tikzfig{one-copy} \quad \myeq{1-dup} \quad\tikzfig{one2}\quad \text{and}\quad \tikzfig{one-delete} \; \myeq{1-del} \quad\tikzfig{empty-diag}\] in $\AIH$, we can assume without loss of generality that each circuit contains exactly one $\tikzfig{one}\,$. \begin{proposition}\label{prop:single-foot} Every circuit $c$ of $\mathsf{ACirc}$ is equivalent to one with precisely one $\tikzfig{one}$ and no $\tikzfig{co-one}$.
\end{proposition} \begin{proof}In Appendix~\ref{app:proofs}.\end{proof} For $c\colon\sort{n}{m}$ a circuit of $\mathsf{ACirc}$, we will call $\hat{c}$ the circuit of $\mathsf{Circ}$ of sort $\sort{n+1}{m}$ that one obtains by first transforming $c$ into an equivalent circuit with a single $\tikzfig{one}$ and no $\tikzfig{co-one}$ as above, then removing this $\tikzfig{one}$, and replacing it by an identity wire that extends to the left boundary. \begin{theorem}\label{thm:affine-realisability} A circuit $c$ is realisable iff $\tikzfig{one}$ is connected to an input port of $\hat{c}$. \end{theorem} \section{From trajectories to Laurent series} Trajectories are just Laurent series in disguise. Each computation with runtime context defines a vector in $\laur^n \times \laur^m$, i.e., a pair of tuples of Laurent series in a straightforward way: for example, for a $(1,1)$ circuit $c$, the trajectory given by the infinite computation \[\context{p} c_0 \dtrans{u_0}{v_0} c_1 \dtrans{u_1}{v_1} c_2 \dtrans{u_2}{v_2} c_3 \dtrans{u_3}{v_3} \dots\] can be mapped to the pair of Laurent series $(\sigma,\tau)$ where $\sigma(i) = u_{i-p}$ and $\tau(i) = v_{i-p}$ for $i \geq p$ and $\sigma(i) = \tau(i) = 0$ otherwise. More generally, given an $(n,m)$-trajectory $\sigma$, we can just reorganise the components of each step of the trajectory to obtain a pair of vectors $L(\sigma)\in\laur^n \times \laur^m$, $$\left({\small\left( \begin{array}{c} \!\alpha^1\! \\ \vdots\\ \!\alpha^n\! \end{array}\right)}, \, {\small\left(% \begin{array}{c} \!\beta^1\! \\ \vdots\\ \!\beta^m\! \end{array}\right)}\right)$$ where \begin{equation}\label{eq:L-left} \alpha^j(i)= \sigma_l(i)^j \qquad \beta^l(i) = \sigma_r(i)^l \end{equation} Here, $\sigma_l(i)^j$ (resp. $\sigma_r(i)^l$) denotes the $j$th (resp. $l$th) component of $\sigma_l(i)$ (resp. $\sigma_r(i)$), for $1\leq j\leq n$ (resp. $1\leq l\leq m$) and all $i\in\mathbb{Z}$. Clearly, this operation is functorial and respects the monoidal product.
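The reorganisation performed by $L$ is purely mechanical; the following Python sketch is illustrative (names hypothetical), with each Laurent series represented as a finite-support dict from $\mathbb{Z}$ to coefficients:

```python
def L(sigma, n, m):
    # split an (n,m)-trajectory -- a finite-support dict
    # {time: (left, right)} of tuples -- into n + m Laurent series:
    # alpha^j(i) = sigma_l(i)^j and beta^l(i) = sigma_r(i)^l
    alpha = [{i: lr[0][j] for i, lr in sigma.items() if lr[0][j] != 0}
             for j in range(n)]
    beta = [{i: lr[1][l] for i, lr in sigma.items() if lr[1][l] != 0}
            for l in range(m)]
    return alpha, beta

# the trajectory of the identity wire carrying 1, 2, 3 from time 0 on
traj = {0: ((1,), (1,)), 1: ((2,), (2,)), 2: ((3,), (3,))}
alpha, beta = L(traj, 1, 1)   # left and right give the same series
```

Since the identity wire copies its left signal to the right, both components of $L(\sigma)$ are the same Laurent series.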
\begin{theorem} $L\colon \mathsf{Traj} \to \Rl{\laur}$ is a morphism of props. \end{theorem} \begin{theorem}\label{thm:main} $L \circ \osem{\cdot}=\dsem{\cdot}$ \end{theorem} \begin{proof} Since both are monoidal functors, it is enough to check the statement for the generators. \end{proof} \begin{corollary}\label{cor:dsemexperiments} Let $c\in \mathsf{ACirc}[0,0]$. Then $\dsem{c}=id_0$ iff $c\uparrow$. \end{corollary} \begin{proof} By Proposition \ref{prop:twopossibility}, either $\dsem{c}=id_0$ or $\dsem{c}=\emptyset$, which by means of Theorem \ref{thm:main} means that $L\circ \osem{c}=id_0$ or $L \circ \osem{c}=\emptyset$. By definition of $L$, this means that $\osem{c}$ either contains a trajectory or not. In the first case $c\uparrow$; in the second $ \not {c \uparrow}$. \end{proof}
\section{Introduction} The calculation of accurate potential energies of molecules and materials at affordable cost is at the heart of computational chemistry. While state-of-the-art \textit{ab initio} electronic structure theories can yield highly accurate results, they are computationally too expensive for routine applications. Density functional theory (DFT) is computationally cheaper and has thus enjoyed widespread applicability. However, DFT is hindered by a lack of systematic improvability and by an uncertain quality for many applications. In recent years, a variety of machine learning approaches have emerged which promise to mitigate the cost of highly accurate electronic structure methods while preserving accuracy. \cite{bartok_gaussian_2010, rupp_fast_2012,hansen_assessment_2013,ramakrishnan_big_2015, behler_perspective_2016, brockherde_bypassing_2017, schutt_schnet_2017, schutt_quantum-chemical_2017, smith_ani-1_2017, mcgibbon_improving_2017, collins_constant_2018, fujikake_gaussian_2018, lubbers_hierarchical_2018, nguyen_comparison_2018, welborn_transferability_2018, wu_moleculenet_2018, yao_tensormol-01_2018, cheng_universal_2019, cheng_regression_2019, christensen_operators_2019, grisafi_transferable_2019, smith_approaching_2019, unke_physnet_2019, fabrizio_machine_2020, chen_ground_2020, christensen_fchl_2020, dick_machine_2020, liu_transferable_2020, qiao_orbnet_2020,manzhos_machine_2020} While these machine learning methods share similar goals, they differ in the representation of the molecules and in the machine learning methodology itself. Here, we will focus on the molecular-orbital-based machine learning (MOB-ML) approach. \cite{welborn_transferability_2018, cheng_universal_2019, cheng_regression_2019} The defining feature of MOB-ML is its framing of learning highly accurate correlation energies as learning a sum of orbital pair correlation energies.
These orbital pair correlation energies can be individually regressed with respect to a feature vector representing the interaction of the molecular orbital pairs. Without approximation, it can be shown that such pair correlation energies add up to the correct total correlation energy for single-reference wave function methods. Phrasing the learning problem in this manner has the advantage that a given pair correlation energy, and, hence, a given feature vector, is independent of molecular size (after a certain size threshold has been reached) because of the inherent spatial locality of dynamic electron correlation. Consequently, operating in such an orbital pair interaction framework converts the general extrapolation task of training on small molecules and predicting on large molecules into an interpolation task of training on orbital pairs in a small molecule and predicting on the same orbital pairs in a large molecule. In this work, we address challenges introduced by operating in a vectorized molecular orbital pair interaction framework (Section~\ref{sec:theory}). We show how changes to the feature design affect the performance and transferability of MOB-ML models within the same molecular family (Section~\ref{subsec:sec1}) and across molecular families (Sections~\ref{subsec:sec2}-\ref{subsec:sec3}). We probe these effects on relative- and total-energy predictions for organic and transition-metal containing molecules, and we investigate the applicability of MOB-ML to transition-state structures and non-covalent interactions. \section{Theory} \label{sec:theory} MOB-ML predicts correlation energies based on information from the molecular orbitals. \cite{welborn_transferability_2018, cheng_universal_2019, cheng_regression_2019} The correlation energy $E^\text{corr}$ in the current study is defined as the difference between the true total electronic energy and the Hartree--Fock (HF) energy for a given basis set.
Without approximation, the correlation energy is expressed as a sum over correlation energy contributions from pairs of occupied orbitals $i$ and $j$, \cite{nesbet_brueckners_1958} \begin{equation} E^\text{corr} = \sum_{ij} \epsilon_{ij}. \end{equation} Electronic structure theories offer different ways of approximating these pair correlation energies. For example, the second-order M{\o}ller-Plesset perturbation theory (MP2) correlation energy is \cite{moller_note_1934} \begin{equation} \epsilon_{ij}^\text{MP2} = \sum_{ab} \frac{\left<ia||jb\right>^2}{F_{aa}+F_{bb} - F_{ii} - F_{jj}}, \end{equation} where $a,b$ denote virtual orbitals, $F$ the Fock matrix in the molecular orbital basis, and $\left<ia||jb\right>$ the anti-symmetrized exchange integral. We denote a general repulsion integral over the spatial coordinates $\mathbf{x}_1, \mathbf{x}_2$ of molecular orbitals $p,q,m,n$ following the chemist's notation as \begin{equation} \begin{split} [\resizebox{!}{\CapLen}{$\kappa$}^{pq}]_{mn} &= \left< pq | mn \right> \\&= \int d \mathbf{x}_1 d \mathbf{x}_2 p(\mathbf{x}_1)^* q(\mathbf{x}_1) \frac{1}{|\mathbf{x}_1 - \mathbf{x}_2|} m(\mathbf{x}_2)^* n(\mathbf{x}_2). \end{split} \end{equation} The evaluation of correlation energies with post-HF methods like MP2 or coupled-cluster theory (including CCSD(T)) involves computations that exceed the cost of HF theory by orders of magnitude. By contrast, MOB-ML predicts the correlation energy at negligible cost by machine-learning the map \begin{equation} \epsilon_{ij} \approx \epsilon^\text{ML}(\mathbf{f}_{ij}), \end{equation} where $\mathbf{f}_{ij}$ denotes the feature vector into which information on the molecular orbitals is compiled. Following our previous work,\cite{cheng_universal_2019} we define a canonical order of the orbitals $i$ and $j$ by rotating them into gerade and ungerade combinations (see Eq.~(7) in Ref.~\onlinecite{cheng_universal_2019}), creating the rotated orbitals $\tilde{i}$ and $\tilde{j}$. 
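The pair decomposition above can be sketched with NumPy. This sketch is illustrative only: the function and variable names are hypothetical, toy random arrays stand in for quantities that would come from an SCF calculation, and it follows the sign convention of the MP2 expression displayed above, assuming real canonical orbitals.

```python
import numpy as np

def mp2_pair_energies(f_occ, f_vir, g):
    """Pair correlation energies following the MP2 expression above:
    eps[i, j] = sum_ab <ia||jb>^2 / (F_aa + F_bb - F_ii - F_jj).
    f_occ, f_vir: diagonal Fock elements of the occupied and virtual
    blocks; g[i, a, j, b]: antisymmetrized integrals <ia||jb>,
    assumed real."""
    denom = (f_vir[None, :, None, None] + f_vir[None, None, None, :]
             - f_occ[:, None, None, None] - f_occ[None, None, :, None])
    # denom[i, a, j, b] = F_aa + F_bb - F_ii - F_jj
    return np.einsum('iajb,iajb->ij', g**2, 1.0 / denom)

# toy stand-ins for quantities an SCF calculation would provide
rng = np.random.default_rng(0)
f_occ = rng.uniform(-2.0, -1.0, 3)   # occupied orbital energies
f_vir = rng.uniform(1.0, 2.0, 4)     # virtual orbital energies
g = rng.normal(size=(3, 4, 3, 4))
eps = mp2_pair_energies(f_occ, f_vir, g)
e_corr = eps.sum()                   # E^corr = sum_ij eps_ij
```

Summing the pair matrix reproduces the total correlation energy; it is exactly this pairwise decomposition that MOB-ML regresses.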
The feature vector $\mathbf{f}_{ij}$ assembles information on the molecular orbital interactions: (i) Orbital energies of the valence-occupied and valence-virtual orbitals $F_{pp}$, (ii) mean-field interaction energy of valence-occupied and valence-occupied orbitals and of valence-virtual and valence-virtual orbitals $F_{pq}$, (iii) Coulomb interaction of valence-occupied and valence-occupied orbitals, of valence-occupied and valence-virtual orbitals, and valence-virtual and valence-virtual orbitals $[\resizebox{!}{\CapLen}{$\kappa$}^{pp}]_{qq}$, and (iv) exchange interaction of valence-occupied and valence-occupied orbitals, of valence-occupied and valence-virtual orbitals, and valence-virtual and valence-virtual orbitals $[\resizebox{!}{\CapLen}{$\kappa$}^{pq}]_{pq}$. We note that all of these pieces of information enter either the MP2 or the MP3 correlation energy expressions, which helps to motivate their value within our machine learning framework. We remove repetitive information from the feature vector and separate the learning problem into two cases: (i) $i\ne j$, where we employ the feature vector as defined in Eq.~(\ref{eq:off-diag_f}), and (ii) $i=j$, where we employ the feature vector as defined in Eq.~(\ref{eq:diag_f}), \onecolumngrid \begin{equation} \label{eq:off-diag_f} \begin{split} \mathbf{f}_{ij} =& \{ \{F_{\tilde{i}\tilde{i}}, F_{\tilde{i}\tilde{j}}, F_{\tilde{j}\tilde{j}}\}, \{F_{\tilde{i}k}\}, \{F_{\tilde{j}k}\}, \{F_{ab}\}, \\ & \{[\resizebox{!}{\CapLen}{$\kappa$}^{\tilde{i}\tilde{i}}]_{\tilde{i}\tilde{i}}, [\resizebox{!}{\CapLen}{$\kappa$}^{\tilde{i}\tilde{i}}]_{\tilde{j}\tilde{j}}, [\resizebox{!}{\CapLen}{$\kappa$}^{\tilde{j}\tilde{j}}]_{\tilde{j}\tilde{j}}\}, \{[\resizebox{!}{\CapLen}{$\kappa$}^{\tilde{i}\tilde{i}}]_{kk}\}, \{[\resizebox{!}{\CapLen}{$\kappa$}^{\tilde{j}\tilde{j}}]_{kk}\}, \{[\resizebox{!}{\CapLen}{$\kappa$}^{\tilde{i}\tilde{i}}]_{aa}\}, \{[\resizebox{!}{\CapLen}{$\kappa$}^{\tilde{j}\tilde{j}}]_{aa}\},
\{[\resizebox{!}{\CapLen}{$\kappa$}^{aa}]_{bb}\} , \\ & \{[\resizebox{!}{\CapLen}{$\kappa$}^{\tilde{i}\tilde{j}}]_{\tilde{i}\tilde{j}}\}, \{[\resizebox{!}{\CapLen}{$\kappa$}^{\tilde{i}k}]_{\tilde{i}k}\}, \{[\resizebox{!}{\CapLen}{$\kappa$}^{\tilde{j}k}]_{\tilde{j}k}\}, \{[\resizebox{!}{\CapLen}{$\kappa$}^{\tilde{i}a}]_{\tilde{i}a}\}, \{[\resizebox{!}{\CapLen}{$\kappa$}^{\tilde{j}a}]_{\tilde{j}a}\}, \{[\resizebox{!}{\CapLen}{$\kappa$}^{ab}]_{ab}\} \}, \\ \end{split} \end{equation}{} \begin{equation} \label{eq:diag_f} \begin{split} \mathbf{f}_{i} =& \{ F_{ii}, \{F_{ik}\}, \{F_{ab}\}, [\resizebox{!}{\CapLen}{$\kappa$}^{ii}]_{ii}, \{[\resizebox{!}{\CapLen}{$\kappa$}^{ii}]_{kk}\}, \{[\resizebox{!}{\CapLen}{$\kappa$}^{ii}]_{aa}\}, \{[\resizebox{!}{\CapLen}{$\kappa$}^{aa}]_{bb}\}, \{[\resizebox{!}{\CapLen}{$\kappa$}^{ik}]_{ik}\}, \{[\resizebox{!}{\CapLen}{$\kappa$}^{ia}]_{ia}\}, \{[\resizebox{!}{\CapLen}{$\kappa$}^{ab}]_{ab}\} \}. \\ \end{split} \end{equation}{} \twocolumngrid \noindent Here, the index $k$ denotes an occupied orbital other than $i$ and $j$. For blocks in the feature vector that include more than one element, we specify a canonical order of the feature vector elements. In our previous work,\cite{cheng_universal_2019} this order was given by the sum of the Euclidean distances between the centroids of orbital $\tilde{i}$ and $p$ and between the centroids of orbital $\tilde{j}$ and $p$. In the current work, we introduce a different strategy to sort the feature vector elements (Section~\ref{sec:sorting}), we modify the protocol with which we obtain the feature vector elements associated with $\tilde{i}, \tilde{j}$ (Section~\ref{sec:invariance}), and we revise our feature vector elements to ensure size consistency (Section~\ref{sec:sizeconsistency}). 
We provide a conceptual description of the changes to the feature set below, and we give the full definition of the feature vector elements and the criteria according to which the feature elements are ordered in Tables~S3--S6 in the Supporting Information. \subsection{Defining importance of feature vector elements} \label{sec:sorting} Careful ordering of the elements of the feature vector blocks is necessary in the current work because Gaussian process regression (GPR) is sensitive to permutation of the feature vector elements. Furthermore, the application of a Gaussian process requires that the feature vectors be of fixed length. \cite{rasmussen_gaussian_2006} Given the near-sighted nature of dynamical electron correlation, it is expected that only a limited number of orbital-pair interactions are important to predict the pair correlation energy with MOB-ML. To construct the fixed-length feature vector, a cutoff criterion must be introduced.\cite{welborn_transferability_2018} For some feature vector elements, a robust definition of importance is straightforward. The spatial distance between the centroids of orbitals $i$ and $a$ is, for example, a reliable proxy for the importance of the feature vector elements $\{[\resizebox{!}{\CapLen}{$\kappa$}^{ii}]_{aa}\}$ of the feature vector $\mathbf{f}_{i}$. However, the definition of importance is less straightforward for feature vector elements that involve more than two indices. The most prominent example is the $\{[\resizebox{!}{\CapLen}{$\kappa$}^{ab}]_{ab}\}$ feature vector block of $\mathbf{f}_{ij}$, which contains the exchange integrals between the valence-virtual orbitals $a$ and $b$ and which should be sorted with respect to the importance of these integrals for the prediction of the pair correlation energy $\epsilon_{ij}$.
It is non-trivial to define a spatial metric which defines the importance of the feature vector elements $\{[\resizebox{!}{\CapLen}{$\kappa$}^{ab}]_{ab}\}$ to predict the pair correlation energy $\epsilon_{ij}$; instead, we employ the MP3 approximation for the pair correlation energy, \begin{equation} \label{eq:mp3} \begin{split} \epsilon_{ij}^\text{MP3} &= \frac{1}{8} \sum_{abcd} \left(t_{ij}^{ab}\right)^* \left<ab||cd\right> t_{ij}^{cd} + \frac{1}{8} \sum_{klab} \left(t_{ij}^{ab}\right)^* \left<kl||ij\right> t_{kl}^{ab} \\&- \sum_{kabc} \left(t_{ij}^{ab}\right)^* \left<kb||ic\right> t_{kj}^{ac}, \\ \end{split} \end{equation} where $t_{ij}^{ab}$ denotes the T-amplitude. Although we operate in a local molecular orbital basis, the canonical formulae are used to define the importance criterion; if we consider orbital localization as a perturbation (as in Kapuy--M{\o}ller--Plesset theory \cite{kapuy_application_1983}), the canonical expression is the leading-order term. The term we seek to attach an importance to, $\{[\resizebox{!}{\CapLen}{$\kappa$}^{ab}]_{ab}\}$, appears in the first term of Eq.~(\ref{eq:mp3}), and all integrals necessary to compute this term are readily available as (a combination of) other feature elements, i.e., we do not incur any significant additional computational cost to obtain the importance of the feature vector elements. The way in which we determine the importance of the $\{[\resizebox{!}{\CapLen}{$\kappa$}^{ab}]_{ab}\}$ elements here is an example of a more general strategy that we employ, in which the importance is assigned according to the lowest-order perturbation theory in which the features first appear. Similar considerations have to be made for each feature vector block, all of which are specified in detail in Tables~S3 and S4 in the Supporting Information.
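The resulting ordering step can be sketched as follows (hypothetical values; the actual importance weights are the MP3-derived quantities specified in Tables S3 and S4 of the Supporting Information):

```python
import numpy as np

def sort_and_truncate(block_values, importances, length):
    """Order feature-block elements by descending importance and pad or
    truncate to a fixed length, as required for a fixed-size GPR feature
    vector (illustrative sketch only)."""
    order = np.argsort(importances)[::-1]
    sorted_vals = np.asarray(block_values, dtype=float)[order]
    out = np.zeros(length)
    n = min(length, sorted_vals.size)
    out[:n] = sorted_vals[:n]
    return out

# Hypothetical exchange integrals [kappa^ab]_ab and importance weights.
vals = [0.02, 0.31, 0.07, 0.15]
weights = [0.1, 0.9, 0.3, 0.6]
print(sort_and_truncate(vals, weights, 3))  # -> [0.31 0.15 0.07]
```

The same routine handles both cases that arise in practice: blocks with more elements than the fixed length are truncated to the most important ones, and blocks with fewer elements are zero-padded.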
\subsection{Orbital-index permutation invariance} \label{sec:invariance} The Fock, Coulomb, and exchange matrix elements that comprise MOB features are naturally invariant to rotation and translation of the molecule. However, some care is needed to ensure that these invariances are not lost in the construction of symmetrized MOB features. In particular, rotating the valence-occupied orbitals into gerade and ungerade combinations breaks orbital-index permutation invariance for energetically degenerate orbitals $i,j$ because the sign of the feature vector elements $M_{\tilde{j}p}$, \begin{equation} M_{\tilde{j}p} = \frac{1}{\sqrt{2}} M_{ip} - \frac{1}{\sqrt{2}} M_{jp}, \end{equation} depends on the arbitrary assignment of the indices $i$ and $j$. To rectify this issue, we include the absolute value of the generic feature vector element $M$ in the feature vector instead of the signed value, \begin{equation} M_{\tilde{j}p} = \left| \frac{1}{\sqrt{2}} M_{ip} - \frac{1}{\sqrt{2}} M_{jp} \right|, \end{equation} where $M_{\tilde{j}p}$ may be $F_{\tilde{j}p}$, $[\resizebox{!}{\CapLen}{$\kappa$}^{\tilde{j}\tilde{j}}]_{pp}$, or $[\resizebox{!}{\CapLen}{$\kappa$}^{\tilde{j}p}]_{\tilde{j}p}$. The corresponding gerade combination, \begin{equation} M_{\tilde{i}p} = \frac{1}{\sqrt{2}} M_{ip} + \frac{1}{\sqrt{2}} M_{jp}, \end{equation} is already orbital-index permutation invariant because we chose $M_{pq}$ ($p\ne q$) to be positive.
\cite{cheng_universal_2019} \subsection{Size consistency} \label{sec:sizeconsistency} Size consistency is the formal property by which the energy of a dimer of two infinitely separated molecules equals the sum of the energies of the isolated molecules.\cite{bartlett_many-body_1981, SzaboSizeConsistent} In the context of MOB-ML, satisfaction of this property requires that the contributions from the diagonal feature vectors are not affected by distant, non-interacting molecules and that \begin{equation} \epsilon^{ML}(\mathbf{f}_{ij})=0 \text{ for } r_{ij} = \infty \end{equation} for contributions from the off-diagonal feature vectors. To ensure that MOB-ML exhibits size consistency without the need for explicit training on the dimeric species, the following modifications to the feature vectors are made. \paragraph{Diagonal feature vector.} The feature vector as defined in Eq.~(\ref{eq:diag_f}) contains three blocks whose elements are independent of orbital $i$: $\{F_{ab}\}$, $\{[\resizebox{!}{\CapLen}{$\kappa$}^{aa}]_{bb}\}$, and $\{[\resizebox{!}{\CapLen}{$\kappa$}^{ab}]_{ab}\}$. The magnitude of these feature vector elements does not decay with an increasing distance between orbital $i$ localized on molecule $I$ and an orbital (for example, $a$) localized on molecule $J$. To address this issue, we multiply these feature vector elements by their estimated importance (see Section~\ref{sec:sorting}) so that they decay smoothly to zero. The other feature vector elements decay to zero when the involved orbitals are non-interacting, albeit at different rates; we take the cube of feature vector elements of the type $\{[\resizebox{!}{\CapLen}{$\kappa$}^{pp}]_{qq}\}$ to achieve a similar decay rate for all feature vector elements in the short- to medium-range, which facilitates machine learning.
\paragraph{Off-diagonal feature vector.} We modify the off-diagonal feature vector such that $\mathbf{f}_{ij}=\mathbf{0} \text{ for }r_{ij} = \infty$ by first applying the newly introduced changes for $\mathbf{f}_{i}$ also to $\mathbf{f}_{ij}$. Further action is needed in the off-diagonal case because many feature vector elements do not decay to zero when the distance between $i$ and $j$ is large, owing to the rotation of the orbitals into a gerade and an ungerade combination, e.g., $F_{\tilde{i}k}= \left| \frac{1}{\sqrt{2}} F_{ik} + \frac{1}{\sqrt{2}} F_{jk} \right| = \left| \frac{1}{\sqrt{2}} F_{ik} \right| \text{ for } r_{ij}=\infty, r_{jk}=\infty$. As a remedy, we apply a damping function of the form $\frac{1}{1+\frac{1}{6}(r_{ij}/r_0)^6}$ to each feature vector element. The form of this damping function is inspired by the semi-classical limit of the MP2 expression, as it is also used for semi-classical dispersion corrections. \cite{grimme_dispersion_2016} The damping radius, $r_0$, needs to be sufficiently large so as not to interfere with machine learning at small $r_{ij}$. If a damping radius close to zero were chosen, all off-diagonal feature vectors would be zero, which would nullify their information content; however, the damping radius $r_0$ should also not be too large, because size consistency would then have to be learned explicitly for all pair distances at which the off-diagonal feature vector is not yet fully damped to zero. Therefore, we employ a damping radius in the intermediate-distance regime; we empirically found $r_0 = 5.0$~Bohr to work well. Lastly, we enforce that $\epsilon^{ML}(\mathbf{0})=0$. The MOB features are engineered to respect this limit and would, for example, trivially predict a zero-valued pair correlation energy in a linear regression with a zero intercept, without any additional training. However, the Gaussian process regression we apply in this work does not trivially yield a zero-valued pair correlation energy for a zero-valued feature vector.
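The functional form of this damping is easily sketched numerically (with $r_0 = 5.0$~Bohr as chosen above; values in the comments are rounded):

```python
import numpy as np

R0 = 5.0  # damping radius in Bohr, the value found empirically in the text

def damp(r, r0=R0):
    """Damping factor 1 / (1 + (1/6) (r/r0)^6), applied elementwise to the
    off-diagonal feature vector (sketch of the functional form above)."""
    return 1.0 / (1.0 + (np.asarray(r, dtype=float) / r0) ** 6 / 6.0)

print(damp(0.0))   # -> 1.0  (short range: essentially undamped)
print(damp(5.0))   # = 6/7, approx. 0.857 (intermediate range)
print(damp(20.0))  # approx. 0.0015 (long range: features forced to zero)
```

Multiplying each off-diagonal feature vector element by this factor drives $\mathbf{f}_{ij} \rightarrow \mathbf{0}$ at infinite pair separation while leaving the short-range features essentially untouched.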
In the case that a training set does not include examples of zero-valued feature vectors, we add zero-valued feature vectors with zero-valued pair correlation energies to the training data to ensure that $\epsilon^{ML}(\mathbf{0})=0$. For no model trained in the current study did these added zero-valued feature vectors exceed 5\% of the training data. The resulting MOB-ML model leads to size-consistent energy predictions to the degree to which the underlying molecular orbital generation is size consistent. It is not required that the dimer be explicitly included in the training of the MOB-ML model to obtain this result. The detailed definition of each feature vector block is summarized in Tables~S5 and S6. We apply the feature set defined in Tables~S5 and S6 consistently in this work. \section{Computational details} We present results for five different data sets: (i) a series of alkane molecules, (ii) the potential energy surface of the malonaldehyde molecule, (iii) a thermalized version of the QM7b and the GDB13 data sets (i.e., QM7b-T and GDB13-T), \cite{cheng_thermalized_2019} (iv) a set of backbone-backbone interactions (BBI) \cite{burns_biofragment_2017}, and (v) a thermalized version of a subset of mononuclear, octahedral transition metal complexes put forward by Kulik and co-workers \cite{nandy_strategies_2018}. We refer to Section II of the Supporting Information for a description of how the structures were obtained or generated. All generated structures are available in Ref.~\onlinecite{caltech_data}. The features for all structures were generated with the \textsc{entos qcore} \cite{manby_entos_2019} package. The feature generation is based on an HF calculation applying a cc-pVTZ\cite{dunning_gaussian_1989} basis for the elements H, C, N, O, S, and Cl. We apply a def2-TZVP basis set \cite{weigend_balanced_2005} for all transition metals.
The HF calculations were accelerated with density fitting, for which we applied the corresponding cc-pVTZ-JKFIT\cite{weigend_fully_2002} and def2-TZVP-JKFIT \cite{weigend_accurate_2006} density fitting bases. Subsequently, we localized the valence-occupied and the valence-virtual molecular orbitals with the Boys--Foster localization scheme \cite{boys_construction_1960,foster_canonical_1960} or with the intrinsic bond orbital (IBO) localization scheme \cite{knizia_intrinsic_2013}. We implemented a scheme to localize the valence-virtual orbitals with respect to the Boys--Foster function (for details on this implementation, see Section II in the Supporting Information). We applied the Boys--Foster localization scheme for the data sets (i), (iii), (iv), and (v) for valence-occupied and valence-virtual molecular orbitals. IBO localization for valence-occupied and valence-virtual molecular orbitals led to better results for data set (ii). The resulting orbitals are imported into the Molpro 2018.0 \cite{werner_molpro_2018,werner_molpro_2020} package via the \texttt{matrop} functionality to generate the non-canonical MP2 \cite{werner_fast_2003} or CCSD(T) \cite{scuseria_comparison_1990, hampel_local_1996, schutz_local_2000} pair correlation energies with the same orbitals as applied for the feature generation. These calculations are accelerated with the resolution-of-the-identity approximation. The frozen-core approximation is invoked for all correlated calculations. We follow the machine learning protocol outlined in previous work \cite{cheng_universal_2019} to train the MOB-ML models. In a first step, we perform MOB feature selection by evaluating the mean decrease of accuracy in a random forest regression in the \textsc{Scikit-learn} v0.22.0 package \cite{pedregosa_scikit-learn_2011}. We then regress the diagonal and off-diagonal pair correlation energies separately with respect to the selected features in the \textsc{GPy} 1.9.6 software package.
\cite{gpy_gpy_2012} We employ the Mat\'ern 5/2 kernel with white noise regularization\cite{rasmussen_gaussian_2006}. We minimize the negative log marginal likelihood objective with respect to the kernel hyperparameters with a scaled conjugate gradient scheme for 100 steps and then apply the BFGS algorithm until full convergence. As indicated in the results, both random-sampling and active-learning strategies were employed for the selection of molecules in the training data sets. In the active-learning strategy, we use a previously trained MOB-ML model to evaluate the Gaussian process variance for each molecule, and then include the points with the highest variance in the training data set, as outlined in Ref.~\onlinecite{shapeev_active_2020}. To estimate the Gaussian process variance for each molecule, we assumed that the variances per molecular-orbital pair are mutually independent. \section{Results} \subsection{Transferability within a molecular family} \label{subsec:sec1} We first examine the effect of the feature vector generation strategy on the transferability of MOB-ML models within a molecular family. To this end, we revisit our alkane data set \cite{cheng_universal_2019} which contains 1000 ethane and 1000 propane geometries as well as 100 butane and 100 isobutane geometries. We perform the transferability test outlined in Ref.~\onlinecite{cheng_universal_2019}, i.e., training a MOB-ML model on correlation energies for 50 randomly chosen ethane geometries and 20 randomly chosen propane geometries to predict the correlation energies for the 100 butane and 100 isobutane geometries (see Figure~\ref{figure:figure1}). \begin{figure}[htbp] \includegraphics[width=\columnwidth]{figures/figure1.png} \caption{ Errors in the predicted correlation energies with respect to the CCSD(T) reference values for butane and isobutane. The bar attached to each prediction error indicates the associated Gaussian process variance.
The MOB-ML model used for these predictions was trained on 50 ethane and 20 propane molecules. The gray shaded area corresponds to the region where the error is smaller than chemical accuracy (1~kcal/mol). } \label{figure:figure1} \end{figure} This transferability test was repeated with 10000 different training data sets (each consisting of data for 50 ethane molecules and 20 propane molecules) to assess the training-set dependence of the MOB-ML models. As suggested in Ref.~\onlinecite{chen_ground_2020}, we consider various performance metrics to assess the prediction accuracy of the MOB-ML models: (i) the mean error (ME, Eq.~(S3)), (ii) the mean absolute error (MAE, Eq.~(S4)), (iii) the maximum absolute error (MaxAE, Eq.~(S5)), and (iv) the mean absolute relative error (MARE, Eq.~(S6)), which applies a global shift setting the mean error to zero. We report the minimum, peak, and maximum encountered MAREs in Table~\ref{tab:table1} alongside literature values obtained in our previous work \cite{cheng_universal_2019}, by Dick \textit{et al.}, \cite{dick_machine_2020} and by Chen \textit{et al.} \cite{chen_ground_2020} The MEs, MAEs, and MaxAEs are reported in Figure~S1.
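These metrics are straightforward to compute; the following sketch is our own restatement of Eqs.~(S3)--(S6), with the MARE taken as the MAE after a global shift that zeroes the mean error:

```python
import numpy as np

def metrics(pred, ref):
    """Return (ME, MAE, MaxAE, MARE) for predicted vs. reference energies.
    MARE is the MAE after a global shift that sets the mean error to zero."""
    err = np.asarray(pred, dtype=float) - np.asarray(ref, dtype=float)
    me = err.mean()
    mae = np.abs(err).mean()
    maxae = np.abs(err).max()
    mare = np.abs(err - me).mean()
    return float(me), float(mae), float(maxae), float(mare)

# A uniform shift of all predictions gives ME = MAE = 1 but MARE = 0,
# illustrating why the shifted metric is reported alongside the others.
print(metrics([1.0, 3.0, 3.0], [0.0, 2.0, 2.0]))  # -> (1.0, 1.0, 1.0, 0.0)
```

The example makes the role of the MARE explicit: a training-set-dependent constant offset in the total correlation energy contributes to the ME and MAE but not to the MARE.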
\begin{center} \begin{table}[htbp] \centering \begin{tabular}{p{1.6cm}p{2.15cm}p{0.6cm}p{0.6cm}p{0.6cm}p{0.6cm}p{0.6cm}p{0.6cm}} \hline \hline Method & Feature set & \multicolumn{6}{c}{MARE} \\ & & \multicolumn{3}{c}{Butane} & \multicolumn{3}{c}{Isobutane}\\ & & min & peak & max & min & peak & max \\ \hline NeuralXC\cite{dick_machine_2020} & --- & & 0.15 & & & 0.14 & \\ DeepHF\cite{chen_ground_2020} & --- & 0.06& 0.11 & 0.43 & 0.07 & 0.13 & 0.53 \\ \hline MOB-ML & Ref.~\onlinecite{cheng_universal_2019} & & 0.20 & & & 0.21 & \\ & \textbf{this work } & \textbf{0.06} & \textbf{0.11} & \textbf{0.19 } & \textbf{0.06} & \textbf{0.10} & \textbf{0.19}\\ \hline \hline \end{tabular} \caption{Comparison of the minimum, peak, and maximum mean absolute error after global shift (MARE) in~kcal/mol for the prediction of CCSD(T) correlation energies for butane and isobutane obtained with different methods.} \label{tab:table1} \end{table} \end{center} In general, MOB-ML as well as NeuralXC\cite{dick_machine_2020} and DeepHF\cite{chen_ground_2020} produce MAREs well below chemical accuracy for correlation energies of butane and isobutane when trained on correlation energies of ethane and propane. Updating the feature vector generation strategy for MOB-ML results in the best peak MAREs for butane as well as for isobutane which are 0.11~kcal/mol and 0.10~kcal/mol, respectively. As in our previous work, \cite{cheng_universal_2019} we note that the total correlation energy predictions may be shifted with respect to the reference data so that the MEs for MOB-ML range from $-0.92$ to 2.70~kcal/mol for butane and from $-0.18$ to 1.02~kcal/mol for isobutane (see also Figure~S1). This shift is strongly training-set dependent, which was also observed for results obtained with DeepHF \cite{chen_ground_2020}. The results highlight that this is an extrapolative transferability test. 
A considerable advantage of applying GPR in practice is that each prediction is accompanied by a Gaussian process variance which, in this case, indicates that we are in an extrapolative regime (see Figure~\ref{figure:figure1}). Extrapolation can be associated with degraded prediction quality, which we see most prominently for the mean error in butane. By contrast, for other machine learning approaches like neural networks it is less clear whether the predictions are in an interpolative or an extrapolative regime.\cite{hirschfeld_uncertainty_2020} By including the butane molecule with the largest variance in the training set (which then consists of 50 ethane, 20 propane, and 1 butane geometries), we reduce the ME from 0.78 to 0.25, the MAE from 0.78 to 0.26, the MaxAE from 1.11 to 0.51, and the MARE from 0.11 to 0.09~kcal/mol for butane (see Figure~S2). These results directly illustrate that MOB-ML can be systematically improved by including training data that is more similar to the test data; the improved confidence of the prediction is then also directly reflected in the associated Gaussian process variances. As a second example, we examine the transferability of a MOB-ML model trained within a basin of a potential energy surface to the transition-state region of the same potential energy surface. We chose malonaldehyde for this case study as it has also been explored in previous machine learning studies \cite{brockherde_bypassing_2017}. We train a MOB-ML model on 50 thermalized malonaldehyde structures which all have the property that $|d(\mathrm{O^1{-}H}) - d(\mathrm{O^2{-}H})| > 0.4$~\AA\ (where $d$ denotes the distance between the two nuclei), which ensures that we are sampling from the basins. We then apply this trained model to predict the correlation energies for a potential energy surface mapping out the hydrogen transfer between the two oxygen atoms (see Figure~\ref{figure:figure2}).
MOB-ML produces an accurate potential energy surface for the hydrogen transfer in malonaldehyde from information on the basins alone (compare left and middle left panel of Figure~\ref{figure:figure2}). The highest encountered errors on the minimum potential energy path are smaller than 1.0~kcal/mol. Unsurprisingly, the predicted minimum energy structure ($d$(O$^1$--H) = 1.00~\AA, $d$(O$^2$--H) = 1.63~\AA) is very similar to the reference minimum energy structure ($d$(O$^1$--H) = 1.00~\AA, $d$(O$^2$--H) = 1.64~\AA). Strikingly, the predicted energy of 2.65~kcal/mol at the saddle point at $d$(O$^1$--H) = $d$(O$^2$--H) = 1.22~\AA\ differs from the reference energy by only 0.35~kcal/mol, although the MOB-ML model was not trained on any transition-state-like structures. The highest errors are encountered in the high-energy regime, and this region is also associated with the highest Gaussian process variance, indicating low confidence in the predictions (compare middle right and right panel of Figure~\ref{figure:figure2}). The Gaussian process variance reflects the range of structures on which the MOB-ML model has been trained and highlights again that we did not include transition-state-like structures in the training. \onecolumngrid \begin{center} \begin{figure}[htbp] \includegraphics[width=\textwidth]{figures/figure2.png} \caption{ Relative energies obtained with MP2/cc-pVTZ (left panel), relative energies predicted with MOB-ML (middle left panel), the difference between the MOB-ML prediction and the reference data (middle right panel), and the Gaussian process variance (right panel) for the proton transfer in malonaldehyde as a function of the distance of the proton from the two oxygen atoms.
} \label{figure:figure2} \end{figure} \end{center} \twocolumngrid \subsection{Transferability across organic chemistry space} \label{subsec:sec2} The Chemical Space Project\cite{reymond_chemical_2015} computationally enumerated all possible organic molecules up to a certain number of atoms, resulting in the GDB databases.\cite{blum_970_2009} In this work, we examine thermalized subsets \cite{cheng_universal_2019} of the GDB13 data set \cite{blum_970_2009} to investigate the transferability of MOB-ML models across organic chemistry space. The application of thermalized sets of molecules has the advantage that we can study the transferability of our models for chemical and conformational degrees of freedom at the same time. To test the transferability of MOB-ML across chemical space, we train our models on a thermalized set of seven and fewer heavy-atom molecules (also known as QM7b-T \cite{cheng_universal_2019}) and then we test the prediction accuracy on a QM7b-T test set and on a thermalized set of molecules with thirteen heavy atoms (GDB13-T \cite{cheng_universal_2019}; see also Section V in the Supporting Information), as also outlined in our previous work. \cite{cheng_universal_2019, cheng_regression_2019} We first investigate the effect of changing the feature vector generation protocol on the QM7b-T$\rightarrow$QM7b-T prediction task (see Figure~\ref{figure:figure3}). \begin{figure}[htbp] \includegraphics[width=\columnwidth]{figures/figure3.png} \caption{ Comparison of the prediction mean absolute errors of total correlation energies for QM7b-T test molecules as a function of the number of QM7b-T molecules chosen for training for different machine learning models: MOB-ML as outlined in Ref.~\onlinecite{cheng_universal_2019} (orange circles), MOB-ML as outlined in this work with random sampling (green circles), and MOB-ML as outlined in this work with active sampling. 
The green shaded area corresponds to the 90\% confidence interval for the predictions obtained from 50 random samples of the training data. } \label{figure:figure3} \end{figure} In Ref.~\onlinecite{cheng_universal_2019}, we found that training on about 180 structures is necessary to achieve a model with an MAE below 1~kcal/mol. The FCHL18 method yields an MAE below 1~kcal/mol when training on about 800 structures \cite{christensen_fchl_2020} and the DeepHF method already exhibits an MAE below 1~kcal/mol when training on its smallest chosen training set, which consists of 300 structures (MAE = 0.79~kcal/mol). \cite{chen_ground_2020} The refinements in the current work reduce the number of required training structures to reach chemical accuracy to about 100 structures when sampling randomly. This number is, however, strongly training-set dependent. We can remove the training-set dependence by switching to an active learning strategy, with which we reliably achieve an MAE below 1~kcal/mol with about 70 structures. In general, the MAE obtained with the active learning strategy is comparable to the smallest MAEs obtained with random sampling strategies. This has the advantage that a small amount of reference data can be generated in a targeted manner. Our overall aim is to obtain a machine learning model which reliably predicts broad swathes of chemical space. For an ML model to be of practical use, it has to be able to describe out-of-set molecules of different sizes to a similar accuracy when accuracy is measured size-intensively. \cite{SzaboSizeConsistent} We probe the ability of MOB-ML to describe out-of-set molecules with a different number of electron pairs by applying a model trained on correlation energies for QM7b-T molecules to predict correlation energies for GDB13-T. We collect the best results published for this transfer test in the literature in Figure~\ref{figure:figure4}.
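Under the independence assumption stated in the Computational details, the active selection step reduces to summing per-pair variances and ranking the candidate molecules; a minimal sketch (with hypothetical variance values):

```python
import numpy as np

def select_next_molecules(pair_variances, n_select):
    """Pick the molecules with the highest predicted GP variance, assuming
    the per-orbital-pair variances are mutually independent so that they
    sum to a per-molecule variance (illustrative sketch)."""
    mol_var = [float(np.sum(v)) for v in pair_variances]  # one per molecule
    order = np.argsort(mol_var)[::-1]
    return [int(i) for i in order[:n_select]]

# Hypothetical per-pair variances for four candidate molecules.
candidates = [np.array([0.1, 0.2]), np.array([1.5, 0.4]),
              np.array([0.05]), np.array([0.3, 0.3, 0.3])]
print(select_next_molecules(candidates, 2))  # -> [1, 3]
```

The selected molecules are then added to the training set and the model is retrained, which is the loop used both for the QM7b-T learning curves and for the BBI augmentation discussed below.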
\begin{figure}[htbp] \includegraphics[width=\columnwidth]{figures/figure4.png} \caption{ Comparison of the prediction mean absolute errors of total correlation energies for GDB13-T molecules as a function of the number of QM7b-T molecules chosen for model training for different machine learning models: MOB-ML as outlined in this work with random sampling (green circles), MOB-ML with a single GPR\cite{cheng_universal_2019} (orange circles), MOB-ML with RCR/GPR \cite{cheng_regression_2019} (brown circles), DeepHF\cite{chen_ground_2020} (red squares), FCHL18\cite{christensen_fchl_2020} (purple squares). The green shaded area corresponds to the 90\% confidence interval for the predictions obtained from 50 random samples of the training data. } \label{figure:figure4} \end{figure} Our previous best single GPR model achieved an MAE of 2.27~kcal/mol when trained on 220 randomly chosen structures. \cite{cheng_universal_2019} The modifications in the current work now yield a single GPR model which achieves an MAE of 1.47--1.62~kcal/mol for GDB13-T when trained on 220 randomly chosen QM7b-T structures. Strikingly, MOB-ML outperforms machine learning models trained on thousands of molecules, such as our RCR/GPR model and FCHL18 \cite{christensen_fchl_2020}. The current MOB-ML results are of an accuracy that is similar to the best reported results from DeepHF (an MAE of 1.49~kcal/mol);\cite{chen_ground_2020} however, MOB-ML only needs to be trained on about 3\% of the molecules in the QM7b data set, while DeepHF is trained on 42\% to obtain comparable results (MAE of 1.52~kcal/mol for 3000 training structures). The best reported result for DeepHF (MAE of 1.49~kcal/mol) was obtained by training on 97\% of the molecules of the QM7b data set.
We attribute the excellent transferability of MOB-ML to the fact that it focuses on the prediction of orbital-pair contributions, thereby recasting an extrapolation problem as an interpolation problem when training machine learning models on small molecules and testing them on large molecules. The pair correlation energies predicted for QM7b-T and for GDB13-T span a very similar range (0 to $-20$~kcal/mol), and they are predicted with a similar Gaussian process variance (see Figure~S5), as we would expect in an interpolation task. The final errors for GDB13-T are larger than for QM7b-T, because the total correlation energy is size-extensive; however, the size-intensive error per electron pair spans a comparable range for QM7b-T and for GDB13-T (see Figure~S4). This presents a significant advantage of MOB-ML over machine learning models which rely on a whole-molecule representation and creates the opportunity to study molecules of a size that is beyond the reach of accurate correlated wave function methods. Most studies in computational chemistry require accurate relative energies rather than accurate total energies. Therefore, we also assess the errors in the relative energies for the sets of conformers for each molecule in the QM7b-T and in the GDB13-T data sets obtained with MOB-ML with respect to the reference energies (see Figure~\ref{figure:figure5}). \begin{figure}[htbp] \includegraphics[width=\columnwidth]{figures/figure5.png} \caption{ Prediction mean absolute errors for relative correlation energies as a function of the number of QM7b-T molecules chosen for model training for QM7b-T (blue circles) and for GDB13-T (orange crosses). The blue and orange shaded areas correspond to the 90\% confidence interval for the predictions obtained from 50 random samples of the training data. The gray shaded area corresponds to the region where the error is smaller than chemical accuracy (1~kcal/mol).
} \label{figure:figure5} \end{figure} We emphasize that MOB-ML is not explicitly trained to predict conformer energies, and we include at most one conformer for each molecule in the training set. Nevertheless, MOB-ML produces on average chemically accurate relative conformer energies for QM7b-T when trained on correlation energies for only 30 randomly chosen molecules (or 0.4\% of the molecules) in the QM7b set. We obtain chemically accurate relative energies for the GDB13-T data set when training on about 100 QM7b-T molecules. The prediction accuracy improves steadily when training on more QM7b-T molecules, reaching a mean MAE of 0.43~kcal/mol for the relative energies of the rest of the QM7b-T set and of 0.77~kcal/mol for the GDB13-T set. We now present the first reported test of MOB-ML for non-covalent interactions in large molecules. To this end, we examine the backbone-backbone interaction (BBI) data set, \cite{burns_biofragment_2017} which was designed to benchmark methods for the prediction of interaction energies encountered within protein fragments. Using the implementation of MOB-ML described here and using only 20 randomly selected QM7b-T molecules for training, the method achieves a mean absolute error of 0.98~kcal/mol for the BBI data set (see Figure~\ref{figure:figure6}). \begin{figure}[htbp] \includegraphics[width=\columnwidth]{figures/figure6.png} \caption{ Top panel: Errors in predictions made with a MOB-ML model trained on 20 randomly selected QM7b-T molecules with FS~3, with respect to reference MP2/cc-pVTZ interaction energies for the BBI data set. Bottom panel: Errors in predictions made with a MOB-ML model trained on 20 randomly selected QM7b-T molecules and augmented with the 2 BBI data points with the largest variance (orange circles), with respect to reference MP2/cc-pVTZ interaction energies. The bar attached to each prediction error indicates the associated Gaussian process variance.
The gray shaded area corresponds to the region where the error is smaller than chemical accuracy (1~kcal/mol). } \label{figure:figure6} \end{figure} However, these predictions are uncertain, as indicated by the large Gaussian process variances associated with these data points, which strongly suggests that we are now, as expected, in an extrapolative regime. We further improve the predictive capability of MOB-ML by augmenting the MOB-ML model with data from the BBI set. Specifically, we can draw on an active learning strategy and consecutively include data points until all uncertainties are below 1~kcal/mol, which in this case corresponds to only two data points. This reduces the MAE to 0.28~kcal/mol for the remaining 98 data points in the BBI set. Including more reference data points would further improve the performance for this specific data set. However, this is not the focus of this work. Instead, we simply emphasize that MOB-ML is a clearly extensible strategy to accurately predict energies for large molecules and non-covalent intermolecular interactions while providing a useful estimation of confidence. \subsection{Transition-metal complexes} \label{subsec:sec3} We finally present the first application of MOB-ML to transition-metal complexes. To this end, we train a MOB-ML model on a thermalized subset of mononuclear, octahedral transition-metal complexes introduced by Kulik and co-workers \cite{nandy_strategies_2018}, which we denote as TM-T. The chosen closed-shell transition-metal complexes feature different transition metals (Fe, Co, Ni) and ligands. The ligands span the spectrochemical series from weak-field (e.g., thiocyanate) to strong-field (e.g., carbonyl) ligands. We see in Figure~\ref{figure:figure7} that the learning behaviour for TM-T and QM7b-T is similar when the error is measured per valence-occupied orbital.
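The variance-thresholded augmentation used above for the BBI set is an instance of a generic active-learning loop: retrain the Gaussian process, find the pool point with the largest predictive uncertainty, label it, and stop once every uncertainty falls below the target threshold. The following minimal NumPy sketch illustrates the loop on synthetic one-dimensional data; the target function, kernel lengthscale, noise level, and threshold are illustrative stand-ins, not the actual MOB-ML features or hyperparameters.

```python
import numpy as np

def rbf(X1, X2, ell=1.0):
    """Squared-exponential kernel matrix between two 1D point sets."""
    return np.exp(-0.5 * (X1[:, None] - X2[None, :]) ** 2 / ell**2)

def gp_predict(Xtr, ytr, Xte, ell=1.0, noise=1e-6):
    """Exact GP regression: posterior mean and standard deviation."""
    K = rbf(Xtr, Xtr, ell) + noise * np.eye(len(Xtr))
    Ks = rbf(Xtr, Xte, ell)
    mean = Ks.T @ np.linalg.solve(K, ytr)
    var = np.clip(1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0), 0.0, None)
    return mean, np.sqrt(var)

# Synthetic stand-in for per-pair correlation energies on a 1D feature.
rng = np.random.default_rng(0)
X_pool = np.linspace(-3.0, 3.0, 200)
y_pool = np.sin(2.0 * X_pool)          # hypothetical target, not MOB-ML data

train = list(rng.choice(len(X_pool), size=3, replace=False))
threshold = 0.1                        # stand-in for the 1 kcal/mol cutoff
for _ in range(100):
    mean, std = gp_predict(X_pool[train], y_pool[train], X_pool)
    std[train] = 0.0                   # already-labeled points are settled
    if std.max() < threshold:          # all uncertainties below threshold
        break
    train.append(int(np.argmax(std)))  # label the most uncertain point next
```

With the 1~kcal/mol threshold of the text the same strategy required only two additional BBI points; on this synthetic target the loop terminates after a few dozen additions, well before exhausting the pool.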
\begin{figure}[h] \includegraphics[width=\columnwidth]{figures/figure7.png} \caption{ Learning curve for the prediction of MP2 correlation energies per valence-occupied orbital for transition metal complexes (TM-T) and for QM7b-T as a function of the number of structures the MOB-ML model was trained on. } \label{figure:figure7} \end{figure} These results demonstrate that the MOB-ML formalism can be straightforwardly applied outside of the organic chemistry universe without additional modifications. It is particularly notable that the learning efficiency for TM-T is comparable to that for the relatively simple organic molecules in QM7b-T (Fig.~\ref{figure:figure7}). We note that although MP2 theory is not expected to be fully quantitative for transition-metal complexes, \cite{weymuth_new_2014, husch_calculation_2018} it suffices to demonstrate the learning efficiency of MOB-ML for such complexes in the current example; and as previously demonstrated, MOB-ML learns other correlated wave function methods with similar efficiency.\cite{welborn_transferability_2018, cheng_universal_2019} \section{Conclusions} Molecular-orbital-based machine learning (MOB-ML) provides a general framework to learn correlation energies at the cost of molecular orbital generation. In this work, we demonstrate that preservation of physical symmetries and constraints leads to machine-learning methods with greater learning efficiency and transferability. Exploiting physical principles like size consistency and energy invariances not only leads to a conceptually more satisfying method, but it also leads to substantial improvements in prediction errors for different data sets covering total and relative energies for thermally accessible organic and transition-metal containing molecules, non-covalent interactions, and transition-state energies.
With the modifications presented in the current work, MOB-ML is shown to be highly data efficient, which is important due to the high computational cost of generating reference correlation energies. Only 1\% of the QM7b-T data set (containing organic molecules with seven or fewer heavy atoms) needs to be drawn on to train a MOB-ML model which produces on average chemically accurate total energies for the remaining 99\% of the data set. Without ever being trained to predict relative energies, MOB-ML provides chemically accurate relative energies for QM7b-T when training on only 0.4\% of the QM7b-T molecules. Furthermore, we have demonstrated that MOB-ML is not restricted to the organic chemistry space and that we are able to apply our framework out of the box to describe a diverse set of transition-metal complexes when training on correlation energies for tens of molecules. Beyond data efficiency, MOB-ML models are shown to be very transferable across chemical space. We demonstrate this transferability by training a MOB-ML model on QM7b-T and predicting energies for a set of molecules with thirteen heavy atoms (GDB13-T). We obtain the best result for GDB13-T reported to date despite only training on 3\% of QM7b-T. The successful transferability of MOB-ML is shown to result from its recasting of a typical extrapolation task (i.e., larger molecules) into an interpolation task (i.e., by predicting on the basis of size-intensive orbital-pair contributions). Even when MOB-ML enters an extrapolative regime as identified by a large Gaussian process variance, accurate results can be obtained; for example, we predict the transition-state energy for the proton transfer in malonaldehyde and interaction energies in the protein backbone-backbone interaction data set to chemical accuracy without training on transition-state-like data or non-covalent interactions, respectively.
In this case, the uncertainty estimates also offer a clear avenue for active learning strategies, which can further improve the model performance. Active learning offers an attractive way to reduce the number of expensive reference calculations further by picking the most informative molecules to be included in the training set. This provides a general recipe for evolving a MOB-ML model to describe new regions of chemical space with minimal effort. Future work will focus on the expansion of MOB-ML to cover more of chemical space. Particular areas of focus include open-shell systems and electronically excited states. Physical insight from exact conditions in electronic structure theory \cite{bartlett_power_2017} will continue to guide the development of the method, with the aim of providing a machine-learning approach for energies and properties of arbitrary molecules with controlled error. \begin{acknowledgments} This work is supported in part by the U.S. Army Research Laboratory (W911NF-12-2-0023), the U.S. Department of Energy (DE-SC0019390), the Caltech DeLogi Fund, and the Camille and Henry Dreyfus Foundation (Award ML-20-196). T.H. acknowledges funding through an Early Post-Doc Mobility Fellowship by the Swiss National Science Foundation (Award P2EZP2\_184234). S.J.R.L. thanks the Molecular Sciences Software Institute (MolSSI) for a MolSSI investment fellowship. Computational resources were provided by the National Energy Research Scientific Computing Center (NERSC), a DOE Office of Science User Facility supported by the DOE Office of Science under contract DE-AC02-05CH11231. \end{acknowledgments} \section*{Supporting Information} Details on feature generation for all data sets used in this work, definition of error metrics, expanded results for the alkane transferability test, expanded results for the transferability within the organic chemistry space. Features and labels for all data sets used in this work.
\section*{Data Availability Statement} The data that support the findings of this study are available within the article and its supplementary material. Additional data that support the findings of this study are openly available in the Caltech Data Repository.\cite{caltech_data}
\section{Introduction} Fractional D3 branes have played an important r\^ole in extending AdS$_5$/CFT$_4$ dualities to settings where the gauge theory is not scale invariant. Being nothing but D5 branes wrapped on collapsed 2-cycles which exist at Calabi-Yau (CY) threefold conical singularities, they source 3-form fluxes in the geometry, which then lead to a logarithmically varying 5-form flux. The field-theoretic dual interpretation involves a cascading renormalization group (RG) flow, where the number of degrees of freedom decreases at subsequent strong coupling transitions, until the low energy gauge theory on the worldvolume of fractional D3 branes is reached in the deep infrared. In the past few years attention has mainly been drawn to D5 branes wrapped on rigid collapsed 2-cycles, the best known examples of which are fractional D3 branes at the tip of the conifold \cite{Klebanov:2000hb} and of the complex cone over the first Del Pezzo surface \cite{Herzog:2004tr}: there the gauge theories have $\mathcal{N}=1$ supersymmetry, the cascade is an infinite sequence of Seiberg dualities \cite{Strassler:2005qs}, and the low energy confining dynamics drives either chiral symmetry breaking \cite{Klebanov:2000hb} or a runaway \cite{DSB,Intriligator:2005aw,Argurio:2007vq}. That focus was motivated by the fact that $\mathcal{N}=1$ SYM is a close yet controllable relative of theories of phenomenological interest, such as pure YM theory and QCD, and by the original hope that theories with a runaway would provide a starting point in the search for gravity dual descriptions of supersymmetry breaking vacua \cite{DSB,Argurio:2006ew}. There is another class of fractional D3 branes, called of $\mathcal{N}=2$ kind, which are D5 branes wrapped on exceptional 2-cycles living at non-isolated singularities. The holomorphic data of their macroscopic dynamics is analogous to that of $\mathcal{N}=2$ SYM, with a Coulomb branch of supersymmetric vacua.
Despite having been introduced long ago in the context of gauge/string duality \cite{Klebanov:1999rd,Bertolini:2000dk,Polchinski:2000mx,Billo:2001vg} following pioneering works on D-branes on orbifolds \cite{Douglas:1996sw,Diaconescu:1997br,Billo:2000yb} and their embedding in AdS/CFT \cite{Kachru:1998ys}, only very recently was the field-theoretic interpretation of the type IIB near-horizon backgrounds generated by the backreaction of $\mathcal{N}=2$ fractional D3-branes at the $\mathbb{C}\times\mathbb{C}^2/\mathbb{Z}_2$ orbifold singularity fully teased out \cite{Benini:2008ir}, settling a long-standing issue in the literature \cite{Polchinski:2000mx,Aharony:2000pp,Petrini:2001fk}. The cascade is understood in this case as a sequence of strong coupling transitions reminiscent of the transition between high energy and low energy theory at the baryonic root of $\mathcal{N}=2$ SQCD \cite{Argyres:1996eh}. More complicated CY singularities generically contain rigid as well as non-rigid collapsed 2-cycles. The infinite cascade which UV-completes the low energy dynamics on a generic stack of fractional D3 branes at one such singularity, while allowing us to remain in the realm of gauge/gravity duality, consists of a sequence of strong coupling transitions, some of which are Seiberg dualities and others of which are $\mathcal{N}=2$ baryonic root transitions. This was confirmed in a specific example by studying the behavior of Page charges under cascade transitions on the supergravity side of the duality \cite{Argurio:2008mt}. \begin{figure}[t] \centering \hspace{1cm} \includegraphics[width=.5\textwidth]{quiverorbi} \caption{\small Quiver diagram of the $U(N) \times U(N)$ \Nugual{2} theory on regular D3 branes at the $\mathbb{C}\times\mathbb{C}^2/\mathbb{Z}_2$ orbifold, in \Nugual{1} notation.
Nodes represent unitary gauge groups, arrows connecting different nodes represent bifundamental chiral superfields, and arrows going from one node to itself represent adjoint chiral superfields. The superpotential is dictated by $\mathcal{N}=2$ supersymmetry. \label{fig:N=2_quiver}} \end{figure} This paper refines and extends the analysis of the cascading theory on regular and fractional D3 branes at the $\mathbb{C}\times\mathbb{C}^2/\mathbb{Z}_2$ orbifold carried out in \cite{Benini:2008ir} on both sides of the duality: the gauge theory side, whose quiver is depicted in the conformal case in Figure \ref{fig:N=2_quiver}, and the `gravity' side, with a metric, a RR 5-form, and a holomorphic complex scalar that must be added to account for the additional massless modes arising from the twisted sector of closed string theory on the orbifold. The field strength of the complex scalar is the reduction of the complexified 2-form potential on the exceptional 2-cycle of the orbifold. The aim of the paper is twofold. We will first describe in section \ref{sec:embedding} how to embed the moduli space of the gauge theory on fractional D3 branes in the Coulomb branch of the quiver gauge theory with an infinite cascade via their Seiberg-Witten (SW) curves. This embedding will tighten the analogy between the strong coupling transitions in the cascade and the high-low energy transitions at the baryonic root of $\mathcal{N}=2$ SQCD \cite{Argyres:1996eh}, since the branch points of the curve that are related to the cascade transitions will be exactly double, like the branch points at the baryonic root. It will also allow us to find the exact twisted field configurations in the type IIB duals of those vacua, which encode the full nonperturbative dynamics on the gauge theory side.
In section \ref{sec:examples} we will study those twisted field solutions for some interesting vacua in the infinitely cascading theory: we will first consider the $\mathbb{Z}_{2M}$-symmetric enhan\c con vacuum originally studied in the literature, and compare it with the approximation used in \cite{Benini:2008ir}; next we will analyze one of the $M$ vacua whose SW curves have genus zero, and which flow to the $M$ vacua of the Klebanov-Strassler theory upon mass deformation. In the second part of the paper we will employ the previous results to infer properties of fractional D3 branes at nonvanishing string coupling. In section \ref{sec:dissolution} we will show that fractional D3 branes transmute into twisted fields as soon as the string coupling is switched on. Their D5 and D3 brane charges are entirely provided by the twisted fields, with no additional sources. This is the translation to the string side of the duality of the well-known splitting of classically double branch points in the SW curve, driven by nonperturbative effects in the gauge theory, or equivalently the T-dual manifestation of suspended D4 branes in type IIA string theory becoming M5 brane tubes as soon as the string coupling does not vanish. We will show how this phenomenon solves two interrelated issues of divergences in the D3 brane charge and the warp factor which appeared in the naive type IIB solutions. In section \ref{sec:HvsC} we will generalize the observations made in the previous section to generic points of the moduli space of the quiver gauge theory on D3 branes at the $\mathbb{C}\times\mathbb{C}^2/\mathbb{Z}_2$ orbifold, with or without cascade. 
We will see that the so-called H-picture of \cite{Benini:2008ir}, where the integral of the NSNS $B_2$ potential on the exceptional 2-cycle remains bounded in the solution and alternates between regimes of growth and decrease along the cascade, is generally singled out over the so-called C-picture, where the integral is monotonic along the cascade and diverges at large radii in the limit of infinite cascade. However, the C-picture turns out to be valid (and equivalent to the H-picture) when there is an infinite cascade with exactly double branch points associated to it in the SW curve, as in the vacua studied in the first part of the paper. We end the paper with a short summary and conclusions. An appendix contains a discussion of a particular twisted field configuration for fractional branes at an orbifold of the conifold, which turns out to be directly related to one of the configurations studied in the body of the paper. \section{Embedding the moduli space of pure SYM into the Coulomb branch of a cascading quiver theory} \label{sec:embedding} Supergravity\footnote{With an abuse of language we will call `supergravity' the low energy theory describing the interactions of all the massless modes of closed string theory, including modes in the twisted sector in the case of orbifolds.} solutions dual to quiver gauge theory vacua whose RG flows involve an infinite cascade can be found by computing the backreaction of a (large) number of fractional D3 branes at Calabi-Yau conical singularities in the near-horizon limit, with no need to add the regular D3 branes of the AdS$_5$/CFT$_4$ correspondence. One could naively expect that backgrounds found in this way describe holographically the low energy field theory on the fractional D3 branes: for instance $\mathcal{N}=2$ or $\mathcal{N}=1$ pure SYM, or more complicated nonconformal gauge theories whose content depends on properties of the singularity and the kind of fractional D3 branes.
This point of view was indeed taken in \cite{Bertolini:2000dk}, where a background sourced by $M$ fractional D3 branes at the $A_1$ singularity was originally interpreted as dual to a vacuum of the $\mathcal{N}=2$ $SU(M)$ SYM theory hosted by the fractional branes. Such low energy theories, however, always contain asymptotically free gauge groups, and therefore by the common lore of gauge/string duality they are not expected to have weakly curved gravity duals, as opposed to what is found. This apparent contradiction is overcome by realizing that the type IIB supergravity solutions, having a finite and constant axio-dilaton, actually describe vacua of the quiver gauge theories living on regular and fractional D3 branes; an infinite cascade completes the gauge theory on fractional branes in the ultraviolet, keeping the gauge couplings bounded from below according to the value of the axio-dilaton. Moduli spaces of cascading quiver gauge theories are obviously much richer than those of the theories describing their infrared regimes: they can include not only a larger Coulomb branch, parametrized by displacements of fractional D3 branes of all possible kinds, but also a Higgs branch, parametrized by displacements of regular D3 branes.% \footnote{If the cascade is infinite these branches are infinite-dimensional. When all the fractional D3 branes are of $\mathcal{N}=2$ kind, the IR dynamics of interest can also arise from a UV conformal quiver gauge theory hosted by regular D3 branes alone, making the moduli space finite-dimensional.} However, the moduli space of the fractional brane gauge theory can be naturally embedded into the moduli space of the quiver gauge theories of regular and fractional D3 branes with infinite cascade. The background found by backreacting $M$ fractional D3 branes is not dual to a vacuum of the fractional brane theory, but rather to its embedding into the infinitely cascading quiver gauge theory of regular and fractional D3 branes. 
This statement may look trivial for the gauge theories on fractional branes at isolated singularities, for which only a finite number of supersymmetric vacua exists after the complex structure deformation takes place: it is well known for instance that one can associate to each of the $M$ vacua of $\mathcal{N}=1$ $SU(M)$ pure SYM a vacuum of the cascading Klebanov-Strassler theory and a dual background \cite{Klebanov:2000hb}. The content of the previous statement considerably increases when fractional D3 branes at non-isolated singularities are involved: then it becomes a statement about the embedding of the whole $(M-1)$-dimensional moduli space of the fractional brane theory into the infinite-dimensional Coulomb branch of the quiver theory with infinite cascade. Extending results of \cite{Benini:2008ir}, in this section we will explicitly provide such an embedding for the case of the gauge theory hosted by D3 branes at the $A_1$ singularity (namely the orbifold $\mathbb{C}\times\mathbb{C}^2/\mathbb{Z}_2$) in type IIB string theory. By means of Seiberg-Witten theory, its M theory realization, and the duality of M theory to type IIB string theory, we will also be able to provide the \emph{exact} type IIB twisted fields for that infinite class of vacua in analytic form. The warp factor is then determined from the twisted fields via a 2-dimensional integral on the orbifold fixed plane. In the following section we will first apply these exact results to some of the vacua studied in \cite{Benini:2008ir}, and then we will exploit them to unravel the fate of fractional D3 branes at nonvanishing string coupling in sections \ref{sec:dissolution} and \ref{sec:HvsC}. \subsection{Seiberg-Witten curves} Seiberg-Witten curves manifest themselves in M theory as holomorphic embeddings of M5 branes \cite{Witten:1997sc}, which are the uplifts (with a rescaling) of systems of D4 branes suspended between parallel NS5 branes in type IIA string theory. 
The low energy gauge dynamics on the suspended D4 branes is a 4-dimensional $\mathcal{N}=2$ gauge theory, whose full quantum dynamics is encoded by the M theory uplift. The $\mathcal{N}=2$ pure gauge theory is engineered by an M5 brane spanning $\mathbb{R}^{1,3}$ and a Riemann surface in $\mathbb{R}^2\times$ cylinder, and located at a point in the three additional dimensions. $\mathcal{N}=2$ supersymmetry requires the embedding to be holomorphic with respect to complex coordinates $v$ and $u$ on $\mathbb{R}^2$ and the cylinder respectively. If instead of a cylinder we consider a 2-torus, we are led to the $\mathcal{N}=2$ quiver theory with two gauge groups coupled by two bifundamental hypermultiplets, like the one of Figure \ref{fig:N=2_quiver}, possibly with different ranks. The embedding of the whole moduli space of the \Nugual{2} $SU(M)$ pure SYM theory into the moduli space of the quiver gauge theory with an infinite cascade follows from the M theory construction, as we now lay out, generalizing the analysis carried out in \cite{Benini:2008ir} for genus zero SW curves. We start with \Nugual{2} $SU(M)$ pure SYM theory. Each point of its moduli space is characterized by the SW curve fibered over it, which in terms of dimensionless variables looks as follows \cite{SU(N),Witten:1997sc}: \begin{equation}\label{SW_curve_glue_dimless} t-P_M(v)+\frac{1}{t}=0\;, \end{equation} where \begin{equation} P_M(v)\equiv\prod_{i=1}^M(v-v_i) \end{equation} is the characteristic polynomial of the adjoint scalar (in units of the nonperturbative scale $\Lambda$). The quiver gauge theory is realized in M theory as an elliptic model, defined by the torus identification \begin{equation}\label{torus_identification_u} u \equiv i\frac{x^6+ix^{10}}{2\pi R_{10}} \sim u+1\sim u+\tau\;, \end{equation} or equivalently \begin{equation}\label{torus_identification_t} t \equiv e^{2\pi i u} \sim qt\;,\qquad\qquad q\equiv e^{2\pi i\tau}\;. 
\end{equation} The embedding of the moduli space of \Nugual{2} $SU(M)$ pure SYM theory into the moduli space of the quiver gauge theory with an infinite cascade is easily obtained by wrapping the SW curves \eqref{SW_curve_glue_dimless} infinitely many times on the torus \eqref{torus_identification_u}-\eqref{torus_identification_t}: \begin{equation} \begin{split} 0 &= \tilde Q (t,P_M(v))=\lim_{K\to\infty} \tilde Q_K (t,P_M(v))\;,\\ \tilde Q_K (t,P_M(v))&= q^{K(K+1)} f(q)\prod_{j=-K}^K \left(q^j t - P_M(v) + \frac{1}{q^jt}\right)\;, \end{split} \end{equation} where the $q^{K(K+1)}$ factor is needed for convergence as $K\to\infty$ and \begin{equation} f(q)\equiv\prod_{l=1}^\infty (1-q^{2l})(1-q^{2l-1})^2 \end{equation} is put for later convenience. The $K\to\infty$ limit converges for any $t$ and $P_M(v)$ because $|q|<1$, and we get the curve% \footnote{By construction, the locus of solutions of equation \eqref{SW_curve_inf_cascade} for the SW curve wrapped on the torus consists of the two roots in $t$ of equation \eqref{SW_curve_glue_dimless} for the SW curve defined on the cylinder, along with all their infinitely many images under the $t\sim q t$ equivalence which defines the M theory torus as a quotient of the cylinder.} \begin{equation}\label{SW_curve_inf_cascade} \begin{split} \tilde Q (t,P_M(v)) &= f(q)\,\left(t-P_M(v)+\frac{1}{t}\right) \cdot\\ &\cdot \prod_{j=1}^\infty \left( 1 - P_M(v) t q^j + t^2 q^{2j} \right) \left( 1 - \frac{P_M(v)}{t} q^j + \frac{q^{2j}}{t^2} \right) = 0\;. 
\end{split} \end{equation} With the aim of proving that \eqref{SW_curve_inf_cascade} is a legitimate SW curve for the quiver gauge theory with an infinite cascade in the UV, we then define a sequence (in $K$) of SW curves for the $SU((2K+1)M)\times SU((2K+1)M)$ quiver theory with equal bare complexified gauge couplings \cite{Ennes:1999fb,Petrini:2001fk} \begin{equation}\label{SW_quiver_K} \mathcal{Q}_K(t,P_M(v))\equiv q^{K(K+1)}\left[-\mathcal{R}_K(v)\theta_3(2u|2\tau)+\mathcal{S}_K(v)\theta_2(2u|2\tau)\right]=0\;, \end{equation} with suitable characteristic polynomials of the adjoint scalars: \begin{equation}\label{characteristic_polynomials} \begin{split} \mathcal{R}_K(v) &= P_M(v) \prod_{j=1}^K \left[ P_M(v)^2 +\frac{(1-q^{2j})^2}{q^{2j}} \right] \\ \mathcal{S}_K(v) &= \left(P_M(v)+q^{-K-\frac{1}{4}}\right) \prod_{j=1}^K \left[ P_M(v)^2 +\frac{(1-q^{2j-1})^2}{q^{2j-1}} \right] \end{split} \end{equation} The $K\to\infty$ limit converges for any $t=e^{2\pi i u}$ and $P_M(v)$ since $|q|<1$, and we call \begin{equation} \label{SW_curve_inf_cascade_quiver} \mathcal{Q}(t,P_M(v))=\lim_{K\to\infty }\mathcal{Q}_K(t,P_M(v))\;. \end{equation} It is then possible to verify, as was done in \cite{Benini:2008ir}, that for any choice of $t$ and $P_M(v)$ \begin{equation} \tilde Q(t,P_M(v))= \mathcal{Q}(t,P_M(v)) \;. \end{equation} Therefore the holomorphic curve \eqref{SW_curve_inf_cascade} obtained by wrapping a SW curve of the pure SYM theory arises as the infinite cascade limit \eqref{SW_curve_inf_cascade_quiver} of a sequence of specific SW curves \eqref{SW_quiver_K}-\eqref{characteristic_polynomials} for the quiver theory on regular D3 branes at the $\mathbb{C}\times\mathbb{C}^2/\mathbb{Z}_2$ orbifold.
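Both the cancellation of the $q^{K(K+1)}$ prefactor against the growth of the product factors and the equality of the defining product $\tilde Q_K$ with the rewritten form \eqref{SW_curve_inf_cascade} can be checked numerically at sample values of $q$, $t$ and $P_M(v)$ (the numbers below are illustrative; any $|q|<1$ and $t\neq 0$ work). Distributing the prefactor pairwise as $q^{K(K+1)}=\prod_{j=1}^K q^{2j}$ keeps all partial products bounded:

```python
import numpy as np

# Illustrative sample point: any |q| < 1, t != 0, and a value P = P_M(v).
q = 0.4 * np.exp(0.3j)          # q = e^{2 pi i tau}, |q| < 1
t = 1.3 * np.exp(1.1j)
P = 2.7 - 0.5j                  # value of the characteristic polynomial

def f_trunc(q, nmax=200):
    """Truncation of f(q) = prod_l (1 - q^{2l}) (1 - q^{2l-1})^2."""
    out = 1.0 + 0.0j
    for l in range(1, nmax + 1):
        out *= (1 - q**(2 * l)) * (1 - q**(2 * l - 1)) ** 2
    return out

def Q_wrapped(K):
    """q^{K(K+1)} f(q) prod_{j=-K}^{K} (q^j t - P + 1/(q^j t)), with the
    prefactor distributed pairwise (q^{K(K+1)} = prod_j q^{2j}) so that
    partial products stay bounded as K grows."""
    out = f_trunc(q) * (t - P + 1.0 / t)
    for j in range(1, K + 1):
        out *= q**(2 * j) * (q**j * t - P + q**(-j) / t) \
                          * (q**(-j) * t - P + q**j / t)
    return out

def Q_product(K):
    """Truncation of the rewritten infinite-product form of the curve."""
    out = f_trunc(q) * (t - P + 1.0 / t)
    for j in range(1, K + 1):
        out *= (1 - P * t * q**j + t**2 * q**(2 * j)) \
             * (1 - (P / t) * q**j + q**(2 * j) / t**2)
    return out
```

The pairing identity $q^{2j}\,(q^j t - P + q^{-j}/t)(q^{-j} t - P + q^j/t) = (1 - P t q^j + t^2 q^{2j})(1 - (P/t)\,q^j + q^{2j}/t^2)$ is exact, so the two truncations agree to machine precision, while increasing $K$ changes the value only at order $|q|^{K+1}$, illustrating the convergence claimed for $|q|<1$.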
This construction holds for any choice of the characteristic polynomial $P_M(v)$; hence it provides the promised embedding of the whole moduli space of $\mathcal{N}=2$ $SU(M)$ SYM into the infinite-dimensional Coulomb branch of the cascading quiver gauge theory on D3 branes at the $\mathbb{C}\times\mathbb{C}^2/\mathbb{Z}_2$ orbifold singularity. \subsection{Exact twisted field configurations in type IIB string theory} The type IIB supergravity solutions dual to the infinite class of cascading vacua previously discussed are easily obtained, by recalling that M theory compactified on a torus of complex structure $\tau$ is equivalent, in the zero size limit of the torus, to type IIB string theory with axio-dilaton $C_0+\frac{i}{g_s}=\tau$. The main character in the type IIB solutions is the twisted sector complex scalar \begin{equation} \label{gamma_def} \gamma\equiv c+\tau b = \frac{1}{4\pi^2\alpha'}\int_\mathcal{C} \left( C_2 + \tau B_2 \right) \;. \end{equation} Here $\mathcal{C}$ is the exceptional 2-cycle living at the orbifold fixed plane, and $C_2$ and $B_2$ are the RR and NSNS 2-form potentials. Supersymmetry requires that $\gamma$ be a holomorphic function of the dimensionful complex coordinate $z$ on the orbifold fixed plane, which is related to $v$ in M theory as $z = 2\pi \alpha'\Lambda\,v$. We will use the dimensionless $v$ instead of $z$ in the remainder of the article. By duality, the twisted sector complex scalar in type IIB is given by the separation between the two branches of the M5 brane on the torus: \begin{equation}\label{gamma_from_u} \gamma(v)= u_-(v)-u_+(v)\;, \end{equation} where \begin{equation}\label{u_from_t} u_\pm (v) \equiv \frac{1}{2\pi i}\log t_\pm(v) \end{equation} and $t_\pm(v)$ are the two solutions of \eqref{SW_curve_glue_dimless} at fixed $v$:% \footnote{We used the unwrapped curve \eqref{SW_curve_glue_dimless} instead of the wrapped one \eqref{SW_curve_inf_cascade} solely as a simplifying choice, since the information they encode is the same.
Picking any other pair of solutions of \eqref{SW_curve_inf_cascade} and \eqref{u_from_t} which are not equivalent under \eqref{torus_identification_t} leads to the same result for the twisted sector scalar \eqref{gamma_from_u} up to its periodicities $\gamma\sim\gamma+1\sim\gamma+\tau$, which amount to large gauge transformations.} \begin{equation}\label{t_pm} t_\pm (v)= \frac{1}{2} \left[ P_M(v) \pm \sqrt{P_M(v)^2-4} \right]\;. \end{equation} Note that we implicitly chose the C-picture of \cite{Benini:2008ir}, which is naturally suited for infinite cascades with exactly double branch points, such as those under discussion. In section \ref{sec:HvsC} we will discuss the H-picture, which turns out to be valid more generally, and motivate why the C-picture can be used to describe this class of vacua as well. The previous result can be recast in the form \begin{equation}\label{gamma_generic_poly} \gamma(v)=\frac{i}{\pi} \log \left[ \frac{P_M(v)}{2}+ \sqrt{\left(\frac{P_M(v)}{2}\right)^2 -1} \right]= \frac{i}{\pi} \cosh^{-1} \frac{P_M(v)}{2}\;. \end{equation} The large $v$ asymptotics is the expected $\gamma(v)\simeq \frac{iM}{\pi}\ln v$, with a convenient choice of branch cuts in the $v$ plane. Moreover, \begin{equation}\label{dgamma_generic_poly} \begin{split} d\gamma(v)=\frac{i}{\pi}\frac{dP_M(v)}{\sqrt{P_M(v)^2-4}}\;. \end{split} \end{equation} The previous exact expressions \eqref{gamma_generic_poly}-\eqref{dgamma_generic_poly} are to be compared with the naive expressions \begin{equation}\label{gamma_naive_generic_poly} \gamma_n(v)=\frac{i}{\pi} \log P_M(v) \end{equation} and \begin{equation}\label{dgamma_naive_generic_poly} \begin{split} d\gamma_n(v)=\frac{i}{\pi}\frac{dP_M(v)}{P_M(v)}\;, \end{split} \end{equation} which capture the semiclassical dynamics of the dual gauge theory but miss the nonperturbative effects encoded in the SW curve. The difference is that new branch cuts, related to the square root, appear in \eqref{gamma_generic_poly}.
$d\gamma$ behaves more mildly than the naive $d\gamma_{n}$, which has simple poles at $v=v_i$: at worst it has the singular behavior $d\gamma(v)\sim (v-v_{i\pm})^{-\frac{1}{2}}\, dv$ as $v\to v_{i\pm}$, where $P_M(v_{i\pm})=\pm 2$ and $P_M'(v_{i\pm})\neq 0$; if $P_M(v_{i\pm})=\pm 2$ but $P_M'(v_{i\pm})= 0$ it has at most non-singular branch points; otherwise it is analytic. The simple pole at infinity survives. \subsection{Branch points and singularities of the SW curves}\label{subsec:branc_singul} This section is devoted to the study of branch points and singularities of the SW curve \eqref{SW_curve_inf_cascade}, or equivalently \eqref{SW_curve_inf_cascade_quiver}, and their relations with special values of the twisted sector complex scalar $\gamma$ in type IIB string theory. Let us start with branch points, which correspond to values of $v$ such that $u_-(v)=u_+(v)$ up to the torus equivalence $u\sim u+1\sim u+\tau$. When this happens, the two branches of the M5 brane join; after reducing to type IIA string theory, this means that at those values of $v$ the two NS5 branes cross each other with no discontinuity of the periodic scalar on their worldvolume. By construction, these branch points arise when $u\in\frac{\mathbb{Z} +\tau \mathbb{Z}}{2}$ \cite{Petrini:2001fk}. From the point of view of the solution \eqref{gamma_generic_poly} for the twisted sector scalar potential, they correspond to values of $v$ such that $\gamma(v)\in \mathbb{Z}+\tau \mathbb{Z}$, by virtue of \eqref{gamma_from_u}. In particular, those values of $v$ are defined by the condition \begin{equation}\label{branch_condition} P_M(v)=\pm \left(q^\frac{n}{2}+q^{-\frac{n}{2}}\right)=2 \cosh\left[i \pi (n\tau+m)\right]\;,\qquad n,m\in\mathbb{Z}\;. \end{equation} Notice that from the point of view of string theory these are locations where additional massless degrees of freedom might appear (in the type IIA picture tensionless strings arise where the two NS5 branes cross).
Despite our poor knowledge of string and M theory in such conditions, in the present case gauge/string duality along with Seiberg-Witten theory allows us to identify the extra massless states whenever they appear. The quickest way of seeing which of those branch points give rise to additional massless modes (monopoles, dyons or gauge bosons) is to look for singularities of the SW curve. They are most easily found using its form \eqref{SW_curve_inf_cascade}, and correspond to points $(t,v)$ such that $\tilde Q(t,P_M(v))=0$ and $d\tilde Q(t,P_M(v))=0$. The defining equation of the curve $\tilde Q(t,P_M(v))=0$ requires that, for some integer $h$, $t$ and $v$ are subject to the condition \begin{equation} q^h t+\frac{1}{q^h t}=P_M(v)\;. \end{equation} Then the curve is singular if \begin{equation}\label{singul} \lim_{K\to\infty} q^{K(K+1)}\,\left[q^h\left(1-\frac{1}{q^{2h}t^2}\right)dt-dP_M(v) \right]\prod_{\substack{ l=-K\\ l\neq h}}^K \left(q^l t+\frac{1}{q^l t}-q^h t-\frac{1}{q^h t}\right)=0 \end{equation} holds too. There are two classes of possibilities, which we will name singularities of the first or of the second kind, depending on whether it is the first factor in \eqref{singul} or one of the factors in the infinite product that vanishes. Singularities of the first kind are solutions of \begin{equation} \begin{cases} q^h t =\pm 1\\ P_M(v) = \pm 2\\ P_M'(v)=0 \end{cases} \end{equation} and they may or may not exist depending on the form of the polynomial $P_M(v)$. They come from possible singularities of the (images under $t \mapsto q^h t$ of the) SW curve \eqref{SW_curve_glue_dimless} of pure $SU(M)$ SYM. Correspondingly, at such values $v=v_*$ the twisted sector scalar $\gamma(v_*)\in \mathbb{Z}$, namely $c$ is integer and $b=0$.\footnote{Recall that a rescaling of the form $t \mapsto \alpha t$, in particular $t \mapsto q^h t$, leaves $\gamma$ invariant.
} Singularities of the second kind are solutions of \begin{equation} \begin{cases} q^h t =\pm q^\frac{h-l}{2}\\ P_M(v) = \pm \left(q^\frac{h-l}{2}+q^{-\frac{h-l}{2}}\right) =2\cosh\left[ i \pi \left((l-h)\tau+m\right) \right] \end{cases} \end{equation} with $l\neq h$ and $l,m\in\mathbb{Z}$. These singularities always exist, and correspondingly $\gamma(v)\in \mathbb{Z}+\tau\mathbb{Z}_0$, namely $c$ and $b$ are integer but $b\neq 0$. They genuinely arise from the (infinitely) cascading nature of the RG flow of the vacua constructed; they are double branch points reminiscent of the baryonic root of $\mathcal{N}=2$ SQCD, in the same spirit of \cite{Benini:2008ir} but with the important difference that now these branch points are by construction \emph{exactly} double for every choice of the polynomial $P_M(v)$, precisely like those at the baryonic root of $\mathcal{N}=2$ SQCD. Mutually local massless monopoles arise, making the analogy to the baryonic root tighter. Summarizing, branch points of the SW curve for the infinite cascade appear at values of $v$ such that $\gamma(v) \in \mathbb{Z}+\tau\mathbb{Z}$. Where $\gamma(v)\in \mathbb{Z}+\tau\mathbb{Z}_0$ we have exactly double branch points related to the strong coupling transitions in the cascade; the branch points where $\gamma(v)\in \mathbb{Z}$, instead, are generically not double, but can be made so by tuning zeros of the characteristic polynomial $P_M(v)$, precisely as for the curve of the $SU(M)$ pure gauge theory. 
\section{Examples}\label{sec:examples} In this section we apply the general method for finding exact solutions for the twisted fields developed in the previous section to some interesting cases: the $\mathbb{Z}_{2M}$-symmetric enhan\c con vacuum whose dual is approximated by the excised version of the solution of \cite{Bertolini:2000dk,Polchinski:2000mx} with $M$ smeared tensionless fractional branes at the enhan\c con ring, and one of the $M$ $\mathbb{Z}_2$-symmetric enhan\c conless vacua whose SW curves have genus zero \cite{Benini:2008ir} and which become the Klebanov-Strassler vacua after an infinite mass deformation. \subsection{The $\mathbb{Z}_{2M}$-symmetric enhan\c con vacuum}\label{subsec:enhancon} The $\mathbb{Z}_{2M}$-symmetric enhan\c con vacuum with an infinite cascade can be obtained by means of the previously explained embedding by taking $P_M(v)=v^M$. The exact twisted sector scalar field is \begin{equation}\label{gamma_enhancon} \gamma(v)=\frac{i}{\pi} \log \left[ \frac{v^M}{2}+ \sqrt{\left(\frac{v^M}{2}\right)^2 -1} \right]= \frac{i}{\pi} \cosh^{-1} \frac{v^M}{2}\;, \end{equation} whose imaginary part is plotted in Fig. \ref{fig:b-enhancon}. \begin{figure}[t] \centering \includegraphics[width=9cm]{b-enhancon.eps} \caption{\small $b$ field in the type IIB dual of the enhan\c con vacuum, for $M=6$ and $g_s=1$. \label{fig:b-enhancon}} \end{figure} We can approximate the previous exact result inside and outside the circle of radius $\rho_e\equiv 2^{1/M}$ and get to leading order \begin{equation}\label{approx_enhancon} \gamma(v)\simeq \begin{cases} \frac{iM}{\pi}\log v +\mathcal{O}(v^{-2M}) & \textrm{if}\quad |v|^{M}\gg 2 \\ -\frac{1}{2}+\frac{v^M}{2\pi}+\mathcal{O}(v^{2M}) & \textrm{if}\quad |v|^{M}\ll 2 \end{cases}\;, \end{equation} where we worked in the $v$ plane with a suitable choice of branch cuts lying along the circle of radius $\rho_e$ and the real half-line $[2^{1/M},+\infty)$.
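The two asymptotic regimes in \eqref{approx_enhancon} can be checked numerically against the exact expression \eqref{gamma_enhancon}; this is a quick consistency sketch, with arbitrary choices of $M$ and of the test points:

```python
import cmath

M = 6  # arbitrary test value

def gamma(v):
    """Exact twisted scalar of the enhancon vacuum, P_M(v) = v^M."""
    return (1j / cmath.pi) * cmath.acosh(v ** M / 2)

# interior of the enhancon circle, |v|^M << 2: gamma ~ -1/2 + v^M/(2 pi)
v_in = 0.4
assert abs(gamma(v_in) - (-0.5 + v_in ** M / (2 * cmath.pi))) < 1e-4
# exterior, |v|^M >> 2: the naive logarithmic running gamma ~ (iM/pi) log v
v_out = 3.0
assert abs(gamma(v_out) - (1j * M / cmath.pi) * cmath.log(v_out)) < 1e-5
```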
The approximation employed in \cite{Benini:2008ir} is recovered upon further neglecting the first corrections in the interior. As a bonus, we also get the average value of the potential in the interior. Note also, either using the exact result or the approximation, that the 5-brane charge enclosed in a disk of radius $\rho$ centered in the origin vanishes if $\rho<\rho_e$ and equals $2M$ if $\rho>\rho_e$, as expected. The branch points of $\gamma$ \eqref{gamma_enhancon} at finite values of $v$ are the branch points of the SW curve of $SU(M)$ at the origin of the moduli space, namely the $2M$ roots of $v^{2M}=4=\rho_e^{2M}$, lying on the enhan\c con circle. They are simple branch points of the SW curve of the quiver theory. Exactly double branch points of the SW curve of the quiver theory correspond to the roots of $v^{2M}=4\left[\cosh(i\pi k\tau)\right]^2$, $k\in\mathbb{Z}_0$. Far from the enhan\c con region, where the logarithmic approximation can be applied, they lie at the intersections of circles (curves of integer $b$) with logarithmic spirals (curves of integer $c$). \subsection{The vacuum with genus 0 hyperelliptic curve}\label{subsec:genus0} A $\mathbb{Z}_{2}$-symmetric enhan\c conless vacuum with an infinite cascade, whose Seiberg-Witten curve has genus zero, is obtained by taking $P_M(v)=2T_M\left( \frac{v}{2} \right)$, where $T_M$ is the $M$-th degree Chebyshev polynomial of the first kind, which satisfies $T_M(x)=\cos(M \cos^{-1}x)=\cosh(M \cosh^{-1}x)$. The asymptotics is $P_M(v)\approx v^M$ as $v\to\infty$. The exact twisted sector scalar field \eqref{gamma_generic_poly} then becomes \begin{equation}\label{gamma_genus_zero} \gamma(v)= \frac{i}{\pi}\,\cosh^{-1} T_M\left(\frac{v}{2}\right)= \frac{iM}{\pi}\,\cosh^{-1} \frac{v}{2}\;, \end{equation} whose imaginary part is plotted in Fig. \ref{fig:b-enhanconless}. 
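The second equality in \eqref{gamma_genus_zero} rests on the Chebyshev identity $\cosh^{-1} T_M(x)=M\cosh^{-1}x$; a small numerical check, with hypothetical values of $M$ and $v$ chosen away from the cut $[-2,2]$ so that all branches are principal:

```python
import math

def cheb_T(M, x):
    """Chebyshev polynomial of the first kind, via T_{k+1} = 2x T_k - T_{k-1}."""
    t0, t1 = 1.0, x
    if M == 0:
        return t0
    for _ in range(M - 1):
        t0, t1 = t1, 2 * x * t1 - t0
    return t1

M, v = 6, 3.0  # test values; v > 2 keeps acosh on its principal branch
assert abs(cheb_T(M, v / 2) - 161.0) < 1e-9   # T_6(3/2) = 161
assert abs(math.acosh(cheb_T(M, v / 2)) - M * math.acosh(v / 2)) < 1e-9
```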
\begin{figure}[t] \centering \includegraphics[width=9cm]{b-enhanconless.eps} \caption{\small $b$ field in the type IIB dual of the vacuum with genus zero SW curve, for $M=6$ and $g_s=1$. \label{fig:b-enhanconless}} \end{figure} The natural coordinates on the complex plane for the configuration under investigation are elliptic coordinates. If we set $v=x+iy$, we can introduce new coordinates $\omega=\mu +i\nu$ according to \begin{equation}\label{elliptic_coordinates} v = 2 \cosh\omega \qquad \Longleftrightarrow \qquad \begin{cases} x = 2 \cosh\mu \cos\nu \\ y = 2 \sinh\mu \sin\nu \end{cases} \end{equation} so that \begin{align} \frac{x^2}{\cosh^2\mu}+\frac{y^2}{\sinh^2\mu} &=4 \label{ellipses} \\ \frac{x^2}{\cos^2\nu}-\frac{y^2}{\sin^2\nu} &=4 \;. \label{hyperbolae} \end{align} We can take $\mu\in\mathbb{R}^+$, $\nu\in[-\pi,\pi]$. Equation \eqref{ellipses} tells us that curves of constant $\mu$ are ellipses and equation \eqref{hyperbolae} tells us that curves of constant $\nu$ are hyperbolae. The ellipses have semimajor axis $a= 2\cosh\mu$ and eccentricity $e= 1/\cosh \mu$; their foci are at $v=\pm 2$; at large $\mu$ they become more and more similar to circles. Using these coordinates the twisted sector scalar takes the simple form \begin{equation}\label{gamma_omega} \gamma=i\,\frac{M}{\pi}\,\omega\;. \end{equation} Therefore curves of constant $b$ are those ellipses and curves of constant $c+C_0 b$ are those hyperbolae. The branch points of $\gamma$ \eqref{gamma_genus_zero} at finite values of $v$ are the branch points of the SW curve of $SU(M)$ with $P_M(v)=2T_M\left( \frac{v}{2} \right)$, namely the points $v_m=2\cos\frac{\pi m}{M}$, $m\in\mathbb{Z}$, which lie on the degenerate ellipse \eqref{ellipses} with $\mu=0$, namely the segment $v\in[-2,2]$ on the real axis. They are branch points for the SW curves of $SU(M)$ and of the quiver theory, with the branch cuts attached to one another along this segment.
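As a consistency check of the elliptic coordinates \eqref{elliptic_coordinates}, one can verify numerically that a point $(x,y)$ satisfies both conics \eqref{ellipses}-\eqref{hyperbolae} and that the foci sit at $v=\pm 2$; the values of $\mu,\nu$ below are arbitrary:

```python
import math

mu, nu = 0.8, 1.1  # arbitrary test values
x = 2 * math.cosh(mu) * math.cos(nu)
y = 2 * math.sinh(mu) * math.sin(nu)
# the point lies on the mu = const ellipse and on the nu = const hyperbola
assert abs(x ** 2 / math.cosh(mu) ** 2 + y ** 2 / math.sinh(mu) ** 2 - 4) < 1e-12
assert abs(x ** 2 / math.cos(nu) ** 2 - y ** 2 / math.sin(nu) ** 2 - 4) < 1e-12
# semiaxes a = 2 cosh(mu), b = 2 sinh(mu): focal distance sqrt(a^2 - b^2) = 2
a, b = 2 * math.cosh(mu), 2 * math.sinh(mu)
assert abs(math.sqrt(a ** 2 - b ** 2) - 2) < 1e-12
```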
The other double branch points of the SW curve of the quiver theory lie at $v=2\cos\left( \frac{\pi}{M}(m+\tau n)\right)$, with $m\in\mathbb{Z}_0$ and $n\in\mathbb{Z}$. In the region of large $|n|/(g_sM)$, where the logarithmic approximation holds, they approximately lie at the intersection of circles and logarithmic spirals: $v\simeq \exp\left[\pi\left( \frac{n}{g_sM}-i\frac{m}{M} \right)\right]$. As expected, there is no true enhan\c con, in the sense that no region of the complex plane is enclosed by the innermost of the generalized enhan\c con loci (\emph{i.e.} the $b=0$ locus), because that degenerates to a segment. The discussion of a closely related twisted field configuration, relevant to $\mathcal{N}=2$ fractional D3 branes on orbifold fixed loci with cylindrical topology such as the $\mathbb{Z}_2$ orbifold of the deformed conifold, is relegated to the appendix. \section{Dissolution of fractional branes into twisted fluxes}\label{sec:dissolution} In the previous section we employed the embedding of the moduli space of the pure gauge theory into the Coulomb branch of the quiver gauge theory with an infinite cascade and the related construction of exact solutions for twisted fields in type IIB string theory explained in section \ref{sec:embedding} for studying configurations where all the eigenvalues of the adjoint scalars in the quiver gauge theory lie in nonperturbative regions. By the construction of section \ref{sec:embedding}, we can separate in \eqref{characteristic_polynomials} the eigenvalues of the two adjoint scalars into those of $P_M(v)$ and those related to the infinite cascade. Correspondingly, in the analysis of subsection \ref{subsec:branc_singul} we also distinguished branch points of the SW curve of the quiver theory \eqref{SW_curve_inf_cascade_quiver} which are also branch points of the curve of the pure gauge theory \eqref{SW_curve_glue_dimless} (namely $n=0$ in eq. \eqref{branch_condition}) and those which are not ($n\neq 0$). 
The former ones may or may not be double according to the form of the polynomial $P_M(v)$, whereas the latter are exactly double by construction, in complete analogy to the branch points of $\mathcal{N}=2$ SQCD at the baryonic root. In this section we will concentrate on eigenvalues and branch points of the first class. The study of the exact dual twisted field configurations will allow us to understand the fate of fractional D3 branes in type IIB string theory at nonvanishing string coupling. If the absolute value of one such eigenvalue is much larger than $\rho_e=2^{1/M}\approx 1$ (in units of $\Lambda$), then the eigenvalue lies in a perturbative region of the $SU(M)$ theory where the semiclassical approximation is good and the nonperturbative splitting of the two related branch points is small. In the type IIA/M theory description, the D4 brane is inflated into a very thin M5 brane tube. Neglecting nonperturbative effects in the dual gauge theory, the type IIB description is in terms of a `localized' fractional D3 brane, leading to a simple pole of $d\gamma$ at its location. When instead the absolute value of the eigenvalue becomes comparable to or smaller than $1$, then the splitting between the two branch points becomes of the same order of magnitude as the separation between different pairs of branch points. In the type IIA/M theory description, the D4 brane is inflated into a fat M5 brane tube, that might even touch other fat tubes in more singular situations. In the type IIB description, one could think that the fractional D3 brane develops a wavefunction which is spread over a region of the size of the splitting. In the previous section we studied two configurations dual to nonperturbative points of the moduli space of the $SU(M)$ theory, the enhan\c con vacuum and the enhan\c conless vacuum whose hyperelliptic curve is a sphere. The exact form of twisted fields in such cases is already quite instructive for our purposes.
The naive enhan\c con ring solution proposed in \cite{Benini:2008ir} involved $M$ tensionless fractional D3 branes smeared at a ring of radius $\rho_e$ (the enhan\c con ring). In that approximation, the tensionless smeared fractional branes were needed as sources accounting for the discontinuity of the enclosed D5 brane charge at the ring. Instead, our exact solution \eqref{gamma_enhancon} involves twisted fields only, with no need of smeared tensionless fractional brane sources. Indeed, the solution for $\gamma$ is holomorphic everywhere along the ring, except at the $2M$ branch points of the square root,\footnote{Of course $\gamma$ is everywhere well defined on its Riemann surface.} and the discontinuity of the D5 brane charge is simply provided by the $M$ disjoint branch cuts joining them, along with the $M$ branch cuts lying between one branch point in each pair and the point at infinity. Therefore we see that at nonvanishing string coupling those fractional D3 branes, rather than becoming tensionless and uniformly smeared along the ring, completely dissolve into twisted fluxes. The same statement holds for the enhan\c conless configuration of subsection \ref{subsec:genus0}, for which all the branch cuts join to become a single one. After the transmutation, the D5 brane charges of the fractional D3 branes are provided by the monodromies of $\gamma$ around the nontrivial loops of the Riemann surface $\gamma$ is defined on. We will detail this assertion further in the following. Surprisingly enough from the naive viewpoint of type IIB string theory, we will see in the remainder of this section that the transmutation into twisted fields occurs as well for naively tensionful fractional branes, related to eigenvalues lying in a perturbative region. This dissolution also addresses two interrelated puzzles concerning localized tensionful fractional D3 branes. The resolution of the puzzles is provided by gauge theory instantonic effects encoded in the SW curve.
Those effects might look negligible in the gauge theory in regimes of small coupling; in spite of that, regardless of how small they are, they turn out to be always crucial for solving the two puzzles on the dual string side, that we now explain. \subsection{Puzzles with fractional D3 branes} Consider $M$ coincident fractional D3 branes, located conventionally at $v=0$, and let them backreact. Naively, they contribute a factor $\frac{iM}{\pi}\log v$ to the twisted sector complex scalar potential $\gamma$. The twisted fluxes sourced by the fractional branes carry a D3 brane charge because of the modified Bianchi identity/equation of motion \begin{equation}\label{eq_F5} dF_5=-H_3\wedge F_3+\mathrm{sources}=\frac{i}{2}g_s (4\pi^2\alpha')^2 \,d\gamma\wedge\overline{d\gamma}\wedge\omega_2\wedge\omega_2 +\mathrm{sources}\;, \end{equation} where $\omega_2$ is a closed antiselfdual $(1,1)$ form with delta-function support on the orbifold plane, normalized as $\int_\mathcal{C} \omega_2=1$, which satisfies $\int_{ALE}\omega_2\wedge \omega_2=-\frac{1}{2}$, and $F_5$ the gauge invariant improved RR 5-form field strength. The fractional branes themselves, if tensionful, also contribute to \eqref{eq_F5} via their D3 brane charge, which in turn depends on the value of $b$ at their location. Both those contributions (twisted fluxes and D3 brane charge) source $F_5$. We now face a first problem: considering a shell $S$ with $S^5/\mathbb{Z}_2$ boundaries of outer radius $R_o$ and inner radius $R_i$ centered in the position of the fractional D3 branes, the contribution of the twisted fields generated by the fractional branes to the D3 charge is \begin{equation} \Delta Q_3^{fluxes} (S)\equiv -\frac{1}{(4\pi^2\alpha')^2}\int_{S}F_5=\frac{g_s M^2}{\pi}\log\frac{R_o}{R_i}=M[b(R_o)-b(R_i)]\;, \end{equation} which diverges to $+\infty$ as we send $R_i\to 0$ keeping $R_o$ fixed. 
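To make the logarithmic divergence explicit, one may sketch the computation from the naive potential; we assume here, consistently with the conventions above, the decomposition $\gamma=c+\tau b$ with $\operatorname{Im}\tau=1/g_s$:

```latex
\gamma_n(v)=\frac{iM}{\pi}\log v
\quad\Longrightarrow\quad
b(v)=\frac{\operatorname{Im}\gamma_n(v)}{\operatorname{Im}\tau}
=\frac{g_s M}{\pi}\log|v|\;,
\qquad
M\left[b(R_o)-b(R_i)\right]=\frac{g_s M^2}{\pi}\log\frac{R_o}{R_i}
\;\longrightarrow\;+\infty
\quad\text{as } R_i\to 0\;.
```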
At the same time, the charge carried by the fractional D3 branes after the backreaction is formally $M b(0)$, which equals $-\infty$ once the backreaction is taken into account. These two divergences constitute the first problem. The crux of the problem is that the fractional branes could not actually be BPS if they really had negative D3 brane charge. One could object that each fractional brane should not backreact on itself, but that does not solve the problem: separating the $M$ branes slightly, one could still compute the effect on a single brane exerted by the other branes; then, as they approach each other, the D3 brane charge of each single brane decreases, at some point becoming negative and eventually diverging to $-\infty$ in the limit where it meets one of the other branes. Note also that if we regularized the D3 brane charge carried by the fractional D3 branes by substituting $M b(0)$ with $M b(R_i)$ and added the two contributions without caring about divergences in the $R_i\to 0$ limit, we would then get a sensible answer, which is precisely the expected total D3 brane charge. The problem is analogous to the divergence of the self-energy of an electron in classical physics; there one has to assign by hand a classical size to the electron in order to avoid the divergence. If this size is instead sent to zero, then the electrostatic energy diverges. A usual prescription in classical electromagnetism is to substitute the electron with a conducting sphere whose radius, called the classical radius of the electron, is such that the whole mass of the electron is provided by the electrostatic potential energy of its field. A similar prescription could in principle be applied to the naive solutions under consideration, by promoting each fractional brane to a `conducting' BPS extended object. By `conducting' and BPS here we mean that $b$ is constant and nonnegative at the surface of this object.
The previous demand does not uniquely define the surface, because of the arbitrariness of the boundary value of $b$. One might be tempted to fix the boundary value of $b$ so that the D3 brane charge of the fractional brane is the smallest allowed by supersymmetry, namely zero D3 brane charge. However, this prescription is clearly \emph{ad hoc} and does not explain if and how the problem is solved by string theory. Note also that the divergence problem can be by-passed if the fractional branes are continuously distributed. That was done in \cite{Benini:2008ir} both for tensionful antifractional branes providing an ultraviolet cutoff to the cascading RG flow and for tensionless fractional branes lying at the enhan\c con ring in the infrared region of the dual field theory. If that smearing could perhaps be physically motivated for tensionless fractional branes, because SW theory teaches us that in some sense their wavefunctions are spread over a region of comparable size to the separation between different branes, it was nothing but a trick in the case of tensionful branes, whose wavefunctions are on the contrary very localized. The trick was necessary since it was not known how to deal with isolated fractional branes. The second and related puzzle arises when trying to compute the warp factor. The contribution of twisted fluxes to the warp factor is computed via the following formula: \begin{equation}\label{warp} Z_{fl}(y,\vec{x})=4\pi \alpha'^2 g_s^2 \,\frac{i}{2} \int \frac{d\gamma(v)\wedge \overline{d\gamma(v)}}{\left[\vec{x}^2+|y-v|^2 \right]^2}\;, \end{equation} where $y\in \mathbb{C}$ is a coordinate along the directions parallel to the orbifold plane and $\vec{x}$ is a coordinate on the covering space $\mathbb{R}^4$ of the orbifold. 
For the very same reason underlying the divergences in the D3 brane charge -- after all, for BPS objects charge and mass are related -- if there are tensionful fractional brane sources the integral \eqref{warp} diverges \emph{everywhere} \cite{Bertolini:2000dk}: the integrand $d\gamma\wedge\overline{d\gamma}$ in \eqref{warp} close to the sources is not integrable. On the other hand, there would be an additional contribution from the (negative infinite) localized D3 brane charges of the fractional branes. After formal (and to some extent arbitrary) regularization and subtraction similar to those mentioned for the D3 brane charge, one can see that the two divergences cancel. Again, the smearing trick was used in \cite{Benini:2008ir} to by-pass the problem. \subsection{Resolution of the puzzles by transmutation of fractional branes into twisted fields} Exploiting SW theory and dualities, in section \ref{sec:embedding} we managed to find the exact twisted field configurations \eqref{gamma_generic_poly} in type IIB string duals of a class of infinitely cascading vacua. With those solutions at hand, in the remainder of this section we will show how type IIB string theory at nonzero coupling resolves the issues of divergences explained in the previous subsection, by complete transmutation of the fractional D3 branes into twisted fields. We will find that the D3 brane charge enclosed in any finite region is finite, and that the contribution of the twisted fluxes to the metric is finite too.\footnote{Except for the expected curvature singularity which is met when approaching the orbifold plane where localized field strengths have support.} The mechanism is general and applies both to fractional branes which are naively tensionless (eigenvalues in highly nonperturbative regions for the gauge theory) and to fractional branes which are naively tensionful (eigenvalues in perturbative regions for the gauge theory).
\begin{figure}[t] \centering \includegraphics[width=10cm]{Re-gamma-P.eps} \caption{\small Real part of $\gamma$ field in the complex plane parameterized by $P_M$. The imaginary part is shown in Fig. \ref{fig:b-enhanconless}, up to an overall factor of $M$. \label{fig:Re-gamma-P}} \end{figure} Recall the exact formula \eqref{gamma_generic_poly} for the twisted field $\gamma$ and consider its dependence on $P_M$. In the $P_M$ complex plane, $\gamma$ has a branch cut of the square root, that we conveniently take along the real segment $P_M\in[-2,2]$, and a branch cut of the logarithm, that we can take on the negative real axis along the half-line $P_M\in(-\infty,-2]$, see Fig. \ref{fig:Re-gamma-P}. Consider now a counterclockwise loop that winds once around the branch cut of the square root: picking the determination of the square root that is positive for real $P_M>2$, it is easy to compute $\oint d\gamma=-2$. We can parametrize the branch cut by setting $P_M=2\cos\beta$, with $\beta\in[-\pi,\pi]$; hence $\gamma=-\beta/\pi$ along the cut, and we wind around it counterclockwise from $P_M=-2$ to itself as $\beta$ varies between $-\pi$ and $\pi$. We observe that $b=0$ along the curve joining $P_M=-2$ and $P_M=2$, and $c$ varies from $1$ to $0$ as $P_M$ moves along the segment rightward from below and then from $0$ to $-1$ as $P_M$ moves along the segment leftward from above. \begin{figure} \centering \includegraphics[width=12cm]{branchcurves-map.eps} \caption{\small Example of branch curves in the $v$ plane for $M=6$. They are the inverse image of the segment $[-2,2]$ under the polynomial function $P_M(v)$. \label{fig:branchcurves-map}} \end{figure} If we switch from $P_M$ to $v$, recalling that $P_M(v)$ is a degree $M$ polynomial in $v$ we get $M$ roots for $v$ as a function of $P_M$.
For generic polynomials, the roots are all different and the curve joining $P_M=-2$ and $P_M=2$ along the real $P_M$ axis gives rise to $M$ curves joining pairs of branch points of the SW curve of $SU(M)$ in the $v$ plane, as shown in Figure \ref{fig:branchcurves-map}. We will name these curves `branch curves'. Along these branch curves $b=0$, and encircling any of them counterclockwise the monodromy of $c$ is $-2$. Each of these branch curves is the remnant of a fractional D3 brane, as we see from the fact that the monodromy leads to the D5 brane charge of a fractional D3 brane. Indeed the 5-brane charge is \begin{equation} Q_5 = -\frac{1}{4\pi^2\alpha'}\int_{\mathcal{C}\times \mathcal{L}} F_3=- \mathrm{Re} \oint_\mathcal{L} d\gamma= -\oint_\mathcal{L} d\gamma\;, \end{equation} where $\mathcal{L}$ is a loop in the $v$ plane, and each fractional brane has charge $2$, because of the self-intersection number of the exceptional 2-sphere $\mathcal{C}$. We remark that no fractional brane sources have to be added: fractional branes have transmuted into twisted fluxes. This is different from having tensionless smeared fractional branes, which appear as sources of D5 brane charge in the equation of motion of $\gamma$. This phenomenon is the T-dual mechanism of type IIA D4 branes inflating into M5 brane tubes or the type IIB string dual of the splitting of branch points in the SW curve of the gauge theory. The result that all fractional branes transmute into twisted fluxes can be understood as follows: in the M theory description all the information is encoded in the holomorphic embedding of an M5 brane, and under the duality between M theory and type IIB string theory the fivebrane embedding translates into the twisted sector complex scalar potential. The absence of sources follows from the smoothness of the embedding. 
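The monodromy $\oint d\gamma=-2$ around a branch curve, and hence the D5 brane charge computed above, can be confirmed by a direct numerical contour integral of \eqref{dgamma_generic_poly} in the $P_M$ plane; this is a sketch, with an arbitrary contour radius and step number:

```python
import cmath
import math

def monodromy(radius=3.0, n=4000):
    """Integrate d(gamma) = (i/pi) dP / sqrt(P^2 - 4) once counterclockwise
    around the cut [-2, 2], tracking the square-root branch by continuity."""
    total = 0j
    prev = cmath.sqrt(radius ** 2 - 4)  # positive root at the starting point P = radius
    for k in range(n):
        p0 = radius * cmath.exp(2j * math.pi * k / n)
        p1 = radius * cmath.exp(2j * math.pi * (k + 1) / n)
        s = cmath.sqrt(p1 * p1 - 4)
        if abs(s - prev) > abs(-s - prev):
            s = -s  # stay on the branch continuous with the previous step
        total += (1j / cmath.pi) * (p1 - p0) * 0.5 * (1 / s + 1 / prev)
        prev = s
    return total

assert abs(monodromy() - (-2)) < 1e-3
```

The result is independent of the radius, as long as the contour encloses both branch points $P_M=\pm 2$.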
With the correct solution at hand, there is no problem with D3 brane charge anymore: the total charge contained in any finite region is finite and positive, and there are no localized sources of D3 brane charge, unless harmless regular D3 branes are added. Indeed, the total D3 brane charge contained inside a 6-dimensional compact domain $\mathcal{S}\subset \mathbb{C}\times\mathbb{C}^2/\mathbb{Z}_2$ intersecting the orbifold plane on a 2-dimensional domain $\mathcal{D}$ is \cite{Benini:2008ir} \begin{equation}\label{total_D3_charge} Q_3(\mathcal{S})=-\frac{1}{(4\pi^2\alpha')^2}\int_{\partial\mathcal{S}} F_5=\frac{1}{2}\int_\mathcal{D} dc\wedge db=-\frac{1}{2}\int_{\partial\mathcal{D}}b \,dc \;, \end{equation} where the branch curves have to be included in $\partial\mathcal{D}$. In the situation that we are considering, $b=0$ along those curves and therefore there is no such contribution to \eqref{total_D3_charge}. In the next section we will consider the more general possibility where both fractional and antifractional branes are present.\footnote{We adhere to a common abuse of terminology, calling antifractional D3 brane the brane that together with a fractional D3 brane can form a regular D3 brane as a marginal bound state.} We will see that in such a case antifractional branes also transmute into fluxes, but along the corresponding branch curves $b=1$ and the monodromy of $\gamma$ is $+2$, so that their contribution to \eqref{total_D3_charge} provides the D3 brane charge of fractional/antifractional D3 brane pairs. Tuning the polynomials, it is possible to make some of the branch points of the $SU(M)$ curve collide, so that two or more of the branch curves join. In such situations we are in highly nonperturbative regimes where extra massless matter degrees of freedom (mutually local or nonlocal) appear.
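The boundary formula in \eqref{total_D3_charge} can be sanity-checked against the naive single-brane solution on an annulus, where Stokes' theorem reduces the charge to boundary terms; a sketch with hypothetical radii and, for illustration, $M=1$, $g_s=1$:

```python
import math

g_s = 1.0

def b(r):
    """Naive NSNS twisted scalar of one fractional brane: b = (g_s/pi) log r."""
    return g_s * math.log(r) / math.pi

def c(theta):
    """Naive RR twisted scalar: c = -theta/pi, with monodromy -2 per turn."""
    return -theta / math.pi

# Q3 = -1/2 \oint_{\partial D} b dc over the annulus R_i < |v| < R_o:
# outer circle counterclockwise, inner circle with reversed orientation.
R_i, R_o = 0.5, 4.0
n = 1000
Q3 = 0.0
for k in range(n):
    dc = c(2 * math.pi * (k + 1) / n) - c(2 * math.pi * k / n)
    Q3 += -0.5 * b(R_o) * dc + 0.5 * b(R_i) * dc
assert abs(Q3 - (b(R_o) - b(R_i))) < 1e-9  # matches M [b(R_o) - b(R_i)] with M = 1
```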
The most singular case is the enhan\c conless vacuum whose SW curve has genus zero, for which the branch curves all merge into a single one on the real $v$ segment between $-2$ and $2$. Finally, we can see how the problem with the warp factor disappears as well, once the correct solution for twisted fields is considered in \eqref{warp}, and taking into account that no additional contributions exist, because of the absence of sources with nonvanishing D3 brane charge (excluding regular D3 branes). Recall that the curvature diverges approaching the orbifold plane where twisted fluxes have support. The metric of the supergravity plus massless twisted fields solution can not be trusted in that region, whereas it can be trusted far from it, provided the warp factor is well defined. We will therefore concentrate on locations $\vec{x}\neq \vec{0}$. Potentially dangerous integration regions in \eqref{warp} then correspond to singularities of $d\gamma(v)$. The problem with the naive solution \eqref{dgamma_naive_generic_poly} was that the integral \eqref{warp} does not converge around the zeros of the polynomial $P_M(v)$, irrespective of $y$ and $\vec{x}$. With the exact solution \eqref{dgamma_generic_poly}, potentially dangerous integration regions are those surrounding the branch points of $\sqrt{P_M(v)^2-4}$. Let us expand $P_M(v)$ about one of those points, $v_0$: \begin{equation} P_M(v)=\pm \left[2+ a_l (v-v_0)^l+\mathcal{O}\left((v-v_0)^{l+1}\right)\right]\;,\qquad 1\leq l\leq M\;. 
\end{equation} Then \begin{equation} d\gamma(v) \simeq i \,\frac{l \sqrt{a_l}}{2\pi}\,(v-v_0)^{\frac{l}{2}-1} \,dv \end{equation} and therefore the contribution to \eqref{warp} coming from integration over a small neighborhood of $v_0$ with radius $\epsilon\ll 1$ is \begin{equation}\label{delta_warp} \begin{split} \delta Z(y,\vec{x})& \simeq \frac{ 4\pi \alpha'^2 g_s^2}{\left[\vec{x}^2+|y-v_0|^2 \right]^2}\,\frac{i}{2} \int_{|w|<\epsilon} \hspace{-13pt} d w \wedge \overline{d w} \; \,\frac{l^2 |a_l|}{(2\pi)^2} |w|^{l-2} = \frac{2\alpha'^2 g_s^2}{\left[\vec{x}^2+|y-v_0|^2 \right]^2}\,l |a_l|\, \epsilon^l\;, \end{split} \end{equation} which is finite. Therefore the warp factor is finite for any $\vec{x}\neq \vec{0}$. The previous analysis holds for the branch points related to $SU(M)$ SYM, which are not necessarily double. We will argue at the end of the next section that the same results are also valid for the exactly double branch points which are related to the infinite cascade. \section{Remarks on duals of generic points of the moduli space of the quiver theory}\label{sec:HvsC} So far we have studied the embedding of the moduli space of the $\mathcal{N}=2$ pure gauge theory into the infinite-dimensional Coulomb branch of the quiver gauge theory on branes at the $\mathbb{C}\times\mathbb{C}^2/\mathbb{Z}_2$ orbifold with an infinite cascade, and the related type IIB dual solutions in the so called C-picture. In the situations we have studied, an infinite cascade was required in order for all the branch points to be double, except for the $2M$ of them arising from $SU(M)$ pure gauge dynamics. 
In this section we will make several qualitative remarks about the correct solution for twisted fields in generic vacua of the quiver gauge theory, although we will not be able to provide an explicit analytic expression for $\gamma$: we will first consider CFT's with finite rank gauge groups in the ultraviolet like in \cite{Kachru:1998ys}, and see that the solution leads us to the so called H-picture of \cite{Benini:2008ir}, where the NSNS twisted sector potential $b$ is bounded between two adjacent integers, that we will choose to be 0 and 1. Such a picture is always valid, even in the presence of cascades with strong coupling transitions and enhan\c con bearings. Then we will comment under which (very restricted) circumstances the C-picture arises as an equivalent description of the same physics, and motivate its validity in the construction of section \ref{sec:embedding}. We can anticipate that the reason why the C-picture is not always valid, even in the presence of cascades with strong coupling transitions and enhan\c con bearings, is that generically fractional branes located in regions with strong dual gauge coupling do not form domain walls for the twisted fields, as opposed to what appeared in the smeared ring approximation used in \cite{Benini:2008ir}. 
The SW curve for the $\mathcal{N}=2$ $SU(N)\times SU(N)\times U(1)$ quiver gauge theory describing the low energy dynamics on $N$ regular D3 branes at the $A_1$ singularity is \cite{Witten:1997sc,Ennes:1999fb,Petrini:2001fk} \begin{equation}\label{SW_quiver} \frac{R_N(v)}{S_N(v)}=\frac{\theta_2(2u|2\tau)}{\theta_3(2u|2\tau)}\equiv g(u|\tau)\;, \end{equation} where $R_N(v)$ and $S_N(v)$ are the degree $N$ characteristic polynomials of the two adjoint scalar fields.% \footnote{The complexified gauge couplings of the two groups are chosen to be equal in the ultraviolet.} Recalling \eqref{torus_identification_t}, we will make use of the following formulae: \begin{equation} g(u|\tau)= q^{1/4}(t+t^{-1})\prod_{j=1}^\infty \frac{(1+t^2 q^{2j})(1+t^{-2} q^{2j})}{(1+t^2 q^{2j-1})(1+t^{-2} q^{2j-1})} \end{equation} and \begin{equation} g(u+\frac{\tau}{2}|\tau)=g(u|\tau)^{-1}\;,\qquad\qquad g(u+\frac{1}{2}|\tau)=-g(u|\tau)\;. \end{equation} The SW curve \eqref{SW_quiver} appears in M theory as an M5 brane embedding in $\mathbb{R}^2\times T^2$, which is holomorphic with respect to the complex coordinates $v$ and $u$ in $\mathbb{R}^2$ and $T^2$ respectively. It can be equivalently viewed as a double cover of the plane (giving the locations of two NS5 branes as functions of $v$ after reducing to type IIA string theory) or an $N$-tuple cover of the torus (giving the locations of $N$ pairs of suspended D4 branes as functions of $u$ or $t$). Note also that $g(-u|\tau)=g(u|\tau)$, which makes the double cover manifest. Given a point on the Coulomb branch of the quiver gauge theory, we can in principle invert \eqref{SW_quiver} and find the locations of the two NS5 branes $t_-(v)$ and $t_+(v)=t_-(v)^{-1}$. Then the holomorphic twisted scalar potential $\gamma(v)=c(v)+\tau\, b(v)$ in type IIB string theory is still given by \eqref{gamma_from_u}-\eqref{u_from_t}, with the difference that now $t_\pm(v)$ obey \eqref{SW_quiver}. 
It turns out to be convenient to define again `branch curves' in the $v$ plane, as loci of integer $b$. They connect branch points of the SW curve \eqref{SW_quiver}, where $u=0,\frac{1}{2},\frac{\tau}{2},\frac{1+\tau}{2}$ modulo periodicities of the torus \cite{Petrini:2001fk}, and pass either through zeros of $R_N(v)$ or of $S_N(v)$. At those branch points $g(u|\tau)$ takes the values $g_0(q),-g_0(q),g_0(q)^{-1},-g_0(q)^{-1}$ respectively, where \begin{equation} g_0(q)\equiv g(0|\tau)=2q^{1/4}\prod_{j=1}^\infty \frac{(1+ q^{2j})^2}{(1+ q^{2j-1})^2}\;. \end{equation} Each branch curve with even $b$ passes through a zero of $R_N(v)$, where $c$ is half-integer, and connects a branch point where $c$ is odd (and $g=-g_0(q)$) to a branch point where $c$ is even (and $g=g_0(q)$). Each branch curve with odd $b$ passes through a zero of $S_N(v)$, where $c$ is half-integer, and connects a branch point where $c$ is odd (and $g=-g_0(q)^{-1}$) to a branch point where $c$ is even (and $g=g_0(q)^{-1}$). The branch curves are remnants of fractional and antifractional D3 branes, after they transmute into twisted fluxes. Since $b(v)$ is a continuous function, if there exist two branch curves in the $v$ plane where, say, $b=0$ and $b=2$ respectively, then every continuous path connecting the two of them must cross a branch curve where $b=1$. Hence we conclude that the H-picture, where $b$ is bounded between $0$ and $1$ (by convention), is singled out, unless some branch curves join in such a way that they form a domain wall in the $v$ plane. In the H-picture each of the $N$ branch curves with $b=0$ gives a monodromy $\Delta c=-2$ and is the remnant of a fractional brane, whereas each of the $N$ branch curves with $b=1$ gives a monodromy $\Delta c=2$ and is the remnant of an antifractional brane. 
In the approximation employed in \cite{Benini:2008ir} to describe cascading RG flows, smeared tensionless (anti)fractional D3 branes bounded regions of running $\gamma$ and so called enhan\c con plasma regions, namely regions (possibly of vanishing area) of constant integer $b$, hence forming smooth domain walls in the $v$ plane. Twisted fields could be solved for in each domain separately (the problem is a Dirichlet problem supplemented by the D5 brane charge quantization requirement), and then the solutions were glued by requiring continuity of $b$. In that approximation we had the freedom of reversing the sign of twisted field strengths in any region bounded by enhan\c con plasma regions, provided that at the same time we reversed as well the interpretation of the smeared sources at the boundaries, interchanging tensionless fractional and antifractional branes.% \footnote{In the T-dual description, such freedom amounted to that of interpreting the same brane configuration, with smeared D4 branes of no extension in the $x^6$ direction, in terms of two intersecting fivebranes or rather two fivebranes touching each other in the enhan\c con plasma regions.} Such freedom allowed us, in the description of rotationally symmetric configurations like those more frequently studied in the literature, to switch from the H-picture, where $b$ is always bounded between 0 and 1 and which appeared to be more natural in $\mathcal{N}=2$ settings, to the C-picture, where $b$ is instead a monotonic function of the radial coordinate, like in the gravity duals of cascades of Seiberg dualities. More details can be found in \cite{Benini:2008ir}. If instead we consider the correct solutions for twisted fields, smooth domain walls appear much more rarely. This can be easily understood by recalling that already in the $\mathbb{Z}_{2M}$-symmetric enhan\c con vacuum of section \ref{subsec:enhancon} the branch points at the enhan\c con ring in the IR are not double. 
Each branch curve is the union of two radial segments joining the origin with two adjacent branch points lying on the enhan\c con ring. These curves do not form a domain wall at the enhan\c con ring and indeed there is no inner region where $\gamma$ is constant. It is instructive to observe what happens to branch curves in the procedure of forming the enhan\c con bearings of \cite{Benini:2008ir}, namely regions of enhan\c con plasma bounded by two concentric enhan\c con rings. They describe energy ranges in the RG flow of the dual gauge theory where the field theory is conformal and infinitely coupled. We assume $\mathbb{Z}_M$ rotational symmetry for the sake of simplicity, set $\xi=v^M$, $\Phi=\varphi^M$ and $Z=z_0^M$, and consider the polynomials \begin{equation} R=\xi^2-\Phi^2\;,\qquad\qquad S=\xi(\xi-Z)\;. \end{equation} We will then study branch curves in the $\xi$ plane. Those in the $v$ plane trivially follow after taking the $M$-th roots of the former ones. When $\Phi=0$ we can factor out $\xi$ factors in $R$ and $S$, which correspond to regular D3 branes. Then we are left with a short branch curve passing through $Z$ (arising from $R$) and a longer branch curve passing through the origin (arising from $S$) and extending in the nonperturbative region of the dual gauge theory. When $\Phi$ does not vanish but is much smaller than the nonperturbative scale in the gauge theory, the longer branch curve is split close to the origin into two branch curves, and in the interior a new branch curve, now arising from $S$, appears passing through the origin. The two long branch curves arising from $R$ extend along the enhan\c con bearing region, whereas the new inner branch curve extends along the new innermost enhan\c con region. What matters for us is that these branch curves are generically disjoint, singling out the H-picture. As shown in Figure \ref{fig:branch_curves_bear_g}, branch curves generically do not meet nor recombine as $\Phi$ varies. 
Furthermore, it is easy to see that nothing particular happens to the branch curves in the crossover between the perturbative and the nonperturbative region for $\Phi$. \begin{figure}[t] \centering \includegraphics[width=3cm]{curveg1.eps} \includegraphics[width=3cm]{curveg2.eps} \includegraphics[width=3cm]{curveg3.eps} \includegraphics[width=3cm]{curveg4.eps} \includegraphics[width=3cm]{curveg5.eps} \caption{\small Branch curves related to the enhan\c con bearing (in magenta and blue) for a generic choice of the phase of $\Phi$, shown in the $\xi$ plane, with $Z,q>0$. The branch curve passing through the origin is almost invisible due to the exponential hierarchy, and the one passing through $Z$ is out of the plot. $|\Phi|$ decreases from the left to the right. \label{fig:branch_curves_bear_g}} \end{figure} \begin{figure}[t] \centering \includegraphics[width=3cm]{curve1.eps} \includegraphics[width=3cm]{curve2.eps} \includegraphics[width=3cm]{curve3.eps} \includegraphics[width=3cm]{curve4.eps} \includegraphics[width=3cm]{curve5.eps} \caption{\small Branch curves related to the enhan\c con bearing (in magenta and blue) for imaginary $\Phi$, shown in the $\xi$ plane, with $Z,q>0$. $|\Phi|$ decreases from the left to the right, as the radius of the circle shows. \label{fig:branch_curves_bear}} \end{figure} Only at some specific values of $\Phi$ do branch points collide: when $\Phi^2=-\frac{Z^2}{4}\frac{g_0(q)^2}{1-g_0(q)}$ the two branch points determined by $R=g_0 S$ coincide, and when $\Phi^2=-\frac{Z^2}{4}\frac{g_0(q)^2}{1+g_0(q)}$ the two branch points determined by $R=-g_0 S$ coincide. Let us describe what happens, starting with $\Phi$ in the perturbative region, so that in the gauge theory we have a chain of ordinary Higgs breakings $SU(2M)\times SU(2M)\times U(1)\to SU(2M)\times SU(M)\times U(1)^{M+1}\to SU(M)\times U(1)^{3M}$, eventually broken to $U(1)^{4M-1}$ by instantons in the IR.
There is a short branch curve passing through $Z$, there are two short branch curves passing through $\pm \Phi$, and finally there is a branch curve passing through the origin, associated with the deep IR nonperturbative dynamics. We take $Z$ and $q$ positive for the sake of simplicity, and $\Phi$ imaginary as the coincidence of branch points requires. As $|\Phi|$ decreases towards a nonperturbative region, the two branch curves passing through $\pm \Phi$ get longer, and close to the nonperturbative regime they form two disjoint arcs on a circle of radius $|\Phi|$. What happens then is depicted in Figure \ref{fig:branch_curves_bear}, where all the branch curves except the one close to $Z$ are shown: the two branch curves with arc shape meet on one side, forming a C shape, which then acquires a bar; then the other sides of the arcs meet, and after that we are left with a circle with two bars on opposite sides. When $|\Phi|$ is further reduced the two bars get longer and the radius of the circle keeps decreasing. Eventually, in the $\Phi\to 0$ limit, the circle disappears, regular D3 branes decouple and we are left with a single long branch curve passing through the origin. When the two arc-shaped branch curves meet on one side, in the smeared fractional brane approximation one would say that an enhan\c con bearing forms, and then the C-picture could be equivalently used in the interior. However, we see from the picture that there is no domain wall at that point, and the H-picture is still the only valid description of the type IIB solution. When the other two sides of the arcs meet too, then we do have a domain wall, but there are also two additional pieces of branch curves originating from the circular part. Then the Dirichlet problems in the interior and in the exterior are independent, but still the H-picture looks preferred.
Among other problems that we would face if we insisted on applying the C-picture, the additional pieces of the branch curves would behave like remnants of a noninteger number of fractional branes on the exterior part and a noninteger number of antifractional branes in the interior part, which are joined at a point. That does not seem to be a sensible description. There is however an important exception to that statement, which should be clear from the previous discussion. If all the branch points related to a generalized enhan\c con bearing happen to be double, then the branch curves which join pairs of them form a smooth domain wall, without unwanted appendices. Then the C-picture is a sensible description (at least locally). Actually, as we follow a continuous path in the $v$ plane that crosses the branch curve domain wall, the field strength $d\gamma$ picks up a minus sign at the wall, so that from this point of view the C-picture perhaps looks more natural. Such a coincidence of branch points holds for the vacua that arise from the embedding of the moduli space of $SU(M)$ into that of the infinitely cascading quiver gauge theory. This justifies our use of the C-picture in sections \ref{sec:embedding} and \ref{sec:examples}. Moreover, since we solved the same Dirichlet problems as in the H-picture, up to harmless sign changes and shifts, we are guaranteed that the solution we wrote in the C-picture is correct. Stated differently, one can easily switch to the H-picture by suitably shifting and changing the sign of $\gamma$ in \eqref{gamma_generic_poly} at the relevant $b\in \mathbb{Z}$ curves. Finally, following the same rationale as in section \ref{sec:dissolution}, we do not expect any divergence problems in separate contributions to the D3 brane charge and the warp factor when branch points meet.
In the case of the infinitely cascading vacua of sections \ref{sec:embedding} and \ref{sec:examples}, this can be readily checked by means of the explicit solution provided in the C-picture. \section{Summary and conclusions} In this paper we have shown how to embed the moduli space of $\mathcal{N}=2$ pure SYM, the gauge theory on a stack of fractional D3 branes at the $A_1$ singularity, into the moduli space of the infinitely cascading quiver gauge theory on regular and fractional D3 branes at the same singularity. Such an embedding, which is provided at the level of Seiberg-Witten curves, along with dualities, allowed us to find an explicit expression for the exact twisted field configuration in type IIB string theory backgrounds dual to this class of vacua, whose SW curves exhibit exactly double branch points, except for at most $2M$ of them which are inherited from SYM. The result shows an interesting phenomenon: all fractional D3 branes dissolve into twisted fluxes as soon as the string coupling does not vanish. This is nothing but the dual manifestation of the well known blow-up of suspended D4 branes into M5 brane tubes in type IIA/M theory. An important aspect in the type IIB setting is that this transmutation solves divergence issues in the D3 brane charge and the warp factor, which arise if the naive solutions so far considered in the literature are used. The same phenomenon holds generally, with and without cascades, for the duals of any points in the moduli space of the quiver gauge theory on any number of D3 branes at the $A_1$ singularity. We have also remarked that in the type IIB solutions, the H-picture for twisted fields, where $b$ is bounded between 0 and 1, is generically singled out as the only valid description of the solution.
Only in the case of infinite cascades with exactly double branch points, such as those discussed in the first part of the paper, is the so called C-picture, where $b$ grows indefinitely towards infinity as in intrinsically $\mathcal{N}=1$ setups with fractional D3 branes at isolated singularities, an equally valid description of the same system. We expect that the phenomena which we studied in this paper appear generally in type IIB duals built out of any systems of D3 branes containing fractional D3 branes of the $\mathcal{N}=2$ kind. We provide an interesting instance of that in the appendix, for an orbifold singular locus with a different topology. \bigskip \bigskip \noindent {\bf \Large Acknowledgements} \medskip \noindent We are grateful to Matteo Bertolini and Cyril Closset for comments on the draft. This research was partly supported by a center of excellence supported by the Israel Science Foundation (grant No. 1468/06), the grant DIP H52 of the German Israel Project Cooperation, the BSF United States-Israel binational science foundation grant 2006157, and the German Israel Foundation (GIF) grant No. 962-94.7/2007.
\section{Introduction} \label{sec:intro} Polar codes \cite{arikan} are capacity-achieving linear block codes based on the polarization phenomenon, which makes bit channels either completely noisy or completely noiseless as the code length tends to infinity. While optimal at infinite code length, the error-correction performance of polar codes under successive cancellation (SC) decoding degrades at practical code lengths. Moreover, SC-based decoding algorithms are inherently sequential, which makes the decoding latency highly dependent on the code length. List decoding was proposed in \cite{tal_list} to improve SC performance for practical code lengths: the resulting SC-List (SCL) algorithm exhibits enhanced error-correction performance, at the cost of higher decoder latency and complexity. Product codes \cite{elias} are parallel concatenated codes often used in optical communication systems for their good error-correction performance and high throughput, thanks to their highly parallelizable decoding process. To exploit this feature, systematic polar codes have been concatenated with short block codes as well as LDPC codes \cite{pc_inner,BP_pc}. This concatenation allows the construction of very long product codes based on the polarization effect: to fully exploit the decoding parallelism, a large number of parallel decoders for the component codes need to be instantiated, leading to a high hardware cost. The authors of \cite{par_conc_sys_pol} propose using two systematic polar codes in the concatenation scheme in order to simplify the decoder structure. Soft cancellation (SCAN) \cite{SCAN_pc} and belief propagation (BP) \cite{BP_pc} can be used as soft-input / soft-output decoders for systematic polar codes, at the cost of increased decoding complexity compared to SC.
Recently, SCL decoding has been proposed as a valid alternative to SCAN and BP \cite{par_sys_list}, while the authors of \cite{KoikeAkinoIrregularPT} propose using irregular systematic polar codes to further increase the decoding throughput. In this paper, we show that the nature of polar codes inherently induces the construction of product codes that are not systematic. In particular, we show that the product of two polar codes is a polar code that can be designed and decoded as a product code. We propose a code construction approach and a low-complexity decoding algorithm that makes use of the observed dual interpretation of polar codes. Both analysis and simulations show that the proposed code construction and decoding approaches make it possible to combine high decoding speed and long code lengths, resulting in high-throughput and good error-correction performance suitable for optical communications. \section{Preliminaries} \label{sec:prel} \subsection{Polar Codes} \label{subsec:PC} Polar codes are linear block codes based on the polarization effect of the kernel matrix $T_2 = \left[\begin{smallmatrix} 1&0\\1&1 \end{smallmatrix}\right]$. A polar code of length $N=2^n$ and dimension $K$ is defined by the transformation matrix $T_N = T_2 ^{\otimes n}$, given by the $n$-fold Kronecker power of the polarization kernel, and a frozen set $\mathcal{F} \subset \{0,\dots,N-1\}$ composed of $N-K$ elements. The codeword $x = [x_0,x_1,\ldots,x_{N-1}]$ is calculated as \begin{equation} x = u \cdot T_N \text{,} \label{eq:polarGen} \end{equation} where the input vector $u = [u_0,u_1,\ldots,u_{N-1}]$ has the $N-K$ bits in the positions listed in $\mathcal{F}$ set to zero, while the remaining $K$ bits carry the information to be transmitted. The frozen set is usually designed to minimize the error probability under SC decoding, such that information bits are stored in the most reliable bits, defining the information set $\mathcal{I} = \mathcal{F}^C$. Reliabilities can be calculated in various ways, e.g.
via Monte Carlo simulation, by tracking the Bhattacharyya parameter, or by density evolution under a Gaussian approximation \cite{polar_const}. The generator matrix $G$ of a polar code is calculated from the transformation matrix $T_N$ by deleting the rows whose indices are listed in the frozen set. SC decoding \cite{arikan} can be interpreted as a depth-first binary tree search with priority given to the left branches. Each node of the tree receives from its parent a soft information vector, which gets processed and transmitted to the left and right child nodes. Bits are estimated at leaf nodes, and hard estimates are propagated from child to parent nodes. While optimal for infinite codes, SC decoding exhibits mediocre performance for short codes. SCL decoding \cite{tal_list} maintains $L$ parallel codeword candidates, improving the decoding performance of polar codes for moderate code lengths. The error-correction performance of SCL can be further improved by concatenating the polar code with a cyclic redundancy check (CRC), which helps in the selection of the final candidate. \subsection{Product Codes} \label{subsec:prod} Product codes were introduced in \cite{elias} as a simple and efficient way to build very long codes on the basis of two or more short block component codes. Although it is not necessary, component codes are usually systematic in order to simplify the encoding. In general, given two systematic linear block codes $\mathcal{C}_r$ and $\mathcal{C}_c$ with parameters $(N_r,K_r)$ and $(N_c,K_c)$ respectively, the product code $\mathcal{P} = \mathcal{C}_c \times \mathcal{C}_r$ of length $N = N_r N_c$ and dimension $K = K_r K_c$ is obtained as follows. The $K$ information bits are arranged in a $K_c \times K_r$ matrix $U$, then code $\mathcal{C}_r$ is used to encode the $K_c$ rows independently. Afterwards, the $N_r$ columns obtained in the previous step are encoded independently using code $\mathcal{C}_c$.
The result is a $N_c \times N_r$ codeword matrix $X$, where rows are codewords of code $\mathcal{C}_r$ and columns are codewords of code $\mathcal{C}_c$, calculated as \begin{equation} X = G_c^T \cdot U \cdot G_r, \end{equation} where $G_r$ and $G_c$ are the generator matrices of codes $\mathcal{C}_r$ and $\mathcal{C}_c$ respectively. Alternatively, the generator matrix of $\mathcal{P}$ can be obtained taking the Kronecker product of the generator matrices of the two component codes as $G = G_c \otimes G_r$ \cite{mcw_sloane}. Product codes can be decoded by sequentially decoding rows and column component codes, and exchanging information between the two phases. Soft-input/soft-output algorithms are used to improve the decoding performance by iterating the decoding of rows and columns and exchanging soft information between the two decoders \cite{block_turbo}. Since no information is directly exchanged among rows (columns), the decoding of all row (column) component codes can be performed concurrently. \section{Product Polar Codes Design} \label{sec:prodPol} \begin{figure} \centering \includegraphics[width=0.45\textwidth]{figures/prod_pol_des.eps} \caption{Input matrix $U$ for a product polar code.} \label{fig:prod_pol_des} \end{figure} Product codes based on polar codes have been proposed in literature, using systematic polar codes as one of the two component codes or as both. However, the peculiar structure of polar codes has never been exploited in the construction of the product code. Both polar and product codes are defined through the Kronecker product of short and simple blocks, that are used to construct longer and more powerful codes. In the following, we prove that the product of two non-systematic polar codes is still a polar code, having a peculiar frozen set obtained on the basis of the component polar codes. This design can be extended to multi-dimensional product codes. 
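As a quick sanity check of the equivalence between the two-stage encoding $X = G_c^T \cdot U \cdot G_r$ and the Kronecker-product generator $G = G_c \otimes G_r$, the following sketch verifies both descriptions over GF(2) with numpy. The component codes are toy systematic codes chosen purely for illustration (a $(2,1)$ repetition code and a $(3,2)$ single parity-check code), not codes used in this paper:

```python
import numpy as np

def row_vec(M):
    # Juxtapose the rows of M head-to-tail into a row vector.
    return M.reshape(-1)

# Toy systematic component codes (illustrative choices only):
# C_r: (3,2) single parity-check code, C_c: (2,1) repetition code.
G_r = np.array([[1, 0, 1],
                [0, 1, 1]])   # (K_r x N_r) generator of C_r
G_c = np.array([[1, 1]])      # (K_c x N_c) generator of C_c

U = np.array([[1, 0]])        # K_c x K_r information matrix

# Two-stage product encoding: X = G_c^T . U . G_r (mod 2).
X = (G_c.T @ U @ G_r) % 2

# Single-stage encoding with the Kronecker generator G = G_c (x) G_r,
# acting on the row-major vectorization of U.
G = np.kron(G_c, G_r)
x = (row_vec(U) @ G) % 2

assert np.array_equal(row_vec(X), x)
```

The final assertion holds because row-major vectorization turns the two-stage matrix encoding into a single vector-matrix product with the Kronecker generator, which is the identity proved as Lemma~\ref{prop:row} below.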
Let us define two polar codes $\mathcal{C}_r$ and $\mathcal{C}_c$ of parameters $(N_r,K_r)$ and $(N_c,K_c)$ with transformation matrices $T_{N_r}$ and $T_{N_c}$ respectively, where $N_c = 2^{n_c}$ and $N_r = 2^{n_r}$, and $\mathcal{F}_r$ and $\mathcal{F}_c$ are the respective frozen sets. The product polar code $\mathcal{P} = \mathcal{C}_c \times \mathcal{C}_r$ is generated as follows. An $N_c \times N_r$ input matrix $U$ is generated having zeros in the columns listed in $\mathcal{F}_r$ and in the rows listed in $\mathcal{F}_c$ as depicted in Figure~\ref{fig:prod_pol_des}. Input bits are stored in the remaining $K_r K_c$ entries of $U$, row first, starting from the top left entry. Encoding is performed as for product codes: the rows of $U$ are encoded independently using polar code $\mathcal{C}_r$, namely through matrix multiplication by the transformation matrix $T_{N_r}$, obtaining matrix $U_r$. Then, the columns of $U_r$ are encoded independently using $\mathcal{C}_c$. The encoding order can be inverted, performing column encoding first and row encoding next, without changing the result. The resulting codeword matrix $X$ can be expressed as \begin{equation} X = T_{N_c}^T \cdot U \cdot T_{N_r}. \end{equation} In order to show that this procedure creates a polar code, let us vectorize the input and codeword matrices $U$ and $X$, converting them into row vectors $u$ and $x$. This operation is performed by the linear transformation $\row(\cdot)$, which converts a matrix into a row vector by juxtaposing its rows head-to-tail. This transformation is similar to the classical vectorization function $\vect(\cdot)$, which converts a matrix into a column vector by juxtaposing its columns head-to-tail. However, before proving our claim, we need to extend a classical result for the $\vect(\cdot)$ function to the $\row(\cdot)$ function.
\begin{lemma} \label{prop:row} Given three matrices $A$, $B$, $C$ such that $A \cdot B \cdot C$ is defined, then \begin{equation} \row(A \cdot B \cdot C) = \row(B) \cdot (A^T \otimes C). \end{equation} \begin{proof} The compatibility of vectorization with the Kronecker product is well known, and is used to express matrix multiplication $A \cdot B \cdot C$ as a linear transformation $\vect(A \cdot B \cdot C) = (C^T \otimes A) \cdot \vect(B)$. Moreover, by construction we have that $\vect(A^T) = (\row(A))^T$. As a consequence, \begin{align*} \row(A \cdot B \cdot C) & = (\vect((A \cdot B \cdot C)^T))^T \\ & = (\vect(C^T \cdot B ^T\cdot A^T))^T \\ & = ((A \otimes C^T) \cdot \vect(B^T))^T \\ & = (\vect(B^T))^T \cdot (A \otimes C^T)^T \\ & = \row(B) \cdot (A^T \otimes C). \end{align*} \end{proof} \end{lemma} Equipped with Lemma~\ref{prop:row} we can now prove the following proposition: \begin{proposition} \label{prop:frozen} The $(N,K)$ product code $\mathcal{P}$ defined by the product of two non-systematic polar codes as $\mathcal{P} = \mathcal{C}_c \times \mathcal{C}_r$ is a non-systematic polar code having transformation matrix $T_N = T_{N_c} \otimes T_{N_r}$ and frozen set \begin{equation} \label{eq:frozen} \mathcal{F} = \arg \min (i_c \otimes i_r) , \end{equation} where $i_r$ ($i_c$) is a vector of length $N_r$ ($N_c$) having zeros in the positions listed in $\mathcal{F}_r$ ($\mathcal{F}_c$) and ones elsewhere. \begin{proof} To prove the proposition we have to show that $x = \row(X)$ is the codeword of a polar code, providing its frozen set and transformation matrix. If $u = \row(U)$, Lemma~\ref{prop:row} shows that \begin{align*} x & = \row(X) \\ & = \row(T_{N_c}^T \cdot U \cdot T_{N_r}) \\ & = \row(U) \cdot (T_{N_c} \otimes T_{N_r}) \\ & = u \cdot T_N. 
\end{align*} By construction, the input vector $u$ has zero entries in positions imposed by the structure of the input matrix $U$, and (\ref{eq:frozen}) follows from the definition of $U$; with a little abuse of notation, we use the $\arg \min$ function to return the set of the indices of vector $i = i_c \otimes i_r$ for which the entry is zero. Finally, $T_N = T_{N_c} \otimes T_{N_r} = T_2^{\otimes(n_c + n_r)}$ is the transformation matrix of a polar code of length $N = 2^{n_c + n_r}$. \end{proof} \end{proposition} Proposition~\ref{prop:frozen} shows how to design a product polar code on the basis of the two component polar codes. The resulting product polar code $\mathcal{P}$ has parameters $(N,K)$, with $N = N_r N_c$ and $K = K_r K_c$, and frozen set $\mathcal{F}$ designed according to \eqref{eq:frozen}. The encoding of $\mathcal{P}$ can be performed in $O(\log N)$ steps exploiting the structure of $T_N$. The sub-vectors $x_r^i$ and $x_c^j$ corresponding to the $i$-th row and the $j$-th column of $X$ represent codewords of polar codes $\mathcal{C}_r$ and $\mathcal{C}_c$ respectively. It is worth noticing that the frozen set identified for the product polar code is suboptimal, w.r.t. SC decoding, compared to the one calculated for a polar code of length $N$. On the other hand, this frozen set allows the construction of a polar code as the product of two shorter polar codes, which can be exploited at decoding time to reduce the decoding latency, as shown in Section~\ref{subsec:lat}. We also conjecture that it is possible to invert the product polar code construction, decomposing a polar code as the product of two or more shorter polar codes.
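The frozen-set rule \eqref{eq:frozen} is straightforward to compute in practice. A minimal sketch in Python/numpy (the helper name \texttt{product\_frozen\_set} is ours); the numbers reproduce the $(16,6)$ product polar code discussed next:

```python
import numpy as np

def product_frozen_set(F_r, N_r, F_c, N_c):
    # Indicator vectors: 1 on information positions, 0 on frozen ones.
    i_r = np.ones(N_r, dtype=int)
    i_r[list(F_r)] = 0
    i_c = np.ones(N_c, dtype=int)
    i_c[list(F_c)] = 0
    # Per Proposition 2, P is frozen exactly where i_c (x) i_r vanishes.
    return sorted(np.flatnonzero(np.kron(i_c, i_r) == 0).tolist())

# (4,3) row code with F_r = {0}, (4,2) column code with F_c = {0,1}.
F = product_frozen_set({0}, 4, {0, 1}, 4)
print(F)  # -> [0, 1, 2, 3, 4, 5, 6, 7, 8, 12]
```

Note that the result is a valid frozen set for a length-16 polar code, but, as remarked above, not the SC-optimal one.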
\begin{figure} \centering \includegraphics[width=0.25\textwidth]{figures/prod_pol_ex.eps} \caption{Example of product polar code design and encoding.} \label{fig:prod_pol_ex} \end{figure} Figure~\ref{fig:prod_pol_ex} shows the encoding of a product polar code generated by a $(4,2)$ polar code with frozen set $\mathcal{F}_c = \{ 0,1 \}$ as column code $\mathcal{C}_c$ and a $(4,3)$ polar code with frozen set $\mathcal{F}_r = \{ 0 \}$ as row code $\mathcal{C}_r$. This defines a product polar code $\mathcal{P}$ with $N = 16$ and $K = 6$. According to Proposition~\ref{prop:frozen}, its frozen set can be calculated through the Kronecker product of the auxiliary vectors $i_c = [0,0,1,1]$ and $i_r = [0,1,1,1]$, obtaining $\mathcal{F} = \{ 0,1,2,3,4,5,6,7,8,12 \}$. We recall that the optimal frozen set for a $(16,6)$ polar code is given by $\mathcal{F}' = \{ 0,1,2,3,4,5,6,8,9,10 \}$. \section{Low-latency Decoding of Product Polar Codes} \label{sec:dec} \begin{figure} \centering \includegraphics[width=0.45\textwidth]{figures/err_patt.eps} \caption{Example of overlapping of $X_r$ and $X_c$. Red squares represent mismatches, blue lines represent wrong estimations identified by Algorithm \ref{alg:find_err}.} \label{fig:err_patt} \end{figure} In this Section, we present a two-step, low-complexity decoding scheme for the proposed polar product codes construction, based on the dual nature of these codes. We propose to initially decode the code as a product code (step 1), and in case of failure to perform SC decoding on the full polar code (step 2). The product code decoding algorithm of step 1 exploits the soft-input / hard-output nature of SC decoding to obtain a low complexity decoder for long codes. We then analyze the complexity and expected latency of the presented decoding approach. \subsection{Two-Step Decoding} \label{subsec:dec} The first decoding step considers the polar code as a product code. 
Vector $y$ containing the log-likelihood ratios (LLRs) of the $N$ received bits is rearranged into the $N_c \times N_r$ matrix $Y$. Every row is considered as a noisy $\mathcal{C}_r$ polar codeword, and decoded independently through SC to estimate vector $\hat{u}_r$. Each $\hat{u}_r$ is re-encoded, obtaining $\hat{x}_r=\hat{u}_r\cdot T_{N_r}$: the $N_r$-bit vectors $\hat{x}_r$ are then stored as rows of matrix $X_r$. The same procedure is applied to the columns of $Y$, obtaining vectors $\hat{x}_c=\hat{u}_c\cdot T_{N_c}$, which are in turn stored as columns of matrix $X_c$. In case $X_r = X_c$, decoding is considered successful; the estimated input vector $\hat{u}$ of code $\mathcal{P}$ can thus be derived by inverting the encoding operation, i.e. by encoding vector $\hat{x}=\row(X_r)$, since $T_N$ is involutory. In case $X_r \neq X_c$, it is possible to identify incorrect estimations by overlapping $X_r$ and $X_c$ and observing the pattern of mismatches. Mismatches are usually grouped in strings, as shown in Figure~\ref{fig:err_patt}, where mismatches are represented by red squares. \begin{algorithm}[t!]
\caption{FindErroneousEstimations} \label{alg:find_err} \begin{algorithmic}[1] \STATE Initialize $\text{ErrRows} = \text{ErrCols} = \emptyset$ \STATE $X_d = X_r \oplus X_c$ \STATE $\text{NumErrRows} = \text{SumRows}(X_d) $ \STATE $\text{NumErrCols} = \text{SumCols}(X_d) $ \WHILE{$\text{NumErrRows} + \text{NumErrCols} > 0$} \STATE $e_r=\text{arg max(NumErrRows)}$ \STATE $e_c=\text{arg max(NumErrCols)}$ \IF{$\text{max(NumErrRows)} > \text{max(NumErrCols)}$} \STATE $\text{ErrRows} = \text{ErrRows} \cup \{e_r\}$ \STATE $X_d(e_r,:) = 0$ \ELSE \STATE $\text{ErrCols} = \text{ErrCols} \cup \{e_c\}$ \STATE $X_d(:,e_c) = 0$ \ENDIF \STATE $\text{NumErrRows} = \text{SumRows}(X_d) $ \STATE $\text{NumErrCols} = \text{SumCols}(X_d) $ \ENDWHILE \RETURN ErrRows, ErrCols \end{algorithmic} \end{algorithm} While mismatch patterns are easy to analyze by visual inspection, recognizing an erroneous row or column algorithmically is less straightforward. We propose the greedy Algorithm~\ref{alg:find_err} to accomplish this task. The number of mismatches in each row and column is counted, flagging as incorrect the one with the highest count. Next, its contribution is subtracted from the mismatch count of the connected rows or columns, and another incorrect one is identified. The process is repeated until all mismatches belong to incorrect rows or columns, whose lists are stored in $\text{ErrRows}$ and $\text{ErrCols}$. An example of this identification process is represented by the blue lines in Figure~\ref{fig:err_patt}. Incorrect rows can be rectified using correct columns and vice-versa, but intersections of wrong rows and columns cannot. In order to correct these errors, we propose to treat the intersection points as erasures. As an example, in a row, crossing points with incorrect columns have their LLR set to 0, while intersections with correct columns set the LLR to $+\infty$ if the corresponding bit in $X_c$ has been decoded as $0$, and to $-\infty$ if the bit is $1$.
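For concreteness, Algorithm~\ref{alg:find_err} can be transcribed directly into code (a sketch; the matrix conventions and the function name are ours):

```python
import numpy as np

def find_erroneous_estimations(X_r, X_c):
    """Greedy identification of wrong row/column estimates
    from the mismatch pattern X_d = X_r XOR X_c."""
    X_d = np.bitwise_xor(X_r, X_c)
    err_rows, err_cols = set(), set()
    num_err_rows = X_d.sum(axis=1)  # mismatches per row
    num_err_cols = X_d.sum(axis=0)  # mismatches per column
    while num_err_rows.sum() + num_err_cols.sum() > 0:
        e_r = int(num_err_rows.argmax())
        e_c = int(num_err_cols.argmax())
        # Flag the row or column with the largest mismatch count...
        if num_err_rows[e_r] > num_err_cols[e_c]:
            err_rows.add(e_r)
            X_d[e_r, :] = 0  # ...and subtract its contribution.
        else:
            err_cols.add(e_c)
            X_d[:, e_c] = 0
        num_err_rows = X_d.sum(axis=1)
        num_err_cols = X_d.sum(axis=0)
    return err_rows, err_cols
```

For instance, flipping one entire row of $X_r$ and one entire column of $X_c$ produces the cross-shaped mismatch pattern typical of Figure~\ref{fig:err_patt}, and the function flags exactly that row and that column.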
The rows and columns flagged as incorrect are then re-decoded, obtaining updated $X_r$ and $X_c$. This procedure is iterated up to $t$ times, or until $X_r = X_c$. In case $X_r \neq X_c$ after $t$ iterations, the first step returns a failure. In this case, the second step of the algorithm is performed, namely the received vector $y$ is decoded directly, considering the complete length-$N$ polar code $\mathcal{P}$. The proposed two-step decoding approach is summarized in Algorithm~\ref{alg:prod_pol}. Any polar code decoder can be used at lines 3, 4, and 16. However, since a soft output is not necessary, and the decoding process can be parallelized, simple, sequential and non-iterative SC-based algorithms can be used instead of the more complex BP and SCAN. \begin{algorithm}[t!] \caption{TwoStepDecoding} \label{alg:prod_pol} \begin{algorithmic}[1] \STATE Initialize $Y_r = Y_c = Y $ \FOR{$w = 1 \dots t$} \STATE $X_r = \text{DecodeRows}(Y_r)$ \STATE $X_c = \text{DecodeCols}(Y_c)$ \IF{$X_r == X_c$} \STATE $\hat{x} = \row(X_r)$ \RETURN $\hat{u} = \text{PolarEncoding}(\hat{x})$ \ELSE \STATE FindErroneousEstimations \ENDIF \STATE $Y_r = (-2 X_c + 1) \cdot \infty$ \STATE $Y_c = (-2 X_r + 1) \cdot \infty$ \STATE $Y_r(:,\text{ErrCols}) = 0$ \STATE $Y_c(\text{ErrRows},:) = 0$ \ENDFOR \RETURN $\hat{u} = \text{Decode}(y)$ \end{algorithmic} \end{algorithm} \subsection{Decoding Latency and Complexity} \label{subsec:lat} The proposed two-step decoding of product polar codes makes it possible to split the polar decoding process into $N_r+N_c$ shorter, independent decoding processes, whose hard decisions are compared and combined, resorting to the decoding of the long polar code only in case of failure. Let us define as $\Delta_N$ the number of time steps required by an SC-based algorithm to decode a polar code of length $N$.
For the purpose of latency analysis, we suppose the decoder to have unlimited computational resources, allowing a fully parallel implementation of decoding algorithms. With Algorithm~\ref{alg:prod_pol}, the expected number of time steps of the proposed two-step decoder for a code of length $N = N_c N_r$ is given by \begin{equation} \label{eq:time} \Delta^{\rm P}_N = t_{avg} \Delta_{\max(N_r,N_c)} + \gamma \Delta_N~, \end{equation} where $t_{avg}\le t$ is the average number of iterations, and the term $\max(N_r,N_c)$ reflects the assumption that the row and column component codes are decoded at the same time. The parameter $\gamma$ is the fraction of decoding attempts in which the second decoding step was performed. It can be seen that as long as $\gamma\approx 0$ and $t_{avg}\ll N/\max(N_r,N_c)$, $\Delta^{\rm P}_N$ is substantially smaller than $\Delta_N$. The structure of parallel and partially-parallel SC-based decoders is based on a number of processing elements performing LLR and hard-decision updates, and on dedicated memory structures storing final and intermediate values. Given the recursive structure of polar codes, decoders for shorter codes are naturally nested within decoders for longer codes, and the main difference between long- and short-code decoders is the amount of memory used. Thus, not only can a high degree of resource sharing be expected between the first and second decoding steps, but the parallelism available during the first step also implies that the same hardware can be reused in the second step with minor overhead. \section{Performance Results} \label{sec:res} The dual nature of product polar codes can bring a substantial speedup to decoding; conversely, given a time constraint, longer codes can be decoded, leading to improved error-correction performance. In this Section, we present decoding speed and error-correction performance analysis, along with simulation results.
We assume an additive white Gaussian noise (AWGN) channel with binary phase-shift keying (BPSK) modulation, and assume that the two component codes have the same parameters, i.e. $N_r = N_c$ and $K_r = K_c$. \begin{figure} \centering \includegraphics[width=0.47\textwidth]{figures/ECP1.eps} \caption{BER comparison for SC and P-SC, for codes of rate $R=(7/8)^2$.} \label{fig:ECP-SC} \end{figure} \subsection{Error-Correction Performance} \label{subsec:ECP} As explained in Section~\ref{sec:prodPol}, the frozen set identified for the code of length $N$ is suboptimal for product decoding of polar codes, which relies on the frozen sets seen by the component codes. On the other hand, a frozen set that helps product decoding leads to error-correction performance degradation when standard polar code decoding is applied. Figure~\ref{fig:ECP-SC} portrays the bit error rate (BER) for codes of length $N = 512^2 = 262144$ and $N = 32^2 = 1024$ with rate $R = (7/8)^2$ under P-SC decoding, i.e. the proposed two-step decoding with SC as the component decoder and parameter $t=4$. As a reference, Figure~\ref{fig:ECP-SC} also displays curves obtained with SC decoding of polar codes of length $N=1024$ and $N=2048$, with the same rate $R=(7/8)^2$, designed according to \cite{arikan}. As expected due to the suboptimality of the frozen set, P-SC degrades the error-correction performance with respect to standard SC decoding when codes of the same length $N$ are compared. However, the speedup achieved by P-SC over standard SC makes it possible to decode longer codes within the same time constraint: consequently, we compare codes with similar decoding latency. SC decoding of $N=2048$ and $N=1024$ codes has a decoding latency similar to that of a conservative estimate for P-SC decoding of the $N=512^2$ code. The steeper slope imposed by the longer code can thus be exploited within the same time frame as the shorter codes: the BER curves are shown to cross at around $\text{BER} \simeq 10^{-7}$.
\begin{figure} \centering \includegraphics[width=0.47\textwidth]{figures/ECP2.eps} \caption{BER comparison for SCL and P-SCL, for codes of rate $R=(7/8)^2$ and $L=8$.} \label{fig:ECP-SCL} \end{figure} Figure~\ref{fig:ECP-SCL} depicts the BER curves for the same codes, obtained through SCL and P-SCL decoding with a list size $L=8$, and no CRC. The more powerful SCL algorithm leads to an earlier waterfall region for all codes, with a slope slightly gentler than that of SC. The P-SCL curve crosses the SCL ones around similar BER points as in Figure~\ref{fig:ECP-SC}, but at lower $E_b/N_0$. \subsection{Decoding Latency} \label{subsec:speed} To begin with, we study the evolution of the parameters $\gamma$ and $t_{avg}$ in \eqref{eq:time} under SC decoding. Figure~\ref{fig:Gamma-SC} depicts the value of $\gamma$ measured at different $E_b/N_0$, for various code lengths and rates. The codes have been decoded with the proposed two-step decoding approach, considering $t = 4$ maximum iterations. As $E_b/N_0$ increases, the number of times SC is activated rapidly decreases towards $0$, with $\gamma<10^{-3}$ at a BER orders of magnitude higher than the working point for optical communications, which is the target scenario for the proposed construction. Simulations have shown that the slope with which $\gamma$ tends to $0$ changes depending on the value of $t$; as $t$ increases, so does the steepness of the $\gamma$ curve. Regardless of $t$, $\gamma$ tends to $0$ as the channel conditions improve. \begin{figure} \centering \includegraphics[width=0.47\textwidth]{figures/gamma.eps} \caption{Evolution of $\gamma$ with codes of different length and rate, SC component decoding, $t=4$, $N_r=N_c$, $R_r=R_c$.} \label{fig:Gamma-SC} \end{figure} The first decoding step is stopped as soon as $X_r=X_c$, or if the maximum number of iterations $t$ has been reached. 
Through simulation, we have observed that the average number of iterations $t_{avg}$ follows a behavior similar to that of $\gamma$, and tends to $1$ as $E_b/N_0$ increases. It is worth noting that similar considerations apply when a decoding algorithm other than SC is used, as long as the same decoder is applied to the component codes and to the length-$N$ code. The trends observed with SC for $\gamma$ and $t_{avg}$ are found with P-SCL as well, and we can safely assume that similar observations can be made with other SC-based decoding algorithms. Table \ref{tab:speed} reports $\Delta_N$ required by standard SC and SCL decoders, as well as for the proposed two-step decoders P-SC and P-SCL, at different code lengths and rates. Assuming no restrictions on available resources, the number of time steps required by SC decoding is $\Delta^{\rm SC}_N=2N-2$, which becomes $\Delta^{\rm SCL}_N=2N+K-2$ for SCL decoding \cite{hashemi_FSSCL_TSP}. For P-SC and P-SCL, $\Delta^{\rm P}_N$ is evaluated in the worst case (WC), which assumes $t_{avg}=t$ and $\gamma=1$, and in the best case (BC), which assumes $t_{avg}=1$ and $\gamma=0$. Simulation results show that $\Delta^{\rm P}_N$ tends to the asymptotic limit represented by the BC decoding latency as the BER goes towards the optical-communication working point. As an example, for $N = 512^2 = 262144$, $K = 448^2 = 200704$ with P-SC, at $\text{BER} \simeq 2.5 \cdot 10^{-7}$, i.e. approximately eight orders of magnitude higher than the common target for optical communications, $\gamma \approx 6 \cdot 10^{-3}$ and $t_{avg}=1.1$, leading to $\Delta_N^{\rm P-SC}=5967$. This value is equivalent to $1.1\%$ of the standard decoding time $\Delta_N^{\rm SC}$, while the BC latency is $0.2\%$ of $\Delta_N^{\rm SC}$. At $\text{BER} \simeq 10^{-15}$, it is safe to assume that the actual decoding latency is almost equal to BC.
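The WC and BC entries of Table~\ref{tab:speed} follow directly from \eqref{eq:time} together with $\Delta^{\rm SC}_N=2N-2$ and $\Delta^{\rm SCL}_N=2N+K-2$; the following sketch (assuming square component codes, $N_r=N_c$, $K_r=K_c$) reproduces a few of them:

```python
def steps_sc(n):
    """Time steps of fully parallel SC decoding: 2N - 2."""
    return 2 * n - 2

def steps_scl(n, k):
    """Time steps of SCL decoding: 2N + K - 2."""
    return 2 * n + k - 2

def two_step_latency(comp_steps, full_steps, t_avg, gamma):
    """Expected two-step latency: t_avg * Delta_component + gamma * Delta_N."""
    return t_avg * comp_steps + gamma * full_steps

# Product code (262144, 200704) built from two (512, 448) component codes.
n_r = 512
N = n_r ** 2

# Worst case: t_avg = t = 4, gamma = 1; best case: t_avg = 1, gamma = 0.
wc = two_step_latency(steps_sc(n_r), steps_sc(N), 4, 1)
bc = two_step_latency(steps_sc(n_r), steps_sc(N), 1, 0)
print(wc, bc)  # 528374 1022
```

The same formulas with `steps_scl` in place of `steps_sc` recover the P-SCL columns of the table.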
\begin{table} \centering \scriptsize \caption{Time step analysis for standard and two-step decoding.} \label{tab:speed} \setlength{\extrarowheight}{1.5pt} \begin{tabular}{c||c|cc||c|cc} Code & \multirow{2}{*}{$\Delta^{\rm SC}_N$} & \multicolumn{2}{c||}{$\Delta^{\rm P-SC}_N$} & \multirow{2}{*}{$\Delta^{\rm SCL}_N$} & \multicolumn{2}{c}{$\Delta^{\rm P-SCL}_N$} \\ $N$,$K$ & & WC & BC & & WC & BC \\ \hline \hline $1024, 784$ &2046 & 2294 & 62 &2830 & 3190& 90 \\ $1024, 841$ &2046 & 2294 & 62 &2876 & 3240& 91 \\ $4096, 3136$ &8190 & 8694 & 126 &11326 & 12054& 182\\ $4096, 3249$ &8190 & 8694 &126 &11508 & 12244& 184\\ $16384, 12544$ &32766 & 33782 & 254&45310 & 46774& 366\\ $16384, 13225$ &32766 & 33782 &254 &46038 & 47518& 370\\ $65536, 50176$ &131070 & 133110 &510 &181246 & 184182& 734\\ $65536, 52900$ &131070 & 133110 &510 &184155 & 187119& 741\\ $262144, 200704$ &524286 & 528374 & 1022 &724990 & 730870& 1470\\ $262144, 211600$ &524286 & 528374 & 1022 &736623 & 742555& 1483\\ \end{tabular} \end{table} \section{Conclusion} \label{sec:conc} In this paper, we have shown that the product of two non-systematic polar codes results in a polar code whose transformation matrix and frozen set can be inferred from the component polar codes. We have then proposed a code construction and decoding approach that exploit the dual nature of the resulting product polar code. The resulting code is decoded first as a product code, obtaining a substantial latency reduction, while standard polar decoding is used as post-processing in case of failures. Performance analysis and simulations show that, thanks to the high throughput of the proposed decoding approach, very long codes can be targeted, granting good error-correction performance suitable for optical communications. Future work will address the inversion of the proposed product polar code construction, namely rewriting a polar code as the product of shorter polar codes. \bibliographystyle{IEEEbib}
\subsection{Proof of Theorem} The proof proceeds in two steps. We first derive the $L_2$ rate of convergence for the bias of the naive estimator and then we derive that of the variance. The approach of the proof closely follows the proof for the $L_2$ rate of convergence of the mean regression function itself found in \cite{xiao2019}. We start by defining some terms to simplify the notation. \\ Let $G_{n, q} = \frac{\bt\bb}{n}$ and $H_n = G_{n, q} + \lambdan \Pm$. To ease exposition, we follow \cite{zhou2000yhh} and write $\fhp$ as $$\fhp = \bqq D^{(r)}(\gn+\lambdan\Pm)^{-1}\bt\bm{y}/n$$ where $\bqq \in \mathbb{R}^{K+q-r}$ is a vector of B-Spline basis functions of order $q-r$ and $\dr$ is defined as $\dr = M^T_r\times M^T_{r-1}\times \dots \times M^T_1$ with $$ M_l = (q-1)\begin{bmatrix} \frac{-1}{t_1-t_{1-q+l}} & 0 & 0 & \dots & 0 \\ \frac{1}{t_1-t_{1-q+l}} & \frac{-1}{t_2-t_{2-q+l}} & 0 & \dots & 0 \\ 0 & \frac{1}{t_2-t_{2-q+l}} & \frac{-1}{t_3-t_{3-q+l}} & \dots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots\\ 0 & 0 & 0 & \dots & \frac{1}{t_{K+q-l}-t_K} \\ \end{bmatrix} $$ for $1\le l\le r\le q-2$. Let $\bpq=\bqq D^{(r)}$, implying $$ \hat{f}^{(r)}(x) = \bpq\left(G_{n, q}+\lambdan \Pm\right)^{-1}\bt\bm{y}/n$$ We use the identity $(A+B)^{-1} = A^{-1} - A^{-1}B(A+B)^{-1}$ to expand the inverse term in the estimator. This later allows us to split the bias term into the part due to approximating $\fp$ with a spline (approximation bias) and the other part due to penalization (shrinkage bias). 
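Before proceeding, the inverse identity used above can be verified numerically (an illustrative sketch; random symmetric positive definite matrices stand in for $G_{n,q}$ and $\lambdan\Pm$):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

# A plays the role of G_{n,q} (SPD), B of lambda_n * P_m (symmetric PSD).
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)
L = rng.standard_normal((n, n))
B = L @ L.T

# Check (A + B)^{-1} = A^{-1} - A^{-1} B (A + B)^{-1}.
lhs = np.linalg.inv(A + B)
rhs = np.linalg.inv(A) - np.linalg.inv(A) @ B @ np.linalg.inv(A + B)
print(np.allclose(lhs, rhs))  # True
```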
\begin{eqnarray*} (G_{n, q}+\lambdan \Pm)^{-1} &=& G_{n,q}^{-1}-G_{n,q}^{-1}(\lambdan \Pm)H_n^{-1} \\ &=& G_{n,q}^{-1}-H_n^{-1}(\lambdan \Pm)G_{n,q}^{-1} \end{eqnarray*} where the last equality is by symmetry.\\ Substituting into $\fph$, we have: \begin{eqnarray*} \hat{f}^{(r)}(x) &=& \bpq\left(\Gni - \Hni(\lambdan \Pm)\Gni\right)\Bt\y/n\\ &=& \bpq\Gni\Bt\y/n - \bpq\Hni\lambdaP\Gni\Bt\y/n \end{eqnarray*} We now focus on the bias of the naive estimator, $\efph - \fp$. From Lemma \ref{lm1}, there exists $s_f\in \Sqt$, the space of spline functions of order $q$ defined on knots $\underline{\mathbf{t}}$, such that $||f^{(r)} - s_f^{(r)}|| = O(h^{q-r}) + o(h^{p-r})$. The bias of $\fhp$ can be written as: \begin{equation} \label{eqn:bias} \efph - \fp = \left[E\left(\fph\right) - \sfp\right] + \left[\sfp - \fp\right] \end{equation} Equation \eqref{eqn:bias} allows us to separately evaluate the approximation bias and the shrinkage bias in estimating $\fp$. Lemma \ref{lm1} provides the rate of convergence of the second term in \eqref{eqn:bias}; we next express the first term in a form that isolates the effect of penalization on the bias. Substituting the previously derived expression for $\fph$ into the first term, we have \begin{eqnarray*} E\fph - \sfp &=& \bpq\Gni\Bt\f/n - \sfp \\ && - \bpq\Hni\lambdaP\Gni\Bt\f/n \\ &=& \bpq\gammab - \sfp - \bpq\hni\lambdap\gammab \end{eqnarray*} where $\gammab = \Gni\Bt\f/n$ and $\f = E\left[\y\right]$.
But $\sfp = \bpq\Gni\Bt\sfb/n$, where $\sfb = \left\{s_f(x_1), s_f(x_2), \dots, s_f(x_n)\right\}$. Therefore, \begin{eqnarray} \label{eqn:pen_bias} E\fph - \sfp &=& \bpq\gammab - \bpq\Hni\lambdaP\gammab \nonumber \\ && - \bpq\Gni\Bt\sfb/n \nonumber\\ &=& \bpq\Gni\Bt\f/n - \bpq\Gni\Bt\sfb/n \nonumber\\ && - \bpq\Hni\lambdaP\gammab \nonumber\\ &=& \bpq\Gni\Bt(\f-\sfb)/n - \bpq\Hni\lambdaP\gammab \nonumber\\ &=& \bpq\Gni\alphab - \bpq\hni\lambdaP\gammab \end{eqnarray} where $\alphab = \Bt(\f-\sfb)/n$. Let $Q(x)$ be a distribution of $x$ on $\left[0, 1\right]$ with positive continuous density $q(x)$. Then, substituting \eqref{eqn:pen_bias} into \eqref{eqn:bias} and using the triangle inequality (up to a multiplicative constant), we can evaluate the squared bias of $\fhp$ as: \begin{eqnarray} \label{eqn:bias_l2} \int_0^1 \left(\efph - \fp\right)^2 q(x)dx &\le& \int_0^1\left(\sfp - \fp\right)^2q(x)dx \nonumber\\ && + \alphabt\gni\gp\gni\alphab\\ && + \gammabt\lambdap\hni\gp\hni\lambdap\gammab \nonumber \end{eqnarray} where $\gp=\int_0^1 \bpqt\bpq q(x)dx$. The first and second terms in \eqref{eqn:bias_l2} represent the part of the bias due to using spline functions to estimate $\fp$, and the last term represents the part of the bias due to penalization. Observe that, by Lemma \ref{lm1}, \begin{eqnarray*} \int_0^1\left(\sfp - \fp\right)^2q(x)dx &\le& q_{\max} \int_0^1 \left(\sfp - \fp\right)^2dx \\ &=& O\left(h^{2(q-r)}\right) + o\left(h^{2(p-r)}\right) \end{eqnarray*} where $q_{\max} = \displaystyle \max_{0\le x\le 1}q(x) < \infty$. For the second term in \eqref{eqn:bias_l2}, we use the result $||\gp||_\infty = O(h^{-2r})$ from Lemma \ref{lm2}. We also use $||\gni||_\infty=O(h^{-1})$ from Lemma \ref{lm3}, and Lemma 6.10 of \cite{agarwal1980}, which gives $||\alphab||_{\max}=o(h^{p+1})$. Let $\gph$ be a square, symmetric matrix such that $\gp=\gph\gph$.
We write \begin{eqnarray*} \alphabt\gni\gp\gni\alphab &=& \left(\gph\gni\alphab\right)^T\left(\gph\gni\alphab\right)\\ &=& ||\gph\gni\alphab||_2^2 \\ &\le& ||\alphab||_2^2||\gph\gni||_2^2\\ &\le& ||\alphab||_2^2||\gph||_2^2||\gni||_2^2\\ &=& o(h^{2p+2})O(h^{-2r})O(h^{-2}) \\ &=& o(h^{2p-2r}) \end{eqnarray*} Next, we focus on the part of the bias due to penalization, given by the third term in \eqref{eqn:bias_l2}. First, note that from \cite{deboor1978} and Lemma 5.2 of \cite{zhou2000yhh}, $\dr$ in $\bpq=\bqq D^{(r)}$ is such that $$||D^{(r)}||_\infty = O(h^{-r})$$ This can easily be seen by inspecting the elements of $\dr$. $$\therefore \bpqt\bpq=D^{(r)T}\bqqt \bqq\dr = O(h^{-2r})\bqqt\bqq$$ Thus, we can write \begin{eqnarray*} \gp&=&\int_0^1 \bpqt\bpq q(x)dx \\ &=& O(h^{-2r})\int_0^1 \bqqt\bqq q(x)dx \\ &=& O(h^{-2r}) G_{q-r} \end{eqnarray*} where $G_{q-r} = \int_0^1 \bqqt\bqq q(x)dx$. Also, by the WLLN, $G_{n, q-r}=G_{q-r}+o(1)$.\\ Therefore, writing $\psib$ for the penalization term $\gammabt\lambdap\hni\gp\hni\lambdap\gammab$ in \eqref{eqn:bias_l2}, \begin{eqnarray*} \psib &=& O(h^{-2r})\gammabt\lambdap\hni \\ && \times\ G_{q-r}\hni\lambdap\gammab\\ &=& O(h^{-2r})\gammabt\lambdap\hni \\ && \times\ \gnqq\hni\lambdap\gammab \end{eqnarray*} where $\gnqq = B_{q-r}^TB_{q-r} / n$ is the version of $\gn$ based on B-splines of order $q-r$. Note that the decay of the eigenvalues of $G_{n, q}$ does not depend on $q$ (see Lemma \ref{lm3}). Therefore, we will use $G_{n, q}$ instead of $G_{n, q-r}$ in the derivations that follow for asymptotic order. This simplifies the calculations since $\hni$ depends on $\gn$.
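The norm-chaining bound applied above to $\alphabt\gni\gp\gni\alphab$ can be checked numerically (illustrative only; random positive definite matrices stand in for $\gp$ and $\gn$):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6

def random_spd(rng, n):
    """Random symmetric positive definite matrix."""
    M = rng.standard_normal((n, n))
    return M @ M.T + np.eye(n)

G = random_spd(rng, n)    # stands in for G^{(r)}
Gn = random_spd(rng, n)   # stands in for G_{n,q}
alpha = rng.standard_normal(n)

# Symmetric square root G^{1/2} via the eigendecomposition of G.
w, V = np.linalg.eigh(G)
G_half = V @ np.diag(np.sqrt(w)) @ V.T

Gn_inv = np.linalg.inv(Gn)
quad = alpha @ Gn_inv @ G @ Gn_inv @ alpha  # alpha' Gn^{-1} G Gn^{-1} alpha
bound = (np.linalg.norm(alpha) ** 2
         * np.linalg.norm(G_half, 2) ** 2   # spectral norm of G^{1/2}
         * np.linalg.norm(Gn_inv, 2) ** 2)
print(quad <= bound)  # True
```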
Using \begin{eqnarray*} \hni&=&\left[\gn+\lambdap\right]^{-1} \\ &=& \left[\gnh\left(\gnh+\lambdan\gnmh \Pm\right)\right]^{-1}\\ &=& \left(\gnh+\lambdan\gnmh \Pm\right)^{-1}\gnmh \end{eqnarray*} we can write \begin{eqnarray} \label{eqn:3inner} \lambdap\hni\gn\hni\lambdap &=& \lambdap\left(\gnh+\lambdan\gnmh \Pm\right)^{-1}\gnmh\gn \nonumber \\ && \times\ \left(\gnh+\lambdan\gnmh \Pm\right)^{-1}\gnmh\lambdap \end{eqnarray} Let $\tilde{P} = \tildp$, so that $\tilde{P}\gnh = \gnmh\lambdap$. Substituting into \eqref{eqn:3inner}, we have \begin{eqnarray*} \lambdap\hni\gn\hni\lambdap &=& \lambdap\left(\gnh+\tilde{P}\gnh\right)^{-1}\gnh \\ && \times\ \left(\gnh+\tilde{P}\gnh\right)^{-1}\tilde{P}\gnh\\ &=& \lambdap\gnmh\left(I+\tilde{P}\right)^{-1}\gnh\gnmh \\ && \times\ \left(I+\tilde{P}\right)^{-1}\tilde{P}\gnh\\ &=& \gnh\tilde{P}(I+\tilde{P})^{-2}\tilde{P}\gnh \end{eqnarray*} where in the second equality we have used the fact that $\gnh+\tilde{P}\gnh = (I+\tilde{P})\gnh$, and $\lambdap\gnmh=\gnh\tilde{P}$ in the last equality. Using the above, we can then write: \begin{eqnarray*} \gammabt\lambdap\hni\gn\hni\lambdap\gammab &=& \gammabt\gnh\tilde{P}\left(I+\tilde{P}\right)^{-2}\tilde{P}\gnh\gammab \end{eqnarray*} From the fact that $\tilde{P}\left(I+\tilde{P}\right)^{-2}\tilde{P}\le ||\tilde{P}||_2\tilde{P}$, \begin{eqnarray*} \psib &=& O(h^{-2r})||\tilde{P}||_2\gammabt\gnh\tilde{P}\gnh\gammab \\ &=& O(h^{-2r})||\tilde{P}||_2\gammabt\lambdap\gammab\\ &=& O(h^{-2r})||\gnmh\lambdap\gnmh||_2\gammabt\lambdap\gammab \\ &=& O(h^{-2r})||\gni||_2||\lambdap||_2 \gammabt\lambdap\gammab \end{eqnarray*} where we have used $\gnh\tilde{P}\gnh=\lambdap$ in the second equality and substituted $\tilde{P}$ in the third. By Assumption 5, $||\Pm||_2=O(h^{1-2m})$ and, from Lemma \ref{lm4}, $\gammabt P_m\gammab = O(1)$.\\ Therefore: \begin{eqnarray*} \psib &=& O(h^{-2r})O(h^{-1})O(\lambdan h^{1-2m})O(\lambdan) \\ &=& O(\lambdan^2h^{-2m-2r}).
\end{eqnarray*} Also, from $\tilde{P}(I+\tilde{P})^{-2}\tilde{P}\le \tilde{P}$, we have \begin{eqnarray*} \psib &=& O(h^{-2r})\gammabt\gnh\tilde{P}(I+\tilde{P})^{-2}\tilde{P}\gnh\gammab \\ &=& O(h^{-2r})\gammabt\gnh\tilde{P}\gnh\gammab\\ &=& O(h^{-2r})\gammabt\lambdap\gammab \\ &=& O(\lambdan h^{-2r}) \end{eqnarray*} $\therefore \psib = O\left\{\min\left(\lambdan^2h^{-2m-2r}, \lambdan h^{-2r}\right)\right\}$\\ This concludes the proof for bias in \eqref{eqn:bias_l2}. Next, we look at the variance part: \begin{eqnarray*} Var(\fph) &=& Var\left(\bpq\hni\bt\y/n\right)\\ &=& \bpq\hni\bt Var(\y/n) \bb\hni\bp \\ &=& \frac{\sigma^2}{n} tr\left\{\bpq\hni\bt \bb/n\hni\bp\right\} \\ &=& \frac{\sigma^2}{n} tr\left\{\hni\gn\hni\bp\bpq\right\} \end{eqnarray*} \begin{eqnarray*} \int_0^1 Var(\fph)q(x)dx &=& \frac{\sigma^2}{n}tr\left\{\hni\gn\hni\gp\right\} \\ &=& O(h^{-2r})\frac{\sigma^2}{n}tr\left\{\hni\gn\hni\gn\right\} \end{eqnarray*} From \begin{eqnarray*} \hni &=& \left(\gn+\lambdap\right)^{-1} \\ &=& \left[\gn\left(I+\gni\lambdap\right)\right]^{-1}\\ &=& \left[I+\gni\lambdap\right]^{-1}\gni \end{eqnarray*} $$\implies \hni\gn = \left[I+\gni\lambdap\right]^{-1}.$$ Note that $\gni\lambdap = \gnmh\gnmh\lambdap$ and by the rotation property of the trace, \begin{eqnarray*} tr\left[\gni\lambdap\right] &=& tr\left[\gnmh\gnmh\lambdap\right] \\ &=& tr\left[\gnmh\lambdap\gnmh\right]\\ &=& tr\left[\tilde{P}\right] \end{eqnarray*} \begin{eqnarray*} \therefore \int_0^1 Var(\fhp)q(x)dx &=& O(h^{-2r})\frac{\sigma^2}{n}tr\left[(I+\tilde{P})^{-2}\right]\\ &=& O(h^{-2r})\frac{\sigma^2}{n}||(I+\tilde{P})^{-2}||_F^2 \\ &=& O(h^{-2r})\frac{\sigma^2}{n}O\left\{\frac{1}{\max(h, \lambdan^{1/2m})}\right\} \\ &=& O(h^{-2r})\frac{\sigma^2}{n}O\left\{\min(h^{-1}, \lambdan^{-1/2m})\right\}\\ &=& O(K^{2r})\frac{\sigma^2}{n}O\left\{\min(K, \lambdan^{-1/2m})\right\}\\ &=& O\left(\frac{K_e}{n}\right) \end{eqnarray*} Where in the above, we have used $||(I+\tilde{P})^{-2}||_F^2 = O\left\{\frac{1}{\max(h, 
\lambda^{1/2m})}\right\}$ from Lemma 5.2 of \cite{xiao2019}, $K\sim h^{-1}$, and $K_e = \min\left\{K^{2r+1}, K^{2r}\lambdan^{-1/2m}\right\}$ This completes the proof of the theorem. \subsection{\textbf{Technical Lemmas}} \begin{lemma} \label{lm1} Let $f\in\mathcal{C}^p$, then there exists $s_f\in \Sqt$ such that $$||f^{(r)}-s_f^{(r)}|| = O(h^{q-r}) + o(h^{p-r})$$ for all $r = 0, 1, \dots, q - 2$ and $p \le q$. Here, $b(x) = -\frac{f^{(q)}(x)h_i^q}{q!}B_q\left(\frac{x-t_i}{h_i}\right)$ where $B_q(.)$ is the $q^{th}$ Bernoulli polynomial defined as $B_0(x)=1$, and $B_k(x) = \displaystyle \int_0^x kB_{k-1}(x)dx + B_k$ and $B_k$ is chosen such that $\int_0^1B_k(x)dx=0$. \\ $B_k$ is known as the $k^{th}$ Bernoulli number \citep{barrow1978}. This Lemma also appears in \cite{xiao2019} where the general result in \cite{barrow1978} is adapted to prove the case where $p<q$. \end{lemma} \subsubsection*{Proof of Lemma \ref{lm1}} We provide a proof for the case where $p=q$ and refer to Remark 3.1 of \cite{xiao2019} for the case where $p < q$. \cite{xiao2019} showed that when $p<q$, $||f^{(r)}-s_f^{(r)}|| = o(h^{p-r})$.\\ For $p=q$, first note that under Assumption \ref{assumption:3}, \cite{barrow1978} showed that $$\displaystyle \inf_{s(x)\in \Sqt}||\fp - s^{(r)}(x) + b^{*(r)}(x)||_{L_\infty} = o(h^{q-r})$$ This means, there exists an $s_f(x) \in \Sqt$ such that $$||\fp - \sfp + b^{*(r)}(x)|| = o(h^{q-r})$$ where $b^*(x) = -\frac{f^{(q)}(t_i)h_i^q}{q!}B_q\left(\frac{x-t_i}{h_i}\right)$ and $b^{*(r)}$ is the $r^{th}$ derivative of $b^*(x)$. With $p=q$, we have that $f\in\mathcal{C}^q[0, 1]$. Therefore, from Taylor's theorem, $f^{(q)}(x) = f^{(q)}(t_i) + o(1)$. $$\implies b(x) = b^*(x) + o(h^q)$$ The derivative of the Bernoulli polynomial of order k is given by $\bm{B}'_k(x) = \bm{B}_{k-1}(x)$ \citep{barrow1978}, it therefore follows that $$b^{(r)}(x) = b^{*(r)}(x) + o(h^{q-r})$$ for $r=0, 1, 2, \dots, q - 2$. 
But $||b^*|| = O(h^q)$ by definition, giving $||b^{(r)}|| = O(h^{q-r})$. Combining this with the case where $p<q$, we have that $||f^{(r)}-s_f^{(r)}|| = O(h^{q-r})+o(h^{p-r})$ for all $p\le q$. \vspace{15pt} \begin{lemma} \label{lm2} Given $\gp=\int_0^1 \bp\bpq q(x)dx$, $$||\gp||_\infty = O(h^{-2r})$$ \end{lemma} \subsubsection*{Proof of Lemma \ref{lm2}} Note that $\bpq = \bqq D^{(r)}$. \begin{eqnarray*} \therefore \gp &=& \int_0^1 \bqq D^{(r)}D^{T(r)}\bqqt q(x)dx \\ &=& O(h^{-2r})\int_0^1 \bqq\bqqt q(x)dx \\ &=& O(h^{-2r})\times q_{\max}\\ &=& O(h^{-2r}) \end{eqnarray*} where $q_{\max} = \displaystyle \max_{x\in [0, 1]} q(x) < \infty$. Also, note that B-spline bases are bounded by 1 $\forall x\in [0, 1]$. \vspace{15pt} \begin{lemma} \label{lm3} Let $\gn = \bt\bm{B}/n$ where $\bm{B}=[B(x_1), B(x_2), \dots, B(x_n)]^T \in \mathbb{R}^{n\times K}$ is a matrix of basis functions with each $B(x)\in \mathbb{R}^{K}$ being a vector of basis functions of order $q$ at $x$. Then $$||\gni||_\infty = O(h^{-1})$$ \end{lemma} \subsubsection*{Proof of Lemma \ref{lm3}} This Lemma is adapted from \cite{zhou1998}; the key idea is to show that the elements of $\gni$ decay exponentially and are of order $h^{-1}$. We provide the proof here for convenience. Let $\lambda_{\max} = \displaystyle \max_{ \sum_{i=1}^K a_i^2 = 1 } ||\gn \ab ||_2$ and $\lambda_{\min} = \displaystyle \min_{ \sum_{i=1}^K a_i^2 = 1 } ||\gn\ab||_2$ be the maximum and minimum eigenvalues of $\gn$. Since $\gn$ is a band matrix, we use Theorem 2.2 of \cite{demko1977} by showing that the conditions of the theorem are satisfied. First, note that \begin{eqnarray*} ||\lambda^{-1}_{\max}\gn||_2 &=& \lambda^{-1}_{\max}||\gn||_2 \\ &=& \lambda^{-1}_{\max} \displaystyle \max_{ \sum_{i=1}^K z_i^2 = 1 } ||\gn\z||_2 \\ &\le& 1 \end{eqnarray*} where the $\max$ in the second equality equals $\lambda_{\max}$, so that the product is at most $1$.
Also, \begin{eqnarray*} ||\lambda_{\max}\gni||_2 &=& \frac{\lambda_{\max}}{\lambda_{\min}}||\lambda_{\min}\gni||_2\\ &\le& \frac{\lambda_{\max}}{\lambda_{\min}} \end{eqnarray*} Lemma 6.2 of \cite{zhou1998} provides bounds on the eigenvalues of $\gn$. In particular, for large $n$, there exist constants $c_1$ and $c_2$ such that $$c_1h/2 \le\lambda_{\min} \le \lambda_{\max} \le 2c_2h$$ Therefore, by Theorem 2.2 of \cite{demko1977}, there exist constants $c > 0$ and $\gamma\in (0, 1)$ that depend only on $c_1$, $c_2$ and $q$ such that: \begin{equation} \label{eqn:gni} |\lambda_{\max} g_{ij}| \le c\gamma^{|i-j|} \end{equation} where $g_{ij}$ is the $(i,j)$th element of $\gni$. From equation \eqref{eqn:gni}, $$|g_{ij}| \le c\lambda_{\max}^{-1}\gamma^{|i-j|}\le 2(c/c_1)h^{-1}\gamma^{|i-j|}$$ This completes the proof of Lemma \ref{lm3}. \begin{lemma} \label{lm4} Suppose $\gammab = \gni\bt\f/n$ and $P_m$ is the penalty matrix for the penalized spline estimator in \eqref{eqn:naive}; then $$\gammabt P_m \gammab = O(1)$$ \end{lemma} \subsubsection*{Proof of Lemma \ref{lm4}} This Lemma is adapted from Lemma 8.4 of \cite{xiao2019}, which bounds the penalty term of the penalized spline estimator. The proof closely follows that of \cite{xiao2019}. Observe that \begin{eqnarray*} \gni\bt\f/n &=& \gni\bt(\f-\sfb)/n + \gni\bt\sfb/n \\ &=& \gni\bt(\f-\sfb)/n + \betab \\ &=& \gni\alphab + \betab \end{eqnarray*} where $\betab = \gni\bt\sfb/n$ and $\alphab = \bt(\f-\sfb)/n$. Since $P_m$ is positive semi-definite, \begin{eqnarray} \label{eqn:gamma2} \left(\gammabt P_m\gammab \right)^{\frac{1}{2}} &\le& \left(\alphabt\gni P_m\gni\alphab\right)^{\frac{1}{2}} + \left(\betab^T P_m\betab\right)^{\frac{1}{2}} \end{eqnarray} By assumption, $\betab^TP_m\betab = O(1)$; therefore, showing that the first term in \eqref{eqn:gamma2} is $O(1)$ completes the proof. In what follows, we use the following matrix relations.
Let $A\in \mathbb{R}^{m\times n}$; then \begin{equation} \label{mat:identity1} \frac{1}{\sqrt{n}}||A||_\infty\le||A||_2\le \sqrt{m}||A||_\infty \end{equation} Also, let $\pmh$ be a square, symmetric matrix such that $P_m = \pmh\pmh$. Observe that \begin{eqnarray} \alphabt\gni P_m\gni\alphab &=& \left(\pmh\gni\alphab\right)^T\left(\pmh\gni\alphab\right)\\ &=& ||\pmh\gni\alphab||_2^2 \\ &\le& ||\alphab||_2^2||\pmh\gni||_2^2 \\ &\le& ||\alphab||_2^2||\pmh||_2^2||\gni||_2^2 \\ &\le& ||\alphab||_2^2||\pmh||_2^2K||\gni||_\infty^2 \\ &=& o(h^{2p+2})O(h^{1-2m})O(h^{-1})O(h^{-2}) \\ &=& o(h^{2p-2m}) \\ &=& o(1) \end{eqnarray} since $p\ge m$. Inequalities (12) and (13) follow from the sub-multiplicativity of the spectral norm, and the matrix identity in \eqref{mat:identity1} is used in inequality (14). We have also used the result of \cite{agarwal1980} for $||\alphab||_2^2$ and the assumption that $||P_m||_2 = O(h^{1-2m})$. Finally, Lemma \ref{lm3} has been used in inequality (15) for $||\gni||_\infty$. \subsection{Rates of Convergence for Naive Local Polynomial Estimators} When estimating the $r^{th}$ derivative of the mean regression function with a local polynomial of degree $p$, several authors (\citealp[]{fan1996local, ruppert_wand_1994}) recommend using odd $p-r$. In this section, we lay out an argument that the naive bandwidth under- or over-smooths when $p$ and $p-r$ have different parities, and that only even derivatives can be optimally estimated by the naive estimator. Table \ref{table:parities} below shows the four potential parity combinations for $p$ and $p-r$. We show next that the naive estimator achieves the optimal rate of convergence for estimating $m^{(r)}$ only in cases I and IV (where $p$ and $p-r$ have the same parity; equivalently, when $r$ is even). \begin{table}[hbt!]
\centering \begin{tabular}{|c|lcc|} \hline \multicolumn{4}{|r|}{$\bm{p-r}$} \\ \hline ~&~& odd & even \\ \multirow{2}{*}{$\bm{p}$} & odd & $I$ & $II$\\ ~& even & $III$ & $IV$ \\ \hline \end{tabular} \caption{Parity combinations of $p$ and $p-r$ when estimating the $r^{th}$ derivative of a mean regression function with $p^{th}$ degree local polynomial regression.} \label{table:parities} \end{table} Let $\mrh$ be a $p^{th}$-degree local polynomial estimate of the $r^{th}$ ($r\le p$) derivative of the mean regression function $m(x)$ at a point $x$ such that $m^{(p+1)}(\cdot)$ is continuous in a neighborhood of $x$. Also let $h$ be the bandwidth of $\mrh$ such that $h \to 0$ and $nh \to \infty$; then we know from \cite{ruppert_wand_1994} that $$\text{IMSE}\left(\mrh\right) = o\left(h^{2(p+1-r)}\right) + O\left(\frac{1}{nh^{2r+1}}\right)$$ for odd $p-r$ and $$\text{IMSE}\left(\mrh\right) = o\left(h^{2(p+2-r)}\right) + O\left(\frac{1}{nh^{2r+1}}\right)$$ for even $p-r$. In the above, IMSE is the integrated mean squared error. Note that the naive estimator uses the optimal bandwidth for estimating $m(\cdot)$ itself, that is, for $r = 0$. First, we derive the rates of convergence of the optimal bandwidth for the naive estimator $(r=0)$ for both the odd-$p$ and even-$p$ cases. We then examine how these naive rates compare with the optimal bandwidth rates for estimating $m^{(r)}$ in both parity scenarios. For odd $p$ (thus, $r=0$ and $p-r$ is odd), $$\text{IMSE}\left(\mh\right) = o\left(h^{2(p+1)}\right) + O\left(\frac{1}{nh}\right)$$ To get the rate of convergence of the optimal bandwidth, we derive the $h$ that minimizes the IMSE (ignoring constants). From: \begin{eqnarray*} 2(p+1)h^{2p+1} - n^{-1}h^{-2} &=& 0 \\ 2(p+1)h^{2p+1} &=& \frac{1}{nh^2} \\ h^{2p+3} &=& \frac{n^{-1}}{2(p+1)} \end{eqnarray*} $\therefore \hon = O\left(n^{-\frac{1}{2p+3}}\right)$. Here, we use $\hon$ for the optimal bandwidth of the naive estimator when $p$ is odd.
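This rate can be sanity-checked numerically by grid-minimizing the two leading IMSE terms (an illustrative sketch for $p=1$, where the exponent should be $-1/(2p+3)=-1/5$):

```python
import numpy as np

def argmin_bandwidth(n, p=1):
    """Grid-minimize the leading IMSE terms h^(2(p+1)) + 1/(n h)
    for odd p and r = 0, returning the minimizing bandwidth."""
    h = np.logspace(-4, 0, 100000)
    imse = h ** (2 * (p + 1)) + 1.0 / (n * h)
    return h[np.argmin(imse)]

# Empirical exponent from two sample sizes: should be close to -1/5 = -0.2.
h1 = argmin_bandwidth(10000)
h2 = argmin_bandwidth(1000000)
slope = np.log(h2 / h1) / np.log(1000000 / 10000)
print(round(slope, 2))  # -0.2
```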
For even $p$ (thus, $r=0$ and $p-r$ is even), $$\text{IMSE}\left(\mh\right) = o\left(h^{2(p+2)}\right) + O\left(\frac{1}{nh}\right)$$ From: \begin{eqnarray*} 2(p+2)h^{2p+3} - n^{-1}h^{-2} &=& 0 \\ 2(p+2)h^{2p+3} &=& \frac{1}{nh^2} \\ h^{2p+5} &=& \frac{n^{-1}}{2(p+2)} \end{eqnarray*} $\therefore \hen = O\left(n^{-\frac{1}{2p+5}}\right)$. $\hen$ is the optimal bandwidth for the naive estimator when $p$ is even. We now analyse the achieved rates of convergence for estimating the $r^{th}$ derivative of the mean regression function, $m$, and how those rates compare with the naive estimator. We consider the four (4) cases in Table \ref{table:parities} above. \textit{Case I}: \textbf{$p$ odd and $p-r$ odd (thus, $r$ is even).} $$\text{IMSE}\left(\mrh\right) = o\left(h^{2(p+1-r)}\right) + O\left(\frac{1}{nh^{2r+1}}\right)$$ From \begin{eqnarray*} 2(p+1-r)h^{2p-2r+1} - (2r+1)n^{-1}h^{-2r-2} &=& 0\\ 2(p+1-r)h^{2p-2r+1} &=& \frac{2r+1}{nh^{2r+2}} \\ h^{2p+3} &=& \frac{2r+1}{2(p+1-r)n} \end{eqnarray*} $\therefore h_{opt} = O\left(n^{-\frac{1}{2p+3}}\right)$; this is the same rate achieved by $\hon$. Therefore, the naive bandwidth achieves the same rate as the optimal bandwidth for estimating $m^{(r)}$ in this case. \textit{Case II}: \textbf{$p$ odd and $p-r$ even (thus, $r$ is odd).} $$\text{IMSE}\left(\mrh\right) = o\left(h^{2(p+2-r)}\right) + O\left(\frac{1}{nh^{2r+1}}\right)$$ By a similar approach as in Case I, we get $h_{opt} = O\left(n^{-\frac{1}{2p+5}}\right)$; this rate is different from that achieved by the naive estimator $\hon$ for odd $p$. The consequence of using the naive bandwidth in this case is that it shrinks faster than the optimal rate, which may result in under-smoothing.
\textit{Case III}: \textbf{$p$ even and $p-r$ odd (thus, $r$ is odd).} $$\text{IMSE}\left(\mrh\right) = o\left(h^{2(p+1-r)}\right) + O\left(\frac{1}{nh^{2r+1}}\right)$$ Again, similar to Cases I and II above, $h_{opt} = O\left(n^{-\frac{1}{2p+3}}\right)$; this rate is different from that achieved by the naive estimator $\hen$ for even $p$, which is $O\left(n^{-\frac{1}{2p+5}}\right)$. Unlike in Case II, the consequence of using the naive bandwidth in this case is that it shrinks at a slower rate than the optimal rate, which may result in over-smoothing. \textit{Case IV}: \textbf{$p$ even and $p-r$ even (thus, $r$ is even).} $$\text{IMSE}\left(\mrh\right) = o\left(h^{2(p+2-r)}\right) + O\left(\frac{1}{nh^{2r+1}}\right)$$ From \begin{eqnarray*} 2(p+2-r)h^{2p-2r+3} - (2r+1)n^{-1}h^{-2r-2} &=& 0\\ 2(p+2-r)h^{2p-2r+3} &=& \frac{2r+1}{nh^{2r+2}} \\ h^{2p+5} &=& \frac{2r+1}{2(p+2-r)n} \end{eqnarray*} $\therefore h_{opt} = O\left(n^{-\frac{1}{2p+5}}\right)$; this is the same rate achieved by $\hen$ for even $p$. Therefore, the naive bandwidth achieves the same rate as the optimal bandwidth for estimating $m^{(r)}$ in this case. Thus, the naive estimator can only optimally estimate even-order derivatives for Local Polynomial Regression.
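These bandwidth calculations admit a quick numerical check (our illustration, not part of the original derivation): minimize a generic IMSE proxy $h^{a} + (nh^{b})^{-1}$ over a bandwidth grid for several $n$, and read off the exponent of the minimizer from a log-log fit. With $a=2(p+1-r)$ and $b=2r+1$ (Case I), the slope should be close to $-1/(2p+3)$.

```python
import numpy as np

def empirical_exponent(a, b):
    """Slope of log h* versus log n, where h* minimizes h^a + 1/(n h^b)
    over a fixed bandwidth grid; the slope should approach -1/(a + b)."""
    ns = np.logspace(4, 8, 9)
    hs = np.logspace(-4, 0, 4000)
    hstar = [hs[np.argmin(hs**a + 1.0 / (n * hs**b))] for n in ns]
    return np.polyfit(np.log(ns), np.log(hstar), 1)[0]

p, r = 3, 2  # Case I: p odd, p - r odd (r even)
slope = empirical_exponent(2 * (p + 1 - r), 2 * r + 1)
print(slope, -1.0 / (2 * p + 3))  # the two numbers should nearly agree
```

Running the same check with the Case II exponents $a=2(p+2-r)$, $b=2r+1$ recovers a slope near $-1/(2p+5)$ instead, matching the parity argument above.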
\section{Introduction} \input{background} \section{Main Results} \input{main_results} \section{Simulations} \label{sec:simulation} \input{simulations} \section{Conclusion} \input{conclusion} \newpage \section*{Appendix} \input{appendix} \clearpage \listoffigures \listoftables \par \bibhang=1.7pc \bibsep=2pt \fontsize{9}{14pt plus.8pt minus .6pt}\selectfont \renewcommand\bibname{\large \bf References} \expandafter\ifx\csname natexlab\endcsname\relax\def\natexlab#1{#1}\fi \expandafter\ifx\csname url\endcsname\relax \def\url#1{\texttt{#1}}\fi \expandafter\ifx\csname urlprefix\endcsname\relax\def\urlprefix{URL}\fi \section{Penalized Splines \& the Naive Derivative Estimator} \subsection{Splines} \begin{sloppypar} Splines provide a flexible mechanism to estimate derivatives of the mean regression function $f$, and in the case of estimating the function itself, they have been shown to do so at the best possible rates of convergence (\citealp[]{xiao2019, zhou1998, stone1982}). A spline is a piece-wise polynomial with continuity conditions at the points where the pieces join together (called knots).\\ More specifically, for $q\ge2$, we let $$\sqt = \left\{ s\in \mathcal{C}^{q-2}(\calk): s\text{ is a $q$-order polynomial on each }[t_i, t_{i+1}] \right\}$$ be a space of $q-$order splines over $\calk = [0, 1]$ with knot locations $\underline{\mathbf{t}} = (t_0, t_1, \dots, t_{K+1})$ where $t_0 = 0$, $t_{K+1}=1$ and $t_i < t_j$ $\forall_{i < j}$. For $q=1$, $\sqt$ consists of step functions with jumps at the knots. \end{sloppypar} This space has a number of equivalent bases, and one notable for its stable numerical properties is the \textit{B-Spline} basis (\citealp[]{deboor1978, ruppert_wand_carroll_2003, schumaker_2007}). \cite{deboor1978} defines the $q^{th}$ order B-spline basis function $\bjqx$ over the knot locations $\underline{\mathbf{t}}$ through a recurrence relation.
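To make the recurrence concrete, a short numerical sketch follows (our illustration, not part of the original text; the equally spaced, expanded knot layout anticipates the construction used in the remainder of this section). It builds all order-$q$ B-splines covering $[0,1)$ via the Cox-de Boor recurrence and checks the partition-of-unity property.

```python
import numpy as np

def bspline_basis(j, q, t, x):
    """B_{j,q}(x): the j-th B-spline of order q (degree q-1) on knots t,
    computed with the Cox-de Boor recurrence (de Boor, 1978)."""
    if q == 1:
        return np.where((t[j] <= x) & (x < t[j + 1]), 1.0, 0.0)
    left = (x - t[j]) / (t[j + q - 1] - t[j]) * bspline_basis(j, q - 1, t, x)
    right = (t[j + q] - x) / (t[j + q] - t[j + 1]) * bspline_basis(j + 1, q - 1, t, x)
    return left + right

q, K = 4, 10                      # cubic B-splines (order 4), K interior intervals
h = 1.0 / K                       # equally spaced knots
t = np.arange(-q, K + q + 1) * h  # q extra knots on each side of [0, 1]
x = np.linspace(0.0, 1.0, 101, endpoint=False)
B = np.array([bspline_basis(j, q, t, x) for j in range(K + q)])
print(np.allclose(B.sum(axis=0), 1.0))  # partition of unity on [0, 1)
```

The recursion bottoms out at the order-1 step functions mentioned above; higher orders are convex combinations of neighboring lower-order splines, which is what gives the basis its numerical stability.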
\cite{eilers2010} show that when the distance between the knots is constant, $\bjqx$ reduces to $$\bjqx = \frac{(-1)^q\Delta^q(x-t_j)_+^{q-1}}{(q-1)!h^{q-1}}$$ where $\Delta$ is the backward difference operator ($\Delta t_j=t_j - t_{j-1}$), and $h$ is the common distance between the knots. Observe that $\bjqx$, in this case, is a rescaled $q$-order difference of truncated polynomials. To get a complete set of B-Spline basis functions, we need $2q$ extra knots with $q$ knots on each side of $\left[0, 1\right]$. This is referred to as the expanded basis (\citealp[]{eilers2010}). Without loss of generality, we will assume a B-Spline basis for $\sqt$ for the rest of this paper. We refer the reader to \cite{deboor1978, schumaker_2007, eilers1996} for an introduction to B-Splines, and \cite{eilers2010} for how the B-Spline basis compares to the Truncated Polynomial Functions (TPF) on metrics including fit quality, numerical stability, and multidimensional smoothing. \subsection{Penalized Splines \& the Naive Estimator} Penalized splines are often viewed as a compromise between regression and smoothing splines because they combine penalization and low-rank bases to achieve computational efficiency. They vary slightly based on the basis functions used and the object of penalization. For example, P-Splines (\citealp[]{eilers1996}) use B-Spline basis functions and penalize differences of the coefficients to a specific order. In this section, we will focus on P-Splines. Our later results hold for the general penalized spline estimator defined by \cite{xiao2019}.
A P-Spline estimator of $f$ in \eqref{eqn:1} based on an iid sample of size $n$ finds a \textit{spline} function $g(x) = B(x)\alphab$, that minimizes: \begin{equation} \label{eqn:obj} Q(\alphab, \lambdan) = \frac{1}{n} \sum_{i=1}^n \left(y_i - B(x_i)\alphab\right)^2 + \lambdan \alphabt\Pm\alphab \end{equation} where $\bm{\alphab}=\left(\alpha_1, \alpha_2, \dots, \alpha_{K+q}\right)$ is a vector of coefficients, and $B(x_i) = \left[B_{1,q}(x_i), B_{2,q}(x_i), \dots, B_{K+q,q}(x_i)\right] \in \mathbb{R}^{K+q}$ is a vector of basis functions at $x_i$, for $i = 1, 2, \dots, n$. The penalty matrix $\Pm=\dtm \in \mathbb{R}^{(K + q) \times (K + q)}$ where $\dm\alphab = \Delta^m\alphab$ is a vector of $m^{th}$ order differences of $\alphab$. Finally, $\lambdan \ge 0$ is the smoothing parameter and needs to be chosen. Three prevalent methods for choosing $\lambdan$ are Generalized Cross Validation (GCV), Maximum Likelihood (ML), and Restricted (or residual) Maximum Likelihood (REML); we refer the reader to \cite{wood_2017} Chapter 4 and \cite{ruppert_wand_carroll_2003} Chapters 4 and 5 for details. Minimizing \eqref{eqn:obj} with respect to $\alphab$ gives $\alphabh=\left(\frac{\bt\bb}{n} + \lambdan \Pm\right)^{-1}\frac{\bt y}{n}$ which results in $\hat{f}(x) = B(x)\alphabh$. Here, $\bm B=\left[B(x_1), B(x_2), \dots, B(x_n)\right]^T \in \mathbb{R}^{n\times (K+q)}$. From $\hat{f}$, we can derive the naive estimator of the $r^{th}$ derivative of $f$ as follows: \begin{eqnarray} \label{eqn:naive} \hat{f}^{(r)}(x) &=& \frac{d^{(r)}}{dx}B(x)\alphabh \nonumber \\ &=& \frac{d^{(r)}}{dx}\left(\sum_{j=1}^{K+q}\alphah_jB_{j, q}(x)\right) \nonumber \\ &=& \displaystyle\sum_{j=1}^{K+q-r}\alphah_j^{(r)}B_{j, q-r}(x) \end{eqnarray} where $\alphah_j^{(r)} = (q-r)\frac{\left(\alphah_{j+1}^{(r-1)}-\alphah_{j}^{(r-1)}\right)}{t_j-t_{j-q+r}}$, with $\alphah_j^{(0)} = \alphah_j$ for $1\le j\le K+q-r$, and $r = 1, 2, \dots, q - 2$. (\citealp[]{deboor1978, zhou2000yhh}).
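The estimator above can be sketched numerically as follows (our illustration: the helper name, the equally spaced expanded knots, and the fixed $\lambdan$ are our own choices; in practice $\lambdan$ would be chosen by GCV or REML). The fitted spline object's exact derivatives then give the naive derivative estimator.

```python
import numpy as np
from scipy.interpolate import BSpline

def pspline_fit(x, y, K=20, q=4, m=2, lam=1e-4):
    """P-spline fit on [0, 1]: order-q B-splines on K equal intervals with an
    m-th order difference penalty (Eilers & Marx, 1996).  The returned
    BSpline's .derivative(r) is the naive estimator of f^(r)."""
    n, h = len(x), 1.0 / K
    t = np.arange(-q, K + q + 1) * h            # expanded, equally spaced knots
    eye = np.eye(len(t) - q)                    # one column per B-spline coefficient
    Bmat = BSpline(t, eye, q - 1)(x)[:, 1:-1]   # design matrix; drop the two
                                                # splines that vanish on (0, 1)
    D = np.diff(np.eye(Bmat.shape[1]), n=m, axis=0)  # m-th order difference operator
    alpha = np.linalg.solve(Bmat.T @ Bmat / n + lam * (D.T @ D), Bmat.T @ y / n)
    coef = np.zeros(len(t) - q)
    coef[1:-1] = alpha
    return BSpline(t, coef, q - 1)

# Usage sketch: naive first-derivative estimate from noisy data
rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 1, 400))
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.1, x.size)
fhat = pspline_fit(x, y)
fhat_prime = fhat.derivative(1)  # naive estimator of f'
```

Passing an identity matrix as the coefficient array is a compact way to evaluate every basis function at once; the solve step is exactly $\alphabh=\left(\bt\bb/n + \lambdan \Pm\right)^{-1}\bt y/n$ with $\Pm = \dtm$.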
\cite{xiao2019} showed that under some conditions on the distribution of the knots and $\lambdan$, $\hat{f}$ achieves the optimal $L_2$ rate of convergence (\citealp[]{stone1982}) to the true $f$, but they do not discuss derivative estimators. We extend this result to show that, under the same conditions, which do not depend on the order of the derivative, the naive derivative estimator $\hat{f}^{(r)}$ of $f^{(r)}$ achieves optimal $L_2$ rates of convergence. \subsection{Motivation} In some cases, emphasis is not necessarily on the mean regression function, but instead on some derivative of the function. For instance, in human growth studies, the first derivative of the function relating height and age gives the speed of growth \cite{muller1988, ramsay2002}. In mechanics, the first and second derivatives of displacement as a function of time give velocity and acceleration respectively. Also, estimates of some derivative of the true function, $f$, are used in plug-in bandwidth selection techniques such as Local Polynomial Regression \cite{wand1994kernel}. Given the non-parametric model \eqref{eqn:1}, a natural question is how to estimate the $r^{th}$ derivative of $f$. \subsection{Approaches to Derivative Estimation} There are two broad approaches used in estimating some derivative of the mean regression function $f$ in \eqref{eqn:1}. One class of methods estimates derivatives by first estimating the true function $f$ and differentiating the estimated function. The other class attempts to estimate derivatives directly from the scatterplot. \subsubsection{Penalized Splines \& the Naive Estimator} To estimate the $r^{th}$ derivative of the mean regression function in \eqref{eqn:1}, it's reasonable to take the $r^{th}$ derivative of the penalized spline fit $\hat{f}_{pen}$. \cite{simpkin2013} used multiple penalties to improve the fit of the first derivative of the mean regression function. When using P-Splines, it's straightforward to compute the derivative of $\hat{f}_{pen}$.
With equally spaced knots, the derivative of a B-Spline depends on differences of the B-Spline coefficients and lower-degree B-Spline basis functions. Given the B-Spline of order $m$ $$g(x; m) = \sum_{i = 1}^K B_i(x; m)\alpha_i,$$ the first derivative of $g$, when the knots are equally spaced, is given by $$g'(x; m) = \sum_{i = 1}^K (m-1)B_i(x; m-1)\Delta \alpha_i / h$$ where $h$ is the common distance between the knots. It is therefore straightforward to compute the derivative of $\hat{f}_{pen}(x)$ as \begin{equation} \label{eqn:naive_pen} \hat{f}_{pen}'(x) = B'^T(x)(B^TB/n + \lambda P_q)^{-1}B^Ty/n \end{equation} where $B'^{T}(x) \in \mathbb{R}^K$ is a vector of (scaled) first derivatives of the B-Spline basis functions. We refer to $\hat{f}_{pen}'$ as the \textit{Naive Estimator} of the first derivative of $f$. \subsubsection{Local Polynomial Regression} Another popular way to smooth a scatterplot $\left\{x_i, y_i\right\}_{i=1}^n$ is Local Polynomial Regression \cite{wand1994kernel}. Given a non-parametric model in \eqref{eqn:1}, a local polynomial regression estimates $f(x_i)$ using Kernel $K$ and bandwidth $h$ by Weighted Least Squares with weights given by the density of the Kernel at each point $x_j$, $1\le j \le n$. Let $\bm{y}=\left(y_1, \dots, y_n\right)^T$ be a vector of responses, and define $$ \bm{X_x} = \begin{bmatrix} 1 & x_1-x & \hdots & (x_1-x)^p\\ \vdots & \vdots & \ddots & \vdots \\ 1 & x_n - x & \hdots & (x_n-x)^p \end{bmatrix} $$ $$\bm{W_x} = diag\{K_h(x_1-x), \dots, K_h(x_n-x)\}$$ Using Weighted Least Squares (WLS) with design matrix $\bm{X_x}$ and weight matrix $\bm{W_x}$, \begin{equation} \hat{f}_h(x_i) = \bm{e_1^T(X_x^TW_xX_x)^{-1}X_x^TW_xy} \end{equation} is the local polynomial estimate of $f(x_i)$ of degree $p$ and bandwidth $h$. $\bm{e_1}\in \mathbb{R}^{p+1}$ is a vector with first element of 1 and zeros elsewhere. Thus, $\hat{f}_h(x_i)$ is the intercept of the weighted least squares solution $\bm{\hat{\beta}=(X_x^TW_xX_x)^{-1}X_x^TW_xy}$.
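The WLS construction above is easy to code (our sketch; the Gaussian kernel and the particular bandwidth are arbitrary illustrative choices). The same solution $\bm{\hat\beta}$ also yields derivative estimates, since $r!\,\hat\beta_{r+1}$ estimates $f^{(r)}(x_0)$ for $r \le p$:

```python
import math
import numpy as np

def local_poly(x0, x, y, p=1, h=0.1, r=0):
    """Degree-p local polynomial estimate of f^(r)(x0), r <= p:
    r! times the (r+1)-th weighted-least-squares coefficient."""
    X = np.vander(x - x0, N=p + 1, increasing=True)  # columns 1, (x_i-x0), ..., (x_i-x0)^p
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)           # Gaussian kernel weights
    beta = np.linalg.solve((X.T * w) @ X, (X.T * w) @ y)
    return math.factorial(r) * beta[r]

# Usage sketch
x = np.linspace(0, 1, 200)
y = np.sin(2 * np.pi * x)
f_at = local_poly(0.3, x, y, p=3, h=0.08)        # estimate of f(0.3)
fp_at = local_poly(0.3, x, y, p=3, h=0.08, r=1)  # estimate of f'(0.3)
```

Note that multiplying the rows of $X^T$ by the weights is the same as forming $X^T W_x$ with the diagonal weight matrix, so the solve step is exactly $\bm{(X_x^TW_xX_x)^{-1}X_x^TW_xy}$.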
It's important to note that $\hat{f}_h(x_i)$ heavily depends on $h$, which controls the amount of smoothing of the estimator. A large value of $h$ results in more smoothing while smaller values result in less smoothing. To estimate some $m^{th}$ derivative of $f$, $f^{(m)}(x)$, it's natural to use the intercept term of the $m^{th}$ derivative of the local polynomial being fitted, for $m \le p$. This is given by: \begin{equation} \hat{f}^{(m)}_h(x_i) = m!\bm{e_{m+1}^T(X_x^TW_xX_x)^{-1}X_x^TW_xy} \end{equation} where $\bm{e_{m+1}} \in \mathbb{R}^{p+1}$ is a vector with $(m+1)^{th}$ element of 1 and zeros elsewhere. \subsubsection{Empirical Derivatives} Unlike the Naive Estimator and Local Polynomial Regression Estimator, empirical derivative methods seek to estimate the derivative of $f$ directly without first estimating $f$ itself. This is usually achieved by approximating the derivative from the scatterplot data and using the approximated data to model the derivative. For instance, \cite{brabanter2013} defined the estimated $m^{th}$ derivative of $f$ as \begin{equation} f_i^{(m)}=\sum_{j=1}^{k_m}w_{j, m} \left(\frac{Y_{i+j}^{(m-1)}-Y_{i-j}^{(m-1)}}{x_{i+j}-x_{i-j}}\right) \end{equation} where the weights $w_{j,m}$ are chosen to minimize the variance of the estimator for $m=1$ and intuitively for $m\ge 2$. \cite{daitong2016} used the estimator \begin{equation} DY_i = \sum_{k=0}^r d_kY_{i+k},\ \ \ 1\le i \le n-r \end{equation} for $f'(x_i)$. The $d_k$'s are chosen to minimize the estimation variance. \section{Naive Derivative Estimator achieves optimal $L_2$ rate of convergence} In this section, we provide our main result in Theorem \ref{thm:1} and remark on how this result relates to regression and smoothing splines. Note that the findings in this section apply to the general penalized spline estimator as defined by \cite{xiao2019}.
This general estimator is based on the realization that the various types of penalized splines differ mainly by their penalty matrices. However, the eigenvalues of the penalty matrices decay at similar rates, making their unified asymptotic study tractable. We refer the interested reader to a derivation of the decay rates of various penalty matrices in \cite{xiao2019}. \subsection{Notation} We start by defining the following notations relating to norms and limits. For a real matrix $A$, $||A||_\infty = \displaystyle \max_i \sum_{j} |a_{ij}|$ is the largest row absolute sum. $||A||_2$ is the operator norm of $A$ induced by the vector norm $||\cdot||_2$. $||A||_F = \sqrt{tr(A^TA)}$ is the Frobenius norm. For a real vector, $||\ab||_\infty = \displaystyle \max_i |a_i|$. For a real-valued function $g(x)$ defined on $\calk\subset \mathbb{R}$, $||g||_\infty = \displaystyle \sup_{x\in\calk} |g(x)|$ and $||g||_{L2}=\left(\displaystyle\int_{x\in\mathcal{K}}\left(g(x)\right)^2dx\right)^{1/2}$ is the $L_2$-norm of $g$. For two real sequences $\{a_n\}_{n\ge1}$ and $\{b_n\}_{n\ge1}$, $a_n\sim b_n$ means $\displaystyle\lim_{n\to\infty}\frac{a_n}{b_n} = 1$. \subsection{Assumptions} Next, we state assumptions on the knot placement and penalty matrix. We note that these assumptions are the same as those made in \cite{xiao2019} for the asymptotic analysis of estimates of functions rather than derivatives. \begin{enumerate} \item $K=o(n)$. \label{assumption:1} \vspace{15pt} \item $\displaystyle \max_{1\le i\le K} |h_{i+1}-h_i| = o(K^{-1})$, where $h_i = t_i - t_{i-1}$. \label{assumption:2} \vspace{15pt} \item $\frac{h}{\displaystyle \min_{1\le i\le K}h_i} \le M$, where $h = \displaystyle \max_{1\le i\le K}h_i$ and $M>0$ is some predetermined constant.
\label{assumption:3} \vspace{10pt} \item For a deterministic design, $$\displaystyle \sup_{x\in [0, 1]}|Q_n(x) - Q(x)| = o(K^{-1})$$ where $Q_n(x)$ is the empirical CDF of $x$ and $Q(x)$ is a distribution with continuously differentiable positive density $q(x)$. \label{assumption:4} \item The penalty matrix $\Pm$ is a banded symmetric positive semi-definite square matrix with a finite bandwidth and $||\Pm||_2 = O(h^{1-2m})$. This assumption is similar to Assumption 3 of \cite{xiao2019} where it is stated in terms of the eigenvalues of $\Pm$. This assumption is verifiable for P-Splines, O-Splines and T-Splines. See Propositions 4.1 and 4.2 of \cite{xiao2019}. \label{assumption:5} \item $\lambdan = o(1).$ \label{assumption:6} \end{enumerate} Assumptions \eqref{assumption:2} and \eqref{assumption:3} are necessary conditions on the placements of the knots and also imply that $h\sim K^{-1}$. This ensures that $M^{-1} < Kh < M$ and is necessary for numerical computations (\citealp[]{zhou1998}). \begin{theorem} \label{thm:1} Let the mean regression function in \eqref{eqn:1} be such that $f \in \mathcal{C}^p(\mathcal{K})$. Under Assumptions \eqref{assumption:1} - \eqref{assumption:6} above, and for $m\le \min(p, q)$: \begin{eqnarray*} \mathbb{E}\left(|| \hat{f}^{(r)} - f^{(r)} ||_{L_2}^2\right) &=& O\left(\frac{K_e}{n}\right) + O\left(K^{-2(q-r)}\right) + o(K^{-2(p-r)}) \\ && + O\{ \min(\lambdan^2K^{2m+2r}, \lambdan K^{2r}) \} \end{eqnarray*} where $K_e = \min\left\{K^{2r+1}, K^{2r}\lambdan^{-1/2m}\right\}$ and $r = 1, 2, \dots, q-2$. \end{theorem} The proof of the theorem is given in the appendix. \subsection{\textbf{Remarks}} \begin{remark} \label{rmk:1} \noindent The asymptotics of penalized splines are either similar to those of regression splines or smoothing splines depending on how fast the number of knots increases as the sample size increases (\citealp[]{Claeskens2009, xiao2019}).
This creates two scenarios: the small number of knots scenario with asymptotics similar to regression splines and the large number of knots scenario with asymptotics similar to smoothing splines. We explore the rates of convergence of the naive estimator under each of these scenarios in Remarks 1a and 1b below. \noindent {\bf Remark 1a} (Small number of knots scenario): Suppose the mean regression function is $q$-times continuously differentiable, where $q$ is the order of the spline used to estimate $f$. Thus, $f\in \mathcal{C}^q(\mathcal{K})$. Also suppose $\lambdan K^{2m} = O(1)$, then \begin{eqnarray*} \mathbb{E}\left(|| \hat{f}^{(r)} - f^{(r)} ||_{L_2}^2\right) &=& O\left(\frac{K_e}{n}\right) + O\left(K^{-2(q-r)}\right) + o(K^{-2(p-r)}) \\ && + O\{ \min(\lambdan^2K^{2m+2r}, \lambdan K^{2r}) \}\\ &=& O\left(\frac{K^{2r+1}}{n}\right) + O\left(K^{-2(q-r)}\right) + O( \lambdan^2K^{2m+2r}). \end{eqnarray*} Choosing $K$ such that $K\sim n^{\frac{1}{2q+1}}$ and $\lambdan = O(n^{-(q+m)/(2q+1)})$, the estimator $\hat{f}^{(r)}$ of $f^{(r)}$ converges at the optimal $L_2$ rate of $n^{-\frac{(q-r)}{2q+1}}$. In the above, we have used the fact that $p=q$ and that $\min\left\{\lambdan^2K^{2m+2r}, \lambdan K^{2r}\right\} = \lambdan^2K^{2m+2r}$ and $K_e = K^{2r+1}$ when $\lambdan K^{2m} = O(1)$. We note that $\lambda_n$'s rate of decrease does not depend on $r$, the order of the derivative.
\noindent {\bf Remark 1b} (Large number of knots scenario): Suppose $f\in \mathcal{C}^m(\mathcal{K})$, and there exists a sufficiently large constant, $C$, independent of $K$ such that for $K\ge C^{1/2m}\lambdan^{-1/2m} = C^{1/2m}n^{\frac{1}{2m+1}}$, with $m \le q$, we have \begin{eqnarray*} \mathbb{E}\left(|| \hat{f}^{(r)} - f^{(r)} ||_{L_2}^2\right) &=& O\left(\frac{K_e}{n}\right) + O\left(K^{-2(q-r)}\right) + o(K^{-2(p-r)}) \\ &&+ O\{ \min(\lambdan^2K^{2m+2r}, \lambdan K^{2r}) \}\\ &=& O\left(\frac{K^{2r}\lambdan^{-1/2m}}{n}\right) + O\left(K^{-2(q-r)}\right) + o\left(K^{-2(m-r)}\right) \\ && + O( \lambdan K^{2r}). \end{eqnarray*} Choosing $\lambdan$ such that $\lambdan\sim n^{-2m/(2m+1)}$, the estimator $\hat{f}^{(r)}$ of $f^{(r)}$ converges at the optimal $L_2$ rate of $n^{-\frac{(m-r)}{2m+1}}$. Again, we note that $\lambda_n$'s rate of decrease does not depend on $r$. \end{remark} \begin{remark} \label{rmk:2} \noindent While the naive estimator of the derivative achieves an optimal rate of convergence, that does not mean that the naive approach is optimal in a finite sample. We compare the performance of the naive estimator to an ``oracle estimator'' that minimizes mean integrated squared error in Section \hyperref[sec:finitesample]{4.1.4}. \end{remark} \begin{remark} \label{rmk:3} \noindent The theorem is derived under conditions on the growth in the number of knots, the spacings between them, and the smoothing parameter ($\lambda_n$). Specific rates of growth for $K$ and for $\lambda_n$ in Remarks 1a and 1b led to optimal rates of convergence. That said, it is not clear whether standard ways of choosing smoothing parameters would lead to optimal rates of convergence. This too is explored in Section \hyperref[sec:simulation]{4}. \end{remark} \subsection{Overview} In this section, we present a simulation to assess the naive estimator's rate of convergence and its finite-sample performance. The simulation is divided into three parts.
The first part examines the $L_2$ rates of convergence of the naive estimator when GCV and REML are used to choose the smoothing parameter. The second part of this section focuses on the finite sample performance of the naive estimator. We compared it to an ``oracle'' method that uses knowledge of the true function (or derivatives) to choose the optimal smoothing parameter. That ``oracle'' method is not a practical estimator, but it provides an upper bound benchmark for P-spline performance. Finally, the third part of this section compares the naive method to other derivative estimation methods in the literature. Except where noted, we use the same mean regression function $f$ as \cite{brabanter2013}. We simulated data $\left\{x_i, y_i\right\}_{i=1}^n$ from the model: $$Y_i = f(x_i) + \varepsilon_i, \ \forall\ 1\le i\le n$$ where $x_i$'s are a grid over $\mathcal{K} = \left[0,1\right]$, $\varepsilon_i$'s are iid with $\varepsilon_i \sim N(0, \sigma^2 = 0.1^2)$ and \begin{equation} \label{eqn:barbranter} f(x)= 32e^{-8(1-2x)^2}\left(1-2x\right) \end{equation} Figure \ref{fig:funcs} shows the mean regression function in \eqref{eqn:barbranter} and its first two derivatives. We use a range of sample sizes as shown in the results. 
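For completeness, the data-generating process can be written down directly (our sketch; the seed is arbitrary, and the closed-form derivative below is our own differentiation of \eqref{eqn:barbranter}):

```python
import numpy as np

def f(x):
    # Mean regression function used in the simulations
    return 32 * np.exp(-8 * (1 - 2 * x) ** 2) * (1 - 2 * x)

def f_prime(x):
    # First derivative of f, obtained by hand (our own differentiation)
    return 64 * np.exp(-8 * (1 - 2 * x) ** 2) * (16 * (1 - 2 * x) ** 2 - 1)

def simulate(n, sigma=0.1, seed=0):
    # Grid design on [0, 1] with iid N(0, sigma^2) errors
    rng = np.random.default_rng(seed)
    x = np.linspace(0, 1, n)
    return x, f(x) + rng.normal(0, sigma, n)

x, y = simulate(500)
```

Each Monte Carlo replicate simply reruns `simulate` with a fresh seed; the closed-form derivative is useful for computing $L_2$ errors of derivative estimates against the truth.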
\begin{figure}[H] \centering \subfloat[\centering {\scriptsize Mean regression function $f(x)$}]{{\includegraphics[width=4.6cm]{files/f.pdf} }}% \subfloat[\centering \scriptsize First derivative of $f(x)$]{{\includegraphics[width=4.6cm]{files/fp.pdf} }}% \subfloat[\centering \scriptsize Second derivative of $f(x)$]{{\includegraphics[width=4.6cm]{files/fpp.pdf} }}% \caption{Mean regression function with its first two derivatives.}% \label{fig:funcs}% \end{figure} As discussed in \cite{xiao2019}, \cite{Claeskens2009}, and our Remark \eqref{rmk:1}, the asymptotics of the penalized spline estimator are similar to those of Regression Splines (small K scenario) or Smoothing Splines (large K scenario), depending on the rate at which the number of knots, $K$, increases with the sample size, $n$. In our simulation, we considered these two scenarios: when $K$ increases slowly with $n$, and when $K$ increases at a faster rate with $n$. For the slow $K$ scenario we use $K\sim n^{\frac{1}{2p+1}},$ and $K$ in the fast scenario is chosen such that $K\ge C^{1/p}\lambdan^{-1/2p}$ for some large constant, $C$. We investigated the $L_2$ rate of convergence for the first two derivatives of the mean regression function in \eqref{eqn:barbranter} using a P-Spline with $2^{nd}$ ($m=2$) order penalty (\citealp[]{eilers1996}). Note that with $m=2$, the equivalent kernel methodology (\citealp[]{silverman1984}, Lemma 9.13 of \citealp[]{xiao2012local}) implies that the assumed differentiability of $f$ is $p=2m = 4$. \cite{stone1982} provided optimal rates of convergence for non-parametric regression estimators. 
The optimal rate of convergence for a non-parametric estimator of the $r^{th}$ derivative of $g:\mathbb{R}^d \rightarrow \mathbb{R}$ where $g \in \mathcal{C}^p$ is given by $n^{-\frac{p-r}{2p+d}}$. In our simulations ($d=1$, $p=4$), the optimal $L_2$ rate of convergence for estimating the $r^{th}$ derivative of $f$ is therefore: $$ n^{-\frac{p-r}{2p+d}} = n ^ {-\frac{4-r}{2 \times 4 + 1}} = n^{-\frac{1}{9}(4-r)}$$ \subsubsection{$L_2$ Convergence of the Naive Estimator} Figure \ref{fig:results_gcv} illustrates the $L_2$ rate of the naive estimator when the smoothing parameter $\lambdan$ is chosen by the GCV approach. The naive estimator achieves the optimal $L_2$ rates of convergence for the mean regression function and its first two derivatives when GCV is used to choose the smoothing parameter, but it is slightly slower for REML. This deviation from the optimal rate using REML appears to worsen for higher derivatives. Also, we observed that the fast $K$ scenario was overall slightly slower than the slow $K$ scenario for REML. These results agree with known results in the literature for smoothing splines when estimating the mean regression function. For instance, \cite{craven_smoothing_1978} showed that GCV achieves the optimal rate of convergence when used to choose the smoothing parameter in smoothing splines. However, \cite{wahba1985} found that Maximum Likelihood (ML) based methods may be slower than GCV for sufficiently smooth functions.
\begin{figure}[H] \centering \subfloat[\centering \scriptsize $L_2$ rate of convergence for $\hat{f}$]{{\includegraphics[width=4.6cm]{files/slow_k.pdf} }}% \subfloat[\centering \scriptsize $L_2$ rate of convergence for $\hat{f}'$]{{\includegraphics[width=4.6cm]{files/slow_k_p.pdf} }}% \subfloat[\centering \scriptsize $L_2$ rate of convergence for $\hat{f}''$]{{\includegraphics[width=4.6cm]{files/slow_k_pp.pdf} }}% \centering \subfloat[\centering \scriptsize $L_2$ rate of convergence for $\hat{f}$]{{\includegraphics[width=4.6cm]{files/fast_k.pdf} }}% \subfloat[\centering \scriptsize $L_2$ rate of convergence for $\hat{f}'$]{{\includegraphics[width=4.6cm]{files/fast_k_p.pdf} }}% \subfloat[\centering \scriptsize $L_2$ rate of convergence for $\hat{f}''$]{{\includegraphics[width=4.6cm]{files/fast_k_pp.pdf} }}% \caption{$L_2$ convergence rates for $f$ and its first two derivatives under two scenarios for increasing $K$ with $n$. Figures (a-c) show results for slowly increasing $K$ scenario while Figures (d-f) show results for the fast increasing $K$ scenario. The smoothing parameter $\lambdan$ is chosen by the GCV method.}% \label{fig:results_gcv}% \end{figure} Table \ref{table:naive} below summarizes the rates of convergence of the naive estimator for estimating derivatives of the mean regression function in \eqref{eqn:barbranter} under the various scenarios of the number of knots $K$ as $n$ increases. \begin{table}[h!] 
\tiny \centering \begin{tabular}{ |P{1cm}|P{2cm}|P{2cm}|P{3cm}|P{3cm}| } \hline \textbf{$\lambdan$ Method} & \textbf{Target} & \textbf{Optimal $L_2$ Rate} & \textbf{Slow $K$} & \textbf{Fast $K$} \\ \hline & $f$ & $-0.44$ & $-0.45 (-0.45, -0.44)$ & $-0.45 (-0.45, -0.44)$ \\ GCV & $f'$ & $-0.33$ & $-0.34 (-0.35, -0.33)$ & $-0.34 (-0.35, -0.33)$ \\ & $f''$ & $-0.22$ & $-0.22 (-0.23, -0.21)$ & $-0.22 (-0.23, -0.21)$ \\ \hline & $f$ & $-0.44$ & $-0.44 (-0.44, -0.43)$ & $-0.43 (-0.44, -0.43)$ \\ REML & $f'$ & $-0.33$ & $-0.32 (-0.32, -0.31)$ & $-0.31 (-0.32, -0.31)$ \\ & $f''$ & $-0.22$ & $-0.19 (-0.19, -0.18)$ & $-0.18 (-0.18, -0.17)$ \\ \hline \end{tabular} \caption{Summary of $L_2$ rates of convergence for estimating the mean regression function in \eqref{eqn:barbranter} and its first two derivatives.} \label{table:naive} \end{table} \subsection{Finite sample performance of the naive estimator} \label{sec:finitesample} In this section we compare the naive estimator to an ``oracle'' method that uses knowledge of the true form of the target (mean regression function or its derivatives) to choose the optimal amount of smoothing, which we implemented with a grid search. While this ``oracle'' is not an estimator, it shows the minimum loss when estimating the function in question with a penalized spline. GCV was used to choose the appropriate smoothing parameter for the various spline-based estimators in what follows. In Figure \ref{fig:results_median_fits} below, we show the naive estimator that corresponds to the median MSE in the Monte Carlo experiment. To summarize, we see that the naive estimator appears to accurately estimate both the true mean regression function ($f$) and its first derivative ($f'$). However, we observe some lack of fit around the boundaries for the second derivative ($f''$).
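A condensed sketch of this grid search follows (our illustration; it re-implements a basic P-spline fit with equally spaced knots, uses an arbitrary $\lambdan$ grid, and scores each fit against the known target, so it is usable only in simulations where the truth is available):

```python
import numpy as np
from scipy.interpolate import BSpline

def pspline(x, y, lam, K=20, q=4, m=2):
    # Basic P-spline fit on [0, 1] with equally spaced, expanded knots
    n, h = len(x), 1.0 / K
    t = np.arange(-q, K + q + 1) * h
    Bmat = BSpline(t, np.eye(len(t) - q), q - 1)(x)[:, 1:-1]
    D = np.diff(np.eye(Bmat.shape[1]), n=m, axis=0)
    a = np.linalg.solve(Bmat.T @ Bmat / n + lam * (D.T @ D), Bmat.T @ y / n)
    coef = np.zeros(len(t) - q)
    coef[1:-1] = a
    return BSpline(t, coef, q - 1)

def oracle_lambda(x, y, target, r, grid):
    """Oracle smoothing: pick the lambda whose r-th derivative fit has the
    smallest squared error against the *known* target f^(r) on a grid."""
    xg = np.linspace(0.01, 0.99, 500)
    losses = []
    for lam in grid:
        spl = pspline(x, y, lam)
        d = spl if r == 0 else spl.derivative(r)
        losses.append(np.mean((d(xg) - target(xg)) ** 2))
    best = int(np.argmin(losses))
    return grid[best], losses[best]
```

Applied per Monte Carlo replicate, the oracle loss approximately lower-bounds what any data-driven $\lambdan$ rule (GCV, REML) can achieve with the same penalty form.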
\begin{figure}[H] \centering \subfloat[\centering]{{\includegraphics[width=4.5cm]{files/m_f.pdf} }}% \subfloat[\centering]{{\includegraphics[width=4.5cm]{files/m_fp.pdf} }}% \subfloat[\centering]{{\includegraphics[width=4.5cm]{files/m_fpp.pdf} }}% \caption{Median Monte-Carlo fits of the mean regression function in \eqref{eqn:barbranter} with its first two derivatives using the naive and oracle estimators.} \label{fig:results_median_fits}% \end{figure} Next, Figure \ref{fig:results_oracle} shows the mean percentage difference between the naive and oracle methods for the mean regression function in \eqref{eqn:barbranter} and its first two derivatives across the two increasing $K$ scenarios. Overall, in comparison to the oracle method, the naive estimator's finite sample performance degrades with increasing derivatives, with an average error difference (logarithmic scale) of about 0.5\% for the mean regression function, 17\% for its first derivative, and 29\% for its second derivative. While the naive penalized spline derivative estimator is shown to converge at the optimal $L_2$ rate of convergence (Theorem \ref{thm:1}), it may also have higher mean squared error in finite samples, especially for higher derivatives. We note that the results summarized in Figure \ref{fig:results_oracle} are similar for the two increasing $K$ scenarios. 
\begin{figure}[H] \centering \subfloat[\centering]{{\includegraphics[width=4.2cm]{files/slow_k_f.pdf} }}% \subfloat[\centering]{{\includegraphics[width=4.2cm]{files/slow_k_fp.pdf} }}% \subfloat[\centering]{{\includegraphics[width=4.2cm]{files/slow_k_fpp.pdf} }}% \centering \subfloat[\centering]{{\includegraphics[width=4.2cm]{files/fast_k_f.pdf} }}% \subfloat[\centering]{{\includegraphics[width=4.2cm]{files/fast_k_fp.pdf} }}% \subfloat[\centering]{{\includegraphics[width=4.2cm]{files/fast_k_fpp.pdf} }}% \caption{$L_2$ convergence rates for $f$ and its first two derivatives with two scenarios for increasing $K$ with $n$ and how they compare with their corresponding oracle estimators. Figures (a-c) show results for slowly increasing $K$ scenario while Figures (d-f) show results for the fast increasing $K$ scenario. The smoothing parameter $\lambdan$ is chosen by the GCV method.}% \label{fig:results_oracle}% \end{figure} \subsection{Comparison with other methods} In this section, we compare the finite sample MSE of the naive estimator to other derivative estimation methods in the literature. We considered the adaptive penalty penalized spline estimator by \cite{simpkin2013}. We also used the linear combination method of \cite{daitong2016}, but it consistently had higher MSE values and results are not shown. We evaluated the methods using three mean regression functions from the literature (\citealp[]{brabanter2013,daitong2016}). As proxies for low, medium, and high noise scenarios, we considered noise levels that were 10 percent, 30 percent, and 60 percent of the range of each function. This was to understand how the methods compare at different levels of noise. 
The following are the three functions considered: \begin{equation*} f_1(x) = \sin^2(2\pi x) + \log(4/3 + x) \quad \textrm{for} \quad x\in[-1, 1], \end{equation*} \begin{equation*} f_2(x) = 32 e^{-8(1-2x)^2}(1-2x) \quad \textrm{for} \quad x\in[0, 1], \end{equation*} and the Doppler function \begin{equation*} f_3(x) = \sqrt{x(1-x)} \sin\left(\frac{2.1\pi}{x+0.05}\right) \quad \textrm{for} \quad x\in[0.25, 1]. \end{equation*} Figure \ref{fig:results_comparisons} below shows the results for estimating the first (panel a) and second (panel b) derivatives of the three mean regression functions across the three noise levels. These results indicate that the adaptive penalty methods and the naive method often perform similarly, depending on the form of the function, the noise level, and the order of the derivative. We also note that the adaptive penalty method sometimes performs better than the oracle method. This is possible since the oracle method only finds the best P-spline estimate with the form of the penalty held constant. \begin{figure}[H] \centering \subfloat[\centering]{{\includegraphics[width=7cm]{files/comparison_fp.pdf} }}% \subfloat[\centering]{{\includegraphics[width=7cm]{files/comparison_fpp.pdf} }}% \caption{Comparing derivative estimation methods in reference to the oracle estimator across different functions and noise levels. Panel (a) shows results for estimating first derivatives of the mean regression functions $f_1$, $f_2$ and $f_3$ while Panel (b) shows results for estimating second derivatives.}% \label{fig:results_comparisons}% \end{figure}
\section{Introduction} The quest to control electron spin in a solid continues to drive progress in modern information technology~\cite{Wolf2001,Awschalom2007,Waldrop2016}. A series of potential future spintronic devices has been proposed, including spin transistors~\cite{Datta1990,Chuang2015}, all-spin logic gates \cite{Dery2007,BehinAein2010,Wen2016}, spin memories~\cite{Kikkawa1997,Kroutvar2004}, and spin lasers~\cite{Gothgen2008,Iba2011,Lee2014,FariaJunior2015,Lindemann2019}. For spin transistors in particular, a fundamental requirement is the precise and reliable manipulation of spin orientation and lifetime. In semiconductors, the spin-orbit (SO) coupling generates an effective magnetic field $\beq{\Omega}$, called the SO field, which enables coherent control of the electron spin. At the same time, the SO coupling induces the detrimental effect of spin decoherence via an efficient process known as D'yakonov-Perel' (DP) spin relaxation~\cite{Dyakonov1971a}. The origin of this effect lies in the wave-vector $(\beq{k})$ dependence of the SO field together with the presence of disorder. Collisions of the spin carriers with impurities, phonons, or other carriers change the wave vector, and thereby the spin precession axis, in an uncontrolled way, which leads to randomization of the spins. A way to overcome this problem is the realization of special symmetries that allow the emergence of persistent spin textures, as was found in electron and hole gases for appropriately tuned Rashba \cite{Rashba1960,Bychkov1984} and Dresselhaus \cite{Dresselhaus1955} SO strengths, strain, or curvature radius in tubular systems~\cite{Schliemann2003,Bernevig2006,Trushin2007,Sacksteder2014, Dollinger2014,Wenk2016,Kammermeier2016,Kozulin2019,Kammermeier2020}.
In general, this symmetry becomes manifest in an SO field that is collinear in $\beq{k}$-space, and in spin-split circular Fermi contours $\epsilon_\pm$ that are related to each other by a shift of a constant wave vector $\pm\beq{Q}$, i.e., $\epsilon_-(\beq{k})=\epsilon_+(\beq{k}+\beq{Q})$~\cite{Bernevig2006}. The collinearity of the SO field preserves any parallel-oriented homogeneous spin texture. The second characteristic is associated with a new type of exact SU(2) spin-rotation symmetry of the Hamiltonian that allows for a full representation of the Lie algebra su(2)~\cite{Bernevig2006,Schliemann2017}. It is fulfilled when the SO field consists of first angular harmonics in the wave vector and ensures that the spins undergo a well-defined spin precession that is independent of the propagated path and, thus, robust against $\beq{k}$-randomizing disorder scattering~\cite{Schliemann2003,Wenk2016}. This property allows the existence of an additional \textit{inhomogeneous} spin texture, which due to its spiral structure is known as a \textit{persistent spin helix} (PSH)~\cite{Bernevig2006}. As a decisive advantage over a homogeneous texture, the PSH facilitates a controllable spin precession over long distances. It also entails numerous distinctive features in quantum transport that support experimental investigations~\cite{Sinitsyn2004,Shen2004,Schliemann2006,Badalyan2009,Kohda2012b, Li2013,Schliemann2017,Kohda2017,Liu2020}. \\\indent In planar two-dimensional electron gases (2DEGs) with a zinc-blende structure, the existence of a PSH is well-established in quantum wells grown along the [001], [110], and [111] high-symmetry crystal axes~\cite{Zutic2004,Schliemann2017,Kohda2017}. As illustrated in Fig.~\reffig{one}(e), the respective collinear SO field $\beq{\Omega}_{\rm PSH}$ for effectively $\beq{k}$-linear SO couplings is either purely aligned with the 2DEG plane, out-of-plane, or vanishes completely.
Recently, it was predicted that a PSH can also be realized in low-symmetry growth directions provided that at least two Miller indices agree in modulus and the ratio of Rashba and effective $\beq{k}$-linear Dresselhaus SO coefficients fulfills a certain relation~\cite{Kammermeier2016}. Thereby, the angle between growth direction and collinear SO field can be configured arbitrarily, giving rise to new formations of a PSH \{cf. Fig.\,\reffig{one}(e) for the exemplary SO field of [225] and [221]-oriented 2DEGs\}. \\\indent The stability of the PSH is, however, limited by an additional SO field arising from the $\beq{k}$-cubic Dresselhaus SO coupling that is generically present in these systems. While its inclusion may not destroy the collinearity of the SO field, the presence of higher angular harmonics in the wave vector generally breaks the exact SU(2) spin-rotation symmetry of the Hamiltonian and causes a decay of the PSH. Apart from this, it gives rise to new characteristic (in)homogeneous spin textures with extraordinarily long lifetimes, which for the sake of distinction we call long-lived spin textures; the superior one among them is called the \textit{longest}-lived spin texture. In reciprocal space, the $\beq{k}$-cubic Dresselhaus SO field holds three-fold rotational symmetry, and its orientation and magnitude depend strongly on the growth direction as shown in Figs.\,\reffig{one}(c) and \reffig{one}(f). As a consequence, the geometrical relations between the collinear and the symmetry-breaking part of the total SO field are complicated, and the induced relaxation effect is sensitive to the orientation of the quantum well.
Previous studies on the impact of the $\beq{k}$-cubic Dresselhaus SO field on the stability of the PSH were restricted to the well-established cases of [001] \cite{Liu2012,Koralek2009,Luffe2011,Luffe2013,Walser2012a,Kohda2012b,Salis2014,Kurosawa2015,Poshakinskiy2015,Ferreira2017}, [110] \cite{Ohno1999,Tarasenko2009,Iizasa2018}, and [111] \cite{Balocchi2011} quantum wells. \\\indent In this paper, we systematically explore the robustness of the PSH against the spin decoherence caused by the $\beq{k}$-cubic Dresselhaus SO field in zinc-blende 2DEGs of general crystal orientations. The lifetime of the PSH is juxtaposed with that of the long-lived spin textures. We complement a numerical Monte Carlo simulation of the random walk of collectively excited spins with an analysis of the spin diffusion equation to determine the dependence of the PSH lifetime on the growth direction. The Monte Carlo approach resembles the experimental situation of time-resolved magneto-optical Kerr-rotation microscopy, which has been shown for some growth directions to be more suitable for the PSH-lifetime extraction than magneto-conductance measurements of weak antilocalization. The reason is that the latter characteristics are predominantly determined by the \textit{longest}-lived spin textures. These often correspond to homogeneous spin textures whose extraordinarily long lifetimes prevent the emergence of the weak-antilocalization features necessary for reliable parameter fitting, as is the case, for instance, in [110] and [111] quantum wells~\cite{Hassenkam1997,Kammermeier2016,Iizasa2018}. The supplementary analysis of the spin diffusion equation grants insight into the underlying physical mechanisms and allows us to derive an analytic expression for the PSH lifetime for general as well as special growth directions. \\\indent Our results reveal that the most robust PSH can be formed in quantum wells grown along a low-symmetry growth direction that is well approximated by a [225] lattice vector.
These systems yield a 30\% PSH lifetime enhancement compared to conventional [001]-oriented 2DEGs and require a negligible Rashba coefficient, allowing the realization of the most stable PSH in nearly symmetric quantum wells. The origin of the suppressed spin relaxation along this direction is traced back to the strength and orientation of the $\beq{k}$-cubic Dresselhaus SO field. For the strength, a growth direction is favorable where the magnitude of the $\beq{k}$-cubic Dresselhaus SO coupling is reduced. For the orientation, it is shown that, in an optimal configuration, the $\beq{k}$-cubic Dresselhaus SO field is perpendicular to the collinear SO field, and therewith lies in the rotation plane of the PSH. In this case, the $\beq{k}$-cubic Dresselhaus SO field has globally the largest component parallel to the spin orientation of the PSH, which minimizes the relaxation. Uniting both properties renders a growth direction close to the [225] axis ideal. Since the PSH rotation axis for the [225] quantum wells is only weakly tilted out of the 2DEG plane, the PSH can be observed in conventional optical spin orientation measurements where spins are excited and detected along the growth direction. As a further advantage, we find that the lifetime of the long-lived homogeneous spin textures is particularly short in these systems, which supports the likelihood of observing weak antilocalization characteristics in magnetotransport measurements. Our findings provide a complete and comprehensive picture of the stability of the PSH and the lifetime of the long-lived spin textures in general growth directions. We identify the longest achievable lifetime for a PSH in the presence of $\beq{k}$-cubic Dresselhaus SO couplings in 2DEGs. \\\indent This paper is organized as follows. In Sec.~\refchap{sec:GenericPSH}, we introduce the general SO field for PSH-hosting quantum wells.
In Sec.~\refchap{sec:cubic}, we first discuss the impact of the $\beq{k}$-cubic SO field, and then we investigate its effect on the robustness of the PSH using Monte Carlo simulation and the spin diffusion equation in Secs.~\refchap{sec:MonteCarlo} and \refchap{sec:sde}, respectively. In Sec.~\refchap{sec:analytic}, we derive analytical expressions for the relaxation rate of the PSH as well as the homogeneous spin texture, and we discuss the critical origin of the suppressed PSH decay for [225] quantum wells. Lastly, we explore the experimental accessibility of the PSH in Sec.~\refchap{sec:accessibility}, and we close with a conclusion in Sec.~\refchap{sec:conlusion}. \begin{figure*}[htbp] \centering \includegraphics[keepaspectratio, scale=.5]{figure1.pdf} \caption{(a) Illustration of the 2DEG-inherent coordinate system with respect to the crystal axes. The general growth direction $\hat{\bf n}$, that permits spin conservation, is characterized by the polar angle $\theta\in[0,\pi/2]$ in the [110]-[001] plane measured from the [001] axis. The basis is defined as $\hat{\beq{x}}=(n_z,n_z,-2\eta)/\sqrt{2}$, $\hat{\beq{y}}=(-1,1,0)/\sqrt{2}$, and $\hat{\beq{z}}=\hat{\bf n}=(\eta,\eta,n_z)$, where $\eta=\sin\theta/\sqrt{2}$ and $n_z=\sqrt{1-2\eta^2}$. Both collinear SO field $\beq{\Omega}_{\rm PSH}$, Eq.~(\ref{soPSHfield}), for appropriately tuned Rashba and $\beq{k}$-linear Dresselhaus SO coefficients, and $\beq{k}$-cubic Dresselhaus SO field $\beq{\Omega}_{\rm 3}$, Eq.~(\ref{thirdangular}), are depicted in $\beq{k}$-space for specific growth directions in (e) and (f), respectively. The mean squares $\langle\beq{\Omega}_{\rm PSH}^2\rangle$ and $\langle\beq{\Omega}_{\rm 3}^2\rangle$, averaged over all angles $\varphi$ of the in-plane wave vector $\bm{k}$, are shown in (b) and (c), respectively. 
The orientation $\hat{\bm{u}}_{\rm PSH}$ of the PSH field, Eq.~(\ref{upsh}), is emphasized by the blue arrow in (e), which encloses the angle $\xi=\arccos(\eta/\sqrt{2-3\eta^2})$ with the surface normal $\hat{\bf n}$. The angle $\xi$ changes continuously from $\pi/2$ (in-plane) for [001] to 0 (out-of-plane) for [110]-oriented quantum wells. The $\theta$-dependent PSH wave vectors $\beq{Q}=Q \hat{\beq{y}}$, Eq.~(\ref{Qvalue}), which define the pitch of the PSH, Eq.~(\ref{pshorien}), are displayed as bold black arrows in (e). The real space structure of the PSH $\bm{s}_{\rm PSH}$, Eq.~\protect\refeq{pshorien}, determined by $\beq{\Omega}_{\rm PSH}$ is illustrated in (d). The spin precession direction is reversed between [111] and [110] since $(\beq{\Omega}_{\rm PSH})_x$ switches sign.} \label{one} \end{figure*} \section{Persistent spin helix in generic 2D electron gases}\label{sec:GenericPSH} \subsection{Spin-orbit fields for general growth directions}\label{sec:SOfields} The PSH emerges under the precondition that at least two growth-direction Miller indices agree in modulus~\cite{Kammermeier2016}. Without loss of generality, we focus on a general growth direction given by the unit vector $\hat{\bf n}$ lying in the first quadrant of the [110]-[001] crystal plane, i.e., $\hat{\bf n}=(\sin\theta/\sqrt{2},\sin\theta/\sqrt{2},\cos\theta)$. Here, the underlying basis vectors point along the high-symmetry crystal directions [100], [010], and [001] and $\theta\in[0,\pi/2]$ denotes the polar angle measured from the [001] axis as shown in Fig.\,\reffig{one}(a). For convenience and to adopt the notation of Ref.~\onlinecite{Kammermeier2016}, we introduce the parameter $\eta=\sin\theta/\sqrt{2}$, which implies that $n_z=\sqrt{1-2\eta^2}$, and we define new Cartesian basis vectors $\hat{\beq{x}}=(n_z,n_z,-2\eta)/\sqrt{2}$, $\hat{\beq{y}}=(-1,1,0)/\sqrt{2}$, and $\hat{\beq{z}}\equiv\hat{\bf n}=(\eta,\eta,n_z)$.
In this representation, the $\hat{\beq{x}}$ and $\hat{\beq{y}}$ axes span the conduction plane of the 2DEG, while the $\hat{\beq{z}}$ axis corresponds to the quantum-well growth direction. \\\indent In the vicinity of the $\Gamma$-point, the 2DEG is described by the Hamiltonian \begin{equation} \mathcal{H}=\cfrac{\hbar^2k^2}{2m}+\cfrac{\hbar}{2}\,(\beq{\Omega}_1+\beq{\Omega}_{\rm 3})\cdot\beq{\sigma} \label{Hamiltonian} \end{equation} with effective electron mass $m$, in-plane wave vector $\beq{k}=(k_x,k_y)$, and the vector of Pauli matrices $\beq{\sigma}=(\sigma_x,\sigma_y,\sigma_z)$. The SO fields for the 2DEG-inherent coordinate system are derived from a general expression of the 2D-confined SO Hamiltonian as shown in Ref.~\onlinecite{Kammermeier2016} and briefly outlined in Appendix~\ref{app:SOF}. The SO field contributions \begin{align} {\bf\Omega}_1&=\cfrac{2k}{\hbar} \begin{pmatrix} \left[\alpha+\beta^{(1)}(1+3\eta^2)n_z\right]\sin\varphi\\ \left[-\alpha+\beta^{(1)}(1-9\eta^2)n_z\right]\cos\varphi\\ -\sqrt{2}\beta^{(1)}\eta(1-3\eta^2)\sin\varphi \end{pmatrix} ,\label{firstangular} \end{align} and \begin{align} \beq{\Omega}_{\rm 3}&=\cfrac{2k}{\hbar}\beta^{(3)} \begin{pmatrix} (1-3\eta^2)n_z\sin3\varphi\\ -(1-3\eta^2)n_z\cos3\varphi\\ 3\sqrt{2}\eta(1-\eta^2)\sin3\varphi \end{pmatrix} \label{thirdangular} \end{align} are sorted in terms of first and third angular harmonics in the in-plane wave vector, which is represented in polar coordinates, i.e., $k_x=k\cos\varphi$ and $k_y=k\sin\varphi$, with in-plane polar angle $\varphi$. Here, the first angular harmonic contribution $\beq{\Omega}_1$ involves the Rashba and effective $\beq{k}$-linear Dresselhaus SO coefficients $\alpha=\gamma_{\rm R}\langle\mathcal{E}_z\rangle$ and $\beta^{(1)}=\gamma_{\rm D}(\langle k_z^2\rangle - k^2/4)$, respectively. 
Both coefficients scale with the material-specific bulk parameters $\gamma_{\rm R}$ and $\gamma_{\rm D}$ and constitute an average over the ground-state wave function determined by a self-consistently calculated confinement potential. Thus, any inhomogeneities, e.g., due to local fluctuations of the doping ions at the sides of the quantum well~\cite{Sherman2003a,Sherman2003b}, are disregarded. The Rashba SO coupling is characterized by an electric field $\beq{\mathcal{E}}=\mathcal{E}_z\hat{\beq{z}}$ originating from a potential gradient along the growth direction $\hat{\beq{z}}$. The effective $\beq{k}$-linear Dresselhaus SO coefficient is predominantly determined by the width and structure of the quantum well through the expectation value $\langle k_z^2\rangle$. For instance, an infinite square-well potential of width $a$ yields $\langle k_z^2\rangle=(\pi/a)^2$. Aside from this, the coefficient includes a small term $\propto k^2$ resulting from the first angular harmonic part of the $\beq{k}$-cubic Dresselhaus SO field. The third angular harmonic $\beq{k}$-cubic Dresselhaus contribution $\beq{\Omega}_3$ is distinguished by the prefactor $\beta^{(3)}=\gamma_{\rm D}k^2/4$. Due to the proportionality $\propto k^2$, both Dresselhaus coefficients depend on the carrier sheet density $n_s$, i.e., at zero temperature $k$ is evaluated at the Fermi wave vector $k_{\rm F}=\sqrt{2\pi n_s}$. For comparison, it is practical to work with the ratios of the SO coefficients where we employ the definitions $\Gamma_1=\alpha/\beta^{(1)}$ and $\Gamma_3=\beta^{(3)}/\beta^{(1)}$ hereafter. 
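The two field contributions of Eqs.~(\ref{firstangular}) and (\ref{thirdangular}) are straightforward to evaluate numerically. The following sketch is our own illustration (units with $\hbar=1$; all coefficients passed as plain numbers) and returns both harmonics in the 2DEG-inherent basis:

```python
import numpy as np

def so_fields(k, phi, eta, alpha, beta1, beta3):
    """First- and third-harmonic SO fields Omega_1 and Omega_3 in the
    2DEG-inherent basis (x, y, z) for growth parameter eta (hbar = 1)."""
    nz = np.sqrt(1.0 - 2.0 * eta ** 2)
    pre = 2.0 * k
    omega1 = pre * np.array([
        (alpha + beta1 * (1 + 3 * eta ** 2) * nz) * np.sin(phi),
        (-alpha + beta1 * (1 - 9 * eta ** 2) * nz) * np.cos(phi),
        -np.sqrt(2.0) * beta1 * eta * (1 - 3 * eta ** 2) * np.sin(phi),
    ])
    omega3 = pre * beta3 * np.array([
        (1 - 3 * eta ** 2) * nz * np.sin(3 * phi),
        -(1 - 3 * eta ** 2) * nz * np.cos(3 * phi),
        3 * np.sqrt(2.0) * eta * (1 - eta ** 2) * np.sin(3 * phi),
    ])
    return omega1, omega3
```

For instance, at $\eta=1/\sqrt{3}$ ([111]) the in-plane components of $\beq{\Omega}_3$ vanish, while at $\eta=0$ ([001]) both fields are purely in-plane, in line with Fig.\,\reffig{one}.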
\subsection{Collinear spin-orbit field and emergent persistent spin textures}\label{sec:PSHfield} A vanishing of the $\beq{k}$-cubic SO contribution, i.e., $\Gamma_3=0$, together with an optimal ratio of the $\beq{k}$-linear SO coefficients, i.e., $\Gamma_1=\Gamma_0:=(1-9\eta^2)n_z$, ensures an SO field $\beq{\Omega}_{\rm PSH} :=\beq{\Omega}_1(\alpha= \beta^{(1)} \Gamma_0)$, in the following denoted as PSH field, that is collinear in $\beq{k}$-space and reads as~\cite{Kammermeier2016} \begin{align} \beq{\Omega}_{\rm PSH} &=\cfrac{2k}{\hbar}\beta^{(1)} \begin{pmatrix} 2(1-3\eta^2)n_z\\ 0\\ -\sqrt{2}\eta(1-3\eta^2) \end{pmatrix} \sin\varphi. \label{soPSHfield} \end{align} As depicted in Figs.~\reffig{one}(b) and \reffig{one}(e), both the orientation and magnitude of $\beq{\Omega}_{\rm PSH}$ alter with the growth direction. The field is generally oriented perpendicular to the $\hat{\beq{y}}$ axis and encloses the angle $\xi=\arccos(\eta/\sqrt{2-3\eta^2})$ with the $\hat{\beq{z}} (\hat{\bf n})$ axis. Thereby, it allows a continuous modulation from an in-plane configuration for [001] to an out-of-plane configuration for [110] quantum wells. It generally vanishes for $\beq{k}\parallel\hat{\beq{x}}$ and is maximal for $\beq{k}\parallel\hat{\beq{y}}$, where the corresponding strength $\| \beq{\Omega}_{\rm PSH} \|_{\rm max}$ is largest for $\eta=0$, which corresponds to a [001] growth direction. In the special situation of a [111] 2DEG, i.e., $\eta=1/\sqrt{3}$, $\beq{\Omega}_{\rm PSH}$ vanishes completely. \\\indent The PSH field leads to an SU(2) spin-rotation symmetry of the Hamiltonian, Eq.~(\ref{Hamiltonian}), that remains intact in the presence of spin-independent disorder and interactions~\cite{Bernevig2006,Schliemann2017}.
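The collinearity under the condition $\Gamma_1=\Gamma_0$ can be verified numerically. The sketch below is our own check (units with $\hbar=1$): it computes the maximal cross product between $\beq{\Omega}_1$ and the fixed axis of Eq.~(\ref{soPSHfield}) over the Fermi circle.

```python
import numpy as np

def omega1(k, phi, eta, alpha, beta1):
    """First-harmonic SO field in the 2DEG-inherent basis (hbar = 1)."""
    nz = np.sqrt(1.0 - 2.0 * eta ** 2)
    return (2.0 * k) * np.array([
        (alpha + beta1 * (1 + 3 * eta ** 2) * nz) * np.sin(phi),
        (-alpha + beta1 * (1 - 9 * eta ** 2) * nz) * np.cos(phi),
        -np.sqrt(2.0) * beta1 * eta * (1 - 3 * eta ** 2) * np.sin(phi),
    ])

def collinearity_residual(eta, beta1=1.0, k=1.0, n_phi=64):
    """Max |Omega_1 x u| over the Fermi circle under the PSH condition
    alpha = Gamma_0 beta1, with Gamma_0 = (1 - 9 eta^2) n_z."""
    nz = np.sqrt(1.0 - 2.0 * eta ** 2)
    alpha = (1 - 9 * eta ** 2) * nz * beta1
    u = np.array([2 * (1 - 3 * eta ** 2) * nz, 0.0,
                  -np.sqrt(2.0) * eta * (1 - 3 * eta ** 2)])
    norm = np.linalg.norm(u)
    if norm == 0.0:          # [111]: the PSH field vanishes identically
        return 0.0
    u = u / norm
    phis = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)
    return max(np.linalg.norm(np.cross(omega1(k, p, eta, alpha, beta1), u))
               for p in phis)
```

The residual vanishes to machine precision for any admissible $\eta$, confirming that the field direction is independent of $\varphi$.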
Considering a general spin density ${\beq{s}}(\beq{r},t)$ in real space with position vector $\beq{r}$ and time $t$, here and in the following locally and initially normalized as $\|{\beq{s}}(\beq{r},0)\|=1$, the SU(2) symmetry gives rise to two kinds of {persistent spin textures}: (i) A homogeneous spin texture ${\beq{s}}_{\rm homo}=\pm\hat{\bm{u}}_{\rm PSH}$ that is collinear with the direction of the PSH field \begin{equation} \hat{\bm{u}}_{\rm PSH}=\cfrac{{\rm sgn}(1-3\eta^2)}{\sqrt{2-3\eta^2}} \begin{pmatrix} -\sqrt{2}n_z\\ 0\\ \eta \end{pmatrix}, \label{upsh} \end{equation} which we define here as the unit vector of $\beq{\Omega}_{\rm PSH}(k_y<0)$ [Fig.\,\reffig{one}(e)]. (ii) The PSH \begin{align} {\beq{s}}_{\rm PSH}(\beq{r})=&(\hat{\beq{y}}\times\hat{\bm{u}}_{\rm PSH})\cos({\beq{Q}}\cdot{\beq{r}}) - \hat{\beq{y}}\sin({\beq{Q}}\cdot{\beq{r}}), \label{pshorien} \end{align} which spatially precesses about the $\hat{\bm{u}}_{\rm PSH}$ orientation and along the direction of $\pm\hat{\beq{y}}$ ($\varphi=\pm\pi/2$) [Fig.~\reffig{one}(d)]. Here, we neglected an arbitrary phase shift for simplicity. The sign function in Eq.~\refeq{upsh} implies an inversion of the precession axis of ${\beq{s}}_{\rm PSH}$ at [111] due to the sign switching of $(\beq{\Omega}_{\rm PSH})_x$ [cf. Figs.~\reffig{one}(e) and \reffig{one}(d)]. The pitch of the helix is defined by the PSH wave vector $\beq{Q}=Q\hat{\beq{y}}$ characterized by the maximum strength $\| \beq{\Omega}_{\rm PSH}\|_{\rm max}$, i.e., \cite{Kammermeier2016} \begin{equation} Q(\eta)=\frac{m}{\hbar k} \| \beq{\Omega}_{\rm PSH} \|_{\rm max}=\Qo\sqrt{1-3\eta^2/2}|1-3\eta^2|, \label{Qvalue} \end{equation} where $Q(0):=\Qo = 4m\beta^{(1)}/\hbar^2$ represents the PSH wave-vector amplitude for a [001] 2DEG. We define the spin precession length $L$ as the (minimal) spatial length of one precession cycle, i.e., $L(\eta)=2\pi/Q(\eta)$, as illustrated in Fig.\,\reffig{one}(d) for $L(0):=L_0$. 
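The growth-direction dependence of the pitch can be checked directly from Eq.~(\ref{Qvalue}). The short sketch below is our own; it evaluates $Q(\eta)/Q_0$ and the corresponding precession length.

```python
import numpy as np

def q_ratio(eta):
    """Q(eta)/Q_0 = sqrt(1 - 3 eta^2 / 2) * |1 - 3 eta^2|."""
    return np.sqrt(1.0 - 1.5 * eta ** 2) * np.abs(1.0 - 3.0 * eta ** 2)

def precession_length(eta, L0):
    """Spin precession length L(eta) = 2 pi / Q(eta) = L0 / (Q(eta)/Q0)."""
    return L0 / q_ratio(eta)
```

$Q/Q_0$ drops from 1 at [001] to 0 at [111] ($\eta=1/\sqrt{3}$) and recovers a local maximum of $1/4$ at [110] ($\eta=1/\sqrt{2}$).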
The PSH wave vector is displayed as black bold arrows sketched in Fig.~\reffig{one}(e). In accordance with the dependence of $\| \beq{\Omega}_{\rm PSH} \|_{\rm max}$ on $\eta$, the magnitude of the PSH wave vector continuously decreases from the global maximum at [001] ($\eta=0$) until it vanishes at [111] ($\eta=1/\sqrt{3}$), and then it increases again until a local maximum is recovered at [110] ($\eta=1/\sqrt{2}$). \section{Stability of the spin helix}\label{sec:stability} \subsection{Impact of cubic Dresselhaus field}\label{sec:cubic} Taking into account the $\beq{k}$-cubic Dresselhaus SO field $\beq{\Omega}_{\rm 3}$ that involves the third angular harmonics in the wave vector, the PSH $\beq{s}_{\rm PSH}$ acquires a relaxation factor $\exp(-t/\tau_{\rm PSH})$ due to the violation of the exact SU(2) spin-rotation symmetry of the Hamiltonian. The consequential finite PSH lifetime $\tau_{\rm PSH}$ depends on the strength and structure of $\beq{\Omega}_{\rm 3}$ and its non-trivial geometric relations to the PSH field $\beq{\Omega}_{\rm PSH}$. Notably, the breaking of SU(2) symmetry is not necessarily accompanied by a destruction of the collinearity of the SO field but can be solely due to the presence of third angular harmonics in the wave vector. Thus, the inclusion of $\beq{\Omega}_{\rm 3}$ may continue to allow for a homogeneous spin texture with infinite lifetime but perhaps distinct orientation. As will be discussed in more detail in Sec.~\ref{sec:analytic}, the underlying reason is that, in general, neither the PSH nor the homogeneous persistent spin texture, as defined in the previous section, are eigenstates of the spin diffusion equation any longer. The eigenstates of the spin diffusion equation with the total SO field $\beq{\Omega}_{\rm PSH}+\beq{\Omega}_{\rm 3}$ give rise to new characteristic spin textures with particularly long spin lifetimes, which can have a different real-space structure. 
These spin textures are, in the following, referred to as \textit{long-lived homogeneous} and \textit{long-lived helical spin textures} depending on whether or not their spin orientation modulates in real space. \\\indent As highlighted in Fig.\,\reffig{one}(f), the SO field $\beq{\Omega}_{\rm 3}$ holds three-fold rotational symmetry, and its orientation and magnitude exhibit rich variations with the growth direction. For a clearer understanding, we display in Fig.\,\reffig{one}(c) the $\theta$-dependence of the components of $\beq{\Omega}_{\rm 3}$ together with the squared magnitude averaged over the directions of the in-plane wave vector, i.e., $\langle \beq{\Omega}_{\rm 3}^2\rangle=\int_0^{2\pi}\beq{\Omega}_{\rm 3}^2\,{\rm d}\varphi/(2\pi)$. While for small angles of $\theta$ the in-plane components dominate, they become insignificant for large angles. The SO field is oriented purely in-plane for [001] and purely out-of-plane for [111] and [110] growth directions. The latter two directions are special since $\beq{\Omega}_{\rm 3}$ and therewith the total SO field is collinear and, hence, a homogeneously $\hat{\bm{z}}$-polarized spin texture does not decay. The mean square $\langle \beq{\Omega}_{\rm 3}^2\rangle$ shows only weak modulations with $\theta$, where a global minimum (maximum) is obtained for [001] ([111]) quantum wells. \\\indent Hence, the individual modulations of the PSH field $\beq{\Omega}_{\rm PSH}$ and the $\beq{k}$-cubic Dresselhaus SO field $\beq{\Omega}_{\rm 3}$ yield an intricate dependence of the robustness of the persistent spin textures on the growth direction, which will be elaborated on in detail below. \begin{figure*} \centering \includegraphics[keepaspectratio, scale=0.5]{figure2.pdf} \caption{(a) Monte-Carlo-simulated spatiotemporal evolution of the spin polarization $s_\perp$ along the $\hat{\beq{y}}$ axis is shown in the first row for selected growth directions and a relative strength $\Gamma_3=0.08$ of the $\beq{k}$-cubic field.
Reconstructions of the simulated $s_\perp$ by using the fit function Eq.\,\protect\refeq{sperp} are depicted in the second row. (b) Snapshot of $s_\perp$ along $\hat{\beq{y}}$ at time $t=\SI{1}{ns}$ for continuous variations of the growth angle $\theta$. (c) Map of the minima of the eigenvalues $\lambda_n(\beq{q})$ of the spin diffusion operator $\sdo$ in dependence of $\bm{q}=(q_x=0,q_y)$ and $\theta$ for $\Gamma_3=0.16$ in terms of the spin precession rate $1/\tau_0=D_sQ_0^2/(4\pi^2)$ for [001] quantum wells. Black solid and dashed lines indicate the slices of $\beq{q}=\beq{Q}$ and $\beq{q}=\beq{0}$, respectively. Extracted values of $Q$ from Monte Carlo simulations for $\Gamma_3=0.16$ using Eq.~\protect\refeq{sperp} are displayed as light blue diamonds. (d) Computed eigenvalues (black solid lines) gathered along $\beq{q}=\beq{Q}$ in comparison with the extracted spin relaxation rate from Monte Carlo simulation (colored circles) for several $\Gamma_3$ values. The eigenvalue for $\beq{q}=\beq{0}$ is shown only at $\Gamma_3=0.16$ as a black dashed line. (e) PSH condition $\alpha/\beta^{(1)}=\Gamma_0$ as a function of $\theta$.} \label{two} \end{figure*} \subsection{Numerical Monte Carlo simulation}\label{sec:MonteCarlo} To explore the relaxation of the PSH, we first conduct a numerical Monte Carlo simulation of the spin random walk in a disordered system~\cite{Kiselev2000}. Considering zero temperature and electronic states centered at the Fermi energy, an ensemble of $5\times10^4$ spins ($\beq{S}$) is aligned perpendicular to $\beq{\Omega}_{\rm PSH}$ at the initial time ${t=0}$. The states are uniformly distributed over the Fermi circle with approximately isotropic Fermi wave vector $k_{\rm F}=\sqrt{2\pi n_s}$, where we select a carrier sheet density of $n_s=\SI{1.7e15}{m^{-2}}$. The carriers undergo a quasi-classical random walk with a ballistic motion between scattering events, which are considered elastic, isotropic, uncorrelated, and spin-independent.
The time evolution is characterized by a mean elastic scattering time $\tau=2D_s/v_{\rm F}^2=\SI{1.88}{ps}$ with Fermi velocity $v_{\rm F}=\hbar k_{\rm F}/m$ corresponding to a 2D diffusion constant $D_s=\SI{0.03}{m^2/s}$ and an effective mass of GaAs $m=0.067\, m_0$, where $m_0$ denotes the bare electron mass. In each time interval of the ballistic motion, the spins propagate with Fermi velocity $v_{\rm F}$ while precessing about the SO field following the differential equation $(\partial /\partial t)\beq{S}=(\beq{\Omega}_{\rm PSH}+\beq{\Omega}_{\rm 3})\times\beq{S}$. After time $t\gg\tau$, we locally detect spin projections perpendicular to $\beq{\Omega}_{\rm PSH}$ and thereby extract the spin density component $s_\perp(\beq{r},t)$. In accordance with the typical experimental scenario of an optical spin excitation, we assume an initialized Gaussian spin distribution in real space centered at $\beq{r}=\bm{0}$ with sigma width $w=\SI{0.5}{\mu m}$ defining the laser spot size. We fix the effective $\beq{k}$-linear Dresselhaus SO coefficient $\beta^{(1)}=\SI{5.0}{meV\AA}$ throughout all simulations, while the Rashba SO parameter $\alpha$ varies with $\eta$ $(\theta)$ according to the PSH condition $\Gamma_1=\Gamma_0$. Among all crystal orientations, this gives a minimal pitch of $L_0=\SI{3.58}{\mu m}$, obtained for [001] quantum wells, which is much larger than the mean free path $\tau v_{\rm F}=\SI{0.339}{\mu m}$, ensuring the DP regime of all Monte Carlo simulated spin dynamics. To clarify the impact of the symmetry-breaking SO field $\beq{\Omega}_{\rm 3}$, we modify the $\beq{k}$-cubic Dresselhaus parameter $\beta^{(3)}$ independently of $\beta^{(1)}$, although this is difficult to achieve in experiment due to the mutual dependence on the carrier density.
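The random-walk scheme just described can be condensed into a few lines. The sketch below is a stripped-down illustration of our own: it tracks only the spin precession between isotropic scattering events, ignores real-space positions and the Gaussian excitation spot, and uses a toy SO field and dimensionless units rather than the parameters given above.

```python
import numpy as np

def precess(spin, omega, dt):
    """Rotate `spin` about `omega` by angle |omega|*dt (Rodrigues formula),
    i.e. integrate dS/dt = Omega x S over one ballistic flight."""
    w = np.linalg.norm(omega)
    if w == 0.0:
        return spin
    axis, theta = omega / w, w * dt
    return (spin * np.cos(theta)
            + np.cross(axis, spin) * np.sin(theta)
            + axis * np.dot(axis, spin) * (1.0 - np.cos(theta)))

def random_walk_spins(n_spins, n_steps, tau, omega_of_phi, seed=0):
    """Quasi-classical DP random walk: between elastic, isotropic scattering
    events (exponentially distributed flight times, mean tau) each spin
    precesses about the SO field of its current propagation direction."""
    rng = np.random.default_rng(seed)
    spins = np.tile(np.array([0.0, 0.0, 1.0]), (n_spins, 1))
    for i in range(n_spins):
        for _ in range(n_steps):
            phi = rng.uniform(0.0, 2.0 * np.pi)  # isotropic redirection
            spins[i] = precess(spins[i], omega_of_phi(phi),
                               rng.exponential(tau))
    return spins
```

Feeding in an in-plane field of fixed magnitude, the ensemble polarization decays by DP relaxation while each spin keeps unit length.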
\\\indent The first row of Fig.~\reffig{two}(a) collects the Monte-Carlo-simulated time evolution of $s_{\perp}$ along the $\hat{\beq{y}}$ axis for several growth directions distinguished by $\theta$ under the influence of the $\beq{k}$-cubic SO field $\beq{\Omega}_{\rm 3}$ with relative strength $\Gamma_3=0.08$ ($\beta^{(3)}=\SI{0.4}{meV\AA}$). The initialized Gaussian spin polarization evolves into a helical texture with distinct precession lengths $L$, which reflects the $\theta$-dependence of the wave vector $Q$, Eq.~\refeq{Qvalue}. To highlight the continuous changes of the spin precession length, we plot the spatial evolution of $s_{\perp}$ at time $t=\SI{1}{ns}$ in dependence of $\theta$ in Fig.\,\reffig{two}(b). For the [111] orientation, the helical structure disappears, which is consistent with the vanishing of $\beq{\Omega}_{\rm PSH}$ and $Q$. \\\indent To extract the wave vector $Q$ and the relaxation rate of the remnant helical spin density $1/\tau_{\rm hel}$, we fit the data of the Monte Carlo simulation using the function~\cite{Salis2014} \begin{align} s_{\perp}=& \frac{w^2}{w^2 + 2D_st}\,\exp{\left[{-\frac{y^2+2w^2Q^2D_st}{2(w^2 + 2D_st)}}\right]}\nonumber\\ &\times \exp\left({-\cfrac{t}{\tau_{\rm hel}}}\right)\cos{\left( \frac{2D_st}{w^2 + 2D_st}{Q}\,y\right)}. \label{sperp} \end{align} Setting $1/\tau_{\rm hel}$ and $Q$ as free parameters, we fit the data by Eq.~\refeq{sperp} and obtain good agreement with the Monte Carlo simulation as shown in the second row of Fig.~\reffig{two}(a). It is noteworthy that we also carried out marginal adjustments of $D_s$ in the fitting procedure to compensate for minor deviations ($\approx3$\%) from the input value due to slight fluctuations of $\tau$ in the numerical simulation.
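For reference, the fit function of Eq.~\refeq{sperp} reads in code as follows (a direct transcription; the argument names are ours):

```python
import numpy as np

def s_perp(y, t, w, Q, Ds, tau_hel):
    """Helical spin polarization: diffusing Gaussian envelope times an
    exponentially damped cosine with wave number Q."""
    denom = w ** 2 + 2.0 * Ds * t
    envelope = (w ** 2 / denom) * np.exp(
        -(y ** 2 + 2.0 * w ** 2 * Q ** 2 * Ds * t) / (2.0 * denom))
    phase = (2.0 * Ds * t / denom) * Q * y
    return envelope * np.exp(-t / tau_hel) * np.cos(phase)
```

At $t=0$ and $y=0$ the polarization is normalized to one, and the damped cosine inherits the wave number $Q$ at late times.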
\\\indent In the subsequent sections, we discuss the $\theta$-dependence of the extracted parameters and show explicitly that low-symmetry quantum wells near a [225] orientation constitute the ideal system to maximize the PSH lifetime and explain the physical origin. \subsection{Spin diffusion equation}\label{sec:sde} To take a closer look at the impact of the $\beq{k}$-cubic SO field on the PSH dynamics obtained by the Monte Carlo simulation and to elucidate the underlying physical mechanism, we study the spin decoherence using the spin diffusion equation. Numerous papers were devoted to tracking the spatiotemporal evolution of a spin density in different parameter regimes using semiclassical~\cite{Malshukov2000,Schwab2006,Yang2010,Luffe2011} or diagrammatic~\cite{Burkov2004,Stanescu2007,Wenk2010,Liu2012,Poshakinskiy2015} approaches. \\\indent In this paper, we concentrate on the low-energy regime with weak SO coupling and disorder at zero temperature. Selecting the Fourier representation with small frequencies $\omega$ and in-plane wave vectors $\beq{q}$, $\beq{k}$, the dynamics of the Fourier-transformed spin density $\tilde{\beq{s}}(\beq{q},\omega)={\int {\rm d}r^2 \int{\rm d}t\,e^{i(\omega t-\beq{q}\cdot\beq{r} )}{\beq{s}}(\beq{r},t)}$ is governed by the diffusion equation~\cite{Wenk2010,wenkbook,Kammermeier2016} \begin{equation} \beq{0}=(D_s\beq{q}^2-i\omega+1/\hat{\beq{\tau}}_{\rm DP})\tilde{\beq{s}}(\beq{q},\omega)-\cfrac{2\hbar\tau }{im} \langle(\beq{k}\cdot\beq{q})\beq{\Omega}\rangle\times\tilde{\beq{s}}(\beq{q},\omega) \label{eq:diffeq0} \end{equation} with the DP spin relaxation tensor $(1/\hat{\bm{\tau}}_{\rm DP})_{ij}={\tau(\langle\bm{\Omega}^2\rangle\delta_{ij}-\langle\Omega_i\Omega_j\rangle)}$. The average $\langle.\rangle$ is performed over all polar angles $\varphi$ of the wave vector $\beq{k}$. 
Notably, if we account for anisotropic scattering, the first and third angular harmonic SO fields involve distinct scattering times $\tau_1$ and $\tau_3$, respectively~\cite{Knap1996}. The above equation is still valid in this case, though, if one replaces $\tau\rightarrow \tau_1$ and $\beta^{(3)}\rightarrow \beta^{(3)}\sqrt{\tau_3/\tau_1}$. It is practical to rewrite Eq.~(\ref{eq:diffeq0}) in terms of a diffusion operator $\sdo$, which comprises all dynamical properties and yields \begin{equation} \beq{0}=\left[\sdo(\beq{q})-i\omega\right]\tilde{\beq{s}}(\beq{q},\omega). \label{diffeq} \end{equation} The eigenvalues $\lambda_n(\beq{q})$ ($n=1,2,3$) of $\sdo(\beq{q})$ describe spin relaxation rates of a system with arbitrary Rashba and Dresselhaus SO fields according to Eqs.~(\ref{firstangular}) and (\ref{thirdangular}). The explicit expression of $\sdo$ is presented in App.~\refchap{app1}. \\\indent Fig.~\reffig{two}(c) shows the smallest of the three eigenvalues $\lambda(\beq{q})_{\rm min}$ as a function of the wave vector $q_y$ $(q_x=0)$ and the growth angle $\theta$ under the PSH condition $\Gamma_1=\Gamma_0$ and for $\Gamma_3=0.16$. As emphasized by the black dashed and solid lines in Fig.~\reffig{two}(c), the eigenvalues exhibit generally three minima, where one occurs at $\beq{q}=\beq{0}$ and the other two at finite $\beq{q}=\pm\beq{Q}$, whose magnitude, despite the $\beq{k}$-cubic SO terms, is perfectly described by Eq.~(\ref{Qvalue}). The local minima refer to the long-lived spin textures whereas the global minimum defines the \textit{longest}-lived or superior spin texture, which, depending on $\beq{q}$, can be either homogeneous ($\beq{q}=\beq{0}$) or helical ($\beq{q}\neq\beq{0}$). We also show that the values of $Q$ extracted by Monte Carlo simulation in terms of $Q_0$, the light blue diamonds in Fig.~\reffig{two}(c), agree well with the ideal functional behavior in Eq.\,\refeq{Qvalue}.
\\\indent We first focus on the helical texture and plot the spin relaxation rate $\lambda(\beq{Q})_{\rm min}=1/\tau_{\rm hel}$ in dependence of $\theta$ for several values of $\Gamma_3$ (black solid lines) together with the respective results obtained by the Monte Carlo simulation (colored circles) in Fig.~\reffig{two}(d). To emphasize that the parameters are selected in the desired regime where the spin lifetime exceeds the spin precession time, the spin relaxation rate is displayed in units of $1/\tau_0=D_s\Qo^2/(4\pi^2)$ corresponding to the maximal spin precession rate, which is obtained for a [001] quantum well. We find excellent agreement for all $\Gamma_3$ values between the two different approaches and see a rich dependence of $1/\tau_{\rm hel}$ on $\theta$. \\\indent Firstly, the [110] direction shows a less robust spin texture compared to [001], consistent with a previous calculation \cite{Iizasa2018}. Secondly, the salient vanishing of $1/\tau_{\rm hel}$ at [111] results from the existence of a homogeneous persistent spin texture as $\beq{\Omega}_{\rm PSH}=\beq{0}$ and $\beq{\Omega}_{\rm 3}$ is collinear with the [111] axis. Since both excited and detected spin textures contain a finite component parallel to [111], the extracted spin relaxation rate corresponds to the homogeneous texture and not to a helical one. A similar argument holds in the vicinity of [111], where the wave vector $Q$ is negligible and the long-lived texture is basically homogeneous. As a homogeneous texture lacks the ability for manipulable spin orientation, we shall not be interested in these growth directions. Thirdly and most remarkably, another local minimum occurs near the [225] low-symmetry growth direction as emphasized by the colored grid line in Fig.\,\reffig{two}(d). Compared to conventional [001]-oriented 2DEGs, we find here a spin lifetime enhancement of 30\%, while a helical spin texture is retained.
A further attractive feature of [225] is that the Rashba coefficient $\alpha$ almost vanishes at the PSH condition $\Gamma_0$, as calculated in Fig.\,\reffig{two}(e). This implies that a symmetric quantum well already exhibits a PSH without the need to tune $\alpha$ electrically~\cite{Nitta1997}. This reduces complications arising from a possible inhomogeneity of $\alpha$, which causes local imbalances of the ratio $\alpha/\beta^{(1)}$ and constitutes an additional source of spin relaxation~\cite{Sherman2003a,Sherman2003b,Liu2006,Glazov2010,Bindel2016}. The actual vanishing point of $\alpha$ is at $\eta=1/3$, corresponding to an irrational Miller index, but it is well approximated by a [225] direction~\cite{Kammermeier2016}. \\\indent We now turn to the spin relaxation rate $\lambda(\beq{0})_{\rm min}$ of the homogeneous texture, which is displayed in Fig.~\reffig{two}(d) as a black dotted line for $\Gamma_3=0.16$. The rate vanishes at [111] and [110] because the total SO field remains collinear. Compared with the pertaining helical rate, the lifetime of the helical texture is clearly superior to that of the homogeneous texture only in the vicinity of the [225] direction. At [225] we generally find $\lambda(\beq{0})_{\rm min}\approx 2\lambda(\beq{Q})_{\rm min}$ for arbitrary reasonable values of $\Gamma_3$. This implies that a [225]-oriented 2DEG is also suitable for an experimental spin lifetime extraction using magneto-conductance measurements of the weak antilocalization, as further discussed in Sec.~\ref{sec:accessibility}. \\\indent In the following, we analyze the growth-direction dependence of the long-lived spin textures in detail and elucidate the origin of the $\theta$-dependent robustness of the PSH. 
\subsection{Analytic discussion}\label{sec:analytic} \begin{figure*} \centering \includegraphics[keepaspectratio, scale=.5]{figure3.pdf} \caption{(a) Spin relaxation rate of the long-lived helical spin texture $\lambda(\beq{Q})_{\rm min}=1/\tau_{\rm hel}$ (colored solid lines) in comparison with the PSH relaxation rate $1/\tau_{\rm PSH}$ [Eq.~(\ref{analytical2})] (black solid lines) for several $\Gamma_3$ values in units of the spin precession rate in [001] quantum wells $1/\tau_0$. The differences between the eigenvalue $\lambda(\beq{Q})_{\rm min}$ and $1/\tau_{\rm PSH}$ in the vicinity of [111] imply that the long-lived helical spin texture deviates from the PSH. A local minimum is found at $\eta\approx0.341$, which is close to [225], where $\eta\approx0.348$ (red grid line). (b) Relaxation contributions parallel ($\langle \beq{\Omega}_\parallel^2\rangle\tau$) [Eq.~(\ref{flucpara})] and perpendicular ($\langle \beq{\Omega}_\perp^2\rangle\tau/2$) [Eq.~(\ref{flucperp})] to $\beq{\Omega}_{\rm PSH}$ are plotted for $\Gamma_3=0.16$ as dashed and solid lines, respectively. The parallel contribution vanishes at $\eta\approx0.388$. In (a) and (b), the respective points $\eta\approx0.341$ and $\eta\approx0.388$ are highlighted as triangles and circles. (c) Line shapes of $\lambda(\beq{Q})_{\rm min}$ and $1/\tau_{\rm PSH}$ are compared for several $\Gamma_3$ values in units of the PSH relaxation rate $1/\tau_{\rm psh,0}$ for [001] quantum wells. The smallest selected value $\Gamma_3=0.017$ corresponds to the experimentally extracted ratio in Ref.~[\onlinecite{Walser2012b}]. The range of growth angles where both rates deviate gets narrower as $\Gamma_3$ decreases. (d) Mean squares of the SO strengths $\langle\beq{\Omega}_{\rm PSH}^2\rangle$ and $\langle\beq{\Omega}_{\rm 3}^2\rangle$ for different $\Gamma_3$ values with colors according to (c). 
The range of growth directions where $\langle\beq{\Omega}_{\rm 3}^2\rangle$ exceeds $\langle\beq{\Omega}_{\rm PSH}^2\rangle$ increases with $\Gamma_3$, yielding large deviations of the long-lived helical spin texture from the PSH and, thus, distinct relaxation rates.} \label{three} \end{figure*} After including the $\beq{k}$-cubic SO field $\beq{\Omega}_{\rm 3}$, the formerly persistent spin textures $\beq{s}_{\rm PSH}$ and $\beq{s}_{\rm homo}$, as defined in Sec.~\ref{sec:PSHfield}, are no longer eigenstates of the system. For this reason, the decay of these textures is in general not described by a single exponential function. However, since $\beq{\Omega}_{\rm 3}$ usually constitutes a small correction to $\beq{\Omega}_1$, it is a good approximation to assume a single effective relaxation factor $\exp(-t/\tau_{\rm psh,homo})$, whose relaxation rate is given by projecting the diffusion operator $\sdo$ onto these textures, i.e., $1/\tau_{\rm PSH}\equiv \langle\tilde{\beq{s}}_{\rm PSH}|\sdo (\beq{Q})|\tilde{\beq{s}}_{\rm PSH}\rangle$ and $1/\tau_{\rm homo}\equiv \langle\tilde{\beq{s}}_{\rm homo}|\sdo (\beq{0})|\tilde{\beq{s}}_{\rm homo}\rangle$. Comparing these relaxation rates with those of the long-lived spin textures provides a deeper insight into the impact of the $\beq{k}$-cubic SO field. \subsubsection{Relaxation of the spin helix} To reveal the underlying physical picture of the reduced relaxation in [225] quantum wells, however, it is more instructive to note that the contributions of $\beq{\Omega}_{\rm 3}$ in the diffusion operator are decoupled from $\beq{q}$ and $\beq{\Omega}_{\rm PSH}$. This happens because the mixing terms between the first and third angular harmonics average to zero over the polar angle of the wave vector. Thus, the relaxation is purely determined by the $\beq{\Omega}_{\rm 3}$ terms in the DP tensor $1/\hat{\beq{\tau}}_{\rm DP}$ in Eq.~(\ref{eq:diffeq0}). 
Recalling that in the DP formalism only the components of the SO field perpendicular to the given spin orientation lead to relaxation, we can apply this to the texture of the PSH and obtain \begin{align} \frac{1}{\tau_{\rm PSH}}&=\tau\int_0^{L}\frac{{\rm d}y}{L}\langle\left(\beq{\Omega}_3\times\beq{s}_{\rm PSH}\right)^2\rangle. \label{integrals} \end{align} The integral represents the spatial average over a full spin precession of the PSH, i.e., $y\in[0,L]$. This means that the component of the $\beq{k}$-cubic SO field parallel to $\beq{s}_{\rm PSH}$ does not contribute to the relaxation. In this picture, it is practical to decompose $\beq{\Omega}_{\rm 3}$ as $\beq{\Omega}_{\rm 3}=\beq{\Omega}_{\parallel}+\beq{\Omega}_{\perp}$, where $\beq{\Omega}_{\parallel}$ and $\beq{\Omega}_{\perp}$ are parallel and perpendicular to $\beq{\Omega}_{\rm PSH}$, respectively. Further decomposing $\beq{\Omega}_\perp=\beq{\Omega}_{\perp,1}+\beq{\Omega}_{\perp,2}$ with $\beq{\Omega}_{\perp,1}\parallel\hat{\beq{y}}$ and $\beq{\Omega}_{\perp,2}\parallel(\hat{\beq{y}}\times\hat{\bm{u}}_{\rm PSH})$ simplifies Eq.~\refeq{integrals} and yields the analytical solution for the growth-direction-dependent PSH relaxation rate, that is, \begin{align} \cfrac{1}{\tau_{\rm PSH}}&= \langle \beq{\Omega}_{\parallel}^2\rangle \tau +\cfrac{\langle \beq{\Omega}_{\perp}^2\rangle\tau}{2} \label{analytical1}, \end{align} which gives explicitly \begin{align} \cfrac{1}{\tau_{\rm PSH}}&=\cfrac{4\pi^2}{\tau_0}\cfrac{3 - 17 \eta^2 + 85 \eta^4 - 171 \eta^6 + 108 \eta^8}{8 - 12 \eta^2}\Gamma_3^2,\label{analytical2} \end{align} as a consequence of the relaxation contributions \begin{align} \langle\beq{\Omega}_\parallel^2\rangle\tau={}&\cfrac{4\pi^2}{\tau_0}\cfrac{(1 - 8 \eta^2 + 9 \eta^4)^2}{4-6\eta^2}\Gamma_3^2, \label{flucpara}\\ \cfrac{\langle \beq{\Omega}_{\perp}^2\rangle\tau}{2}={}&\cfrac{4\pi^2}{\tau_0}\cfrac{(1-\eta^2)(1+18\eta^2-27\eta^4)n_z^2}{8-12\eta^2}\Gamma_3^2. 
\label{flucperp} \end{align} As is apparent from Eq.~\refeq{analytical1}, the parallel ($\beq{\Omega}_\parallel$) and perpendicular ($\beq{\Omega}_{\perp}$) components of $\beq{\Omega}_{\rm 3}$ with respect to $\beq{\Omega}_{\rm PSH}$ affect the PSH relaxation with different weightings, 1 and 1/2. This results from the fact that $\beq{\Omega}_\parallel$ generates relaxation of the local spin orientation of the PSH over the full precession cycle, as it is generally perpendicular to $\beq{s}_{\rm PSH}$, whereas $\beq{\Omega}_{\perp}$ is locally parallel to $\beq{s}_{\rm PSH}$, which partially protects the PSH from relaxation. \\\indent Figure \reffig{three}(a) compares the PSH relaxation rate $1/\tau_{\rm PSH}$ from the analytical expression Eq.~\refeq{analytical2} with the rate of the long-lived helical textures $1/\tau_{\rm hel}=\lambda(\beq{Q})_{\rm min}$ calculated with Eq.~\refeq{diffeq} for several $\Gamma_3$ values. Aside from a narrow region in the vicinity of [111], the PSH relaxation rate matches the long-lived helical rate well for all selected $\Gamma_3$. The close agreement indicates that the structure of the long-lived helical spin textures does not deviate much from the PSH. We find that the local minimum of $1/\tau_{\rm PSH}\approx 0.78/\tau_{\rm psh,0}$, where $1/\tau_{\rm psh,0}=3\pi^2\Gamma_3^2/(2\tau_0)$ is the PSH relaxation rate for [001] quantum wells, emerges at $\eta=\sqrt{5-\sqrt{13}}/\sqrt{12}\approx0.341$ [cf. the triangle on the curve for $\Gamma_3=0.16$ in Fig.\,\reffig{three}(a)], which is indeed close to [225], where $\eta=2/\sqrt{33}\approx 0.348$ (red grid lines in Fig.~\reffig{three}). From the different weightings in Eq.~\refeq{analytical1}, it is reasonable to assume that this local minimum coincides with a minimum of the relaxation term $\langle\beq{\Omega}_\parallel^2\rangle\tau$, Eq.~\refeq{flucpara}, which means that $\beq{\Omega}_{\rm 3}$ is perpendicular to $\beq{\Omega}_{\rm PSH}$. 
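The algebra above can be cross-checked symbolically. The following sketch is our own verification, assuming a unit growth vector $\hat{\bf n}=(\eta,\eta,n_z)$ with $n_z^2=1-2\eta^2$ (consistent with $\eta=2/\sqrt{33}$ for [225]); it confirms that Eqs.~(\ref{flucpara}) and (\ref{flucperp}) sum to Eq.~(\ref{analytical2}) and that $\eta=\sqrt{5-\sqrt{13}}/\sqrt{12}\approx0.341$ is a stationary point where the rate drops to roughly $0.78/\tau_{\rm psh,0}$:

```python
import sympy as sp

# Symbolic cross-check (our own sketch; assumes a unit growth vector
# n = (eta, eta, n_z) with n_z^2 = 1 - 2*eta^2, e.g. eta = 2/sqrt(33) for [225]).
# All rates are expressed in units of 4*pi^2*Gamma3^2/tau0, which cancels.
eta = sp.symbols('eta', positive=True)
nz2 = 1 - 2*eta**2

par   = (1 - 8*eta**2 + 9*eta**4)**2/(4 - 6*eta**2)                   # Eq. (flucpara)
perp  = (1 - eta**2)*(1 + 18*eta**2 - 27*eta**4)*nz2/(8 - 12*eta**2)  # Eq. (flucperp)
total = (3 - 17*eta**2 + 85*eta**4 - 171*eta**6 + 108*eta**8)/(8 - 12*eta**2)  # Eq. (analytical2)

# the two contributions indeed sum to Eq. (analytical2)
assert sp.simplify(par + perp - total) == 0

# eta = sqrt(5 - sqrt(13))/sqrt(12) ~ 0.341 is a stationary point of the rate,
# and the rate there is ~0.78 of the [001] value total(0) = 3/8
eta_star = sp.sqrt((5 - sp.sqrt(13))/12)
assert abs(float(sp.diff(total, eta).subs(eta, eta_star))) < 1e-9
print(float(eta_star), float(total.subs(eta, eta_star)/sp.Rational(3, 8)))  # ~0.341, ~0.78
```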
In Fig.\,\reffig{three}(b), we display the individual relaxation contributions in terms of $1/\tau_0$, Eqs.\,\refeq{flucpara} and \refeq{flucperp}. The blue dashed line represents the contribution by $\beq{\Omega}_\parallel$, which drops to zero at $\eta=\sqrt{4-\sqrt{7}}/3\approx 0.388$ as indicated by a black circle. However, the vanishing point of the parallel contribution at $\eta\approx 0.388$ does not coincide exactly with the minimum of the relaxation rate shown in Fig.\,\reffig{three}(a). The reason is that the magnitude of the perpendicular contribution varies simultaneously, as shown by the blue solid line in Fig.~\reffig{three}(b), and remains large at $\eta\approx 0.388$ (black circle). After normalizing the $\beq{k}$-cubic field $\beq{\Omega}_{\rm 3} \rightarrow \beq{\Omega}_{\rm 3} / \langle\|\beq{\Omega}_{\rm 3}\|\rangle$, we find that the minimum occurs precisely at the expected value of $\eta\approx 0.341$, which confirms the previous assumption. Consequently, the local suppression of the PSH relaxation rate is a combined effect of the interplay between the magnitude and orientation of $\beq{\Omega}_{\rm 3}$. Furthermore, the PSH relaxation rate becomes largest at [110], where $\langle\beq{\Omega}_\parallel^2\rangle\tau$ has a global maximum, even though the perpendicular contribution vanishes since $\beq{\Omega}_{\rm 3}$ is parallel to $\beq{\Omega}_{\rm PSH}$. \\\indent We now address the role of the magnitude of $\beq{\Omega}_{\rm 3}$ in the vicinity of growth directions where the spin relaxation rates of the PSH and the long-lived helical textures deviate, i.e., near [111]. Figure \reffig{three}(c) compares both relaxation rates, in units of $1/\tau_{\rm psh,0}$, for different values of $\Gamma_3$ ranging from $0.017$ to $0.16$ [cf. Fig.~\reffig{three}(a)]. The smallest value, $\Gamma_3=0.017$, was experimentally obtained in Ref.~[\onlinecite{Walser2012b}]. 
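The interplay described above can be reproduced numerically. The short sketch below (our own, in the dimensionless units $4\pi^2\Gamma_3^2/\tau_0$, again assuming $n_z^2=1-2\eta^2$) locates the zero of the parallel contribution at $\eta\approx0.388$, the minimum of the total rate at $\eta\approx0.341$, and the maximum of the total rate at [110]:

```python
import numpy as np

# Numerical sketch (ours) of Eqs. (flucpara), (flucperp), (analytical2) in units
# of 4*pi^2*Gamma3^2/tau0, assuming n_z^2 = 1 - 2*eta^2; eta runs from [001]
# (eta = 0) to [110] (eta = 1/sqrt(2)).
eta = np.linspace(0.0, 1/np.sqrt(2), 200001)
nz2 = 1 - 2*eta**2

par   = (1 - 8*eta**2 + 9*eta**4)**2/(4 - 6*eta**2)
perp  = (1 - eta**2)*(1 + 18*eta**2 - 27*eta**4)*nz2/(8 - 12*eta**2)
total = par + perp                      # equals Eq. (analytical2)

print(eta[np.argmin(par)])              # ~0.388: parallel contribution vanishes
print(eta[np.argmin(total)])            # ~0.341: minimum of 1/tau_PSH, near [225]
print(total.min()/total[0])             # ~0.78 of the [001] rate
print(eta[np.argmax(total)])            # ~0.707 = 1/sqrt(2): rate is largest at [110]
```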
We see that the growth-angle range where the line shapes of the relaxation rates of both texture types deviate becomes narrower as $\Gamma_3$ decreases. The origin of the deviation is explained by the magnitude ratio between $\beq{\Omega}_{\rm PSH}$ and $\beq{\Omega}_{\rm 3}$. Figure \reffig{three}(d) shows the average SO field magnitudes $\langle\beq{\Omega}_{\rm PSH}^2\rangle$ and $\langle\beq{\Omega}_{\rm 3}^2\rangle$ in units of $1/(\tau \tau_0)$, where the plot colors are chosen in accordance with Fig.~\reffig{three}(c). Since $\langle\beq{\Omega}_{\rm PSH}^2\rangle$ rapidly drops toward zero near [111], $\langle\beq{\Omega}_{\rm 3}^2\rangle$ exceeds $\langle\beq{\Omega}_{\rm PSH}^2\rangle$ and the SO field is dominated by $\beq{\Omega}_{\rm 3}$. Here, the PSH field is negligible compared to the $\beq{k}$-cubic SO field, implying that the eigenstate of the diffusion operator strongly differs from the PSH. The range of growth angles where $\beq{\Omega}_{\rm 3}$ is dominant widens as $\Gamma_3$ increases. \\\indent Since for practical applications spin functionalities must be implemented within the spin lifetime, it is desirable that $\Gamma_3$ is small enough that the spin relaxation length of the long-lived helical spin texture, $l_s=\sqrt{D_s\tau_{\rm hel}}=\sqrt{D_s}/\sqrt{\lambda(\beq{Q})_{\rm min}}$, is much larger than the spin precession length $L$, i.e., $l_s / L \gg 1$. The ratio $l_s / L$ is plotted in Fig.~\reffig{four}, which shows that for most growth directions a reasonable magnitude of $\Gamma_3$ is of the order of $10^{-2}$ or smaller. As a concrete example for typical experimental values, we list $\tau_{\rm PSH}$ along with other relevant quantities for several quantum wells with different growth directions in Tab.~\ref{tab:values}. Consequently, for parameters of interest the growth-angle range where the rates of the PSH and the long-lived helical texture deviate is quite narrow [cf. 
Fig.~\ref{three}(c)], and Eq.~\refeq{analytical2} is a good approximation for general growth directions. Also, low-angle growth directions from [001] to approximately [225] are less sensitive to the relaxation due to $\beq{k}$-cubic terms and are, therefore, suitable candidates for applications. Finally, it should be mentioned that, comparing the spin precession lengths of the PSH in [225] and [001] quantum wells, the former is larger by approximately 50\%. \begin{figure} \centering \includegraphics[keepaspectratio, scale=0.5]{figure4.pdf} \caption{Ratio $l_s/L$ of the spin relaxation length of the long-lived helical spin texture, $l_s=\sqrt{D_s}/\sqrt{\lambda(\beq{Q})_{\rm min}}$, to the spin precession length $L=2\pi/Q$ for several $\Gamma_3$ values.} \label{four} \end{figure} \subsubsection{Relaxation of the homogeneous spin texture} While the PSH plays a prominent role for the functionality of spin transistors, the dynamics of homogeneous spin textures is more relevant in other devices such as spin lasers~\cite{Gothgen2008,Iba2011,Lee2014,FariaJunior2015,Lindemann2019}. For instance, a homogeneous spin texture is typically generated by optical spin orientation due to interband absorption of circularly polarized light when the illumination spot size exceeds the sample size~\cite{Dyakonov1971,Dyakonov1984}. \\\indent Thus, for a comprehensive understanding, we now investigate the relaxation of the long-lived spin texture ${\beq{s}}_{\rm homo}=\pm\hat{\bm{u}}_{\rm PSH}$, which is homogeneously aligned parallel to the direction of the PSH field. Following the above arguments, its relaxation rate can be computed as $1/\tau_{\rm homo}=\tau\langle\left(\beq{\Omega}_3\times\hat{\bm{u}}_{\rm PSH}\right)^2\rangle$, which gives \begin{align} \frac{1}{\tau_{\rm homo}}&=\cfrac{4\pi^2}{\tau_0}\cfrac{n_z^2(1+17 \eta^2 -45 \eta^4 +27 \eta^6 )}{4 - 6 \eta^2}\Gamma_3^2. 
\label{eq:relaxation_homo} \end{align} In Fig.~\ref{fig:homo}(a), the spin relaxation rate $1/\tau_{\rm homo}$ (black dashed lines) is displayed together with the rate of the long-lived homogeneous spin texture $\lambda(\beq{0})_{\rm min}$ (colored solid lines) for different values of $\Gamma_3$ in units of the spin precession rate in [001] quantum wells $1/\tau_0$. Similarly to the case of the PSH and the long-lived helical spin texture, we find good agreement between both relaxation rates, apart from a narrow region near [111]. The range of growth angles where both rates deviate becomes smaller as $\Gamma_3$ is reduced, as depicted in Fig.~\ref{fig:homo}(b). Typical values of $\tau_{\rm homo}$ for realistic parameter configurations are listed in Tab.~\ref{tab:values} for several quantum wells with distinct orientations. For better comparison, the rates in Fig.~\ref{fig:homo}(b) are rescaled in units of the relaxation rate $1/\tau_{\rm homo,0}=\pi^2\Gamma_3^2/\tau_0$ of ${\beq{s}}_{\rm homo}$ for [001] quantum wells. In both figures, we notice that the relaxation rates have a pronounced global maximum, which we generically find to occur at $\eta\approx 0.431$ and which yields an increase by a factor of 2.4 compared with the rate for [001] quantum wells $1/\tau_{\rm homo,0}$. For [225] quantum wells, we find that the relaxation rate of the homogeneous mode is larger by a factor of 1.94 than the PSH relaxation rate $1/\tau_{\rm PSH}$, Eq.~(\ref{analytical2}), at [225]. In contrast, quantum wells with growth directions ranging from the vicinity of [111] to [110] facilitate very long spin lifetimes, with [111] and [110] offering persistent solutions. Lastly, we plot in Fig.~\ref{fig:homo}(c), for the same values of $\Gamma_3$ as in Fig.~\ref{fig:homo}(b), the angle between the quantum-well growth direction and the spin orientation of the long-lived homogeneous spin texture (colored solid lines) and the homogeneous texture ${\beq{s}}_{\rm homo}$ (black dashed line), respectively. 
The latter angle is given by the expression $\xi=\arccos(\eta/\sqrt{2-3\eta^2})$. In a small region in the vicinity of [111], the spin orientation of the long-lived homogeneous spin texture becomes nearly parallel to the growth direction and strongly differs from ${\beq{s}}_{\rm homo}$, which explains the large discrepancy between the corresponding spin relaxation rates $1/\tau_{\rm homo}$ and $\lambda(\beq{0})_{\rm min}$. \\\indent As an implication for spintronic devices where long-lived homogeneous spin textures are desirable, e.g., for threshold reduction in spin lasers~\cite{Gothgen2008,Iba2011,Lee2014}, the [111] and [110] quantum wells are most appealing. Aside from the diverging spin lifetime, the corresponding spin polarization is perpendicular to the quantum-well plane ($\xi=0$), which often corresponds to the favorable excitation direction. \newcommand{\ra}[1]{\renewcommand{\arraystretch}{#1}} \ra{1.3} \begin{table} \centering \caption{Realistic values in GaAs quantum wells for the relaxation times of the PSH $\tau_{\rm PSH}$ and the long-lived homogeneous spin texture $\tau_{\rm homo}$ using Eqs.~(\protect\ref{analytical2}) and (\protect\ref{eq:relaxation_homo}), respectively. We employ an effective mass $m=0.067m_0$, a carrier sheet density $n_s=\SI{1.7e15}{m^{-2}}$, and a spin diffusion constant $D_s=\SI{0.03}{m^2/s}$. Accordingly, for a bulk Dresselhaus parameter $\gamma_{\rm D}=\SI{8.0}{eV\AA^3}$~\cite{Kohda2017}, the effective cubic Dresselhaus coefficient becomes $\beta^{(3)}=\gamma_{\rm D}k_{\rm F}^2/4=\gamma_{\rm D}\pi n_s/2=\SI{0.21}{meV\AA}$. Using a typical linear Dresselhaus coefficient $\beta^{(1)}=\SI{5.0}{meV\AA}$~\cite{Walser2012b} yields $\Gamma_3=\beta^{(3)}/\beta^{(1)}=0.042$. The Rashba coefficient $\alpha$ and the spin precession length $L$ follow from the growth-direction-dependent relations $\alpha(\eta)=\Gamma_0(\eta)\beta^{(1)}$ and $L(\eta)=2\pi/Q(\eta)$ (cf. Sec.~\ref{sec:PSHfield}). 
} \vspace{0.2cm} \begin{ruledtabular} \begin{tabular}{ccccc} ${\hat{\bf n}}$ & $\tau_{\rm PSH}$ (ns) & $\tau_{\rm homo}$ (ns) & $\alpha$ ($\SI{}{meV\AA}$) & $L$ ($\mathrm{\mu m}$)\\ \hline [001] & 15.8 & 23.7 & 5.0& 3.58\\\relax [115] & 18.0 & 15.4 & 3.2& 4.14\\\relax [225] & 20.3 & 10.5 & $-0.4$& 6.22\\\relax [111] & - & $\infty$ & $-5.8$ & $\infty$\\\relax [221] & 11.0 & 34.9 & $-5.0$& 18.6\\\relax [110] & 10.5 & $\infty$ & 0 & 14.3 \\ \end{tabular} \end{ruledtabular} \label{tab:values} \end{table} \begin{figure}[t] \centering \includegraphics[keepaspectratio, scale=0.5]{homo.pdf} \caption{(a) Spin relaxation rates of the long-lived homogeneous texture $\lambda(\beq{0})_{\rm min}$ (colored solid lines) in comparison with the relaxation rate $1/\tau_{\rm homo}$ [Eq.~(\ref{eq:relaxation_homo})] of the homogeneous spin texture $\beq{s}_{\rm homo}=\pm \hat{\bm{u}}_{\rm PSH}$ (black dashed line) for different values of $\Gamma_3$ in units of the spin precession rate in [001] quantum wells $1/\tau_0$. The latter texture is collinear with the PSH field orientation $\pm\hat{\bm{u}}_{\rm PSH}$ and, thus, persistent for $\beq{\Omega}_{\rm 3}=\beq{0}$ (cf. Sec.~\ref{sec:GenericPSH}). (b) Line shapes of $\lambda(\beq{0})_{\rm min}$ (colored solid lines) and $1/\tau_{\rm homo}$ (black dashed line) are shown for several $\Gamma_3$ values in units of the relaxation rate $1/\tau_{\rm homo,0}$ for [001] quantum wells. Analogously to the comparison of the long-lived helical texture with the PSH, Fig.~\ref{three}(c), the range of growth angles where both relaxation rates deviate gets narrower as $\Gamma_3$ decreases. The smallest selected value $\Gamma_3=0.017$ corresponds to the experimentally extracted ratio in Ref.~[\onlinecite{Walser2012b}]. 
(c) The angle between the quantum-well growth direction and the spin orientation of the long-lived homogeneous spin texture (colored solid lines), or the homogeneous texture $\beq{s}_{\rm homo}$ (black dashed line), which is $\xi=\arccos(\eta/\sqrt{2-3\eta^2})$, for the same values of $\Gamma_3$ as in (b).} \label{fig:homo} \end{figure} \begin{figure}[t] \centering \includegraphics[keepaspectratio, scale=0.5]{figure5.pdf} \caption{ (a) Monte-Carlo simulated component $s_z$ at time $t=\SI{1}{ns}$ of an initialized spin density $\beq{s}(t=0)\parallel \hat{\beq{z}}$ for $\Gamma_3=0.12$ as a function of the growth angle $\theta$. A helical spin texture emerges from [001] to approximately [111], while a homogeneous texture is prevalent from [111] to [110]. (b)~Yellow diamonds represent the extracted wave vector $Q$, while the black solid line is calculated from Eq.~(\ref{Qvalue}). The green solid line shows the angle $\xi$ between the PSH field $\beq{\Omega}_{\rm PSH}$ and the surface normal~$\hat{\beq{z}}$. (c)~The extracted spin relaxation rate is shown as yellow circles. The computed eigenvalues for the homogeneous ($\lambda(\beq{0})_{\rm min}$) and helical ($\lambda(\beq{Q})_{\rm min}$) spin textures are displayed as blue and black dotted lines, respectively. The global minimum of the relaxation rate, corresponding to the \textit{longest}-lived spin textures, is highlighted as a solid line on the curves for the homogeneous and helical relaxation rates.} \label{five} \end{figure} \subsection{Experimental accessibility}\label{sec:accessibility} Finally, we discuss the accessibility of the PSH in optical experiments such as time-resolved Kerr-rotation microscopy, in which spins are typically excited and measured perpendicular to the quantum-well plane. To comply with this experimental restriction, we simulate the spatiotemporal evolution of the initial spin texture $\beq{s}(t=0)\parallel\hat{\beq{z}}$ using the Monte Carlo simulation as described in Sec.~\ref{sec:MonteCarlo}. 
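For reference, the entries of Tab.~\ref{tab:values} for [001] and [225] can be reproduced from Eqs.~(\ref{analytical2}) and (\ref{eq:relaxation_homo}) with the parameters quoted in the table caption. The sketch below is our own cross-check; it assumes $Q_0=4m\beta^{(1)}/\hbar^2$ and $\tau_0=4\pi^2/(D_s Q_0^2)$, which reproduce the listed precession length $L\approx 3.58\,\mu$m for [001], together with $n_z^2=1-2\eta^2$:

```python
import numpy as np

# Cross-check of Tab. values (a sketch; assumptions: Q0 = 4*m*beta1/hbar^2,
# tau0 = 4*pi^2/(Ds*Q0^2), and n_z^2 = 1 - 2*eta^2 for the growth direction).
hbar, m0, e = 1.054571817e-34, 9.1093837e-31, 1.602176634e-19
m     = 0.067*m0                  # effective mass
Ds    = 0.03                      # spin diffusion constant (m^2/s)
ns    = 1.7e15                    # carrier sheet density (m^-2)
beta1 = 5.0e-3*e*1e-10            # 5.0 meV A  -> J m
beta3 = 8.0*e*1e-30*np.pi*ns/2    # gamma_D = 8.0 eV A^3, beta3 = gamma_D*pi*ns/2
G3    = beta3/beta1               # ~0.043

Q0   = 4*m*beta1/hbar**2
tau0 = 4*np.pi**2/(Ds*Q0**2)
pref = 4*np.pi**2*G3**2/tau0      # common prefactor of both rate formulas

def psh(eta):   # dimensionless PSH rate, Eq. (analytical2)
    return (3 - 17*eta**2 + 85*eta**4 - 171*eta**6 + 108*eta**8)/(8 - 12*eta**2)

def homo(eta):  # dimensionless homogeneous rate, Eq. (eq:relaxation_homo)
    return (1 - 2*eta**2)*(1 + 17*eta**2 - 45*eta**4 + 27*eta**6)/(4 - 6*eta**2)

eta225 = 2/np.sqrt(33)
print(2*np.pi/Q0*1e6)                                   # ~3.57 um, cf. L = 3.58 um for [001]
print(1e9/(pref*psh(0.0)), 1e9/(pref*homo(0.0)))        # ~15.7, ~23.6 ns (Tab.: 15.8, 23.7)
print(1e9/(pref*psh(eta225)), 1e9/(pref*homo(eta225)))  # ~20.3, ~10.4 ns (Tab.: 20.3, 10.5)
print(homo(eta225)/psh(eta225))                         # ~1.94: homogeneous vs. PSH rate at [225]
```

The remaining differences at the per-cent level reflect rounding of the physical constants.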
We employ the same parameter settings as before, where we restrict ourselves to the case of $\Gamma_3=0.12$. In this new configuration, the spin excitation direction is tilted with respect to the direction of $\beq{\Omega}_{\rm PSH}$ and therefore simultaneously polarizes long-lived homogeneous and helical spin textures even for $\Gamma_3=0$. The respective tilt angle depends on the quantum-well growth direction. For the specific growth direction [001], the homogeneous texture is not excited, as $\beq{\Omega}_{\rm PSH}$ is completely in-plane~\cite{Salis2014}, whereas we polarize only the homogeneous texture at [111] and [110] due to the configurations $\beq{\Omega}_{\rm PSH}=0$, $\beq{\Omega}_{\rm 3}\parallel \hat{\beq{z}}$ and $\beq{\Omega}_{\rm PSH}\parallel \hat{\beq{z}}$, $\beq{\Omega}_{\rm 3}\parallel \hat{\beq{z}}$, respectively. \\\indent Figure~\reffig{five}(a) shows the spatial distribution of $s_z$ along the $\hat{\beq{y}}$ axis collected for various $\theta$ values at time $t=\SI{1}{ns}$. The appearance of a helical texture for growth angles $\theta$ from 0 to approximately $\pi/4$ is evident. We attribute this to the large angle $\xi$ between $\beq{\Omega}_{\rm PSH}$ and the $\hat{\beq{z}}$ axis [cf. Fig.~\reffig{five}(b)], which allows the helical texture to be excited. Using the fit function Eq.~\refeq{sperp}, we extract the wave vector $Q$ and the spin relaxation rate, which are displayed in Figs.~\reffig{five}(b) and \reffig{five}(c). For angles $\theta<\pi/4$, we obtain a spin relaxation rate and wave vector $Q$ that agree well with the ideal values for the PSH [cf. Eqs.~(\ref{Qvalue}) and (\ref{analytical2})]. Therefore, the PSH can be readily accessed for different directions, including the superior-lifetime direction [225], in conventional optical measurements, allowing one to take advantage of the manipulable spin orientation. 
For larger angles $\theta$, corresponding to growth directions from [111] to [110], the angle $\xi$ further decreases and the extracted relaxation rates and wave vectors belong to the homogeneous spin textures, which have the superior lifetime along these directions. Also, the spin precession length of the helical textures becomes very long, which makes them difficult to distinguish from the homogeneous texture. Apart from that, the magnitude of the PSH field becomes insignificant in comparison to the $\beq{k}$-cubic SO field, which yields large deviations of the long-lived helical spin texture from the PSH and makes the fit function Eq.~\refeq{sperp} unsuitable. \\\indent Finally, we look at the prospects of studying the PSH lifetime limitation in magnetoconductance measurements of the weak antilocalization. The characteristic weak-antilocalization feature, namely the position of the magnetoconductance minima, which is necessary for a reliable parameter fitting, is predominantly determined by the \textit{longest}-lived spin texture~\cite{Faniel2011,Yoshizumi2016,Kammermeier2016}. In particular, if its lifetime is much longer than the electron-dephasing time, the minima disappear and only weak-localization features are seen. Hence, to extract the PSH lifetime and the related $\beq{k}$-cubic Dresselhaus SO coefficient, it is desirable that the minima are still observable at the optimal ratio of $\beq{k}$-linear SO coefficients $\alpha/\beta^{(1)}=\Gamma_0$, where for $\beq{\Omega}_{\rm 3}=0$ persistent spin textures appear, and that the helical spin texture is superior. The relaxation rate of the \textit{longest}-lived spin texture for general growth directions is highlighted by the solid line in Fig.\,\reffig{five}(c), where the black and blue colors correspond to the helical and homogeneous spin textures, respectively. We see that the homogeneous texture shows clear dominance for a wide range of growth directions. 
The lifetime discrepancy between helical and homogeneous textures is most pronounced in [110] quantum wells, where the homogeneous rate vanishes while the helical rate reaches a maximum. For this reason, we only observe weak localization even for a large $\beq{k}$-cubic SO field in [110] quantum wells due to the presence of persistent homogeneous spin textures~\cite{Iizasa2018,Hassenkam1997}. Here, observing a crossover to weak antilocalization requires breaking the PSH condition $\Gamma_0$ via an increasing Rashba term~\cite{Hassenkam1997}. Similarly, we expect the extraordinarily small relaxation rates of the homogeneous spin textures for growth directions between [111] and [110] to prevent the emergence of the weak-antilocalization features. Intriguingly, the homogeneous relaxation rate far exceeds the helical rate in the vicinity of the growth direction [225]. The large separation of the two relaxation branches implies that such quantum wells are suitable for extracting the PSH lifetime as well as the $\beq{k}$-cubic Dresselhaus SO strength in magnetotransport measurements. Also, the required near absence of the Rashba coefficient for the PSH in [225] simplifies the parameter-fitting process. \\\indent To sum up, [225] quantum wells are not only good candidates to enhance the PSH lifetime but also represent an excellent platform for a comparative study of the spin relaxation in optical and magnetotransport measurements. \section{Conclusion}\label{sec:conlusion} We have investigated the lifetime limitations of the PSHs due to the presence of third angular harmonics in the $\beq{k}$-cubic Dresselhaus SO field in 2DEGs of general growth directions. A numerical approach using Monte Carlo simulations in conjunction with an analysis of the spin diffusion equation provides detailed knowledge of the robustness of the long-lived spin textures. 
\\\indent Our findings reveal that a crystal orientation where the $\beq{k}$-cubic SO field is perpendicular to the collinear SO field suppresses the decay of the PSH because it partially protects the local spin orientations from relaxation. In combination with additional small modulations of the $\beq{k}$-cubic SO field magnitude, it is shown that the most robust PSH is formed in a low-symmetry quantum well whose orientation is well approximated by a [225] lattice vector. Remarkably, the realization of a PSH in such a system requires a nearly vanishing Rashba SO coefficient. This enables the utilization of a symmetric quantum well, which mitigates complications connected to the electrical gate-tuning or a spatial inhomogeneity of the Rashba SO strength. We demonstrate explicitly that the PSH in [225] quantum wells can be experimentally accessed by optical spin excitation/detection measurements. Additionally, we point out that the PSH lifetime clearly exceeds that of the long-lived homogeneous spin textures, which also makes it a suitable candidate for an experimental extraction via weak (anti)localization measurements. \\\indent The results provide a complete picture of the stability of the PSH and the long-lived spin textures in general growth directions and define the longest possible PSH lifetime in 2DEGs in the presence of $\beq{k}$-cubic Dresselhaus SO couplings. \section*{Acknowledgement} D. Iizasa, M. Kohda, and J. Nitta are supported via Grant-in-Aid for Scientific Research (Grant No. 15H02099, No. 15H05854, No. 25220604, and No. 15H05699) by the Japan Society for the Promotion of Science. D. Iizasa thanks the Graduate Program of Spintronics at Tohoku University, Japan, for financial support. M. Kammermeier and U. Z\"ulicke acknowledge the support by the Marsden Fund Council from New Zealand government funding (contract no.\ VUW1713), managed by the Royal Society Te Ap\={a}rangi.
\section{Introduction} \subsection{X-ray QPO in AGN} Both stellar mass black hole binaries (BHB) and active galactic nuclei (AGN) are powered by gas accreting onto a central black hole, and their observational properties are determined primarily by the black hole mass, mass accretion rate and spin. This relative simplicity should allow us to scale their observed X-ray properties such as variability and spectra between these two very different black hole mass systems. However, while the broadband power spectral densities (PSD) do show some similarities (M$^{c}$Hardy et al. 2006, 2007), the BHB show strong quasi-periodic oscillations (QPOs) at both low frequencies (0.1-10~Hz: potentially from Lense-Thirring precession: Stella \& Vietri 1998; Ingram, Done \& Fragile 2009; Veledina, Poutanen \& Ingram 2013) and high frequencies (100s of Hz: potentially related to the Keplerian period of the innermost disc: Remillard \& McClintock 2006), which are generally absent in AGN. The lack of QPO detections in AGN is probably mainly due to the much longer timescales of AGN QPOs predicted by scaling their frequencies from BHB. Even the lowest-mass AGN of $\sim 10^6 M_\odot$ would have predicted mass-scaled low-frequency QPOs at 0.1-10 day timescales, which makes them difficult to study with the restricted duration of continuous X-ray exposures (Vaughan \& Uttley 2005, 2006). More typical local AGN with masses of $\sim 10^{7-8} M_\odot$ would imply much worse data windowing problems. Instead, high-frequency QPOs provide a better potential match to observational constraints for the lowest-mass AGN. These are observed locally as Narrow-line Seyfert 1 galaxies (NLS1s), which are accreting at high Eddington ratios (e.g. Done \& Jin 2016; Jin, Done \& Ward 2016, 2017a,b). Indeed, the first AGN X-ray QPO was discovered in the NLS1 RE J1034+396\ with a period of $3730\pm60$ s (Gierli\'{n}ski et al. 2008).
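The linear mass scaling invoked above can be made concrete with a short sketch (the $10~M_{\odot}$ reference mass is an assumed, typical BHB value rather than a number from this paper):

```python
# Characteristic QPO frequencies are expected to scale as 1/M_BH.
def scaled_qpo_period_s(f_bhb_hz, m_bhb_msun, m_agn_msun):
    """Period (in seconds) of the AGN analogue of a BHB QPO,
    assuming frequencies scale inversely with black-hole mass."""
    return 1.0 / (f_bhb_hz * m_bhb_msun / m_agn_msun)

# Low-frequency BHB QPOs (0.1-10 Hz) mapped to a 1e6 Msun AGN,
# assuming a typical 10 Msun BHB:
p_long = scaled_qpo_period_s(0.1, 10.0, 1e6) / 86400.0   # ~11.6 days
p_short = scaled_qpo_period_s(10.0, 10.0, 1e6) / 86400.0  # ~0.12 days
```

This reproduces the quoted 0.1-10 day range to within the precision of the scaling argument.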
Since then a few X-ray QPOs with lower significances have been reported in NLS1s, such as 1H 0707-495 (Pan et al. 2016; Zhang et al. 2018), MS 2254.9-3712 (Alston et al. 2015), Mrk 766 (Zhang et al. 2017), MCG-06-30-15 (Gupta et al. 2018). A couple of Seyfert 2s were also reported to exhibit a QPO, including 2XMM J123103.2+110648 (Lin et al. 2013) and XMMU J134736+173403 (Carpano \& Jin 2018). X-ray QPOs were also reported in tidal disruption events around super-massive black holes, such as Swift J164449.3+573451 (Reis et al. 2012) and ASASSN-14li (Pasham et al. 2019). Recently, a new type of X-ray periodic signal given the term quasi-periodic eruption (QPE) has been reported in the Seyfert 2 galaxy GSN 069, whose black hole mass is estimated to be $\sim 4\times10^{5}M_{\odot}$ (Shu et al. 2018; Miniutti et al. 2019), although the properties of X-ray QPEs are very different from those of QPOs. \begin{table*} \centering \caption{List of {\it XMM-Newton}\ Observations of RE J1034+396. GTI is the integrated good exposure time in EPIC-pn after removing intervals containing background flares. Obs-1 and Obs-2 are in the full-frame mode, while the other observations are all in the small-window mode. $N_{\rm H,host}$ is the best-fit host galaxy absorption (see Section~\ref{sec-spec-sx}), and the Galactic absorption is fixed at $1.36\times10^{20}$cm$^{-2}$. $F_{\rm 0.3-2 keV}$ is the absorbed 0.3-2 keV flux. Errors indicate the 90\% confidence range. $f_{\rm QPO}$ and $Q_{\rm QPO}$ are the QPO frequency and quality factor in the 1-4 keV band as reported by Alston et al. (2014) for the first 8 observations. The data of Obs-1 are not good enough for the QPO analysis due to severe background contamination. Obs-3 and Obs-6 are the two observations when the QPO is not detected. Obs-9 is our new observation.} \begin{tabular}{@{}lccccccccccc@{}} \hline Obs No.
& ObsID & Obs Date & On-Time & GTI & $N_{\rm H,host}$ & $F_{\rm 0.3-2 keV}$ & $F_{\rm 2-10 keV}$ & $\Gamma_{\rm 0.3-2 keV}$ & $\Gamma_{\rm 2-10 keV}$ & $f_{\rm QPO}$ & $Q_{\rm QPO}$ \\ & & & (ksec) & (ksec) & ($10^{20}$cm$^{-2}$) &\multicolumn{2}{c}{(10$^{-12}$ erg cm$^{-2}$ s$^{-1}$)} & & & ($10^{-4}$Hz) & \\ \hline Obs-1 & 0109070101 & 2002-05-01 & 12.8 & 1.8 & 0.00$^{+0.58}_{l}$ & 8.72$^{+0.29}_{-0.29}$ & 0.95$^{+0.39}_{-0.39}$ & 3.86$^{+0.05}_{-0.05}$ & 1.91$^{+0.37}_{-0.27}$ & -- & -- \\ Obs-2 & 0506440101 & 2007-05-31 & 91.1 & 79.5 & 1.00$^{+0.16}_{-0.16}$ & 8.52$^{+0.04}_{-0.04}$ & 1.16$^{+0.04}_{-0.04}$ & 3.81$^{+0.01}_{-0.01}$ & 2.06$^{+0.06}_{-0.06}$ & 2.7 & 24 \\ Obs-3 & 0561580201 & 2009-05-31 & 60.8 & 43.4 & 1.08$^{+0.11}_{-0.11}$ & 11.27$^{+0.04}_{-0.04}$ & 0.82$^{+0.03}_{-0.03}$ & 4.20$^{+0.01}_{-0.01}$ & 2.09$^{+0.05}_{-0.05}$ & $\times$ & $\times$ \\ Obs-4 & 0655310101 & 2010-05-09 & 44.3 & 19.3 & 0.00$^{+0.17}_{l}$ & 7.97$^{+0.05}_{-0.05}$ & 1.07$^{+0.05}_{-0.05}$ & 3.73$^{+0.01}_{-0.01}$ & 2.03$^{+0.06}_{-0.06}$ & 2.7 & 11 \\ Obs-5 & 0655310201 & 2010-05-11 & 53.0 & 31.2 & 0.00$^{+0.18}_{l}$ & 7.92$^{+0.04}_{-0.04}$ & 1.13$^{+0.04}_{-0.04}$ & 3.71$^{+0.01}_{-0.01}$ & 1.97$^{+0.05}_{-0.05}$ & 2.5 & 13 \\ Obs-6 & 0675440301 & 2011-05-07 & 32.2 & 18.2 & 2.29$^{+0.16}_{-0.16}$ & 13.56$^{+0.07}_{-0.07}$ & 1.02$^{+0.05}_{-0.05}$ & 4.40$^{+0.01}_{-0.01}$ & 1.96$^{+0.06}_{-0.06}$ & $\times$ & $\times$ \\ Obs-7 & 0675440101 & 2011-05-27 & 36.0 & 14.7 & 0.01$^{+0.26}_{-0.01}$ & 8.91$^{+0.07}_{-0.07}$ & 1.20$^{+0.07}_{-0.07}$ & 3.86$^{+0.01}_{-0.01}$ & 1.97$^{+0.07}_{-0.07}$ & 2.6 & 9 \\ Obs-8 & 0675440201 & 2011-05-31 & 29.4 & 12.6 & 0.04$^{+0.27}_{-0.04}$ & 8.12$^{+0.07}_{-0.07}$ & 1.24$^{+0.07}_{-0.07}$ & 3.73$^{+0.01}_{-0.01}$ & 1.87$^{+0.07}_{-0.07}$ & 2.6 & 7 \\ Obs-9 & 0824030101 & 2018-10-30 & 71.6 & 64.7 & 0.00$^{+0.02}_{l}$ & 7.99$^{+0.03}_{-0.03}$ & 1.09$^{+0.03}_{-0.03}$ & 3.73$^{+0.01}_{-0.01}$ & 2.01$^{+0.05}_{-0.05}$ & 2.8 & 20 \\ \hline 
\end{tabular} \label{tab-obs} \end{table*} \subsection{RE J1034+396} RE J1034+396\ is a well studied AGN located at $z$ = 0.042. It has an extraordinarily steep soft X-ray spectrum compared to other AGN (Puchnarewicz et al. 1995; Wang \& Netzer 2003; Casebeer et al. 2006; Crummy et al. 2006) though much of this is probably due to the disc itself (Done et al. 2012; Jin et al. 2012a,b,c). The hydrogen Balmer emission lines of RE J1034+396\ have a full width half maximum (FWHM) of $\lesssim 1500$ km s$^{-1}$, defining the source as a NLS1 galaxy (Puchnarewicz et al. 1995; Mason, Puchnarewicz \& Jones 1996; Gon\c{c}alves, V\'{e}ron \& V\'{e}ron-Cetty 1999; Bian \& Huang 2010). Its black hole mass is estimated to be $10^6-10^7~M_{\odot}$ (see Czerny et al. 2016 for a summary of several different mass estimates), with the most probable mass range being $(1-4)\times10^{6}~M_{\odot}$ (Gierli\'{n}ski et al. 2008; Middleton et al. 2009; Bian \& Huang 2010; Jin et al. 2012a; Chaudhury et al. 2018). The mass accretion rate of RE J1034+396\ is close to or slightly above the Eddington limit (Jin et al. 2012a; Czerny et al. 2016). The most notable phenomenon of RE J1034+396\ is the QPO signal detected in its X-ray emission, which was the first significant detection of an X-ray QPO in an AGN (Gierli\'{n}ski et al. 2008). Since then many studies have been conducted in order to understand the physical origin of this QPO, as well as its potential trigger. The QPO varies significantly in its root-mean-square (RMS) amplitude between different observations, but not in its frequency. The QPO signal was most significant in the first detection during the {\it XMM-Newton}\ observation in 2007 (Gierli\'{n}ski et al. 2008; Middleton et al. 2009). Then it was detected in only 4 of the 6 subsequent {\it XMM-Newton}\ observations made before 2011 (Alston et al. 2014).
The high coherence of this QPO signal ($Q\gtrsim10$) is comparable to the high-frequency QPO at 67 Hz seen in the high mass accretion rate state of the BHB GRS 1915+105 ($M=12.4^{+2.0}_{-1.8}~M_{\odot}$, Reid et al. 2014). This is also consistent with the mass scaling if the mass of RE J1034+396\ is $(1-4)\times10^{6}~M_{\odot}$ (Middleton, Uttley \& Done 2011; Czerny et al. 2016; Chaudhury et al. 2018). The RMS of the QPO is energy dependent, showing that the QPO spectrum is subtly different from the time-averaged spectrum, and the hard X-ray QPO leads the soft X-ray by 300-400 s (Gierli\'{n}ski et al. 2008; Middleton, Done \& Uttley 2011). This corresponds to a light travel distance of $\sim$30 $R_{\rm g}$ in the disc reprocessing scenario, which however places no constraints on the black hole spin. This soft X-ray lag was also reported by Zoghbi \& Fabian (2011) who performed spectral-timing analysis in the frequency domain using the same dataset. \subsection{This Work} Despite all previous studies, the long-term behaviour (over a timescale of 10 years) of the QPO in RE J1034+396\ remains unknown. This is because the source has had restricted visibility with {\it XMM-Newton}\ since 2011. In this paper, we present results from our new {\it XMM-Newton}\ observation of RE J1034+396\ obtained in 2018. These new data allow us to explore the latest properties of this QPO signal, and help us to understand the mechanism of AGN QPOs in general. This paper is organized as follows. In Section 2 we list all the {\it XMM-Newton}\ observations of RE J1034+396\ and describe the data reduction procedures. In Section 3 we present the light curve and QPO signal in the new data, which is followed by a detailed analysis and modelling of the PSD and QPO in Section 4. The study of the QPO's long-term variation is presented in Section 5. Detailed discussions of the QPO mechanism are presented in Section 6, and the final section summarizes our main results and conclusions.
Unless otherwise specified, all the error bars presented in this work refer to the 1$\sigma$ uncertainty. \section{Observations and Data Reduction} \label{sec-obs} RE J1034+396\ was previously observed by {\it XMM-Newton}\ (Jansen et al. 2001) 8 times between 2002 and 2011, after which it was no longer observed by {\it XMM-Newton}\ due to restricted visibility, and so the QPO signal could not be monitored. Since 2018 the visibility has improved to $\gtrsim$ 70 ks per {\it XMM-Newton}\ orbit, and so we observed it again with {\it XMM-Newton}\ in 2018 for 72 ks in order to reexamine its X-ray QPO. This new observation comes 7 years after the previous observation in 2011, and 11 years after the initial discovery of the QPO in 2007. All of the observations are listed in Table~\ref{tab-obs}. We downloaded all the data from the {\it XMM-Newton}\ Science Archive (XSA). In this study, we mainly focused on the X-ray variability and QPO, so only the data from the European Photon Imaging Cameras (EPIC) (Str\"{u}der et al. 2001) were used. The {\it XMM-Newton}\ Science Analysis System (SAS v18.0.0) was used to reduce the data. Firstly, the {\sc epproc} and {\sc emproc} tasks were used to reprocess the data with the latest calibration files. Then we defined a circular region with a radius of 80 arcsec centered on the position of RE J1034+396\ as the source extraction region. In the first two observations the EPIC cameras were in the full-frame mode, so the background extraction region was chosen to be the same size in a nearby region without any sources. Later observations were all taken in the small-window mode, so for the two Metal Oxide Semi-conductor (MOS) cameras we extracted the background from a nearby Charge-Coupled Device (CCD) chip, while for the pn camera the background was extracted close to the edge of the small window to minimize contamination of the primary source.
We adopted good events (FLAG=0) with PATTERN $\leq$ 4 for pn and PATTERN $\leq$ 12 for MOS1 and MOS2. The {\sc evselect} task was used to extract the source and background light curves, from which the background flares were identified. By running the {\sc epatplot} task, we found that the first two observations in the full-frame mode suffered from significant photon pile-up in the central $\sim$10 arcsec region of the point spread function (PSF), while the following observations were not affected by this effect thanks to the small-window mode used. The {\sc epiclccorr} task was used to perform the background subtraction, and apply various corrections to produce the intrinsic source light curve. The source and background spectra were also extracted using the {\sc evselect} task. Then the {\sc arfgen}, {\sc rmfgen} and {\sc grppha} tasks were run to produce the auxiliary and response files and rebin the spectra. The {\sc Xspec} software (v12.10.1, Arnaud 1996) was used to perform all the spectral analysis. All the timing results presented in this paper were based on the EPIC-pn data, which have the highest signal-to-noise (S/N) among the three EPIC cameras. The MOS data were reduced in a similar way and used for consistency checks. \begin{figure} \centering \includegraphics[trim=0.15in 0.3in 0.0in 0.0in, clip=1, scale=0.49]{lc_plot4.pdf} \caption{Light curves of RE J1034+396\ in Obs-9 as observed by {\it XMM-Newton}\ EPIC-pn, binned with 200 s. In each panel, the shadowed regions indicate masked time intervals due to background flares. The red solid line is the summation of IMFs whose timescales are equal to or longer than the QPO period (see Section~\ref{sec-lc}).
The vertical dotted lines indicate time intervals of 3550 s, which is the latest period of the 0.3-10 keV light curve.} \label{fig-lc} \end{figure} \section{The New {\it XMM-Newton}\ Observation in 2018} \label{sec-obs9} We first explore the X-ray variability of RE J1034+396\ during the latest {\it XMM-Newton}\ observation, and search for a QPO signal in the light curve. \subsection{X-ray Light-curves} \label{sec-lc} RE J1034+396\ exhibits significant X-ray variability in the latest {\it XMM-Newton}\ observation in 2018 (hereafter: Obs-9), as shown by the EPIC-pn light curves in Figure~\ref{fig-lc}. The shadowed regions in the figure indicate time intervals affected by background flares. For $\sim$90 per cent of the observing time the background was very low and stable; only the first $\sim$ 4 ks and a few short periods are affected by flares, so the overall data quality is excellent. After masking out all of the background flares, the mean source count rates are found to be 4.55, 0.52 and 0.14 counts per second (cps) in the three typical energy bands of 0.3-1, 1-4 and 2-10 keV, respectively. This immediately suggests that the X-ray spectrum of RE J1034+396\ remained soft during the new observation. These energy bands are representative because the 0.3-1 keV band is dominated by the soft excess, while the 2-10 keV band is dominated by the hard X-ray coronal emission (e.g. Middleton et al. 2009). The 1-4 keV band is chosen to facilitate comparison with previous studies, because the QPO signal was significantly detected in this band in 5 out of all 8 {\it XMM-Newton}\ observations before 2011 (Alston et al. 2014). In Figure~\ref{fig-lc}, from the y-axis of fractional count rate relative to the mean value, it is also clear that the amplitude of the hard X-ray variability is much larger than that in the soft X-ray band, but the soft X-rays seem to have stronger variability over long timescales ($>$1 ks) than short timescales ($<$1 ks).
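The band-to-band amplitude comparison above is conventionally quantified by the fractional rms, i.e. the square root of the variance in excess of the measurement noise, normalized by the mean rate. A minimal sketch (the function name and inputs are our own, not from the paper's pipeline):

```python
import numpy as np

def fractional_rms(rate, rate_err):
    """Fractional rms of a binned light curve: the variance in excess
    of the mean squared measurement error, normalized by the mean."""
    mean = rate.mean()
    excess_var = rate.var(ddof=1) - np.mean(np.asarray(rate_err) ** 2)
    return np.sqrt(max(excess_var, 0.0)) / mean
```

A noiseless sinusoid of amplitude $a$ about a mean $\mu$ has fractional rms $a/(\sqrt{2}\mu)$, which the function recovers.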
We apply the Ensemble Empirical Mode Decomposition (EEMD) method (Huang et al. 1998; Wu \& Huang 2009; Hu et al. 2011) to these light curves in order to examine the variability on different timescales. This method works in the time domain to resolve a noisy light curve into a complete set of independent components, namely the Intrinsic Mode Functions (IMFs), which possess different variability patterns and are locally orthogonal to each other. This method has been previously applied to the light curve of RE J1034+396\ for Obs-2, and the QPO variability was found to concentrate in one of the IMFs (Hu et al. 2014). The Python {\sc PyEMD} package was used to perform the EEMD analysis. We find that each light curve (50 s binned) can be decomposed into 9 IMFs, with the timescale increasing from the first component (C1) to the last (C9). We also find that the QPO signal is contained in C5, while C6 to C9 can be combined to show the variability over longer timescales. The summation of IMFs whose timescales are equal to or longer than the QPO period is shown in Figure~\ref{fig-lc} as the solid red line. The periodic positions separated by 3550 s (see Section~\ref{sec-qpovar-freq}) are marked by the vertical dotted lines. It is clearly seen that the instantaneous period of the QPO varies within the observing time, confirming that it is indeed a quasi-periodic signal. \begin{figure} \centering \includegraphics[trim=0.0in 0.3in 0.0in -0.2in, clip=1, scale=0.59]{step1_psdfit_3model_plot.pdf} \caption{The 1-4 keV PSD of RE J1034+396\ in Obs-9, fitted with a single power law model (PL, red), or a bending power law model (Bending PL, blue). The high frequency range is dominated by the Poisson noise power, which is modeled as a free constant. The solid and dashed lines indicate the total models and their separate components. The lower panel shows the data-to-model ratio (multiplied by 2) vs.
frequency, where the QPO feature is clearly visible in both models.} \label{fig-nulltest} \end{figure} \begin{figure*} \centering \includegraphics[trim=0.0in 0.3in 0.0in 0.0in, clip=1, scale=0.57]{step4_tr_tsse.pdf} \caption{{\it Posterior} predictive distributions of the $T_{\rm R}$ and $T_{\rm SSE}$ statistics for the power law model (PL) and the bending power law model (Bending PL) for the 1-4 keV PSD of RE J1034+396\ in Obs-9. The $T_{\rm LRT}$ statistic is for checking the decrease of deviance after adding a Lorentzian component to the continuum model to fit the QPO feature. The observed value is shown by the vertical blue dashed line, together with the corresponding {\it posterior} predictive $p$-value. These results suggest that the QPO feature seen in Figure~\ref{fig-nulltest} should be an intrinsic component of the PSD.} \label{fig-nullhist} \end{figure*} \begin{table} \centering \caption{The first row shows the $R_{\rm QPO}$ value (i.e. 2$\times$data/continuum at the QPO frequency) measured in the PSD of RE J1034+396\ in Obs-9. As shown in Figure~\ref{fig-psd}, we use a power law plus a Poisson noise constant and a Lorentzian profile to model the entire PSD. The critical $R_{\rm QPO}$ values for different confidence limits are derived from our Bayesian PSD simulations. The final row shows the significance of the observed QPO.} \begin{tabular}{lcccc} \hline & 0.3-1 keV & 1-4 keV & 2-10 keV \\ \hline $R_{\rm QPO, obs}$ & 39.8 & 93.3 & 22.2 \\ \hline $R_{\rm QPO, 2\sigma}$ & 6.4 & 6.4 & 7.1 \\ $R_{\rm QPO, 3\sigma}$ & 12.6 & 12.6 & 14.1 \\ $R_{\rm QPO, 4\sigma}$ & 21.4 & 21.1 & 24.1 \\ Sig. of $R_{\rm QPO, obs}$ & $5.7\sigma$ & $9.0\sigma$ & $3.8\sigma$ \\ \hline \end{tabular} \label{tab-qpo-sig} \end{table} \begin{table} \centering \caption{Results of the MLE fit and Bayesian analysis of the PSDs of RE J1034+396\ in different energy bands. $f_{\rm QPO}$ is the peak frequency of the best-fit Lorentzian profile to the QPO.
$W_{\rm QPO}$ is the FWHM of the best-fit QPO lorentzian profile in the log-log space. rms$_{\rm QPO}$ is the rms of the QPO by integrating the best-fit lorentzian profile. $\alpha_{\rm pl}$ is the slope of the continuum noise fitted by a power law. {\it Pos} is the Poisson noise power. We also list values corresponding to the Bayesian mean, 5\% and 95\% percentiles.} \begin{tabular}{lcccc} \hline Parameter & Method & 0.3-1 keV & 1-4 keV & 2-10 keV \\ \hline $f_{\rm QPO}$ & MLE & 2.83 & 2.83 & 2.87 \\ ($\times10^{-4}$ Hz) & 1$\sigma$ & 0.06 & 0.07 & 0.08 \\ & Mean & 2.80 & 2.82 & 2.91 \\ & 5\% & 2.63 & 2.67 & 1.26 \\ & 95\% & 2.96 & 2.97 & 7.40 \\ \hline $W_{\rm QPO}$ & MLE & 0.014 & 0.018 & 0.017 \\ Log (Hz) & Mean & 0.008 & 0.012 & 0.013 \\ & 5\% & 2.1E-7 & 2.0E-6 & 3.4E-9 \\ & 95\% & 0.044 & 0.057 & 0.074 \\ \hline rms$_{\rm QPO}$ & MLE & 4.0 & 12.4 & 10.8 \\ (\%) & Mean & 4.1 & 12.3 & 11.0 \\ & 5\% & 1.5 & 6.7 & 4.7 \\ & 95\% & 6.8 & 19.1 & 17.4 \\ \hline $\alpha_{\rm pl}$ & MLE & -1.29 & -0.71 & -0.37 \\ & Mean & -1.39 & -0.99 & -0.52 \\ & 5\% & -1.06 & -0.37 & -0.18 \\ & 95\% & -1.71 & -1.72 & -0.96 \\ \hline {\it Pos} & MLE & 0.63 & 6.05 & 12.95 \\ & Mean & 0.67 & 3.93 & 3.74 \\ & 5\% & 0.55 & 2.90 & 2.7E-3 \\ & 95\% & 1.37 & 17.9 & 24.8 \\ \hline \end{tabular} \label{tab-psd} \end{table} \begin{figure*} \centering \includegraphics[trim=0.05in 0.4in 0.0in 0.2in, clip=1, scale=0.5]{psd_plot4.pdf} \caption{PSDs of RE J1034+396\ in the 0.3-1, 1-4 and 2-10 keV bands, respectively. In Panel a1, the red solid line indicates the best-fit model which is decomposed into three red dotted lines, including a power law to fit the intrinsic underlying noise, a free constant to fit the Poisson noise, and a lorentzian profile to fit the QPO signal. In Panel a2, the ratio of the PSD data to the best-fit PSD continuum is shown. A strong and coherent QPO peak can be seen at $\sim2.8\times10^{-4}$ Hz (see Table~\ref{tab-psd} for more accurate values). 
The green, blue and red dashed lines indicate the 2, 3 and 4 $\sigma$ confidence limits of the fluctuation in the red noise, respectively, with the model assuming that the QPO is a real PSD component superposed on the red noise continuum (see Section~\ref{sec-bayesian}).} \label{fig-psd} \end{figure*} \subsection{X-ray PSD and the QPO Signal} \label{sec-psd} In order to quantitatively measure the QPO signal, we perform analysis in the frequency domain. We first produce the PSD{\footnote{This is actually a periodogram, which is a single realization of the intrinsic PSD. In this work we simply call it a PSD.}} for the 1-4 keV light curve, where the QPO appears more significant than in other bands (Alston et al. 2014). The first 4 ks of data are excluded because of the severe background contamination. The normalization of these PSDs is chosen such that the integral of the PSD gives the squared fractional rms variability (i.e. the Belloni-Hasinger normalization, Belloni \& Hasinger 1990). Indeed, a strong peak feature can be identified in the PSD, as shown in Figure~\ref{fig-nulltest}. The frequency of this feature is similar to previously reported values (Alston et al. 2014), implying that it should be the same QPO signal as found in observations before 2011. The QPO feature is very narrow, although there appears to be a broader base which may be partly due to the fluctuation of the underlying red noise. Taking the width to be the smallest frequency sampling interval, the quality factor of the QPO is 20 in the 1-4 keV band, suggesting that this QPO signal is highly coherent. \subsubsection{Testing Continuum-only Hypothesis for the PSD} We then perform some null hypothesis tests. Firstly, we fit a single power law model to the PSD using the maximum likelihood estimates (MLE) method (Vaughan 2010). Under the Belloni-Hasinger normalization, the theoretical Poisson power is a constant value, so we add a free constant to the power law and let the fitting determine the Poisson power.
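A sketch of the two ingredients just described: a periodogram in the fractional-rms normalization, and a Whittle (maximum likelihood) fit statistic for a power law plus a free Poisson constant. Function names and the parameterization are illustrative assumptions; the paper's actual fits follow Vaughan (2010):

```python
import numpy as np

def periodogram_rms(rate, dt):
    """Periodogram normalized so its integral over frequency gives the
    squared fractional rms (assumes an odd number of time bins, so
    there is no Nyquist-bin special case)."""
    n = len(rate)
    mean = rate.mean()
    dft = np.fft.rfft(rate - mean)
    freq = np.fft.rfftfreq(n, dt)
    power = 2.0 * dt / (n * mean**2) * np.abs(dft) ** 2
    return freq[1:], power[1:]  # drop the zero-frequency bin

def whittle_nll(params, freq, power):
    """Whittle negative log-likelihood for a power law plus constant;
    minimize this (e.g. with scipy.optimize.minimize) for the MLE."""
    log_norm, slope, log_poisson = params
    model = 10.0**log_norm * freq**slope + 10.0**log_poisson
    return np.sum(np.log(model) + power / model)
```

Parseval's theorem fixes the normalization: summing the periodogram times the frequency spacing returns the fractional variance $(\sigma/\mu)^2$ of the light curve.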
The red lines in the upper panel of Figure~\ref{fig-nulltest} show the best-fit PSD model and the separate components. The power law slope is found to be -1.09. The Poisson noise power dominates above $10^{-3}$ Hz, while the red noise power dominates lower frequencies. A standard way to estimate the significance of the deviation of the observed power $I_{\rm j}$ from the model continuum $S_{\rm j}$ at any frequency $f_{\rm j}$ is the ratio $R_{\rm j}=2 I_{\rm j}/S_{\rm j}$. This can be used to make a test statistic $T_{\rm R}={\rm max}(R_{\rm j})$. This is shown in the lower panel of Figure~\ref{fig-nulltest}. The QPO is obvious, with the observed $T_{\rm R}$ being 43.6. However, this does not simply give a significance of the QPO via the $\chi^2$ distribution with two degrees of freedom (dof) of the observed power $I_{\rm j}$, because there are also uncertainties in the model $S_{\rm j}$ which should be taken into account. Instead, we follow the more robust Bayesian prescription of Vaughan (2010) which includes the uncertainty of estimating the intrinsic PSD parameters in the simulated {\it posterior} predictive periodograms. We simulate the continuum model using the initial values of the MLE parameters, assuming a uniform prior probability density function (Vaughan 2010; Alston et al. 2014). The Python {\sc emcee} package is used to perform the Markov Chain Monte Carlo (MCMC) sampling in order to draw from the {\it posterior} of model parameters (Hogg, Bovy \& Lang 2010). We generate $10^{5}$ {\it posterior} predictive periodograms, and fit each of them with the same model. Then the {\it posterior} predictive distributions (PPDs) are derived for $T_{\rm R}$. These are shown in Figure~\ref{fig-nullhist} Panel-a1. The {\it posterior} predictive {\it p}-value of $T_{\rm R}$ is $<~10^{-5}$, i.e. none of the simulated periodograms can produce a larger $T_{\rm R}$ than the observation.
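The logic of the $T_{\rm R}$ test can be sketched in a simplified form. Here the periodogram ordinates scatter about a single fixed continuum as $\chi^2_2/2$; the full treatment of Vaughan (2010) additionally draws the continuum parameters from their MCMC posterior and refits every simulated periodogram:

```python
import numpy as np

def t_r_p_value(model, t_r_obs, n_sims=100_000, seed=0):
    """Fraction of simulated periodograms whose T_R = max(2 I_j / S_j)
    exceeds the observed value (simplified predictive test)."""
    rng = np.random.default_rng(seed)
    # each simulated periodogram ordinate is S_j * chi^2_2 / 2
    sims = model * rng.chisquare(2, size=(n_sims, len(model))) / 2.0
    t_r = (2.0 * sims / model).max(axis=1)
    return float(np.mean(t_r >= t_r_obs))
```

For a few hundred frequencies, the analytic tail probability of $T_{\rm R}=43.6$ is of order $N_f\,e^{-T_{\rm R}/2}\sim10^{-7}$, consistent with none of the $10^5$ simulated periodograms exceeding it.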
These same simulations also allow us to assess the overall goodness of fit of the power law continuum model to the data. The fit has overall $\chi^2=1053.5$, so this sum of squared standard errors can be used as a test statistic $T_{\rm SSE}=\chi^2$. The PPDs of $T_{\rm SSE}$ are shown in Figure~\ref{fig-nullhist} Panel-a2. The {\it posterior} predictive {\it p}-value of $T_{\rm SSE}$ is $<~10^{-5}$, i.e. none of the simulated periodograms can produce a larger $T_{\rm SSE}$. These results clearly rule out the power law continuum-only null hypothesis. The X-ray PSD of AGN often shows a break at high frequencies (M$^{c}$Hardy et al. 2006, 2007). Vaughan (2010) shows that a bending power law is a better fit than a power law for the PSD of RE J1034+396, when the QPO feature is not modeled separately. Thus we replace the power law model with a bending power law{\footnote{A lower limit of 0 is put on the low-frequency slope of the bending power law model in order to maintain a realistic PSD shape of AGN.}}, and repeat all the above analysis. The MLE bend frequency is $4.9\times10^{-4}$ Hz. This slightly reduces $T_{\rm R}$ to 29.4, i.e. the QPO significance is still high, but it has more impact on the overall fit quality, with the $T_{\rm SSE}$ being 812.4. The PPDs of $T_{\rm R}$ and $T_{\rm SSE}$ are shown in Figure~\ref{fig-nullhist} Panels-b1 and b2. The {\it posterior} predictive {\it p}-values for the two statistics are 0.00018 and 0.054. The bending power law model does give a better overall fit to the PSD which is within the 95\% confidence limit, but the deviation at the QPO frequency is still significant at the 0.00018 level. Therefore, we can conclude that neither a power law nor a bending power law model can fully describe the PSD of RE J1034+396. The peak feature at $(2.5-3.5)\times10^{-4}$ Hz is clearly an intrinsic signal in the PSD.
Therefore, it would not be appropriate to include this feature in the PSD's continuum fitting, and the previous suggestion regarding the bending power law being preferred over a power law is no longer valid. Indeed, if we mask out this band from the fitting, we find that there is no statistical difference between a power law and a bending power law. Below we show the results when this QPO-like feature is modelled independently. \subsubsection{More Complete PSD Models} We now add a Lorentzian component to the model to describe the QPO-like feature, and test for the significance of this additional component using a likelihood ratio test statistic, $T_{\rm LRT}$, which is derived from the difference in $\chi^2$ between the model with and without the Lorentzian. We emphasize that our application of $T_{\rm LRT}$ does not require the two models to be nested (Vaughan 2010). $T_{\rm LRT}$ is found to be very large for the power law continuum, with a value of 57.9. The {\it posterior} predictive periodograms from the previous MCMC simulations of the continuum models were fit with both a continuum-only and a continuum plus Lorentzian model, and the PPD for the change in $\chi^2$ when a Lorentzian is included is shown in Figure~\ref{fig-nullhist} Panel-a3 for the power law and Panel-b3 for the bending power law. Both have {\it posterior} predictive {\it p}-values of $T_{\rm LRT}$ $<10^{-5}$. This shows that a PSD model with a separate QPO component is significantly better than a continuum-only model. As a further test, we also compare the goodness of fit between the power law plus Lorentzian and the previous bending power law-only model. The observed $T_{\rm LRT}$ between these two models is 23.1, with the bending power law-only model being the less preferred one. Then we perform $10^5$ simulations of the {\it posterior} predictive periodograms using the bending power law-only model.
For all the simulated periodograms, the bending power law-only model is always better than the power law plus Lorentzian model. Hence this is further evidence that the observed PSD must be very different from a single bending power law. In addition to the above statistical tests, it is also important to emphasize that so far this QPO has been repeatedly detected at similar frequencies in 6 out of 9 independent {\it XMM-Newton}\ observations (see Table~\ref{tab-obs}), hence it must surely be a real signal intrinsic to the source, rather than a temporary feature due to the fluctuation of the underlying red noise. In order to model these PSDs, we first need to decide which PSD continuum model to use. Previous works reported that for RE J1034+396\ a bending power law model fits the PSD better than a single power law model (Vaughan 2010; Alston et al. 2014). However, it is important to realize that the bending of the power law was mainly driven by the QPO feature, which was never modeled as a separate component. But now that the QPO is confirmed to be an intrinsic PSD component, we should include it in the model and test the continuum PSD model again. We compare the power law model with a bending power law under the condition that the QPO is additionally modeled by a separate Lorentzian. In this case, the $T_{\rm LRT}$ statistic between the two models is only 11.5, which corresponds to a {\it posterior} predictive {\it p}-value of 0.07. This indicates that the difference between the two model fits is not very significant. Also, the best-fit bend frequency is found to be $9.7\times10^{-4}$ Hz. This frequency is nearly two orders of magnitude higher than $1.7\times10^{-5}$ Hz, which is estimated from the correlation between the break frequency, black hole mass and optical luminosity of AGN (M$^{c}$Hardy et al. 2006), for a black hole mass of $2\times10^{6}~M_{\odot}$ and optical luminosity of $2\times10^{43}$ erg s$^{-1}$ (Jin et al. 2012a).
A similar low break frequency at $\sim10^{-5}$ Hz is also seen in Chaudhury et al. (2018) in their longer timescale broadband PSD of RE J1034+396. Thus it is unlikely that this best-fit bend is an intrinsic feature of the PSD, especially as it is not present in any of the other energy bands (see Figure~\ref{fig-psd}a and c) or previous observations (Alston et al. 2014). Hence, we adopt a PSD model of {\sc powerlaw + Lorentzian + constant} for all the subsequent fits. \begin{figure*} \centering \begin{tabular}{cc} \includegraphics[trim=-0.1in 0.2in 0.0in 0.0in, clip=1, scale=0.55]{mcsample_qpopeak_hist_plot2.pdf} & \includegraphics[trim=-0.1in 0.2in 0.0in 0.0in, clip=1, scale=0.55]{mcsample_slope_hist_plot1.pdf} \\ \end{tabular} \caption{The {\it posterior} predictive distributions of the QPO frequency (Panel-a) and the slope of the PSD continuum (Panel-b) in 0.3-1 keV (red histogram), 1-4 keV (black histogram) and 2-10 keV (cyan histogram). The best-fit Gaussian profiles of these distributions are shown by the dashed lines. The vertical solid lines mark the best-fit MLE values (see Section~\ref{sec-bayesian} for detailed descriptions).} \label{fig-qpomc-ene} \end{figure*} \section{Detailed PSD and QPO Analyses} \subsection{Energy Dependence of the PSD and QPO} \label{sec-bayesian} In order to further explore the PSD and QPO, we examine their behaviours in different energy bands. Firstly, we produce PSDs for the light curves in the 0.3-1, 1-4 and 2-10 keV bands. The upper panels of Figure~\ref{fig-psd} show that a QPO feature exists in the PSDs of all three energy bands at similar frequencies, and all of them appear very narrow. Then we fit these PSDs with the PSD model mentioned above. The MLE model fits are shown as the red lines in the upper panels of Figure~\ref{fig-psd}. The best-fit parameters are determined by the MLE method and are listed in Table~\ref{tab-psd}. The Poisson noise power is higher in harder X-rays because of the decreasing count rate.
The lower panels of Figure~\ref{fig-psd} show the ratio $R_{\rm j}=2\times I_{\rm j}/S_{\rm j}$, where $S_{\rm j}$ is the continuum model only, i.e. the same power law plus Poisson constant but {\em without} including the Lorentzian. The $R_{\rm j}$ value at the QPO frequency in the 1-4 keV band now increases to 93.3, higher than the value of 43.6 in the previous section due to the power law continuum level being lower when the Lorentzian is separately modelled. We perform Bayesian analysis on the continuum power law (plus noise) with MCMC sampling as in Section~\ref{sec-obs9} to produce $10^5$ {\it posterior} predictive periodograms, and use these to put the $2,3$ and $4\sigma$ significance levels (dashed lines) on the lower panels of Figure~\ref{fig-psd}. Since we only have $10^5$ simulations, we cannot go beyond a probability of $10^{-5}$, i.e. $4.6\sigma$. However, we can assess the peak significance by scaling the PPD results; e.g. for the 1-4~keV energy band, the $R$ values for $2, 3, 4\sigma$ are 6.4, 12.6 and 21.1. By comparison, a standard $\chi^2$ distribution with 2 dof has $R$ values corresponding to these $\sigma$ levels of 6.2, 11.8 and 19.3, which are only slightly smaller than for the full simulations. The peak $R$ is 93.3 in this band, which is $4.4\times$ larger than the $R$ value at $4\sigma$. For a standard $\chi^2$ distribution with 2 dof, an $R$ value which is $4.4\times$ higher than that for $4\sigma$ would be a $9\sigma$ significance. We similarly assess the significance level of the QPO in the 0.3-1~keV band to be $5.7\sigma$, but for the 2-10~keV band the PPD directly give the significance as $3.8\sigma$. The $R$ values and significances for each energy band are listed in Table~\ref{tab-qpo-sig}. We emphasize that the confidence limits for the $R$ value in Figure~\ref{fig-psd} are used to assess the significance of the QPO at any particular frequency. 
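The $\chi^2$ thresholds quoted above are easy to reproduce: for 2 degrees of freedom the survival function of $\chi^2$ is $e^{-R/2}$, so the $R$ threshold for a given two-sided Gaussian tail probability $p$ is simply $R=-2\ln p$. A minimal numerical check:

```python
import math

def sigma_to_p(n_sigma):
    """Two-sided Gaussian tail probability corresponding to n_sigma."""
    return math.erfc(n_sigma / math.sqrt(2.0))

def chi2_2dof_threshold(p):
    """R value exceeded with probability p by a chi^2 variable with 2 dof.

    The 2-dof survival function is exp(-R/2), hence R = -2 ln p.
    """
    return -2.0 * math.log(p)

for n in (2, 3, 4):
    print(n, round(chi2_2dof_threshold(sigma_to_p(n)), 1))
# gives 6.2, 11.8 and 19.3, slightly below the simulated 6.4, 12.6 and 21.1
```

The simulated thresholds are slightly larger because the model parameters are drawn from the posterior rather than held fixed, which adds a small amount of extra scatter.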
Thus it is different from the $T_{\rm R}$ test presented in the previous section, which is used to assess the significance of a QPO signal over the entire frequency band. Note that before this study, the highest $R_{\rm QPO}$ for RE J1034+396\ was reported to be $\sim$60 for the 0.3-10 keV band in Obs-2 (Gierli\'{n}ski et al. 2008, using the same model of a power law continuum plus Lorentzian and Poisson noise). Therefore, not only does our new observation demonstrate the long-term nature of the QPO, but it also finds the highest significance level so far for an X-ray QPO signal in any AGN. We repeat the Bayesian analysis with MCMC sampling on the full PSD model (including the Lorentzian). We use these PPD to set the 5\% and 95\% uncertainty ranges on the MLE parameter values for the power spectral components, as detailed in Table~\ref{tab-psd}. We show the full PPD for the QPO frequency in each energy band in Fig~\ref{fig-qpomc-ene}a. Clearly this is consistent across all energies, which is also confirmed by the overlap of the QPO frequency uncertainty ranges in Table~\ref{tab-psd}. Table~\ref{tab-psd} shows that the width of the QPO is very small ($\Delta \log f = 0.014$ in 0.3-1 keV), and is consistent with being the same across all energy bands. The table also shows that the fractional rms amplitude of the QPO increases significantly with energy, showing that a larger fraction of the hard X-ray flux than of the soft X-ray flux is varying at the QPO frequency. However, since the flux ratio between 0.3-1 keV and 1-4 keV is 7.5, the absolute rms amplitude of the QPO in 0.3-1 keV is actually larger than that in 1-4 keV by a factor of 2.4. The spectrum of the QPO will be examined in more detail in our next paper (Paper-II). Fig~\ref{fig-qpomc-ene}b shows that the best-fit power law slopes systematically harden at higher energies, with $\alpha_{pl}=-1.29$ for 0.3-1 keV, -0.71 for 1-4 keV and -0.37 for 2-10 keV. 
Only 0.3\% of the simulations in 0.3-1 keV have power spectra as hard as observed in 1-4 keV, and only 0.01\% of the simulations in 0.3-1~keV have power spectra as hard as those observed in 2-10 keV. These results confirm that the steepening of the PSD continuum slope towards softer X-rays is an intrinsic property of RE J1034+396. However, the normalization of the power at low frequencies ($\sim 10^{-5}$~Hz) is $\sim 100$~[rms/mean]$^2$~Hz$^{-1}$, similar at all energies (see Figure~\ref{fig-psd}), which suggests that the difference is in the amount of high frequency power. Similar properties of the PSD continuum are also seen in other NLS1s (e.g. Jin et al. 2013; Jin, Done \& Ward 2016, 2017a). This can be interpreted in a model where fluctuations propagate from the disc (which dominates at low energies) to the corona (which dominates at high energies), with additional fluctuations in the corona enhancing the high frequency power in the energy bands dominated by this component (e.g. Gardner \& Done 2014). \subsection{Testing Potential Harmonics of the QPO Signal} We also search for possible harmonics associated with this highly significant QPO. In BHBs, a high-frequency QPO may have harmonics at frequency ratios of 2:3, 3:5 and 2:5, and a low-frequency QPO may exhibit a harmonic frequency ratio of 1:2. Since it is not clear if the detected QPO in RE J1034+396\ represents the fundamental or harmonic frequency, we check all possible harmonic frequencies for potential peak features. Based on the observed QPO frequency of $2.8\times10^{-4}$ Hz, the 2:3, 3:5 and 2:5 ratios predict potential peaks at $1.9\times10^{-4}$, $4.2\times10^{-4}$, $1.7\times10^{-4}$, $4.7\times10^{-4}$, $1.1\times10^{-4}$ and $7.0\times10^{-4}$ Hz. The 1:2 ratio predicts potential peaks at $1.4\times10^{-4}$ and $5.6\times10^{-4}$ Hz. 
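These candidate frequencies follow from scaling the observed frequency by each ratio in both directions, since the detected QPO could be either member of a harmonic pair. A simple enumeration reproduces the list above:

```python
f_qpo = 2.8e-4  # observed QPO frequency of RE J1034+396 in Hz

# Harmonic frequency ratios seen in BHB QPOs: 2:3, 3:5, 2:5 and 1:2.
ratios = [(2, 3), (3, 5), (2, 5), (1, 2)]
candidates = []
for a, b in ratios:
    # The detected QPO may be either member of the pair, so scale both ways.
    candidates += [f_qpo * a / b, f_qpo * b / a]
for f in sorted(candidates):
    print(f"{f:.1e} Hz")
# 1.1e-04, 1.4e-04, 1.7e-04, 1.9e-04, 4.2e-04, 4.7e-04, 5.6e-04, 7.0e-04 Hz
```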
However, it is already clear from the lower panel of Fig~\ref{fig-psd} that there are no features above $3\sigma$ significance at any other frequency in any of the energy bands. The strongest feature which is even close to any of the potential harmonics listed above is at $1.9\times10^{-4}$ Hz in the PSD of 2-10 keV band (see Figure~\ref{fig-psd} Panel-c), but this is only seen at $\sim~2.1~\sigma$. No other energy bands show peaks with $>~2\sigma$ significance at any of the potential harmonic frequencies. The feature at $4.8\times 10^{-4}$ Hz in the 0.3-1~keV band has $\sim~2.4~\sigma$ significance, but this frequency is not harmonically related. No significant harmonics are seen in the Obs-2 data either (Gierli\'{n}ski et al. 2008; Vaughan et al. 2010; Alston et al. 2014). Therefore, we conclude that there are no significant harmonics of the QPO signal in the current data of RE J1034+396. The QPO of RE J1034+396\ is often compared to the 67 Hz high-frequency QPO of the BHB GRS 1915+105, as it approximately scales with the mass difference between these two accreting black holes (Middleton et al 2009). The overall energy spectra of GRS 1915+105\ in observations showing the 67~Hz are very similar to those of RE J1034+396\ with a strong disc, a smaller warm Compton component, and an even smaller hot Compton tail (Middleton \& Done 2010). GRS 1915+105\ shows three harmonic peaks at 27 Hz (Belloni et al. 2001), 34 Hz (Belloni \& Altamirano 2013) and 40 Hz (Strohmayer 2001), but only the 34 and 41 Hz QPOs appear simultaneously with the 67 Hz QPO. The 34 Hz QPO has a fractional rms of 0.8\% and a quality factor of 13.1 in 2-15 keV. In comparison, the 67 Hz QPO has a rms of 2.0\% and a quality factor of 24.7 in the same energy band, and so the 34 Hz QPO is 60\% weaker than the 67 Hz QPO, but with a similar line width. 
The 41 Hz QPO has a fractional rms of 2.4\% and a quality factor of 7.7 in 13-27 keV, while in the same energy band the 67 Hz QPO has a rms of 1.9\% and a quality factor of 19.6, thus the 41 Hz QPO is 26\% more powerful than the 67 Hz QPO, but its profile is 56\% broader. Assuming that the QPO of RE J1034+396\ has similar harmonics to the 67 Hz QPO in GRS 1915+105, we can estimate that the intrinsic PSD of RE J1034+396\ may have an extra peak at $1.4\times10^{-4}$ Hz with a rms of 5.0\%, or at $1.7\times10^{-4}$ Hz with a rms of 15.6\%. Such features are not observed in the PSDs of RE J1034+396\ in Figure~\ref{fig-psd}. We test this explicitly using the 1-4 keV PSD. We add the expected harmonic at $1.4\times 10^{-4}$~Hz to the best-fit PSD model and simulate $10^{5}$ periodograms. Only a fraction 0.052 of the simulations with the harmonic have power at that frequency as low as observed. We repeat the simulations for the potential harmonic at $1.7\times10^{-4}$ Hz, and find only a fraction 0.01 have power this small. Therefore, the non-detection of these two potential harmonics is probably not due to the random fluctuation of the PSD swamping the signal, but rather it is intrinsic to RE J1034+396. The above analysis rules out the presence of harmonics of similar relative strengths to those observed in GRS 1915+105\ in the current observation of RE J1034+396, but we cannot rule out the possibility that much weaker harmonics may exist, but are swamped by fluctuations in the PSD. Furthermore, GRS 1915+105\ does not often exhibit the 67 Hz QPO and its harmonics simultaneously. Clearly we cannot exclude the possibility that future observations of RE J1034+396\ may show these harmonics. \section{Long-term Variation of the QPO} We compare some key properties of the QPO between Obs-9 and previous observations, especially Obs-2 where the background contamination is low and the QPO signal can be detected across the entire 0.3-10 keV band. 
Such a comparison allows us to verify the robustness of various QPO properties, as well as checking if there is any evidence for the long-term variation of the QPO. \subsection{Long-term Variation of the QPO Frequency} \label{sec-qpovar-freq} The QPO frequencies reported in previous observations are all in the range of $(2.5-2.7)\times10^{-4}$ Hz (see Table~\ref{tab-obs}), except for the new Obs-9, in which the frequency increases to $2.83\times10^{-4}$ Hz. Therefore, it is necessary to assess the significance of this difference. We only compare Obs-9 with Obs-2, because both of these two observations have low background, and the QPO signal is well determined. For Obs-2, Gierli\'{n}ski et al. (2008) reported that within the 23-83 ks data segment the QPO signal was more significant, thus we perform the comparison with the 0-83 ks and 23-83 ks data segments, separately. The data within 83-91 ks of Obs-2 are excluded because of the background contamination. Since the QPO frequency does not change significantly with the photon energy, we use the entire 0.3-10 keV data to achieve the best S/N in the light curve. The same Bayesian analysis is performed to derive the PPD of the QPO frequency. Figure~\ref{fig-qpofreq-compare} compares the QPO frequency between Obs-2 and Obs-9. In Panel-a, we compare the data-to-model ratio around the QPO frequencies for the three datasets. In Obs-2, when the 0-83 ks data segment is used, two nearby QPO peaks can be detected. The stronger peak is at $2.63\times10^{-4}$ Hz, while the weaker one is at $2.42\times10^{-4}$ Hz. If the first 23 ks of data are excluded, the lower-frequency peak becomes much weaker. Hence, we think the low-frequency peak is mainly due to the instantaneous variation of the QPO period (Czerny et al. 2010; Hu et al. 2014). In comparison, the QPO in Obs-9 is clearly a single peak, and is shifted to a higher frequency. The histograms in Panel-b indicate the PPDs of the QPO frequency for the three datasets. 
The vertical solid lines indicate the best-fit MLE values. For Obs-2 the best-fit MLE period of the QPO is $3920\pm150$ s in 0-83 ks, and $3800\pm70$ s in 23-83 ks, which are consistent with the results reported before (e.g. Gierli\'{n}ski et al. 2008; Alston et al. 2014; Hu et al. 2014). The QPO period in Obs-9 is, however, found to be $3550\pm80$ s, which is $250\pm100$ s (i.e. $\sim$7\% of the QPO period) smaller than in Obs-2. The difference between the PPDs of the two observations is also obvious. Compared to the PPD of the QPO frequency for the 0-83 ks segment of Obs-2, the QPO frequency in Obs-9 has a {\it posterior} predictive {\it p}-value of 0.019. For the 23-83 ks segment of Obs-2, the {\it p}-value is 0.028. Based on these results, we report that the QPO frequency in Obs-9 is higher than that found in Obs-2. It is relevant to mention that the QPO also has a flickering nature within a single observation, and that the instantaneous period varies between 3000-4000 s (Hu et al. 2014). However, this does not mean that the observed long-term variation of the QPO frequency is simply due to the short-term variation. In fact, both Obs-2 and Obs-9 contain more than 20 QPO cycles, and so our comparison of the QPO frequency is statistically meaningful. However, it is not known if the increase of the QPO frequency (i.e. the decrease of the QPO period) within the last 11 years is a monotonic trend or a fluctuation, because in the other observations the QPO signal was not well constrained due to poor data quality (Alston et al. 2014). Clearly, future {\it XMM-Newton}\ observations of RE J1034+396\ can provide further evidence about the long-term variation of the QPO frequency, and hence help us to understand the underlying physical mechanisms involved. 
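The quoted period change and its uncertainty follow from simple quadrature error propagation on the two MLE periods. A quick check of the arithmetic, using the 23-83 ks Obs-2 value:

```python
import math

p2, e2 = 3800.0, 70.0  # Obs-2 (23-83 ks) QPO period and error, seconds
p9, e9 = 3550.0, 80.0  # Obs-9 QPO period and error, seconds

dp = p2 - p9              # period decrease
edp = math.hypot(e2, e9)  # errors added in quadrature
frac = 100.0 * dp / p2    # change as a percentage of the QPO period

print(f"{dp:.0f} +/- {edp:.0f} s ({frac:.0f}% of the period)")
# 250 +/- 106 s (7% of the period), quoted as 250 +/- 100 s (~7%) in the text
```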
\begin{figure*} \centering \begin{tabular}{cc} \includegraphics[trim=0.0in 0.2in 0.0in 0.0in, clip=1, scale=0.56]{psd_compare_obs29.pdf} & \includegraphics[trim=0.0in 0.2in 0.0in 0.0in, clip=1, scale=0.56]{obscomapre_qpopeak_hist_plot2.pdf} \\ \end{tabular} \caption{Comparison of the QPO frequency observed in Obs-2 and Obs-9 for the 0.3-10 keV band. Panel-a: PSDs of different data segments relative to their best-fit continuum model around the QPO frequency. Solid vertical lines indicate frequencies of different QPO peaks. Two nearby narrow QPO peaks are seen in the 0-83 ks data segment of Obs-2. Panel-b: different histograms indicate the {\it posterior} predictive distributions of the QPO frequency for different data segments. Dashed lines indicate the best-fit Gaussian profiles. Vertical solid lines indicate the best-fit MLE values.} \label{fig-qpofreq-compare} \end{figure*} \subsection{Reversed QPO Time-lag between Obs-2 and Obs-9} \label{sec-qpovar-lag} Another important phenomenon related to the QPO is the phase lag (i.e. a time lag in the time domain). Gierli\'{n}ski et al. (2008) applied the light curve folding method and found a $\sim$260 s lag between 0.3-0.4 keV and 2-10 keV (leading) in Obs-2. Middleton, Done \& Uttley (2011) used the same data and method, and found a $\sim370$ s lag between 0.2-0.3 keV and 1-10 keV (leading). However, we notice that this method is sensitive to the accuracy of the folding period. A more robust method is to perform the phase lag analysis in the Fourier domain (e.g. Uttley et al. 2014), because it differentiates the variability into different frequency bins. Zoghbi \& Fabian (2011) applied this method to the Obs-2 data, and found a lag of $\sim$500 s between 0.4-0.6 keV and 1.5-2.0 keV (leading), with a coherence of $\sim$0.6 around the QPO frequency. 
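For a single frequency bin, the Fourier-domain lag estimate reduces to the phase of the cross spectrum divided by $2\pi f$. The toy sketch below demonstrates the sign convention on a synthetic pair of light curves; it is a deliberately minimal single-bin version, whereas the real analysis averages cross spectra over segments and frequency bins and also computes the coherence (Uttley et al. 2014):

```python
import cmath
import math

def dft_bin(x, k):
    """Single-bin discrete Fourier transform of the sequence x."""
    n = len(x)
    return sum(x[j] * cmath.exp(-2j * math.pi * k * j / n) for j in range(n))

def time_lag(soft, hard, k, dt=1.0):
    """Lag of `hard` behind `soft` at frequency bin k (positive = hard lags).

    The lag is the cross-spectrum phase divided by 2*pi*f, with the sign
    chosen so that a positive value means the hard band lags the soft band.
    """
    f = k / (len(soft) * dt)
    cross = dft_bin(soft, k).conjugate() * dft_bin(hard, k)
    return -cmath.phase(cross) / (2.0 * math.pi * f)

# Synthetic check: the "hard" curve is the "soft" curve delayed by 2 samples.
n, k, delay = 128, 8, 2
soft = [math.cos(2.0 * math.pi * k * j / n) for j in range(n)]
hard = [math.cos(2.0 * math.pi * k * (j - delay) / n) for j in range(n)]
print(time_lag(soft, hard, k))  # ~2.0, i.e. the hard band lags by 2 samples
```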
They also showed that the lag spectrum does not change significantly with the inner radius of the annular source extraction region, thereby ruling out any significant influence from pile-up. For consistency, we first reproduce the above results for Obs-2. Three inner radii ($r_{\rm s}$) of the annular source extraction region are tested in order to check the effect of pile-up. The resultant lag vs. frequency and coherence vs. frequency plots are shown in Figure~\ref{fig-lag-compare} Panels-a and c. The time lag and coherence values in the QPO frequency bin of $(2.5-3.5)\times10^{-4}$Hz are listed in Table~\ref{tab-qpo-lag}. Our analysis confirms that in Obs-2 the QPO in 0.3-1 keV lags behind 1-4 keV by 200-300 s for all values of $r_{\rm s}$ from 0" to 12.5". As the S/N drops towards larger inner radii, the lag becomes less significant with larger errors and the coherence becomes smaller (see Table~\ref{tab-qpo-lag}). However, even with $r_{\rm s}=$ 0" the lag is only detected at a significance of $2~\sigma$. We then investigate the time lag in Obs-9, but without trying different source extraction regions as these data are not affected by pile-up. The results for lag vs. frequency and coherence vs. frequency are shown in Figure~\ref{fig-lag-compare} Panels-b and d. Surprisingly, we find that the lag in Obs-9 has an opposite direction from Obs-2, i.e. the QPO phase in the 1-4 keV band lags behind that in the 0.3-1 keV band. The absolute lag value is $430\pm50$ s, which is much more significant than that found in Obs-2, and is also associated with a high coherence of $0.89\pm0.06$. Hence, the soft X-ray lead in Obs-9 is clearly a more significant and robust measurement than the soft X-ray lag found in Obs-2. As an additional check we apply the light curve folding method to the QPO in Obs-2 ($r_{\rm s}$=0") and Obs-9. 
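Folding a light curve on a trial period simply means averaging it in phase bins. A minimal sketch on a synthetic sinusoidal modulation (the bin number, sampling cadence and amplitude are arbitrary choices for illustration, not the values used in the actual pipeline):

```python
import math

def fold(times, rates, period, nbins=16):
    """Average a light curve into phase bins of the given folding period."""
    sums, counts = [0.0] * nbins, [0] * nbins
    for t, r in zip(times, rates):
        b = int(((t % period) / period) * nbins) % nbins
        sums[b] += r
        counts[b] += 1
    return [s / c if c else math.nan for s, c in zip(sums, counts)]

# Synthetic QPO: ~28 cycles of a sinusoid with a 3550 s period, whose peak
# sits at phase 4.5/16, i.e. at the centre of phase bin 4.
period = 3550.0
times = [100.0 * i for i in range(1000)]  # 100 s sampling
rates = [1.0 + 0.1 * math.cos(2.0 * math.pi * (t / period - 4.5 / 16.0))
         for t in times]
profile = fold(times, rates, period)
print(max(range(16), key=lambda b: profile[b]))  # peak recovered in bin 4
```

Because the folded profile depends on the assumed period, a small error in the folding period smears the peak across phase bins, which is why the Fourier-domain lag analysis above is the more robust of the two methods.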
We take the QPO period measured from the entire 0.3-10 keV band as the folding period, which is 3800 s for Obs-2 and 3550 s for Obs-9 (see Section~\ref{sec-qpovar-freq}). The folded light curves are produced for the 0.3-1, 1-4 and 2-10 keV bands, as shown in Figure~\ref{fig-flc-compare}. Indeed, we also find that the QPO phase in 0.3-1 keV lags behind the 1-4 keV band by 0.024 in phase in Obs-2, while it leads the 1-4 keV band by 0.120 in Obs-9. Interestingly, we find that the QPO phase in 2-10 keV in Obs-2 also leads that for 1-4 keV by 0.044, although the S/N of the 2-10 keV band light curve is much lower. These results are consistent with the QPO lag analysis in the frequency domain. The opposite time lag between Obs-2 and Obs-9 is difficult to understand. One possible explanation is the influence of the stochastic variability. Figure~\ref{fig-lag-compare} Panels-c and d show that in Obs-2 the coherence in the QPO frequency bin is smaller than the coherence at lower frequencies where the red noise dominates. This leads to the idea that the low-frequency stochastic variability suppresses the coherence in the QPO frequency bin, overwhelms the intrinsic lag of the QPO, and causes the apparent soft lag with a relatively low coherence. In comparison, the low-frequency stochastic variability in Obs-9 is weaker, and so its QPO shows a significant hard lag with a coherence that is much higher than at all other frequencies. This red noise contamination can happen if there is a significant aliasing effect of the low-frequency power (Uttley et al. 2014). Additionally, the stochastic variability may also have a physical impact on the QPO properties (Czerny et al. 2010; Hu et al. 2014). Another possibility is contamination by pile-up. Although it has been shown by Zoghbi \& Fabian (2011) and our analysis that the shape of the lag vs. 
frequency spectrum does not change significantly as the S/N decreases, the coherence does decrease significantly as more photons from the centre of the PSF are excluded, and a low coherence often means that the corresponding lag is not reliable. However, it is also possible that the phase lags observed in Obs-2 and Obs-9 are both real, in which case the reversed time lag would be an interesting new phenomenon. In any case, it is crucial to carry out further observations in order to understand the true lag behaviours of this QPO. \begin{table} \centering \caption{The coherence and time lag between 0.3-1 keV and 1-4 keV in the QPO frequency bin of $(2.5-3.5)\times10^{-4}$ Hz for Obs-2 and Obs-9. $r_{\rm s}$ indicates different inner radii of the annular source extraction region for checking the pile-up effect. A positive lag indicates that the soft X-ray variability leads the hard X-ray. $N_{\rm data}$ indicates the number of data points in the periodogram being included in the QPO frequency bin.} \begin{tabular}{llccc} \hline Obs & $r_{\rm s}$ & $N_{\rm data}$ & Time Lag & Coherence \\ \hline Obs-2 & 0" & 9 & -180 $\pm$ 90 s & 0.63 $\pm$ 0.14 \\ Obs-2 & 7.5" & 9 & -200 $\pm$ 140 s& 0.42 $\pm$ 0.18 \\ Obs-2 & 12.5" & 9 & -260 $\pm$ 160 s& 0.36 $\pm$ 0.18 \\ Obs-9 & 0" & 7 & 430 $\pm$ 50 s & 0.89 $\pm$ 0.06 \\ \hline \end{tabular} \label{tab-qpo-lag} \end{table} \begin{figure*} \centering \includegraphics[trim=0.3in 0.2in 0.0in 0.3in, clip=1, scale=0.65]{cohlagfreq4_all.pdf} \caption{Time-lag and coherence vs. frequency between the light curves in 0.3-1 keV and 1-4 keV for Obs-2 and Obs-9, separately. In each panel the shadowed region indicates the QPO frequency bin of $(2.5-3.5)\times10^{-4}$Hz. In Panels a and b, a positive lag means the soft X-ray variability leads the hard X-ray. 
In Panels a and c, the black, blue and red data points indicate the results for annular source extraction regions with inner radii of 0", 7.5" and 12.5", respectively.} \label{fig-lag-compare} \end{figure*} \begin{figure*} \centering \begin{tabular}{cc} \includegraphics[trim=0.0in 0.0in 0.0in 0.0in, clip=1, scale=0.53]{lc_folded_plot3_obs2.pdf} & \includegraphics[trim=0.0in 0.0in 0.0in 0.0in, clip=1, scale=0.53]{lc_folded_plot3.pdf} \\ \end{tabular} \caption{Folded QPO light curves of RE J1034+396. Panels a1 to a3: folded QPO light curves in Obs-2 with a folding period of 3800 s. Two periods are shown to reveal the periodicity. The best-fit sinusoidal function with a phase shift is shown by the solid line in each panel. The vertical dashed lines indicate the QPO peaks, where phase differences can be found between different energy bands. Panels b1 to b3: folded light curves in Obs-9 with a folding period of 3550 s.} \label{fig-flc-compare} \end{figure*} \begin{figure} \centering \includegraphics[trim=0.1in 0.3in 0.0in 0.0in, clip=1, scale=0.5]{allspecs2_r40.pdf} \caption{The spectra of RE J1034+396\ in all the 9 {\it XMM-Newton}\ observations. The blue, magenta and red spectra are from Obs-1, Obs-3 and Obs-6, respectively. Spectra from the other observations are presented in different grey scales as they have similar shapes. The ratio is relative to the best-fit hard X-ray power law component. It is clear that Obs-3 and Obs-6, which do not exhibit any QPO signal, have stronger soft X-ray excesses than the other observations with the QPO signal.} \label{fig-allspec} \end{figure} \subsection{Anti-correlation between the QPO Detectability and the Soft X-ray Intensity} \label{sec-spec-sx} The trigger of the QPO in RE J1034+396\ is still not clear (Middleton et al. 2011). 
Previous studies have shown that the detection of this QPO is associated with the spectral hardness ratio, as the only two observations showing no QPO signal both have higher soft X-ray fluxes (Alston et al. 2014). To investigate this issue further, we perform simultaneous spectral fitting to all the time-averaged spectra from previous {\it XMM-Newton}\ observations, with a typical spectral model of super-Eddington NLS1s which includes an absorbed power law and a soft X-ray Comptonisation component (e.g. Jin et al. 2017a). The Galactic absorption is fixed at $N_{\rm H}=1.36\times10^{20}$ cm$^{-2}$ (Willingale et al. 2013), and the intrinsic absorption in RE J1034+396\ is left as a free parameter. This model fits the time-averaged spectra very well, with the total $\chi^2$ being 3942 for 3252 dof for all the 9 time-averaged EPIC-pn spectra. The slope of the soft excess is characterized by the photon index of a single power law fitted to the 0.3-2 keV spectrum. The best-fit $N_{\rm H}$, fluxes and photon indices are all listed in Table~\ref{tab-obs}. The spectra and their ratios relative to the best-fit power law above 2 keV are shown in Figure~\ref{fig-allspec}. Note that Obs-1 is an exception because its clean exposure time is only 1.8 ks, and so we do not consider it as a useful dataset for the QPO study. Of the remaining observations, Obs-3 and Obs-6 have no QPO signal, and their soft excesses are much stronger and steeper than in the other observations where a QPO can be detected. There is no significant difference in the hard X-ray flux or spectral slope between observations with and without a QPO, thus it is unlikely that the hard X-rays contain the QPO trigger. This anti-correlation between the QPO detectability and the soft X-ray flux also appears consistent with the fact that the fractional rms variability of the QPO in 0.3-1 keV is less than that seen in harder X-rays. 
However, the soft X-ray fluxes of the two non-QPO observations are only a factor of 2-3 higher than in the other QPO observations, so the non-detection of the QPO is not simply due to dilution by an enhanced non-QPO soft X-ray component. We suggest that there should be some fundamental change in the accretion flow during Obs-3 and Obs-6, which enhances the soft X-ray emission and eliminates the QPO signal. This issue will be investigated in more detail in our Paper-II. \section{Discussion} \label{sec-qpo-origin} The strong QPO signal in RE J1034+396\ is a rare phenomenon in AGN, and so its presence has attracted much interest and raised many questions. Many models have been proposed to explain the QPO mechanism. For example, it was suggested that there is probably an X-ray emitting blob in the accretion flow of RE J1034+396\ which is periodically obscured by a warm absorber, so that the QPO signal is produced along with an absorption line whose variation is weakly correlated with the QPO's phase (Maitra \& Miller 2010, but also see Middleton, Uttley \& Done 2011). In order to explain the correlation between the instantaneous flux and QPO period, other models have been proposed, such as invoking a magnetic flare in a Keplerian orbit which has an intrinsic oscillation (Czerny et al. 2010), an oscillating shock in the accretion flow (Czerny et al. 2010; Das \& Czerny 2011; Hu et al. 2014), a spiral wave in a constant rotation state (Czerny et al. 2010), a temporary hot spot carried by the accretion flow with the Keplerian motion (Hu et al. 2014), and $g$-mode diskoseismology driven by the gravitational-centrifugal force (Hu et al. 2014). However, due to the lack of a more detailed characterisation of the QPO properties and its long-term variability, all these models remain poorly constrained. The new results concerning the QPO properties presented in this work provide tests for some of these theoretical models. 
Firstly, we now know that this particular QPO is a long-term, recurrent feature of this source, which has appeared in RE J1034+396\ from time to time during the past 11 years. This result suggests that the QPO is produced by a quite stable mechanism, and so disfavors models involving shorter timescales. For example, an X-ray emitting blob carried by the accretion flow at 10$R_{\rm g}$ away from a $\sim10^{6}M_{\sun}$ black hole would be accreted into the black hole within just a few months, but we observe a period shortening of only $250\pm100$ s over the past 11 years. So this QPO model can be ruled out with some confidence. Secondly, we now have more observational information with which to examine which energy band produces the QPO. One of the main results is that in Obs-9 the QPO in 0.3-1 keV leads 1-4 keV by $430\pm50$ s with a high coherence. This is more consistent with the possibility that the QPO is driven by a soft X-ray component, although the time lag alone is not sufficient to pin down the causality. This possibility is further supported by the anti-correlation found between the detectability of the QPO and the intensity of the soft X-ray excess, and also by the fact that the absolute rms amplitude of the QPO is larger in 0.3-1 keV than in harder X-ray bands. For comparison, we do not observe any systematic difference in the hard X-ray power law between the time-averaged spectra with and without the QPO (see Table~\ref{tab-obs}). For example, the hard X-ray photon indices and fluxes of the two non-QPO observations (i.e. Obs-3 and Obs-6) are not significantly different from the QPO observations. Therefore, it seems unlikely that the origin of the QPO lies in the hard X-ray band. Moreover, the QPO frequency does not change significantly with the energy bands, so either the QPO-related soft and hard X-ray regions have similar sizes, or the mechanism is such that the QPO timescale does not depend on the size of its emitting region. 
One possibility is that the QPO arises from the soft X-ray band, and is transmitted to the hard X-ray band via Comptonisation of the QPO-modulated soft emission. Thirdly, the QPO in RE J1034+396\ has often been compared to the high-frequency QPOs in the micro-quasar GRS 1915+105. The similarity between RE J1034+396\ and GRS 1915+105 in terms of their X-ray spectra, PSD, and super-Eddington accretion states suggests that the 67 Hz QPO in GRS 1915+105 may be an analogue of the QPO in RE J1034+396. This was first proposed by Middleton et al. (2009) (also see Middleton, Uttley \& Done 2011; Done 2014), but the (apparent?) soft X-ray lag in the Obs-2 data is opposite to the soft X-ray lead seen in the 67 Hz high frequency QPO in GRS 1915+105 (M\'{e}ndez et al. 2013), thereby breaking the scaling relation. However, as we point out in this work, the associated coherence of the QPO time lag is low in 2007, so the soft lag is not very significant in these data, especially after the pile-up correction. Instead, our new data from Obs-9 show that the highly coherent QPO in RE J1034+396\ has a soft X-ray lead, strongly supporting the analogy to the 67 Hz QPO in GRS 1915+105. Other features of the QPO are also consistent, e.g. small but significant changes in the 67~Hz QPO frequency are also seen in GRS~1915+105 (Belloni et al. 2019), similar to the fractional change in QPO frequency seen in RE J1034+396\ when comparing Obs-2 and Obs-9. The lack of a harmonic signal in RE J1034+396\ is not a concern, because the `harmonic' features in GRS 1915+105 do not appear simultaneously with the 67 Hz QPO very often, and they are all significantly weaker than the 67 Hz QPO (M\'{e}ndez et al. 2013). So what then is the origin of the 67~Hz QPO in GRS 1915+105? These high frequency QPOs are rare in BHBs, but are much more common in neutron-star X-ray binaries (XRBs). 
In these objects we generally see two QPOs in the kHz region, an upper and a lower frequency separated by a few hundred Hz (see e.g. the review by van der Klis 2006). The lower frequency QPO shows a soft X-ray lag, while the upper frequency one generally shows a soft X-ray lead (de Avellar et al. 2013; Peille et al. 2015; Troyer et al. 2018). The (very rare) BHB high frequency QPOs are probably the counterpart of the upper frequency QPO in neutron-star XRBs (M\'{e}ndez et al. 2013). These are likely produced by oscillations within the Comptonising boundary layer between the disc and neutron star (Gilfanov et al. 2003; see also Karpouzas et al. 2020 for a detailed model of the lower frequency QPO). Whatever their origin is, we conclude that the QPO of RE J1034+396\ is indeed similar to the 67 Hz QPO in GRS 1915+105, where the soft X-rays lead the hard X-rays (M\'{e}ndez et al. 2013). \section{Conclusions} \label{sec-conclusion} In this paper we report the detection of a strong X-ray QPO in the new {\it XMM-Newton}\ observation of RE J1034+396\ in 2018, which is separated by 7 years from the previous {\it XMM-Newton}\ observation and 11 years from the original discovery of this QPO signal. New and detailed analyses have been conducted that verify and extend the previously known QPO properties, which are summarized below: \begin{itemize} \item we confirm that the X-ray QPO in RE J1034+396\ is a robust phenomenon which has occurred, at least intermittently, for more than 11 years. Its presence is most significant in the latest observation taken in 2018, which yields a 9$\sigma$ detection significance in the 1-4 keV band. The quality factor is $\sim$20, and the folded light curve exhibits a well defined sinusoidal shape, and so the QPO is highly coherent. \item in the new Obs-9 data the QPO period is found to be $3530\pm80$ s in the 1-4 keV band, and shows no significant change between energy bands. 
However, the fractional rms of the QPO increases from 4\% in 0.3-1 keV to 12.4\% in the 1-4 keV band, although the absolute rms amplitude of the QPO in 0.3-1 keV is actually a factor of 2.4 higher than in the 1-4 keV band. \item we find that the QPO period is shorter in the new observation than was observed before. It was $3800\pm70$ s in the 1-4 keV light curve in Obs-2, but decreases by $250\pm100$ s in Obs-9 (i.e. $\sim$7\% of the QPO period). The significance of this long-term variation of the QPO period is also confirmed by our simulations performed following the Bayesian approach. \item our analysis shows that the QPO in the 0.3-1 keV band leads the 1-4 keV band by $430\pm50$ s, and the time lag is accompanied by a high coherence. This result is further confirmed by the direct folding of the light curves in these energy bands. This soft X-ray lead is opposite to the soft X-ray lag reported previously for Obs-2. However, our re-analysis of these data indicates that the QPO lag found in Obs-2 is associated with a lower coherence, and so it is less robust than that observed in Obs-9. Therefore, either the previously reported soft X-ray lag is not intrinsic to the QPO, or the lag has changed direction from Obs-2 to Obs-9, which if confirmed would be an interesting new phenomenon to explain. Clearly future observations are required to address this issue. \item by analyzing the data from all previous {\it XMM-Newton}\ observations, we show that the two observations without a QPO show stronger soft X-ray excesses than the other observations which display evidence of a QPO. Therefore, we conclude that there is a long-term anti-correlation between the intensity of the soft X-ray excess and the detectability of a QPO signal. \end{itemize} These newly discovered and refined properties of the QPO in RE J1034+396\ show that it is more similar to the 67 Hz QPO in GRS 1915+105. We also suggest that the QPO of RE J1034+396\ is probably driven by a soft X-ray component. 
In order to further understand the mechanisms of the QPO and the soft excess, we will present a more comprehensive spectral-timing analysis of the QPO over broader frequency ranges in our forthcoming Paper-II. Finally, we emphasize the importance of carrying out long-term monitoring of the QPO and the spectral state of RE J1034+396. This source provides one of the best laboratories in which to study the physics of the QPO phenomenon in AGN, and so we recommend it as one of the highest-priority AGN targets for future X-ray missions such as the Einstein Probe mission ({\it EP}), the enhanced X-ray Timing and Polarimetry mission ({\it eXTP}) and the Advanced Telescope for High-ENergy Astrophysics ({\it Athena}). \section*{Acknowledgements} We thank the anonymous referee for providing useful comments to improve the paper. CJ acknowledges the National Natural Science Foundation of China through grant 11873054, and the support by the Strategic Pioneer Program on Space Science, Chinese Academy of Sciences through grant XDA15052100. CD and MJW acknowledge the Science and Technology Facilities Council (STFC) through grant ST/P000541/1 for support. This work is based on observations conducted by {\it XMM-Newton}, an ESA science mission with instruments and contributions directly funded by ESA Member States and the USA (NASA). This research has made use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.
\section{Introduction} Subject to the effects of wind, a flat sandy surface is unstable and evolves into a regular periodic pattern of wavelength of the order of 10 cm and height of a few centimeters (see Fig. \ref{death-valley}) \cite{bagnold,sharp,hunter1,origin,reineck,hunter2,allen,kocurek,schenk}. A slight sand accumulation on the surface tends to expose grains on the upwind stoss-side to the action of the wind, and shelter the grains on the downwind lee-side. Dislodged grains at the stoss tend to move toward the lee of the sand accumulation, the initial perturbation is amplified, and small ripples develop, which migrate in the direction of the wind. Smaller ripples travel faster than larger ripples--- since the migration velocity of ripples depends on the amount of grains being transported during migration--- so that small ripples merge with the larger ones. Due to ripple merging, the wavelength of the ripples grows in time as observed in the field \cite{sharp,werner} and in wind tunnel experiments \cite{seppala,anderson2}. Due to the action of the wind, grains fly above the bed and strike the ground at small angles and with velocities given by the wind velocity. Successive impacts of grains are called {\it saltation}. As a result of bombardment of saltating grains, a number of ejected grains are generated from the static bed which subsequently move by surface creep. These grains (called {\it reptating} grains) move distances smaller than the typical saltating jump (which is controlled by the wind velocity) until they are captured and become part of the bed. Climbing of ripples as seen in Fig. \ref{diagram} is described by a vector of translation which has two components \cite{hunter2}: the component in the horizontal direction is the rate of ripple migration across the sediment surface, and the component in the vertical direction is the net rate of deposition, defined as the rate of displacement of the sand surface in the vertical direction (see Fig. 
\ref{climb}a). The angle of climbing ($\xi$ in Fig. \ref{climb}a) defines different types of lamination structures (see Chapter 9 \cite{allen}). If the angle of climb is smaller than the slope of the stoss, then the structures are called ``subcritically'' climbing ripples, and these are the sedimentary structures which are the focus of this study. \begin{figure} \centerline{ \hbox{ \epsfxsize=8.cm \epsfbox{fig1.ps} } } \vspace{.5cm} \caption{Wind sand ripples on a sandy surface in Death Valley National Park, California.} \label{death-valley} \end{figure} Due to the climbing of ripples and segregation of grains during deposition and transport, ripple deposits develop lamination structures which are later preserved during solidification of the rock \cite{hunter1,hunter2,allen}. The basic types of the smallest stratification structures in climbing ripples and small aeolian dunes relevant to this study are summarized by Hunter \cite{hunter1} (see also \cite{allen}). In this paper we will focus on two lamination structures which are commonly found in sedimentary rocks: \begin {itemize} \item Inverse-graded climbing ripple lamination: one of the most common lamination structures which are formed because grains composing the ripples differ in size \cite{bagnold,hunter1,allen,bunas}. Large ejected grains travel in shorter trajectories than small grains, so that small grains are preferentially deposited in the trough of the ripples while large grains are deposited preferentially near the crest. Due to this segregation effect, the migration of a single ripple produces two layers of different grain size parallel to the climbing direction of the ripple as shown in Fig. \ref{climb}a. This lamination structure is called inverse-graded--- according to the nomenclature of Hunter \cite{hunter1}--- since in a pocket of two consecutive layers formed by the climbing of a single ripple, the layer of large grains is above the layer of small grains (Fig. \ref{climb}a). 
\item Cross-stratification: Migration of ripples also produces successive layers of fine and coarse grains not in the direction of climbing but parallel to the downwind face of the ripples (see Figs. \ref{diagram} and \ref{climb}b) \cite{bagnold,hunter1,reineck,hunter2,allen,kocurek}. These structures--- called cross-stratification or foresets--- are also coarser toward the troughs \cite{bagnold,brown,williams,makse1} as opposed to the segregation in inverse-graded climbing ripples. \end{itemize} \begin{figure} \centerline{ \hbox{ \epsfxsize=8.cm \epsfbox{fig2.ps} } } \vspace{.5cm} \caption{Diagram showing the climbing of the ripples and the formation of cross-stratification patterns (from \protect\cite{reineck}).} \label{diagram} \end{figure} The instability giving rise to aeolian ripple morphologies has been the subject of much study. The classic work of Bagnold \cite{bagnold} has been followed up by a number of studies. Modern models of wind ripple deposits are usually defined in terms of the splash functions introduced by Haff and coworkers \cite{splash,haff1} (see also \cite{anderson2} for a review) in order to model impact processes. These models have successfully reproduced the instability leading to ripple deposits when the sand grains are of only one size \cite{anderson2,haff2,anderson1}. When such models are defined for two species of grains differing in size \cite{bunas}, they result in patterns which resemble very closely the stratigraphic patterns found in inverse grading climbing ripple structures. It was in this context that Forrest and Haff proposed \cite{haff2} that grading changes in ripple lamination are related to fluctuations in wind or transport. However, Anderson and Bunas \cite{bunas} found using a cellular automaton model that inverse-graded ripple lamination is due to different hopping lengths of small and large grains. 
These models do not include the interactions between the moving grains and the static surface \cite{anderson1}--- i.e., grains are assumed to stop as soon as they reach the sand surface--- which are expected to be relevant for the dynamics in the rolling face of the ripples \cite{landry,terzidis}. Recent studies by Terzidis {\it et al.} \cite{terzidis} have reproduced the ripple instability using the theory of surface flows of grains proposed by Bouchaud and collaborators \cite{bouchaud}. This theory takes into account the interaction between rolling and static grains. Moreover, recent studies of avalanche segregation of granular mixtures show dramatic effects when such interactions are taken into account \cite{makse2,bdg}. \begin{figure} \centerline{ \hbox{ \epsfxsize=9.cm \epsfbox{fig3.ps} } } \vspace{.5cm} \caption{({\bf a}) Cross-section of a sandstone showing small-scale lamination in 'subcritically' (as defined in \protect\cite{hunter1} and \protect\cite{allen}) inverse-graded ripples (along with the quantities defined in the model). Each pair of layers of small and large grains is produced by the climbing of a single ripple. ({\bf b}) Example of cross-stratification. Each climbing ripple produces a series of layers of small and large grains oriented across the direction of climbing, parallel to the downwind face. The basic types of the smallest stratification structures in ripples and small aeolian dunes relevant to this study are summarized in \protect\cite{hunter1}.} \label{climb} \end{figure} In this article, we use this formalism to also include the segregation effects arising when considering two types of grains differing in size and shape. We first formulate a discrete model for two-dimensional transverse aeolian climbing ripples which incorporates simple rules for hopping and transport.
Then, we incorporate the different properties of the grains, such as size and roughness, and we show that pattern formation in ripple deposits arises as a consequence of ``grain segregation'' during the flow and collisions. Specifically, we show that three segregation mechanisms contribute to layer formation during ripple migration (Fig. \ref{mechanism}): \begin{itemize} \item Size segregation due to different hopping lengths of small and large grains: the hopping length of the large reptating grains is smaller than the hopping length of small grains. \item Size segregation during transport and rolling: larger grains tend to roll down to the bottom of troughs while small grains tend to be stopped preferentially near the crest of ripples. \item Shape segregation during transport and rolling: rounded grains tend to roll down more easily than more faceted or cubic grains, so that the more faceted grains tend to be at the crest of the ripples, and more spherical grains tend to be near the bottom. \end{itemize} As a result of a competition between these segregation mechanisms, a richer variety of stratigraphic patterns emerges: inverse-graded ripple lamination occurs when segregation due to different jump lengths dominates, and cross-stratification (and normal-graded ripple lamination as well) occurs when size and shape segregation during rolling dominates. Thus, a general framework is proposed to unify the mechanisms underlying the origin of the most common wind ripple deposits. In the case of inverse-graded climbing ripple lamination we confirm a previously formulated hypothesis by Bagnold \cite{bagnold} and Anderson and Bunas \cite{bunas} that the lamination is due to hopping-induced segregation arising from collisions.
Regarding the origin of cross-stratification structures, we show that they arise under conditions similar to those in recent table-top experiments of avalanche segregation of mixtures of large faceted grains and small rounded grains poured in a vertical Hele-Shaw cell \cite{makse1,makse2}. \begin{figure} \centerline{ \hbox{ \epsfxsize=7.cm \epsfbox{fig4.ps} } } \vspace{.5cm} \caption{Three segregation mechanisms acting when the grains differ in size and shape.} \label{mechanism} \end{figure} In what follows we first define the model for the case of a single species of grains. We show that the model predicts the formation of a ripple structure, and propose a simplified continuum theory to understand the onset of the instability leading to the ripple structure. Then, we generalize to the case of two types of grains differing in size and shape, which give rise to the segregation mechanisms and the lamination structures seen in aeolian climbing ripples. \section{One species model} We start by defining the model for the case of one type of grain in a two-dimensional lattice of lateral size $L$ with periodic boundary conditions in the horizontal $x$-direction. Our main assumption is to consider two different phases \cite{terzidis,bouchaud,makse2,bdg,pgg}: a reptating or rolling phase composed of grains moving with velocity $v$ by rolling, and a static phase composed of grains in the bulk. We also consider a curtain of external saltating grains which impact--- at randomly chosen positions on the static surface--- from the left to the right at a small angle $\beta$ to the horizontal (Fig. \ref{climb}a). Shadow effects are considered by allowing only impacts with ballistic trajectories which do not intersect any prior portion of the surface. Upon impacting the sand surface, a saltating grain dislodges $n_{rep}$ grains from the static surface, and jumps a distance $l_{sal}$ after which the saltating grain is incorporated into the reptating phase.
The $n_{rep}$ grains dislodged by the saltating grain jump a distance $l_{rep}\ll l_{sal}$. Upon reaching the surface, the dislodged grains form part of the reptating phase and move with velocity $v$. At every time step ($\Delta t$) only one reptating grain (in contact with the surface) interacts with the static grains of the bulk according to the angle of repose $\theta_r$ \cite{coarse-grain}; the remaining reptating grains move a distance $v \Delta t$ to the right. The angle of repose is the maximum angle at which a reptating grain is captured on the sand bed. If the local coarse-grained angle of the surface is smaller than the angle of repose, $\theta < \theta_r$, the interacting reptating grain will stop and will be converted into a static grain. If the angle of the surface is larger than the angle of repose, $\theta > \theta_r$, the reptating grain is not captured--- and moves to the right a distance $v \Delta t$ with the remaining reptating grains--- but ejects a static grain from the bulk into the reptating phase. The model predicts the formation of a ripple structure as seen in aeolian sand formations. The onset of the instability leading to ripples occurs with ripples of small wavelength. However, due to the fact that smaller ripples travel faster than larger ripples--- since smaller ripples have less material to transport--- merging of small ripples on top of large ripples is observed. This gives rise to the change of the characteristic wavelength as a function of time, as is observed in wind tunnel experiments \cite{seppala,anderson2}, as well as in field observations \cite{sharp,werner}. We find that our model predicts an initial power law growth of the wavelength of the ripples $\lambda(t) \sim t^{0.4}$ (see Fig. \ref{wavelength}). The wavelength of the ripples seems to saturate after this power law growth to a value determined by the saltation length.
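A wavelength estimate of this kind can be obtained from the dominant Fourier mode of the height profile. The sketch below is purely illustrative (the synthetic profile, lattice size, and the choice of the strongest nonzero mode as the ripple scale are assumptions, not part of the model itself):

```python
import cmath
import math

def dominant_wavelength(h):
    """Estimate the ripple wavelength as the wavelength of the strongest
    nonzero Fourier mode of the periodic height profile h."""
    n = len(h)
    mean = sum(h) / n
    best_m, best_p = 1, 0.0
    for m in range(1, n // 2 + 1):  # skip the m = 0 (mean height) mode
        c = sum((h[x] - mean) * cmath.exp(-2j * cmath.pi * m * x / n)
                for x in range(n))
        if abs(c) > best_p:
            best_m, best_p = m, abs(c)
    return n / best_m  # wavelength in lattice units

# A synthetic profile with four ripples on a 64-site lattice:
h = [math.sin(2 * math.pi * 4 * x / 64) for x in range(64)]
print(dominant_wavelength(h))  # -> 16.0 (four ripples on 64 sites)
```

Applying such an estimator to successive snapshots of the simulated bed yields the $\lambda(t)$ curves discussed here.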
Moreover, we also observe a subsequent growth of the wavelength (not shown in the figure) up to a new saturation value close to a multiple of the saltation length. This process continues and a series of plateaus is observed until one large ripple (or dune) of the size of the system is formed. The series of plateaus was also observed in a recent lattice model of ripple merging \cite{vandewalle}. However, we believe that the existence of plateaus might be due to an artifact of the discreteness of the lattice, and may have no physical meaning: the rules associated with the derivatives of the sand surface break down when the local slope is bigger than a given value determined by the discretization used to define the slope of the surface. A similar artifact was observed in a discrete model of ripple morphologies formed during surface erosion via ion-sputtering in amorphous materials \cite{rodolfo}. Moreover, we will see that when we introduce two species of grains, the wavelength saturates at a constant value and the plateaus disappear. \begin{figure} \centerline{ \hbox{ \epsfxsize=8cm \epsfbox{fig5.ps} } } \caption{Growth of the wavelength of the ripples according to our model and comparison with field and experimental observations. The number of impacts in the model is translated to time using a typical impact rate of $10^7$ impacts $m^{-2}$ $s^{-1}$ \protect\cite{anderson2}. The simulation data have been shifted vertically by a multiplicative factor, so that only the general trend (the slope) of the curve should be compared with the experimental data.} \label{wavelength} \end{figure} In Fig. \ref{wavelength} we compare the prediction of our model with the available experimental data from field and wind tunnel experiments. We observe a fair agreement between our model and the experiments. However, we believe that this agreement is not conclusive.
In fact, the same experimental data can also be fitted with the same accuracy by a logarithmic growth of the wavelength of the ripples as shown recently by Werner and Gillespie \cite{werner}, who proposed a discrete stochastic model of ripple merging and found a logarithmic growth of the wavelength (see also \cite{vandewalle}). \subsection{Continuum formulation of the model} The onset of the instability leading to the initial ripple structure can be studied using the continuum theory proposed by Bouchaud {\it et al.} \cite{bouchaud} to study avalanches in sandpiles. Recently, the set of coupled equations for surface flow of grains of \cite{bouchaud} has been adapted to the problem of the ripple instability \cite{terzidis}. Here, we propose a version of this theoretical formalism suited to the physics of our model to study the initial ripple formation. Let $R(x,t)$ describe the amount of reptating grains in the rolling phase and $h(x,t)$ the height of the static bed at position $x$ and time $t$, which is related to the angle of the sand surface by $\theta(x,t) \equiv -\partial h/\partial x$ for small angles. Our set of convective-diffusion equations for the rolling and static phases is the following \cite{bouchaud}: \begin{mathletters} \begin{equation} \frac{\partial R(x,t) }{\partial t} = - v \frac{\partial R}{\partial x} + D \frac{\partial^2 R}{\partial x^2} + \Gamma(R,\theta), \label{r} \end{equation} \begin{equation} \frac{\partial h(x,t)}{\partial t} = - \Gamma(R,\theta), \label{h} \end{equation} \label{rh} \end{mathletters} \noindent where $v$ is the drift velocity of the reptating grains along the positive $x$ axis taken to be constant in space and time, and $D$ a diffusion constant. The interaction term $\Gamma$ takes into account the conversion of static grains into rolling grains, and vice versa.
We propose the following form of $\Gamma$ consistent with our model: \begin{equation} \Gamma(R,\theta) \equiv \alpha [\beta - \theta(x,t) ] + \gamma [ \theta(x,t) - \theta_r ] R(x,t), \label{gamma} \end{equation} where $\alpha$ and $\gamma$ are two constants with dimension of velocity and frequency respectively. $\alpha$ is proportional to the number of collisions per unit time of the saltating grains with the sand bed, and also to the number of ejected grains per unit time, while $\gamma$ is the number of collisions per unit time between the reptating grains and the sand bed when they creep on the downwind side of the ripple. The first term in (\ref{gamma}) takes into account the spontaneous (independent of $R(x,t)$) creation of reptating grains due to collisions of the saltating grains--- a process which is most favorable at angles of the bed smaller than $\beta$. The second term takes into account the interaction of the reptating grains with the bed of static grains: the rate of the interaction is proportional to the number of grains $R(x,t)$ interacting with the sand surface. Rolling grains become part of the sand surface if the angle of the surface $\theta(x,t)$ is smaller than the repose angle $\theta_r$ (``capture''), while static grains become rolling grains if $\theta(x,t)$ is larger than $\theta_r$ (``amplification''). Higher order terms are neglected since we are interested in the linear stability analysis determining the origin of the ripple instability. Insight into the mechanism for ripple formation is obtained by studying the stability of the uniform solution of Eqs. (\ref{rh}): \begin{equation} R_0=\alpha\beta / (\gamma \theta_r) ~~~~~~~ h_0=0. \end{equation} We perform a stability analysis by looking for solutions of the type: $R(x,t) = R_0 + \bar{R}(x,t)$ and $\theta(x,t) = \theta_0 + \bar \theta(x,t)$. 
The linearized equations for $\bar R$ and $\bar \theta$ are \begin{mathletters} \begin{equation} \displaystyle{ \frac{\partial \bar R(x,t) }{\partial t} }=\displaystyle{ - v \frac{\partial \bar R} {\partial x} - v_m \bar \theta - \gamma \theta_r \bar R + D \frac{\partial^2 \bar R}{\partial x^2},} \end{equation} \begin{equation} \displaystyle{ \frac{\partial \bar \theta(x,t) }{\partial t}} =\displaystyle{ - v_m ~\frac{\partial \bar \theta} {\partial x} - \gamma \theta_r \frac{ \partial \bar R}{\partial x}, } \end{equation} \label{bar} \end{mathletters} where the migration velocity of the traveling wave solution is \begin{equation} v_m = \alpha (1-\beta/\theta_r), \label{trans} \end{equation} which indicates that the ripple velocity is proportional to the number of collisions per unit time of saltating grains with the sand bed, and to the number of ejected grains per collision. By Fourier analyzing Eqs. (\ref{bar}) we find a set of two homogeneous algebraic equations for $\bar R$ and $\bar\theta$ with non-trivial solutions only when the determinant of the resulting $2 \times 2$ matrix is zero. We obtain a dispersion relation $w_\pm(k)$ where the two branches $\pm$ correspond to the solutions of the resulting quadratic equation for $w(k)$. Then we take the limit $v_m/v \ll 1$, which corresponds to the physical fact that the translation velocity (\ref{trans}) is much smaller than the rolling velocity of the individual grains (see \cite{terzidis}), and we arrive at the following dispersion relation for $w_{\_}(k)$ ($w_+(k)$ gives rise only to stable modes): \begin{equation} \mbox{Im} [w_{\_}(k)] = \frac {-(v_m/v)~ (\gamma \theta_r v^2)~k^2 } {(\gamma\theta_r)^2 + k^2 (2 \gamma\theta_r D + v^2) + D^2 k^4} +O( \frac{v_m}{v})^2.
\end{equation} The asymptotic forms for small and large $k$ are \begin{equation} \begin{array}{ll} \displaystyle{ \mbox{Im} [w_{\_}(k)] \rightarrow - \frac {v_m v}{\gamma \theta_r} k^2, } & \mbox{~~~$k\to 0$}\\ \displaystyle{ \mbox{Im} [w_{\_}(k)] \rightarrow - \frac {v_m v \gamma \theta_r}{D^2} \frac{1}{k^2}, } & \mbox{~~~$k\to \infty$} \end{array} \end{equation} which indicates that the branch $w_{\_}(k)$ corresponds to only unstable modes, $\mbox{Im}[w_{\_}(k)] < 0$. A similar stability analysis was performed by Anderson \cite{anderson1} using a continuity model neglecting the interaction between rolling and static grains and by Terzidis {\it et al.} \cite{terzidis} with the continuum model Eqs. (\ref{rh}) but with a different interaction term than (\ref{gamma}). Terzidis {\it et al.} find a band of unstable modes up to a given cutoff wave vector at large $k$. This behaviour is due to higher order derivatives of $\theta$ appearing in the interaction term used in \cite{terzidis}. The most unstable mode $k^*$ in our model is given by $\partial w_{\_}(k^*)/\partial k = 0$, with $k^* = \sqrt{\gamma \theta_r/D}$, which gives an estimate of the initial wavelength of the ripples: \begin{equation} \lambda = 2\pi \sqrt{D/(\gamma \theta_r)}. \end{equation} The final wavelength will be determined by higher order nonlinear terms arising from a more complicated interaction term than the one used in (\ref{gamma}). As mentioned above, using the full discrete model we find that after the appearance of initial small undulations, the wavelength of the ripples grows due to ripple merging. In the discrete model, the wavelength seems to saturate to a finite value, although this value seems to be determined by the finite size of the simulation system (we use periodic boundary conditions in the horizontal direction). \section{A model for two species differing in size and shape} Next we generalize the model to the case of two types of grains differing in size and shape.
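Before doing so, the most-unstable-mode result above admits a quick numerical sanity check: scanning the dispersion relation for the most negative $\mbox{Im}[w_{\_}(k)]$ should recover $k^*=\sqrt{\gamma\theta_r/D}$. The parameter values below are illustrative assumptions (chosen with $v_m/v \ll 1$), not values fitted to data:

```python
import math

# Illustrative parameter values (assumed, not from the paper)
gamma, theta_r, D, v, v_m = 2.0, 0.6, 0.5, 3.0, 0.1

def im_w_minus(k):
    """Leading-order Im[w_-(k)] from the dispersion relation in the text."""
    a = gamma * theta_r
    num = -(v_m / v) * a * v**2 * k**2
    den = a**2 + k**2 * (2.0 * a * D + v**2) + D**2 * k**4
    return num / den

# Scan k and locate the most unstable (most negative Im) mode numerically
ks = [0.001 * i for i in range(1, 20000)]
k_num = min(ks, key=im_w_minus)
k_star = math.sqrt(gamma * theta_r / D)  # analytic prediction
print(k_num, k_star)  # both close to 1.549 for these parameters
```

The agreement simply reflects that the quartic denominator makes $k^2/(\mbox{den})$ stationary at $k^4 = (\gamma\theta_r/D)^2$, independently of the particular parameter values chosen.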
The segregation mechanisms discussed in the introduction are incorporated in the model as follows: \begin{itemize} \item Size segregation due to different hopping lengths: we define $l_{\alpha}$ (to replace $l_{rep}$ of the monodisperse case) as the distance a reptating grain of type $\alpha$ travels after being hit by a saltating grain. If we call the small grains type $s$ and the large grains type $l$, then $l_{l}<l_{s}$. This effect was incorporated in the discrete stochastic model of Anderson and Bunas \cite{bunas} using a generalization of the splash functions proposed by Haff \cite{haff1}. \end{itemize} \noindent The interaction between the rolling grains and the surface is characterized by four different angles of repose: $\theta_{\alpha\beta}$ for the interaction of an $\alpha$ reptating grain and a $\beta$ static grain (replacing $\theta_r$ of the monodisperse case). The dynamics introduced by the angle of repose are relevant at the downwind face where two extra mechanisms for segregation of grains act in the system: \begin{itemize} \item Size segregation during transport and rolling: large rolling grains are found to rise to the top of the reptating phase while the small grains sink downward through the gaps left by the motion of larger grains, an effect known as percolation or kinematic sieving \cite{bagnold,drahun-savage}. Due to this effect, only the small grains interact with the surface when large grains are also present in the rolling phase. Small grains are captured first near the crest of the ripples, causing the larger grains to be convected further to the bottom of the ripples. We incorporate this dynamical segregation effect by considering that, when large and small grains are present in the reptating phase, only the small ones interact with the surface according to the angle of repose, while the large grains, being at the top of the reptating phase, do not interact with the static grains and are convected downward.
Thus, the large grains interact with the surface only when there are no small grains present in the reptating phase. A similar percolation mechanism was introduced in the discrete and continuum models of \cite{makse2} to understand the origin of stratification patterns in two-dimensional sandpiles of granular mixtures. \item Shape segregation during transport and rolling: rounded grains tend to roll down more easily than more faceted or cubic grains, so that the more faceted grains tend to be at the crest of the ripples. This segregation effect is quantified by the angles of repose of the pure species, since the repose angle is determined by the shape and surface properties of the grains and not by their size: the rougher or the more faceted the surface of the grains, the larger the angle of repose. If the large grains are more faceted than the small grains we have $\theta_{ss}< \theta_{ll}$, while when the small grains are more faceted $\theta_{ll} < \theta_{ss}$ \cite{makse2}. The species with the larger angle of repose has a larger probability of being captured at the sand bed than the species with the smaller angle of repose. \end{itemize} We notice that the fact that the grains have different sizes leads to different cross-angles of repose $\theta_{\alpha\beta}$ \cite{makse2,bmdg}. If the large grains are type $l$ and the small are type $s$ then $\theta_{ls}< \theta_{sl}$, which in turn leads to another size segregation effect. However, by incorporating the size segregation due to the different cross-angles of repose we would get the same effect as the one incorporated in the model by the percolation effect (the small grains are preferentially trapped at the top of the slip-face), so that, in what follows, we will consider only the percolation effect for simplicity--- i.e., large reptating grains are allowed to interact with the sand surface only when there are no small reptating grains below the large grains at a given position.
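These two-species interaction rules can be sketched compactly. The repose-angle values below are illustrative assumptions (chosen with $\theta_{sl}>\theta_{ls}$, as above), and the functions are a minimal reading of the percolation and capture rules rather than the full model:

```python
import math

# Cross angles of repose theta[(alpha, beta)] in radians, for an alpha
# rolling grain on a beta static bed.  Values are illustrative assumptions.
THETA = {("s", "s"): math.radians(30), ("s", "l"): math.radians(35),
         ("l", "s"): math.radians(20), ("l", "l"): math.radians(25)}

def interacting_grain(rolling):
    """Percolation rule: small grains sink to the bottom of the rolling
    phase, so an 's' grain interacts with the bed whenever one is present;
    an 'l' grain interacts only if no 's' grain is in the rolling column."""
    if "s" in rolling:
        return "s"
    return "l" if rolling else None

def capture(rolling, bed_top, angle):
    """Decide capture for the rolling column at one site.
    rolling: list of grain types ('s'/'l') in the rolling phase here
    bed_top: type of the topmost static grain
    angle:   local coarse-grained surface angle (radians)
    Returns the captured grain type, or None if the grain keeps rolling."""
    g = interacting_grain(rolling)
    if g is not None and angle < THETA[(g, bed_top)]:
        rolling.remove(g)  # grain joins the static bed
        return g
    return None
```

For example, with both species rolling over a large-grain bed at a 32-degree slope, the small grain is captured first (32 < 35 degrees for `("s", "l")`), while the remaining large grain keeps rolling downhill (32 > 25 degrees for `("l", "l")`), which is the rolling-face segregation described above.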
We also notice that, in general, the distance a grain travels after being kicked by a saltating grain depends on the type of colliding grain and the type of grain on the bed. Thus we define $l_{\alpha\beta}$ as the distance a reptating grain of type $\beta$ travels after being hit by a saltating grain of type $\alpha$. If we call the large grains type $l$ and the small grains type $s$ then we have: $l_{sl}<l_{ll}<l_{ss}<l_{ls}$. However, in practice we make the following approximation: $l_{sl}=l_{ll}\equiv l_l$, and $l_{ss}=l_{ls}\equiv l_s$, i.e., we do not consider which type of grain is colliding. Thus, three different mechanisms--- size segregation due to percolation in the reptating phase, shape segregation in avalanches, and size segregation due to the different reptating jumps $l_{\alpha}$--- compete in the system, giving rise to a rich variety of lamination patterns as we show below. \subsection{Size segregation due to different hopping lengths} We first investigate the morphologies predicted by the model in the case where the different reptating jumps $l_{\alpha}$ play the dominant role in the segregation process. This is the case when $l_{s}-l_{l}>l_{l}$ and when the grains have approximately the same shape, $\theta_{ss}\approx\theta_{ll}$. We start our simulations from a flat sand surface composed of a 50/50 by volume mixture of small and large grains and we observe the system evolve into ripples traveling in the direction of the wind (Fig. \ref{seq}). The resulting morphology seen in Fig. \ref{seq} resembles the most common climbing ripple structures such as those documented in field observations \cite{hunter1,hunter2,allen,kocurek} (Fig. \ref{climb}a). This result confirms the hypothesis of Anderson and Bunas \cite{bunas} that the origin of inverse-graded lamination in climbing ripples is due to the size segregation effect produced by the different hopping lengths of small and large grains.
In our simulations we observe that large grains (dark color) are deposited at the top of the ripples while small grains (light color) are deposited preferentially at the bottom of the ripples since $l_{l}<l_{s}$, resulting in a lamination structure showing inverse grading (layer of large grains on top of the layer of small grains). The system size used in our simulations corresponds to 256 bins, with $\Delta t=1$ s, $n_{rep}=4$ grains per impact, $l_{sal}=90$ cm, $\beta = 10^o$. For the case shown in Fig. \protect\ref{seq} we use $l_s = 25$ cm and $l_l = 5$ cm, and $\theta_{\alpha\beta}= 30^o$. \begin{figure} \centerline{ \hbox{ \epsfxsize=8.5cm \epsfbox{fig6.ps} } } \vspace{.5cm} \caption{Morphology predicted by the model when segregation due to different jump lengths is dominant, showing inverse-graded climbing ripple lamination. From left to right, we show a sequence of three stages in the dynamics of the climbing ripples. Starting from a flat sand surface of small (light color) and large grains (dark color), the system evolves into ripples climbing in the direction of the wind.} \label{seq} \end{figure} The efficiency of the segregation mechanisms is sensitive to the relative values of the parameters of the model. For instance we find that by decreasing the value of $l_{sal}$ relative to the reptating lengths, the efficiency of segregation is greatly reduced in the case of inverse-graded lamination. The structures shown here are all subcritically climbing from left to right (for a review of the terminology and definition of different climbing structures in ripple deposits see \cite{hunter1}, and Chapter 9 of \cite{allen}). Supercritical climbing ripples are much less common and are produced in slowly translating ripples with a small number of ejected grains per impact \cite{haff2}.
We also notice that the shape of ripples found in Nature is more asymmetric, with the downwind side at a steeper angle than the upwind side, while our model predicts a more symmetric triangular shape of the ripples. A more realistic asymmetric ripple shape can be obtained by considering the exact trajectories of the flying grains and the complex interaction of grains with the air flow, as shown by the previous models of Anderson and Bunas \cite{bunas}. \subsection{Segregation due to rolling and transport} Next we consider the case where the difference between the reptating jumps is small, $|l_{s}-l_{l}|/l_{s}\ll 1$ (or when the downwind face is large compared to $l_{s}$ or $l_{l}$) and the grains differ appreciably in shape ($\theta_{ss}\ne \theta_{ll}$) and also in size. Then segregation in the rolling face is the relevant mechanism for segregation and we do not take into account the segregation due to different hopping lengths. We first consider the case where the small grains are the roughest and the large grains are the most rounded ($\theta_{ss} > \theta_{ll}$). In this case a segregation solution along the downwind face is possible since both size segregation due to percolation and shape segregation act to segregate the small-faceted grains at the top of the crest and the large-rounded grains at the bottom of troughs. The result is a lamination structure (Fig. \ref{phase}a) which resembles the structure of climbing ripples of Fig. \ref{seq} but with the opposite grading: small grains at the top of each pair of layers (normal-graded climbing ripple lamination). This type of lamination is not very common in Nature. On the other hand, when the large grains are rougher than the small grains ($\theta_{ll} > \theta_{ss}$), a competition between size and shape segregation occurs \cite{makse3}.
Size segregation due to percolation tends to segregate the large, cubic grains at the bottom of the downwind face, while shape segregation tends to segregate the same grains at the crest of the ripples. A segregation solution along the downwind face as in Fig. \ref{phase}a is then not possible, and the result of this instability is the appearance of layers of small and large grains parallel to the downwind face (Fig. \ref{phase}b), rather than perpendicular to it as in Fig. \ref{phase}a. These structures correspond to the more common cross-stratification patterns found in rocks \cite{hunter1,hunter2,allen,kocurek}. In addition to the stratification parallel to the rolling face, the deposits also become coarser toward the bottom of the downwind face. The mechanism for cross-stratification involves two superimposed phenomena. {\it (i)} Segregation on the rolling face: a pair of layers is laid down in a single rolling event, with the small grains segregating themselves underneath the larger grains through a ``kink'' mechanism, as seen in \cite{makse1,makse2}. This kink in the local profile of static grains provides local stability to the rolling grains, trapping them. The kink moves uphill, forming layers of small and large grains. {\it (ii)} The climbing of ripples due to grain deposition and ripple migration. The conditions for cross-stratification and the dynamical segregation process found with the present model are similar to the findings of the stratification experiments on granular mixtures in two-dimensional vertical cells performed in \cite{makse1}, and also to the models of \cite{makse2}. We observe the formation, on the slip face of the ripples, of an upward-traveling wave or ``kink'' at which grains are stopped during the avalanche, as observed in \cite{makse1,makse2}. Accordingly, the wavelength of the layers is proportional to the flux of grains reaching the rolling face, and hence to the number of saltating grains impacting the surface. 
For the case shown in Fig. \protect\ref{phase}a we use $l_s = l_l = 5$ cm, $\theta_{sl}=\theta_{ss} = 30^o$, and $\theta_{ls}=\theta_{ll} = 20^o$, while for Fig. \protect\ref{phase}b we use $l_s = l_l = 5$ cm, $\theta_{sl}=\theta_{ss} = 20^o$, and $\theta_{ls}=\theta_{ll} = 30^o$. \begin{figure} \centerline{ \hbox{ \epsfxsize=9cm \epsfbox{fig7.ps} } } \vspace{.5cm} \caption{Resulting morphologies predicted by the model after $10^7$ impacts, when segregation on the rolling face is dominant, showing ({\bf a}) normal grading lamination of large, rounded grains (dark color, at the bottom of the downwind face) and small, rough grains (light color, at the top); and ({\bf b}) cross-stratification of large, rough grains (dark color) and small, rounded grains (light color).} \label{phase} \end{figure} \subsection{Phase diagram: General case} We have also investigated the morphologies obtained when the three segregation effects (shape segregation, the percolation effect, and hopping induced size segregation) act simultaneously. The resulting morphologies are shown in Fig. \ref{phase2}, along with the phase diagram summarizing the results obtained with the model in Figs. \ref{seq} and \ref{phase}. The case $|l_{s}-l_{l}|>l_s$ and $\theta_{ss}\approx\theta_{ll}$ corresponds to the inverse-graded lamination shown in Fig. \ref{seq}, as indicated in the phase diagram of Fig. \ref{phase2}. Moreover, when $l_{s}-l_{l}>l_s$, and for any other value of the angles $\theta_{ss}$ and $\theta_{ll}$, we find that hopping induced segregation dominates over rolling induced segregation. As can be seen from the upper two panels in Fig. \ref{phase2}, the morphologies obtained in these cases resemble that of Fig. \ref{seq}, corresponding to inverse-graded climbing ripple lamination. The upper left panel shows the results when the smaller grains are the roughest ($\theta_{ll}-\theta_{ss}<0$) and clearly shows the same lamination as in Fig. \ref{seq}. 
In this case the hopping induced segregation dominates completely over the rolling segregation due to the angles of repose and percolation. In the upper right panel we show the case where the large grains are the roughest ($\theta_{ll}-\theta_{ss}>0$). Here we also see the same lamination pattern as in Fig. \ref{seq}, but with a tenuous trace of the cross-stratification of Fig. \ref{phase}b; the rolling induced segregation plays a more important role relative to the hopping induced segregation than in the upper left panel. The region $|l_{s}-l_{l}|<l_{s}$ is discussed in Fig. \ref{phase}: our model shows normal grading lamination when $\theta_{ll}-\theta_{ss}<0$ (Fig. \ref{phase}a) and cross-stratification when $\theta_{ll}-\theta_{ss}>0$ (Fig. \ref{phase}b). Interesting morphologies are predicted when $l_{s}-l_{l}<-l_s$. This might be the case for a large difference in density between the grains, i.e., very light large grains and heavy small grains. The patterns obtained with the model are shown in the two lower panels of Fig. \ref{phase2}. The late stage in the dynamical evolution of the patterns shown in the lower left panel ($\theta_{ll}-\theta_{ss}<0$) and the lower right panel ($\theta_{ll}-\theta_{ss}>0$) of Fig. \ref{phase2} resembles the patterns of normal grading climbing ripples (Fig. \ref{phase}a) and cross-stratification (Fig. \ref{phase}b), respectively. This indicates the dominance of avalanche segregation (shape segregation and the percolation effect) over hopping induced segregation in this region of the phase space (the opposite of what we find in the region $l_{s}-l_{l}>l_{s}$). However, as seen in Fig. \ref{phase2}, these patterns appear only in the late stages of the evolution. As seen in the two lower panels, in the early stages the lamination structure appears to be dominated by the hopping induced segregation. 
We observe a transition at intermediate times in the simulation, clearly visible as a change in the climbing angle of the ripples in the lower left and lower right panels of Fig. \ref{phase2}. The late stage patterns appear only when the slip face is well developed, after a transient of small ripples dominated by hopping segregation. We are not aware of any wind tunnel experiment or field observation showing a similar dynamical transition, so it would be interesting to explore this transition experimentally. \begin{figure} \centerline{ \hbox{ \epsfxsize=9cm \epsfbox{fig8.ps} } } \caption{Phase diagram predicted by the model. The ``$s$'' refers to the small grains (light colors), and the ``$l$'' refers to the large grains (dark colors). $\theta_{ss}>\theta_{ll}$ means that the smaller grains are the roughest, and $\theta_{ll}>\theta_{ss}$ means that the larger grains are the roughest. The three areas in the phase space near the axes refer to the morphologies already studied in Figs. \protect\ref{seq}, \protect\ref{phase}a, and \protect\ref{phase}b.} \label{phase2} \end{figure} \section{Summary} In summary, we have shown that lamination in sedimentary rocks at around the centimeter scale is a manifestation of grain size and grain shape segregation. Thin layers of coarse and fine sand are present in these rocks, and understanding how layers in sandstone are created might aid, for instance, in oil exploration, since great amounts of oil are locked beneath layered rocks. Our findings suggest a unifying framework for understanding the origin of inverse grading, normal grading, and cross-stratification patterns in wind ripple deposits. We identify the conditions under which these different lamination patterns arise in sandstone. 
In particular, we find that cross-stratification is only possible when the large grains are rougher than the small grains composing the layered structure, in agreement with recent experiments on avalanche segregation in two-dimensional sandpiles. The fact that cross-stratification patterns are very common in Nature might then be due to fragmentation processes usually leading to smaller grains that are more rounded than the larger grains. So far we have explored a two-dimensional cross-section of the ripple deposits. Extensions to three-dimensional systems are ongoing, as new physics may emerge when taking into account the lateral motion of the grains \cite{landry}.
\section{Introduction} \label{sec:intro} Reconnection is a process by which electromagnetic energy is transferred to the plasma, heating and accelerating the particles. In a magnetospheric context, this process occurs on the dayside, where closed magnetic flux is opened, and on the nightside, where open flux is closed and transported back to the dayside. In an open magnetosphere, it thus drives the magnetospheric convection, as initially postulated by Dungey (1961). Near the reconnection site, the ions and electrons become decoupled from the magnetic field. The breaking of the frozen-in condition takes place in two stages. First, the ions are decoupled from the magnetic field in what is called the ion diffusion region (IDR). Later, the electrons decouple in a much smaller region, appropriately called the electron diffusion region (EDR), where the magnetic fields also reconnect and change topology, i.e., from closed to open and vice versa (Vasyliunas, 1975; Sonnerup, 1979). Diffusion regions are thus central to our understanding of energy conversion through reconnection; hence the need to identify them properly. Our focus here is on IDRs. Thus far, IDRs have been identified by visual inspection of the data. This has been done for large surveys of Geotail data (Nagai et al. 1998, 2005) and of Cluster data (Eastwood et al., 2010). Several important properties came to light, which enhanced our understanding of these important regions. Usually, visual identification depends on the observation of correlated reversals of the Sun-Earth component ($\hat{x}$) of the flow velocity and the north-south ($\hat{z}$) component of the magnetic field in proper coordinates. This reversal is taken as a sign of the X-line moving past the spacecraft. The IDR identification is then further strengthened when Hall magnetic and electric fields are identified, as predicted by theory (Sonnerup, 1979) and notably observed by Øieroset et al. (2001). 
These fields arise from the differential motion of the electrons and ions. In the tail, the region we direct our attention to here, the Hall magnetic fields typically form a quadrupolar structure pointing in the out-of-plane direction, while the Hall electric fields are bipolar and point towards the current sheet (Mozer et al. 2002, Wygant et al. 2005, Borg et al. 2005). This is illustrated in Figure 1. A strong electric field often accompanies observations of IDRs (Eastwood et al., 2010). While very useful, this visual identification can be tedious and prone to error. For example, there are cases where establishing an association of oppositely-directed ion flows and Hall fields is problematic (Nagai et al. 2013). Clearly, a significant advance would be achieved if we were to speed up this process, eliminate some uncertainties, and improve consistency. This is what we intend to do here. We develop a numerical algorithm to search for IDRs. The process has three stages; only those events which satisfy all three are considered to be \textit{bona fide} IDRs. We then apply this to observations by the Magnetospheric Multiscale (MMS) spacecraft during the 2017 tail season from May 1 to September 30, spanning the near tail from dawn to dusk, 4:47 to 18:00 MLT, at low magnetic latitudes, in 53 orbits. The statistical properties of the IDRs that the code finds, such as typical Hall magnetic and electric fields, will be compared with those of Cluster as reported by Eastwood et al. (2010). \begin{figure}[h!] \centering \includegraphics[width=0.9\textwidth]{X-line_schematic.pdf} \caption{An ideal picture of symmetric Hall reconnection showing the quadrupolar out-of-plane ($\hat{y}$ in this view) idealized magnetic fields and inflow and outflow regions in and surrounding the ion diffusion region.} \label{fig:xline} \end{figure} The geomagnetic tail is a preferred region for testing this algorithm. 
The reconnecting fields are approximately anti-parallel and, further, the field and plasma properties in the lobes above and below the current sheet are typically about the same, so that reconnection proceeds under symmetric conditions. This mitigates complications arising from the presence of a guide field and from asymmetries in the inflow regions, which can severely alter the Hall structures, such as the quadrupolar B-field, that one should expect (see e.g. Eastwood et al., 2013; Pritchett and Mozer, 2008; Muzamil et al., 2014, and references therein). (See, however, Zhou et al. (2016), who describe quadrupolar B fields under asymmetric conditions.) Further, use of geocentric solar magnetospheric (GSM) coordinates is often good enough that we do not need to transform to boundary-normal coordinates. Only rarely does reconnection on the dayside approach symmetric conditions (see Mozer et al., 2002). The layout of the paper is as follows. In section 2 we describe the data we shall use, and in section 3 we detail the methodology and procedure. Section 4 gives two case studies, section 5 gives an overview of the output of the code as applied to the 2017 MMS tail season, section 6 shows statistical results over the ensemble of IDRs, and section 7 gives a discussion, in which we compare various aspects of our findings with the works of Eastwood et al. (2010) and Nagai et al. (2005). We shall also compare our findings with the predictions on the occurrence frequency of IDRs postulated by Genestreti et al. (2014). In section 8 we draw our conclusions. \section{Instruments} \label{sec:instr} The method we describe utilizes magnetic and electric field and ion velocity moment data collected by the MMS spacecraft and published on the MMS science gateway website (https://lasp.colorado.edu/mms/sdc/public/). 
These data were acquired by the FIELDS instrument suite in the case of the magnetic and electric fields, and by the Fast Plasma Investigation (FPI) for the ion moments. FIELDS consists of three electric field and three magnetic field instruments (Torbert et al., 2016). The two pairs of spin-plane double probes (SDP) and the axial double probes (ADP) allow MMS to make direct measurements of the full 3D electric field, ranging from DC up to 100 kHz (Lindqvist et al., 2016; Ergun et al., 2016). These data are combined into the EDP data product for 3D vector $\vec{E}$ measurements. Version 3.0.0 of the level 2 EDP data products was used throughout this study. The analog and digital fluxgate magnetometers (AFG/DFG) measure magnetic fields in the frequency range from DC up to 64 Hz (Russell et al., 2016). The higher frequency range, from 1 Hz up to 6 kHz, is covered by a search-coil magnetometer (SCM; Le Contel et al., 2016). Level 2 fluxgate magnetometer (FGM) data of version 5.86 and higher (the highest available as of submission) were used throughout this study. FPI provides MMS with high cadence electron and ion distributions in the energy/charge range of 10 eV/q up to 30 keV/q. Each MMS satellite is equipped with eight FPI spectrometers which, combined with electrostatic control of the field-of-view, allow FPI to sample in burst mode the full electron and ion distributions with a time resolution of 30 ms and 150 ms, respectively (Pollock et al. 2016). It is important to note that core ion distributions can extend beyond the energy range of FPI, meaning that actual ion bulk velocities may be higher than what is calculated using FPI data. Identification of IDRs in our algorithm uses fast survey data. Level 2 FPI ion moments of version 3.3.0 were used throughout this study. 
\section{Methodology and Procedure} Data collected by the MMS fleet of spacecraft were analyzed for the 5-month period from 01 May 2017 through 30 September 2017. During this time the apogee of the spacecraft orbits reached a typical distance of $\sim 25\,R_{E}$ and swept from $\sim$5 MLT to $\sim$19 MLT. Orbital tracks are given in Figure \ref{fig:orbit}. Central to us is the time the MMS spacecraft spend in the plasma sheet. We adopt a criterion based on plasma density ($n \geq 0.05\ \mathrm{cm^{-3}}$) which is taken as indicative of the plasma sheet (Baumjohann 1993, Raj et al. 2002). Only time segments which meet this condition are considered for further analysis. Dwell times for the spacecraft in the plasma sheet are represented as part of Figure \ref{fig:dwell-xy}. \begin{figure}[h!] \centering \includegraphics[width=\textwidth]{ORBIT_1718_001.pdf} \caption{MMS coverage of the near-tail for the 5-month period of our study. GSM $Z$ was within $-7$ to $7\,R_{E}$. This shows a fairly complete coverage of the near-tail region during the period of our study.} \label{fig:orbit} \end{figure} MMS data were analyzed in three-minute segments with a 1-minute overlap between adjacent segments. Each segment determined to be in the plasma sheet was analyzed using a maximum of five search criteria: (i) ion flow reversal in the GSM$_{x}$ (Earth-Sun) direction; (ii) magnetic field reversal in the GSM$_{z}$ direction; (iii) sign correlation of the field reversal with the flow reversal; (iv) presence of Hall electric and Hall magnetic fields; and (v) magnitude of the measured electric field. These criteria were applied to each three-minute segment in sequential stages. Stage 1 consisted of searching for correlated reversals of the $\hat{x}$-component of the ion flow and the $\hat{z}$-component of the magnetic field within the segment. Flow reversals are further required to be of at least $200 \frac{km}{s}$ in magnitude and B$_{z}$ reversals are required to be of at least $2\,nT$ in magnitude, centered about zero. 
Correlation of these reversals is determined by requiring that V$_{x}$ turn from positive (negative) to negative (positive) within 90 seconds of B$_{z}$ turning from positive (negative) to negative (positive). Segments which satisfy these criteria are then analyzed using the criteria for stage 2. Figure \ref{fig:S1-plot} shows an example of a segment which passes the Stage 1 criteria, where $B_{z}$ and $V_{x}$ are represented by red and blue traces, respectively. During these 2 minutes the flow changes from $V_{x} < -400\frac{km}{s}$ to $\sim 200 \frac{km}{s}$ (i.e. tailward to sunward) and, at practically the same time, $B_{z}$ goes from $-7 nT$ to $4 nT$ (i.e. southward to northward-pointing). This is consistent with a tailward motion of the X-line (Øieroset et al. 2001, Runov et al. 2003, Borg et al. 2005). \begin{figure}[h!] \centering \includegraphics[width=\textwidth]{Fig3_2017-07-26_0701_S1_r1.pdf} \caption{Components of the ion velocity moment and magnetic field measured by MMS1 on 26 July, 2017. The correlated reversal of the ion velocity in the GSM$_{x}$ direction (blue) and the normal (GSM$_{z}$) component of the magnetic field (red) as observed by MMS1. Given the small separation in time between the velocity and magnetic field reversals and the magnitudes of each both before and after the reversals, this example satisfies the algorithmic requirements for Stage 1.} \label{fig:S1-plot} \end{figure} Stage 2 consists of a search for Hall electric and magnetic fields within the segment. Hall electric fields are identified as a $\hat{z}$-component of the electric field with a magnitude $\geq 3\frac{mV}{m}$ whose sign is anti-correlated with that of the $\hat{x}$-component of the magnetic field, both in GSM coordinates. Figure \ref{fig:S2-plot} shows $B_{x}$ (blue trace) and $E_{z}$ (red trace) for the same time period as in Figure \ref{fig:S1-plot}. 
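The Stage 1 reversal pairing and the Stage 2 Hall electric field criterion just described can be sketched on uniformly sampled series as follows. This is a minimal illustration, not the IDL procedure actually used in this study; in particular, reading the $200 \frac{km}{s}$ and $2\,nT$ thresholds as total swings about zero, and scoring the Hall E$_{z}$ test as a fraction of strong points, are our own interpretations.

```python
import numpy as np

def reversal(t, x, swing):
    """First zero crossing of x whose excursions reach at least
    +/- swing/2 on either side; returns (time, sense) or None.
    sense = +1 for negative -> positive, -1 for the opposite."""
    s = np.sign(x)
    for i in np.where(s[:-1] * s[1:] < 0)[0]:
        lo, hi = x[: i + 1], x[i + 1:]
        if x[i] < 0 and lo.min() <= -swing / 2 and hi.max() >= swing / 2:
            return t[i], +1
        if x[i] > 0 and lo.max() >= swing / 2 and hi.min() <= -swing / 2:
            return t[i], -1
    return None

def stage1(t, vx, bz, v_swing=200.0, b_swing=2.0, dt_max=90.0):
    """Correlated V_x and B_z reversals of the same sense within dt_max s."""
    rv = reversal(t, vx, v_swing)
    rb = reversal(t, bz, b_swing)
    if rv is None or rb is None:
        return False
    return bool(rv[1] == rb[1] and abs(rv[0] - rb[0]) <= dt_max)

def hall_ez_fraction(ez, bx, e_min=3.0):
    """Fraction of strong E_z points (|E_z| >= e_min mV/m) whose sign is
    anti-correlated with B_x, as expected for the Hall electric field."""
    strong = np.abs(ez) >= e_min
    if not strong.any():
        return 0.0
    return float(np.mean(np.sign(ez[strong]) == -np.sign(bx[strong])))

# Synthetic 3-minute segment mimicking a tailward-moving X-line:
t = np.arange(0.0, 180.0, 1.0)
vx = 300.0 * np.tanh((t - 90.5) / 10.0)   # km/s, tailward -> earthward
bz = 5.0 * np.tanh((t - 80.5) / 10.0)     # nT, southward -> northward
bx = np.full_like(t, -8.0)                # nT, south of the current sheet
ez = np.full_like(t, 5.0)                 # mV/m, pointing at the sheet
```

On this synthetic segment `stage1(t, vx, bz)` returns `True` and `hall_ez_fraction(ez, bx)` returns 1.0; a real pipeline would add despiking, the Stage 3 $|\vec{E}|$ test, and the Hall B$_{y}$ quadrant bookkeeping described below.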
$B_{x}$ is negative throughout, indicating the spacecraft is south of the center of the current sheet. Except for brief excursions, $E_{z}$, which is normal to the mid-tail current sheet, is positive. It is thus directed towards the current sheet, as expected for the Hall electric field. Hall magnetic fields are identified as GSM$_{y}$ components of the magnetic field of magnitude $\geq 1\,nT$ with sign equal to $\operatorname{Sign}(B_{x})\times\operatorname{Sign}(V_{x})$, as shown in Figure \ref{fig:xline}. At least two of the four quadrants implied by this coordinate system, marked with Roman numerals in Figure \ref{fig:xline}, must be populated by B$_{y}$ measurements of the correct sign and magnitude for them to be accepted as evidence of the presence of Hall $\vec{B}$-fields. Figure \ref{fig:S3-plot} illustrates this. The spacecraft is sampling below the neutral sheet (negative $B_{x}$). Tailward of the X-line ($V_{x}$ negative) the $B_{y}$ component is positive (blue bubbles) and earthward of the X-line it is negative (red bubbles). There is a tendency for the magnitude of the Hall $\vec{B}$ field to be greater close to the neutral sheet and the center of the reconnection region. \begin{figure}[h!] \centering \includegraphics[width=\textwidth]{Fig4_2017-07-26_0701_S2e_S3_r1.pdf} \caption{Components of the electric and magnetic fields measured by MMS1 on 26 July, 2017. The magnitude of the electric field (top panel) is strong in the neighborhood (within 40s) of the correlated magnetic field and flow reversals as shown above. 
We also note the strong GSM$_{z}$ electric field (in red, bottom panel), which is sign anti-correlated with the GSM$_{x}$ magnetic field component (in blue, bottom panel), suggesting the presence of the Hall electric field.} \label{fig:S2-plot} \end{figure} Stage 3 consists of the detection of a strong DC electric field of magnitude $\geq 10 \frac{mV}{m}$ in the neighborhood of the correlated B$_{z}$ and V$_{x}$ reversal, after good evidence of Hall electric and magnetic fields has been found. To limit the influence of strong wave activity in this stage, the electric field data is low-pass filtered with a critical frequency of $1\,Hz$ prior to analysis. Figure \ref{fig:S2-plot} shows an example of this for the same event as presented in Figure \ref{fig:S1-plot}, where |$\vec{E}$| reached values greater than 40 $\frac{mV}{m}$ shortly after the correlated reversals at 0701UT. If the segment satisfies these criteria, having already satisfied all previous criteria, then it is considered to contain a candidate IDR. \begin{figure}[h!] \centering \includegraphics[width=\textwidth]{Fig5_2017-07-26_0701-HallB_MMS1-4_r1.png} \caption{Bubbles represent the out-of-plane component (y) of the magnetic field, where blue circles are positive values and red are negative, with area proportional to the square root of the magnitude of B$_{y}$ $\in$ $[0.1, 5.6]\,nT$, after Borg et al. (2005). The $\hat{x}$ component of the measured ion velocity is the abscissa of the plot, while the ordinate is the $\hat{x}$ component of the magnetic field, providing a proxy coordinate system centered on the X-line. 
Two quadrants of the quadrupolar Hall magnetic field are visible here, with moderate cross-contamination near the $V_{i}=0$ point.} \label{fig:S3-plot} \end{figure} \subsection{Algorithm Implementation} The algorithm was implemented as an IDL procedure, attached in the supplementary material for this article and available from a github repository (https://github.com/unh-mms-rogers/IDR\_tail\_search), leveraging the SPEDAS library for loading and version control of all MMS data files. The procedure checks for each of the stages described above by way of conditional Boolean statements applied to each three-minute time series data segment. Results are returned in the form of a text file which lists time segments that pass stage 1 criteria, with markers indicating whether the segment has also passed stage 2, or both stages 2 and 3. Segments which have passed all three automated stages are then reviewed by a human researcher; IDRs are only reported and confirmed after human review. All electric field data were processed with a low-pass digital filter with a critical frequency of 1 Hz prior to application of the algorithm, in order to reduce false positives due to wave activity during the stage 3 analysis. Fast survey data were used for this analysis; further investigation can and should be done using burst-mode data where available. FPI measurements were used for all reported IDRs and are the default source of particle moments. The implementation of the above algorithm automatically falls back on HPCA proton moments when FPI moments are not available, although that was not necessary for any of the selected events. Hall magnetic fields are tested by quadrant (see Figure \ref{fig:xline} for quadrant identification). Each B$_{y}$ data point is checked for the minimum magnitude and correct polarity required to satisfy the expected Hall magnetic field parameters in that quadrant. 
After checking all B$_{y}$ data in the 3-minute time segment under analysis, the ratio of data points which satisfy the expected Hall parameters to the total number of data points in the segment is calculated. This is compared to a minimum threshold ratio entered as a user-defined parameter at runtime. If the ratio of `good' B$_{y}$ data points exceeds the threshold, then the time segment is marked as having signatures of a likely Hall magnetic field. The ratio used for this study was 0.10. Final examination of code-identified IDR candidates is performed by eye using survey plots similar to those shown in Figure \ref{fig:s3a-example} and Figure \ref{fig:s3b-example}. Human review is focused on checking for clear flow and magnetic field reversals with a minimum of erratic or noisy behavior which may call into question the timing or certainty of the reversals. The electric field in the neighborhood of the reversal is also reviewed to ensure that strong wave activity does not mimic a DC field of sufficient size to pass Stage 3 even after the low-pass filter has been applied. Hall electric and magnetic fields are reviewed for non-Hall behavior near the magnetic field and flow reversal. Hall magnetic fields are checked using bubble plots similar to Figure \ref{fig:S3-plot}. Non-ideal behavior in any of the three stages can still pass the checks in the algorithm as currently implemented and may represent other geospace events which are not ion diffusion regions (see Discussion). These events are currently removed from consideration after review of the candidates by a human reviewer. Modifications to the algorithm and the code which applies it are under development to further reduce the need for and extent of human review of IDR candidates, as well as to possibly automate confident identification of dipolarization fronts, bursty bulk flows, non-active flow reversals, etc. for later study. \section{Examples} We now offer two case studies to illustrate the working of the algorithm. 
\subsection{Case study 1: 0729UT July 26, 2017} \begin{figure}[h!] \centering \includegraphics[width=35pc]{Paper_2017-07-26_0728_S3ex_v3.pdf} \caption{A strong reversal in B$_{z}$ (vertical line) is seen $\sim$3s before a correlated reversal in V$_{x}$. The 30s surrounding these reversals also have strong electric fields, frequently much greater than $20\frac{mV}{m}$. Other regions with strong electric field indicators of reconnection occur both before and after the correlated reversals (07:28:12, 07:30:00). Hall electric fields were also measured in each of these regions, and Hall magnetic fields are seen throughout the neighborhood surrounding the reversals. Based on these indicators, the time period during which the observatory is within the diffusion region is marked by the colored label at the top of the Figure.} \label{fig:s3a-example} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=\textwidth]{Fig7_2017-07-26_0728-HallB_MMS1-4_r1.png} \caption{In this example all four quadrants of the Hall magnetic field (GSM B$_{y}$) are detected by the MMS observatories. The time period of data points used in this plot is the same as for the time series data shown in Figure 5. Colors are the same as in Figure 4, with the area of each circle again proportional to the square root of the magnitude of B$_{y}$.} \label{fig:s3a-bubble} \end{figure} Figure \ref{fig:s3a-example} shows three minutes surrounding an ion diffusion region observed by the MMS4 observatory, as identified by the algorithm described in the previous section. This event was previously studied by Ergun et al. (2018), whose emphasis was on the turbulent energy transfer processes occurring in the period between 07:21:13UT and 07:38:42UT. The location of MMS was (-23.0, 8.94, 2.20)$R_{E}$ in GSM coordinates. The fluxgate magnetometer data in the top panel show a rapidly changing magnetic field with no fewer than 10 current sheet crossings (reversals in the polarity of B$_{x}$) in the period shown. 
The z-component of the magnetic field also changes sign several times, with a particular reversal of interest from southward to northward at 07:28:46.7UT, marked with a vertical guideline. In the second panel, ion speeds up to $400 \frac{km}{s}$ are recorded, with a transition from tailward to earthward flow direction at 07:28:48UT. Filtered electric field data in GSM coordinates are shown, with additional panels dedicated to the z-component of $\vec{E}$ and to the magnitude of the electric field. Strong DC electric fields ($\geq 10\frac{mV}{m}$) are measured frequently during the minute surrounding the correlated ion flow and B$_{z}$ reversal, reaching $\sim$40 mV/m during the reversal. Comparison of E$_{z}$ and B$_{x}$ shows the conditions expected for Hall electric fields in this region (marked in yellow). Figure \ref{fig:s3a-bubble} shows the y-component (out-of-plane) of the magnetic field ordered by the x-components of both the magnetic field and the ion velocity. All four quadrants of the Hall magnetic field are observed by the combined measurements of the four MMS observatories, although quadrant II, as described in Figure 1, has only sparse coverage. The out-of-plane magnetic fields are stronger tailward of the X-line, both north and south of the neutral sheet; a trend typical of events in this study. \subsection{Case study 2: 0749UT July 17, 2017} \begin{figure}[h!] \centering \includegraphics[width=\textwidth]{Paper_2017-07-17_0749_S3ex_v3.pdf} \caption{A moderate reversal in B$_{z}$ from $-11nT$ to $6nT$ is the last of a rapid series of neutral sheet crossings, with an associated ion flow reversal approximately 10s later. Moderate to strong Hall electric fields are seen immediately surrounding the reversal.} \label{fig:s3b-example} \end{figure} \begin{figure}[h!] 
\centering \includegraphics[width=\textwidth]{Fig9_2017-07-17_0749-HallB_MMS1-4_r1.png} \caption{In this example, showing the same time period as in Figure \ref{fig:s3b-example}, three quadrants of the Hall magnetic field were observed by the MMS fleet in the minute surrounding the B$_{z}$ reversal. The organization of the measured Hall magnetic fields implies a small structure velocity for the X-line and the surrounding diffusion region.} \label{fig:s3b-bubble} \end{figure} Figure \ref{fig:s3b-example} shows time series data for three minutes of observations by the MMS3 observatory, roughly centered on an ion diffusion region as identified by our proposed algorithm. The location of MMS was (-18.1, 7.30, 0.66)$R_{E}$ in GSM coordinates, at 22.5 MLT and a distance of 19.5 $R_{E}$, therefore also on the dusk side. The magnetic field data (top panel) show seven suspected current sheet crossings during this period, the final crossing occurring shortly before the zero-crossing of interest in B$_{z}$ at 07:49:06.4UT. Ion velocity moment data are given in the second panel and show strong tailward flows of approximately $400\frac{km}{s}$ throughout the region of current sheet crossings before reversing to weaker earthward flows of about $200\frac{km}{s}$ approximately ten seconds after the correlated B$_{z}$ reversal. Filtered electric field data are provided, showing moderate-strength electric fields in the neighborhood of and in the minute following the correlated magnetic field and ion flow reversals. Comparison of the z-component of the electric field with the x-component of the magnetic field indicates the presence of Hall electric fields in the seconds before and the minute following the correlated magnetic field and ion flow reversals. Figure \ref{fig:s3b-bubble} shows the y-component (out-of-plane component) of the magnetic field in terms of B$_{x}$ and V$_{i,x}$, where three quadrants of the Hall magnetic field are clearly evident. 
\section{Output of the 3-step selection scheme} \begin{table}[h!] \normalsize \centering \begin{tabular}{|c|c|} \hline Stage Passed & \# of Events \\ \hline \hline Stage 1 & 148 \\ \hline Stage 2 & 37 \\ \hline Stage 3 & 17 \\ \hline \end{tabular} \caption{A table listing the stage of analysis and the number of events in the range from May 01, 2017 to September 30, 2017 which passed each stage.} \label{tab:selection-stats} \end{table} \begin{table}[h!] \normalsize \centering \begin{tabular}{|c|c|c|c|c|c|} \hline Event Label & yyyy-mm-dd/tttt UT & X-Line Motion & GSM$_{X}$(R$_{E}$) & GSM$_{Y}$(R$_{E}$) & GSM$_{Z}$(R$_{E}$) \\ \hline A & 2017-05-28/0358 & Tailward & -19.3 & -11.8 & 0.78\\ \hline B & 2017-07-03/0527 & Tailward & -17.6 & 3.32 & 1.75\\ \hline C & 2017-07-06/1535 & Tailward & -24.1 & 1.41 & 4.44 \\ \hline D & 2017-07-06/1546 & Tailward & -24.2 & 1.33 & 4.48 \\ \hline E$^{a}$ & 2017-07-11/2234 & Tailward & -21.5 & 4.12 & 3.78 \\ \hline F & 2017-07-17/0749 & Tailward & -18.1 & 7.30 & 0.66 \\ \hline G & 2017-07-26/0003 & Tailward & -20.7 & 9.05 & 3.46 \\ \hline H & 2017-07-26/0701 & Tailward & -22.9 & 8.97 & 2.27 \\ \hline I$^{b}$ & 2017-07-26/0728 & Tailward & -23.0 & 8.94 & 2.20 \\ \hline J & 2017-08-06/0514 & Tailward & -18.9 & 13.0 & 0.37 \\ \hline K & 2017-08-07/1538 & Tailward & -16.4 & 4.38 & 3.77 \\ \hline L & 2017-08-23/1753 & Earthward & -18.8 & 16.1 & 1.11 \\ \hline \end{tabular} \caption{The 12 IDRs identified by the algorithm and confirmed by human review. Event E (indicated by an 'a' superscript) has been previously reported by Torbert et al. (2018). Event I (indicated by a 'b' superscript) has been previously reported by Ergun et al. (2018).} \label{tab:event-list} \end{table} Table \ref{tab:selection-stats} shows the total number of events which pass each stage. Events which pass stage one but not subsequent stages include examples of non-active flow reversals and other phenomena.
Events which also passed stage 2 exhibit both clear correlated B$_{z}$ and V$_{x}$ reversals and good evidence of Hall electric and magnetic fields, but only weak electric field magnitudes overall. Examples of these can be found in the discussion. A table of all time segments in our study which passed stage 1, with markers indicating whether the segment also passed stage 2 or stage 3, is given in the supplementary materials. The twelve IDRs identified by the algorithm described in the previous section and confirmed on review are listed in table \ref{tab:event-list}. Epochs given in column 2 refer to the approximate center of the identified IDR. The direction of X-line motion is determined by the observed direction of ion flow reversal, with earthward motion indicated by an initial earthward ion flow converting to a tailward flow. The order of encountered ion flows is inverted for tailward motion of the X-line (Eastwood et al., 2010 and references therein). \begin{table}[h!] \centering \begin{tabular}{|c|c|c|c|c|} \hline Event Label & yyyy-mm-dd/tttt UT & GSM$_{X}$(R$_{E}$) & GSM$_{Y}$(R$_{E}$) & GSM$_{Z}$(R$_{E}$) \\ \hline N1 & 2017-07-03/0546 & -17.8 & 3.28 & 1.78\\ \hline N2 & 2017-07-16/0638 & -15.6 & -2.04 & 5.51\\ \hline N3 & 2017-07-23/0753 & -22.2 & 8.50 & 1.77\\ \hline N4 & 2017-08-04/0923 & -21.6 & 8.44 & 2.59\\ \hline N5 & 2017-08-31/1153 & -13.0 & 18.7 & -5.77\\ \hline \end{tabular} \caption{Five events which satisfied all three stages of the algorithm described in this paper but could not be confirmed to be Ion Diffusion Regions upon human review.} \label{tab:weak-list} \end{table} Five other events which passed all three stages of the automated analysis, but which are not reported here as IDRs, display periods within a 3-minute selection segment which might indicate an IDR, but which lack consistency. These events are tabulated in table \ref{tab:weak-list} and discussed in a following section.
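The staged filter described above can be sketched in code. The function names, the reversal-correlation window, and the anti-correlation threshold below are illustrative assumptions; only the $|\vec{E}| \geq 10\frac{mV}{m}$ Stage-3 requirement is taken directly from the text. This is a sketch of the logic, not the IDL implementation referenced later.

```python
import numpy as np

def stage1_correlated_reversal(t, bz, vix, window=20.0):
    """Stage 1 sketch: a sign reversal of Bz accompanied, within `window`
    seconds (an assumed value), by a sign reversal of the x-component of
    the ion bulk flow. Returns the Bz zero-crossing time, or None."""
    bz_cross = np.flatnonzero(np.diff(np.sign(bz)) != 0)
    v_cross = np.flatnonzero(np.diff(np.sign(vix)) != 0)
    for ib in bz_cross:
        if any(abs(t[iv] - t[ib]) <= window for iv in v_cross):
            return t[ib]
    return None

def stage2_hall_efield(ez, bx, min_anticorr=0.3):
    """Stage 2 sketch (one half of the stated criteria): Hall electric
    field signature, i.e. Ez anti-correlated with Bx. The correlation
    threshold is an assumption."""
    return np.corrcoef(ez, bx)[0, 1] < -min_anticorr

def stage3_strong_field(e_mag, e_min=10.0):
    """Stage 3 sketch: |E| >= 10 mV/m somewhere in the neighborhood of
    the reversal (the requirement stated in the text)."""
    return float(np.max(e_mag)) >= e_min
```

A candidate segment would be kept only if all three checks pass, mirroring the funnel from 148 to 37 to 17 events in table form above.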
\section{Statistical Results} Maximum positive and maximum negative values of the X-component of the ion bulk velocities for each of the twelve reported events are shown in Figure \ref{fig:S1-stats} (top panel). These values represent the extrema measured across all four spacecraft of the MMS fleet during the 3-minute time step approximately centered on each event. No attempt has been made in these measurements to correct for structure motion relative to the spacecraft, thus asymmetries in the minimum and maximum may indicate tailward or earthward motion of the X-line relative to the spacecraft. Any attempt to infer the magnitude of the X-line velocity using these asymmetries is hampered by the lack of a boundary-normal coordinate system. Maximum negative (maximum pointing in a southward direction), maximum positive (maximum northward), and average values for the normal magnetic field are given in Figure \ref{fig:S1-stats}b (bottom panel). \begin{figure}[h!] \centering \includegraphics[width=0.8\linewidth]{Vx_Bz_fig10_1r.pdf} \caption{\underline{Stage 1 Criteria:} The above plots show the statistical properties of $V_{i,X}$ and $B_{Z}$ components in the neighborhood around the reconnection site. Asymmetries in the flow offsets may indicate significant structure velocity. Similar asymmetries or a non-zero average value of $B_{Z}$ are likely artefacts of the coordinate system not being boundary normal to the structure. Comparison data from the Eastwood et al. (2010) study were not available.} \label{fig:S1-stats} \end{figure} The upper panel in Figure \ref{fig:S2-stats} shows the maximum and average values of the electric field magnitude in the neighborhood including the diffusion region for each IDR. As seen in Figures \ref{fig:s3a-example} and \ref{fig:s3b-example}, the electric field strength increases significantly when approaching the inner diffusion region and drops off rapidly further into the outer diffusion region.
The large difference between maximum and average values of $|E|$ suggests the existence of strong electric fields only very near the thin current sheet, with more moderate values elsewhere. The lower panel in Figure \ref{fig:S2-stats} shows the largest and average values of the GSM$_{z}$ electric field (E$_{Z}$) for both positive and negative values. We use the GSM$_{z}$ electric field as an approximation of the Hall electric field within the diffusion region, i.e. normal to the current sheet. The dominating source of asymmetries is likely the path of the observatory through the diffusion region. For the majority of the events reported, the distance to the current sheet, as approximated by the magnitude of B$_{X}$, was not uniform for observations on either side during a given event. Extrema and average values for all events reported by Eastwood, et al. (2010) are also included for comparison and are discussed further below. \begin{figure}[h!] \centering \includegraphics[width=0.8\linewidth]{E-mag_Ez_fig11_1r.pdf} \caption{\underline{Stage 2 and Electric Field Criteria:} The upper plot, a), shows statistical properties of the electric field magnitude for identified IDRs in this study. Data were not available from the Eastwood, et al. (2010) study to compare |$\vec{E}$|. |E| $\geq$ 10$\frac{mV}{m}$ in the neighborhood surrounding the magnetic field and ion flow reversal is the requirement for passing Stage 3. Below, in b), are statistical properties of the Hall electric field (E$_{z}$) with comparison data from Eastwood, et al. (2010). Detection of Hall electric fields (E$_{z}$ sign anti-correlated with B$_{x}$) is one half of the Stage 2 criteria for IDR candidate identification.} \label{fig:S2-stats} \end{figure} Extreme and average values for all four quadrants of the Hall magnetic field, approximated by the out-of-plane magnetic field (B$_{Y}$), are given in Figure \ref{fig:S3-stats}. These measurements are taken from across the entire selection window of 3 minutes.
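The per-quadrant reduction behind these statistics can be sketched directly. The quadrant keying on the signs of V$_{i,x}$ and B$_{x}$ below follows the convention implied elsewhere in the text (quadrants 2 and 3 tailward, 1 and 4 earthward, with positive B$_{y}$ expected in 1 and 3), but the exact labeling used to produce the figure is an assumption of this sketch.

```python
import numpy as np

def hall_quadrant_stats(bx, vix, by):
    """Extrema and average of the out-of-plane field By, separated into
    the four Hall quadrants. Quadrant membership (an assumed convention):
    I  earthward/north, II tailward/north,
    III tailward/south, IV earthward/south,
    keyed on the signs of the ion flow Vx and of Bx."""
    bx, vix, by = (np.asarray(a, dtype=float) for a in (bx, vix, by))
    quads = {
        "I":   (vix > 0) & (bx > 0),
        "II":  (vix < 0) & (bx > 0),
        "III": (vix < 0) & (bx < 0),
        "IV":  (vix > 0) & (bx < 0),
    }
    out = {}
    for name, mask in quads.items():
        vals = by[mask]
        # None marks a quadrant the spacecraft never sampled
        out[name] = (vals.max(), vals.min(), vals.mean()) if vals.size else None
    return out
```

Quadrants with no samples (as happened for some events in Figure form) simply come back empty rather than contributing spurious zeros.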
As with the Hall electric fields, the relative magnitude of Hall magnetic field seen in any given quadrant is dependent to a significant degree on the path of the observatory through the diffusion region. Despite the small average separation between spacecraft in the fleet during this mission phase, we saw three and sometimes all four of the Hall magnetic field quadrants. Extrema and average values for all events reported by Eastwood, et al. (2010) are also included for comparison. The comparison data from Eastwood, et al.(2010) for E$_{z}$ and out-of-plane magnetic fields shows the maximum positive and maximum negative values across all 18 IDRs which are reported there, while the average positive and negative values are the averages of average values reported for each event. Average values do not include events where the relevant field was not detected. \begin{figure}[h!] \centering \includegraphics[width=0.8\linewidth]{By_4qs_max_avg_fig12_1r.pdf} \caption{\underline{Out-Of-Plane (Hall) Magnetic Fields:} Detection of the quadrupolar Hall magnetic field is one half of the third stage in our IDR detection algorithm. Properties of the out-of-plane magnetic field are divided into the four regions of the Hall model. Not all quadrants were observed by MMS for some events. Hall magnetic field data from the Eastwood et al.(2010) study are included for comparison.} \label{fig:S3-stats} \end{figure} \begin{figure}[ht!] \centering \includegraphics[width=0.8\linewidth]{2017_PS_dwell_Hrs_wEvents_MLT.pdf} \caption{Locations of identified and confirmed IDRs in circled stars are plotted over the number of hours MMS spacecraft spent in the magnetotail. 
The strong dusk-side prevalence of events is consistent with previous observations and predictions.} \label{fig:dwell-xy} \end{figure} Figure \ref{fig:dwell-xy}a) shows the distribution of IDRs as listed in table \ref{tab:event-list} over an underlay of the dwell time of the MMS fleet in the near-tail plasma sheet as determined by reported flight ephemeris and FPI ion density. Figure \ref{fig:dwell-xy}b) shows the total hours spent in the plasma sheet by MMS as a function of position in MLT. The event distribution shows a clear preference for the dusk region of the magnetotail (11 of 12 events), far in excess of the dwell-time split of 56.5\% dusk vs 43.5\% dawn. The lone exception to this trend in our observations occurs on May 28, 2017 during a period when there was significant geomagnetic activity with an AE index in excess of 1000nT. \section{Discussion} \label{sec:disc} \subsection{Mechanics and Limitations to the Algorithm} The criteria employed in this algorithm are, in most respects, highly conservative when attempting to identify IDRs. Our adherence to only considering those IDRs which are well aligned with the GSM coordinate system may eliminate many otherwise strong candidates. Similarly, requiring the detection of an ion flow reversal is highly restrictive and eliminates candidates similar to the diffusion region encountered by Polar as described by Mozer, Bale, and Phan (2002), which would not have passed the first stage of analysis since the traversal was normal to the current sheet and on one side of the X-line. More qualitative criteria, such as requiring smooth and rapid reversals of both ion flow and the normal component of the magnetic field, also remove many events and conditions, such as a bifurcated flow reversal like that described by Runov et al. (2005), which might otherwise be argued to represent passage of MMS through the ion diffusion region.
Our requirement of a strong $|\vec{E}| \ge 10\frac{mV}{m}$ also serves to limit detection of weaker or secondary reconnection diffusion regions such as those reported by Huang, et al. (2018) and Zhou et al. (2019). These events can display weak but clear Hall electric and magnetic fields in the neighborhood surrounding a clear ion flow and normal magnetic field reversal, but are accompanied by only a small increase in the magnitude of the electric field (Zhou et al. 2019). Such a weak electric field calls into question the existence of a thin current sheet in the region, a common feature of the canonical parallel reconnection model, but may still provide important or at least interesting examples. Our criteria, both in terms of the electric field magnitude and the quality of the correlated ion flow and normal magnetic field reversal, were made intentionally restrictive so as to, hopefully, provide a collection of examples of IDRs which can be considered such beyond a reasonable doubt. One noteworthy point regarding those flow reversals which do not display significant Hall fields is how common they are. There is a factor-of-four drop from the number of events satisfying Stage 1 to those also satisfying Stage 2. Almost half of the events which satisfied Stage 1 were identified as non-active flow reversals similar to those reported in Geotail observations by Nagai et al. (2013). An example is given in Figure \ref{fig:s1-example}. Here we see a clear correlated reversal from earthward ion flow and northward magnetic field to a tailward flow and southward-pointing magnetic field. However, strong electric fields are absent from the region, and no significant evidence of Hall electric or magnetic fields is found. The missing elements, combined with a steady, strong B$_{x}$, suggest passage of the observatory near to an X-line but at a distance from the current sheet such that no diffusion region was detected. \begin{figure}[ht!]
\centering \includegraphics[width=\textwidth]{NAFR_2017-07-24_0700_MMS1.pdf} \caption{This shows an ion flow reversal from earthward to tailward preceded by a correlated reversal in the GSM$_{z}$ component of the magnetic field $\approx$ 10s before, thus satisfying stage 1 criteria. However other indicators of a possible IDR (Hall fields, strong |E|) are missing. This is classified as a Non-Active Flow Reversal.} \label{fig:s1-example} \end{figure} Some events which satisfy the criteria from all three stages are still questionable examples of IDRs. An example of this is given in table \ref{tab:weak-list} as event N3 and shown in Figure \ref{fig:S3_noIDR}. Here a strong ion flow reverses slowly from -315 to 170 $\frac{km}{s}$ in a moderately active magnetic field. There are numerous instances during this reversal of B$_{z}$ crossing zero from negative to positive, but none appear to line up well with the flow reversal, and the B$_{Z}$ reversal which is both closest in time and most prominent occurs $\sim 50s$ before the flow reversal. The Hall electric and magnetic fields are, likewise, indicated in places but are occasionally contradicted. Despite strong electric fields consistent with a Hall electric field at the center of the bifurcated flow reversal, the electric fields elsewhere are less indicative of an IDR. The flow reversal is strongly bifurcated with V$_{i} \sim 0$ for approximately one minute between distinct tailward and earthward flows. The strong, consistent magnitude of B$_{x}$ combined with the fairly weak plasma density raise the possibility that the observations were made in or near the plasma sheet boundary layer and away from the possible reconnection site. For these and similar reasons this event and four others which passed all three stages of the automated detection algorithm are not reported as IDRs. \begin{figure}[ht!] \centering \includegraphics[width=0.9\textwidth]{N-IDR_2017-07-23_0754_MMS1.pdf} \caption{A survey plot of event N3.
This is an example of an interval which satisfied the criteria of all 3 stages, but which on visual inspection was not confirmed as an IDR. Note the intermittent presence of Hall electric and magnetic fields and uncertain reversal in B$_{z}$. Due to these conditions, and despite the strong, if bifurcated, ion flow reversal, the event was not determined to be an IDR.} \label{fig:S3_noIDR} \end{figure} Work is currently in progress to further refine the stages as they are described here and also to possibly include further requirements in an effort to ensure confidence in the identification made by the algorithm. However, care must be taken to include as few assumptions as possible when refining criteria so as not to bias conclusions about the nature of the reconnection region based on the parameters used to find it. \subsection{Interpretation of Results} The abundance of correlated magnetic field and ion flow reversals without significant electric fields or evidence of Hall fields calls into question a common assumption in the study of magnetic reconnection. Non-active flow reversals such as that shown in Figure \ref{fig:s1-example} represent a large portion of the total correlated field and flow reversals observed in this study. This result goes against the received wisdom of correlated field and flow reversals being a sure indicator of a diffusion region, which has persisted for over a generation (Frank et al. 1976). The inclusion of additional criteria, such as significant evidence of Hall electric and magnetic fields, greatly improves the success of the algorithm, but still allows for some events which are not clearly diffusion regions and require further analysis before identification is certain. \subsection{Comparison with Cluster} We now compare our observations with those made by Eastwood et al. (2010), wherein Cluster data from a much longer period of time were analyzed.
The values for the proxy normal electric field fall within the range reported by Eastwood, et al. (2010) (their Figure 5), as seen in Figure \ref{fig:S2-stats}. Again, extreme and average out-of-plane magnetic field values for the events reported by Eastwood, et al. (2010) are also shown for comparison. All four quadrants of the Hall magnetic field structure are observed clearly in $3/12$ events reported here with a typical spacecraft separation of $\sim 20km$. The Cluster mission, with spacecraft separation greater than $200km$ and often exceeding $1000km$ for all orbits reported by Eastwood, et al. (2010), observed all four quadrants slightly more often ($6/18$ diffusion regions), as expected from the larger spacecraft separation and the expected extent of Hall magnetic field signatures away from the diffusion region. In both studies at least two and often three quadrants of the Hall magnetic field were observed for each event where the full quadrupole was not. These factors, along with the predominance of tailward over earthward motion of the X-line and the clear preference for dusk-side location, suggest that the properties of the ion diffusion regions observed in this study are not fundamentally different from those observed by Cluster. Several instances of normal electric field (E$_{z}$) in this study exceeded the maximum value reported for events observed by Cluster as reported by Eastwood, et al. The reasons for this are unclear but may indicate a closer approach to the inner diffusion region than was achieved by the Cluster spacecraft during their encounters. Certainly in the case of event I of this study the strong normal electric field coincided with an encounter with the inner diffusion region (Ergun, et al. 2018). It may be that Cluster was not so fortunate during the events studied by Eastwood, et al.
The average E$_{z}$ across all twelve events reported here was $\sim 3.52\frac{mV}{m}$ for E$_{z}^{+}$ and $\sim -2.81\frac{mV}{m}$ for E$_{z}^{-}$ which, while smaller, compare reasonably well with $\sim5.33\frac{mV}{m}$ and $\sim -6.47\frac{mV}{m}$ for the same averages from the Eastwood, et al. study. The average positive out-of-plane magnetic field observed in the twelve events reported here exceeded the average value of events reported in Eastwood, et al. in quadrants 1 ($\sim 3.08nT$ vs $\sim 2.64nT$) and 3 ($\sim 4.85nT$ vs $\sim 4.44nT$). The average negative out-of-plane magnetic field is somewhat smaller than that reported by Eastwood, et al. in quadrant 4 ($\sim-2.46nT$ vs $\sim-3.25nT$) and quadrant 2 ($\sim-3.52nT$ vs $\sim-3.66nT$). A significant source of variation in the magnitudes of out-of-plane magnetic fields is likely due to different flight paths through the diffusion region, which can vary greatly from one example to the next. In all cases the averages in B$_{y}$ magnitudes from each study are sufficiently similar to be confident that the behavior of the events in this study is fundamentally similar to those reported by Eastwood, et al. A notable trend is the greater magnitude of out-of-plane magnetic fields in regions with tailward flow (quadrants 2 and 3) relative to quadrants with earthward flow (quadrants 1 and 4). This trend is evident in both the events reported here and those reported by Eastwood, et al. The ratio of average |B$_{y}$| for V$_{x} < 0$ to average |B$_{y}$| for V$_{x} > 0$ is $1.38$ for the events reported by Eastwood, et al. and $1.31$ for those reported here. These are both of similar magnitude to the ratio of average peak tailward flow to average peak earthward flow in events A--L of $\sim1.41$. The direction of X-line motion was predominantly tailward except for one event (event 'L' in Table \ref{tab:event-list}) moving earthward as indicated by the order of the ion flow reversal.
Similarly, earthward-moving events were in the minority during the Eastwood, et al. study, with only $3/18$ earthward-moving X-lines (Table 2, Eastwood, et al. (2010)). There is a suggestion that some of these tailward-moving X-lines may be near-Earth neutral lines moving tailward during the recovery phase of substorms. Similarly, the earthward-moving X-lines may be related to the expansion phase of a related substorm. Further investigations regarding this question are currently underway. \subsection{Further Discussion} An interesting statistical result is the following: the total number of IDRs reported also conforms with predictions made by Genestreti et al. (2014) pre-launch, wherein a statistical analysis of the distribution of IDRs observed by Cluster and Geotail indicated that MMS ought to observe $11\pm 4$ IDRs during the first tail phase (Genestreti et al. 2014), i.e. the one covered in this study. The twelve events we report fall comfortably within this prediction. The spatial distribution of events observed here strongly favors the dusk side (GSM$_{y} > 0$), as was also predicted by Genestreti et al., based on the locations of IDRs previously observed by Cluster and Geotail, although the dawn-dusk asymmetry observed by MMS was far greater than that of previous missions (Figure 6, Genestreti et al. 2014). During the 2017 MMS tail season the fleet spent a total of 2143.02 hrs in the plasma sheet ($n_{i} > 0.05 cc^{-1}$ by FPI fast survey). Plasma sheet dwell time on the dusk side was 1210.30 hrs and on the dawn side 932.72 hrs. This corresponds to a Dusk/Dawn ratio of 56.5\%/43.5\%. Meanwhile, we observed 11 confirmed IDR events on the dusk side and one on the dawn side (91.7\%/8.3\%). If the distribution of IDRs were to be even across the tail, we would instead expect to have seen 7 (5) events on the dusk (dawn) side.
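The expected split quoted above follows from scaling the event count by the dwell-time fraction; the short check below reproduces it from the numbers in the text. The binomial tail probability is our added estimate of how unlikely an 11-of-12 dusk split would be under a purely dwell-time-weighted distribution, assuming independent events; it is not computed in the text.

```python
from math import comb

# Dwell times and event counts as stated in the text
dwell_dusk, dwell_dawn = 1210.30, 932.72   # hours in the plasma sheet
n_events, n_dusk = 12, 11

p_dusk = dwell_dusk / (dwell_dusk + dwell_dawn)   # ~0.565 dwell fraction
expected_dusk = n_events * p_dusk                  # ~6.8, i.e. "7 (5) events"

# Probability of observing >= 11 dusk events if IDR occurrence were
# uniform across the tail and weighted only by dwell time (independence
# of events is an assumption of this estimate):
p_tail = sum(comb(n_events, k) * p_dusk**k * (1.0 - p_dusk)**(n_events - k)
             for k in range(n_dusk, n_events + 1))
```

The tail probability comes out near the percent level, supporting the conclusion that the observed asymmetry is not an orbital sampling artifact.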
We can, therefore, confidently state that the dawn-dusk asymmetry of reconnection events is not due to any observational bias caused by orbital variations. A previous study by Raj et al. (2002) of high-speed ion flows (often associated with exhaust from reconnecting X-lines) in the geomagnetic tail also showed an asymmetry of events favoring the dusk side over the dawn side well in excess of the asymmetry in total number of measurements made in either region (their Figure 18). Lu et al. (2018) propose a mechanism which produces a dawn-dusk asymmetry in the thickness of the tail current sheet and support it with PIC simulation results which approximate Cluster measurements. The asymmetry in the distribution of IDRs observed by MMS, however, is much greater than what is suggested by these previous studies. \section{Concluding Remarks} We have presented a numerical algorithm to identify IDRs and applied it to MMS \textit{in situ} observations in the geomagnetic tail during a 5-month period. The algorithm uses a stepwise scheme, testing topological parameters in the magnetic and electric field as well as the bulk ion flow to allow a high degree of confidence in the regions identified by the code. Using this method we have identified 12 IDRs from the 2017 Phase 2B magnetotail campaign of the MMS mission. After performing statistical analysis of these events and comparing them with previous surveys of magnetotail diffusion regions, we find that the algorithmically identified IDRs have average properties which are similar to those identified in previous studies. We also demonstrate categorically that MMS has only a slight orbital dawn-dusk bias in its coverage of the plasma sheet (43.5\% - 56.5\%). However, this is much smaller than the asymmetry in the IDR observations (8.3\% - 91.7\%). From this we conclude that the effect we show is a real bias in the occurrence of diffusion regions.
While our algorithm has been developed for and initially applied to MMS data, the criteria used are meant to be general for IDRs regardless of where they occur and are applicable without alteration to any mission in the geomagnetic tail. The code, as currently implemented in IDL, requires minimal modification to be adapted for use on data from any other mission with support in the SPEDAS library. The essential algorithm in the code could, with some small additional effort, be ported to other languages or adapted for use in other regions besides the tail. The source code is made available, both on github and as supplementary material here, for this purpose. The efficacy of the algorithm is readily apparent in the results generated by its use and presented here in sections 4 and 5. Analysis of those results, as presented in sections 6 and 7, shows that the algorithm produces worthwhile events which are of interest to the community with a minimum of human intervention. This presents a valuable, time-saving aid to researchers in the field of magnetic reconnection. \acknowledgments We thank the entire MMS team for the effort invested in the preparation of the data. The authors would like to thank Kevin Gennestretti and Terry Forbes for useful conversations. All MMS data are publicly available at MMS Science Data Center (https://lasp.colorado.edu/mms/sdc/public/). Source code for the algorithm described is part of the supplementary materials for this article and is available from a github repository (https://github.com/unh-mms-rogers/IDR\_tail\_search). This work was supported by NASA contracts 499878Q, 499935Q, NNX10AQ29G, NNX16AO04G, NNX15AB87G, and NSF grant AGS-1435785. \nocite{*}
\section{Introduction} State-of-the-art nonlinear spectroscopy techniques finally enable tracking ultrafast coupled electronic-vibrational motion on its natural timescale, granting unprecedented insight into the quantum-mechanical effects ruling nanoscale physical phenomena~\cite{mukamel2000arpc, jonas2003arpc, cho2008cr, hamm+zanni2011,coll21jpcc}. These advances demand time-resolved first-principles simulations complementary to measurements~\cite{conti+20jacs}. Within the variety of available approaches to non-adiabatic \textit{ab initio} molecular dynamics~\cite{curc+mart2018cr}, mean-field techniques~\cite{tully1998faraday} are appealing thanks to their excellent numerical scalability. Among them, the Ehrenfest scheme -- the classical-nuclei limit of the time-dependent self-consistent field approximation~\cite{heller1976jcp_tdscf} -- is particularly attractive as it can be straightforwardly coupled to real-time time-dependent density functional theory (RT-TDDFT), whereby the quantum-mechanical electronic subsystem explicitly evolves in time. In the last decade, RT-TDDFT+Ehrenfest has successfully complemented experiments to rationalize charge-transfer dynamics in realistic complexes~\cite{rozzi2013natcom, falke+2014sci, rozzi+2017jpcm, zhang+2017as, jaco+2020advpx}. The direct time propagation of the electronic subsystem in RT-TDDFT enables the explicit inclusion in the simulation of a laser field elevating the system to a non-stationary excited state (ES), in close similarity with the experimental scenario~\cite{desi-lien17pccp}. All electronically coherent features arising from the system not being in an eigenstate of its Hamiltonian are naturally captured, including linear-response polarization, as well as quantum interferences between different ES. However, in this context, there are some problems associated with the complete lack of electron-nuclei correlation in the single-trajectory RT-TDDFT+Ehrenfest (STE) scheme. 
Laser-initiated electron dynamics remain overly coherent when the nuclei always have well-defined positions and momenta. Moreover, neglecting zero-point energy (ZPE) is hardly justifiable in organic systems, where it is much higher than thermal energy due to the low atomic masses. Here, we analyze to what extent a multi-trajectory RT-TDDFT+Ehrenfest (MTE) approach with random initial configurations from quantum distributions can overcome the shortcomings of single-trajectory calculations. The formalism retrieves some of the electron-nuclear correlation that is lost in making the time-dependent self-consistent field approximation, the quantum-mechanical parent of single-trajectory Ehrenfest molecular dynamics~\cite{nancy+miller1987}. We account for coherent electron-vibrational couplings in systems excited by a laser pulse of defined shape, polarization, intensity, and duration. Results obtained for prototypical carbon-conjugated molecules, namely benzene and coronene, are contrasted against corresponding single-trajectory simulations and ensemble-averaged calculations with fixed nuclei. This comparison highlights the role of nuclear motion, which redistributes the oscillator strength in optical spectra through non-adiabatic couplings between ES. These processes are particularly evident when considering nonlinear response. The population dynamics are mainly encoded in the second order; results of corresponding simulations resemble those of established methods for accessing non-adiabatic dynamics, but naturally also feature transient contributions related to coherences between ES. Finally, we consider the nuclear motion triggered by electronic excitations, demonstrating the ability of the proposed approach to simulate wavepacket motion, which follows an almost classical time evolution for fully symmetric modes, while becoming non-trivial for the less symmetric ones.
\section{Methodology} MTE simulations are initialized by generating a set of nuclear coordinates and velocities. The coordinates ${\cal Q}_\alpha$ and the momenta ${\cal P}_\alpha$ in the normal-mode basis are randomly sampled from a Wigner distribution, which - for harmonic oscillators - reads~\cite{hillery+1984pr} \begin{align}\label{eq.wigner} \Gamma(\{{\cal Q}_\alpha\}, \{{\cal P}_\alpha\}) \propto \prod_\alpha\, f^{(p)}_\alpha({\cal P}_\alpha)f^{(q)}_\alpha({\cal Q}_\alpha), \end{align} where $f^{(p)}_\alpha$ and $f^{(q)}_\alpha$ are zero-centered Gaussians with standard deviations \begin{subequations}\label{eq.stddev} \begin{align} \sigma_\alpha^{(p)} &= \left(\frac{\hbar\Omega_\alpha}{2\tanh(\beta\hbar\Omega_\alpha/2)}\right)^{1/2}\hspace{0.3cm}\text{and}\\ \sigma_\alpha^{(q)} &= \left(\frac{\hbar}{2\Omega_\alpha\tanh(\beta\hbar\Omega_\alpha/2)}\right)^{1/2}, \end{align} \end{subequations} respectively, where $\beta = 1/k_\mathrm{B}T$, with $T$ being the temperature, and $\Omega_\alpha$ is the frequency of the vibrational mode $\alpha$. Normal coordinates and momenta sampled from this distribution are transformed into Cartesian initial conditions, \begin{subequations} \begin{align}\label{eq.normalTrafo} R_{K\nu}(t=0) &= M_K^{-1/2}\sum_{\alpha}{\cal T}^{-1}_{\alpha,K\nu}{\cal Q}_\alpha \\ \frac{\text dR_{K\nu}}{\text dt}(t=0) &= M_K^{-1/2}\sum_{\alpha}{\cal T}^{-1}_{\alpha,K\nu}{\cal P}_\alpha, \end{align} \end{subequations} where $\textbf{R}_K$ and $M_K$ are position and mass of the $K$-th nucleus, respectively, and ${\cal T}_{\alpha,K\nu}$ is the matrix transforming mass-weighted Cartesian coordinates into normal ones. The normal frequencies $\Omega_\alpha$ and the transformation matrix ${\cal T}_{\alpha,K\nu}$ required for the generation of these starting configurations are obtained from density-functional perturbation theory. 
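The initial-condition generation described above can be sketched compactly. The sketch below assumes units with $\hbar = k_\mathrm{B} = 1$ and uses a pseudo-inverse for the back-transformation to Cartesian coordinates; neither choice is stated in the text, and the actual implementation takes the frequencies and transformation matrix from density-functional perturbation theory.

```python
import numpy as np

HBAR = 1.0  # assumed units with hbar = kB = 1 for this sketch
KB = 1.0

def wigner_sample(omegas, T, rng):
    """Draw one set of normal-mode coordinates Q and momenta P from the
    harmonic Wigner distribution, using the zero-centered Gaussians with
    the standard deviations given in the text."""
    omegas = np.asarray(omegas, dtype=float)
    if T > 0:
        tanh_fac = np.tanh(HBAR * omegas / (2.0 * KB * T))
    else:
        # T -> 0 limit: tanh -> 1, i.e. the pure zero-point distribution
        tanh_fac = np.ones_like(omegas)
    sigma_p = np.sqrt(HBAR * omegas / (2.0 * tanh_fac))
    sigma_q = np.sqrt(HBAR / (2.0 * omegas * tanh_fac))
    return rng.normal(0.0, sigma_q), rng.normal(0.0, sigma_p)

def to_cartesian(Q, P, Tmat, masses):
    """Map normal-mode (Q, P) to Cartesian displacements and velocities,
    dividing by sqrt(M_K) as in the transformation given in the text.
    Tmat maps mass-weighted Cartesian coordinates to normal ones."""
    inv = np.linalg.pinv(np.asarray(Tmat, dtype=float))  # (3N x modes)
    m = np.repeat(np.asarray(masses, dtype=float), 3)    # mass per DOF
    return (inv @ Q) / np.sqrt(m), (inv @ P) / np.sqrt(m)
```

Each MTE trajectory then starts from one such (position, velocity) draw, so the ensemble average over trajectories samples the quantum distribution.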
Subsequently, the nuclear subsystem evolves based on the classical forces acting on them~\cite{ullrich2012oxford}: \begin{align}\label{eq.ehrenfest} &M_K\frac{\text d^2\mathbf{R}_K}{\text dt^2} =\nonumber\\ &-\left.\nabla_{\textbf{R}_K}\left[V_{nn}(R)+\int\text d^3r\,\rho(\textbf{r},t)V_{en}(\textbf{r},R)\right]\right|_{R=R(t)}, \end{align} where $R = \lbrace\textbf{R}_K\rbrace$ is the set of the positions of all nuclei, $V_{\mathrm{nn}}$ is the electrostatic repulsion between them, and $\rho$ is the electron density, \begin{align} \rho(\textbf{r},t) = \sum_n^{\text{occ}}|\psi_n(\textbf{r},t)|^2, \end{align} which is calculated from the occupied time-dependent Kohn-Sham orbitals, $\psi_n$. The time evolution of these orbitals is performed, using RT-TDDFT~\cite{rung+1984prl}, starting from a ground-state density obtained from density functional theory~\cite{hohenbergKohn1964pr,kohnSham1965pr}. The electronic equation of motion is the time-dependent Kohn-Sham equation, \begin{align}\label{eq.ks} i\hbar\frac{\partial}{\partial t}\psi(\textbf{r},t) = \hat{\cal H}_{\text{KS}}[\rho](\textbf{r},t)\psi(\textbf{r},t). 
\end{align} The Kohn-Sham Hamiltonian in Eq.~\eqref{eq.ks}, \begin{align}\label{eq.ks_ham} \hat{\cal H}_{\text{KS}}[\rho](\textbf{r},t) &= -\frac{\hbar^2}{2m_e}\nabla^2+ V_\mathrm{en}(\textbf{r},R(t))+V_{\text{ext}}(\textbf{r},t)\nonumber\\&+V_\text{H}[\rho(t)](\textbf{r},t)+V_\text{xc}[\rho](\textbf{r},t), \end{align} contains the kinetic energy, the electrostatic potential generated by the nuclei, $V_\mathrm{en}$, and the interaction with the external potential, $V_{\text{ext}}$, arising from the coupling to a Gaussian-enveloped, dynamical electric field in the dipole approximation, \begin{align}\label{eq.electricField} V_{\text{ext}}(\textbf{r},t) &= e\textbf{r}\cdot\textbf{E}(t) \nonumber\\&= e\textbf{r}\cdot\hat{\mathbf{n}}E_0\exp(-(t-t_\mu)^2/2t_\sigma^2)\cos(\omega_pt), \end{align} which is characterized by the polarization direction $\hat{\mathbf{n}}$, the field strength $E_0$, the pulse center $t_\mu$, the width $t_\sigma$, and the carrier frequency $\omega_p$. Finally, the last two terms in Eq.~\eqref{eq.ks_ham}, carrying a functional dependence on $\rho$, describe interactions among electrons, including the Hartree potential, $V_\mathrm{H}[\rho(t)]$, and the exchange-correlation one, $V_\mathrm{xc}[\rho]$. Much of the complexity of the dynamical many-electron problem is contained in $V_\mathrm{xc}[\rho]$, the exact form of which is unknown and requires approximations. Two layers of approximation are usually made: (i) the neglect of memory, \textit{i.e.} the dependence on $\rho(t'<t)$, the electron density at earlier times~\cite{mait+2002prl}, and (ii) the approximation of the instantaneous $V_\mathrm{xc}[\rho(t)]$ by a ground-state density functional, inserting $\rho(t)$ instead of the ground-state density. 
The adiabatic approximation (i) is most accurate when the electronic system remains close to the ground state~\cite{lacombe2020fd, lacombe+maitra2021jpcl}, such that it is advisable to choose a small laser amplitude, $E_0$, generating only little excited-state population. All simulations are carried out with version 9.2 of the \textsc{Octopus} code~\cite{octopus2015, octopus2020}. Wavefunctions are represented on a real-space grid generated by sampling with a spacing of 0.24~\AA\,the union of spheres of radius 5~\AA\,centered at each atomic site. Using the FIRE algorithm~\cite{fire}, geometries are optimized until forces are below 10$^{-3}$~eV/\AA~before determining the normal modes of vibration. 500 initial geometries and velocities are generated by sampling Eq.~\eqref{eq.wigner} for all modes (excluding rotations and translations). For the ensuing time evolution, we employ the approximated enforced time-reversal symmetry scheme~\cite{castro+2004jcp} with a time step of 2.7~as. The Perdew-Zunger variant~\cite{pz} of the adiabatic local-density approximation is used for the exchange-correlation potential, and nuclear potentials are described with Troullier-Martins pseudopotentials~\cite{trou-mart91prb}. The parameters for the electric field in Eq.~\eqref{eq.electricField} are chosen as $\hat{\textbf{n}} = (1,1,1)/\sqrt{3}$, $t_\mu$~=~8~fs, $t_\sigma$~=~2~fs, and $E_0$ corresponding to a peak intensity of about 3.5$\times$10$^{10}$~W/cm$^2$. The carrier frequency $\omega_p$ is set to 6.9~eV, 3.7~eV, and 3.9~eV for benzene, coronene, and N-substituted coronene, respectively. No thermal energy is added, \textit{i.e.}, $T=0$, though test runs reveal that the difference between $T=0$ and $T=300$K is rather small (Fig.~S2). 
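With the pulse parameters listed above, the external field of Eq.~\eqref{eq.electricField} can be sketched as follows (benzene values; $\hbar$ in eV\,fs). The amplitude is left as a free parameter, since the conversion from the quoted peak intensity depends on unit conventions.

```python
import numpy as np

HBAR_EV_FS = 0.6582            # hbar in eV*fs
T_MU, T_SIGMA = 8.0, 2.0       # pulse center and width in fs (values from the text)
OMEGA_P = 6.9 / HBAR_EV_FS     # benzene carrier frequency in rad/fs
N_HAT = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)   # polarization direction


def efield(t, E0=1.0):
    """Gaussian-enveloped pulse E(t); t is an array of times in fs.

    Returns an array of shape (len(t), 3) with the field vector at each time.
    """
    t = np.asarray(t, dtype=float)
    envelope = E0 * np.exp(-((t - T_MU) ** 2) / (2.0 * T_SIGMA ** 2))
    return np.outer(envelope * np.cos(OMEGA_P * t), N_HAT)
```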
\section{Results and Discussion} \subsection{Electron Dynamics: Linear Regime} \begin{figure*} \centering \includegraphics[width=.95\textwidth]{fig_1.pdf} \caption{a) Dipole moment induced by a pulse (grey area) resonantly exciting benzene (top), coronene (middle), and N-substituted coronene (bottom). Faint curves represent the envelopes resulting from single-trajectory calculations starting from rest. b) Absorption spectrum of coronene (red curve) and its N-substituted counterpart (blue) triggered by pulses with spectra shown by dashed lines. The same ensemble is used for N-coronene with nuclei fixed in their initial configuration (turquoise). Inset: Ball-and-stick representation of coronene (C atoms in grey, H in white), with the CH group circled in blue replaced by N in the substituted counterpart.} \label{fig.1} \end{figure*} The central quantity to consider in the analysis of the electron dynamics is the ensemble-averaged electronic dipole moment. This quantity creates a polarization in macroscopic samples, which, in turn, gives rise to an emitted electric field~\cite{krum+21jcp}; oscillations of the induced dipole moment are an indicator of (electronic) coherence. In benzene, where the linear absorption spectrum~\cite{kochOtto1972cpl} is reproduced remarkably well by the employed adiabatic local-density approximation~\cite{yabanaBertsch1999ijqp}, a laser pulse in resonance with the strong $1^1E_{1u}\leftarrow 1^1A_{1g}$ excitation at 6.9~eV causes merely short-lived dipolar oscillations [Fig.~\ref{fig.1}a), top]: A monoexponential fit to the decaying envelope yields a dephasing time $T_{01}$ = 2.5~fs. 
The corresponding first-order density matrix after the pulse, giving rise to the induced dipole moment, is \begin{align}\label{eq.linear_single} \hat\rho^{(1)}(t) \sim \exp(i\omega_{01}t-t/T_{01})|0\rangle\langle 1|, \end{align} where $|0\rangle$ and $|1\rangle$ represent $1^1A_{1g}$ and $1^1E_{1u}$ states with energies $E_0$ and $E_1$, respectively, and $\omega_{01} = (E_1-E_0)/\hbar$. In contrast to the exponential decay, no damping is observed in the single-trajectory case (grey faded curve), yielding $\hat\rho^{(1)}(t)$ as in Eq.~\eqref{eq.linear_single} except for the missing decaying part in the exponential. The fast damping of the induced dipole moment in benzene results from the small size of this molecule, its correspondingly high flexibility, and its high excitation energy. In larger and more rigid molecules like coronene [Fig.~\ref{fig.1}b), inset], the decay is slower, occurring over tens of femtoseconds [Fig.~\ref{fig.1}a), middle panel]. Here, the laser frequency is set to 3.7~eV [Fig.~\ref{fig.1}b), dashed red curve], close to the absorption onset of the molecule. An initial transient polarization during irradiation produces a maximum of the envelope at 8~fs; this is a dispersive rather than absorptive feature, associated with the real part of the polarizability. The subsequent decay is not monotonic as for benzene, but superimposed with a beating pattern that is missing in the STE calculation. In the latter scenario, such a nuclear-motion-induced beating is not expected as the laser and consequently the induced nuclear motion is extremely weak: The excited-state electron dynamics in this approach are predominantly mediated by zero-point energy, not by induced wavepacket motion. On a technical note, the dipole moment statistically converges much faster for coronene than for benzene (Sec.~S4 and Fig.~S5), \textit{i.e.}, fewer trajectories are required for the former molecule, likely owing to its bigger size and larger rigidity. 
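Extracting a dephasing time such as $T_{01}$ from the ensemble-averaged dipole signal amounts to a monoexponential fit of its decaying envelope. A minimal numpy-only sketch is given below; the envelope extraction itself (e.g., from the maxima of $|d(t)|$ after the pulse) is assumed to be done beforehand.

```python
import numpy as np


def dephasing_time(t, envelope):
    """Fit A*exp(-t/T) to a strictly positive envelope by linear least
    squares on log(envelope); returns the dephasing time T in the units of t."""
    slope, _intercept = np.polyfit(t, np.log(envelope), 1)
    return -1.0 / slope
```

For noisy data a nonlinear fit would be preferable, but for a clean exponential decay the log-linear fit recovers $T$ directly.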
Symmetry is another important aspect: Both benzene and coronene belong to the $D_{6h}$ point group and are thus highly symmetric. As a consequence, neither of them represents the common scenario of disordered samples, in which far fewer selection rules are in effect~\cite{harris1989,cocchi+2014jpca}, leading to richer vibronic dynamics involving many bright electronic states and coupled vibrations. As an example, only 2 out of 30 vibrational modes in benzene are totally symmetric~\cite{wilson1934pr} and, therefore, allowed to couple to electronic excitations within the Franck-Condon (FC) approximation. In a molecule without symmetries, such constraints are absent. Motivated by these considerations, we examine a less symmetric conjugated molecule obtained by isoelectronically replacing one CH group in coronene by an N atom [Fig.~\ref{fig.1}b), inset]. The laser frequency is set to 3.9~eV, slightly above the absorption onset [Fig.~\ref{fig.1}b), dashed blue curve]. Already in the single-trajectory dynamics, a persistent beating pattern appears in the induced dipole moment [Fig.~\ref{fig.1}a), bottom]. This is a fingerprint of the large number of optically-active states participating in the dynamics, which still predominantly occur within the linear regime, resulting from a density matrix of the form \begin{align}\label{eq.linear_multi} \hat\rho^{(1)}(t) \sim \sum_{m=1}^M\rho_{0m}\exp(i\omega_{0m}t)|0\rangle\langle m|, \end{align} involving a total of $M$ ES within the frequency band of the laser. The coefficients $\rho_{0m}$ depend on the coupling strength between the electric field and the $|0\rangle\rightarrow|m\rangle$ transition. As for coronene, the ensemble-averaged dipole decays over time, supplying the exponents in Eq.~\eqref{eq.linear_multi} with a real-valued part, $i\omega_{0m}t\rightarrow i\omega_{0m}t-t/T_{0m}$. 
From the induced dipole moments, we are able to calculate the optical absorption spectrum within a limited frequency band (see Supplementary Material, Sec.~S1), for which related nuclear-ensemble-based methods were previously employed~\cite{barbatti+2010pccp, crespo-oteroBarbatti2012tcacc, lively+2021jpcl}. Their success in predicting linewidths validates our results, as these widths are closely related to the polarization decay. In the spectrum of coronene, the so-called $\beta$-band~\cite{clar1964pah}, corresponding to the bright $1^1E_{1u}\leftarrow 1^1A_{1g}$ excitation, exhibits some structure at the low-energy end and a shoulder at the high-energy side [Fig.~\ref{fig.1}b)], in agreement with experiments~\cite{ohno+1972bull, cataldo+2011full}. Compared to linear-response calculations of the molecule in equilibrium (Fig.~S1), the peak is red-shifted as a sign of FC-type vibronic coupling, corresponding to a non-vanishing curvature in the FC region, \textit{i.e.}, the vertical projection of the ground-state distribution onto ES surfaces. The finite oscillator strength at 3.1~eV is a consequence of Herzberg-Teller (HT) vibronic coupling. Like the bright $1^1E_{1u}\leftarrow 1^1A_{1g}$ excitation, the $1^1B_{2u}\leftarrow 1^1A_{1g}$ one, corresponding to the so-called $p$-band~\cite{clar1964pah}, arises from transitions between the double-degenerate highest occupied and lowest unoccupied orbitals. In the FC approximation, it is strictly forbidden, as the two equivalent configurations involving degenerate frontier orbitals are superposed destructively. However, symmetry-breaking fluctuations of the nuclear positions due to ZPE give rise to a non-zero transition dipole moment: the excitation borrows intensity from $1^1E_{1u}\leftarrow 1^1A_{1g}$, enabled by their energetic proximity. Compared to the pristine counterpart, N-coronene absorbs less in the considered energy window [Fig.~\ref{fig.1}b)]. 
The main peak exhibits several satellites due to optical activation of dark states by substitution-related symmetry lowering (Fig.~S1). While the $p$-band is no longer visible, likely due to negligible laser power at corresponding frequencies, additional dark states emerge at 3.3~eV, draining oscillator strength. As the laser spectrum is not centered on the peak, induced dipolar oscillations are rather weak [Fig.~\ref{fig.1}a)]. This requires a higher number of trajectories to statistically converge the dipole moment (Fig.~S5); the larger number of bright states and FC-coupled vibrational modes plays a role, too. \begin{figure*} \centering \includegraphics[width=.95\textwidth]{fig_2.pdf} \caption{a) Time- and frequency-resolved dipole moment of N-coronene with the laser spectrum shown by white dashed lines. b) Slow components of the dipole moment resolved in $x$ and $y$ (dashed and solid lines, respectively, inset) with the grey bar marking the interval of laser irradiation. Right inset: schematic illustration of the time evolution, involving two excited states with different dipole moments.} \label{fig.2} \end{figure*} We perform additional calculations with the same nuclear ensemble, but keeping the atoms fixed. This scenario can be expected to yield results equivalent to those from the snapshot-based nuclear-ensemble approach based on linear-response TDDFT~\cite{barbatti+2010pccp, crespo-oteroBarbatti2012tcacc}, given the good agreement between the linear absorption spectra predicted by real-time propagation and the Casida equation formalism (Fig.~S1). Comparing the fully dynamical to the snapshot results, we can assess the role of nuclear momentum in the coupled dynamics. We highlight two differences in the resulting spectra [Fig.~\ref{fig.1}b)]: (i) Main absorption features are smeared out by the nuclear motion; details in the spectra arise from long-term time evolution, which here is damped by moving ions. 
(ii) Nuclear motion leads to a redistribution of the oscillator strength at 3.5~eV to lower energy. In time domain, this is reflected in stronger mid-term dipolar oscillations (40-60~fs window, not shown). We attribute both (i) and (ii) to non-adiabatic coupling: Population is transferred from weak transitions at 3.8~eV to bright ones at 3.6~eV, and finally to the dark states below the onset. Such processes are mediated by nuclear momentum and thus missing in static-nuclei calculations. In the field of non-adiabatic molecular dynamics, the term ``coherence'' often evokes associations with the localization of nuclear wavefunctions on different potential-energy surfaces (PES). ``Decoherence'' is thus understood mainly as nuclear wavepackets travelling towards different regions in configurational space. This is manifested, \textit{e.g.}, in the definition of coherence indicators based on integrals over absolute values of nuclear wavefunctions~\cite{min+2017jpcl}, neglecting the complex phase. In this sense, there is no decoherence in Ehrenfest dynamics. However, writing the coherence between the ground state $g$ and an ES $e$, with respective wavepackets $\chi_g$ and $\chi_e$, as \begin{align} \rho_{ge}(t) = \int\text d{\cal Q}\,\chi_g^*({\cal Q},t)\chi_e({\cal Q},t), \end{align} it is clear that it does not solely decrease due to the divergence of wavepackets, but also due to internal dephasing between $\chi_g$ and $\chi_e$. In the limiting case of an instantaneous optical excitation, $e\leftarrow g$, part of $\chi_g$ is elevated to an ES surface to form $\chi_e$, where it no longer corresponds to an energy eigenstate and thus undergoes a non-trivial and ${\cal Q}$-dependent phase evolution, while $\chi_g({\cal Q},t) = \chi_g({\cal Q},0)\exp(-i\omega_gt/2)$. As a result, $\rho_{ge}$ can quickly diminish even while $\chi_e$ has not yet left the FC region. 
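This internal dephasing can be made concrete with a minimal one-dimensional model (an assumption for illustration, not the paper's system): the ES surface is approximated by a linear slope $\kappa$ in the FC region, so that $\chi_e$ accumulates a ${\cal Q}$-dependent phase while $|\chi_e|^2$ does not move at all; the coherence nevertheless decays in time.

```python
import numpy as np

# 1D illustration: chi_e = chi_g * exp(-i*kappa*Q*t) models the Q-dependent
# phase accumulated on a linearly sloped ES surface (illustrative model only).
Q = np.linspace(-10.0, 10.0, 4001)
DQ = Q[1] - Q[0]
SIGMA = 1.0     # ground-state wavepacket width
KAPPA = 0.5     # assumed ES slope in the FC region
chi0 = (np.pi * SIGMA**2) ** -0.25 * np.exp(-Q**2 / (2.0 * SIGMA**2))


def coherence(t):
    """rho_ge(t) = int dQ chi_g* chi_e for the linear-slope model."""
    chi_e = chi0 * np.exp(-1j * KAPPA * Q * t)
    return np.sum(np.conj(chi0) * chi_e) * DQ

# For this model, |rho_ge(t)| = exp(-(KAPPA*SIGMA*t)**2 / 4): a Gaussian
# coherence decay without any motion of the nuclear density itself.
```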
This contribution to the loss of coherence is captured by MTE through the distribution of electronic transition frequencies. The good agreement of the predicted initial dipole dynamics with exact results recently shown by Albareda \textit{et al.} \cite{albareda2021jctc} indicates that the internal dephasing occurs significantly faster than the departure of $\chi_e$ from the FC region of the ES surface. Overlap revivals due to rephasing of $\chi_g$ and $\chi_e$ - the cause of FC replicas in optical spectra~\cite{heller1976jcp} - are, however, not captured, as the coherence decay is irreversible due to the non-quantized distribution of transition frequencies. In larger systems, such recurrences become increasingly unlikely, as they have to take place in all vibrational modes simultaneously; a vanishing overlap for a single mode renders the total overlap zero. \subsection{Electron Dynamics: Nonlinear Regime} Populations of many-body states are diagonal elements of the density matrix and thus can be reached only through two-photon absorption. Consequently, they come into play in second-order processes, buried underneath the dominant linear response. Such populations are not directly accessible in the adopted real-time implementation. However, indications can be drawn from the time-resolved fluorescence, calculated here by performing short-time Fourier transforms of individual dipole moments~\cite{kuda+2020jpca} (Sec.~S1). The ensemble-averaged result remains constant after some initial transient polarization during laser irradiation if nuclei are frozen [Fig.~\ref{fig.2}a), top panel]: no population is transferred to other states, as anticipated. By enabling nuclear motion, other frequency components are mixed in over time at the expense of the optically targeted state, mainly resulting from an energetically lower state at 2.7~eV [Fig.~\ref{fig.2}a), bottom panel]. However, higher-lying states participate, too. 
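The short-time Fourier analysis used for such time-resolved signals can be sketched as follows: a naive Gaussian-windowed transform of a scalar dipole trace. The window width and hop size below are illustrative choices, not the values used for Fig.~\ref{fig.2}.

```python
import numpy as np


def stft_dipole(d, dt, window_fs=10.0, hop=10):
    """Naive short-time Fourier transform of a scalar dipole signal d(t).

    d : 1D array sampled with step dt (fs); window_fs sets the Gaussian
    window width. Returns (window centers, angular frequencies in rad/fs,
    |D(t, w)|) on a coarse time grid.
    """
    n = len(d)
    t = np.arange(n) * dt
    centers = t[::hop]
    freqs = np.fft.rfftfreq(n, d=dt) * 2.0 * np.pi
    spec = np.empty((len(centers), len(freqs)))
    for i, tc in enumerate(centers):
        w = np.exp(-((t - tc) ** 2) / (2.0 * window_fs ** 2))
        spec[i] = np.abs(np.fft.rfft(d * w))
    return centers, freqs, spec
```

Applied to a monochromatic test signal, the spectrogram peaks at the oscillation frequency at every window position, while for an MTE dipole trace the frequency content drifts as population is redistributed.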
We note that this population transfer is mediated by the zero-point energy, not by the induced nuclear motion. Indeed, the latter is very weak in the employed formalism, since only a weak pulse is applied. This can be expected to work well only for systems that do not reorganize significantly upon excitation, \textit{i.e.}, whose targeted ES surfaces do not differ qualitatively from the ground-state one and support bound states. More aspects of the ES dynamics can be illuminated using arguments based on perturbation theory. ES populations and coherences, $\hat\rho^{(2)}(t) \sim \rho_{mn}\exp(i\omega_{mn}t)|m\rangle\langle n|$, where $|m\rangle$ and $|n\rangle$ are both ES, are part of the second-order response. Coherences ($m\neq n$) tend to have oscillation frequencies $\omega_{mn}$ similar to those of interatomic vibrations, and have been conjectured to play a key role in energy transport in certain photosynthetic complexes~\cite{brixner+2005nature, engel+2007nature, lee+2007science, collini+2010nature, pani+2010proceedings, hildner+2013sci}. The second-order response additionally contains second-harmonic and Raman terms, both involving the ground state (Fig.~S3). All these superposed processes can be partially separated using phase-cycling~\cite{seidner+1995jcp} or low-pass filtering (Sec.~S3), exploiting the fact that second-harmonic processes, as well as linear ones, entail a distinctively fast time evolution. We now investigate the ES dynamics occurring in N-coronene after excitation with a pulse centered at 3.9~eV. Due to the lack of inversion invariance in this system, second-order contributions tend to be dipolar in character, which is not possible in centrosymmetric compounds like coronene. In the N-substituted molecule we indeed find a non-zero second-order dipole after irradiation [blue curves in Fig.~\ref{fig.2}b)]. 
For a brief period after excitation, coherence between ES is maintained, as evident in the initial dip, corresponding to a single oscillation cycle. Afterwards, this evolution is taken over by an incoherent buildup of dipole moment on a timescale of $\sim$100~fs, associated with population transfer between states. Unsurprisingly, this effect is absent in fixed-nuclei simulations [Fig.~\ref{fig.2}b), turquoise curves]. The dipole moment after 100~fs differs significantly by magnitude and orientation in the two scenarios [Fig.~\ref{fig.2}b), right inset]: with enabled nuclear dynamics, it points towards the N atom, indicating charge transfer to its site after excitation of the delocalized $\pi$-conjugated network. The dipole moment does not always reflect the ES dynamics as unambiguously as in this case. In general, one can resort to other observables directly related to the electron density, such as partial charges or higher multipole moments. Compared to other methods for non-adiabatic dynamics, the focus is thus shifted away from PES towards a real-space representation based on the charge density. Further conclusions can be drawn from direct simulations of third-order spectroscopy, which is straightforward with RT-TDDFT or related methods~\cite{umberto+2013cpc,bonafe+2018jpcl, hernandez+2019jpca, krumland+2020jcp, herperger+2021jpca}. \begin{figure} \centering \includegraphics[width=0.475\textwidth]{fig_3_old.pdf} \caption{Wave-packet motion caused by electronic excitation of benzene, achieved by applying a laser pulse with a frequency of about 6.9~eV. Positive (red) and negative (blue) areas indicate increased and decreased nuclear probability density with respect to the ground state, respectively. The top panel shows the dominant totally symmetric breathing mode, the bottom one a less symmetric, yet Herzberg-Teller active stretching mode. The dashed line in the top panel is a sine function with a frequency of 1000~cm$^{-1}$, matching that of the breathing mode. 
} \label{fig.3} \end{figure} \subsection{Nuclear Dynamics} Finally, we analyze vibrational coherence established by electronic excitation. This corresponds to harmonic wavepacket motion after instantaneous elevation to a harmonic ES PES and is characterized by an essentially classical time evolution~\cite{schleich2011quantum}. Thus, such motion does not need to be viewed as arising from quantum interference, even though the term ``coherence'' tends to invoke such associations~\cite{miller2012jcp}. In the following, we consider benzene as an example. As mentioned, this molecule has only two totally symmetric modes that can couple to electronic excitations within the FC approximation. One of them is the breathing mode around 1000~cm$^{-1}$, while the other one is a C-H stretching mode, which, however, is not effectively stimulated by the $\pi-\pi^*$ transition at 6.9~eV. In STE, which yields only FC-type dynamics, nuclear motion is thus induced predominantly in the breathing mode. For MTE, we visualize the induced nuclear density (Fig.~\ref{fig.3}) which is the probability distribution of the nuclei relative to the ground state (Sec.~S1). For the breathing mode (upper panel), we indeed find an approximately harmonic time evolution. A corresponding classical trajectory is given by the superposed sine function (dashed line). At the maxima, there are adjacent regions of density accumulation and depletion, reflecting a wavepacket having departed the FC region of the ES surface, leaving behind a hole in the ground state; minima mark instances of return. The slightly tilted shape of high-density regions as well as the density not being strictly zero at the minima is the result of using a realistically shaped laser pulse instead of an instantaneous promotion to the ES. We note again that due to its mean-field nature, MTE does not actually include wavepacket splitting, but rather describes the dynamics through a single, averaged nuclear wavepacket. 
However, these different descriptions appear qualitatively similar in the nuclear density if a ground-state reference is subtracted, as done here. In MTE, modes without the complete symmetry of the structure gain energy, too, like the C=C stretching mode around 1600~cm$^{-1}$, which transforms as $e_{2g}$, thus breaking the hexagonal symmetry of the molecule. Vibrations of this representation activate otherwise dark $n^1B_{2u}\leftarrow 1^1A_{1g}$ excitations through HT-type coupling~\cite{li+2010pccp}. This type of post-FC effect can be interpreted as a fingerprint of electron-nuclear correlation, requiring a ${\cal Q}$-dependent transition-dipole moment and thus parametrically ${\cal Q}$-dependent electronic wavefunctions, with ${\cal Q}$ being normal-mode coordinates (Sec.~S1). In this case, the intuitive picture of a Gaussian wavepacket prepared and classically moving on an ES PES is invalid, and the observed nuclear dynamics are qualitatively different (Fig.~\ref{fig.3}, bottom). Particularly, perfect overlap between the wavepacket and the hole left behind in the ground state is never recovered. There is a stationary redistribution of nuclear density from the center to the outskirts of the oscillator, plus a superposed harmonic oscillation which is damped over time. \section{Summary and Conclusions} In summary, we have investigated the combination of RT-TDDFT+Ehrenfest and quantum sampling of initial conditions for \textit{ab initio} simulations of laser-induced ultrafast coherent dynamics applied to conjugated molecules. Contrary to the single-trajectory version, this approach naturally includes electronic dephasing effects without empirical parameters and is capable of describing transitions between close-lying ES. Furthermore, it can be employed to determine laser-induced nuclear dynamics such as coherent wavepacket motion of FC-coupled vibrational modes, or non-classical nuclear dynamics associated with purely HT-active excitations. 
While the considered molecules are medium-sized, the computational efficiency and scalability of RT-TDDFT+Ehrenfest favor application to larger systems. We expect the validity of the proposed scheme to increase with system size for several reasons: (i) Duschinsky rotations and vibrational frequency shifts tend to be much smaller; (ii) a revival of electronic coherence due to the wavepacket returning to the FC region -- which MTE does not seem to capture properly -- becomes unlikely; (iii) the excitation-induced electron-density perturbation becomes smaller in relation to the total density, presumably enhancing the validity of the adiabatic approximation assumed for the dynamical exchange-correlation potential. This rationale holds mainly for electrons in delocalized orbitals; for strongly localized ones, the excitation-induced density in relation to the total density is more akin to smaller molecules. The optimal trade-off between accuracy, computational efficiency, and insight into the involved physical processes offered by the proposed method opens promising perspectives for \textit{ab initio} simulations of ultrafast coherent spectroscopies as a key complement to corresponding experiments. In future work, extensions to various flavors of nonlinear~\cite{cocchi2014,guandalini2021} and multidimensional spectroscopies~\cite{desi+19zna,coll21jpcc} are foreseen. While the statistical aspects naturally lead towards machine learning~\cite{chen+2020jpcl, xue+2020jpca}, methodological progress can be achieved by coupling trajectories during the time evolution, thereby inducing wavepacket splitting~\cite{min+2015prl, curc+2018epjb, gossel+2018jctc}. \vspace{0.5 cm} \section*{Acknowledgements} We are thankful to Michele Guerrini, Katherine R. Herperger, and Mariana Rossi for fruitful discussions, and to Ralph Ernstorfer for posing the question that stimulated this research. 
This work was funded by the German Research Foundation (DFG), project number 182087777 -- CRC 951, by the German Federal Ministry of Education and Research (Professorinnenprogramm III), and by the State of Lower Saxony (Professorinnen für Niedersachsen). Computational resources were provided by the North-German Supercomputing Alliance (HLRN), project bep00076.
\section{Introduction} \IEEEPARstart{D}{uring} the process of a cascading outage in power systems, the propagation of component failures may cause serious consequences, even catastrophic blackouts \cite{r1,r2}. To effectively mitigate blackout risk, a straightforward way is to reduce the probability of blackouts, or more precisely, to reduce failure probabilities of system components by means of maintenance or similar measures. Intuitively, it can be readily understood that component failure probabilities (CFPs) have great influence on blackout risk (BR). However, it is not clear how to efficiently quantify such influence, particularly in large-scale power grids. Generally, CFP largely depends on characteristics of system components as well as working conditions of those components. Since working conditions, e.g., system states, may change during a cascading outage, CFP varies accordingly. Therefore, to quantitatively characterize the influence of CFP on BR, two essential issues need to be addressed. On the one hand, an appropriate probability function of component failure which can account for changing working conditions should first be well defined to depict CFP. In this paper, such a function is referred to as \emph{component failure probability function} (CoFPF). On the other hand, the quantitative relationship between CoFPF and BR should also be explicitly established. For the first issue, a few CoFPFs have been built in terms of specific scenarios. Reference~\cite{r3} proposes an end-of-life CoFPF of power transformers taking into account the effect of load conditions. Reference~\cite{r4} considers the process and mechanism of tree-contact failures of transmission lines, based on which an analytic formulation is proposed. Another CoFPF of transmission lines given in \cite{r5} adopts an exponential function to depict the relationship between CFP and some specific indices which can be calculated from monitoring data of transmission lines. 
This formulation can be extended to other system components in addition to transmission lines, e.g., transformers \cite{r6}. Similar works can be found in \cite{r7,r8}. In addition, some simpler CoFPFs are deployed in cascading outage simulations \cite{r9,r10,r11,r12}. Specifically, in the OPA model\cite{r9}, CoFPF is usually chosen as a monotonic function of the load ratio, while a piecewise linear function is deployed in the hidden failure model\cite{r10}. Such simplified formulations have been widely used in various blackout models\cite{r11,r12}. However, it is worth noting that these CoFPFs are formulated for specific scenarios. In this paper, we adopt a generic CoFPF to facilitate establishing the relationship between CFP and BR. The second issue, i.e., the quantitative relationship between CoFPF and BR, is the main focus of this paper. Generally, since various kinds of uncertainties during the cascading process make the number of possible propagation paths explode with the increase of system scale, it is extremely difficult, if not impossible, to calculate the exact value of BR in practice. Therefore, analytically expressing the relationship between CoFPF and BR is really challenging. In this context, estimating BR statistically, based on a set of samples generated by cascading simulations, appears to be the only practical substitute. Among the sample-based approaches in BR estimation, Monte Carlo simulation (MCS) is the most popular one to date. However, MCS usually requires a large number of samples with respect to specific CoFPFs for achieving satisfactory accuracy of estimation. Due to this intrinsic inefficiency, MCS is greatly limited in practice, particularly in large-scale systems \cite{r13}. More importantly, it cannot explicitly reveal the relationship between CoFPF and BR in an analytic manner. 
Hence, whenever parameters or forms of CoFPFs change, MCS must be completely re-conducted to generate new samples for correctly estimating BR, which is extremely time consuming. Moreover, due to the inherent strong nonlinearity between CoFPF and BR, when multiple CoFPFs change simultaneously, which is a common scenario in practice, BR cannot be directly estimated by using the relationship between BR and individual CoFPFs. In this case, in order to correctly estimate BR and analyze the relationship, the required sample size will dramatically increase compared with the case where a single CoFPF changes. This indicates that even if efficient variance reduction techniques (which may effectively reduce the sample size in a single scenario \cite{r14,r15}) are employed, the computational complexity will remain too high to be tractable. The aforementioned issue gives rise to an interesting question: when one or multiple CoFPFs change, could it be possible to accurately estimate BR without re-conducting the extremely time-consuming blackout simulations? To answer this question, the paper proposes a sample-induced semi-analytic method to quantitatively characterize the relationship between CoFPFs and BR based on a given sample set. The main contributions of this work are threefold: \begin{enumerate} \item Based on a generic form of CoFPFs, a cascading outage is formulated as a Markov sequence with appropriate transition probabilities. An exact relationship between BR and CoFPFs then follows. \item Given a set of blackout simulation samples, an unbiased estimation of BR is derived, rendering a semi-analytic expression of the mapping between BR and CoFPFs. \item A high-efficiency algorithm is devised to directly compute BR when CoFPFs change, avoiding re-conducting any blackout simulations. \end{enumerate} The rest of this paper is organized as follows. 
In Section II, a generic formulation of CoFPFs, an abstract model of cascading outages, and the exact relationship between CoFPFs and BR are presented. The sample-induced mapping between the unbiased estimation of BR and CoFPFs is then explicitly established in Section III. A high-efficiency algorithm is presented in Section IV. Case studies are given in Section V. Finally, Section VI concludes the paper with remarks. \section{Relationship between CoFPFs and BR} The propagation of a cascading outage is a complicated dynamic process, during which many practical factors are involved, such as hidden failures of components, actions of the dispatch/control center, etc. In this paper, we focus on the influence of random component failures (or, more precisely, of the CFPs) on BR. In this setting, a cascading outage can be simplified into a sequence of component failures with corresponding system states, and is usually emulated by steady-state models \cite{r9,r10,r12}. Individual component failures are then related only to the current system state and are independent of previous states, which is known as the Markov property. This property enables an abstract model of cascading outages with a generic form of CoFPFs, as we explain below. \subsection{A Generic Formulation of CoFPFs} To describe a CFP varying along with the propagation of cascading outages, a CoFPF is usually defined in terms of the working conditions of the component. In the literature, CoFPFs take various forms with regard to specific scenarios \cite{r3,r4,r5,r6,r7,r8,r9,r10,r11,r12}. To generally depict the relationship between BR and CFPs with varying parameters or forms, we first define an abstract CoFPF. 
Specifically, the CoFPF of component $k$, denoted by $\varphi_k$, is defined as \begin{eqnarray} \label{eq1} \varphi_k(s_k,\boldsymbol{\eta}_k):=\mathbf{Pr}(\text{component}\; k\; \text{fails at}\; s_k \; \text{given} \; \boldsymbol{\eta}_k) \end{eqnarray} In \eqref{eq1}, $s_k$ represents the current working condition of component $k$, which can be its load ratio, voltage magnitude, etc., and $\boldsymbol{\eta}_k$ is the parameter vector. Both $\boldsymbol{\eta}_k$ and the form of $\varphi_k$ capture the intrinsic characteristics of component $k$, such as its type and age. It is worth noting that the working condition of component $k$ varies during a cascading outage, resulting in changes of the related CFP. On the other hand, whereas the cascading process itself usually does not change $\boldsymbol{\eta}_k$ or the form of $\varphi_k$, both can be altered by controlled or uncontrolled external factors, such as maintenance and extreme weather. In this sense, Eq. \eqref{eq1} provides a generic formulation that depicts such properties of CoFPFs. \subsection{Formulation of Cascading Outages} \begin{figure}[!t] \centering \includegraphics[width=0.95\columnwidth]{f2.eps} \caption{Cascading process in power systems} \label{fig.2} \end{figure} In this paper, we are only interested in the paths of cascading outages that lead to blackouts, as well as the associated load shedding. According to \cite{r14}, cascading outages can then be abstracted as a Markov sequence with appropriate transition probabilities. Specifically, denote by $j\in \mathbb{N}$ the stage label and by $X_j$ the system state at stage $j$ of a cascading outage. Here, $X_j$ can represent the power flow of transmission lines, the ON/OFF statuses of components, or other system quantities of interest. 
The complete state space, denoted by $\mathcal{X}$, is spanned by all possible system states. $X_0$ is the initial state of the system, which is assumed to be deterministic in our study. Under this condition, $\mathcal{X}$ is finite when only the randomness of component failures is considered \cite{r14}. Note that $X_j$ specifies the working condition of each component at the current stage (stage $j$), and consequently determines $s_k$ in the CoFPF $\varphi_k(s_k, \boldsymbol{\eta}_k)$. An $n$-stage cascading outage can then be represented by a series of states $X_j$ (see Fig. \ref{fig.2}) and is defined mathematically as follows. \begin{definition} \label{cascadingoutage} An $n$-stage cascading outage is a Markov sequence $ Z:= \{ {X_0, X_1, ..., X_j, ..., X_n}$, $X_j\in \mathcal{X}, \forall j\in \mathbb{N} \}$ with respect to a given joint probability series $g(Z)=g(X_{n} ,\cdots , X_{1} ,X_{0} )$. \end{definition} In the definition above, $n$ is the total number of cascading stages, or the length of the cascading outage. We denote the set of all possible paths of cascading outages in the power system by $\mathcal{Z}$. Since $\mathcal{X}$ and the total number of components are finite, $\mathcal{Z}$ is finite as well, although it may be huge in practice. For a specific path $z\in \mathcal{Z}$ of a cascading outage, we have \begin{eqnarray*} z&=&\{x_0, x_1, ..., x_j, ..., x_n\}\\ g(Z=z)&=&g(X_0=x_0, ..., X_j=x_j, ..., X_n=x_n) \end{eqnarray*} where $g(Z=z)$ is the joint probability of the path $z$. For simplicity, we write $g(Z=z)$ as $ g(z)=g({x_n}, \cdots, {x_1},{x_0}) $. 
Invoking the conditional probability formula and the Markov property, $g(z)$ can be further rewritten as \begin{equation} \label{eq8} \begin{array}{rcl} g(z) & = &g({x_n}, \cdots, {x_1},{x_0}) \\ &= &g_n({x_n}|{x_{n - 1}} \cdots {x_0})\cdot g_{n-1}({x_{n - 1}}|{x_{n - 2}} \cdots {x_0}) \\ & & \cdots g_1(x_1|x_0)\cdot g_0({x_0}) \\ &= &g_n({x_n}|{x_{n - 1}})\cdot g_{n-1}({x_{n - 1}}|{x_{n - 2}}) \cdots g_0({x_0}) \end{array} \end{equation} where \begin{equation} \nonumber g_{j+1}({x_{j+1}}|{x_{j}})=\mathbf{Pr}(X_{j+1}=x_{j+1}|X_{j}=x_{j}) \end{equation} \begin{equation} \nonumber g_0({x_0})=\mathbf{Pr}(X_0=x_0)=1 \end{equation} It is worth noting that this formulation is a mathematical abstraction both of cascading processes in practice and of simulation models that consider physical details \cite{r9,r10,r12}. Unlike high-level statistical models \cite{r18,r19}, it provides an analytic way to depict the influence of many physical details, e.g., CFPs, on cascading outages and BR, as will be elaborated later. \subsection{Formulation of Blackout Risk} In the literature, blackout risk has various definitions \cite{r14,r20,r21}. Here we adopt the widely used one, which is defined with respect to the load shedding caused by cascading outages. Due to the intrinsic randomness of cascading outages, the load shedding, denoted by $Y$, is also a random variable, determined by the path-dependent propagation of cascading outages. Therefore, $Y$ can be regarded as a function of cascading outage events, denoted by $Y:=h(Z)$. The BR with respect to $g(Z)$ is then defined as the expectation of the load shedding that exceeds a given level $Y_0$. 
That is \begin{equation} \label{eq10} R_g(Y_0)=\mathbb{E}(Y\cdot\delta_{\{Y\ge{Y_0}\}}) \end{equation} where $R_g(Y_0)$ stands for the BR with respect to $g(Z)$ and $Y_0$, and $\delta_{\{Y\ge{Y_0}\}}$ is the indicator function of $\{Y\ge{Y_0}\}$, given by \begin{equation*} \delta_{\{Y\ge{Y_0}\}}:=\left\{\begin{array}{lll} 1 & &\text{if}\quad Y\ge{Y_0};\\ 0 & &\text{otherwise}. \end{array} \right. \end{equation*} In Eq. \eqref{eq10}, when the load shedding level is chosen as $Y_0=0$, the definition reduces to the traditional definition of blackout risk. If $Y_0>0$, it stands for the risk of cascading outages with severe consequences, which is closely related to the renowned risk measures value at risk (\textit{VaR}) and conditional value at risk (\textit{CVaR}) \cite{r16}. Specifically, the risk defined in \eqref{eq10} is equivalent to \textit{CVaR}$_\alpha$ times $(1-\alpha)$ with respect to \textit{VaR}$_\alpha=Y_0$ at a confidence level $\alpha$. \subsection{Relationship Between BR and CoFPFs} We first derive the probability of cascading outages based on the generic form of CoFPFs, and then characterize the relationship between BR and CoFPFs. At stage $j$, the working condition of component $k$ can be represented as a function of the system state $x_j$, denoted by $\phi_k(x_j)$; that is, $s_k:=\phi_k(x_j)$. Hence the CFP of component $k$ at stage $j$ is $\varphi_k(\phi_k(x_j),\boldsymbol{\eta}_k)$. To avoid an abuse of notation, we let $\varphi_k(x_j)$ stand for $\varphi_k(\phi_k(x_j),{\boldsymbol{\eta}_k})$. Considering stages $j$ and $(j+1)$, we have \begin{equation} \label{eq4} g_{j+1}(x_{j+1}|x_{j})=\prod\limits_{k \in {F(x_{j})}} {{\varphi _k}( x_j )} \cdot \prod\limits_{k \in {\bar{F}(x_{j})}} {(1 - {{\varphi _k}( x_j )})} \end{equation} In Eq. \eqref{eq4}, $F(x_{j})$ is the set of components that are defective at $x_{j+1}$ but work normally at $x_{j}$, while $\bar{F}(x_{j})$ is the set of components that work normally at both $x_{j}$ and $x_{j+1}$. Note that $\bar{F}(x_{j})$ is not the complement of $F(x_{j})$, since components that failed at earlier stages belong to neither set. With \eqref{eq4}, Eq. \eqref{eq8} can be rewritten as \begin{equation} \label{eq2} \begin{array}{rcl} g(z)&=&{\prod\limits_{j=0}^{n-1}{g_{j+1}(x_{j+1}|x_{j})}}\\ &=&\prod\limits_{j=0}^{n-1}\left[\prod\limits_{k \in {F( x_{j} )}} {{\varphi_{k}}({x_j})} \cdot \prod\limits_{k \in {\bar{F}( x_{j} )}} {(1 - {\varphi _{k}}({x_j}))} \right]\\ \end{array} \end{equation} Furthermore, substituting $Y=h(Z)$ into \eqref{eq10} yields \begin{equation} \label{eq11} \begin{array}{rcl} R_g(Y_0) & = & \mathbb{E}(h(Z)\cdot\delta_{\{h(Z)\ge{Y_0}\}})\\ &=&\sum\limits_{z \in \mathcal{Z}} {g(z)h(z){\delta _{\{ h(z) \ge {Y_0}\} }}} \\ \end{array} \end{equation} Theoretically, the relationship between BR and CoFPFs can be established immediately by substituting \eqref{eq2} into \eqref{eq11}. However, this relationship cannot be directly applied in practice, as we now explain. According to \eqref{eq2}, the component failures occurring at different stages of a cascading outage path are correlated with one another. This long-range coupling, unfortunately, produces a complicated and nonlinear correlation between BR and CoFPFs. In addition, since the number of components in a power system is usually quite large, the cardinality of $\mathcal{Z}$ can be huge. Hence it is practically impossible to accurately calculate BR with respect to the given CoFPFs by directly using \eqref{eq2} and \eqref{eq11}. To circumvent this problem, we next use an unbiased estimation of BR as a surrogate and propose a sample-based semi-analytic method to characterize the relationship. 
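As an illustration of the factorized path probability \eqref{eq2}, the following Python sketch evaluates $g(z)$ stage by stage for a toy path; the component indices and CFP values are purely illustrative placeholders, not outputs of any real cascading model.

```python
def path_probability(stages):
    """Joint probability g(z) of a cascading path: at every stage,
    multiply the CFP of each component that fails at the transition
    and (1 - CFP) of each component that keeps working.  Components
    lost at earlier stages are simply absent from later dicts."""
    g = 1.0
    for stage in stages:
        for cfp, failed in stage.values():
            g *= cfp if failed else (1.0 - cfp)
    return g

# Toy 2-stage path: component 1 fails first, then component 2,
# while component 3 survives both stages (illustrative numbers).
toy_path = [
    {1: (0.10, True), 2: (0.05, False), 3: (0.02, False)},
    {2: (0.40, True), 3: (0.02, False)},
]
g_z = path_probability(toy_path)
```

With real data, each per-stage dict would be derived from the system state $x_j$ and the CoFPFs $\varphi_k(x_j)$.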
\section{Sample-induced Semi-analytic Characterization} \subsection{Unbiased Estimation of BR} To estimate BR, conducting MCS is the easiest and most extensively used way. The first step is to generate independent identically distributed (i.i.d.) samples of cascading outages and the corresponding load shedding with respect to the joint probability series $g(Z)$. Unfortunately, $g(Z)$ is unknown in practice. In such a situation, one can heuristically sample the failed components at each stage of possible cascading outage paths according to the corresponding system states and CoFPFs. Afterwards, the system states at the next stage are determined with the updated system topology. This process repeats until no new failure occurs, at which point a path-dependent sample has been generated. This method essentially carries out sampling sequentially using the \emph{conditional component probabilities} instead of the \emph{joint probabilities}. Eq. \eqref{eq2} provides this method with a mathematical interpretation, which is an application of the Markov property of cascading outages. Suppose $N$ i.i.d. samples of cascading outage paths are obtained with respect to $g(Z)$. Let $Z_g:=\{z^i,i=1,\cdots,N\}$ denote the set of these samples. The $i$-th cascading outage path in the set is then expressed by $z^i:=\{x_0^i,\cdots,x_{n^i}^i\}$, where $n^i$ is the total number of stages of the $i$-th sample. For each $z^i \in Z_g$, the associated load shedding is given by $y^i=h(z^i)$. All $y^i$ make up the set of load shedding with respect to $g(Z)$, denoted by $Y_g:=\{y^i,i=1,\cdots,N\}$. The unbiased estimation of BR is then formulated as \begin{equation} \label{eq12} \hat{R}_g(Y_0)=\frac{1}{N}\sum\limits_{i = 1}^N {{y^i}{\delta _{\{ {y^i} \ge {Y_0}\} }}}=\frac{1}{N}\sum\limits_{i = 1}^N {{h(z^i)}{\delta _{\{ {h(z^i)} \ge {Y_0}\} }}} \end{equation} Note that Eq. \eqref{eq12} is tied to $g(Z)$, and hence to the corresponding CoFPFs. 
That implies that the underlying relationship between BR and CoFPFs relies on samples. Hence, whenever the parameters or forms of the CoFPFs change, all samples need to be re-generated to estimate BR, which is extremely time consuming and even practically impossible. Next we derive a semi-analytic method by building a mapping between CoFPFs and the unbiased estimation of BR. \subsection{Sample-induced Semi-Analytic Characterization} Suppose the samples are generated with respect to $g(Z)$. Then the sample set is $Z_g$, and the set of load shedding is $Y_g$. When $g(Z)$ is changed into $f(Z)$ (both defined on $\mathcal{Z}$), usually all samples of cascading outage paths need to be regenerated. However, inspired by the sample treatment in importance sampling \cite{r14}, it is possible to avoid sample regeneration by exploiting the underlying relationship between $g(Z)$ and $f(Z)$. Specifically, for a given path $z$, we define \begin{equation} \label{eq:w_def} w(z):=\frac{f(z)}{g(z)}, \quad z\in\mathcal{Z} \end{equation} Then each sample drawn under $g(Z)$, weighted by $w(z)$, can be regarded as a sample under $f(Z)$. Consequently, the unbiased estimation of BR in terms of $f(Z)$ can be obtained directly from the samples generated with respect to $g(Z)$, as we explain. From Eqs. \eqref{eq12} and \eqref{eq:w_def}, we have \begin{equation} \label{eq13} \hat{R}_f(Y_0)=\frac{1}{N}\sum\limits_{i = 1}^N {w(z^i){h(z^i)}{\delta _{\{ {h(z^i)} \ge {Y_0}\} }}} \end{equation} Obviously, when $w(z)\equiv 1,z\in\mathcal{Z}$, \eqref{eq13} is equivalent to \eqref{eq12}. Moreover, Eq. 
\eqref{eq13} is an unbiased estimation, by noting that \begin{equation} \label{eq14} \begin{array}{rcl} \mathbb{E}(\hat{R}_f(Y_0) )& = & \mathbb{E}\left(\frac{f(Z)}{g(Z)}{h(Z)}{\delta _{\{ {h(Z)} \ge {Y_0}\} }}\right) \\ & = & \sum\limits_{z\in\mathcal{Z}}{ g(z) \times \frac{f(z)}{g(z)}{h(z)}{\delta _{\{ {h(z)} \ge {Y_0}\} }} }\\ & = & R_f(Y_0) \end{array} \end{equation} Note that in Eqs. \eqref{eq13} and \eqref{eq14}, only the information of $h(z)$ is required. One crucial implication is that the BR with respect to $f(Z)$, $R_f$, can be estimated directly, without regenerating cascading outage samples. This feature further leads to an efficient algorithm for analyzing BR under varying CoFPFs, as discussed next. \section{Estimating BR with Varying CoFPFs} \subsection{Changing a Single CoFPF} We first consider a simple case, in which a single CoFPF changes. Suppose the CoFPF of the $m$-th component changes from $\varphi_m$ to $\bar{\varphi}_m$\footnote{For simplicity, we use the notation $\bar{\varphi}_m$ to denote the new CoFPF of component $m$, which may have a new function form or new parameters $\boldsymbol{\eta}_m$.}, and the corresponding joint probability series changes from $g(Z)$ to $f(Z)$. 
Considering a sample cascading outage path generated with respect to $g(Z)$, i.e., $z^i \in Z_g$, we have \begin{small} \begin{equation} \label{eq15} \begin{array}{rcl} f(z^i)&=&{\prod\limits_{j=0}^{n^i-1}{f_{j+1}(x_{j+1}^i|x_{j}^i)}}\\ &=&{\prod\limits_{j=0}^{n^i-1}{\left[\prod\limits_{k \in {F^m( x_{j}^i) }} {{\varphi _k}({x_j^i})}\cdot \prod\limits_{k \in {\bar{F}^m( x_{j}^i )}} {(1 - {\varphi _{k}}({x_j^i}))}\right]} }\cdot \Gamma(\bar{\varphi}_{m},z^i) \end{array} \end{equation} \end{small} where \begin{small} \begin{equation} \label{eq7} \Gamma(\bar{\varphi}_{m},z^i) = \left\{ {\begin{array}{*{20}{c}} \prod\limits_{j=0}^{n_m^i-1}{(1 - {\bar{\varphi} _m}({x_j^i}))}&:& \text{if}\quad n_m^i=n^i\\ {\bar{\varphi} _m}({x_{n_m^i}^i}) \prod\limits_{j=0}^{n_m^i-1}{(1 - {\bar{\varphi} _m}({x_j^i}))}&:& \text{otherwise}\\ \end{array}} \right. \end{equation} \end{small} Here, $n_m^i$ is the stage at which the $m$-th component experiences an outage; in particular, $n_m^i=n^i$ when the $m$-th component still works normally at the last stage of the cascading outage path. The component set $F^m( x_{j}^i ):=F( x_{j}^i )\setminus\{m\}$ consists of all the elements of $F( x_{j}^i )$ except $m$; similarly, $\bar{F}^m( x_{j}^i ):=\bar{F}( x_{j}^i )\setminus\{m\}$ consists of all the elements of $\bar{F}( x_{j}^i )$ except $m$. According to \eqref{eq:w_def}, the sample weight is \begin{equation} \label{eq16} w(z^i)=\frac{f(z^i)}{g(z^i)}=\frac{\Gamma(\bar{\varphi}_{m},z^i)}{\Gamma({\varphi}_{m},z^i)} \end{equation} Substituting \eqref{eq16} into \eqref{eq13}, the unbiased estimation of BR is \begin{equation} \label{eq17} \hat{R}_f(Y_0)=\frac{1}{N}\sum\limits_{i = 1}^N {\frac{\Gamma(\bar{\varphi}_{m},z^i)}{\Gamma({\varphi}_{m},z^i)}{h(z^i)}{\delta _{\{ {h(z^i)} \ge {Y_0}\} }}} \end{equation} Eq. 
\eqref{eq17} provides an unbiased estimation of BR after a single CoFPF change, using only the original samples. \subsection{Changing Multiple CoFPFs} We now consider the general case in which multiple CoFPFs change simultaneously. Invoking the expression of $f(z^i)$ in \eqref{eq15}, we have \begin{equation} \label{eq19} g(z^i)=\prod\limits_{k\in K}{\Gamma(\varphi_k,z^i)} \end{equation} \begin{equation} \label{eq20} f(z^i)=\prod\limits_{k\in K_c}{\Gamma(\bar{\varphi}_k,z^i)} \cdot \prod\limits_{k\in K_u}{\Gamma(\varphi_k,z^i)} \end{equation} where $K$ is the complete set of components in the system; $K_c$ is the set of components whose CoFPFs change; $K_u$ is the set of the remaining components, i.e., $K=K_c\cup K_u$; and $\bar{\varphi}_k$ is the new CoFPF of the $k$-th component. According to \eqref{eq19} and \eqref{eq20}, the sample weight is given by \begin{equation} \label{eq21} w(z^i)=\prod\limits_{k\in K_c}{ \frac{ {\Gamma(\bar{\varphi}_k,z^i)} }{\Gamma({\varphi}_k,z^i)}} \end{equation} Substituting \eqref{eq21} into \eqref{eq13} yields \begin{equation} \label{eq22} \hat{R}_f(Y_0)=\frac{1}{N}\sum\limits_{i = 1}^N { \left( \prod\limits_{k\in K_c}{ \frac{ {\Gamma(\bar{\varphi}_k,z^i)} }{\Gamma({\varphi}_k,z^i)}} {h(z^i)}{\delta _{\{ {h(z^i)} \ge {Y_0}\} }} \right) } \end{equation} Eq. \eqref{eq22} is a generalization of \eqref{eq17} and provides a mapping between the unbiased estimation of BR and the CoFPFs. When multiple CoFPFs change, the unbiased estimation of BR can be calculated directly from \eqref{eq22}. Since no additional cascading outage simulations are required and only algebraic calculations are involved, the computation is highly efficient. 
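As a concrete sketch of \eqref{eq7}, \eqref{eq21} and \eqref{eq13}, the per-component factor $\Gamma$, the sample weights, and the resulting reweighted risk estimate can be computed as follows; all CFP values and load-shedding numbers are illustrative placeholders rather than outputs of a real cascading model.

```python
def gamma(cfps, failed):
    """Per-component factor Gamma(phi_m, z): the product of the
    survival probabilities (1 - phi_m(x_j)) over the stages before
    the last listed one, times phi_m at that last stage if the
    component fails there, or (1 - phi_m) if it survives the path."""
    *alive, last = cfps
    g = 1.0
    for p in alive:
        g *= 1.0 - p
    return g * (last if failed else 1.0 - last)

def sample_weight(old_gammas, new_gammas):
    """Sample weight w(z^i): the product of new/old Gamma ratios
    over the components in K_c whose CoFPFs changed."""
    w = 1.0
    for g_old, g_new in zip(old_gammas, new_gammas):
        w *= g_new / g_old
    return w

def reweighted_br(weights, load_shedding, y0):
    """Reweighted unbiased risk estimate: the sample mean of
    w(z^i) * y^i * 1{y^i >= y0}."""
    n = len(load_shedding)
    return sum(w * y for w, y in zip(weights, load_shedding)
               if y >= y0) / n

# Toy example: component m sees CFPs 0.1 then 0.3 along a path and
# fails at the second stage; after the CoFPF change both values halve.
w1 = sample_weight([gamma([0.1, 0.3], True)],
                   [gamma([0.05, 0.15], True)])
```

With all weights equal to one, the plain estimator \eqref{eq12} is recovered.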
\subsection{Algorithm} To clearly illustrate the algorithm, we first rewrite \eqref{eq22} in matrix form as \begin{equation} \label{eq23} \hat{R}_f(Y_0)=\frac{1}{N}\mathbf{L}\mathbf{F_p} \end{equation} In \eqref{eq23}, $\mathbf{L}$ is an $N$-dimensional row vector with $\mathbf{L}_i={h(z^i)}{\delta _{\{ {h(z^i)} \ge {Y_0}\} }}/g(z^i)$, and $\mathbf{F_p}$ is an $N$-dimensional column vector with $\mathbf{F_p}_{i}=f(z^i)$. We further define two $N \times k_{a}$ matrices $\mathbf{A}$ and $\mathbf{B}$, where $k_{a}$ is the total number of components, $\mathbf{A}_{ik}=\Gamma(\varphi_k,z^i)$, and $\mathbf{B}_{ik}=\Gamma(\bar{\varphi}_k,z^i)$. According to \eqref{eq20}, we have \begin{equation} \label{eq24} \mathbf{F_p}_i = \prod\limits_{k\in K_c}{\mathbf{B}_{ik}} \cdot \prod\limits_{k\in K_u}{\mathbf{A}_{ik}} \end{equation} The algorithm is then given as follows. \noindent\rule[0.25\baselineskip]{0.5\textwidth}{1pt} \begin{itemize} \small \item {\bf Step 1: Generating samples.} Based on the system and blackout model in consideration, generate a set of i.i.d. samples. Record the sample sets $Z_g$ and $Y_g$, as well as the row vector $\mathbf{L}$. \item {\bf Step 2: Calculating $\mathbf{F_p}$.} Define the new CoFPFs for each component in $K_c$, and calculate $\mathbf{B}$ and $\mathbf{A}$. Then calculate $\mathbf{F_p}$ according to \eqref{eq24}. In particular, $\mathbf{A}$ can be saved during Step 1 instead of being recalculated. \item {\bf Step 3: Data analysis.} According to \eqref{eq23}, estimate the blackout risk for the changed CoFPFs. \end{itemize} \noindent\rule[0.25\baselineskip]{0.5\textwidth}{1pt} \subsection{Implications} The proposed method has important implications for blackout-related analyses. Two typical examples are the efficient estimation of blackout risk under extreme weather conditions and risk-based maintenance scheduling. 
For the first case, extreme weather conditions (e.g., typhoons) often last for a short time but affect a wide range of components, whose failure probabilities may increase remarkably. In this case, the proposed method can be applied to quickly evaluate the consequent risk in terms of the weather forecast. For the second case, since maintenance can considerably reduce CFPs, the proposed method allows an efficient identification of the most effective candidate devices in the system for mitigating blackout risk. Specifically, suppose that one considers the simultaneous maintenance of at most $d_m$ components. Then the number of possible scenarios is up to $\sum\nolimits_{d=1}^{d_m}{C({k_{a}},d)}$, which becomes intractable in a large practical system ($C({k_{a}},d)$ is the number of $d$-combinations of $k_a$ elements). Moreover, in each scenario, a great number of cascading outage simulations would be required to estimate the blackout risk, which is extremely time consuming or even practically impossible. In contrast, with the proposed method, one only needs to generate the sample set in the base scenario; the blackout risks for all other scenarios can then be obtained using only algebraic calculations, which is simple and computationally efficient. \section{Case Studies} \subsection{Settings} In this section, numerical experiments are carried out on the IEEE 300-bus system with a simplified OPA model that omits the slow dynamics \cite{r9,r17}. Its basic sampling steps are summarized as follows. \newline \noindent\rule[0.25\baselineskip]{0.5\textwidth}{1pt} \begin{itemize} \small \item {\bf Step 1: Data initialization.} Initialize the system data and parameters. In particular, define specific CoFPFs for each component. The initial state is $x_0$. 
\item {\bf Step 2: Sampling outages.} At stage $j$ of the $i$-th sampling run, simulate the component failures according to the system state $ x_j^i $ and the failure probabilities given by the CoFPFs. \item {\bf Step 3: Termination judgment.} If new failures occur in Step 2, recalculate the system state $ x_{j+1}^i $ at stage $j+1$ with the new topology, and go back to Step 2; otherwise, the current sampling run ends, yielding the samples $z^i = \{x_0, x_1^i, \cdots, x_j^i\} $ and $y^i$. If all $N$ simulation runs are completed, the sampling process ends. \end{itemize} \noindent\rule[0.25\baselineskip]{0.5\textwidth}{1pt} In this simulation model, the state variables $X_j$ are chosen as the ON/OFF statuses of all components and the power flow on the corresponding components at stage $j$. The random failures of transmission lines and power transformers are considered; their total number is $k_{a}=411$. The CoFPF we use is \begin{equation} \label{eq26} \varphi_k (s_k,\boldsymbol{\eta}_k)= \left\{ {\begin{array}{*{20}{c}} {p_{\min }^k}&:&{s^k < s_d^k}\\ {\frac{{p_{\max }^k - p_{\min }^k}}{{s_u^k - s_d^k}}(s^k - s_d^k) + p_{\min }^k}&:&{s_d^k \le s^k \le s_u^k}\\ {p_{\max }^k}&:&{s^k > s_u^k} \end{array}} \right. \end{equation} where $s^k$ is the load ratio of component $k$ and $\boldsymbol{\eta}_k=[p^k_{min},p^k_{max},s^k_d,s^k_u]$. Specifically, $p^k_{min}$ denotes the minimum failure probability of component $k$ when the load ratio is less than $s^k_d$, and $p^k_{max}$ denotes the maximum failure probability when the load ratio is larger than $s^k_{u}$. Usually $0<p^k_{min}<p^k_{max}<1$, which reflects the nonzero probability of hidden failures in protection devices. In particular, we use the following initial parameters to generate $Z_g$: $s_d^k=0.97$, $s_u^k=1.3$, $p^k_{max}=0.9995$, $p^k_{min} \sim U[0.002,0.006]$, $k\in K$. 
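The piecewise-linear CoFPF \eqref{eq26} translates directly into code. The sketch below uses the initial settings above, with an arbitrary $p^k_{min}$ value taken from the stated range $U[0.002,0.006]$ for illustration:

```python
def cofpf(s, p_min, p_max, s_d, s_u):
    """Piecewise-linear CoFPF: p_min below s_d, p_max above s_u,
    and linear interpolation between the two thresholds."""
    if s < s_d:
        return p_min
    if s > s_u:
        return p_max
    return (p_max - p_min) / (s_u - s_d) * (s - s_d) + p_min

# Initial-parameter example: s_d = 0.97, s_u = 1.3, p_max = 0.9995,
# and p_min = 0.004 drawn from U[0.002, 0.006] (illustrative pick).
params = dict(p_min=0.004, p_max=0.9995, s_d=0.97, s_u=1.3)
```

A lightly loaded component thus keeps its small hidden-failure probability, while an overloaded one almost surely trips.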
It is worth noting that the simulation process with the specific settings above is a simple way to emulate the propagation of cascading outages; we use it only to demonstrate the proposed method, which applies equally when more realistic models and parameters are adopted. \subsection{Unbiasedness of the Estimation of Blackout Risk} In this case, we show that our method achieves an unbiased estimation of blackout risk. We first carry out 100,000 cascading outage simulations with the initial parameters, obtaining the sample set $Z_g$ and the related $\mathbf{L}$. We then randomly choose a set $K_c$ of two components and modify the parameters $\boldsymbol{\eta}_k$ of their CoFPFs to $\bar{p}^k_{min}=p^k_{min}-0.001,k \in K_c$. In terms of the new settings and various load shedding levels, we estimate the blackout risks by using \eqref{eq23}. For comparison, we re-generate 100,000 samples under the new settings and estimate the BRs by using \eqref{eq12}. The results are given in Fig. \ref{fig.1}. \begin{figure}[!t] \centering \includegraphics[width=0.95\columnwidth]{f1.eps} \caption{BR Estimation with sampling and calculation} \label{fig.1} \end{figure} Fig. \ref{fig.1} shows that the blackout risk estimations obtained with the two methods are almost identical, which indicates that the proposed method achieves an unbiased estimation of BR. Note that our method requires no additional simulations, making it much more efficient than traditional MCS and scalable to large-scale systems. \subsection{Parameter Changes in CoFPFs} In this case, we test the performance of our method when the parameters $\boldsymbol{\eta}_k$ of some CoFPFs change. The sample set $Z_g$ and the related $\mathbf{L}$ are based on the 100,000 samples generated with the initial parameters. We consider two different settings: 1) $Y_0=0$; 2) $Y_0=1500$. Statistically, we obtain $\hat{R}_g(0)=54.2$ and $\hat{R}_g(1500)=0.43$, respectively. 
Since there are 411 components in the system, we consider $k_a=411$ different scenarios. Specifically, we change each CoFPF individually by letting $\bar{p}^k_{min}=p^k_{min}-0.001$. Using the proposed method, the blackout risks can be estimated quickly. Some results are presented in Table \ref{t1} ($Y_0=0$\,MW) and Table \ref{t2} ($Y_0=1500$\,MW). In particular, the average computational time for the unbiased estimation of BR in each scenario is 0.004\,s for Table \ref{t1} and 0.00001\,s for Table \ref{t2}. \begin{table}[htp]\footnotesize \caption{BR estimation when parameters of CoFPFs change $(Y_0=0)$ } \label{t1} \centering \begin{tabular}{c|ccc} \hline \hline Component & $p^k_{min}$ & Blackout &Risk reduction \\ index&$(\times 10^{-3})$&risk& ratio$(\%)$ \\ \hline 204 &5.8&50.97&6.0\\ 312 &5.1&51.93&4.2\\ 372 &3.0&53.43&1.5\\ 114 &4.1&53.46&1.4\\ 307 &3.7&53.53&1.3\\ 410 &3.6&53.68&1.0\\ 117 &4.0&53.69&1.0\\ 63 &2.3&53.70&1.0\\ 259 &3.0&53.90&0.6\\ 126 &2.7&53.92&0.5\\ ...&...&...&...\\ Mean value &4.0&54.13&0.2\\ \hline \hline \end{tabular} \end{table} \begin{table}[htp]\footnotesize \caption{BR estimation when parameters of CoFPFs change $(Y_0=1500)$ } \label{t2} \centering \begin{tabular}{c|ccc} \hline \hline Component & $p^k_{min}$ & Blackout &Risk reduction \\ index&$(\times 10^{-3})$&risk&ratio$(\%)$\\ \hline 259 &3.0&0.358&16.7\\ 254 &4.3&0.380&11.4\\ 204 &5.8&0.382&11.0\\ 312 &5.1&0.403&6.0\\ 93 &3.0&0.403&6.0\\ 325 &2.2&0.404&5.9\\ 48 &3.0&0.404&5.9\\ 378 &2.6&0.405&5.7\\ 116 &2.2&0.406&5.5\\ 305 &2.2&0.407&5.3\\ ...&...&...&...\\ Mean value &4.0 &0.427&0.5\\ \hline \hline \end{tabular} \end{table} Table \ref{t1} shows the ten scenarios with the lowest BR values, as well as the average risk over all scenarios. Whereas reducing the failure probabilities of certain components can effectively mitigate the blackout risk, reducing those of others has little impact. 
For example, decreasing $p^k_{min}$ of component $\#204$ results in a $6.0\%$ reduction of blackout risk, while the average risk reduction ratio over all scenarios is only $0.2\%$. This result implies that there may exist some critical components that play a central role in the propagation of cascading outages and in promoting load shedding. Our method provides a scalable way to efficiently identify those components, which may facilitate effective mitigation of blackout risk. When considering $Y_0=1500$\,MW, which is associated with serious blackout events, it is interesting to see that the most influential components in Table \ref{t2} differ from those in Table \ref{t1}. In other words, the impact of a CoFPF varies with the load shedding level, which demonstrates the complex relationship between BR and CoFPFs. To better show this point, we choose four typical components ($\#312$, $\#372$, $\#307$ and $\#259$) and calculate the risk reduction ratios with respect to a series of load shedding levels. As shown in Fig. \ref{fig.3}, reducing the failure probability of component $\#259$ has little influence on small- to medium-size blackouts, while it considerably reduces the risk of large blackouts. This implies that the component contributes significantly to the propagation of cascading outages. In contrast, some other components, such as $\#307$ and $\#372$, yield similar risk reduction ratios across load shedding levels; they are likely to cause certain direct load shedding, but have little influence on the propagation of cascading outages. For component $\#312$, the curve of the risk reduction ratio exhibits a multimodal feature, meaning that changes of such a CoFPF may have a much larger influence on BR at some load shedding levels than at others. All these results demonstrate the highly complicated relationship between BR and CoFPFs. Our method provides a computationally efficient tool to conveniently analyze such relationships in practice. 
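For clarity, the risk reduction ratios reported in the tables can be reproduced from the base-case risks; the sketch below assumes the ratio is defined as the relative decrease with respect to $\hat{R}_g$, which matches the tabulated values (e.g., the $6.0\%$ entry for component $\#204$ with $\hat{R}_g(0)=54.2$):

```python
def risk_reduction_ratio(r_base, r_new):
    """Relative blackout-risk reduction, in percent, of a scenario
    with risk r_new compared with the base-case risk r_base
    (assumed definition; it reproduces the tabulated values)."""
    return 100.0 * (r_base - r_new) / r_base

# Component #204 at Y0 = 0: base risk 54.2, new risk 50.97 (Table I).
ratio_204 = risk_reduction_ratio(54.2, 50.97)
```

The same one-liner reproduces the $Y_0=1500$ entries, e.g., component $\#259$ in Table II.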
\begin{figure}[htp] \centering \includegraphics[width=1\columnwidth]{f3.eps} \caption{Ratio of risk reduction under different load shedding levels} \label{fig.3} \end{figure} \begin{table}[htp]\footnotesize \caption{Risk reduction ratio when parameters of CoFPFs change $(\%)$ } \label{t3} \centering \begin{tabular}{cc|cc} \hline \hline Component&Risk reduction ratio&Component&Risk reduction ratio\\ index&$(Y_0=0)$&index&$(Y_0=1500)$\\ \hline (204,312) &10.2&(204,259) &25.5\\ (204,372) &7.5 &(254,259) &24.5\\ (114,204) &7.4 &(93,259) &22.6\\ (204,307) &7.3 &(259,312) &22.2\\ (63,204) &7.0 &(259,378) &21.8\\ (204,410) &7.0 &(259,305) &21.8\\ (117,204) &7.0 &(259,325) &21.6\\ (204,259) &6.6 &(116,259) &21.3\\ (126,204) &6.6 &(40,259) &21.1\\ (204,301) &6.5 &(48,259) &21.1\\ ... &... & ... &... \\ Mean value&0.3 &Mean value&1.0 \\ \hline \hline \end{tabular} \end{table} We then decrease $p^k_{min}$ of two CoFPFs simultaneously. $Z_g$ and $\mathbf{L}$ are the same as before, and the number of scenarios is $C(k_{a},2)=C(411,2)$. The calculated unbiased estimations with respect to the two values of $Y_0$ are shown in Table \ref{t3}. It is no surprise that the risk reduction ratios are more remarkable than those in Table \ref{t1} and Table \ref{t2}, where only one CoFPF decreases. However, it should be noted that the pairs of components in Table \ref{t3} are not simple combinations of the ones shown in Tables \ref{t1} and \ref{t2}. The reason is that the relationship between BR and CFPs is complicated and nonlinear, which is exactly what causes the analytical difficulties demonstrated in Section III. \subsection{Form Changes in CoFPFs} In this case, we show the performance of the proposed method when the form of the CoFPFs changes. 
The CoFPF of component $k$ is modified into \begin{equation} \label{eq27} \bar{\varphi}_k (s_k,\boldsymbol{\eta}_k) = \left\{ {\begin{array}{*{20}{c}} {\max(\varphi_k,ae^{bs^k})}&:&{s^k < s_u^k}\\ {p_{\max }^k}&:&{s^k \ge s_u^k} \end{array}} \right. \end{equation} where $a=p_{\min }^k$ and $b=\frac{\ln(p_{\max }^k/p_{\min }^k)}{s_u^k}$ are the corresponding parameters. $\varphi_k$, $p_{\min }^k$, $p_{\max }^k$, $s_u^k$, $s_d^k$ are the same as in \eqref{eq26}. Obviously, the failure probabilities of individual components are amplified in this case. Similar to the previous cases, we use the proposed method to calculate the unbiased estimations of blackout risk when some CoFPFs change from \eqref{eq26} to \eqref{eq27}. The results of our method and MCS are compared in Fig. \ref{fig.4}. In addition, the blackout risk in several typical scenarios is shown in Tables \ref{t5} and \ref{t7}. These results empirically confirm the efficacy of our method. \begin{figure}[!t] \centering \includegraphics[width=0.95\columnwidth]{f4.eps} \caption{Estimation of the blackout risk with sampling and calculation} \label{fig.4} \end{figure} \begin{table}[!t]\footnotesize \caption{Risk increase ratio when form of CoFPF changes $(\%)$ } \label{t5} \centering \begin{tabular}{cc|cc} \hline \hline Component&Risk increase ratio&Component&Risk increase ratio\\ index&$(Y_0=0)$&index&$(Y_0=1500)$\\ \hline 204 &12.3& 204 &22.1\\ 312 &5.5 & 259 &21.0\\ 259 &0.8 & 254 &16.2\\ 264 &0.7 & 312 &7.5\\ 372 &0.6 & 264 &6.8\\ ... &... & ... &... 
\\ Mean value&0.1 &Mean value&0.2 \\ \hline \hline \end{tabular} \end{table} \begin{table}[!t]\footnotesize \caption{Risk increase ratio when forms of CoFPFs change $(\%)$ } \label{t7} \centering \begin{tabular}{cc|cc} \hline \hline Component&Risk increase ratio&Component&Risk increase ratio\\ index&$(Y_0=0)$&index&$(Y_0=1500)$\\ \hline (204,312) &17.8 &(204,259) &48.7\\ (204,259) &13.1 &(254,259) &43.5\\ (204,264) &12.9 &(204,254) &42.3\\ (204,372) &12.9 &(204,312) &30.7\\ (204,307) &12.8 &(201,204) &30.2\\ ... &... & ... &... \\ Mean value&0.2 &Mean value&0.5 \\ \hline \hline \end{tabular} \end{table} \subsection{Computational Efficiency} We carry out all tests on a computer with an Intel Xeon E5-2670 (2.6GHz) and 64GB of memory. It takes 107 minutes to generate $Z_g$ (100,000 samples) with respect to the initial parameters. Then we enumerate all $\sum\nolimits_{d=1}^{2}{C({k_{a}},d)}=84666$ scenarios. In each scenario, the parameters or forms of one or two CoFPFs are changed (the cases in Parts C and D of Section V, respectively). Then we calculate the unbiased estimations of BR with $Y_0=0$ and $Y_0=1500$ using the proposed method. The complete computational times are given in Tables \ref{t6} and \ref{t8}. It is worth noting that the computational times for risk estimation in the tables are the total times for enumerating \emph{all} 84666 scenarios. It takes only about 10 minutes to calculate $\mathbf{B}$, and an additional 10 minutes to compute the BRs. In contrast, the MCS method would take about $107\times 84666 \approx 9,000,000$ minutes, which is practically intractable. 
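The scenario count and the cost comparison above can be reproduced directly; the minute figures are the ones quoted in the text, used here as rough per-scenario constants.

```python
from math import comb

# All scenarios in which the parameters or forms of one or two CoFPFs
# change, out of k_a = 411 candidate components: C(411,1) + C(411,2).
k_a = 411
n_scenarios = sum(comb(k_a, d) for d in (1, 2))
print(n_scenarios)  # 84666, matching the total quoted in the text

# Re-sampling 100,000 cascades per scenario (~107 minutes per run)
# versus one shared sample set plus re-weighting (~20 minutes total).
mcs_minutes = 107 * n_scenarios   # roughly 9 million minutes
reweighting_minutes = 20
```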
\begin{table}[!t]\footnotesize \caption{Computing time when a parameter or form changes in a CoFPF, $Y_0=0$ (Min.)} \label{t6} \centering \begin{tabular}{c|ccc} \hline \hline & Calculate $\mathbf{B}$ & Estimate risk $(Y_0=0)$ & Total\\ \hline Parameter change &10.0&9.2 &19.2\\ Form change &10.7&9.3 &20.0\\ \hline \hline \end{tabular} \end{table} \begin{table}[!t]\footnotesize \caption{Computing time when a parameter or form changes in a CoFPF, $Y_0=1500$ (Min.)} \label{t8} \centering \begin{tabular}{c|ccc} \hline \hline & Calculate $\mathbf{B}$ & Estimate risk $(Y_0=1500)$ & Total\\ \hline Parameter change &10.0&0.01 &10.01\\ Form change &10.7&0.01 &10.71\\ \hline \hline \end{tabular} \end{table} \section{Conclusion with Remarks} In this paper, we propose a sample-induced semi-analytic method to efficiently quantify the influence of CFP on BR. Theoretical analyses and numerical experiments show that: \begin{enumerate} \item With appropriate formulations of cascading outages and a generic form of CoFPFs, the relationship between CoFPFs and BR is exactly characterized and can be effectively estimated based on samples. \item Taking advantage of the semi-analytic expression relating CoFPFs to the unbiased estimation of BR, the BR can be efficiently estimated when CoFPFs change, and the relationship between CFP and BR can be analyzed correspondingly. \end{enumerate} Numerical experiments reveal that the long-range correlation among component failures during cascading outages is highly complicated; it is often considered closely related to self-organized criticality and power-law distributions. Our results serve as a step towards a scalable and efficient tool for further understanding failure correlations and the mechanism of cascading outages. Our ongoing work includes applying the proposed method to maintenance planning and to risk evaluation under extreme weather conditions.
\section{Introduction} \label{sec:intro} The interaction between solid objects and a surrounding fluid is at the heart of many fluid mechanics problems stemming from various fields such as physics, engineering and biology. Among other factors, the behaviour of such fluid-structure interaction systems is critically determined by the boundary conditions at the surface of the solid, but also by the geometry of the solid itself, or, more simply said, its shape. In this context, the search for some notion of shape optimality in fluid-structure interaction is widespread, with the objective of understanding which shapes allow for an optimal response from the fluid, typically involving energy-minimising criteria \citep{MR2567067}. At low Reynolds number, a regime occurring in particular at the microscopic scale where viscosity dominates inertial effects, fluid dynamics is governed by the Stokes equations. These equations are linear and time-reversible -- a remarkable specificity compared to the more general Navier-Stokes equations, which makes fluid-structure interaction and locomotion at the microscopic scale a peculiar world \citep{purcell1977life}. In particular, when considering the \textit{resistance problem} of a rigid body moving through a fluid in the Stokes regime, a linear relationship holds between the motion of the body (translation and rotation) and the effects (forces and torques) it experiences. This relationship is materialised by the well-known \textit{grand resistance tensor}, which depends only, once a reference frame has been set, on the shape of the object and not on its motion or on the boundary conditions on the fluid velocity. In other words, the hydrodynamic resistance properties of an object at low Reynolds number are intrinsic, contained in a finite number of parameters, which in turn are determined by its shape only. 
The question of which shapes possess maximal or minimal values for these resistance parameters then naturally arises, both from a theoretical fluid mechanics perspective and as a potential way to explain the sometimes intriguing geometries of microorganisms \citep{yang2016staying,van2017determinants,lauga2020fluid, ryabov2021shape}. Optimal shapes for resistance problems have been tackled in previous studies. In particular, the minimal drag problem, which seeks the shape of fixed volume opposing the least hydrodynamic resistance to translation in a set direction, is well known and was solved in the 1970s, both analytically \citep{pironneau1973optimum} and numerically \citep{bourot1974numerical}. The characteristic rugby-ball shape resulting from this optimisation problem has since been used as a reference for many later works, among which we can cite the adaptations to two-dimensional and axisymmetric flows in \citet{richardson1995optimum,srivastava2011optimum}, to a linear elastic medium in \citet{zabarankin2013minimum}, and to minimal drag at fixed surface area in \citet{montenegro2015other}. These studies rely on symmetry properties of the minimal drag problem, and such methods do not immediately extend to the optimisation of the generic resistance problem, associated with other entries of the resistance tensor. Shape optimisation in microhydrodynamics has also been widely carried out in the context of microswimmer locomotion. Notable works include \citet{quispe2019geometry}, where the best pitch and cross-section for efficient magnetic swimmers are discussed numerically and experimentally, and \citet{fujita2001optimum,ishimoto2016hydrodynamic, berti2021shapes}, where parametric optimisation is conducted to find the best geometry for flagellated microswimmers. Efficient shapes for periodic swimming strokes and ciliary locomotion are addressed in \citet{vilfan2012optimal,daddi2021optimal}. 
However, in all of these studies, restrictive assumptions are made on the possible shapes, with the optimisation being carried out on a few geometrical parameters rather than on a general space of surfaces in 3D. Another approach, which allows the exploration of a wider class of shapes than parametric optimisation, is based on the use of shape derivatives: a generalisation of the notion of derivative, which yields a perturbation function of a domain in a descent direction. This method, however, requires caution regarding the regularity assumptions on the boundaries of the domains involved. Other popular methods for shape optimisation in structural mechanics include density methods, in which the characteristic function of a domain is replaced by a density function -- we mention in particular the celebrated SIMP method \citep{bendsig,borvall,evgrafov} -- and the level set method \citep{osher1988fronts,sethian2000structural,ajt,wang}, which can handle changes of topology. Obtaining efficient numerical algorithms to apply these analytical methods to find optimal shapes is also challenging: one must ensure a decrease of the objective function while preventing the numerical representation from becoming invalid (for example because of problems related to the mesh or to changes in the topology of the shapes considered). In the context of low-Reynolds-number fluid mechanics, variational techniques and shape derivatives are notably used by \citet{Walker2013} and \citet{keaveny2013optimization} to carry out the optimisation of the torque-speed mobility coefficient in the context of magnetically propelled swimmers, for a shape constrained to be a slender curved body, yielding helicoidal folding. However, these studies also focus on a restricted class of shapes, characterised by a one-dimensional curve, and on a single entry of the resistance tensor or an energy dissipation criterion. 
To the best of the authors' knowledge, a systematic theoretical or numerical study of the coefficients of the grand resistance tensor has not yet been carried out. Hence, as the principal aim of this paper, we provide a general shape optimisation framework for this type of problem, and show that the optimisation of any entry of the resistance tensor amounts to a single, simple formula for the shape derivative, which depends on the solutions of two Stokes problems whose boundary conditions depend on the considered entry. We then describe an algorithm to numerically implement the shape optimisation and display some illustrative examples. \section{Problem statement} \label{sec:problem} \subsection{Resistance problem for a rigid body in Stokes flow} We consider a rigid object set in motion in an incompressible fluid with viscosity $\mu$ at low Reynolds number, with coordinates $\vec{x}$ expressed in the fixed lab frame $(O,\vec{e}_1,\vec{e}_2,\vec{e}_3)$, as shown on the left panel of \Cref{fig:scheme}. The object's surface is denoted by $\mathscr{S}$ and we assume that the fluid is contained in a bounded domain $\mathscr{B}$, thus occupying a volume $\mathscr{V}$ having $\partial \mathscr{V}= \mathscr{S}\cup \partial \mathscr{B}$ as boundary. Such a boundedness assumption ensures that the solutions of the fluid equations are well-defined and that the computations performed on them throughout the paper are rigorously justified (see Appendix \ref{append:diff} and the textbooks referred to therein), although we assume the outer boundary $\partial \mathscr{B}$ to be sufficiently far from the object for its effect on hydrodynamic resistance to be negligible. 
At the container boundary $\partial \mathscr{B}$, we consider a uniform, linear background flow $\vec{U}^\infty$, broken down into a translational velocity vector $\vec{Z}^{\infty}$, a rotational velocity vector $\vec{\Omega}^{\infty}$ and a (second-rank) rate-of-strain tensor $\vec{E}^{\infty}$ as follows: \begin{equation}\label{def:Uinfty} \vec{U}^\infty=\vec{Z}^{\infty} + \vec{\Omega}^{\infty} \times \vec{x} + \vec{E}^{\infty} \vec{x}. \end{equation} Similarly, the object's rigid motion velocity field is simply described by \begin{equation}\label{def:U} \vec{U} = \vec{Z} + \vec{\Omega} \times \vec{x}, \end{equation} with $\vec{Z}$ and $\vec{\Omega}$ denoting its translational and rotational velocities. Having set as such the velocities at the boundary of $\mathscr{V}$ defines a boundary value problem for the fluid velocity field $\vec{u}$ and pressure field $p$, which satisfy the Stokes equations: \begin{equation} \left \{ \begin{array}{ll} \mu \Delta \vec{u} - \nabla p = \vec{0} & \text{in }\mathscr{V}, \\ \nabla \vec{\cdot} \vec{u} = 0& \text{in }\mathscr{V}, \\ \vec{u} = \vec{U} & \text{on } \mathscr{S}, \\ \vec{u} = \vec{U}^{\infty} & \text{on } \partial \mathscr{B}. \end{array} \right . \label{eq:stokes} \end{equation} From the solution of this Stokes problem with set boundary velocity, one can then calculate the hydrodynamic drag (force $\vec{F}^h$ and torque $\vec{T}^h$) exerted by the moving particle on the fluid, \textit{via} the following surface integral formulae over $\mathscr{S}$: \begin{align} \vec{F}^h & = - \int_{\mathscr{S}} \vec{\sigma}[\vec{u},p] \vec{n} \mathrm{d} \mathscr{S}, \\ \vec{T}^h & = - \int_{\mathscr{S}} \vec{x} \times \left ( \vec{\sigma}[\vec{u},p] \vec{n} \right ) \mathrm{d} \mathscr{S}. \label{eq:FT} \end{align} In \Cref{eq:FT}, $\vec{n}$ is the normal to $\mathrm{d} \mathscr{S}$ pointing outward from the body (see Fig. 
\ref{fig:scheme}), and $\vec{\sigma}$ is the stress tensor, defined as \begin{equation} \vec{\sigma} [\vec{u},p] = - p \vec{I} + 2 \mu \vec{e}[\vec{u}], \end{equation} in which $\vec{I}$ denotes the identity tensor and $\vec{e}[\vec{u}]$ is the rate-of-strain tensor, given by \begin{equation} \vec{e}[\vec{u}] = \frac{1}{2} \left ( \nabla \vec{u} + (\nabla \vec{u} )^T \right ). \end{equation} Finding in this way the hydrodynamic drag for a given velocity field is called the \textit{resistance problem} -- as opposed to the \textit{mobility problem}, in which one seeks the velocity generated by a given force and torque profile on the boundary. \begin{figure} \begin{center} \begin{overpic}[height=5cm]{Fig1.png} \put(26,12){$O$} \end{overpic} $\quad$ \includegraphics[height=5cm]{Fig2.png} \end{center} \caption{Problem setup: a rigid body in a Stokes flow. A diagram of the general Stokes problem \eqref{eq:stokes} is shown on the left of the figure. The panels on the right-hand side show examples of resistance problems associated with selected entries of the grand resistance tensor. For instance, for $K_{11}$ (top left), one sets the motion of the object to a unitary translation in the direction $\vec{e}_1$, and $K_{11}$ may then be obtained as the component along $\vec{e}_1$ of the total drag force $\vec{F}$ exerted on the object. The other coefficients are obtained analogously, using the appropriate boundary conditions and drag force or torque shown on the figure. 
\label{fig:scheme}} \end{figure} \subsection{Grand resistance tensor} In addition to Equation \eqref{eq:FT}, a linear relationship between $(\vec{F}^h,\vec{T}^h)$ and $(\vec{U},\vec{U}^{\infty})$ can be derived from the linearity of the Stokes equations \citep[see][Chapter 5]{Kim2005}: \begin{equation} \begin{pmatrix} \vec{F}^h \\ \vec{T}^h \\ \vec{S} \end{pmatrix} = \mathsfbi{R} \begin{pmatrix} \vec{Z} - \vec{Z}^{\infty} \\ \vec{\Omega}- \vec{\Omega}^{\infty} \\ - \vec{E}^{\infty} \end{pmatrix} = \begin{pmatrix} \mathsfbi{K} & \mathsfbi{C} & \vec{\Gamma} \\ \tilde{\mathsfbi{C}} & \mathsfbi{Q} & \vec{\Lambda} \\ \tilde{\vec{\Gamma}} & \tilde{\vec{\Lambda}} & \mathsfbi{M} \end{pmatrix} \begin{pmatrix} \vec{Z} - \vec{Z}^{\infty} \\ \vec{\Omega} - \vec{\Omega}^{\infty} \\ - \vec{E}^{\infty} \end{pmatrix}. \label{eq:GRT} \end{equation} The stresslet $\vec{S}$, defined as \begin{equation} \vec{S} = \frac{1}{2} \int_{\mathscr{S}} (\vec{x} \vec{\cdot} \vec{\sigma}[\vec{u},p] \vec{n}^T + \vec{\sigma}[\vec{u},p] \vec{n} \vec{\cdot} \vec{x}^T ) \mathrm{d} \mathscr{S}, \end{equation} appears on the left-hand side of Equation \eqref{eq:GRT} and is displayed here for the sake of completeness, though we will not be dealing with it in the following. The tensor $\mathsfbi{R}$, called the grand resistance tensor, is symmetric and positive definite. As seen in Equation \eqref{eq:GRT}, it may be written as the concatenation of nine tensors, each accounting for one part of the force-velocity coupling. The second-rank tensors $\mathsfbi{K}$ and $\mathsfbi{C}$ represent the coupling between the hydrodynamic drag force and, respectively, the translational and rotational velocities. Similarly, $\tilde{\mathsfbi{C}}$ and $\mathsfbi{Q}$ are second-rank tensors coupling the hydrodynamic torque with the translational and rotational velocities. Note that, by symmetry of $\mathsfbi{R}$, $\mathsfbi{K}$ and $\mathsfbi{Q}$ are symmetric and one has $\mathsfbi{C}^T = \tilde{\mathsfbi{C}}$. 
Further, $\vec{\Gamma}$, $\tilde{\vec{\Gamma}}$, $\vec{\Lambda}$, and $\tilde{\vec{\Lambda}}$ are third-rank tensors accounting for couplings involving either the shear part of the background flow or the stresslet, and $\mathsfbi{M}$ is a fourth-rank tensor representing the coupling between the shear and the stresslet, with similar properties deduced from the symmetry of $\mathsfbi{R}$. An important property of the grand resistance tensor is that it is independent of the boundary conditions associated with a given resistance problem. In other words, for a given viscosity $\mu$ and once a system of coordinates has been fixed, the grand resistance tensor $\mathsfbi{R}$ depends only on the shape of the object, \textit{i.e.} its surface $\mathscr{S}$. A change of coordinates or an affine transformation applied to $\mathscr{S}$ modifies the entries of $\mathsfbi{R}$ through standard linear transformations. For that reason, here we fix a reference frame once and for all and carry out the shape optimisation within this frame; this means in particular that we distinguish between shapes that do not coincide in the reference frame, even if they are identical up to an affine transformation. With these coordinate considerations aside, we can argue that the grand resistance tensor constitutes an intrinsic characteristic of an object, and all the relevant information about the hydrodynamic resistance of a certain shape is carried by the finite number of entries of $\mathsfbi{R}$. While these entries can be obtained by direct calculation in the case of simple geometries, in most cases their value must be determined by solving a particular resistance problem and using Equation \eqref{eq:FT}. For example, to determine $K_{ij}$, one can set $\vec{U}$ to a unit translation along $\vec{e}_j$, \textit{i.e.} $\vec{U} = \vec{e}_j$. 
Then Equation \eqref{eq:GRT}, combined with \eqref{eq:FT}, gives \begin{equation} K_{ij} = \vec{F}^h \vec{\cdot} \vec{e}_i = - \int_{\mathscr{S}} (\vec{\sigma}[\vec{u},p] \vec{n})\vec{\cdot} \vec{e}_i \mathrm{d} \mathscr{S}. \end{equation} The same strategy can be applied to the other entries of $\mathsfbi{R}$, setting appropriate boundary conditions $\vec{U}$ and $\vec{U}^{\infty}$ in the Stokes equations and calculating the appropriate projection of $\vec{F}^h$ or $\vec{T}^h$ along one of the basis vectors. Figure \ref{fig:scheme} displays a few illustrative examples. In fact, let us define the generic quantity $J_{\vec{V}}$ as the surface integral \begin{equation} J_{\vec{V}}(\mathscr{S}) = - \int_{\mathscr{S}} (\vec{\sigma}[\vec{u},p] \vec{n})\vec{\cdot} \vec{V} \mathrm{d} \mathscr{S}. \label{eq:J} \end{equation} Then, judicious choices of $\vec{U}$, $\vec{U}^{\infty}$ and $\vec{V}$, summarised in Table \ref{table:1}, allow one to obtain any coefficient of the grand resistance tensor from formula \eqref{eq:J}. 
\begin{table} \begin{center} \def~{\hphantom{0}} \begin{tabular}{c c c c} $\qquad J_{\vec{V}} \qquad$ & $\qquad \vec{U} \qquad$ & $\qquad \vec{V} \qquad$ & $\qquad \vec{U}^{\infty} \qquad$ \\[3pt] $K_{ij}$ & $\vec{e}_j$ & $\vec{e}_i$ & $\vec{0}$ \\[2pt] $C_{ij}$ & $\vec{e}_j \times \vec{x}$ & $\vec{e}_i$ & $\vec{0}$ \\[2pt] $\tilde{C}_{ij}$ & $\vec{e}_j$ & $\vec{e}_i \times \vec{x}$ & $\vec{0}$\\[2pt] $Q_{ij}$ & $\vec{e}_j \times \vec{x}$ & $\vec{e}_i \times \vec{x}$ & $\vec{0}$\\[2pt] $\Gamma_{ijk}$ & $\vec{0}$ & $\vec{e}_i$ & $x_k \vec{e}_j$ \\[2pt] $\Lambda_{ijk}$ & $\vec{0}$ & $\vec{e}_i \times \vec{x}$ & $x_k \vec{e}_j$ \\ \end{tabular} \caption{Entries of the grand resistance tensor associated with $J_{\vec{V}}$ with respect to the choice of $\vec{U}$, $\vec{V}$ and $\vec{U}^{\infty}$.} \label{table:1} \end{center} \end{table} Of note, for the determination of the coefficients lying on the diagonal of $\mathsfbi{R}$, another relation, involving power instead of hydrodynamic force, is sometimes found \citep{Kim2005}. Indeed, the energy dissipation rate $\epsilon$ is defined as $\epsilon = \int_{\mathscr{V}} 2 \mu \vec{e}[\vec{u}] : \vec{e}[\vec{u}] \mathrm{d} \mathscr{V}$. In the case of the translation $\vec{U}$ of a rigid body, one also has $\epsilon = \vec{F} \vec{\cdot} \vec{U}$. Then, to determine for instance $K_{11}$, one sets $\vec{U} = \vec{e}_1$ as described above and obtains \begin{equation} K_{11} = \int_{\mathscr{V}} 2 \mu \vec{e}[\vec{u}] : \vec{e}[\vec{u}] \mathrm{d}\mathscr{V}. \end{equation} This last expression yields in particular the important property that the diagonal entries of $\mathsfbi{R}$ are positive. Nonetheless, in the following we will prefer the use of formula \eqref{eq:J}, which conveniently works for both diagonal and off-diagonal entries. 
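As a concrete illustration of these coefficients, the sphere is the one shape for which the blocks $\mathsfbi{K}$, $\mathsfbi{C}$ and $\mathsfbi{Q}$ are classical closed-form results ($6\pi\mu a\mathsfbi{I}$, $\mathsfbi{0}$ and $8\pi\mu a^3\mathsfbi{I}$ respectively); the sketch below (with an assumed helper name) encodes them and can serve as a sanity check for any numerical evaluation of $J_{\vec{V}}$.

```python
import numpy as np

def sphere_resistance_blocks(a, mu=1.0):
    """Closed-form translation (K), coupling (C) and rotation (Q) blocks
    of the grand resistance tensor for a sphere of radius a centred at
    the origin: Stokes' classical results."""
    I3 = np.eye(3)
    K = 6.0 * np.pi * mu * a * I3        # drag force / translation
    C = np.zeros((3, 3))                 # vanishes by symmetry
    Q = 8.0 * np.pi * mu * a ** 3 * I3   # drag torque / rotation
    return K, C, Q

# Resistance problem for a unit translation U = e_1: the drag force is
# F = K e_1, whose first component is K_11 = 6*pi*mu*a.
K, C, Q = sphere_resistance_blocks(a=1.0, mu=1.0)
F = K @ np.array([1.0, 0.0, 0.0])
```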
\subsection{Towards a shape optimisation framework} Seeing $J_{\vec{V}}$ as a functional depending on the surface $\mathscr{S}$ of the object, we will now seek to optimise the shape $\mathscr{S}$ with $J_{\vec{V}}$ as the objective function; in other terms, we want to optimise one of the parameters accounting for the hydrodynamic resistance of the object. As is usually done in shape optimisation, it is relevant in our framework to add a constraint to the optimisation problem. This is motivated both by our wish to obtain relevant and non-trivial shapes (ruling out, for instance, a shape occupying the whole computational domain) and by the need to model manufacturing constraints. Multiple choices of constraint are possible. In the following, we will focus on the standard choice $|\mathscr{V}|=V_0$ for some positive parameter $V_0$, where $\mathscr{V}$ stands for the domain enclosed by $\mathscr{S}$ and $|\mathscr{V}|$ denotes its volume. The {\it generic} resulting shape optimisation problem we will tackle in what follows hence reads: \begin{equation}\label{SOPgen-min} \min_{\mathscr{S}\in \mathscr{O}_{ad,V_0}} J_{\vec{V}}(\mathscr{S}), \end{equation} where $\mathscr{O}_{ad,V_0}$ denotes the set of all connected domains $\mathscr{V}$ enclosed by $\mathscr{S}$ such that $|\mathscr{V}|=V_0$. Of note, when performing optimisation in practice, we will also occasionally consider $\max J_{\vec{V}}(\mathscr{S})$ instead of $\min J_{\vec{V}}(\mathscr{S})$ in \eqref{SOPgen-min}, which is immaterial to the following analysis as it amounts to replacing $J_{\vec{V}}(\mathscr{S})$ by $-J_{\vec{V}}(\mathscr{S})$. Throughout the rest of this paper, we will not address the question of the existence of optimal shapes, \textit{i.e.} the existence of solutions of the above problem. In the context of shape optimisation for fluid mechanics, few qualitative analysis questions have been solved so far. 
Let us mention for instance \citet{MR2601075} for some progress in this field, whether on questions of existence or on the qualitative analysis of optimal shapes. Whilst they may appear purely technical, these fundamental questions can have a very tangible impact on the actual optimal shapes; for instance, necessary regularity assumptions influence the decisions to be made for the numerical implementation, and a reckless choice of the space of admissible shapes, in which the optimisation problem may not have any solution, may thus lead to ``missing'' the minimiser. Nevertheless, we put these questions aside in this paper, in order to focus on the presentation of an efficient optimisation algorithm based on the use of shape derivatives, as well as on the numerical results obtained and their interpretation. We will hence use the framework of shape derivative calculus, which allows one to consider very general shape deformations, independent of any shape parametrisation, and a global point of view on the objective function; this notably constitutes progress with respect to previous studies focusing on one particular entry of the resistance tensor. The following section is devoted to laying out the mathematical tools required to address the hydrodynamic shape optimisation problem. \section{Analysis of the shape optimisation problem} \subsection{Theoretical framework}\label{sec:theoFramSO} In this section, we introduce the key concepts of domain variation and shape gradient that are needed to state the main results of this paper, as well as a practical optimisation algorithm. An important point that must be considered throughout this study is shape regularity. 
In order for the presented calculations and the equations involved to be valid and understood in their usual sense, sufficient regularity of the surfaces involved must be imposed; however, it must be kept in mind that the need to consider regular shapes might prevent some optimal shapes from being found if they possess ridges or sharp corners. In addition, the discretisation step required when moving to the numerical implementation also raises further discussions about the regularity of shapes. For the sake of readability, the mathematical framework, especially with respect to shape regularity and the other functional spaces involved, is kept to a minimal level of technicality in the body of the paper, and further details and discussions are provided in Appendix \ref{append:diff}. \begin{figure} \begin{center} \includegraphics[width=8cm]{diagram.png} \caption{Shape optimisation principle: the surface $\mathscr{S}$ of the body is deformed with respect to a certain vector field $\vec{\theta}$, such that the deformed shape $\mathscr{S}_{\vec{\theta}} = (\mathrm{Id} + \vec{\theta}) (\mathscr{S})$ improves the objective, \textit{i.e.} satisfies $J(\mathscr{S}_{\vec{\theta}})<J(\mathscr{S})$.} \label{fig:diagram} \end{center} \end{figure} Formally, deforming a shape can be done by defining a deformation vector field $\vec{\theta}~:~\mathbb{R}^3~\rightarrow~\mathbb{R}^3$ within the domain $\mathscr{B}$. This vector field will be assumed to belong to a set $\vec{\Theta}$ of so-called ``admissible'' fields, which are smooth enough to preserve the regularity of the shape. A discussion of the exact choice of the set $\vec{\Theta}$ can be found in the Appendix. Applying this deformation vector field $\vec{\theta}$ to the surface of the shape $\mathscr{S}$ yields a new shape $\mathscr{S}_{\vec{\theta}}= (\mathrm{Id} + \vec{\theta}) (\mathscr{S})$ (see figure \ref{fig:diagram}), where $\mathrm{Id}$ denotes the identity operator corresponding to no deformation. 
This operation is called a \textit{domain variation}. The fundamental question at the heart of all shape optimisation algorithms is the search for a ``good'' vector field $\vec{\theta}$, chosen so that $\mathscr{S}_{\vec{\theta}}$ satisfies the constraints of the problem and so that the objective function decreases, ideally strictly -- although most methods only guarantee the inequality $J(\mathscr{S}_{\vec{\theta}})\leq J(\mathscr{S})$. In the terminology of optimisation, such a deformation vector field is called a \textit{descent direction}, according to the inequality above. In numerical optimisation, descent methods are expected to bring the shape towards a local optimum of the objective criterion. To this end, following the so-called Hadamard boundary variation framework, which quantifies the change of a functional under a small perturbation of the shape geometry, as featured in \citet{allaire2007conception} and \citet{HENROTPIERRE}, to which we refer the interested reader for a detailed theory of shape optimisation, we introduce the fundamental notion of \textit{shape derivative}, characterising the variation of the criterion for an infinitesimal deformation of the domain. More precisely, for a given shape $\mathscr{S}$ and $\vec{\theta} \in \vec{\Theta}$, the \textit{shape derivative of $J$ at $\mathscr{S}$ in the direction $\vec{\theta}$} is denoted by $\langle \mathrm{d} J(\mathscr{S}),\bm{\theta}\rangle$ and defined as the first order term in the expansion \begin{equation} J(\mathscr{S}_{\vec{\theta}}) = J(\mathscr{S}) + \langle \mathrm{d} J(\mathscr{S}),\bm{\theta}\rangle + o(\bm{\theta}) \quad \text{where } o(\bm{\theta})/\|\bm{\theta}\| \to 0 \text{ as } \bm{\theta} \to 0. \label{eq:shape-diff} \end{equation} For additional explanations on the precise meaning of such an expansion, we refer to Appendix~\ref{append:funSpace}. 
In particular, the \textit{shape derivative of $J$ at $\mathscr{S}$ in the direction $\vec{\theta}$} can be computed through the directional derivative \begin{equation} \langle \mathrm{d} J(\mathscr{S}),\bm{\theta}\rangle = \lim_{\varepsilon \rightarrow 0} \frac{J((\mathrm{Id} + \varepsilon \vec{\theta}) (\mathscr{S}))- J(\mathscr{S})}{\varepsilon}, \end{equation} from which we recover \Cref{eq:shape-diff}. Of note, the bracket notation for the shape derivative refers to the fact that the application $\vec{\theta} \mapsto \langle \mathrm{d} J(\mathscr{S}),\bm{\theta}\rangle$ is a linear form from $\vec{\Theta}$ to $\mathbb{R}$, which itself stems from the differential $\mathrm{d}J (\mathscr{S})$ at $\vec{\theta} = 0$ of the domain variation application \begin{equation} \begin{array}{r c l} \vec{\Theta} & \rightarrow & \mathbb{R} \\ \vec{\theta} & \mapsto & J(\mathscr{S}_{\vec{\theta}}). \end{array} \end{equation} The expression of the shape derivative in Equation \eqref{eq:shape-diff} suggests that the deformation $\vec{\theta}$ should be chosen such that $\langle \mathrm{d} J(\mathscr{S}),\bm{\theta}\rangle$ is negative, effectively decreasing the objective criterion at first order. A classical strategy to achieve this goal \citep[see][Chapter~6]{allaire2007conception} consists in deriving an explicit and workable expression of the shape derivative as a surface integral of the form \begin{equation} \langle \mathrm{d} J(\mathscr{S}),\bm{\theta}\rangle = \int_{\mathscr{S}} G(\vec{x}) \vec{\theta} \vec{\cdot} \vec{n} \mathrm{d} \mathscr{S}(\vec{x}), \label{eq:shape-grad} \end{equation} where $G$ is a function called the \textit{shape gradient} of the functional involved. Such a rewriting is in general possible for generic cost functions (according to the structure theorem, see e.g. \cite[Section~5.9]{HENROTPIERRE}), but usually requires some work, and involves the determination of the adjoint of a linear operator. 
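The structure of this surface-integral form can be checked on a toy functional with an explicit shape gradient: for the volume functional $J(\mathscr{S})=|\mathscr{V}|$ one has $G=1$, so deforming a sphere along its unit normal ($\vec{\theta}=\vec{n}$) should change the volume, at first order, by the surface area. The finite-difference sketch below is an illustration only, not part of the optimisation algorithm.

```python
import numpy as np

def sphere_volume(r):
    return 4.0 / 3.0 * np.pi * r ** 3

def sphere_area(r):
    return 4.0 * np.pi * r ** 2

# For J = volume and theta = n, the Hadamard formula predicts
# <dJ(S), theta> = integral over S of theta.n dS = |S| (the area),
# since the shape gradient of the volume functional is G = 1.
r, eps = 1.0, 1e-6
finite_difference = (sphere_volume(r + eps) - sphere_volume(r)) / eps
predicted = sphere_area(r)
```

The finite-difference quotient agrees with the predicted value $4\pi r^2$ up to terms of order $\varepsilon$, as expected from the first-order expansion.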
Once an expression of type \eqref{eq:shape-grad} has been obtained, it is then easy to prescribe a descent direction making the shape derivative negative, by choosing for instance $\vec{\theta}(\vec{x}) = -G(\vec{x}) \vec{n}$, or less straightforward expressions yielding suitable properties; see Section \ref{sec:descent} for further discussion. \subsection{The shape derivative formula for problem \eqref{SOPgen-min}} The main result of this theoretical section, stated in Proposition \ref{prop:diff} below, is the calculation of the shape gradient through a formula of the type of Equation \eqref{eq:shape-grad} for the minimisation problem \eqref{SOPgen-min}. In order to state this result, let us introduce the pair $(\vec{v},q)$, playing the role of \textit{adjoint state} for the optimisation problems we will deal with, as the unique solution of the Stokes problem \begin{equation} \left \{ \begin{array}{ll} \mu \Delta \vec{v} - \nabla q = \vec{0} & \text{in }\mathscr{V}, \\ \nabla \vec{\cdot} \vec{v} = 0& \text{in }\mathscr{V}, \\ \vec{v} =\vec{V} & \text{on } \mathscr{S}, \\ \vec{v} = \vec{0} & \text{on } \partial \mathscr{B}. \end{array} \right . \label{eq:stokesAdj} \end{equation} Then, one can express the shape derivative and shape gradient in terms of the solutions of the resistance problem \eqref{eq:stokes} and the adjoint problem \eqref{eq:stokesAdj}: \noindent \begin{minipage}{\textwidth} \begin{proposition}\label{prop:diff} The functional $J_{\vec{V}}$ is shape differentiable. 
Furthermore, for all $\bm{\theta}$ in $\vec{\Theta}$, one has \begin{equation} \langle dJ_{\vec{V}}(\mathscr{S}),\bm{\theta}\rangle = 2\mu \int_{\mathscr{S}} \big ( \vec{e}[\vec{u}]:\vec{e}[\vec{v}] -\vec{e}[\vec{U} ] : \vec{e} [ \vec{v} ] - \vec{e} [\vec{u} ] : \vec{e} [ \vec{V} ] \big ) (\vec{\theta}\vec{\cdot}\vec{n}) \mathrm{d} \mathscr{S}, \label{eq:shape_grad} \end{equation} and the shape gradient $G$ is therefore given by $$ G = 2\mu \big ( \vec{e}[\vec{u}]:\vec{e}[\vec{v}] -\vec{e}[\vec{U} ] : \vec{e} [ \vec{v} ] - \vec{e} [\vec{u} ] : \vec{e} [ \vec{V} ] \big ). $$ \end{proposition} \end{minipage} Of particular note, if we assume moreover that $ \vec{U}$ and $ \vec{V}$ are such that $$ \vec{e}[\vec{U}]= \vec{e}[ \vec{V}]= \vec{0} \text{ in }\mathscr{V}, $$ which is trivially true for all the relevant choices of $\vec{U}$ and $\vec{V}$ displayed in Table \ref{table:1} -- and more generally for any rigid body motion, i.e. any combination of a uniform translation and a rotation -- then the shape gradient simply becomes \begin{equation} G = 2\mu \vec{e}[\vec{u}]:\vec{e}[\vec{v}], \end{equation} which is the expression we will use later on when implementing the shape optimisation algorithm. \subsection{Proof of Proposition \ref{prop:diff}} To compute the shape gradient of the functional $J_{\vec{V}}$, which is expressed as a surface integral, a standard technique \citep[see][Chapter~5]{HENROTPIERRE} first consists in rewriting it in volumetric form.
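The simplification above relies on the rate-of-strain tensor of a rigid-body motion vanishing identically. This is immediate to check: for $\vec{U}(\vec{x}) = \vec{U}_0 + \vec{\Omega}\times\vec{x}$, the velocity gradient is the constant antisymmetric matrix $[\vec{\Omega}]_\times$, whose symmetric part is zero. A few illustrative lines make the point explicit:

```python
import numpy as np

# Rigid-body motion U(x) = U0 + Omega x x: the velocity gradient is the
# constant antisymmetric cross-product matrix [Omega]_x, so the
# rate-of-strain tensor e[U] = (grad U + grad U^T) / 2 vanishes.
Omega = np.array([0.3, -1.2, 0.7])
grad_U = np.array([[0.0, -Omega[2], Omega[1]],
                   [Omega[2], 0.0, -Omega[0]],
                   [-Omega[1], Omega[0], 0.0]])  # Jacobian of Omega x x

e_U = 0.5 * (grad_U + grad_U.T)
print(np.abs(e_U).max())  # 0.0
```

Any uniform translation contributes nothing to the gradient, so the same conclusion holds for the full rigid-body field.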
Let us multiply the main equation of \eqref{eq:stokes} by $\vec{v}$ and then integrate by parts\footnote{ In what follows, we will often use the following integration by parts formula, well adapted to the framework of fluid mechanics: let $\vec{u}$ and $\vec{v}$ denote two vector fields; then, $ 2\int_{\mathscr{V}} \vec{e} [\vec{v}]:\vec{e} [\vec{u}] \mathrm{d} {\mathscr{V}}=-\int_{\mathscr{V}} (\Delta \vec{v}+\nabla (\nabla \vec{\cdot} \vec{v}))\vec{\cdot} \vec{u}\mathrm{d}\mathscr{V}+2\int_{\partial \mathscr{V}}\vec{e} [\vec{v}]\vec{n}\vec{\cdot} \vec{u}\mathrm{d} \mathscr{S}. $ }. One thus gets $$ 2\mu \int_{\mathscr{V}} \vec{e}[\vec{u}]:\vec{e}[\vec{v}]\mathrm{d} {\mathscr{V}}-\int_{\mathscr{V}} p\nabla \vec{\cdot} \vec{v}\mathrm{d} {\mathscr{V}}-2\int_{\partial \mathscr{V}}\vec{\sigma}[\vec{u},p]\vec{n}\vec{\cdot} \vec{v}\mathrm{d} \mathscr{S}=0. $$ By plugging the boundary conditions into this equality, one gets \begin{equation} -J_{\vec{V}}(\mathscr{S}) = 2\mu \int_{\mathscr{V}} \vec{e}[\vec{u}]:\vec{e}[\vec{v}]\mathrm{d} {\mathscr{V}}. \end{equation} We are now ready to differentiate this relation with respect to the variations of the domain $\mathscr{S}$. To this end, we will use the formula for the derivative of integrals on a variable domain, established in a rigorous mathematical setting in \citet[Theorem 5.2.2]{HENROTPIERRE}. Of note, this formula is also a particular application of the so-called Reynolds transport theorem. Differentiating the volumetric identity above then yields
\begin{multline}\label{eq:0856} - \langle dJ_{\vec{V}}(\mathscr{S}),\bm{\theta}\rangle =2\mu \int_{\mathscr{S}} \vec{e}[\vec{u}]:\vec{e}[\vec{v}](\bm{\theta}\vec{\cdot} \vec{n})\mathrm{d} \mathscr{S} \\ +2\mu \int_{\mathscr{V}} \vec{e}[\vec{u'}]:\vec{e}[\vec{v}]\mathrm{d}\mathscr{V}+2\mu \int_{\mathscr{V}} \vec{e}[\vec{u}]:\vec{e}[\vec{v'}]\mathrm{d} {\mathscr{V}}, \end{multline} where $(\vec{u'},p')$ and $(\vec{v'},q')$ may be interpreted as characterising the hypothetical behaviour of the fluid within $\mathscr{B}$ if the surface $\mathscr{S}$ was moving at a speed corresponding to the deformation $\vec{\theta}$. The quantities $(\vec{u'},p')$ and $(\vec{v'},q')$ are thus solutions of the Stokes-like systems \begin{equation} \left \{ \begin{array}{ll} \mu \Delta \vec{u'} - \nabla p' = \vec{0} & \text{in }\mathscr{V}, \\ \nabla \vec{\cdot} \vec{u'} = 0& \text{in }\mathscr{V}, \\ \vec{u'} = -[\nabla (\vec{u}-\vec{U})]\vec{n} (\vec{\theta}\vec{\cdot}\vec{n}) & \text{on } \mathscr{S}, \\ \vec{u'} = \vec{0} & \text{on } \partial \mathscr{B}, \end{array} \right . \label{eq:stokesBisprime} \end{equation} and \begin{equation} \left \{ \begin{array}{ll} \mu \Delta \vec{v'} - \nabla q' = \vec{0} & \text{in }\mathscr{V} \\ \nabla \vec{\cdot} \vec{v'} = 0& \text{in }\mathscr{V} \\ \vec{v'} = -[\nabla(\vec{v}-\vec{V})]\vec{n} (\vec{\theta}\vec{\cdot}\vec{n}) & \text{on } \mathscr{S}, \\ \vec{v'} = \vec{0} & \text{on } \partial \mathscr{B}. \end{array} \right . \label{eq:stokesAdjprime} \end{equation} Let us rewrite the last two terms of the sum in \eqref{eq:0856} in a form convenient for the numerical implementation. By using an integration by parts, one gets \begin{equation} 2\mu \int_{\mathscr{V}} \vec{e}[\vec{u'}]:\vec{e}[\vec{v}]\mathrm{d}\mathscr{V} = \mu \int_{\mathscr{V}} (-\Delta \vec{v}-\nabla (\nabla \vec{\cdot} \vec{v}))\vec{\cdot} \vec{u'}\mathrm{d}\mathscr{V} +2\mu \int_{\mathscr{S}} \vec{e}[\vec{v}]\vec{n}\vec{\cdot} \vec{u'}\mathrm{d} \mathscr{S}.
\end{equation} Using the relations contained in Eqs. \eqref{eq:stokesAdj} for $\vec{v}$ and \eqref{eq:stokesBisprime} for $\vec{u}'$ yields \begin{eqnarray*} 2\mu \int_{\mathscr{V}} \vec{e}[\vec{u'}]:\vec{e}[\vec{v}]\mathrm{d}\mathscr{V} &=& - \int_{\mathscr{V}} \nabla q\vec{\cdot} \vec{u'}\mathrm{d}\mathscr{V}-2\mu \int_{\mathscr{S}} (\vec{\theta}\vec{\cdot}\vec{n}) \vec{e}[\vec{v}]\vec{n}\vec{\cdot} \nabla (\vec{u}-\vec{U})\vec{n} \mathrm{d} \mathscr{S} , \\ &=& - \int_{\mathscr{S}} q \vec{u'}\vec{\cdot} \vec{n}\mathrm{d}\mathscr{S}-2\mu \int_{\mathscr{S}} (\vec{\theta}\vec{\cdot}\vec{n}) \vec{e}[\vec{v}]\vec{n}\vec{\cdot} \nabla (\vec{u}-\vec{U})\vec{n} \mathrm{d} \mathscr{S} , \end{eqnarray*} which finally leads to \begin{equation} 2\mu \int_{\mathscr{V}} \vec{e}[\vec{u'}]:\vec{e}[\vec{v}]\mathrm{d}\mathscr{V} = -\int_{\mathscr{S}} (\vec{\theta}\vec{\cdot}\vec{n}) \vec{\sigma}[\vec{v},q]\vec{n}\vec{\cdot} \nabla (\vec{u}-\vec{U})\vec{n} \mathrm{d} \mathscr{S} . \label{eq:prop1-0} \end{equation} Now, observe that since $\vec{u}-\vec{U}$ vanishes on $\mathscr{S}$ and is divergence free, and defining the derivative with respect to the normal by $\frac{\partial}{\partial \vec{n}} x_i = \frac{\partial x_i}{\partial x_j} n_j$, one has \begin{eqnarray*} \vec{n}\vec{\cdot} \nabla (\vec{u}-\vec{U})\vec{n}&=&\frac{\partial (u_i-U_{i})}{\partial x_j}n_j n_i=\frac{\partial (u_i-U_{i})}{\partial \vec{n}}n_i\\ &=& \frac{\partial (u_i-U_{i})}{\partial x_i}=\nabla\vec{\cdot} (\vec{u}-\vec{U})=0 \end{eqnarray*} on $\mathscr{S}$, since $\vec{n}\vec{\cdot} \vec{n}=1$. The pressure contribution to $\vec{\sigma}[\vec{v},q]\vec{n}$ therefore vanishes in this product, and \begin{equation} \vec{\sigma}[\vec{v},q]\vec{n}\vec{\cdot} \nabla (\vec{u}-\vec{U})\vec{n}=2\mu \vec{e}[\vec{v}]\vec{n}\vec{\cdot} \nabla (\vec{u}-\vec{U})\vec{n}\quad \text{on }\mathscr{S}.
\label{eq:prop1-1} \end{equation} Using straightforward calculations as carried out in \citet[Lemma~1]{MR4269970}, we can moreover show that \begin{equation} \vec{e} [\vec{v}] \vec{n} \vec{\cdot} \nabla (\vec{u}-\vec{U}) \vec{n} = \vec{e} [\vec{v}] : \vec{e} [\vec{u}-\vec{U}], \label{eq:prop1-2} \end{equation} yielding a more symmetrical expression for \eqref{eq:prop1-1}: $$ \vec{\sigma}[\vec{v},q]\vec{n}\vec{\cdot} \nabla (\vec{u}-\vec{U})\vec{n}=2\mu\, \vec{e}[\vec{v}]: \vec{e}[\vec{u}-\vec{U}]\quad \text{on }\mathscr{S}. $$ It follows that \eqref{eq:prop1-0} can be rewritten as \begin{equation} 2\mu \int_{\mathscr{V}} \vec{e}[\vec{u'}]:\vec{e}[\vec{v}]\mathrm{d}\mathscr{V}=-2\mu \int_{\mathscr{S}} (\vec{\theta}\vec{\cdot}\vec{n}) \vec{e}[\vec{v}]: \vec{e} [\vec{u}-\vec{U}] \mathrm{d} \mathscr{S} . \label{eq:prop1-3} \end{equation} By mimicking the computation above, we obtain similarly \begin{equation} 2\mu \int_{\mathscr{V}} \vec{e}[\vec{u}]:\vec{e}[\vec{v'}]\mathrm{d}\mathscr{V}=-2\mu \int_{\mathscr{S}} (\vec{\theta}\vec{\cdot}\vec{n}) \vec{e}[\vec{u}]: \vec{e} [\vec{v}-\vec{V}] \mathrm{d} \mathscr{S} . \label{eq:prop1-4} \end{equation} Gathering \eqref{eq:0856}, \eqref{eq:prop1-3} and \eqref{eq:prop1-4} yields $$ -\langle dJ_{\vec{V}}(\mathscr{S}),\bm{\theta}\rangle = 2\mu \int_{\mathscr{S}} (\vec{\theta}\vec{\cdot}\vec{n}) \left(\vec{e}[\vec{u}]:\vec{e}[\vec{v}]-\vec{e}[\vec{v}]: \vec{e} [\vec{u}-\vec{U}]-\vec{e}[\vec{u}]: \vec{e} [\vec{v}-\vec{V}]\right) \mathrm{d} \mathscr{S} , $$ and rearranging the terms finally leads to the expected expression of the shape derivative \eqref{eq:shape_grad} and concludes the proof of Proposition \ref{prop:diff}. As explained above, using the shape gradient formula, one can infer a descent direction that is guaranteed to decrease the objective function, although this is valid only at first order, and therefore as long as the domain variation remains small enough.
Hence, to reach an optimal shape in practice, one needs to iterate this process several times, applying a small deformation and then recomputing the shape gradient on the deformed shape. \subsection{Descent direction}\label{sec:descent} In this section, we focus on how to prescribe the descent direction $\vec{\theta}$ from \eqref{eq:shape_grad}. As described in the previous section, the most natural idea consists in choosing $\vec{\theta} = -G \vec{n}$, ensuring that a small domain variation in this direction decreases the objective function. However, this simple choice can yield vector fields that are not smooth enough, typically leading to numerical instability \citep[see e.g.][]{MR2340012}. To address this issue, a classical method consists in using a variational formulation involving the derivative of $\vec{\theta}$. More precisely, we want to find a field $\vec{\theta}$ that satisfies the following identity \textit{for all} $\vec{\psi} \in \vec{\Theta}$: \begin{equation} \int_{\mathscr{V}}{\nabla \bm{\theta}: \nabla \bm{\psi}\: \mathrm{d} \mathscr{V}} =-\langle dJ(\mathscr{S}),\vec{\psi}\rangle. \label{eq:descent1} \end{equation} In particular, evaluating this identity at $\vec{\theta}$ yields $$ \langle dJ(\mathscr{S}),\bm{\theta}\rangle=- \int_{\mathscr{V}}|{\nabla \bm{\theta}}|^2\mathrm{d}\mathscr{V}\leq 0, $$ guaranteeing a decrease of $J_{\vec{V}}$. Thus, the variational formulation of Equation \eqref{eq:descent1} implicitly defines a good descent direction. To determine $\vec{\theta}$ explicitly, let us now apply Green's formula on Equation \eqref{eq:descent1}: \begin{equation} -\int_{\mathscr{V}} \vec{\psi} \vec{\cdot} \Delta \vec{\theta} \mathrm{d} \mathscr{V} +\int_{\mathscr{S}} \vec{\psi} \vec{\cdot} ( \nabla \vec{\theta} \vec{n} ) \mathrm{d} \mathscr{S} = - \int_{\mathscr{S}} \vec{\psi} \vec{\cdot} G \vec{n} \mathrm{d} \mathscr{S}.
\end{equation} Since this identity is valid for all $\vec{\psi}$, we straightforwardly deduce that $\vec{\theta}$ is the solution of the Laplace problem \begin{equation} \left\{ \begin{array}{cl} -\Delta \bm{\theta} = \vec{0} & \text{in } \mathscr{V},\\ \bm{\theta} = \vec{0} & \text{on } \partial\mathscr{B},\\ {(\nabla \bm{\theta})} \vec{n}= -G \vec{n} & \text{on } \mathscr{S}. \end{array} \right. \label{eq:laplace} \end{equation} Note that the dependence of this problem on the criterion $J_{\vec{V}}$ and its shape derivative is contained within the boundary condition on $\mathscr{S}$, in which the shape gradient $G$ appears. Of note, while this variational method allows one to infer a ``good'' descent direction, it is also numerically more costly, since it requires solving the PDE system \eqref{eq:laplace} at every iteration. Overall, we have shown in this section that the shape derivative for the optimisation of any entry of the grand resistance tensor comes down to a single formula \eqref{eq:shape_grad}, which depends on the solutions to two appropriately chosen resistance problems. In the next section, we show how to apply this theoretical framework numerically to compute various optimised shapes. \section{Numerical implementation} In this section, moving further towards a practical use of the shape gradient calculation for the resistance problem, we will introduce a dedicated shape optimisation algorithm. As described in the previous section, the main idea of this algorithm consists in computing a descent direction $\vec{\theta}$, applying it to slightly deform the surface of the shape, and then iterating these small deformations. The objective function is monitored to ensure that it gradually decreases in the process and, hopefully, eventually converges to a given value, suggesting that the corresponding shape represents a local optimum.
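In a discretised setting, the variational problem \eqref{eq:descent1} reduces to a linear system $K\theta = -g$, where $K$ is the symmetric positive definite stiffness matrix representing the $H^1$ inner product and $g$ the nodal representation of the shape derivative; the descent property $\langle \mathrm{d}J, \vec{\theta}\rangle = -\theta^{T} K \theta \leq 0$ then holds by construction. The sketch below illustrates this mechanism, with a random SPD matrix standing in for an assembled stiffness matrix (an illustration only, not the paper's boundary-element implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for an assembled H^1 stiffness matrix: any symmetric
# positive definite K plays the same role in this illustration.
n = 50
A = rng.standard_normal((n, n))
K = A @ A.T + n * np.eye(n)

# Nodal representation g of the shape derivative: <dJ, psi> ~ g . psi.
g = rng.standard_normal(n)

# Discrete counterpart of the variational problem: K theta = -g.
theta = np.linalg.solve(K, -g)

# theta is a guaranteed descent direction:
# <dJ, theta> = g . theta = -theta^T K theta <= 0.
print(g @ theta)  # negative
```

The smoothing effect of the variational formulation comes at the cost of this extra linear solve at each iteration, as noted in the text.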
\subsection{Manufacturing constraints} In addition to this main feature of the algorithm, we typically need to include so-called \textit{manufacturing} constraints on the shape to prevent it from reaching trivial (shrunk to a single point or expanded to fill the entire fluid domain) or unsuitable (e.g. too thin or too irregular) solutions. As mentioned at the beginning of section \ref{sec:theoFramSO}, in this paper, we chose to focus on the simple and arguably canonical constraint of a constant volume $| \mathscr{V} |$ enclosed by the shape $\mathscr{S}$. Hence, denoting by $V_0$ the volume of the initial solid, we are considering the \textit{constrained} optimisation problem \begin{equation}\label{SOP} \min_{| \mathscr{V} | = V_0} J_{\vec{V}} (\mathscr{S}). \end{equation} The volume constraint may be enforced with a range of classical optimisation techniques, among which we will use a so-called ``augmented Lagrangian'', adapted from \citet[Section~3.7]{MR3878725} and briefly described in this section. The augmented Lagrangian algorithm converts the constrained optimisation problem \eqref{SOP} into a sequence of unconstrained problems (hereafter indexed by $n$). Hence, we will be led to solve: \begin{equation}\label{eq.optLn} \inf_{\mathscr{S}}{\mathcal{L}(\mathscr{S},\ell^n, b^n)}, \end{equation} where \begin{equation}\label{eq.auglag} \mathcal{L}(\mathscr{S},\ell,b) = J(\mathscr{S}) - \ell (|\mathscr{V}|-V_0) + \frac{b}{2} (|\mathscr{V}|-V_0)^2. \end{equation} In this definition, the parameter $b$ is a (positive) penalisation factor preventing the equality constraint `$|\mathscr{V}|=V_0$' from being violated. The parameter $\ell$ is a Lagrange multiplier associated with this constraint. The principle of the augmented Lagrangian algorithm rests upon the search for a (local) minimiser $\mathscr{S}^n$ of $\mathscr{S} \mapsto \mathcal{L}(\mathscr{S}, \ell^n,b^n)$ for fixed values of $\ell^n$ and $b^n$.
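The mechanics of the augmented Lagrangian \eqref{eq.auglag} and of the multiplier update \eqref{eq.uplag} can be illustrated on a self-contained toy problem: a two-variable quadratic objective with a single equality constraint, for which the inner minimisation is an exact linear solve. All names and values below are illustrative and unrelated to the paper's implementation; the exact constrained minimiser is $(2/3, 1/3)$ with Lagrange multiplier $4/3$, which the iterates recover.

```python
import numpy as np

# Toy problem standing in for the volume-constrained optimisation:
# minimise J(x) = x1^2 + 2*x2^2 subject to c(x) = x1 + x2 - 1 = 0,
# whose exact solution is x = (2/3, 1/3) with multiplier 4/3.
ell, b = 0.0, 1.0            # initial multiplier and penalty
alpha, b_target = 1.5, 100.0

for _ in range(50):
    # Inner problem: minimise L = J - ell*c + (b/2)*c^2.  Since L is
    # quadratic in x, its minimiser solves a 2x2 linear system.
    H = np.array([[2.0 + b, b], [b, 4.0 + b]])
    x = np.linalg.solve(H, (ell + b) * np.ones(2))
    c = x.sum() - 1.0
    # Multiplier update and geometric penalty increase (capped).
    ell = ell - b * c
    if b < b_target:
        b = alpha * b

print(x, ell)  # x -> (2/3, 1/3), ell -> 4/3
```

As in the shape optimisation setting, the gradual increase of $b$ enforces the constraint more and more tightly, while the update of $\ell$ drives it towards the exact Lagrange multiplier.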
Given $\alpha>1$, these parameters are updated according to the rule: \begin{equation}\label{eq.uplag} \ell^{n+1} = \ell^n - b^n (|\mathscr{V}|^n-V_0) , \text{ and } b^{n+1} = \left\{ \begin{array}{cl} \alpha b^n & \text{if } b^n < b_{\text{target}},\\ b^n & \text{otherwise}; \end{array} \right. \end{equation} in other terms, the penalisation parameter $b$ is increased during the first iterations until the value $b_{\text{target}}$ is reached. This regular increase of $b$ ensures that the domain satisfies the constraint more and more precisely during the optimisation process. \subsection{Numerical resolution of the PDEs} For the sake of clarity and replicability of the algorithm described below, we provide some additional information about the numerical resolution of the Stokes and Laplace equations (\eqref{eq:stokes}, \eqref{eq:laplace}) required at each step of the deformation. The surface $\mathscr{S}$ is first equipped with a triangular surface mesh ${\mathcal T}$ containing the coordinates of the nodes, the midpoints of the edges, the centres of the elements, and connectivity matrices. The numerical resolution is then carried out by the boundary element method \citep{pozrikidis1992boundary} using the BEMLIB Fortran library \citep{pozrikidis2002practical}, which allows one to determine the force distribution at each point $\vec{x}$ of the (discretised) surface $\mathscr{S}$ by making use of the integral representation \begin{equation} \vec{u}(\vec{x}) = \int_\mathscr{S}\vec{G}(\vec{x} - \vec{x}_0) \vec{f}(\vec{x}_0) \mathrm{d} \vec{x}_0, \end{equation} where $\vec{G}$ is the Oseen tensor given by \begin{equation} G_{ij}(\vec{x}) = \frac{1}{\| \vec{x} \|} \delta_{ij} + \frac{1}{\| \vec{x} \|^3} x_i x_j.
\end{equation} Once the force distribution $\vec{f}$ is known, the rate-of-strain tensors $\vec{e}$ appearing in the shape gradient formula \eqref{eq:shape-grad} can be conveniently computed through the integral expression \begin{equation} e_{ij} (\vec{x}) = \int_\mathscr{S}\left ( \frac{1}{\| \vec{x} -\vec{x}_0 \|^3} \delta_{ij} (\vec{x} -\vec{x}_0)_k - \frac{3}{\| \vec{x} -\vec{x}_0 \|^5} (\vec{x} -\vec{x}_0)_i (\vec{x} -\vec{x}_0)_j (\vec{x} -\vec{x}_0)_k \right ) f_k (\vec{x}_0) \mathrm{d} \mathscr{S}. \end{equation} \subsection{Shape optimisation algorithm} Let us now describe the main steps of the algorithm.\\ \begin{enumerate} \item $\;$ \textbf{Initialisation.} { \begin{itemize} \item Equip the initial shape $\mathscr{S}^0$ with a mesh ${\mathcal T}^0$, as described above. \item Select initial values for the coefficients $\ell^0$, $b^0>0$ of the augmented Lagrangian algorithm.\\ \end{itemize} } \item $\;$ \textbf{Main loop: for $n=0, ...$} { \begin{enumerate} \item $\;$Compute the solution $(\vec{u}^n,p^n)$ to the Stokes system \eqref{eq:stokes} on the mesh ${\mathcal T}^n$ of $\mathscr{S}^n$. \item $\;$Compute the solution $(\vec{v}^n, q^n)$ to the adjoint system \eqref{eq:stokesAdj} on the mesh ${\mathcal T}^n$ of $\mathscr{S}^n$. \item $\;$Compute the $L^2(\mathscr{S}^n)$ shape gradient $G^n$ of $J$, as well as the shape gradient $\phi^n$ of $\mathscr{S} \mapsto \mathcal{L}(\mathscr{S},\ell^n,b^n)$ given by $$ \phi^n=G^n-\ell^n+b^n(|\mathscr{V}|-V_0). $$ \item $\;$ Infer a descent direction $\bm{\theta}^n$ for $\mathscr{S} \mapsto \mathcal{L}(\mathscr{S},\ell^n,b^n)$ by solving the PDE \begin{equation}\label{eq:laper} \left\{ \begin{array}{cl} -\Delta \bm{\theta} = \vec{0} & \text{in } \mathscr{V}^n,\\ \bm{\theta} = \vec{0} & \text{on } \partial\mathscr{B},\\ {(\nabla \bm{\theta})}\vec{n}= -\phi^n \vec{n} & \text{on } \mathscr{S}^n. \end{array} \right. \end{equation} on the mesh ${\mathcal T}^n$.
\item $\;$ \label{step:descent} Find a descent step $\tau^n$ such that \begin{equation}\label{eq.declag} \mathcal{L}((\text{\rm Id}+\tau^n\bm{\theta}^n)(\mathscr{S}^n),\ell^n,b^n) < \mathcal{L}(\mathscr{S}^n,\ell^n,b^n). \end{equation} \item $\;$ Move the vertices of ${\mathcal T}^n$ according to $\tau^n$ and $\bm{\theta}^n$: \begin{equation}\label{eq.xinp1} \vec{x}_p^{n+1} = \vec{x}_p^n + \tau^n \bm{\theta}^n(\vec{x}_p^n). \end{equation} \begin{itemize} \item If the resulting mesh is invalid, go back to step \ref{step:descent}, and use a smaller value for $\tau^n$; \item Else, the positions (\ref{eq.xinp1}) define the vertices of the new mesh ${\mathcal T}^{n+1}$. \end{itemize} \item $\;$ If the quality of ${\mathcal T}^{n+1}$ is too low, use a local remeshing. \item $\;$ Update the augmented Lagrangian parameters according to (\ref{eq.uplag}).\\ \end{enumerate} \item $\;$ \textbf{Ending criterion.} Stop if \begin{equation} \|\bm{\theta}^n\|_{L^2(\mathscr{S}^n)} < \varepsilon_{\text{\rm stop}}. \label{eq:end} \end{equation} \textbf{Return} $\mathscr{S}^n$. } \end{enumerate} \section{Numerical results} \label{sec:numerics} In this section, we present various applications of the algorithm with different entries of the resistance tensor as objective functions. As mentioned above, the initial shape can be chosen freely, but we have chosen to focus on the sphere as the initial shape, whose symmetry allows for easier interpretability. An important preliminary remark to these results is that they did not all reach the stopping criterion \eqref{eq:end}. Instead, the algorithm was sometimes stopped due to overlapping of the shape or other problem-specific considerations, discussed below.
While the displayed shapes are therefore not all strictly local optima, the observed deformation tendencies are nonetheless of great interest from the point of view of fluid mechanics: they provide crucial information on the general, ideal characteristics of shapes that offer high or low resistance for a particular entry of the resistance tensor, and offer a theoretical backing for previous phenomenological observations. \begin{figure} \centering \begin{overpic}[width=\textwidth]{fig1_170522.png} \put(4,50){(a)} \put(36,50){(b)} \put(70,50){(c)} \end{overpic} \caption{Visualisation of the shape optimisation algorithm running through the minimisation of $K_{11}$. The three columns on the left show the aspect of the shape at three different stages: (a) spherical shape at the first iteration; (b) after 20 iterations; and (c) at the end of the simulation. The surface colours on the top row represent the shape gradient value (from red for a high positive value, associated with inward deformation, to blue for a high negative value, associated with outward deformation), while the arrows show the deformation field $\vec{\theta}$ (normalised for better visualisation).
The bottom row shows the evolution of the shape profile, with the final shape closely resembling the well-known optimal drag profile first described by \citet{pironneau1973optimum}.} \label{fig:k11} \end{figure} \begin{figure} \centering \includegraphics[width=0.7\textwidth]{fig2_170522.eps} \caption{Evolution of $K_{11}$, $\|\vec{\theta}\|$ and the shape volume $V$ along the simulation displayed in \Cref{fig:k11}, strongly suggesting convergence to a minimum of the shape functional.} \label{fig:k11_param} \end{figure} \begin{figure} \centering \includegraphics[width=\textwidth]{Fig_shapes.png} \caption{Results obtained from a spherical initial shape and various objective coefficients.} \label{fig:shapes} \end{figure} \subsection{Diagonal parameters} The first numerical example will be the resolution of the classical ``minimal drag'' problem at constant volume, equivalent to the minimisation of $K_{11}$. The solution to this problem was determined to be an axisymmetric ``rugby-ball''-like shape by \citet{pironneau1973optimum}, and has since been recovered with different methods and used as a benchmark in an extensive subsequent literature. We will also use it as a convenient way to check the validity and performance of the algorithm described in the previous section. The results are shown in figure \ref{fig:k11}. Starting from a sphere, the shape gradient $G$ and deformation field $\vec{\theta}$ are represented in the top left plot (a); the red and blue colours are associated with positive and negative gradient values respectively, corresponding to inward and outward deformation. As expected, the deformation vector field tends to stretch the sphere in the $x$ direction in order to decrease its drag. After 20 iterations (b), the object has taken the shape of an ellipsoid. Of note, axisymmetry, known to be a feature of the optimal shape for this problem, is remarkably well preserved along the numerical resolution.
At 250 iterations, the ending criterion \eqref{eq:end} is reached and the algorithm stops, with the resulting shape closely resembling the famous ``rugby ball'' of \citet{pironneau1973optimum}. The final drag coefficient is equal to 0.9558, in good agreement with the known optimal value (approximately 0.95425). The small difference between the two is attributable to the coarse meshing used for this simulation, which decreases overall precision. Note nonetheless that it is numerically rather remarkable that a mesh of such low resolution is able to hold the full optimisation problem with good accuracy, suggesting that the optimisation framework laid out in this paper enjoys a good level of robustness to coarse discretisation. The three plots in figure \ref{fig:k11_param} show the evolution of the criterion $J_{\vec{V}}(\mathscr{S}) = K_{11}$, the $\mathrm{L}^2$-norm of the deformation vector field $\| \vec{\theta} \|$ and the volume $V$ enclosed by $\mathscr{S}$ along the simulation, with a clear numerical convergence being observed. Of note, the value of $K_{11}$ is directly correlated with the volume $V$ of the body, making this particular problem extremely sensitive to volume variations. Unlike for the optimisation problems associated with other entries of the resistance tensor, the augmented Lagrangian algorithm with adaptive step described in the previous section was observed to induce instability and amplified volume oscillations, even with fine tuning of the parameters $\ell$ and $b$. For that reason, the algorithm was adapted for the results displayed in figure \ref{fig:k11}, empirically setting a fixed deformation step $\tau$ and Lagrange multiplier $\ell$ to obtain stability and convergence. The parameter values used in figure \ref{fig:k11} are $\tau = 10^{-3}$, $\ell = 98.8$, $b_0 = 10$, $b_{\mathrm{target}} = 500$ and $\alpha = 1.03$.
More generally, a good choice of augmented Lagrangian parameters is critical to observe convergence of the algorithm, and is highly dependent on the nature of the problem, therefore requiring \textit{ad hoc} tuning for each different objective function. Now, let us turn to the other entries of $\mathsfbi{R}$. Figure \ref{fig:shapes} gathers the results for six different objective functions. These results are, to the best of the authors' knowledge, fully novel and admit several interesting interpretations. Panels \ref{fig:shapes}(a) and \ref{fig:shapes}(c) display numerical results obtained for maximising the resistance coefficients $K_{11}$ and $Q_{11}$ -- formally, we can indifferently minimise or maximise the criterion $J$ by simply reversing the sign of the shape gradient in Equation \eqref{eq:shape_grad}. As could be expected, maximising the translational drag through $K_{11}$ has the effect of flattening the sphere along the $yz$-plane. Perhaps less intuitive is that the final shape presents a biconcavity evoking that of a red blood cell, and that similar characteristics are observed when maximising the torque-rotation coupling through $Q_{11}$. Of note, for these two situations, the algorithm was stopped due to overlapping of the surface at the centre of the biconcavity, visible in panel (c). On the other hand, the shape that minimises $Q_{11}$ can be seen in panel \ref{fig:shapes}(b). This time, the rotational drag of the sphere appears to be reduced by stretching the shape along the $x$-axis, until reaching a cylinder-like shape with nearly hemispherical extremities. The final shape strikingly evokes that of some bacterial species such as \textit{Escherichia coli}; this observation possibly provides an argument backing the importance of motility, among many other factors, in explaining microorganism morphology \citep{yang2016staying,van2017determinants}.
\subsection{Extradiagonal parameters} Unlike the diagonal entries $K_{ii}$ and $Q_{ii}$ of the resistance tensor, the extradiagonal entries of the grand resistance tensor are not necessarily positive. In fact, a mirror symmetry along an appropriately chosen plane reverses the sign of extradiagonal entries. This observation implies that objects possessing certain planar symmetries have null entries in their resistance tensor; in particular, all the extradiagonal entries of a sphere's resistance tensor are equal to zero. These properties importantly imply that the minimisation and maximisation problems are equivalent when choosing an extradiagonal entry as an objective: one can switch between the two by means of an appropriate planar symmetry. With this being said, the bottom row of figure \ref{fig:shapes} displays results of the minimisation of extradiagonal coefficients of the resistance tensor. The optimal shape for $K_{12}$ can be seen in figure \ref{fig:shapes}(d). This may be interpreted as the shape that realises the best transmission from a force applied in one direction (here along the $x$-axis) to a translation in a perpendicular direction (here, the $y$-axis). The corresponding optimal shape presents a flattened aspect along the diagonal plane $x=y$, and is slightly thicker at the centre than at the edges. In the case of the optimisation of $Q_{12}$ (figure \ref{fig:shapes}(e)), an interesting ``dumbbell'' shape emerges. This can be understood when considering that maximising $Q_{12}$ amounts to achieving the best transmission of a torque applied in the $x$ direction to a rotation around the $y$ axis. The algorithm finds that the best way to do this is to separate the mass of the sphere into two smaller parts along a $y=x$ line.
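The sign-reversal property under reflection can be illustrated with elementary linear algebra, restricting for simplicity to the translational block $\mathsfbi{K}$, which transforms under an orthogonal change of frame $\mathsfbi{R}$ as $\mathsfbi{K} \to \mathsfbi{R}\mathsfbi{K}\mathsfbi{R}^T$. The sketch below (with arbitrary illustrative values, not a Stokes computation) shows that reflecting through the $xz$-plane leaves the diagonal entries unchanged while flipping the sign of the extradiagonal entries coupling the $y$-axis to the other two axes:

```python
import numpy as np

# A symmetric 3x3 matrix standing in for the translational block K of a
# body's resistance tensor (values are arbitrary, for illustration).
K = np.array([[3.0, 0.5, 0.2],
              [0.5, 2.0, 0.1],
              [0.2, 0.1, 4.0]])

# Reflecting the body through the xz-plane (y -> -y) transforms the
# tensor as K -> R K R^T with the improper orthogonal matrix R below.
R = np.diag([1.0, -1.0, 1.0])
K_mirror = R @ K @ R.T

print(np.diag(K_mirror))               # diagonal entries unchanged
print(K_mirror[0, 1], K_mirror[1, 2])  # K_12 and K_23 change sign
```

In particular, a body coinciding with its mirror image must have $K_{12} = -K_{12} = 0$, recovering the statement that planar symmetries force null extradiagonal entries.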
Of note, convergence of the criterion was not observed when stopping the simulation here; indeed, with suitable remeshing provided, the algorithm would most likely continue indefinitely to spread the two extremities of the dumbbell further apart. Finally, one of the most interesting findings lies in the optimisation of the $C_{11}$ coefficient, shown in figure \ref{fig:shapes}(f). This parameter accounts for the coupling between torque and translation; hence optimising it means that we are looking for the shape that best converts a rotational effect into directional velocity. Helicoidal shapes are well known to be capable of achieving this conversion. More generally, $C_{11}$ is nonzero only if the shape possesses some level of chirality. Optimisation of $C_{11}$ was tackled for a particular class of shapes in \citet{keaveny2013optimization}, in the context of magnetic helicoidal swimmers. Considering slender shapes parametrised by a one-dimensional curve, they find that optimal shapes are given by regular helicoidal folding, with additional considerations on its pitch and radius depending on parameters and on the presence of a head. The family of optimal shapes however remained rather restrictive compared to our general setting. Starting from a spherical initial shape, which is notably achiral, we can observe in figure \ref{fig:shapes}(f) the striking emergence of four helicoidal wings, which tend to sharpen along the simulation. Again, the simulation producing the shape displayed in figure \ref{fig:shapes}(f) was stopped because of overlapping of the mesh at the edges, and not because the norm of the deformation field converged to zero.
While appropriate handling of the narrow parts of the helix wings may allow the shape optimisation process to be carried on, with further folding of the sphere into a long corkscrew-like shape, the observation itself of chirality emerging out of an achiral initial structure through an optimisation process is already an arguably captivating result, echoing the widespread existence and importance of chirality among microswimmers, in particular as a possible means of producing robust directional locomotion within background flows \citep{wheeler2017use}. \section{Discussion and perspectives} \label{sec:discussion} In this paper, we have addressed the problem of optimal shapes for the resistance problem in a Stokes flow. Considering the entries of the grand resistance tensor as objective shape functionals to optimise, and using the framework of Hadamard boundary variation, we derived a general formula for the shape gradient, allowing one to define the best deformation to apply to any given shape. While this shape optimisation framework is mathematically standard, its usage in the context of microhydrodynamics is limited, mostly circumscribed to the work of \citet{keaveny2013optimization}, and the theoretical results and numerical scheme presented here provide a much higher level of generality, both concerning the admissible shapes and the range of objective functions. After validating the numerical capabilities of the shape optimisation algorithm by comparing the optimal shape for $K_{11}$ to the celebrated result of \citet{pironneau1973optimum}, we systematically investigated the shapes minimising and maximising entries of the resistance tensor. The numerical results reveal fascinating new insights on optimal hydrodynamic resistance.
In particular, we obtained an optimal profile for the torque drag ($Q_{11}$), observed the emergence of a chiral, helicoidal structure maximising the force/rotation coupling ($C_{11}$), and other intriguing shapes generated when minimising extradiagonal entries. The potential of this framework is not limited to the examples displayed in the numerical results section. With most of the optimisation problems considered here being highly unconstrained and nonconvex, we can safely assume that many local extrema exist, and that a range of different results is likely to be observed for different initial shapes. As discussed above, finer handling of the surface mesh to deal with locally high curvature, sharp edges and cusps, and additional manufacturing constraints to prevent self-overlapping and take other criteria into account, are warranted to pursue this broader exploration. Furthermore, seeing as some of the shapes in figure \ref{fig:shapes} appear to take a torus-like profile from an initial spherical shape, it might be interesting to allow topological modifications of the shape along the optimisation process, which requires different approaches such as the level set method \citep{allaire2007conception}. As mentioned in Section \ref{sec:theoFramSO}, whilst being beyond the fluid mechanics scope of this paper, mathematical questions also arise from this study, such as a formal proof of the existence and uniqueness of minimisers for the optimisation problem \eqref{SOPgen-min}. In the context of low-Reynolds-number hydrodynamics, our results provide novel perspectives on the fundamental problem of optimal hydrodynamic resistance for a rigid body, with the optimisation of some entries of the resistance tensor being performed for the first time. Beyond their theoretical interest, these results could help understand and refine some of the criteria that are believed to govern the morphology of microscopic bodies \citep{yang2016staying,van2017determinants}.
Furthermore, the computational structure of the optimisation problem is readily adaptable to more complex objective criteria defined as functions of entries of the grand resistance tensor, which allows one to tackle relevant quantities for various applications. A prototypical example would be to seek extremal values for the Bretherton constant $B$ \citep{Bretherton1962}, a geometrical parameter for the renowned Jeffery equations \citep{Jeffery1922} which describe the behaviour of an axisymmetric object in a shear flow. As noted by \citet{ishimoto2020jeffery}, $B$ can be expressed as a rational function of seven distinct entries of the grand resistance tensor. For spheroids, $B$ lies between $-1$ and $1$, but nothing theoretically forbids it from being greater than $1$ or smaller than $-1$; yet exhibiting realistic shapes achieving such values is notoriously difficult \citep{Bretherton1962,Singh2013}. Further, another geometrical parameter $C$ is introduced for chiral helicoidal particles in \citet{ishimoto2020helicoidal}. This shape constant, now termed the Ishimoto constant \citep{ohmura2021near}, characterises the level of chirality and is useful to study bacterial motility in flow \citep{jing2020chirality}. Whilst this parameter can be expressed with respect to entries of the resistance tensor in a similar manner as $B$, very little is known about typical shapes for a given value of $C$, not to mention shapes optimising $C$. The framework developed in this paper provides a promising way of investigating these questions. Finally, various refinements of the Stokes problem \eqref{eq:stokes} can be envisioned to address other open problems in microhydrodynamics and microswimming. Dirichlet boundary conditions on the object surface, considered in this paper as well as in a vast part of the literature, may fail to properly describe the fluid friction arising at small scale, notably when dealing with complex biological surfaces.
Nonstandard boundary conditions such as the Navier conditions \citep{B616490K} are then warranted. Interestingly, the optimal drag problem for a rigid body, although long since resolved for Dirichlet conditions \citep{pironneau1973optimum}, is still open for Navier conditions. Seeking to further connect shape optimisation to efficient swimming at microscale, one could also include some level of deformability of the object, which requires coupling the Stokes equations with an elasticity problem. A simple model in this spirit was recently introduced in the context of shape optimisation in \citet{calisti2021synthesis}. Another problem with biological relevance is the optimisation of hydrodynamic resistance when interacting with a more or less complex environment, such as a neighbouring wall or a channel, which is known to change locomotion strategies for microorganisms \citep{elgeti2016microswimmers}; overall, a dynamical, environment-sensitive shape optimisation study stemming from this paper's framework could provide key insights on microswimming and microrobot design. \backsection[Funding]{C.M. is a JSPS Postdoctoral Fellow (P22023), and acknowledges partial support by the Research Institute for Mathematical Sciences, an International Joint Usage/Research Center located at Kyoto University and JSPS-KAKENHI Grant-in Aid for JSPS Fellows (Grant No. 22F22023). K.I. acknowledges JSPS-KAKENHI for Young Researchers (Grant No. 18K13456), JSPS-KAKENHI for Transformative Research Areas (Grant No. 21H05309) and JST, PRESTO, Japan (Grant No. JPMJPR1921). Y.P. was partially supported by the ANR Project VirtualChest ANR-16-CE19-0014.} \backsection[Declaration of interests]{The authors report no conflict of interest.} \backsection[Author ORCID]{C. Moreau, https://orcid.org/0000-0002-8557-1149; K. Ishimoto, https://orcid.org/0000-0003-1900-7643; Y.
Privat, https://orcid.org/0000-0002-2039-7223.} \section{Introduction} \label{sec:intro} The interaction between solid objects and a surrounding fluid is at the heart of many fluid mechanics problems stemming from various fields such as physics, engineering and biology. Among other factors, the behaviour of such fluid-structure interaction systems is critically determined by the boundary conditions at the surface of the solid, but also by the geometry of the solid itself, or, more simply said, its shape. In this context, the search for some notion of shape optimality in fluid-structure interaction is widespread, with the objective of understanding which shapes allow for optimal response from the fluid, typically involving energy-minimising criteria \citep{MR2567067}. At low Reynolds number, a regime occurring in particular at the microscopic scale where viscosity dominates over inertial effects, fluid dynamics are governed by the Stokes equations. These equations are linear and time-reversible -- a remarkable specificity compared to the more general Navier-Stokes equations, which makes fluid-structure interaction and locomotion at microscopic scale a peculiar world \citep{purcell1977life}. In particular, when considering the \textit{resistance problem} of a rigid body moving in a fluid in the Stokes regime, a linear relationship holds between the motion of a body (translation and rotation) and the effects (forces and torques) it experiences. This relationship is materialised by the well-known \textit{grand resistance tensor}, which depends only, once a reference frame has been set, on the shape of the object and not on its motion or the boundary conditions on the fluid velocity. In other words, the hydrodynamic resistance properties of an object at low Reynolds number are intrinsic, contained in a finite number of parameters, which in turn are determined by its shape only.
The question of which shapes possess maximal or minimal values for these resistance parameters then naturally arises, both from a theoretical fluid mechanics perspective and as a potential way to explain the sometimes intriguing geometries of microorganisms \citep{yang2016staying,van2017determinants,lauga2020fluid, ryabov2021shape}. Optimal shapes for resistance problems have been tackled in previous studies. In particular, the minimal drag problem, which seeks the shape of fixed volume opposing the least hydrodynamic resistance to translation in a set direction, is well known and was solved in the 1970s, both analytically \citep{pironneau1973optimum} and numerically \citep{bourot1974numerical}. The characteristic rugby-ball shape resulting from this optimisation problem has then been used as a reference for many later works, among which we can cite the adaptations to two-dimensional and axisymmetric flows in \citet{richardson1995optimum,srivastava2011optimum}, to a linear elastic medium in \citet{zabarankin2013minimum}, or minimal drag for fixed surface area in \citet{montenegro2015other}. These studies rely on symmetry properties of the minimal drag problem, and such methods cannot be immediately extended to solve the optimisation of the generic resistance problem, associated to other entries of the resistance tensor. Shape optimisation in microhydrodynamics has also been widely carried out in the context of microswimmer locomotion. Notable works include \citet{quispe2019geometry}, where the best pitch and cross-section for efficient magnetic swimmers are numerically and experimentally discussed, and \citet{fujita2001optimum,ishimoto2016hydrodynamic, berti2021shapes}, where parametric optimisation is conducted to find the best geometry for flagellated microswimmers. Efficient shapes for periodic swimming strokes and ciliary locomotion are addressed in \citet{vilfan2012optimal,daddi2021optimal}.
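To give a concrete sense of the margin at stake in the minimal drag problem, one can compare the classical Stokes drag of the unit-volume sphere with the axial drag of unit-volume prolate spheroids, using Oberbeck's exact formula \citep[see e.g.][]{Kim2005}. The short Python sketch below is an illustration of ours, not part of any study cited here; the function names and the one-parameter sweep are our own choices.

```python
import numpy as np

MU = 1.0  # dynamic viscosity

def prolate_drag(aspect):
    """Axial Stokes drag coefficient of a prolate spheroid at fixed
    volume (semi-axes a >= b, with a * b**2 = 1), from Oberbeck's
    classical solution."""
    a = aspect ** (2.0 / 3.0)          # major semi-axis at fixed volume
    if aspect == 1.0:                  # sphere limit: classical 6 pi mu a
        return 6.0 * np.pi * MU * a
    e = np.sqrt(1.0 - aspect ** -2)    # eccentricity
    L = np.log((1.0 + e) / (1.0 - e))
    return 16.0 * np.pi * MU * a * e**3 / ((1.0 + e**2) * L - 2.0 * e)

aspects = np.linspace(1.0, 4.0, 301)
drags = np.array([prolate_drag(s) for s in aspects])
best = aspects[np.argmin(drags)]
print(f"sphere drag          : {prolate_drag(1.0):.4f}")
print(f"best spheroid aspect : {best:.2f}, drag {drags.min():.4f}")
```

Among spheroids, the optimum sits near a two-to-one aspect ratio and saves only a few percent of drag with respect to the sphere, close to the gain achieved by the rugby-ball optimal profile. Such a one-parameter search is exactly the kind of restricted parametric optimisation that the shape-derivative framework developed in this paper goes beyond.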
However, in all of these studies, restrictive assumptions are made on the possible shapes, with the optimisation being carried out over a few geometrical parameters and not on a general space of surfaces in 3D. Another approach, allowing exploration of a wider class of shapes than with parametric optimisation, is based on the use of shape derivatives: a generalisation of the notion of derivative which quantifies the sensitivity of a functional to a perturbation of the domain, and thus yields a descent direction. This method however requires caution regarding the regularity assumptions on the boundaries of the domains involved. Other popular methods for shape optimisation in structural mechanics include density methods, in which the characteristic function of a domain is replaced by a density function -- we mention in particular the celebrated SIMP method \citep{bendsig,borvall,evgrafov} -- and the level set method \citep{osher1988fronts,sethian2000structural,ajt,wang}, which can handle changes of topology. Obtaining efficient numerical algorithms to apply these analytical methods to find optimal shapes is also challenging: one must both ensure a decrease of the objective function and prevent the numerical representation from becoming invalid (for example because of problems related to the mesh or to changes in the topology of the shapes considered). In the context of low-Reynolds number fluid mechanics, variational techniques and shape derivatives are notably used by \citet{Walker2013} and \citet{keaveny2013optimization} to carry out the optimisation of the torque-speed mobility coefficient in the context of magnetically propelled swimmers, for a shape constrained to be a slender curved body, yielding helicoidal folding. However, these studies are also focused on a restricted class of shapes, characterised by a one-dimensional curve, and on a single entry of the resistance tensor or an energy dissipation criterion.
To the best of the authors' knowledge, the systematic theoretical or numerical study of the coefficients of the grand resistance tensor has not yet been carried out. Hence, as the principal aim of this paper, we will provide a general framework of shape optimisation for this type of problem, and show that the optimisation of any entry of the resistance tensor amounts to a single, simple formula for the shape derivative, which depends on the solution of two Stokes problems whose boundary conditions depend on the considered entry. We then describe an algorithm to numerically implement the shape optimisation and display some illustrative examples. \section{Problem statement} \label{sec:problem} \subsection{Resistance problem for a rigid body in Stokes flow} We consider a rigid object set in motion in an incompressible fluid with viscosity $\mu$ at low Reynolds number, with coordinates $\vec{x}$ expressed in the fixed lab frame $(O,\vec{e}_1,\vec{e}_2,\vec{e}_3)$, as shown on the left panel of \Cref{fig:scheme}. The object's surface is denoted by $\mathscr{S}$ and we assume that the fluid is contained in a bounded domain $\mathscr{B}$, thus occupying a volume $\mathscr{V}$ having $\partial \mathscr{V}= \mathscr{S}\cup \partial \mathscr{B}$ as boundary. Such a boundedness assumption ensures that the solutions of the fluid equations are well-defined and that the computations that will be performed on them throughout the paper are rigorously justified (see Appendix \ref{append:diff} and the textbooks referred to therein), although we assume the outer boundary $\partial \mathscr{B}$ to be sufficiently far from the object for its effect on hydrodynamic resistance to be negligible.
At the container boundary $\partial \mathscr{B}$, we consider a linear background flow $\vec{U}^\infty$, broken down into translational velocity vector $\vec{Z}^{\infty}$, rotational velocity vector $\vec{\Omega}^{\infty}$ and rate-of-strain (second-rank) tensor $\vec{E}^{\infty}$ components as follows: \begin{equation}\label{def:Uinfty} \vec{U}^\infty=\vec{Z}^{\infty} + \vec{\Omega}^{\infty} \times \vec{x} + \vec{E}^{\infty} \vec{x}. \end{equation} Similarly, the object's rigid motion velocity field is simply described by \begin{equation}\label{def:U} \vec{U} = \vec{Z} + \vec{\Omega} \times \vec{x}, \end{equation} with $\vec{Z}$ and $\vec{\Omega}$ denoting its translational and rotational velocities. Having thus set the velocities at the boundary of $\mathscr{V}$ defines a boundary value problem for the fluid velocity field $\vec{u}$ and pressure field $p$, which satisfy the Stokes equations: \begin{equation} \left \{ \begin{array}{ll} \mu \Delta \vec{u} - \nabla p = \vec{0} & \text{in }\mathscr{V}, \\ \nabla \vec{\cdot} \vec{u} = 0& \text{in }\mathscr{V}, \\ \vec{u} = \vec{U} & \text{on } \mathscr{S}, \\ \vec{u} = \vec{U}^{\infty} & \text{on } \partial \mathscr{B}. \end{array} \right . \label{eq:stokes} \end{equation} From the solution of this Stokes problem with set boundary velocity, one can then calculate the hydrodynamic drag (force $\vec{F}^h$ and torque $\vec{T}^h$) exerted by the moving particle on the fluid, \textit{via} the following surface integral formulae over $\mathscr{S}$: \begin{align} \vec{F}^h & = - \int_{\mathscr{S}} \vec{\sigma}[\vec{u},p] \vec{n} \mathrm{d} \mathscr{S}, \\ \vec{T}^h & = - \int_{\mathscr{S}} \vec{x} \times \left ( \vec{\sigma}[\vec{u},p] \vec{n} \right ) \mathrm{d} \mathscr{S}. \label{eq:FT} \end{align} In \Cref{eq:FT}, $\vec{n}$ is the normal to $\mathrm{d} \mathscr{S}$ pointing outward from the body (see Fig.
\ref{fig:scheme}), and $\vec{\sigma}$ is the stress tensor, defined as \begin{equation} \vec{\sigma} [\vec{u},p] = - p \vec{I} + 2 \mu \vec{e}[\vec{u}], \end{equation} in which $\vec{I}$ denotes the identity tensor and $\vec{e}[\vec{u}]$ is the rate-of-strain tensor, given by \begin{equation} \vec{e}[\vec{u}] = \frac{1}{2} \left ( \nabla \vec{u} + (\nabla \vec{u} )^T \right ). \end{equation} Finding the hydrodynamic drag in this way for a given velocity field is called the \textit{resistance problem} -- as opposed to the \textit{mobility problem}, in which one seeks to find the velocity generated by a given force and torque profile on the boundary. \begin{figure} \begin{center} \begin{overpic}[height=5cm]{Fig1.png} \put(26,12){$O$} \end{overpic} $\quad$ \includegraphics[height=5cm]{Fig2.png} \end{center} \caption{Problem setup: a rigid body in a Stokes flow. A diagram of the general Stokes problem \eqref{eq:stokes} can be seen on the left of the figure. The panels on the right-hand side show examples of resistance problems associated to selected entries of the grand resistance tensor. For instance, for $K_{11}$ (top left), one sets the motion of the object to a unit translation in the direction $\vec{e}_1$, and then $K_{11}$ may be obtained as the component along $\vec{e}_1$ of the total hydrodynamic force $\vec{F}^h$. The other coefficients shown on the other panels are analogously obtained by using the appropriate boundary conditions and drag force or torque shown on the figure.
\label{fig:scheme}} \end{figure} \subsection{Grand resistance tensor} In addition to Equations \eqref{eq:FT}, a linear relationship between $(\vec{F}^h,\vec{T}^h)$ and $(\vec{U},\vec{U}^{\infty})$ can be derived from the linearity of the Stokes equation \citep[see][Chapter 5]{Kim2005}: \begin{equation} \begin{pmatrix} \vec{F}^h \\ \vec{T}^h \\ \vec{S} \end{pmatrix} = \mathsfbi{R} \begin{pmatrix} \vec{Z} - \vec{Z}^{\infty} \\ \vec{\Omega}- \vec{\Omega}^{\infty} \\ - \vec{E}^{\infty} \end{pmatrix} = \begin{pmatrix} \mathsfbi{K} & \mathsfbi{C} & \vec{\Gamma} \\ \tilde{\mathsfbi{C}} & \mathsfbi{Q} & \vec{\Lambda} \\ \tilde{\vec{\Gamma}} & \tilde{\vec{\Lambda}} & \mathsfbi{M} \end{pmatrix} \begin{pmatrix} \vec{Z} - \vec{Z}^{\infty} \\ \vec{\Omega} - \vec{\Omega}^{\infty} \\ - \vec{E}^{\infty} \end{pmatrix}. \label{eq:GRT} \end{equation} The stresslet $\vec{S}$, defined as \begin{equation} \vec{S} = \frac{1}{2} \int_{\mathscr{S}} \left ( \vec{x} \left ( \vec{\sigma}[\vec{u},p] \vec{n} \right )^T + \vec{\sigma}[\vec{u},p] \vec{n} \, \vec{x}^T \right ) \mathrm{d} \mathscr{S}, \end{equation} appears on the left-hand side of Equation \eqref{eq:GRT} and is displayed here for the sake of completeness, though we will not be dealing with it in the following. The tensor $\mathsfbi{R}$, called the grand resistance tensor, is symmetric and positive definite. As seen in Equation \eqref{eq:GRT}, it may be written as the concatenation of nine tensors, each accounting for one part of the force-velocity coupling. The second-rank tensors $\mathsfbi{K}$ and $\mathsfbi{C}$ represent the coupling between the hydrodynamic drag force and, respectively, translational and rotational velocity. Similarly, $\tilde{\mathsfbi{C}}$ and $\mathsfbi{Q}$ are second-rank tensors coupling the hydrodynamic torque with translational and rotational velocity. Note that, by symmetry of $\mathsfbi{R}$, $\mathsfbi{K}$ and $\mathsfbi{Q}$ are symmetric and one has $\mathsfbi{C}^T = \tilde{\mathsfbi{C}}$.
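As an elementary illustration of this block structure, one can write down the translation-rotation part of $\mathsfbi{R}$ for the one shape where it is known in closed form, the sphere of radius $a$, for which $\mathsfbi{K} = 6\pi\mu a\,\mathsfbi{I}$, $\mathsfbi{Q} = 8\pi\mu a^3\,\mathsfbi{I}$ and $\mathsfbi{C} = \vec{0}$ \citep{Kim2005}. The following Python snippet, a sketch of ours independent of the paper's numerics, assembles this block and checks the symmetry and positive definiteness stated above.

```python
import numpy as np

mu, a = 1.0, 1.0  # viscosity and sphere radius
# Classical Stokes results for a sphere:
K = 6 * np.pi * mu * a * np.eye(3)       # force-translation coupling
Q = 8 * np.pi * mu * a**3 * np.eye(3)    # torque-rotation coupling
C = np.zeros((3, 3))                     # no force-rotation coupling (mirror symmetry)

# 6x6 translation-rotation block of the grand resistance tensor
R = np.block([[K, C], [C.T, Q]])

assert np.allclose(R, R.T)               # symmetry of the resistance tensor
assert np.all(np.linalg.eigvalsh(R) > 0) # positive definiteness
print("eigenvalues:", np.linalg.eigvalsh(R))
```

For less symmetric shapes, for instance the helicoidal bodies discussed later, $\mathsfbi{C}$ is nonzero, and it is precisely this coupling block that the optimisation of $C_{11}$ targets.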
Further, $\vec{\Gamma}$, $\tilde{\vec{\Gamma}}$, $\vec{\Lambda}$, and $\tilde{\vec{\Lambda}}$ are third-rank tensors accounting for coupling involving either the shear part of the background flow or the stresslet, and $\mathsfbi{M}$ is a fourth-rank tensor representing the coupling between the shear and the stresslet, with similar properties deduced from the symmetry of $\mathsfbi{R}$. An important property of the grand resistance tensor is that it is independent of the boundary conditions associated to a given resistance problem. In other words, for a given viscosity $\mu$ and once a system of coordinates is fixed, the grand resistance tensor $\mathsfbi{R}$ depends only on the shape of the object, \textit{i.e.} its surface $\mathscr{S}$. A change of coordinates or an affine transformation applied to $\mathscr{S}$ modifies the entries of $\mathsfbi{R}$ through standard linear transformations. For that reason, here we fix a reference frame once and for all and carry out the shape optimisation within this frame; this means in particular that we distinguish shapes that do not overlap in the reference frame, even if they are in fact identical after an affine transformation. With these coordinate considerations aside, we can argue that the grand resistance tensor constitutes an intrinsic characteristic of an object; and all the relevant information about the hydrodynamic resistance of a certain shape is carried in the finite number of entries of $\mathsfbi{R}$. While these entries can be obtained by direct calculation in the case of simple geometries, in most cases their value must be determined by solving a particular resistance problem and using Equations \eqref{eq:FT}. For example, to determine $K_{ij}$, one can set $\vec{U}$ as a unit translation along $\vec{e}_j$, \textit{i.e.} $\vec{U} = \vec{e}_j$.
Then Equation \eqref{eq:GRT}, combined with \eqref{eq:FT}, gives \begin{equation} K_{ij} = \vec{F}^h \vec{\cdot} \vec{e}_i = - \int_{\mathscr{S}} (\vec{\sigma}[\vec{u},p] \vec{n})\vec{\cdot} \vec{e}_i \mathrm{d} \mathscr{S}. \end{equation} The same strategy can be applied for other entries of $\mathsfbi{R}$, setting appropriate boundary conditions $\vec{U}$ and $\vec{U}^{\infty}$ in the Stokes equation and calculating the appropriate projection of $\vec{F}^h$ or $\vec{T}^h$ along one of the basis vectors. Figure \ref{fig:scheme} displays a few illustrative examples. In fact, let us define the generic quantity $J_{\vec{V}}$ as the surface integral \begin{equation} J_{\vec{V}}(\mathscr{S}) = - \int_{\mathscr{S}} (\vec{\sigma}[\vec{u},p] \vec{n})\vec{\cdot} \vec{V} \mathrm{d} \mathscr{S}. \label{eq:J} \end{equation} Then, judicious choices of $\vec{U}$, $\vec{U}^{\infty}$ and $\vec{V}$, summarised in Table \ref{table:1}, allow one to obtain any coefficient of the grand resistance tensor from formula \eqref{eq:J}.
\begin{table} \begin{center} \def~{\hphantom{0}} \begin{tabular}{c c c c} $\qquad J_{\vec{V}} \qquad$ & $\qquad \vec{U} \qquad$ & $\qquad \vec{V} \qquad$ & $\qquad \vec{U}^{\infty} \qquad$ \\[3pt] $K_{ij}$ & $\vec{e}_j$ & $\vec{e}_i$ & $\vec{0}$ \\[2pt] $C_{ij}$ & $\vec{e}_j \times \vec{x}$ & $\vec{e}_i$ & $\vec{0}$ \\[2pt] $\tilde{C}_{ij}$ & $\vec{e}_j$ & $\vec{e}_i \times \vec{x}$ & $\vec{0}$\\[2pt] $Q_{ij}$ & $\vec{e}_j \times \vec{x}$ & $\vec{e}_i \times \vec{x}$ & $\vec{0}$\\[2pt] $\Gamma_{ijk}$ & $\vec{0}$ & $\vec{e}_i$ & $x_k \vec{e}_j$ \\[2pt] $\Lambda_{ijk}$ & $\vec{0}$ & $\vec{e}_i \times \vec{x}$ & $x_k \vec{e}_j$ \\ \end{tabular} \caption{Entries of the grand resistance tensor associated to $J_{\vec{V}}$ with respect to the choice of $\vec{U}$, $\vec{V}$ and $\vec{U}^{\infty}$.} \label{table:1} \end{center} \end{table} Of note, for the determination of the coefficients lying on the diagonal of $\mathsfbi{R}$, another relation involving power instead of hydrodynamic force is sometimes found \citep{Kim2005}. Indeed, the energy dissipation rate $\epsilon$ is defined as $\epsilon = \int_{\mathscr{V}} 2 \mu \vec{e}[\vec{u}] : \vec{e}[\vec{u}] \mathrm{d} \mathscr{V}$. In the case of a rigid body translating with velocity $\vec{U}$, one also has $\epsilon = \vec{F}^h \vec{\cdot} \vec{U}$. Then, to determine for instance $K_{11}$, one sets $\vec{U} = \vec{e}_1$ as described above and obtains \begin{equation} K_{11} = \int_{\mathscr{V}} 2 \mu \, \vec{e}[\vec{u}] : \vec{e}[\vec{u}] \, \mathrm{d}\mathscr{V}. \end{equation} This last expression yields in particular the important property that the diagonal entries of $\mathsfbi{R}$ are positive. Nonetheless, in the following we will prefer the use of formula \eqref{eq:J}, which conveniently works for both diagonal and extradiagonal entries.
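Formula \eqref{eq:J} can be checked against the one geometry where the resistance problem is solvable by hand. For a sphere of radius $a$ translating with $\vec{U} = \vec{e}_1$, the Stokes solution yields a uniform surface traction $\vec{\sigma}\vec{n} = -(3\mu/2a)\,\vec{e}_1$ \citep{Kim2005}, so \eqref{eq:J} with $\vec{V} = \vec{e}_1$ should return $K_{11} = 6\pi\mu a$. The Python sketch below, a toy quadrature of our own and not the paper's numerical machinery, evaluates the surface integral on a latitude--longitude grid.

```python
import numpy as np

mu, a = 1.0, 1.0
# Known Stokes result: the traction on the surface of a sphere translating
# with unit velocity U = e1 is uniform: (sigma n) . e1 = -3 mu / (2 a).
traction_dot_e1 = -3.0 * mu / (2.0 * a)

# midpoint quadrature on a latitude-longitude grid of the sphere surface
n_th, n_ph = 200, 400
th = (np.arange(n_th) + 0.5) * np.pi / n_th      # colatitude samples
cell = (np.pi / n_th) * (2.0 * np.pi / n_ph)     # parameter-space cell size
area = n_ph * np.sum(a**2 * np.sin(th) * cell)   # total area, close to 4 pi a^2

# J = K11 = - \oint_S (sigma n) . e1 dS, the traction being uniform
K11 = -traction_dot_e1 * area
print(f"K11 = {K11:.5f}, exact 6 pi mu a = {6 * np.pi * mu * a:.5f}")
```

The quadrature reproduces the Stokes drag coefficient to within the discretisation error of the spherical grid.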
\subsection{Towards a shape optimisation framework} Seeing $J_{\vec{V}}$ as a functional depending on the surface $\mathscr{S}$ of the object, we will now seek to optimise the shape $\mathscr{S}$ with $J_{\vec{V}}$ as an objective function; in other terms, we want to optimise one of the parameters accounting for the hydrodynamic resistance of the object. As is usually done in shape optimisation, it is relevant in our framework to add some constraint on the optimisation problem. This is motivated both by our wish to obtain relevant and non-trivial shapes (avoiding, for example, a shape occupying the whole computational domain), and by the need to model manufacturing constraints. Multiple choices of constraint are possible. In the following, we will focus on the standard choice: $|\mathscr{V}|=V_0$ for some positive parameter $V_0$, where $\mathscr{V}$ stands for the domain enclosed by $\mathscr{S}$ whereas $|\mathscr{V}|$ denotes its volume. The {\it generic} resulting shape optimisation problem we will tackle in what follows hence reads: \begin{equation}\label{SOPgen-min} \min_{\mathscr{S}\in \mathscr{O}_{ad,V_0}} J_{\vec{V}}(\mathscr{S}), \end{equation} where $\mathscr{O}_{ad,V_0}$ denotes the set of all connected domains $\mathscr{V}$ enclosed by $\mathscr{S}$ such that $|\mathscr{V}|=V_0$. Of note, when performing optimisation in practice, we will also occasionally consider $\max J_{\vec{V}}(\mathscr{S})$ instead of $\min J_{\vec{V}}(\mathscr{S})$ in \eqref{SOPgen-min}, which is immaterial to the following analysis as it amounts to replacing $J_{\vec{V}}(\mathscr{S})$ by $-J_{\vec{V}}(\mathscr{S})$. Throughout the rest of this paper, we will not address the question of the existence of optimal shapes, i.e. the existence of solutions for the above problem. In the context of shape optimisation for fluid mechanics, few qualitative analysis questions have been solved so far.
Let us mention for instance \citet{MR2601075} for some progress in this field, regarding both the existence and the qualitative analysis of optimal shapes. Whilst they may appear purely technical, these fundamental questions can have a very tangible impact on the actual optimal shapes; for instance, necessary regularity assumptions influence the decisions to be made for numerical implementation, and a reckless choice of the space of admissible shapes, in which the optimisation problem may not have any solution, may thus lead to ``missing'' the minimiser. Nevertheless, we put these questions aside in this paper, in order to focus on the presentation of an efficient optimisation algorithm based on the use of shape derivatives, as well as the numerical results obtained and their interpretation. We will hence use the framework of shape derivative calculus, which allows one to consider very general shape deformations, independent of any shape parametrisation, and a global point of view on the objective function; this notably constitutes progress with respect to previous studies focusing on one particular entry of the resistance tensor. The following section is devoted to laying out the mathematical tools required to address the hydrodynamic shape optimisation problem. \section{Analysis of the shape optimisation problem} \subsection{Theoretical framework}\label{sec:theoFramSO} In this section, we introduce the key concepts of domain variation and shape gradient needed to state the main results of this paper, as well as a practical optimisation algorithm. An important point that must be considered throughout this study is shape regularity.
In order for the presented calculations and the equations involved to be valid and understood in their usual sense, sufficient regularity of the surfaces involved must be imposed; however, it must be kept in mind that the need to consider regular shapes might prevent some optimal shapes from being found if they possess ridges or sharp corners. In addition, the discretisation step required when moving to the numerical implementation raises further questions about the regularity of shapes. For the sake of readability, the mathematical framework, especially with respect to shape regularity and the other functional spaces involved, is kept to a minimal level of technicality in the body of the paper, and further details and discussions are provided in Appendix \ref{append:diff}. \begin{figure} \begin{center} \includegraphics[width=8cm]{diagram.png} \caption{Shape optimisation principle: the surface $\mathscr{S}$ of the body is deformed with respect to a certain vector field $\vec{\theta}$, such that the deformed shape $\mathscr{S}_{\vec{\theta}} = (\mathrm{Id} + \vec{\theta}) (\mathscr{S})$ improves the objective, \textit{i.e.} satisfies $J(\mathscr{S}_{\vec{\theta}})<J(\mathscr{S})$.} \label{fig:diagram} \end{center} \end{figure} Formally, deforming a shape can be done by defining a deformation vector field $\vec{\theta}~:~\mathbb{R}^3~\rightarrow~\mathbb{R}^3$ within the domain $\mathscr{B}$. This vector field will be assumed to belong to a set $\vec{\Theta}$ of so-called ``admissible'' fields that are smooth enough to preserve the regularity of the shape. A discussion on the exact choice of the set $\vec{\Theta}$ can be found in the Appendix. Applying this deformation vector field $\vec{\theta}$ to the surface of the shape $\mathscr{S}$ yields a new shape $\mathscr{S}_{\vec{\theta}}= (\mathrm{Id} + \vec{\theta}) (\mathscr{S})$ (see figure \ref{fig:diagram}), where $\mathrm{Id}$ denotes the identity operator corresponding to no deformation.
This operation is called a \textit{domain variation}. The fundamental question at the heart of all shape optimisation algorithms is the search for a ``good'' vector field $\vec{\theta}$, chosen so that $\mathscr{S}_{\vec{\theta}}$ satisfies the constraints of the problem but also so that the objective function decreases, ideally strictly -- though most methods only guarantee the inequality $J(\mathscr{S}_{\vec{\theta}})\leq J(\mathscr{S})$. In the terminology of optimisation, such a deformation vector field will be called a \textit{descent direction}, according to the inequality above. In numerical optimisation, descent methods are expected to bring the shape towards a local optimum for the objective criterion. To this end, following the so-called Hadamard boundary variation framework, which considers small changes of a functional induced by perturbations of the shape geometry, as featured in \citet{allaire2007conception} and \citet{HENROTPIERRE}, to which we refer the interested reader for a detailed theory of shape optimisation, we introduce the fundamental notion of \textit{shape derivative}, characterising the variation of the criterion for an infinitesimal deformation of the domain. More precisely, for a given shape $\mathscr{S}$ and $\vec{\theta} \in \vec{\Theta}$, the \textit{shape derivative of $J$ at $\mathscr{S}$ in the direction $\vec{\theta}$} is denoted by $\langle \mathrm{d} J(\mathscr{S}),\bm{\theta}\rangle$ and defined as the first order term in the expansion \begin{equation} J(\mathscr{S}_{\vec{\theta}}) = J(\mathscr{S}) + \langle \mathrm{d} J(\mathscr{S}),\bm{\theta}\rangle + o(\bm{\theta}) \quad \text{where } o(\bm{\theta})/\|\bm{\theta}\|_{\vec{\Theta}} \to 0 \text{ as } \bm{\theta} \to \vec{0}. \label{eq:shape-diff} \end{equation} For additional explanations on the precise meaning of such an expansion, we refer to Appendix~\ref{append:funSpace}.
In particular, the \textit{shape derivative of $J$ at $\mathscr{S}$ in the direction $\vec{\theta}$} can be computed through the directional derivative \begin{equation} \langle \mathrm{d} J(\mathscr{S}),\bm{\theta}\rangle = \lim_{\varepsilon \rightarrow 0} \frac{J((\mathrm{Id} + \varepsilon \vec{\theta}) (\mathscr{S}))- J(\mathscr{S})}{\varepsilon}, \end{equation} from which we recover \Cref{eq:shape-diff}. Of note, the bracket notation for the shape derivative refers to the fact that the application $\vec{\theta} \mapsto \langle \mathrm{d} J(\mathscr{S}),\bm{\theta}\rangle$ is a linear form from $\vec{\Theta}$ to $\mathbb{R}$, which itself stems from the differential $\mathrm{d}J (\mathscr{S})$ at $\vec{\theta} = 0$ of the domain variation application \begin{equation} \begin{array}{r c l} \vec{\Theta} & \rightarrow & \mathbb{R} \\ \vec{\theta} & \mapsto & J(\mathscr{S}_{\vec{\theta}}). \end{array} \end{equation} The expression of the shape derivative in Equation \eqref{eq:shape-diff} suggests that the deformation $\vec{\theta}$ should be chosen such that $\langle \mathrm{d} J(\mathscr{S}),\bm{\theta}\rangle$ is negative, effectively decreasing the objective criterion at first order. A classical strategy to achieve this goal \citep[see][Chapter~6]{allaire2007conception} consists in deriving an explicit and workable expression of the shape derivative as a surface integral of the form \begin{equation} \langle \mathrm{d} J(\mathscr{S}),\bm{\theta}\rangle = \int_{\mathscr{S}} G(\vec{x}) \vec{\theta} \vec{\cdot} \vec{n} \mathrm{d} \mathscr{S}(\vec{x}), \label{eq:shape-grad} \end{equation} where $G$ is a function called the \textit{shape gradient} of the involved functional. Such a rewriting is in general possible for generic cost functions (according to the structure theorem, see e.g. \cite[Section~5.9]{HENROTPIERRE}), but usually requires some work, and involves the determination of the adjoint of a linear operator.
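Both the expansion \eqref{eq:shape-diff} and the surface-integral form \eqref{eq:shape-grad} can be made concrete on the simplest shape functional, the enclosed volume $J(\mathscr{S}) = |\mathscr{V}|$, whose shape gradient is $G \equiv 1$, so that $\langle \mathrm{d} J(\mathscr{S}),\bm{\theta}\rangle = \int_{\mathscr{S}} \vec{\theta}\vec{\cdot}\vec{n}\, \mathrm{d}\mathscr{S}$. For a sphere of radius $a$ deformed along its normal, $\vec{\theta} = \varepsilon\vec{n}$, this predicts a first-order variation $\varepsilon |\mathscr{S}| = 4\pi a^2 \varepsilon$, which the following finite-difference check reproduces (a toy Python illustration of ours, unrelated to the Stokes functionals studied in this paper):

```python
import numpy as np

a = 1.0
volume = lambda r: 4.0 / 3.0 * np.pi * r**3   # J(S) = |V| for a sphere of radius r
surface = 4.0 * np.pi * a**2                  # <dJ(S), n> = \oint_S 1 dS = |S|

# Hadamard expansion: J(S_{eps n}) = J(S) + eps * |S| + o(eps)
for eps in (1e-1, 1e-2, 1e-3):
    fd = (volume(a + eps) - volume(a)) / eps  # finite-difference quotient
    print(f"eps={eps:.0e}  fd={fd:.6f}  |S|={surface:.6f}  err={abs(fd - surface):.2e}")
```

The discrepancy between the difference quotient and the surface area decreases linearly in $\varepsilon$, as expected from the first-order remainder in \eqref{eq:shape-diff}.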
Once an expression of type \eqref{eq:shape-grad} has been obtained, it is then easy to prescribe the descent direction such that the shape derivative is negative, by choosing for instance $\vec{\theta}(\vec{x}) = -G(\vec{x}) \vec{n}$, or less straightforward expressions yielding suitable properties; see Section \ref{sec:descent} for further discussion. \subsection{The shape derivative formula for problem \eqref{SOPgen-min}} The calculation of the shape gradient through the derivation of a formula like Equation \eqref{eq:shape-grad} for the minimisation problem \eqref{SOPgen-min} is the main result of this theoretical section, which is displayed in Proposition \ref{prop:diff} below. In order to state this result, let us introduce the pair $(\vec{v},q)$, playing the role of \textit{adjoint states} for the optimisation problems we will deal with, as the unique solution of the Stokes problem \begin{equation} \left \{ \begin{array}{ll} \mu \Delta \vec{v} - \nabla q = \vec{0} & \text{in }\mathscr{V}, \\ \nabla \vec{\cdot} \vec{v} = 0& \text{in }\mathscr{V}, \\ \vec{v} =\vec{V} & \text{on } \mathscr{S}, \\ \vec{v} = \vec{0} & \text{on } \partial \mathscr{B}. \end{array} \right . \label{eq:stokesAdj} \end{equation} Then, one can express the shape derivative and shape gradient with respect to the solution of the resistance problem \eqref{eq:stokes} and the adjoint problem \eqref{eq:stokesAdj}: \noindent \begin{minipage}{\textwidth} \begin{proposition}\label{prop:diff} The functional $J_{\vec{V}}$ is shape differentiable.
Furthermore, for all $\bm{\theta}$ in $\vec{\Theta}$, one has \begin{equation} \langle dJ_{\vec{V}}(\mathscr{S}),\bm{\theta}\rangle = 2\mu \int_{\mathscr{S}} \big ( \vec{e}[\vec{u}]:\vec{e}[\vec{v}] -\vec{e}[\vec{U} ] : \vec{e} [ \vec{v} ] - \vec{e} [\vec{u} ] : \vec{e} [ \vec{V} ] \big ) (\vec{\theta}\vec{\cdot}\vec{n}) \mathrm{d} \mathscr{S}, \label{eq:shape_grad} \end{equation} and the shape gradient $G$ is therefore given by $$ G = 2\mu \big ( \vec{e}[\vec{u}]:\vec{e}[\vec{v}] -\vec{e}[\vec{U} ] : \vec{e} [ \vec{v} ] - \vec{e} [\vec{u} ] : \vec{e} [ \vec{V} ] \big ). $$ \end{proposition} \end{minipage} Of particular note, if we assume moreover that $ \vec{U}$ and $ \vec{V}$ are such that $$ \vec{e}[\vec{U}]= \vec{e}[ \vec{V}]= \vec{0} \text{ in }\mathscr{V}, $$ which is trivially true for all the relevant choices of $\vec{U}$ and $\vec{V}$ displayed in Table \ref{table:1} -- and more generally for any rigid body motion -- then the shape gradient simply becomes \begin{equation} G = 2\mu \vec{e}[\vec{u}]:\vec{e}[\vec{v}], \end{equation} which is the expression we will use later on when implementing the shape optimisation algorithm. \subsection{Proof of Proposition \ref{prop:diff}} To compute the shape gradient of the functional $J_{\vec{V}}$, which is expressed as a surface integral, a standard technique \citep[see][Chapter~5]{HENROTPIERRE} first consists in rewriting it in volumetric form.
Let us multiply the main equation of \eqref{eq:stokes} by $\vec{v}$ and then integrate by parts\footnote{ In what follows, we will often use the following integration by parts formula, well adapted to the framework of fluid mechanics: let $\vec{u}$ and $\vec{v}$ denote two vector fields; then, $ 2\int_{\mathscr{V}} \vec{e} [\vec{v}]:\vec{e} [\vec{u}] \mathrm{d} {\mathscr{V}}=-\int_{\mathscr{V}} (\Delta \vec{v}+\nabla (\nabla \vec{\cdot} \vec{v}))\vec{\cdot} \vec{u}\mathrm{d}\mathscr{V}+2\int_{\partial \mathscr{V}}\vec{e} [\vec{v}]\vec{n}\vec{\cdot} \vec{u}\mathrm{d} \mathscr{S}. $ }. One thus gets $$ 2\mu \int_{\mathscr{V}} \vec{e}[\vec{u}]:\vec{e}[\vec{v}]\mathrm{d} {\mathscr{V}}-\int_{\mathscr{V}} p\nabla \vec{\cdot} \vec{v}\mathrm{d} {\mathscr{V}}-\int_{\partial \mathscr{V}}\vec{\sigma}[\vec{u},p]\vec{n}\vec{\cdot} \vec{v}\mathrm{d} \mathscr{S}=0. $$ By plugging the boundary conditions into this equality, one gets \begin{equation} -J_{\vec{V}}(\mathscr{S}) = 2\mu \int_{\mathscr{V}} \vec{e}[\vec{u}]:\vec{e}[\vec{v}]\mathrm{d} {\mathscr{V}}. \end{equation} We are now ready to differentiate this relation with respect to the variations of the domain $\mathscr{S}$. To this end, we will use the formula for the derivative of integrals on a variable domain, established in a rigorous mathematical setting in \citet[Theorem 5.2.2]{HENROTPIERRE}. Of note, this formula is also a particular application of the so-called Reynolds transport theorem.
One obtains \begin{multline}\label{eq:0856} - \langle dJ_{\vec{V}}(\mathscr{S}),\bm{\theta}\rangle =2\mu \int_{\mathscr{S}} \vec{e}[\vec{u}]:\vec{e}[\vec{v}](\bm{\theta}\vec{\cdot} \vec{n})\mathrm{d} \mathscr{S} \\ +2\mu \int_{\mathscr{V}} \vec{e}[\vec{u'}]:\vec{e}[\vec{v}]\mathrm{d}\mathscr{V}+2\mu \int_{\mathscr{V}} \vec{e}[\vec{u}]:\vec{e}[\vec{v'}]\mathrm{d} {\mathscr{V}}, \end{multline} where $(\vec{u'},p')$ and $(\vec{v'},q')$ may be interpreted as characterising the hypothetical behaviour of the fluid within $\mathscr{B}$ if the surface $\mathscr{S}$ were moving at a speed corresponding to the deformation $\vec{\theta}$. The quantities $(\vec{u'},p')$ and $(\vec{v'},q')$ are thus solutions of the Stokes-like systems \begin{equation} \left \{ \begin{array}{ll} \mu \Delta \vec{u'} - \nabla p' = \vec{0} & \text{in }\mathscr{V}, \\ \nabla \vec{\cdot} \vec{u'} = 0& \text{in }\mathscr{V}, \\ \vec{u'} = -[\nabla (\vec{u}-\vec{U})]\vec{n} (\vec{\theta}\vec{\cdot}\vec{n}) & \text{on } \mathscr{S}, \\ \vec{u'} = \vec{0} & \text{on } \partial \mathscr{B}, \end{array} \right . \label{eq:stokesBisprime} \end{equation} and \begin{equation} \left \{ \begin{array}{ll} \mu \Delta \vec{v'} - \nabla q' = \vec{0} & \text{in }\mathscr{V} \\ \nabla \vec{\cdot} \vec{v'} = 0& \text{in }\mathscr{V} \\ \vec{v'} = -[\nabla(\vec{v}-\vec{V})]\vec{n} (\vec{\theta}\vec{\cdot}\vec{n}) & \text{on } \mathscr{S}, \\ \vec{v'} = \vec{0} & \text{on } \partial \mathscr{B}. \end{array} \right . \label{eq:stokesAdjprime} \end{equation} Let us rewrite the last two terms of the sum in \eqref{eq:0856} in a form convenient for the subsequent calculations. By using an integration by parts, one gets \begin{equation} 2\mu \int_{\mathscr{V}} \vec{e}[\vec{u'}]:\vec{e}[\vec{v}]\mathrm{d}\mathscr{V} = \mu \int_{\mathscr{V}} (-\Delta \vec{v}+\nabla (\nabla \vec{\cdot} \vec{v}))\vec{\cdot} \vec{u'}\mathrm{d}\mathscr{V} +2\mu \int_{\mathscr{S}} \vec{e}[\vec{v}]\vec{n}\vec{\cdot} \vec{u'}\mathrm{d} \mathscr{S}.
\end{equation} Using the relations contained in Eqs. \eqref{eq:stokesAdj} for $\vec{v}$ and \eqref{eq:stokesBisprime} for $\vec{u}'$ yields \begin{eqnarray*} 2\mu \int_{\mathscr{V}} \vec{e}[\vec{u'}]:\vec{e}[\vec{v}]\mathrm{d}\mathscr{V} &=& - \int_{\mathscr{V}} \nabla q\vec{\cdot} \vec{u'}\mathrm{d}\mathscr{V}-2\mu \int_{\mathscr{S}} (\vec{\theta}\vec{\cdot}\vec{n}) \vec{e}[\vec{v}]\vec{n}\vec{\cdot} \nabla (\vec{u}-\vec{U})\vec{n} \mathrm{d} \mathscr{S} , \\ &=& - \int_{\mathscr{S}} q \vec{u'}\vec{\cdot} \vec{n}\mathrm{d}\mathscr{S}-2\mu \int_{\mathscr{S}} (\vec{\theta}\vec{\cdot}\vec{n}) \vec{e}[\vec{v}]\vec{n}\vec{\cdot} \nabla (\vec{u}-\vec{U})\vec{n} \mathrm{d} \mathscr{S} , \end{eqnarray*} which finally leads to \begin{equation} 2\mu \int_{\mathscr{V}} \vec{e}[\vec{u'}]:\vec{e}[\vec{v}]\mathrm{d}\mathscr{V} = -\int_{\mathscr{S}} (\vec{\theta}\vec{\cdot}\vec{n}) \vec{\sigma}[\vec{v},q]\vec{n}\vec{\cdot} \nabla (\vec{u}-\vec{U})\vec{n} \mathrm{d} \mathscr{S} . \label{eq:prop1-0} \end{equation} Now, observe that since $\vec{u}-\vec{U}$ vanishes on $\mathscr{S}$, its tangential derivatives vanish there and its gradient reduces to its normal part; since $\vec{u}-\vec{U}$ is moreover divergence free, and defining the derivative with respect to the normal by $\frac{\partial}{\partial \vec{n}} x_i = \frac{\partial x_i}{\partial x_j} n_j$, one has \begin{eqnarray*} \vec{n}\vec{\cdot} \nabla (\vec{u}-\vec{U})\vec{n}&=&\frac{\partial (u_i-U_{i})}{\partial x_j}n_j n_i=\frac{\partial (u_i-U_{i})}{\partial \vec{n}}n_i\\ &=& \frac{\partial (u_i-U_{i})}{\partial x_i}=\nabla\vec{\cdot} (\vec{u}-\vec{U})=0 \end{eqnarray*} on $\mathscr{S}$, where we also used $\vec{n}\vec{\cdot} \vec{n}=1$. Therefore, \begin{equation} \vec{\sigma}[\vec{v},q]\vec{n}\vec{\cdot} \nabla (\vec{u}-\vec{U})\vec{n}=2\mu \vec{e}[\vec{v}]\vec{n}\vec{\cdot} \nabla (\vec{u}-\vec{U})\vec{n}\quad \text{on }\mathscr{S}.
\label{eq:prop1-1} \end{equation} Using straightforward calculations as carried out in \citet[Lemma~1]{MR4269970}, we can moreover show that \begin{equation} \vec{e} [\vec{v}] \vec{n} \vec{\cdot} \nabla (\vec{u}-\vec{U}) \vec{n} = \vec{e} [\vec{v}] : \vec{e} [\vec{u}-\vec{U}], \label{eq:prop1-2} \end{equation} yielding a more symmetrical expression for \eqref{eq:prop1-1}: $$ \vec{\sigma}[\vec{v},q]\vec{n}\vec{\cdot} \nabla (\vec{u}-\vec{U})\vec{n}=2\mu \vec{e}[\vec{v}] : \vec{e}[\vec{u}-\vec{U}]\quad \text{on }\mathscr{S}. $$ It follows that \eqref{eq:prop1-0} can be rewritten as \begin{equation} 2\mu \int_{\mathscr{V}} \vec{e}[\vec{u'}]:\vec{e}[\vec{v}]\mathrm{d}\mathscr{V}=-2\mu \int_{\mathscr{S}} (\vec{\theta}\vec{\cdot}\vec{n}) \vec{e}[\vec{v}]: \vec{e} [\vec{u}-\vec{U}] \mathrm{d} \mathscr{S} . \label{eq:prop1-3} \end{equation} By mimicking the computation above, we obtain similarly \begin{equation} 2\mu \int_{\mathscr{V}} \vec{e}[\vec{u}]:\vec{e}[\vec{v'}]\mathrm{d}\mathscr{V}=-2\mu \int_{\mathscr{S}} (\vec{\theta}\vec{\cdot}\vec{n}) \vec{e}[\vec{u}]: \vec{e} [\vec{v}-\vec{V}] \mathrm{d} \mathscr{S} . \label{eq:prop1-4} \end{equation} Gathering \eqref{eq:0856}, \eqref{eq:prop1-3} and \eqref{eq:prop1-4} yields $$ -\langle dJ_{\vec{V}}(\mathscr{S}),\bm{\theta}\rangle = 2\mu \int_{\mathscr{S}} (\vec{\theta}\vec{\cdot}\vec{n}) \left(\vec{e}[\vec{u}]:\vec{e}[\vec{v}]-\vec{e}[\vec{v}]: \vec{e} [\vec{u}-\vec{U}]-\vec{e}[\vec{u}]: \vec{e} [\vec{v}-\vec{V}]\right) \mathrm{d} \mathscr{S} , $$ and expanding $\vec{e}[\vec{v}]:\vec{e}[\vec{u}-\vec{U}] = \vec{e}[\vec{u}]:\vec{e}[\vec{v}] - \vec{e}[\vec{U}]:\vec{e}[\vec{v}]$ by bilinearity (and similarly for the last term) before rearranging finally leads to the expected expression \eqref{eq:shape_grad} of the shape derivative and concludes the proof of Proposition \ref{prop:diff}. As explained above, using the shape gradient formula, one can infer a descent direction that is guaranteed to decrease the objective function, although this is valid only at first order, and therefore as long as the domain variation remains small enough.
Hence, to reach an optimal shape in practice, one needs to iterate the process of applying a small deformation and computing the new shape gradient on the deformed shape. \subsection{Descent direction}\label{sec:descent} In this section, we focus on how to prescribe the descent direction $\vec{\theta}$ from \eqref{eq:shape_grad}. As described in the previous section, the most natural idea consists in choosing $\vec{\theta} = -G \vec{n}$, ensuring that a small domain variation in this direction decreases the objective function. However, this simple choice can yield vector fields that are not smooth enough, typically leading to numerical instability \citep[see e.g.][]{MR2340012}. To address this issue, a classical method consists in using a variational formulation involving the derivative of $\vec{\theta}$. More precisely, we want to find a field $\vec{\theta}$ that satisfies the following identity \textit{for all} $\vec{\psi} \in \vec{\Theta}$: \begin{equation} \int_{\mathscr{V}}{\nabla \bm{\theta}: \nabla \bm{\psi}\: \mathrm{d} \mathscr{V}} =-\langle dJ(\mathscr{S}),\vec{\psi}\rangle. \label{eq:descent1} \end{equation} In particular, taking $\vec{\psi} = \vec{\theta}$ in this identity yields $$ \langle dJ(\mathscr{S}),\bm{\theta}\rangle=- \int_{\mathscr{V}}|{\nabla \bm{\theta}}|^2\mathrm{d}\mathscr{V}\leq 0, $$ guaranteeing the decrease of $J_{\vec{V}}$ at first order. Thus, the variational formulation of Equation \eqref{eq:descent1} implicitly defines a good descent direction. To determine $\vec{\theta}$ explicitly, let us now apply Green's formula to Equation \eqref{eq:descent1}: \begin{equation} -\int_{\mathscr{V}} \vec{\psi} \vec{\cdot} \Delta \vec{\theta} \mathrm{d} \mathscr{V} +\int_{\mathscr{S}} \vec{\psi} \vec{\cdot} ( \nabla \vec{\theta} \vec{n} ) \mathrm{d} \mathscr{S} = - \int_{\mathscr{S}} \vec{\psi} \vec{\cdot} G \vec{n} \mathrm{d} \mathscr{S}.
\end{equation} This identity being valid for all $\vec{\psi}$, we straightforwardly deduce that $\vec{\theta}$ is a solution of the Laplace equation \begin{equation} \left\{ \begin{array}{cl} -\Delta \bm{\theta} = \vec{0} & \text{in } \mathscr{V},\\ \bm{\theta} = \vec{0} & \text{on } \partial\mathscr{B},\\ {(\nabla \bm{\theta})} \vec{n}= -G \vec{n} & \text{on } \mathscr{S}. \end{array} \right. \label{eq:laplace} \end{equation} Note that the dependence of this problem on the criterion $J_{\vec{V}}$ and its shape derivative is contained within the boundary condition on $\mathscr{S}$, in which the shape gradient $G$ appears. Of note, while this variational method allows one to infer a ``good'' descent direction, it is also numerically more costly, since it requires the resolution of the PDE system \eqref{eq:laplace} at every iteration. Overall, we have shown in this section that the shape derivative for the optimisation of any entry of the grand resistance tensor comes down to a single formula \eqref{eq:shape_grad}, which depends on the solutions to two appropriately chosen resistance problems. In the next section, we see how to numerically apply this theoretical framework to compute various optimised shapes. \section{Numerical implementation} In this section, moving further towards a practical use of the shape gradient calculation for the resistance problem, we introduce a dedicated shape optimisation algorithm. As described in the previous section, the main idea of this algorithm consists in computing a descent direction $\vec{\theta}$, applying it to slightly deform the surface of the shape, and then iterating these small deformations. The objective function will be monitored to ensure that it gradually decreases in the process and, hopefully, that it eventually converges to a given value, suggesting that the corresponding shape represents a local optimum.
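Before detailing the algorithm, the variational choice of descent direction \eqref{eq:descent1} can be illustrated on a small discrete analogue. In the following sketch (entirely hypothetical: a periodic one-dimensional stiffness matrix $K$ stands in for the discretised bilinear form, and a noisy vector $g$ for the raw shape gradient sampled on a boundary mesh), the direction $\vec{\theta} = -K^{-1}g$ is both a guaranteed descent direction and markedly smoother than the naive choice $-g$:

```python
import numpy as np

# Discrete analogue of the variational problem (eq:descent1): find theta with
#   K theta = -g,
# where K plays the role of the stiffness matrix of the bilinear form
# and g is the raw shape gradient on a (toy, 1D periodic) boundary mesh.
rng = np.random.default_rng(0)
N = 200
s = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
g = np.sin(s) + 0.5 * rng.standard_normal(N)        # smooth gradient + noise

# periodic Laplacian stiffness matrix, plus a small mass term for invertibility
I = np.eye(N)
K = 2.0 * I - np.roll(I, 1, axis=0) - np.roll(I, -1, axis=0) + 1e-3 * I

theta = np.linalg.solve(K, -g)

# By construction <dJ, theta> = g.theta = -theta^T K theta <= 0 (K is SPD),
# so theta is a guaranteed descent direction...
descent_value = float(g @ theta)

# ...and it is much smoother than the raw choice -g (compare second differences)
def roughness(v):
    return float(np.sum(np.diff(v, 2) ** 2))
```

Here `descent_value` is negative by construction, and `roughness(theta)` is well below `roughness(-g)`, mirroring the regularising effect of solving \eqref{eq:laplace} in the continuous setting.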
\subsection{Manufacturing constraints} In addition to this main feature of the algorithm, we typically need to include so-called \textit{manufacturing} constraints on the shape to prevent it from reaching trivial (shrunk to a single point or expanded to fill the entire fluid domain) or unsuitable (e.g. too thin or too irregular) solutions. As mentioned at the beginning of section \ref{sec:theoFramSO}, in this paper, we chose to focus on the simple and arguably canonical constraint of a constant volume $| \mathscr{V} |$ enclosed by the shape $\mathscr{S}$. Hence, denoting by $V_0$ the volume of the initial solid, we are considering the \textit{constrained} optimisation problem \begin{equation}\label{SOP} \min_{| \mathscr{V} | = V_0} J_{\vec{V}} (\mathscr{S}). \end{equation} The volume constraint may be enforced with a range of classical optimisation techniques, among which we will use a so-called ``augmented Lagrangian'', adapted from \citet[Section~3.7]{MR3878725} and briefly described in this section. The augmented Lagrangian algorithm converts the constrained optimisation problem \eqref{SOP} into a sequence of unconstrained problems (hereafter indexed by $n$). Hence, we will be led to solve: \begin{equation}\label{eq.optLn} \inf_{\mathscr{S}}{\mathcal{L}(\mathscr{S},\ell^n, b^n)}, \end{equation} where \begin{equation}\label{eq.auglag} \mathcal{L}(\mathscr{S},\ell,b) = J(\mathscr{S}) - \ell (|\mathscr{V}|-V_0) + \frac{b}{2} (|\mathscr{V}|-V_0)^2. \end{equation} In this definition, the parameter $b$ is a (positive) penalisation factor preventing the equality constraint `$|\mathscr{V}|=V_0$' from being violated. The parameter $\ell$ is a Lagrange multiplier associated with this constraint. The principle of the augmented Lagrangian algorithm rests upon the search for a (local) minimiser $\mathscr{S}^n$ of $\mathscr{S} \mapsto \mathcal{L}(\mathscr{S}, \ell^n,b^n)$ for fixed values of $\ell^n$ and $b^n$.
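The principle can be illustrated on a hypothetical toy problem (a two-dimensional quadratic objective with a single linear constraint standing in for the volume constraint; none of this is the Stokes setting), using the sign conventions of \eqref{eq.auglag} and the multiplier and penalty updates described below:

```python
import numpy as np

# Augmented Lagrangian on a toy problem, with the paper's sign conventions:
#   minimise f(x) = |x - x0|^2  subject to  c(x) = a.x - 1 = 0,
#   L(x, ell, b) = f(x) - ell*c(x) + (b/2)*c(x)^2,   ell <- ell - b*c(x).
# The exact constrained minimiser is x* = (1, 0), with multiplier ell* = -2.
x0 = np.array([2.0, 1.0])
a = np.array([1.0, 1.0])

ell, b = 0.0, 1.0
alpha, b_target = 1.5, 100.0
for n in range(50):
    # inner problem: L is quadratic in x here, so its minimiser solves
    #   (2 I + b a a^T) x = 2 x0 + ell a + b a
    H = 2.0 * np.eye(2) + b * np.outer(a, a)
    x = np.linalg.solve(H, 2.0 * x0 + ell * a + b * a)
    c = float(a @ x - 1.0)
    # multiplier and penalty updates
    ell = ell - b * c
    b = alpha * b if b < b_target else b
```

The iterates converge to the constrained minimiser, with the constraint violation shrinking as $b$ grows and $\ell$ approaches its optimal value; in the shape setting, the inner problem is of course solved only approximately, by the descent steps of the main loop.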
Given $\alpha>1$, these parameters are updated according to the rule: \begin{equation}\label{eq.uplag} \ell^{n+1} = \ell^n - b^n (|\mathscr{V}^n|-V_0) , \text{ and } b^{n+1} = \left\{ \begin{array}{cl} \alpha b^n & \text{if } b^n < b_{\text{target}},\\ b^n & \text{otherwise}; \end{array} \right. \end{equation} in other words, the penalisation parameter $b$ is increased during the first iterations until the value $b_{\text{target}}$ is reached. This regular increase of $b$ ensures that the domain satisfies the constraint more and more precisely during the optimisation process. \subsection{Numerical resolution of the PDEs} For the sake of clarity and replicability of the algorithm described below, we provide some additional information about the numerical resolution of the Stokes and Laplace equations (\eqref{eq:stokes}, \eqref{eq:laplace}) required at each step of the deformation. The surface $\mathscr{S}$ is first equipped with a triangular surface mesh ${\mathcal T}$ containing the coordinates of the nodes, the midpoints of the edges, the centres of the elements, and connectivity matrices. The numerical resolution is then carried out by the boundary element method \citep{pozrikidis1992boundary} using the BEMLIB Fortran library \citep{pozrikidis2002practical}, which allows one to determine the force distribution at each point $\vec{x}$ of the (discretised) surface $\mathscr{S}$ by making use of the integral representation \begin{equation} \vec{u}(\vec{x}) = \int_\mathscr{S}\vec{G}(\vec{x} - \vec{x}_0) \vec{f}(\vec{x}_0) \mathrm{d} \vec{x}_0, \end{equation} where $\vec{G}$ is the Oseen tensor given by \begin{equation} G_{ij}(\vec{x}) = \frac{1}{\| \vec{x} \|} \delta_{ij} + \frac{1}{\| \vec{x} \|^3} x_i x_j.
\end{equation} Once the force distribution $\vec{f}$ is known, the rate-of-strain tensors $\vec{e}$ needed to compute the shape gradient established in formula \eqref{eq:shape_grad} can be conveniently computed through the integral expression \begin{equation} e_{ij} (\vec{x}) = \int_\mathscr{S}\left ( \frac{1}{\| \vec{x} -\vec{x}_0 \|^3} \delta_{ij} (\vec{x} -\vec{x}_0)_k - \frac{3}{\| \vec{x} -\vec{x}_0 \|^5} (\vec{x} -\vec{x}_0)_i (\vec{x} -\vec{x}_0)_j (\vec{x} -\vec{x}_0)_k \right ) f_k (\vec{x}_0) \mathrm{d} \mathscr{S}. \end{equation} \subsection{Shape optimisation algorithm} Let us now describe the main steps of the algorithm.\\ \begin{enumerate} \item $\;$ \textbf{Initialisation.} { \begin{itemize} \item Equip the initial shape $\mathscr{S}^0$ with a mesh ${\mathcal T}^0$, as described above. \item Select initial values for the coefficients $\ell^0$, $b^0>0$ of the augmented Lagrangian algorithm.\\ \end{itemize} } \item $\;$ \textbf{Main loop: for $n=0, ...$} { \begin{enumerate} \item $\;$Compute the solution $(\vec{u}^n,p^n)$ to the Stokes system \eqref{eq:stokes} on the mesh ${\mathcal T}^n$ of $\mathscr{S}^n$. \item $\;$Compute the solution $(\vec{v}^n, q^n)$ to the adjoint system \eqref{eq:stokesAdj} on the mesh ${\mathcal T}^n$ of $\mathscr{S}^n$. \item $\;$Compute the $L^2(\mathscr{S}^n)$ shape gradient $G^n$ of $J$, as well as the shape gradient $\phi^n$ of $\mathscr{S} \mapsto \mathcal{L}(\mathscr{S},\ell^n,b^n)$ given by $$ \phi^n=G^n-\ell^n+b^n(|\mathscr{V}^n|-V_0). $$ \item $\;$ Infer a descent direction $\bm{\theta}^n$ for $\mathscr{S} \mapsto \mathcal{L}(\mathscr{S},\ell^n,b^n)$ by solving the PDE \begin{equation}\label{eq:laper} \left\{ \begin{array}{cl} -\Delta \bm{\theta} = \vec{0} & \text{in } \mathscr{V}^n,\\ \bm{\theta} = \vec{0} & \text{on } \partial\mathscr{B},\\ {(\nabla \bm{\theta})}\vec{n}= -\phi^n \vec{n} & \text{on } \mathscr{S}^n. \end{array} \right. \end{equation} on the mesh ${\mathcal T}^n$.
\item $\;$ \label{step:descent} Find a descent step $\tau^n$ such that \begin{equation}\label{eq.declag} \mathcal{L}((\text{\rm Id}+\tau^n\bm{\theta}^n)(\mathscr{S}^n),\ell^n,b^n) < \mathcal{L}(\mathscr{S}^n,\ell^n,b^n). \end{equation} \item $\;$ Move the vertices of ${\mathcal T}^n$ according to $\tau^n$ and $\bm{\theta}^n$: \begin{equation}\label{eq.xinp1} \vec{x}_p^{n+1} = \vec{x}_p^n + \tau^n \bm{\theta}^n(\vec{x}_p^n). \end{equation} \begin{itemize} \item If the resulting mesh is invalid, go back to step \ref{step:descent}, and use a smaller value for $\tau^n$; \item Else, the positions (\ref{eq.xinp1}) define the vertices of the new mesh ${\mathcal T}^{n+1}$. \end{itemize} \item $\;$ If the quality of ${\mathcal T}^{n+1}$ is too low, use a local remeshing. \item $\;$ Update the augmented Lagrangian parameters according to (\ref{eq.uplag}).\\ \end{enumerate} \item $\;$ \textbf{Ending criterion.} Stop if \begin{equation} \|\bm{\theta}^n\|_{L^2(\mathscr{S}^n)} < \varepsilon_{\text{\rm stop}}. \label{eq:end} \end{equation} \textbf{Return} $\mathscr{S}^n$. } \end{enumerate} \section{Numerical results} \label{sec:numerics} In this section, we present various applications of the algorithm with different entries of the resistance tensor as objective functions. As mentioned above, the initial shape can be chosen freely, but we have chosen to focus on the sphere as the initial shape, whose symmetry allows for easier interpretability of the results. An important preliminary remark is that these simulations did not all reach the stopping criterion \eqref{eq:end}. Instead, the algorithm was stopped due to overlapping of the shape or other problem-specific considerations, discussed below.
While the displayed shapes are thus not all strictly local optima, the interpretation of the deformation tendencies is nonetheless of great importance from a fluid mechanics point of view, as it provides crucial information on the general, ideal characteristics of shapes offering high or low resistance for a particular entry of the resistance tensor, as well as theoretical backing to previous phenomenological observations. \begin{figure} \centering \begin{overpic}[width=\textwidth]{fig1_170522.png} \put(4,50){(a)} \put(36,50){(b)} \put(70,50){(c)} \end{overpic} \caption{Visualisation of the shape optimisation algorithm applied to the minimisation of $K_{11}$. The three columns on the left show the aspect of the shape at three different stages: (a) spherical shape at the first iteration; (b) after 20 iterations; and (c) at the end of the simulation. The surface colours on the top row represent the shape gradient value (red for a strongly inward deformation, blue for a strongly outward deformation), while the arrows show the deformation field $\vec{\theta}$ (normalised for better visualisation).
The bottom row shows the evolution of the shape profile, with the final shape closely resembling the well-known optimal drag profile first described by \citet{pironneau1973optimum}.} \label{fig:k11} \end{figure} \begin{figure} \centering \includegraphics[width=0.7\textwidth]{fig2_170522.eps} \caption{Evolution of $K_{11}$, $\|\vec{\theta}\|$ and the shape volume $V$ along the simulation displayed in \Cref{fig:k11}, strongly suggesting convergence to a minimum of the shape functional.} \label{fig:k11_param} \end{figure} \begin{figure} \centering \includegraphics[width=\textwidth]{Fig_shapes.png} \caption{Results obtained from a spherical initial shape and various objective coefficients.} \label{fig:shapes} \end{figure} \subsection{Diagonal parameters} The first numerical example will be the resolution of the classical ``minimal drag'' problem at constant volume, equivalent to the minimisation of $K_{11}$. The solution to this problem was determined to be an axisymmetric ``rugby-ball''-like shape by \citet{pironneau1973optimum}, and has since been recovered with different methods and used as a benchmark in an extensive literature. We will also use it as a convenient way to check the validity and performance of the algorithm described in the previous section. The results are shown in figure \ref{fig:k11}. Starting from a sphere, the shape gradient $G$ and deformation field $\vec{\theta}$ are represented in the top left plot (a), with red and blue respectively indicating positive and negative values of the gradient, i.e. inward and outward deformation. As expected, the deformation vector field tends to stretch the sphere in the $x$ direction in order to decrease its drag. After 20 iterations (b), the object has taken the shape of an ellipsoid. Of note, axisymmetry, known to be a feature of the optimal shape for this problem, is remarkably well preserved throughout the numerical resolution.
At 250 iterations, the ending criterion \eqref{eq:end} is reached and the algorithm stops, with the resulting shape closely resembling the famous ``rugby ball'' of \citet{pironneau1973optimum}. The final drag coefficient is equal to 0.9558, in good agreement with the known optimal value (approximately 0.95425). The small difference is attributable to the coarse meshing used for this simulation, which decreases overall precision. Note nonetheless that it is numerically rather remarkable that such a coarse mesh is able to handle the full optimisation problem with good accuracy, suggesting that the optimisation framework laid out in this paper enjoys a good level of robustness to coarse discretisation. The three plots in figure \ref{fig:k11_param} show the evolution of the criterion $J_{\vec{V}}(\mathscr{S}) = K_{11}$, the $\mathrm{L}^2$-norm of the deformation vector field $\| \vec{\theta} \|$ and the volume $V$ enclosed by $\mathscr{S}$ along the simulation, with a clear numerical convergence being observed. Of note, the value of $K_{11}$ is directly correlated to the volume $V$ of the body, making this particular problem extremely sensitive to volume variations. Unlike for the optimisation problems associated with other entries of the resistance tensor, the augmented Lagrangian algorithm with adaptive step described in the previous section was observed to induce instability and amplified volume oscillations, even with fine tuning of the parameters $\ell$ and $b$. For that reason, the algorithm was adapted for the results displayed in figure \ref{fig:k11}, empirically setting a fixed deformation step $\tau$ and Lagrange multiplier $\ell$ to obtain stability and convergence. The parameter values used in figure \ref{fig:k11} are $\tau = 10^{-3}$, $\ell = 98.8$, $b_0 = 10$, $b_{\mathrm{target}} = 500$ and $\alpha = 1.03$.
More generally, a good choice of augmented Lagrangian parameters is critical for observing convergence of the algorithm, and is highly dependent on the nature of the problem, therefore requiring \textit{ad hoc} tuning for each different objective function. Now, let us turn to the other entries of $\mathsfbi{R}$. Figure \ref{fig:shapes} gathers the results for six different objective functions. These results are, to the best of the authors' knowledge, fully novel and admit several interesting interpretations. Panels \ref{fig:shapes}(a) and \ref{fig:shapes}(c) display numerical results obtained for maximising the resistance coefficients $K_{11}$ and $Q_{11}$ -- formally, we can indifferently minimise or maximise the criterion $J$ by simply reversing the sign of the shape gradient in Equation \eqref{eq:shape_grad}. As could be expected, maximising the translational drag through $K_{11}$ has the effect of flattening the sphere along the $yz$-plane. Perhaps less intuitive is that the final shape presents a biconcavity evoking that of a red blood cell, and that similar characteristics are observed when maximising the torque-rotation coupling through $Q_{11}$. Of note, for these two situations, the algorithm was stopped due to overlapping of the surface at the centre of the biconcavity, visible in panel (c). On the other hand, the shape that minimises $Q_{11}$ can be seen in panel \ref{fig:shapes}(b). This time, the rotational drag for the sphere appears to be reduced by stretching the shape along the $x$-axis, until reaching a cylinder-like shape with nearly hemispherical extremities. The final shape strikingly evokes the shape of some bacteria species like \textit{Escherichia coli}, with this observation possibly being an argument supporting the importance of motility in explaining microorganism morphology, among many other factors \citep{yang2016staying,van2017determinants}.
\subsection{Extradiagonal parameters} Unlike the diagonal entries $K_{ii}$ and $Q_{ii}$ of the resistance tensor, the extradiagonal entries of the grand resistance tensor are not necessarily positive. In fact, a mirror symmetry along an appropriately chosen plane reverses the sign of extradiagonal entries. This observation implies that objects possessing certain planar symmetries have vanishing entries in their resistance tensor; in particular, all the extradiagonal entries of a sphere's resistance tensor are equal to zero. Importantly, these properties imply that the minimisation and maximisation problems are equivalent when choosing an extradiagonal entry as an objective: one can switch between the two by means of an appropriate planar symmetry. With this in mind, the bottom row of figure \ref{fig:shapes} displays results of the minimisation of extradiagonal coefficients of the resistance tensor. The optimal shape for $K_{12}$ can be seen in figure \ref{fig:shapes}(d). This may be interpreted as the shape that realises the best transmission from a force applied in one direction (here, along the $x$-axis) to a translation in a perpendicular direction (here, along the $y$-axis). The corresponding optimal shape presents a flattened aspect along the diagonal plane $x=y$, and is slightly thicker at the centre than at the edges. In the case of the optimisation of $Q_{12}$ (figure \ref{fig:shapes}(e)), an interesting ``dumbbell'' shape emerges. This can be understood when considering that maximising $Q_{12}$ amounts to achieving the best transmission of a torque applied in the $x$ direction to a rotation around the $y$ axis. The algorithm finds that the best way to do this is to separate the mass of the sphere into two smaller parts along a $y=x$ line.
Of note, convergence of the criterion was not observed when stopping the simulation here; indeed, with suitable remeshing provided, the algorithm would most likely continue indefinitely to spread the two extremities of the dumbbell further apart. Finally, one of the most interesting findings lies in the optimisation of the $C_{11}$ coefficient, shown in figure \ref{fig:shapes}(f). This parameter accounts for the coupling between torque and translation; hence optimising it means that we are looking for the shape that best converts a rotational effect into directional velocity. Helicoidal shapes are well known to be capable of achieving this conversion. More generally, $C_{11}$ is nonzero only if the shape possesses some level of chirality. Optimisation of $C_{11}$ was tackled for a particular class of shapes in \citet{keaveny2013optimization}, in the context of magnetic helicoidal swimmers. Considering slender shapes parametrised by a one-dimensional curve, they find that optimal shapes are given by regular helicoidal folding, with additional considerations on its pitch and radius depending on parameters and on the presence of a head. The family of optimal shapes, however, remained rather restrictive compared to our general setting. Starting from a spherical initial shape, which is notably achiral, we can observe in figure \ref{fig:shapes}(f) the striking emergence of four helicoidal wings, which tend to sharpen along the simulation. Again, for the shape displayed in figure \ref{fig:shapes}(f), the algorithm stopped because of overlapping of the mesh at the edges, and not because the norm of the deformation field converged to zero.
While appropriate handling of the narrow parts of the helix wings might make it possible to carry on the shape optimisation process and to observe further folding of the sphere into a long corkscrew-like shape, the observation itself of chirality emerging from an achiral initial structure through an optimisation process is already an arguably captivating result, echoing the widespread existence and importance of chirality among microswimmers, in particular as a possible means of producing robust directional locomotion within background flows \citep{wheeler2017use}. \section{Discussion and perspectives} \label{sec:discussion} In this paper, we have addressed the problem of optimal shapes for the resistance problem in a Stokes flow. Considering the entries of the grand resistance tensor as objective shape functionals to optimise, and using the framework of Hadamard boundary variation, we derived a general formula for the shape gradient, allowing one to define the best deformation to apply to any given shape. While this shape optimisation framework is mathematically standard, its usage in the context of microhydrodynamics is limited, mostly circumscribed to the work of \citet{keaveny2013optimization}, and the theoretical results and numerical scheme that we presented here provide a much higher level of generality, both concerning the admissible shapes and the range of objective functions. After validating the numerical capabilities of the shape optimisation algorithm by comparing the optimal shape for $K_{11}$ to the celebrated result of \citet{pironneau1973optimum}, we systematically investigated the shapes minimising and maximising entries of the resistance tensor. The numerical results reveal fascinating new insights on optimal hydrodynamic resistance.
In particular, we obtained an optimal profile for the torque drag ($Q_{11}$), observed the emergence of a chiral, helicoidal structure maximising the force/rotation coupling ($C_{11}$), and obtained other intriguing shapes when minimising extradiagonal entries. The potential of this framework is not limited to the examples displayed in the numerical results section. With most of the optimisation problems considered here being largely unconstrained and nonconvex, we can safely assume that many local extrema exist, and that a range of different results is likely to be observed for different initial shapes. As discussed above, finer handling of the surface mesh to deal with locally high curvature, sharp edges and cusps, and additional manufacturing constraints to prevent self-overlapping and take other criteria into account, are warranted to pursue this broader exploration. Furthermore, seeing as some of the shapes in figure \ref{fig:shapes} appear to take a torus-like profile from an initial spherical shape, it might be interesting to allow topological modifications of the shape along the optimisation process, which requires different approaches such as the level set method \citep{allaire2007conception}. As mentioned in Section \ref{sec:theoFramSO}, whilst being beyond the fluid mechanics scope of this paper, mathematical questions also arise from this study, such as a formal proof of the existence and uniqueness of minimisers for the optimisation problem \eqref{SOPgen-min}. In the context of low-Reynolds number hydrodynamics, our results provide novel perspectives to the fundamental problem of optimal hydrodynamic resistance for a rigid body, with the optimisation of some entries of the resistance tensor being performed for the first time. Beyond their theoretical interest, these results could help understand and refine some of the criteria that are believed to govern the morphology of microscopic bodies \citep{yang2016staying,van2017determinants}.
Furthermore, the computational structure of the optimisation problem is readily adaptable to more complex objective criteria defined as functions of entries of the grand resistance tensor, which allows us to tackle relevant quantities for various applications. A prototypical example would be to seek extremal values for the Bretherton constant $B$ \citep{Bretherton1962}, a geometrical parameter in the renowned Jeffery equations \citep{Jeffery1922}, which describe the behaviour of an axisymmetric object in a shear flow. As noted by \citet{ishimoto2020jeffery}, $B$ can be expressed as a rational function of seven distinct entries of the grand resistance tensor. For spheroids, $B$ lies between $-1$ and $1$, but nothing theoretically forbids it from being greater than $1$ or smaller than $-1$; yet exhibiting realistic shapes achieving such values is notoriously difficult \citep{Bretherton1962,Singh2013}. Further, another geometrical parameter $C$ was introduced for chiral helicoidal particles in \citet{ishimoto2020helicoidal}. This shape constant, now termed the Ishimoto constant \citep{ohmura2021near}, characterises the level of chirality and is useful to study bacterial motility in flow \citep{jing2020chirality}. Whilst this parameter can be expressed with respect to entries of the resistance tensor in a similar manner as $B$, very little is known about typical shapes for a given value of $C$, not to mention shapes optimising $C$. The framework developed in this paper provides a promising way of investigating these questions. Finally, various refinements of the Stokes problem \eqref{eq:stokes} can be fathomed to address other open problems in microhydrodynamics and microswimming. Dirichlet boundary conditions on the object surface, considered in this paper as well as in a vast part of the literature, may fail to properly describe the fluid friction arising at small scales, notably when dealing with complex biological surfaces.
Nonstandard boundary conditions such as the Navier conditions \citep{B616490K} are then warranted. Interestingly, the optimal drag problem for a rigid body, although long since resolved for Dirichlet conditions \citep{pironneau1973optimum}, is still open for Navier conditions. Seeking to further connect shape optimisation to efficient swimming at the microscale, one could also include some level of deformability of the object, which requires coupling the Stokes equation with an elasticity problem. A simple model in this spirit was recently introduced in the context of shape optimisation in \citet{calisti2021synthesis}. Another problem with biological relevance is the optimisation of hydrodynamic resistance when interacting with a more or less complex environment, such as a neighbouring wall or a channel, which is known to change locomotion strategies for microorganisms \citep{elgeti2016microswimmers}; overall, a dynamical, environment-sensitive shape optimisation study stemming from this paper's framework could provide key insights on microswimming and microrobot design. \backsection[Funding]{C.M. is a JSPS Postdoctoral Fellow (P22023), and acknowledges partial support by the Research Institute for Mathematical Sciences, an International Joint Usage/Research Center located at Kyoto University, and JSPS-KAKENHI Grant-in-Aid for JSPS Fellows (Grant No. 22F22023). K.I. acknowledges JSPS-KAKENHI for Young Researchers (Grant No. 18K13456), JSPS-KAKENHI for Transformative Research Areas (Grant No. 21H05309) and JST, PRESTO, Japan (Grant No. JPMJPR1921). Y.P. was partially supported by the ANR Project VirtualChest ANR-16-CE19-0014.} \backsection[Declaration of interests]{The authors report no conflict of interest.} \backsection[Author ORCID]{C. Moreau, https://orcid.org/0000-0002-8557-1149; K. Ishimoto, https://orcid.org/0000-0003-1900-7643; Y. Privat, https://orcid.org/0000-0002-2039-7223.}
\section{Introduction}\label{S:Intro} Let $n\geq 2$ be a fixed integer. We write $\mathbb{B}$ for the open unit ball and $\mathbb{S}$ for the unit sphere in $\mathbb{R}^n$. The closure of $\mathbb{B}$, which is the closed unit ball, is denoted by $\bar{\mathbb{B}}$. For any $x=(x_1,\ldots, x_n)$ in $\mathbb{R}^n$, we use $|x|$ to denote the Euclidean norm of $x$, that is, $|x|=(x_1^2+\cdots+x_n^2)^{1/2}$. Let $\nu$ be a regular Borel probability measure on $\mathbb{B}$ that is invariant under the action of the group of orthogonal transformations $O(n)$. Then there is a regular Borel probability measure $\mu$ on the interval $[0,1)$ so that the integration in polar coordinates formula $$\int_{\mathbb{B}}f(x)\mathrm{d}\nu(x)=\int_{[0,1)}\int_{\mathbb{S}}f(r\zeta)\mathrm{d}\sigma(\zeta)\mathrm{d}\mu(r)$$ holds for all functions $f$ that belong to $L^1(\mathbb{B},\nu)$. Here $\sigma$ is the unique $O(n)$-invariant regular Borel probability measure on the unit sphere $\mathbb{S}$. We are interested in measures $\nu$ whose support is not entirely contained in a compact subset of the unit ball, so we will assume throughout the paper that $\nu(\{x\in\mathbb{B}: |x|\geq r\})>0$ for all $0<r<1$. This is equivalent to the condition that $\mu([r,1))>0$ for all $0<r<1$. The (generalized) harmonic Bergman space $b^2_{\nu}$ is the space of all harmonic functions that also belong to the Hilbert space $L^2_{\nu}=L^2(\mathbb{B},\nu)$. It follows from the Poisson integral representation of harmonic functions and the assumption about $\nu$ that for any compact subset $K$ of $\mathbb{B}$, there is a constant $C_{K}$ such that \begin{equation}\label{Eqn:kernel} |u(x)|\leq C_K\|u\|=C_K\Big(\int_{\mathbb{B}}|u(x)|^2\mathrm{d}\nu(x)\Big)^{1/2} \end{equation} for all $x$ in $K$ and all $u$ in $b^2_{\nu}$. This implies that $b^2_{\nu}$ is a closed subspace of $L^2_{\nu}$ and that the evaluation map $u\mapsto u(x)$ is a bounded linear functional on $b^2_{\nu}$ for each $x$ in $\mathbb{B}$.
By the Riesz representation theorem, there is a function $R_x$ in $b^2_{\nu}$ so that $u(x)=\langle u, R_x\rangle$. The function $R(y,x):=R_x(y)$ for $x,y\in\mathbb{B}$ is called the reproducing kernel for $b^2_{\nu}$. Let $Q$ denote the orthogonal projection from $L^2_{\nu}$ onto $b^2_{\nu}$. For a bounded measurable function $f$ on $\mathbb{B}$, the Toeplitz operator $T_{f}: b^2_{\nu}\to b^2_{\nu}$ is defined by $$T_fu = QM_fu = Q(fu), \quad u\in b^2_{\nu}.$$ Here $M_f: L^2_{\nu}\to L^2_{\nu}$ is the operator of multiplication by $f$. The function $f$ is called the symbol of $T_f$. We also define the Hankel operator $H_f: b^2_{\nu}\to (b^2_{\nu})^{\perp}$ by $$H_fu = (1-Q)M_fu = (1-Q)(fu),\quad u\in b^2_{\nu}.$$ It is immediate that $\|T_f\|\leq\|f\|_{\infty}$ and $\|H_{f}\|\leq\|f\|_{\infty}$. For bounded measurable functions $f,g$ on $\mathbb{B}$, the following basic properties are immediate from the definitions of Toeplitz and Hankel operators: \begin{equation}\label{Eqn:ToeplitzHankel} T_{gf}-T_gT_f = H^{*}_{\bar{g}}H_{f}, \end{equation} and $$(T_g)^{*}=T_{\bar{g}},\quad\quad T_{af+bg}=aT_f+bT_g,$$ where $a, b$ are complex numbers and $\bar{g}$ denotes the complex conjugate of $g$. If $\mathrm{d}\nu(x)=\mathrm{d}V(x)$, where $V$ is the normalized Lebesgue volume measure on $\mathbb{B}$, then $b^2_{\nu}$ is the usual unweighted harmonic Bergman space. See \cite[Chapter 8]{AxlerSpringer2001} for more details about this space. If $\mathrm{d}\nu(x)=c_{\alpha}(1-|x|^2)^{\alpha}\mathrm{d}V(x)$, where $-1<\alpha<\infty$ and $c_{\alpha}$ is a normalizing constant, then $\nu$ is a weighted Lebesgue measure on $\mathbb{B}$ and $b^2_{\nu}$ is a weighted harmonic Bergman space. Compactness of certain classes of Toeplitz operators on these weighted harmonic Bergman spaces was considered by K. Stroethoff in \cite{StroethoffAuMS1998}. He also described the essential spectra of Toeplitz operators with uniformly continuous symbols.
He showed that if $f$ is a continuous function on the closed unit ball $\bar{\mathbb{B}}$, then the essential spectrum of $T_f$ is the same as the set $f(\mathbb{S})$. This result in the setting of unweighted harmonic Bergman spaces was obtained earlier by J. Miao \cite{MiaoIEOT1997}. More recently, B.R. Choe, Y.J. Lee and K. Na \cite{ChoeNMJ2004} showed that the above essential spectral formula remains valid for the unweighted harmonic Bergman space of any bounded domain with smooth boundary in $\mathbb{R}^n$. The common approach, which was used in all of the above papers, involved a careful estimate on the kernel function. In the case where $\nu$ is not a weighted Lebesgue measure, it seems that similar estimates are not available. Nevertheless, with a different approach, we still obtain the aforementioned essential spectral formula. Let $\mathfrak{T}$ denote the $C^{*}$-algebra generated by all Toeplitz operators $T_f$, where $f$ belongs to the space $C(\bar{\mathbb{B}})$ of continuous functions on the closed unit ball. Let $\mathfrak{CT}$ denote the two-sided ideal of $\mathfrak{T}$ generated by commutators $[T_f,T_g]=T_fT_g-T_gT_f$, for $f,g\in C(\bar{\mathbb{B}})$. In the case $n=2$ and $\nu$ the normalized Lebesgue measure on the unit disk, K. Guo and D. Zheng \cite{GuoJMAA2002} proved that $\mathfrak{CT}=\mathcal{K}$, the ideal of compact operators on $b^2_{\nu}$, and there is a short exact sequence $$0\rightarrow\mathcal{K}\rightarrow\mathfrak{T}\rightarrow C(\mathbb{S})\rightarrow 0.$$ They also proved that any Fredholm operator in the Toeplitz algebra $\mathfrak{T}$ has Fredholm index $0$. We will show that these results are in fact valid for all $n\geq 2$. The paper is organized as follows. In Section 2 we give some preliminaries. We then study Toeplitz operators with uniformly continuous symbols and establish the essential spectral formula in Section 3. The Toeplitz algebra and the associated short exact sequence are studied in Section 4.
We close the paper with a criterion for compactness of operators with more general symbols in Section 5. \section{Preliminaries} It is well known that the reproducing kernel $R(x,y)$ is symmetric and real-valued for $x,y\in\mathbb{B}$. From \eqref{Eqn:kernel} we see that for any compact subset $K$ of $\mathbb{B}$ and $x\in K$, \begin{equation}\label{Eqn:ineqkernel} R(x,x) = R_x(x)\leq C_K\|R_x\|=C_K(\langle R_x,R_x\rangle)^{1/2} = C_K(R(x,x))^{1/2}. \end{equation} This shows that $0\leq R(x,x)\leq C_{K}^2$ for $x\in K$. So the function $x\mapsto R(x,x)$ is bounded on compact subsets of $\mathbb{B}$. A polynomial $p$ in the variable $x$ with complex coefficients is homogeneous of degree $m$ (or $m$-homogeneous), where $m\geq 0$ is an integer, if $p(tx)=t^mp(x)$ for all non-zero real numbers $t$. We write $\mathcal{P}_{m}$ for the vector space of all $m$-homogeneous polynomials on $\mathbb{R}^n$. We use $\mathcal{H}_m$ to denote the subspace of $\mathcal{P}_m$ consisting of harmonic polynomials. The subspace $\mathcal{H}_m$ is finite dimensional and its dimension $h_m$ is given by $h_0=1, h_1=n$ and $h_m=\binom{n+m-1}{n-1}-\binom{n+m-3}{n-1}$ for $m\geq 2$. See \cite[Proposition 5.8]{AxlerSpringer2001}. For polynomials $p$ in $\mathcal{H}_m$ and $q$ in $\mathcal{H}_k$, using the orthogonality of their restrictions on the sphere \cite[Proposition 5.9]{AxlerSpringer2001} and integration in polar coordinates we obtain \begin{align}\label{Eqn:orthogonality} \langle p, q\rangle & = \Big(\int_{[0,1)}r^{m+k}\mathrm{d}\mu(r)\Big)\int_{\mathbb{S}}p\bar{q}\mathrm{d}\sigma\notag\\ & = \begin{cases} 0 & \text{ if } m\neq k\\ \Big(\int_{[0,1)}r^{2m}\mathrm{d}\mu(r)\Big)\int_{\mathbb{S}}p\bar{q}\mathrm{d}\sigma & \text{ if } m=k. \end{cases} \end{align} This shows that the spaces $\mathcal{H}_m$ for $m=0,1,\ldots$ are pairwise orthogonal.
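As a quick numerical sanity check of the dimension formula above (a Python illustration only; the specific values are not used in the arguments below), for $n=3$ the formula reproduces the classical count $2m+1$ of linearly independent spherical harmonics of degree $m$, while for $n=2$ it gives $h_m=2$ for every $m\geq 1$, corresponding to the span of $\operatorname{Re} z^m$ and $\operatorname{Im} z^m$:

```python
from math import comb

def h(n, m):
    """Dimension h_m of the space H_m of homogeneous harmonic
    polynomials of degree m in R^n (the formula stated in the text)."""
    if m == 0:
        return 1
    if m == 1:
        return n
    return comb(n + m - 1, n - 1) - comb(n + m - 3, n - 1)

print([h(3, m) for m in range(5)])  # [1, 3, 5, 7, 9]  (= 2m+1)
print([h(2, m) for m in range(5)])  # [1, 2, 2, 2, 2]
```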
On the other hand, \cite[Corollary 5.34]{AxlerSpringer2001} shows that if $u$ is a harmonic function on the unit ball $\mathbb{B}$, then there exist polynomials $p_m\in\mathcal{H}_m$ such that $u(x)=\sum_{m=0}^{\infty}p_m(x)$ for all $x$ in $\mathbb{B}$. The series converges uniformly on compact subsets of $\mathbb{B}$. Thus we have the orthogonal decomposition $b^2_{\nu} = \oplus_{m=0}^{\infty}\mathcal{H}_m$. We next present some other elementary results which we will use later in the paper. The following lemmas follow from the above orthogonal decomposition of $b^2_{\nu}$. Since these are well known results in functional analysis, we omit their proofs. \begin{lemma} \label{L:cpt} Suppose $A$ is a compact operator from $b^2_{\nu}$ into a Hilbert space $\mathcal{L}$. Then $\lim_{m\to\infty}\|A|_{\mathcal{H}_m}\|=0$. \end{lemma} The converse of Lemma \ref{L:cpt} is false in general. However, if additional conditions are imposed on the images of the subspaces $\mathcal{H}_m$ under $A$, the converse holds. \begin{lemma} \label{L:cptOperator} Suppose $A$ is an operator defined on the algebraic direct sum of the subspaces $\mathcal{H}_m$ into a Hilbert space $\mathcal{L}$ so that $A(\mathcal{H}_m)\perp A(\mathcal{H}_l)$ for all $m\neq l$ and $\lim_{m\to\infty}\|A|_{\mathcal{H}_m}\| = 0$. Then $A$ extends (uniquely) to a compact operator from $b^2_{\nu}$ into $\mathcal{L}$. \end{lemma} We will also need the following lemma, which is a special case of \cite[Lemma 2.4]{LePAMS2009}. \begin{lemma}\label{L:limitIntegrals} Suppose $\varphi$ is a bounded function on $[0,1)$ such that $\lim_{r\uparrow 1}\varphi(r)=\gamma$. Then $$\lim_{m\to\infty}\dfrac{\int_{[0,1)}\varphi(r)r^{2m}\mathrm{d}\mu(r)}{\int_{[0,1)}r^{2m}\mathrm{d}\mu(r)}=\gamma.$$ \end{lemma} \section{Toeplitz operators with uniformly continuous symbols} In this section we study Toeplitz operators whose symbols are continuous on the closed unit ball.
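Lemma \ref{L:limitIntegrals} can be illustrated numerically. The Python sketch below makes the illustrative choices $\mathrm{d}\mu(r)=2r\,\mathrm{d}r$ (the radial part of the normalized Lebesgue measure on the unit disk, $n=2$) and a test function with boundary limit $\gamma=1$; the measure, the test function and the midpoint quadrature are assumptions made only for this illustration, not part of the argument:

```python
def moment_ratio(phi, m, N=100000):
    """Midpoint-rule approximation of the weighted average
        ( integral of phi(r) r^(2m) dmu ) / ( integral of r^(2m) dmu )
    over [0,1), for the illustrative choice dmu(r) = 2 r dr."""
    step = 1.0 / N
    num = den = 0.0
    for k in range(N):
        r = (k + 0.5) * step
        w = 2.0 * r * step          # dmu(r) = 2 r dr
        rk = r ** (2 * m)
        num += phi(r) * rk * w
        den += rk * w
    return num / den

phi = lambda r: r * r + 0.5 * (1.0 - r)   # phi(r) -> 1 as r -> 1-
for m in (1, 10, 100, 1000):
    print(m, moment_ratio(phi, m))
# the averages approach the boundary value gamma = 1 as m grows
```

As $m$ increases, the weight $r^{2m}\,\mathrm{d}\mu(r)$ concentrates near $r=1$, which is exactly the mechanism behind the lemma.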
We show that the essential spectrum of such an operator is the set of values of its symbol on the unit sphere. For any integer $k\geq 0$, we denote the $2k$-th moment of $\mu$ by $\hat{\mu}(k)$, that is, $$\hat{\mu}(k)=\int_{[0,1)}r^{2k}\mathrm{d}\mu(r) = \int_{\mathbb{B}}|x|^{2k}\mathrm{d}\nu(x).$$ For any $u\in b^2_{\nu}$, write $u=\sum_{m=0}^{\infty}u_m$, where $u_m\in\mathcal{H}_m$ for $m\geq 0$. We then have \begin{align*} \|u\|^2 & = \sum_{m=0}^{\infty}\|u_m\|^2 = \sum_{m=0}^{\infty}\int_{\mathbb{B}}|u_m(x)|^2\mathrm{d}\nu(x)\\ & = \sum_{m=0}^{\infty}\int_{[0,1)}r^{2m}\mathrm{d}\mu(r)\int_{\mathbb{S}}|u_m(\zeta)|^2\mathrm{d}\sigma(\zeta)\\ & = \sum_{m=0}^{\infty}\hat{\mu}(m)\int_{\mathbb{S}}|u_m(\zeta)|^2\mathrm{d}\sigma(\zeta). \end{align*} This shows that the linear map $W: b^2_{\nu}\longrightarrow L^2(\mathbb{S})$ defined by $$W(u) = \sum_{m=0}^{\infty}(\hat{\mu}(m))^{1/2}\ u_m|_{\mathbb{S}}$$ is isometric. The restriction of an element in $\mathcal{H}_m$ to $\mathbb{S}$ is called a spherical harmonic of degree $m$. Theorem 5.12 in \cite{AxlerSpringer2001} shows that the span of all spherical harmonics is dense in $L^2(\mathbb{S})$. We then conclude that $W$ is a surjective isometry, hence a unitary operator. For a continuous function $f$ on the closed unit ball, let $f_{*}$ denote the restriction of $f$ on the unit sphere. Recall that the operator $M_f$ is the multiplication operator on $L^2_{\nu}$ with symbol $f$. As usual, we denote its restriction on $b^2_{\nu}$ by $M_f|_{b^2_{\nu}}$. We also write $M_{f_{*}}$ for the multiplication operator on $L^2(\mathbb{S})$ with symbol $f_{*}$. Using the above unitary, we establish the following connection between these two operators. \begin{theorem}\label{T:reduction} Let $f$ be in $C(\bar{\mathbb{B}})$. Then the operator $$M_f|_{b^2_{\nu}}-W^{*}M_{f_{*}}W: b^2_{\nu}\longrightarrow L^2_{\nu}$$ is compact. 
\end{theorem} \begin{proof} Let $$\mathcal{A}=\{f\in C(\bar{\mathbb{B}}): M_{f}|_{b^2_{\nu}}-W^{*}M_{f_{*}}W \text{ is compact}\}.$$ We need to show that $\mathcal{A}=C(\bar{\mathbb{B}})$. It is clear that $\mathcal{A}$ is a closed linear subspace of $C(\bar{\mathbb{B}})$. Now suppose $f,g$ are in $\mathcal{A}$. Then there are compact operators $K_f$ and $K_g$ from $b^2_{\nu}$ into $L^2_{\nu}$ so that $$M_{f}|_{b^2_{\nu}}=W^{*}M_{f_{*}}W+K_f\text{ and } M_{g}|_{b^2_{\nu}}=W^{*}M_{g_{*}}W+K_g.$$ Since the range of $W^{*}$ is $b^2_{\nu}$, we have $(1-Q)W^{*}M_{g_{*}}W=0$ (we recall here that $Q$ is the orthogonal projection from $L^2_{\nu}$ onto $b^2_{\nu}$). This implies that $(1-Q)M_{g}|_{b^2_{\nu}}=(1-Q)K_g$ and so we have \begin{align*} M_{fg}|_{b^2_{\nu}} & = M_fM_g|_{b^2_{\nu}} = M_f(1-Q)M_g|_{b^2_{\nu}}+M_fQM_g|_{b^2_{\nu}}\\ & = M_f(1-Q)K_g+(W^{*}M_{f_{*}}W+K_f)(W^{*}M_{g_{*}}W+QK_g)\\ & = W^{*}M_{f_{*}}WW^{*}M_{g_{*}}W+K\\ & = W^{*}M_{(fg)_{*}}W+K, \end{align*} where $K$ is compact. Thus, $fg\in\mathcal{A}$ if $f,g$ are in $\mathcal{A}$. We have shown that $\mathcal{A}$ is a closed subalgebra of $C(\bar{\mathbb{B}})$; it contains the constant function $1$, since $M_{1}|_{b^2_{\nu}}-W^{*}M_{1_{*}}W=0$. Since the coordinate functions $x_1,\ldots,x_n$ are real-valued and separate the points of $\bar{\mathbb{B}}$, the Stone--Weierstrass theorem shows that, to complete the proof of the theorem, it suffices to check that these coordinate functions belong to $\mathcal{A}$. By symmetry, we only need to check that $f(x)=x_1$ is in $\mathcal{A}$. For any integer $m\geq 0$ and $p\in\mathcal{H}_m$, the polynomial $fp$ is homogeneous of degree $m+1$. Since $\Delta^{2}(fp)=\Delta^{2}(x_1p(x))=0$, \cite[Theorem 5.21]{AxlerSpringer2001} shows that there is a unique decomposition $$M_{f}p = fp = p_{m+1}+|x|^2p_{m-1},$$ where $p_{m+1}\in\mathcal{H}_{m+1}$ and $p_{m-1}\in\mathcal{H}_{m-1}$ if $m\geq 1$ and $p_{m-1}=0$ if $m=0$.
Using integration in polar coordinates and the fact that restrictions of homogeneous harmonic polynomials of different degrees are orthogonal in $L^2(\mathbb{S})$, we obtain \begin{equation}\label{Eqn:multiplication} \|M_{f}p\|^2 = \|p_{m+1}\|^2 + \||x|^2p_{m-1}\|^2 = \|p_{m+1}\|^2 + \dfrac{\hat{\mu}(m+1)}{\hat{\mu}(m-1)}\|p_{m-1}\|^2. \end{equation} On the other hand, \begin{align*} W^{*}M_{f_{*}}W(p) & = (\hat{\mu}(m))^{1/2}W^{*}M_{f_{*}}(p|_{\mathbb{S}})\\ & = (\hat{\mu}(m))^{1/2}W^{*}(fp|_{\mathbb{S}})\\ & = (\hat{\mu}(m))^{1/2}W^{*}(p_{m+1}|_{\mathbb{S}}+p_{m-1}|_{\mathbb{S}})\\ & = \Big(\dfrac{\hat{\mu}(m)}{\hat{\mu}(m+1)}\Big)^{1/2}p_{m+1}+\Big(\dfrac{\hat{\mu}(m)}{\hat{\mu}(m-1)}\Big)^{1/2}p_{m-1}. \end{align*} Therefore, \begin{align}\label{Eqn:sumCptOperators} & (M_f|_{b^2_{\nu}}-W^{*}M_{f_{*}}W)(p)\\ &\quad\quad\quad = \Big\{1-\Big(\dfrac{\hat{\mu}(m)}{\hat{\mu}(m+1)}\Big)^{1/2}\Big\}p_{m+1}+\Big(|x|^2-\Big(\dfrac{\hat{\mu}(m)}{\hat{\mu}(m-1)}\Big)^{1/2}\Big)p_{m-1}.\notag \end{align} Now we define \begin{align*} A_1(p) & = \Big\{1-\Big(\dfrac{\hat{\mu}(m)}{\hat{\mu}(m+1)}\Big)^{1/2}\Big\}p_{m+1}, \text{ and }\\ A_2(p) & = \Big\{|x|^2-\Big(\dfrac{\hat{\mu}(m)}{\hat{\mu}(m-1)}\Big)^{1/2}\Big\}p_{m-1}, \end{align*} for $p\in\mathcal{H}_m$ and extend $A_1$ and $A_2$ by linearity to the algebraic direct sum of the subspaces $\mathcal{H}_m$, $m=0,1,\ldots$.
Using \eqref{Eqn:multiplication} and integration in polar coordinates, we have \begin{align*} \|A_1(p)\| & = \Big|1-\Big(\dfrac{\hat{\mu}(m)}{\hat{\mu}(m+1)}\Big)^{1/2}\Big|\|p_{m+1}\|\leq \Big\{\Big(\dfrac{\hat{\mu}(m)}{\hat{\mu}(m+1)}\Big)^{1/2}-1\Big\}\|p\|, \end{align*} \begin{align*} &\|A_2(p)\|^2\\ & = \int_{[0,1)}\Big(r^2-\big(\dfrac{\hat{\mu}(m)}{\hat{\mu}(m-1)}\big)^{1/2}\Big)^2 r^{2m-2}\mathrm{d}\mu(r)\int_{\mathbb{S}}|p_{m-1}(\zeta)|^2\mathrm{d}\sigma(\zeta)\\ & = \Big\{\hat{\mu}(m+1)+\hat{\mu}(m)-2\hat{\mu}(m)\big(\dfrac{\hat{\mu}(m)}{\hat{\mu}(m-1)}\big)^{1/2}\Big\}\big(\hat{\mu}(m-1)\big)^{-1}\|p_{m-1}\|^2\\ & \leq \Big\{\hat{\mu}(m+1)+\hat{\mu}(m)-2\hat{\mu}(m)\big(\dfrac{\hat{\mu}(m)}{\hat{\mu}(m-1)}\big)^{1/2}\Big\}\big(\hat{\mu}(m+1)\big)^{-1}\|p\|^2. \end{align*} From Lemma \ref{L:limitIntegrals}, applied with $\varphi(r)=r^2$, we have $\lim_{k\to\infty}\hat{\mu}(k+1)/\hat{\mu}(k) = 1$, and hence also $\lim_{k\to\infty}\hat{\mu}(k)/\hat{\mu}(k+1) = 1$. This implies that $\lim_{m\to\infty}\|A_j|_{\mathcal{H}_m}\|=0$ for $j=1,2$. On the other hand, by the orthogonality of homogeneous harmonic polynomials of different degrees when restricted to the sphere, we see that $A_j(\mathcal{H}_m)\bot A_j(\mathcal{H}_k)$ if $m\neq k$ for $j=1,2$. Lemma \ref{L:cptOperator} now shows that $A_1$ and $A_2$ extend to compact operators on $b^2_{\nu}$. From \eqref{Eqn:sumCptOperators}, $M_f|_{b^2_{\nu}}-W^{*}M_{f_{*}}W$, being a sum of two compact operators, is also compact. This completes the proof of the theorem. \end{proof} Theorem \ref{T:reduction} has important consequences that we now describe. Since the image of $W^{*}$ is contained in $b^2_{\nu}$, we have $QW^{*}=W^{*}$ and $(1-Q)W^{*}=0$. Theorem \ref{T:reduction} implies that the Toeplitz operator $T_{f}=QM_{f}|_{b^2_{\nu}}$ is a compact perturbation of $W^{*}M_{f_{*}}W$ and the Hankel operator $H_{f}=(1-Q)M_{f}|_{b^2_{\nu}}$ is compact for any $f$ in $C(\bar{\mathbb{B}})$.
Now for any bounded measurable function $g$ on $\mathbb{B}$, \eqref{Eqn:ToeplitzHankel} shows that both operators $T_{gf}-T_gT_f$ and $T_{gf}-T_{f}T_g$ are compact. Let us write $\mathcal{B}(b^2_{\nu})$ for the $C^{*}$-algebra of all bounded operators on $b^2_{\nu}$. Let $\mathcal{K}$ denote the ideal of all compact operators on $b^2_{\nu}$. For any bounded operator $A$, recall that the essential spectrum of $A$, denoted by $\sigma_{e}(A)$, is the spectrum of $A+\mathcal{K}$ in the quotient algebra $\mathcal{B}(b^2_{\nu})/\mathcal{K}$. The essential norm $\|A\|_{e}$ is the norm of $A+\mathcal{K}$ in $\mathcal{B}(b^2_{\nu})/\mathcal{K}$. If $f_{*}$ is the restriction of $f$ on the unit sphere $\mathbb{S}$, then $\sigma_{e}(M_{f_{*}})=f_{*}(\mathbb{S})=f(\mathbb{S})$ and $\|M_{f_{*}}\|_{e}=\sup\{|f_{*}(\zeta)|: \zeta\in\mathbb{S}\}=\sup\{|f(\zeta)|: \zeta\in\mathbb{S}\}$. Theorem \ref{T:reduction} now implies the following results, which were obtained earlier by Stroethoff \cite{StroethoffGMJ1997,StroethoffAuMS1998} for weighted spaces with a different approach. \begin{corollary}\label{C:essSpectra} Let $f$ be a uniformly continuous function and $g$ be a bounded measurable function on $\mathbb{B}$. Then $T_{gf}-T_gT_f$ and $T_{gf}-T_fT_g$ are compact operators, $\sigma_{e}(T_f)=f(\mathbb{S})$ and $\|T_f\|_{e}=\sup\{|f(\zeta)|: \zeta\in\mathbb{S}\}$. In particular, $T_f$ is compact if and only if $f$ vanishes on $\mathbb{S}$. \end{corollary} \section{The Toeplitz Algebra} We now turn our attention to the $C^{*}$-algebra $\mathfrak{T}$ generated by all Toeplitz operators $T_f$, where $f$ belongs to $C(\bar{\mathbb{B}})$. Our main result in this section is a description of this algebra as an extension of the compact operators $\mathcal{K}$ by continuous functions on the unit sphere. We begin by exhibiting a class of block diagonal operators in $\mathfrak{T}$. 
A function $f$ on the unit ball is called radial if there is a function $\varphi$ defined on the interval $[0,1)$ so that $f(x)=\varphi(|x|)$ for $\nu$-almost every $x$ in $\mathbb{B}$. The following lemma, which is Lemma 4.2 in \cite{StroethoffAuMS1998} in the case $\nu$ a weighted Lebesgue measure, shows that each $\mathcal{H}_m, m=0,1,\ldots$ is an eigenspace for $T_f$. For completeness we include here a proof. \begin{lemma} \label{L:radial} If $f$ is a bounded radial function on $\mathbb{B}$, then each non-zero homogeneous harmonic polynomial of degree $m\geq 0$ is an eigenvector of $T_f$ with eigenvalue given by \begin{equation*} \lambda_m = \dfrac{\int_{[0,1)}\varphi(r)r^{2m}\mathrm{d}\mu(r)}{\int_{[0,1)}r^{2m}\mathrm{d}\mu(r)}, \end{equation*} where $\varphi$ is a bounded function on the interval $[0,1)$ so that $f(x)=\varphi(|x|)$ for $\nu$-almost every $x\in\mathbb{B}$. \end{lemma} \begin{proof} For any homogeneous harmonic polynomials $p$ of degree $m$ and $q$ of degree $k$, using \eqref{Eqn:orthogonality} we have \begin{align*} \langle T_fp,q\rangle & = \langle fp,q\rangle = \int_{[0,1)}\int_{\mathbb{S}}f(r\zeta)p(r\zeta)\bar{q}(r\zeta)\mathrm{d}\sigma(\zeta)\mathrm{d}\mu(r)\\ & = \Big(\int_{[0,1)}\varphi(r)r^{m+k}\mathrm{d}\mu(r)\Big)\int_{\mathbb{S}}p\bar{q}\mathrm{d}\sigma\\ & = \begin{cases} 0 & \text{ if } m\neq k,\\ \Big(\int_{[0,1)}\varphi(r)r^{2m}\mathrm{d}\mu(r)\Big)\int_{\mathbb{S}}p\bar{q}\mathrm{d}\sigma & \text{ if } m=k \end{cases}\\ & =\lambda_{m}\langle p, q\rangle. \end{align*} Since the span of homogeneous harmonic polynomials is dense in $b^2_{\nu}$, we conclude that $T_fp = \lambda_{m}p$ for any $p$ in $\mathcal{H}_m$. \end{proof} Let $\eta(x)=|x|^2$ for $x$ in $\mathbb{B}$. From the lemma, each subspace $\mathcal{H}_m$ is an eigenspace for $T_{\eta}$ with corresponding eigenvalue $\gamma_{m}=\dfrac{\int_{[0,1)}{r^{2m+2}\mathrm{d}\mu(r)}}{\int_{[0,1)}r^{2m}\mathrm{d}\mu(r)}$. 
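To make the eigenvalues $\gamma_m$ concrete, suppose for illustration that $\nu$ is the normalized Lebesgue measure on the unit ball of $\mathbb{R}^n$, so that $\mathrm{d}\mu(r)=nr^{n-1}\,\mathrm{d}r$, $\hat{\mu}(k)=n/(n+2k)$ and hence $\gamma_m=(n+2m)/(n+2m+2)$. The Python sketch below (this special case only) computes these values exactly and confirms that they are strictly increasing and tend to $1$:

```python
from fractions import Fraction

def mu_hat(n, k):
    """Moment of mu: integral of r^(2k) over [0,1) against
    dmu(r) = n r^(n-1) dr, i.e. nu = normalized Lebesgue measure
    on the unit ball of R^n.  Evaluates to n/(n+2k)."""
    return Fraction(n, n + 2 * k)

def gamma(n, m):
    """Eigenvalue of T_{|x|^2} on the eigenspace H_m."""
    return mu_hat(n, m + 1) / mu_hat(n, m)   # = (n+2m)/(n+2m+2)

n = 3
gammas = [gamma(n, m) for m in range(25)]
assert all(a < b for a, b in zip(gammas, gammas[1:]))  # strictly increasing
print(float(gammas[0]), float(gammas[-1]))             # starts at 0.6, tends to 1
```

Exact rational arithmetic (`Fraction`) avoids any floating-point doubt about the strict inequalities.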
We show that these eigenvalues form a strictly increasing sequence. In particular, they are pairwise distinct. \begin{lemma} \label{L:inequality} For any integer $m\geq 0$ we have $$\dfrac{\int_{[0,1)}{r^{2m+2}\mathrm{d}\mu(r)}}{\int_{[0,1)}r^{2m}\mathrm{d}\mu(r)} < \dfrac{\int_{[0,1)}{r^{2(m+1)+2}\mathrm{d}\mu(r)}}{\int_{[0,1)}r^{2(m+1)}\mathrm{d}\mu(r)}.$$ \end{lemma} \begin{proof} Let $a(r)=r^{m}$ and $b(r)=r^{m+2}$ for $0\leq r<1$. The Cauchy--Schwarz inequality gives $$\Big(\int_{[0,1)} a(r)b(r)\mathrm{d}\mu(r)\Big)^2\leq\Big(\int_{[0,1)}a^2(r)\mathrm{d}\mu(r)\Big)\Big(\int_{[0,1)}b^2(r)\mathrm{d}\mu(r)\Big),$$ which is the required inequality except that it is not strict. Equality in the Cauchy--Schwarz inequality would force the ratio $b(r)/a(r)=r^2$ to be constant $\mu$-almost everywhere, that is, $\mu$ would be concentrated at a single point $r_0\in[0,1)$, contradicting the standing assumption that $\mu([r,1))>0$ for all $0<r<1$. \end{proof} We are now ready for the description of $\mathfrak{T}$, which is the main result in this section. \begin{theorem}\label{T:ToeplitzAlgebra} The following statements hold. (1) The commutator ideal $\mathfrak{CT}$ of $\mathfrak{T}$ is the same as the ideal $\mathcal{K}$ of compact operators on $b^2_{\nu}$. (2) Any element of $\mathfrak{T}$ has the form $T_f+K$ for some $f$ in $C(\bar{\mathbb{B}})$ and $K\in\mathcal{K}$, and there is a short exact sequence $$0\rightarrow\mathcal{K}\rightarrow\mathfrak{T}\rightarrow C(\mathbb{S})\rightarrow 0.$$ \end{theorem} Guo and Zheng \cite{GuoJMAA2002} proved Theorem \ref{T:ToeplitzAlgebra} for the case $n=2$ and $\nu$ the normalized Lebesgue measure. When $n$ is an even number and $\nu$ is the normalized Lebesgue measure, Theorem \ref{T:ToeplitzAlgebra} was proved by L. Coburn \cite{CoburnIMJ1973} for Toeplitz operators on the holomorphic Bergman space. The idea of our proof is similar to that of \cite[Theorem 1]{CoburnIMJ1973}. \begin{proof} We first show that $\mathfrak{T}$ is irreducible on $b^2_{\nu}$. Suppose $A$ is an operator on $b^2_{\nu}$ that commutes with all elements of $\mathfrak{T}$.
Then in particular, $A$ commutes with $T_{\eta}$. For any homogeneous harmonic polynomials $p$ of degree $m$ and $q$ of degree $k$, we have \begin{align*} \langle AT_{\eta}p,q\rangle & = \gamma_m\langle Ap,q\rangle,\\ \langle T_{\eta}Ap,q\rangle & = \langle Ap, T_{\bar{\eta}}q\rangle = \langle Ap, T_{\eta}q\rangle=\gamma_k\langle Ap,q\rangle. \end{align*} Since $AT_{\eta}=T_{\eta}A$ and $\gamma_m\neq\gamma_k$ if $m\neq k$, we conclude that $\langle Ap,q\rangle=0$ if $m\neq k$. This implies that each subspace $\mathcal{H}_m$ is invariant under $A$, hence also reducing for $A$. In particular, $\mathcal{H}_{0}$ reduces $A$. But $\mathcal{H}_{0}$ is a one-dimensional space spanned by the constant function $e_{0}(x)=1$, so we have $Ae_{0}=\lambda e_{0}$ for some scalar $\lambda$. For each harmonic polynomial $p$, we have $$A(p) = AT_{p}(e_{0}) = T_{p}A(e_{0})=\lambda T_{p}(e_{0}) = \lambda p.$$ Since the space of harmonic polynomials is dense in $b^2_{\nu}$, we conclude that $A = \lambda I_{b^2_{\nu}}$. Thus $\mathfrak{T}$ is irreducible. Now Corollary \ref{C:essSpectra} shows that $\mathfrak{T}$ contains a non-zero compact operator (for example $T_{1-|x|^2}$). It then follows from a well known result in $C^{*}$-algebra theory \cite[Theorem 5.39]{Douglas1972} that $\mathfrak{T}$ contains the ideal $\mathcal{K}$ of compact operators. Therefore the commutator ideal $\mathfrak{CT}$ contains the commutator ideal of $\mathcal{K}$, which is the same as $\mathcal{K}$. On the other hand, for any functions $f, g$ in $C(\bar{\mathbb{B}})$, the commutator $T_fT_g - T_g T_f = (T_fT_g-T_{fg})-(T_gT_f-T_{fg})$ is compact by Corollary \ref{C:essSpectra} again. This implies the inclusion $\mathfrak{CT}\subseteq\mathcal{K}$, which completes the proof of statement (1). For the proof of statement (2), consider the map $\Phi:$ $f\mapsto T_{f}+\mathcal{K}$ from $C(\bar{\mathbb{B}})$ into the quotient algebra $\mathfrak{T}/\mathcal{K} = \mathfrak{T}/\mathfrak{CT}$.
It is clear that $\Phi$ is a $*$-homomorphism of $C^{*}$-algebras. By a result from the theory of $C^{*}$-algebras, the range of $\Phi$ is a closed $C^{*}$-subalgebra. On the other hand, it follows from the definition of $\mathfrak{T}$ that the range of $\Phi$ is dense. Hence, $\Phi$ is a surjective $*$-homomorphism and we have $\mathfrak{T} = \{T_{f}+K: f\in C(\bar{\mathbb{B}}) \text{ and } K\in\mathcal{K}\}.$ From Corollary \ref{C:essSpectra}, the kernel of $\Phi$ is the ideal $\{f\in C(\bar{\mathbb{B}}): f|_{\mathbb{S}}\equiv 0\}$. Since the quotient of $C(\bar{\mathbb{B}})$ by this ideal is naturally isometrically $*$-isomorphic to $C(\mathbb{S})$, $\Phi$ induces a $*$-isomorphism $\tilde{\Phi}: C(\mathbb{S})\longrightarrow\mathfrak{T}/\mathcal{K}.$ This shows that the sequence $$0\rightarrow \mathcal{K}\xrightarrow{\iota}\mathfrak{T}\xrightarrow{\tilde{\Phi}^{-1}\circ\pi} C(\mathbb{S})\rightarrow 0$$ is exact. Here $\iota$ is the inclusion map and $\pi:\mathfrak{T}\longrightarrow\mathfrak{T}/\mathcal{K}$ is the projection map. \end{proof} The following corollary shows that any Fredholm operator in $\mathfrak{T}$ has index zero. This generalizes Guo and Zheng's result to higher dimensions. \begin{corollary} Let $A$ be a Fredholm operator in $\mathfrak{T}$. Then $\ind(A)=0$. \end{corollary} \begin{proof} By Theorem \ref{T:ToeplitzAlgebra}, there is a function $f\in C(\bar{\mathbb{B}})$ and a compact operator $K_1$ so that $A=T_f+K_1$. By the remark after Theorem \ref{T:reduction}, there is a compact operator $K_2$ so that $T_{f}=W^{*}M_{f_{*}}W+K_2$, where $f_{*}$ is the restriction of $f$ on $\mathbb{S}$. Since $A$ is a Fredholm operator, $M_{f_{*}}$ is a Fredholm operator on $L^2(\mathbb{S})$ and these two operators have the same index. But $M_{f_{*}}$ is Fredholm if and only if $f_{*}$ does not vanish on $\mathbb{S}$. In this case, $M_{f_{*}}$ is invertible (with inverse $M_{1/f_{*}}$) and hence its index is $0$. The proof of the corollary is thus completed.
\end{proof} \section{Toeplitz Operators with General Symbols} In this section we consider Toeplitz operators with more general symbols. We present some necessary conditions for the compactness of $T_{f}$, where $f$ is assumed to be bounded. As a result, we show that if $f$ is a bounded harmonic function on $\mathbb{B}$ and $T_f$ is compact, then $f$ is the zero function. We begin with a well known result that Toeplitz operators whose symbols have zero limit at the boundary of the unit ball are compact. The proof is based on the boundedness of the kernel function on compact subsets. \begin{proposition} \label{P:cptMultiplication} If $f$ is a bounded (not necessarily continuous) function on $\mathbb{B}$ so that $\lim_{|x|\uparrow 1}f(x)=0$, then $M_f|_{b^2_{\nu}}$ is compact. As a consequence, the Toeplitz operator $T_f$ is compact on $b^2_{\nu}$. \end{proposition} \begin{proof} For any $0<r<1$, let $\mathbb{B}_r=\{x\in\mathbb{R}^n: |x|\leq r\}$ and let $f_{r}=f\chi_{\mathbb{B}_r}$. It follows from the hypothesis that $\|f-f_r\|_{\infty}\to 0$ as $r\uparrow 1$. Therefore, $\|M_f-M_{f_r}\|\to 0$ as $r\uparrow 1$. We now show that $M_{f_r}$ is a Hilbert-Schmidt operator on $b^2_{\nu}$ for $0<r<1$. Let $e_0, e_1, \ldots$ be an orthonormal basis for $b^2_{\nu}$. We have \begin{align}\label{Eqn:cptMultiplication} \sum_{j=0}^{\infty}\|M_{f_r}e_j\|^2 & = \sum_{j=0}^{\infty}\int_{\mathbb{B}}|f_r(x)|^2|e_j(x)|^2\mathrm{d}\nu(x)\notag\\ & = \int_{\mathbb{B}}|f_r(x)|^2\sum_{j=0}^{\infty}|e_j(x)|^2\mathrm{d}\nu(x)\\ & = \int_{\mathbb{B}_r}|f(x)|^2R(x,x)\mathrm{d}\nu(x).\notag \end{align} The last equality follows from the well known formula for the reproducing kernel function: $$R(x,x)=\|R_x\|^2 = \sum_{j=0}^{\infty}|\langle R_x,e_j\rangle|^2=\sum_{j=0}^{\infty}|e_j(x)|^2.$$ Since $R(x,x)$ is bounded for $x$ in $\mathbb{B}_r$ by \eqref{Eqn:ineqkernel}, the last integral in \eqref{Eqn:cptMultiplication} is finite.
This shows that the operator $M_{f_r}|_{b^2_{\nu}}$ is a Hilbert-Schmidt operator on $b^2_{\nu}$. Therefore, $M_f|_{b^2_{\nu}}$, which is the norm limit of a net of compact operators, is compact. Since $T_f = PM_f|_{b^2_{\nu}}$, $T_f$ is also compact. \end{proof} The following proposition offers a necessary condition for a Toeplitz operator to be compact. \begin{proposition} \label{P:cptToeplitz} For each integer $m\geq 0$, let $\varphi_{m}$ be a positive function of the form $\varphi_{m}(x) = \sum_{j=1}^{s_m}|a_{j}^{(m)}(x)|^2,$ where $s_m$ is a positive integer and $a_j^{(m)}\in\mathcal{H}_m$ for $1\leq j\leq s_m$. Suppose $f$ is a bounded function on $\mathbb{B}$ so that $T_f$ is a compact operator on $b^2_{\nu}$. Then we have \begin{equation}\label{Eqn:cptLimita} \lim_{m\to\infty}\dfrac{\int_{\mathbb{B}}f(x)\varphi_{m}(x)\mathrm{d}\nu(x)}{\int_{\mathbb{B}}\varphi_m(x)\mathrm{d}\nu(x)}=0. \end{equation} In particular, \begin{equation}\label{Eqn:cptLimitb} \lim_{m\to\infty}\dfrac{\int_{[0,1)}\big(\int_{\mathbb{S}}f(r\zeta)\mathrm{d}\sigma(\zeta)\big)r^{2m}\mathrm{d}\mu(r)}{\int_{[0,1)}r^{2m}\mathrm{d}\mu(r)}=0. \end{equation} \end{proposition} \begin{proof} Let $\epsilon>0$ be given. By Lemma \ref{L:cpt}, there is an integer $m_{\epsilon}>0$ so that for all integers $m\geq m_{\epsilon}$ and $p\in\mathcal{H}_{m}$, we have $\|T_fp\|\leq\epsilon\|p\|$. In particular, $\|T_fa_j^{(m)}\|\leq\epsilon\|a_j^{(m)}\|$ for $1\leq j\leq s_m$ for each such $m$. Now, \begin{align*} \Big|\int_{\mathbb{B}}f(x)\varphi_m(x)\mathrm{d}\nu(x)\Big| & = \Big|\sum_{j=1}^{s_m}\int_{\mathbb{B}}f(x)a_j^{(m)}(x)\bar{a}_{j}^{(m)}(x)\mathrm{d}\nu(x)\Big| \\ & = \Big|\sum_{j=1}^{s_m}\langle T_fa_j^{(m)},a_j^{(m)}\rangle\Big|\\ & \leq\sum_{j=1}^{s_m}\|T_fa_j^{(m)}\|\|a_j^{(m)}\|\\ & \leq\sum_{j=1}^{s_m}\epsilon\|a_j^{(m)}\|^2 = \epsilon\int_{\mathbb{B}}\varphi_m(x)\mathrm{d}\nu(x). 
\end{align*} Therefore, $$\dfrac{|\int_{\mathbb{B}}f(x)\varphi_m(x)\mathrm{d}\nu(x)|}{\int_{\mathbb{B}}\varphi_m(x)\mathrm{d}\nu(x)}\leq\epsilon$$ for all $m\geq m_{\epsilon}$ and hence \eqref{Eqn:cptLimita} follows. Let $e^{(m)}_1, \ldots, e^{(m)}_{h_m}$ be an orthonormal basis for $\mathcal{H}_m$ and define $$R_m(x,y)=\sum_{j=1}^{h_m}e^{(m)}_j(x)\bar{e}^{(m)}_{j}(y)$$ for $x,y\in\mathbb{B}$. Then $R_m$ is the reproducing kernel for $\mathcal{H}_m$ and since $\mathcal{H}_{m}$ is invariant under the action of the group of orthogonal transformations $O(n)$, $R_m(Ty,x)=R_m(y,T^{-1}x)$ for all $T\in O(n)$ and all $x, y$ in $\mathbb{B}$. The proof of these assertions is the same as the proof of \cite[Proposition 5.27]{AxlerSpringer2001}. For $x\in\mathbb{B}$ and $T\in O(n)$, we have $R_m(Tx,Tx)=R_m(x,x)$. This implies that $R_m(x,x)=d_m|x|^{2m}$ for some constant $d_m$, which shows that $|x|^{2m} = d_m^{-1}\sum_{j=1}^{h_m}|e_j^{(m)}(x)|^2$. By choosing $\varphi_m(x)=|x|^{2m}$ and using integration in polar coordinates, we obtain \begin{align*} \int_{\mathbb{B}}f(x)\varphi_m(x)\mathrm{d}\nu(x) & =\int_{\mathbb{B}}f(x)|x|^{2m}\mathrm{d}\nu(x) = \int_{[0,1)}\Big(\int_{\mathbb{S}}f(r\zeta)\mathrm{d}\sigma(\zeta)\Big)r^{2m}\mathrm{d}\mu(r),\\ \int_{\mathbb{B}}\varphi_m(x)\mathrm{d}\nu(x)& =\int_{\mathbb{B}}|x|^{2m}\mathrm{d}\nu(x) = \int_{[0,1)}r^{2m}\mathrm{d}\mu(r). \end{align*} The limit \eqref{Eqn:cptLimitb} now follows from \eqref{Eqn:cptLimita}. \end{proof} \begin{theorem}\label{T:cptToeplitz} Suppose $f$ is a bounded function on $\mathbb{B}$ so that the radial limit $f_{*}(\zeta)=\lim_{r\uparrow 1}f(r\zeta)$ exists for $\sigma$-almost every $\zeta$ on $\mathbb{S}$. If $T_f$ is a compact operator on $b^2_{\nu}$, then $f_{*}(\zeta)=0$ for $\sigma$-almost every $\zeta\in\mathbb{S}$.
\end{theorem} \begin{proof} For any multi-index $\alpha=(\alpha_1,\ldots,\alpha_n)\in\mathbb{N}_{0}^n$, the compactness of $T_f$ together with Corollary \ref{C:essSpectra} shows that $T_{f(x)x^{\alpha}}$ is a compact operator (here $x^{\alpha}=x_1^{\alpha_1}\cdots x_n^{\alpha_n}$). By Proposition \ref{P:cptToeplitz}, we have $$\lim\limits_{m\to\infty}\dfrac{\int_{[0,1)}\big(\int_{\mathbb{S}}f(r\zeta)r^{\alpha_1+\cdots+\alpha_n}\zeta^{\alpha}\mathrm{d}\sigma(\zeta)\big)r^{2m}\mathrm{d}\mu(r)}{\int_{[0,1)}r^{2m}\mathrm{d}\mu(r)}=0.$$ On the other hand, the hypothesis of the theorem together with the Dominated Convergence Theorem gives $$\lim_{r\uparrow 1} \int_{\mathbb{S}}f(r\zeta)r^{\alpha_1+\cdots+\alpha_n}\zeta^{\alpha}\mathrm{d}\sigma(\zeta)= \int_{\mathbb{S}}f_{*}(\zeta)\zeta^{\alpha}\mathrm{d}\sigma(\zeta).$$ It now follows from Lemma \ref{L:limitIntegrals} that $\int_{\mathbb{S}}f_{*}(\zeta)\zeta^{\alpha}\mathrm{d}\sigma(\zeta)=0$. Since this holds for any multi-index $\alpha$, we conclude that $f_{*}(\zeta)=0$ for $\sigma$-almost every $\zeta$ in $\mathbb{S}$. \end{proof} We say that a bounded function $f$ defined on $\mathbb{B}$ has a \textit{uniform radial limit} if there exists a function $f_{*}$ on $\mathbb{S}$ such that $$\lim_{r\uparrow 1}\Big(\sup\{|f(r\zeta)-f_{*}(\zeta)|: \zeta\in\mathbb{S}\}\Big)=0.$$ The function $f_{*}$ will be called the uniform radial limit of $f$. It is clear that any function $f$ in $C(\bar{\mathbb{B}})$ has a uniform radial limit, namely $f|_{\mathbb{S}}$. On the other hand, if $f_{*}$ is a bounded function on $\mathbb{S}$ and we define \begin{equation}\label{Eqn:extension} \varphi(x)=\begin{cases} |x|f_{*}(\frac{x}{|x|}) & \text{ if } 0<|x|\leq 1,\\ 0 & \text{ if } x=0, \end{cases} \end{equation} then it can be checked that $f_{*}$ is the uniform radial limit of $\varphi$. Moreover, if $f_{*}$ is continuous on $\mathbb{S}$, then $\varphi$ is continuous on $\bar{\mathbb{B}}$. 
Using Proposition \ref{P:cptMultiplication} we extend Corollary \ref{C:essSpectra} to functions with continuous uniform radial limits. \begin{corollary}\label{C:strongEssSpectra} Let $f$ be a bounded function on $\mathbb{B}$ with uniform radial limit $f_{*}$ on $\mathbb{S}$. Assume that $f_{*}$ is continuous on $\mathbb{S}$. Let $g$ be a bounded function on $\mathbb{B}$. Then $T_{gf}-T_gT_f$ and $T_{gf}-T_fT_g$ are compact operators, $\sigma_{e}(T_f)=f_{*}(\mathbb{S})$ and $\|T_f\|_{e}=\sup\{|f_{*}(\zeta)|: \zeta\in\mathbb{S}\}$. In particular, $T_f$ is compact if and only if $f_{*}$ vanishes on $\mathbb{S}$. \end{corollary} \begin{proof} Let $\varphi$ be defined as in \eqref{Eqn:extension}. Then $\varphi$ belongs to $C(\bar{\mathbb{B}})$ and we have $\lim_{|x|\uparrow 1}|f(x)-\varphi(x)|=0$. Proposition \ref{P:cptMultiplication} shows that $T_{f}-T_{\varphi}$ and $T_{gf}-T_{g\varphi}$ are compact. The conclusions now follow from Corollary \ref{C:essSpectra}, which says $T_{g\varphi}-T_gT_{\varphi}$ and $T_{g\varphi}-T_{\varphi}T_g$ are compact operators, $\sigma_{e}(T_{\varphi})=\varphi(\mathbb{S})=f_{*}(\mathbb{S})$ and $\|T_{\varphi}\|_{e}=\sup\{|\varphi(\zeta)|: \zeta\in\mathbb{S}\}=\sup\{|f_{*}(\zeta)|: \zeta\in\mathbb{S}\}$. \end{proof} For functions whose uniform radial limits may not be continuous, we are unable to decide the validity of all conclusions in Corollary \ref{C:strongEssSpectra} but we do obtain a characterization for compactness. \begin{corollary}\label{C:cptRadialLimit} Suppose $f$ is a bounded function on $\mathbb{B}$ with uniform radial limit $f_{*}$ on $\mathbb{S}$. Then $T_f$ is compact if and only if $f_{*}(\zeta)=0$ for $\sigma$-almost every $\zeta$ in $\mathbb{S}$. \end{corollary} \begin{proof} The ``if'' part follows from Proposition \ref{P:cptMultiplication} and the ``only if'' part follows from Theorem \ref{T:cptToeplitz}. 
\end{proof} Our last result in the paper is the fact that there is no non-zero compact Toeplitz operator with harmonic symbol. This was proved earlier by Choe, Koo and Na in \cite{ChoeRMJM2010} with a different approach for the case where $\nu$ is the normalized Lebesgue measure. \begin{corollary} If $f$ is a bounded harmonic function on $\mathbb{B}$ such that $T_f$ is a compact operator on $b^2_{\nu}$, then $f(x)=0$ for all $x\in\mathbb{B}$. \end{corollary} \begin{proof} It is well known (see \cite[Theorems 6.13 and 6.39]{AxlerSpringer2001}) that the radial limit $f_{*}(\zeta)=\lim_{r\uparrow 1}f(r\zeta)$ exists for $\sigma$-almost every $\zeta$ on $\mathbb{S}$ and $f$ is the Poisson integral of $f_{*}$. Since $T_f$ is assumed to be compact, Theorem \ref{T:cptToeplitz} shows that $f_{*}(\zeta)=0$ for $\sigma$-almost every $\zeta$ in $\mathbb{S}$. Thus $f(x)=0$ for all $x$ in $\mathbb{B}$. \end{proof}
\section{Introduction} Extrasolar Giant Planets (EGPs) are now being discovered at an accelerating pace \citep{schneider,butler}. In particular, an increasing interest has been focused on hot Jupiters that transit their parent stars, since they represent a valuable tool to determine key physical parameters of the EGPs, such as atmospheric composition and dynamics, thermal properties and presence of condensates \citep{seager}. The most studied transiting extrasolar planet, HD209458b, orbits a main sequence G-type star at 0.046 AU (period 3.52 days). It is the first one for which repeated transits across the stellar disk were observed ($\sim$ 1.6\% absorption; \citet{henry,charbonneau}). Combined with radial velocity measurements \citep{mazeh}, these observations made it possible to determine its mass and radius ($M_{p} \sim 0.69 \, M_{Jup}$, $R_{p} \sim 1.4 \, R_{Jup}$), confirming the planet is a gas giant, with one of the lowest densities discovered so far. Owing to this property, its very extended atmosphere is one of the best candidates to be probed with transit techniques. In particular, the upper atmosphere extends beyond the Roche lobe, showing a population of escaping atoms. This discovery was made possible by the observed extraordinarily deep absorptions in HI, OI and CII over the stellar emission lines (15\%, 13\% and 7.5\% respectively; \citet{alfred1,alfred2}). The numerous follow-up observations of HD209458b also include detections of, and upper limits on, absorption features in the deeper atmosphere \citep{charbonneaua, richardsona, richardsonb, demingb, richardsonc}. Most recently, \citet{deminga} detected the thermal emission of this planet with the Spitzer Space Telescope during a secondary transit in the 24 $\mu$m band and \citet{richardsonc} detected the first primary eclipse in the same band.
To explain these observations, several models have been proposed, addressing atmospheric photochemistry, thermal properties, 3-D circulation, cloud and condensate heights, and escape processes (e.g. \citet{fortney,burrows,liang1,lecav,yelle,tian,iro,seagerb}). For these reasons, the extrasolar planet HD209458b is the best known so far. Additional observations are required, though, to constrain the past, present and future modeling effort. The planet HD189733b, recently discovered by \citet{bouchy}, with mass $M_{p} \sim 1.15 \, M_{Jup}$ and $R_{p} \sim 1.26 \, R_{Jup}$, orbits an early main sequence K star at 0.0313 AU. It is the transiting exoplanet with the brightest and closest parent star discovered so far. Here we focus our interest on both planets HD209458b and HD189733b, and we model the spectral absorption features in the Mid-Infrared (MIR) due to the most abundant atmospheric molecules during their \emph{primary eclipse}, i.e., when the planet passes in front of the parent star. The use of transmission spectroscopy to probe the upper layers of the transiting EGPs has been particularly successful in the UV and visible spectral ranges \citep{charbonneaua,richardsona,richardsonb,demingb,alfred1,alfred2}, and only very recently attempted in the MIR, in the 24 $\mu$m band, using Spitzer observations \citep{richardsonc}. Two circumstances make it an approach worth considering. First, the high surface temperatures, relatively small masses, and mostly hydrogen atmospheres of close-in EGPs imply large atmospheric scale heights. As a consequence, for spectral features that span a reasonably wide bandwidth, so that the total photon flux is not too small, this is a feasible and diagnostically powerful technique. Second, this is a complementary approach to the secondary eclipse observations.
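The scale-height argument can be made quantitative with a back-of-the-envelope estimate (a hedged sketch: the temperature, mean molecular weight and gravity used below are assumed representative values for a close-in EGP such as HD209458b, not results of this paper):

```latex
% Pressure scale height of an atmosphere in hydrostatic equilibrium.
% Assumed illustrative values: T ~ 1300 K, mu ~ 2.3 for an H2-dominated
% gas, and g ~ 9 m s^{-2} from M_p ~ 0.69 M_Jup, R_p ~ 1.4 R_Jup.
\[
  H \;=\; \frac{k_{B}\,T}{\mu\, m_{u}\, g}
    \;\approx\; \frac{(1.38\times 10^{-23}\,\mathrm{J\,K^{-1}})\,(1300\,\mathrm{K})}
                     {(2.3)\,(1.66\times 10^{-27}\,\mathrm{kg})\,(9\,\mathrm{m\,s^{-2}})}
    \;\approx\; 5\times 10^{5}\,\mathrm{m},
\]
% roughly twenty times the ~25 km scale height of Jupiter, which is why
% close-in EGPs are favorable targets for transmission spectroscopy.
```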
Transmission spectroscopy, taken during primary eclipse, is sensitive to different parameters and regions of the atmosphere compared to emission spectroscopy, on which the secondary eclipse method is based. \paragraph{Oxygen versus Carbon $\to$ H$_2$O versus CO} In a solar system like ours, a significant amount of water vapor (H$_2$O) can exist only in planetary atmospheres at orbital distances less than 1 AU. This requirement is certainly met for the known transiting EGPs. Carbon monoxide (CO) and methane (CH$_{4}$), and other photochemical products, such as carbon dioxide (CO$_{2}$) and acetylene (C$_{2}$H$_{2}$), are plausibly present in the atmospheres of EGPs, and possibly abundant enough to be detected. These species have strong absorption bands in the MIR and, more importantly, in spectral regions compatible with present and future space-based observatories such as the Spitzer Space Telescope or the James Webb Space Telescope \citep{jwst}. Given O and C, the H$_{2}$O and CO abundances will be controlled mainly by the relative abundances of these two elements. \begin{itemize} \item If the C/O ratio is close to the solar value, the H$_{2}$O, CO and CH$_{4}$ abundances are determined by the thermodynamic equilibrium chemistry in the deep atmosphere \citep{liang1,liang2}. \item If the C/O ratio is above solar, the atmospheric chemistry might change dramatically and, according to the scenario proposed by \citet{kuchner}, planets should show a significant paucity of water vapor in their atmospheres, while carbon-rich species should, by contrast, be enhanced. In particular, CO is expected to be the dominant carbon-bearing molecule at high temperatures and CH$_4$ the dominant one at low temperatures \citep{kuchner}. \item If the C/O ratio is below solar, the atmosphere is depopulated of carbon-bearing molecules and water vapor is the dominant species between $\sim 10^{-10}$ and 1.5 bars.
\end{itemize} \section{Description of the Model} We have built a model of the planetary atmosphere and calculated the expected absorption of the stellar light when filtered through the planetary atmospheric layers. This approach has already been discussed in the literature. In particular, for our simulations we have used the geometry and the equations described in \citet{brown} (fig. 1, configuration 2) and \citet{david} (sec. 2.1, fig. 1). Our cloud- and haze-free atmospheres were divided into forty layers spanning from $\sim 10^{-10}$ to 1 bar. Photochemical models are used to determine the molecular abundances of 33 species above the $\sim$1 bar level. We start with four parent molecules: H$_2$, CO, H$_2$O, and CH$_4$. Their relative abundances are determined by thermochemistry in the deep atmosphere, and are fixed as our lower boundary condition. Chemical reactions and eddy mixing profiles are taken from \citet{liang1,liang2}. Details of the model can be found in \citet{liang1,liang2} and references therein. For the simulation of the photochemistry of HD209458b, we adopt the solar spectrum. For HD189733b we use the spectrum of HD22049, which is a K2V star similar to HD189733 \citep{segura}. We have repeated our calculations for three temperature-pressure profiles (fig. \ref{model} right), to test the sensitivity of our results to these assumptions. The modeled chemical abundances show a negligible dependence on temperature (see \citet{liang2}). The absorption coefficients in the MIR were estimated using a line-by-line model, LBLABC \citep{meadows}, which generates monochromatic gas absorption coefficients from molecular line lists (HITEMP; \citet{rothman}) for each of the gases present in the atmosphere. \section{Results} Fig. \ref{model} (left) shows the molecular profiles of H$_{2}$O, CO, CH$_{4}$, CO$_{2}$ and C$_{2}$H$_{2}$ for both planets HD189733b and HD209458b, calculated by the photochemistry model with a solar C/O ratio as boundary condition.
On HD189733b, H$_{2}$O, CH$_{4}$ and C$_{2}$H$_{2}$ are more abundant in the upper atmosphere than on HD209458b: since HD189733 is a later-type star, the photodissociation processes occurring in the atmosphere of its planet are less significant. \emph{Sensitivity to molecular abundances and C/O ratio.} Figs.~\ref{ratio} show the predicted absorption signatures due to water vapor and CO on the planets HD189733b and HD209458b. The three plots compare the spectral absorptions of these two species when the C/O ratio is solar (standard case, solid line; see fig. \ref{model} left for mixing ratios) and when it is below or above solar. As specific examples, we have assumed H$_2$O to be 10 times more abundant and, at the same time, CO 10 times less abundant than in the standard case (dotted line), and vice versa (dashed line). When we increase/decrease H$_2$O and CO by a factor of 10, the absorption is increased/decreased by a constant $\sim$ 0.03~\% throughout the selected wavelength range. In these figures, we also show the signatures (white rhombi, squares and triangles) relative to the three cases (standard, C/O ratio below and above solar) averaged over the IRAC, IRS and MIPS bandpasses (centered at 3.6, 4.5, 5.8, 8, 16 and 24 $\mu$m), the instruments on board the Spitzer Space Telescope (see table 1 for the calculated absorptions). Water vapor has strong absorption lines throughout the selected spectral range, whereas the CO signature appears only in a narrower wavelength interval, where the IRAC 4.5 $\mu$m bandpass is centered. When the C/O ratio is above solar, the triangle is expected to appear above the rhombus in that band, indicating the strong CO contribution to the total absorption (table 1, numbers in bold). Note that we included the species CH$_{4}$, CO$_{2}$ and C$_{2}$H$_{2}$ in our calculations, which strongly absorb in the MIR. However, their abundances are too small compared to those of CO and H$_2$O, and they are masked by these species.
For example, we show in fig. \ref{co2} (left) the contribution due to CO$_2$ in the case of an above-solar C/O ratio. CO$_2$ is increased by a factor of 10 here, to follow the CO behavior consistently with chemistry predictions \citep{liang1}. When CO$_2$ is present, an increase in absorption of $\sim$ 0.02~\% is found in the band centered on 15 $\mu$m and a very narrow peak reaching 0.07~\% is visible at shorter wavelengths. Although not negligible, the contribution due to CO$_2$ is masked by CO and water. \emph{Sensitivity to temperature.} The effects due to temperature variations (fig. \ref{co2} right) are negligible compared to the changes in molecular mixing ratios. Temperature plays a secondary role in the determination of the optical depth: it affects the absorption coefficients and the atmospheric scale heights (see eq. 2 in \citet{david}). For HD189733b, the discrepancy between the standard and the hot profile is less than 0.005\%, and that between the standard and the very hot profile has a maximum of 0.014\% in the 15-30 $\mu$m range (fig. \ref{co2} right). Analogous results are obtained for HD209458b. \section{Discussion} In our model we did not include the contribution of hazes or clouds \citep{ackerman,lunine,fortneya}. If present, they might increase the atmospheric optical depth, partially masking the absorption features due to atmospheric molecules. In the case of water vapor and CO, only clouds/hazes lying above the 1 bar pressure level might affect our results. Predicting clouds and hazes is particularly difficult for EGPs, since the few observations we have are not sufficient to constrain all the cloud microphysics and aerosol parameters. Moreover, the planetary limb observable during the stellar occultation might show the signatures of both the night and day sides of those planets, which are presumably tidally locked \citep{iro}. The thermal profiles, and hence the condensate dynamics, might be very different on the two sides.
Consequently, a more complete model, able to predict cloud and haze location and optical characteristics, should contain a 3-D dynamical simulation of the atmosphere. In this paper we limit our simulations to the cloud- and haze-free atmosphere, with the caveat that they might be perturbed by the possible presence (constant or variable) of optically thick particles in the atmosphere above the 1 bar pressure level. The same considerations hold for the thermal profiles. An extensive literature is available on $T-P$ profiles for EGPs at pressures from $\sim$1 bar to 10$^{-4}$-10$^{-6}$ bars, most recently including 3D dynamical effects \citep{showman,iro,cho,burrowsa}. For transmission spectroscopy in the MIR, we need to consider also the contribution of the upper atmosphere. The $T-P$ profiles calculated by \citet{tian,yelle} suggest that the atmospheric temperature tends to increase in the exosphere. For our simulations, we use a $T-P$ profile compatible with the lower atmosphere models cited above up to 10$^{-3}$-10$^{-4}$ bars, and then we consider three cases: the atmospheric temperature decreases up to 10$^{-10}$ bars (standard profile), or the atmosphere is isothermal (hot and very hot profiles). Our results show that the differences among the spectra calculated with the three profiles are within 0.009~\% for $\lambda \le 14 \mu$m and within 0.014~\% at longer wavelengths, so we are confident our simulations would not change significantly with a more refined thermal structure. Our model atmospheres extend to 10$^{-10}$ bars, where non-local thermodynamic equilibrium (non-LTE) effects might occur \citep{kutepov}. However, if we truncate our calculations to 10$^{-5}$ bars, we obtain a maximum error of $\sim 0.02 \%$ at 30 $\mu$m (no discrepancy for wavelengths shorter than 20 $\mu$m), indicating that our calculated absorptions in the LTE regime are correct to first-order approximation.
In order to detect the presence of water vapor and CO on HD189733b and HD209458b in the MIR, an extra absorption of $\sim$ 0.15~\% is expected to be added to the 2.85~\% and 1.6~\% due to the optically thick disks at the 1 bar atmospheric level. To estimate the chemical abundances, an accuracy of at least 0.03~\% is needed. By inspection of the relative absorption of the IRAC 4.5 $\mu$m bandpass with respect to the others, we might be able to infer the C/O ratio, but in this case an extremely high S/N is required. HD189733 is a bright K0V star of magnitude K = 5.5. We estimate the brightness in the four IRAC bands to be of the order of 1850, 1100, 730 and 400 mJy at 3.6, 4.5, 5.8 and 8 $\mu$m respectively. For HD209458, a G0V star with K-magnitude of 6.3, the IRAC predicted fluxes are 878, 556, 351 and 189 mJy. According to these numbers, a better S/N should be obtainable for HD189733b \citep{demingc}, which makes HD189733b the better candidate for observations. \section{Conclusions} In this paper we have presented simulations of transmission spectra of two extrasolar giant planets during their transits in front of their parent stars. According to our calculations, we estimate an excess absorption in the IR of up to 0.15 \% for HD189733b and up to 0.12 \% for HD209458b (C/O ratio $\sim$ solar), in addition to the nominal 2.85 \% and 1.6 \% absorptions measured at shorter wavelengths. If water were far less abundant, other species might be observable, depending on their mixing ratios. Among them, CO$_{2}$, CH$_{4}$ and C$_{2}$H$_{2}$ are the best candidates. According to our simulations, transmission spectra of EGPs in the MIR are sensitive to molecular abundances and less so to temperature variations. Temperature influences the transmission spectrum by way of its influence on the atmospheric scale height, as discussed by \citet{brown}, and on the absorption coefficients.
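The quoted excess absorptions can be related to the geometric extent of the absorbing atmosphere with a simple hedged estimate (the planetary radius is taken from the values quoted in the introduction; the relation below is a thin-annulus approximation, not a result of the detailed model):

```latex
% Transit depth of the opaque disk and of a thin absorbing annulus of
% thickness \Delta R around it:
%   \delta = (R_p/R_\ast)^2,  \Delta\delta \approx 2 R_p \Delta R / R_\ast^2 .
% For HD189733b (\delta ~ 2.85%), the quoted extra absorption
% \Delta\delta ~ 0.15% corresponds to an effective annulus of thickness
\[
  \Delta R \;\approx\; \frac{R_{p}}{2}\,\frac{\Delta\delta}{\delta}
           \;\approx\; \frac{R_{p}}{2}\times\frac{0.15}{2.85}
           \;\approx\; 0.026\,R_{p}\;\sim\;\text{a few}\times 10^{3}\,\mathrm{km},
\]
% i.e. several scale heights of a hot, H2-dominated atmosphere.
```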
If water vapor and CO are as abundant as photochemical models predict, we expect they can be detected with the IRAC, IRS and MIPS instruments on board the Spitzer Telescope and with future telescopes like JWST. Moreover, if an accuracy of 0.03~\% is obtainable, future observations may give a first direct estimate of H$_{2}$O and CO abundances in the upper atmosphere of EGPs and possibly (depending on their mixing ratios) a constraint on CO$_{2}$, CH$_{4}$ and C$_{2}$H$_{2}$. \acknowledgments We would like to thank the anonymous referee for his help to improve the paper, L. S. Rothman for having provided the HITEMP data list, R. Ferlet, J. M. D\'esert, F. Bouchy, G. Hebrard, A. Noriega Crespo and S. Carey, for their valuable inputs, and C. D. Parkinson for useful comments. G. Tinetti is currently sponsored by the European Space Agency. M. C. Liang and Y. L. Yung are supported by the NASA grant NASA5-13296 to the California Institute of Technology.
\section{The category of reduced orbifolds}\label{sec_redorbcat} To define an orbifold category where the objects are orbifolds and the morphisms are equivalence classes of charted orbifold maps we have to answer the following questions: \begin{enumerate}[(i)] \item \label{q1} When shall two charted orbifold maps be considered as equal? In other words, what shall be the equivalence relation? \item \label{q2} What shall be the identity morphism of an orbifold? \item \label{q3} How does one compose $\varphi\in \Orbmap(\mathcal V,\mathcal V')$ and $\psi\in\Orbmap(\mathcal V',\mathcal V'')$? \item \label{q4} What is the composition in the category? \end{enumerate} The leading idea is that charted orbifold maps are equivalent if and only if they induce the same charted orbifold map on common refinements of the orbifold atlases. Therefore, we will introduce the notion of an induced charted orbifold map. It turns out that answers to the questions \eqref{q2} and \eqref{q3} naturally extend to answers of \eqref{q1} and \eqref{q4}, and that the arising category has a counterpart in terms of marked atlas groupoids and homomorphisms. We start with the definition of the identity morphism of an orbifold. This definition is based on the idea that the identity morphism of $(Q,\mathcal U)$ shall be represented by a collection of local lifts of $\id_Q$ which locally induce $\id_S$ on some orbifold charts, and that each such collection which satisfies (\apref{R}{liftings}{}) shall be a representative. 
\subsection{Composition of charted orbifold maps} \begin{construction}\label{constr_comp} Let $(Q,\mathcal U)$, $(Q',\mathcal U')$ and $(Q'',\mathcal U'')$ be orbifolds, and \begin{align*} \mc V \mathrel{\mathop:}= \{ (V_i,G_i,\pi_i) \mid i\in I\},\qquad \mc V' \mathrel{\mathop:}= \{ (V'_j,G'_j,\pi'_j) \mid j\in J \} \end{align*} resp.\@ $\mathcal V''$ be representatives for $\mathcal U$, $\mathcal U'$ resp.\@ $\mathcal U''$, where $\mc V$ resp.\@ $\mc V'$ are indexed by $I$ resp.\@ $J$. Suppose that \[ \hat f = (f, \{\tilde f_i\}_{i\in I}, [P_f,\nu_f]) \in \Orbmap(\mathcal V, \mathcal V')\] and \[ \hat g = (g, \{\tilde g_j\}_{j\in J}, [P_g,\nu_g]) \in \Orbmap(\mathcal V',\mathcal V'')\] are charted orbifold maps and that $\alpha\colon I \to J$ is the unique map such that for each $i\in I$, $\tilde f_i$ is a local lift of $f$ w.r.t.\@ $(V_i,G_i,\pi_i)$ and $(V'_{\alpha(i)}, G'_{\alpha(i)}, \pi'_{\alpha(i)})$. The composition \[ \hat g \circ \hat f \mathrel{\mathop:}= \hat h = (h, \{\tilde h_i\}_{i\in I}, [P_h,\nu_h]) \in \Orbmap(\mathcal V,\mathcal V'')\] is given by $h\mathrel{\mathop:}= g\circ f$ and $\tilde h_i \mathrel{\mathop:}= \tilde g_{\alpha(i)} \circ \tilde f_i$ for all $i\in I$. To construct a representative $(P_h,\nu_h)$ of $[P_h,\nu_h]$ we fix representatives $(P_f,\nu_f)$ and $(P_g,\nu_g)$ of $[P_f,\nu_f]$ and $[P_g,\nu_g]$, respectively. The leading idea to define $(P_h,\nu_h)$ is to take $P_h = P_f$ and $\nu_h = \nu_g \circ \nu_f$. But since $\nu_f(\lambda)$ is not necessarily in $P_g$ for $\lambda\in P_f$, the composition $\nu_g\circ\nu_f$ might be ill-defined. In the following we refine this idea. Let $\mu \in P_f$ and suppose that $\dom\mu \subseteq V_i$ and $\cod\mu\subseteq V_j$ for the orbifold charts $(V_i,G_i,\pi_i)$ and $(V_j,G_j,\pi_j)$ in $\mathcal V$. By (\apref{R}{invariant}{}) \[ \tilde f_j \circ \mu = \nu_f(\mu) \circ \tilde f_i \vert_{\dom \mu},\] where $\nu_f(\mu) \in \Psi(\mc U')$. 
By possibly shrinking domains, we may assume that $\nu_f(\mu) \in \Psi(\mc V')$. For $x\in\dom\mu$ we set $y_x \mathrel{\mathop:}= \tilde f_i(x)$, which is an element of $\dom\nu_f(\mu)$. Hence we find (and fix a choice) $\xi_{\mu,x} \in P_g$ with $y_x\in \dom\xi_{\mu,x}$ and an open set $U'_{\mu,x} \subseteq \dom \xi_{\mu,x} \cap \dom \nu_f(\mu)$ such that $y_x \in U'_{\mu,x}$ and \[ \xi_{\mu,x}\vert_{U'_{\mu,x}} = \nu_f(\mu)\vert_{U'_{\mu,x}}. \] Then we find (and fix) an open set $U_{\mu,x} \subseteq \dom\mu$ with $x\in U_{\mu,x}$ such that $\tilde f_i(U_{\mu,x}) \subseteq U'_{\mu,x}$. By adjusting choices we achieve that for $\mu_1,\mu_2\in P_f$ and $x_1\in\dom\mu_1$, $x_2\in\dom\mu_2$ we either have \begin{equation}\label{wll1} \mu_1\vert_{U_{\mu_1,x_1}} \not= \mu_2\vert_{U_{\mu_2,x_2}}\quad\text{or}\quad \xi_{\mu_1,x_1} = \xi_{\mu_2,x_2}. \end{equation} Define \[ P_h\mathrel{\mathop:}= \big\{ \mu\vert_{U_{\mu,x}} \big\vert\ \mu\in P_f,\ x\in\dom\mu \big\}, \] which obviously is a quasi-pseudogroup generating $\Psi(\mc V)$, and set \[ \nu_h\big( \mu\vert_{U_{\mu,x}} \big) \mathrel{\mathop:}= \nu_g(\xi_{\mu,x}) \] for $\mu\vert_{U_{\mu,x}}\in P_h$. Property~\eqref{wll1} yields that $\nu_h$ is a well-defined map from $P_h$ to $\Psi(\mc U'')$. One easily sees that $\nu_h$ satisfies (\apref{R}{invariant}{}) - (\apref{R}{unit_pg}{}), and that the equivalence class of $(P_h,\nu_h)$ does not depend on the choices we made for the construction of $P_h$ and $\nu_h$. \end{construction} \begin{remark}\label{F1_equivariant} The construction of the composition of two charted orbifold maps immediately implies that the maps $F_1$ and $F_2$ (cf.\@ Proposition~\ref{conclusion}) are both functorial. \end{remark} The following lemma provides the definition of induced charted orbifold map and shows its relation to lifts of the identity. \begin{lemmadefi}\label{onlyinduced} Let $(Q,\mathcal U)$ and $(Q',\mathcal U')$ be orbifolds. 
Further let \begin{align*} \mathcal V & = \{ (V_i, G_i, \pi_i) \mid i\in I \} \ \text{be a representative of $\mathcal U$, indexed by $I$,} \\ \mathcal V' & = \{ (V'_l, G'_l, \pi'_l) \mid l\in L\} \ \text{be a representative of $\mathcal U'$, indexed by $L$,} \\ \hat f & = \big( f, \{ \tilde f_i\}_{i\in I}, [P_f,\nu_f] \big) \in \Orbmap(\mathcal V, \mathcal V'), \end{align*} and $\beta\colon I\to L$ be the unique map such that for each $i\in I$, $\tilde f_i$ is a local lift of $f$ w.r.t.\@ $(V_i,G_i,\pi_i)$ and $(V'_{\beta(i)}, G'_{\beta(i)}, \pi'_{\beta(i)})$. Suppose that we have \begin{itemize} \item a representative $\mathcal W = \{ (W_j, H_j, \psi_j) \mid j\in J \}$ of $\mathcal U$, indexed by $J$, \item a subset $\{ (W'_j, H'_j, \psi'_j) \mid j\in J \}$ of $\mc U'$, indexed by $J$ (not necessarily an orbifold atlas), \item a map $\alpha\colon J\to I$, \item for each $j\in J$, an open embedding \[ \lambda_j \colon \big(W_j, H_j, \psi_j\big) \to \big(V_{\alpha(j)}, G_{\alpha(j)}, \pi_{\alpha(j)}\big),\] and an open embedding \[ \mu_j \colon \big(W'_j, H'_j, \psi'_j\big) \to \big(V'_{\beta(\alpha(j))}, G'_{\beta(\alpha(j))}, \pi'_{\beta(\alpha(j))}\big)\] such that \[ \tilde f_{\alpha(j)}\big( \lambda_j(W_j)\big) \subseteq \mu_j(W'_j).\] \end{itemize} For each $j\in J$ set \[ \tilde h_j \mathrel{\mathop:}= \mu_j^{-1} \circ \tilde f_{\alpha(j)}\circ \lambda_j \colon W_j \to W'_j.\] Then \begin{enumerate}[{\rm (i)}] \item \label{firstident} $\varepsilon\mathrel{\mathop:}= \big(\id_Q, \{ \lambda_j\}_{j\in J}, [P_\varepsilon,\nu_\varepsilon]\big) \in \Orbmap(\mathcal W,\mathcal V)$ (with $[P_\varepsilon,\nu_\varepsilon]$ provided by Proposition~\ref{extending}) is a lift of $\id_{(Q,\mc U)}$. 
\item \label{secondident} The set $\{ (W'_j, H'_j, \psi'_j)\mid j\in J\}$ and the family $\{\mu_j\}_{j\in J}$ can be extended to a representative \[ \mathcal W' = \big\{ (W'_k, H'_k,\psi'_k) \ \big\vert\ k\in K \big\} \] of $\mathcal U'$ and a family of open embeddings $\{\mu_k\}_{k\in K}$ such that \[ \varepsilon'\mathrel{\mathop:}= \big(\id_{Q'}, \{ \mu_k\}_{k\in K}, [P_{\varepsilon'},\nu_{\varepsilon'}]\big) \in \Orbmap(\mathcal W',\mathcal V') \] (with $[P_{\varepsilon'},\nu_{\varepsilon'}]$ provided by Proposition~\ref{extending}) is a lift of the identity $\id_{(Q',\mc U')}$. \item \label{mapind} There is a uniquely determined equivalence class $[P_h,\nu_h]$ such that \[ \hat h \mathrel{\mathop:}= (f, \{\tilde h_j\}_{j\in J}, [P_h,\nu_h]) \in \Orbmap(\mathcal W,\mathcal W') \] and such that the diagram \[ \xymatrix{ & \mathcal V \ar[r]^{\hat f} & \mathcal V' \\ \mathcal W \ar[ur]^{\varepsilon} \ar[rrr]^{\hat h} &&& \mathcal W' \ar[ul]_{\varepsilon'} } \] commutes. \end{enumerate} We say that $\hat h$ is \emph{induced} by $\hat f$. \end{lemmadefi} \begin{proof} \eqref{firstident} is clear by Propositions~\ref{induceslifts} and \ref{extending}. To show that \eqref{secondident} holds we construct one possible extension: Let \[ y\in Q' \setminus \bigcup_{j\in J} \psi'_j(W'_j).\] Then there is a chart $(V', G',\pi') \in \mathcal V'$ such that $y\in\pi'(V')$. Extend the set \[ \{ (W'_j, H'_j,\psi'_j) \mid j\in J\} \] with $(V',G',\pi')$ and the family $\{ \mu_j\}_{j\in J}$ with $\id_{V'}$. If this is done iteratively, one finally gets an orbifold atlas of $Q'$ as desired. Then Propositions~\ref{induceslifts} and \ref{extending} yield the remaining claim of \eqref{secondident}. The following considerations are independent of the specific choices of extensions. Concerning \eqref{mapind} we remark that each $\tilde h_j$ is obviously a local lift of $f$. Fix a representative $(P_f,\nu_f)$ of $[P_f,\nu_f]$.
In the following we construct a pair $(P_h,\nu_h)$ for which $\hat h$ is an orbifold map and the diagram in \eqref{mapind} commutes. It will be clear from the construction that the equivalence class $[P_h,\nu_h]$ is independent of the choice of $(P_f,\nu_f)$ and uniquely determined by the requirement of the commutativity of the diagram. Let $\gamma\in \Psi(\mathcal W)$ and $x\in\dom\gamma$. Possibly shrinking the domain of $\gamma$, we may assume that $\dom \gamma \subseteq W_j$ and $\cod \gamma \subseteq W_k$ for some $j,k\in J$. In the following we further shrink the domain of $\gamma$ to be able to define $\nu_h$ as a composition of $\nu_f$ with elements of $\{\mu_j\}_{j\in J}$. Let $y \mathrel{\mathop:}= \lambda_j(x)$. Since \[ \tilde\gamma\mathrel{\mathop:}= \lambda_k \circ \gamma\circ \left(\lambda_j\vert_{\dom\gamma} \right)^{-1} \colon \lambda_j(\dom\gamma) \to \lambda_k(\cod\gamma) \] is an element of $\Psi(\mathcal V)$, we find $\beta_\gamma \in P_f$ such that $y\in\dom\beta_\gamma$ and $\germ_y\beta_\gamma = \germ_y\tilde \gamma$. Then \[ z\mathrel{\mathop:}= \tilde f_{\alpha(j)}(y) \in \dom\nu_f(\beta_\gamma) \cap \mu_j(W'_j).\] Since \[ \nu_f(\beta_\gamma)(z) = \tilde f_{\alpha(k)}(\beta_\gamma(y)) \in \mu_k(W'_k), \] the set \[ U' \mathrel{\mathop:}= \dom\nu_f(\beta_\gamma) \cap \mu_j(W'_j) \cap \nu_f(\beta_\gamma)^{-1}(\mu_k(W'_k)) \] is an open neighborhood of $z$. Define \[ U_1 \mathrel{\mathop:}= \{ w\in\dom\beta_\gamma \cap \lambda_j(\dom\gamma) \mid \germ_w\beta_\gamma = \germ_w\tilde\gamma \},\] which is an open neighborhood of $y$. Then also \[ U \mathrel{\mathop:}= U_1 \cap \tilde f_{\alpha(j)}^{-1}(U') \] is an open neighborhood of $y$. We fix an open neighborhood $U_{\gamma,x}$ of $x$ in $\lambda_j^{-1}(U)$. 
Further we suppose that for $\gamma_1,\gamma_2\in \Psi(\mc W)$, $x_1\in\dom\gamma_1$, $x_2\in\dom\gamma_2$, we either have \begin{equation}\label{wll2} \gamma_1\vert_{U_{\gamma_1,x_1}} \not= \gamma_2\vert_{U_{\gamma_2,x_2}} \quad\text{or}\quad \nu_f(\beta_{\gamma_1}) = \nu_f(\beta_{\gamma_2}). \end{equation} Then we define \[ P_h \mathrel{\mathop:}= \big\{ \gamma\vert_{U_{\gamma,x}} \big\vert\ \gamma\in\Psi(\mc W),\ x\in\dom\gamma \big\} \] and set \[ \nu_h\big(\gamma\vert_{U_{\gamma,x}}\big) \mathrel{\mathop:}= \mu_k^{-1}\circ \nu_f(\beta_\gamma) \circ \mu_j \] for $\gamma\vert_{U_{\gamma,x}} \in P_h$ with $x\in W_j$ and $\gamma(x) \in W_k$ ($j,k\in J$). The map $\nu_h\colon P_h\to \Psi(\mc W')$ is well-defined by \eqref{wll2}. One easily checks that $(P_h,\nu_h)$ satisfies all requirements of \eqref{mapind}. \end{proof} We consider two charted orbifold maps as equivalent if they induce the same charted orbifold map on common refinements of the orbifold atlases. The following definition provides a precise specification of this idea. \begin{defi}\label{mapequiv} Let $(Q,\mathcal U)$ and $(Q',\mathcal U')$ be orbifolds. Further let $\mathcal V_1, \mathcal V_2$ be representatives of $\mathcal U$, and $\mathcal V'_1, \mathcal V'_2$ be representatives of $\mathcal U'$. Suppose that $\hat f_1\in \Orbmap(\mathcal V_1,\mathcal V'_1)$ and $\hat f_2\in \Orbmap(\mathcal V_2,\mathcal V'_2)$. 
We call $\hat f_1$ and $\hat f_2$ \textit{equivalent} ($\hat f_1 \sim \hat f_2$) if there are a representative $\mathcal W$ of $\mathcal U$, a representative $\mathcal W'$ of $\mathcal U'$, $\varepsilon_1\in \Orbmap(\mathcal W,\mathcal V_1)$, $\varepsilon_2\in \Orbmap(\mathcal W,\mathcal V_2)$ lifts of $\id_{(Q,\mc U)}$, $\varepsilon'_1\in \Orbmap(\mathcal W',\mathcal V'_1)$, $\varepsilon'_2\in \Orbmap(\mathcal W',\mathcal V'_2)$ lifts of $\id_{(Q',\mc U')}$, and a map $\hat h\in \Orbmap(\mathcal W,\mathcal W')$ such that the diagram \[ \xymatrix{ & \mathcal V_1 \ar[r]^{\hat f_1} & \mathcal V'_1 \\ \mathcal W \ar[ur]^{\varepsilon_1} \ar[rrr]^{\hat h} \ar[dr]_{\varepsilon_2} &&& \mathcal W' \ar[ul]_{\varepsilon'_1} \ar[dl]^{\varepsilon'_2} \\ & \mathcal V_2 \ar[r]_{\hat f_2} & \mathcal V'_2 } \] commutes. \end{defi} Proposition~\ref{mapwell} below shows that $\sim$ is indeed an equivalence relation. For its proof we need the following two lemmas. The first lemma discusses how local lifts which belong to the same charted orbifold map are related to each other. The second lemma shows that two charted orbifold maps which are induced from the same charted orbifold map induce the same charted orbifold map on common refinements of orbifold atlases. This means that $\sim$ satisfies the so-called diamond property. \begin{lemma}\label{welldefinedll} Let $(Q,\mathcal U)$ and $(Q',\mathcal U')$ be orbifolds and let \[ \hat f \mathrel{\mathop:}= (f, \{ \tilde f_i \}_{i\in I}, [P,\nu]) \in \Orbmap(\mathcal V, \mathcal V')\] be a charted orbifold map where $\mathcal V$ is a representative of $\mathcal U$ and $\mathcal V'$ one of $\mathcal U'$. Suppose that we have orbifold charts $(V_a, G_a, \pi_a), (V_b, G_b, \pi_b) \in \mathcal V$ and points $x_a\in V_a$, $x_b\in V_b$ such that $\pi_a(x_a) = \pi_b(x_b)$. 
Then there are arbitrarily small orbifold charts $(W,K,\chi)\in \mathcal U$, $(W',K',\chi')\in\mathcal U'$ and open embeddings \begin{align*} \lambda & \colon (W,K,\chi) \to (V_a, G_a, \pi_a) \\ \lambda' & \colon (W',K',\chi') \to (V'_a, G'_a, \pi'_a) \\ \mu & \colon (W,K,\chi) \to (V_b, G_b,\pi_b) \\ \mu' & \colon (W', K',\chi') \to (V'_b, G'_b, \pi'_b) \end{align*} with $x_a\in \lambda(W)$ and $x_b\in \mu(W)$ such that the induced lift $\tilde g$ of $f$ \wrt $\tilde f_a, \lambda, \lambda'$ coincides with the one induced by $\tilde f_b$, $\mu$, $\mu'$. In other words, the diagram \[ \xymatrix{ & V_a \ar[r]^{\tilde f_a} & V'_a \\ W \ar[ur]^{\lambda} \ar[rrr]^{\tilde g} \ar[dr]_{\mu} &&& W' \ar[ul]_{\lambda'} \ar[dl]^{\mu'} \\ & V_b \ar[r]^{\tilde f_b} & V'_b } \] commutes. \end{lemma} \begin{proof} By compatibility of orbifold charts we find an arbitrarily small restriction $(W,K,\chi)$ of $(V_a, G_a,\pi_a)$ with $x_a\in W$ and an open embedding \[ \mu\colon (W,K,\chi) \to (V_b, G_b, \pi_b) \] such that $\mu(x_a) = x_b$. Then $\mu\colon W\to \mu(W)$ is an element of $\Psi(\mathcal V)$. Fix a representative $(P,\nu)$ of $[P,\nu]$. Hence there is $\gamma\in P$ with $x_a\in \dom\gamma$ and an open neighborhood $U$ of $x_a$ such that $U\subseteq \dom \gamma \cap W$ and \[ \mu\vert_U = \gamma\vert_U.\] W.l.o.g.\@, $\gamma = \mu$. Property~(\apref{R}{invariant}{}) yields that \[ \nu(\mu)\circ \tilde f_a\vert_W = \tilde f_b\circ\mu.\] By shrinking the domain of $\nu(\mu)$, we can achieve that $\cod \nu(\mu) \subseteq V'_b$ and still $\tilde f_a(W)\subseteq \dom \nu(\mu) \mathrel{=\mkern-4.5mu{\mathop:}} W'$. With $\mu'\mathrel{\mathop:}= \nu(\mu)$ it follows \[ \tilde f_b (\mu(W)) = \mu'(\tilde f_a(W)) \subseteq \mu'(W') \] and further \[ \tilde f_a\vert_W = (\mu')^{-1}\circ \tilde f_b \circ \mu.\] This proves the claim. 
\end{proof} \begin{lemma}\label{fortrans} Let $(Q,\mathcal U)$ and $(Q',\mathcal U')$ be orbifolds, $\mathcal V$ a representative of $\mathcal U$, and $\mathcal V'$ one of $\mathcal U'$. Further let $\hat f \in \Orbmap(\mathcal V,\mathcal V')$. Suppose that $\hat h \in \Orbmap(\mathcal W_1, \mathcal W'_1)$ and $\hat g \in \Orbmap(\mathcal W_2, \mathcal W'_2)$ are both induced by $\hat f$. Then we find a representative $\mathcal W$ of $\mathcal U$ and charted orbifold maps $\varepsilon_1 \in \Orbmap(\mathcal W, \mathcal W_1)$, $\varepsilon_2\in \Orbmap(\mathcal W,\mathcal W_2)$ which are lifts of $\id_{(Q,\mc U)}$, and a representative $\mathcal W'$ of $\mathcal U'$ and charted orbifold maps $\varepsilon'_1 \in \Orbmap(\mathcal W',\mathcal W'_1)$, $\varepsilon'_2\in \Orbmap(\mathcal W',\mathcal W'_2)$ which are lifts of $\id_{(Q',\mc U')}$, and a charted orbifold map $\hat k \in \Orbmap(\mathcal W,\mathcal W')$ such that the diagram \[ \xymatrix{ & \mathcal W_1 \ar[r]^{\hat h} & \mathcal W'_1 \\ \mathcal W \ar[ur]^{\varepsilon_1} \ar[rrr]^{\hat k} \ar[dr]_{\varepsilon_2} &&& \mathcal W' \ar[ul]_{\varepsilon'_1} \ar[dl]^{\varepsilon'_2} \\ & \mathcal W_2 \ar[r]^{\hat g} & \mathcal W'_2 } \] commutes. \end{lemma} \begin{proof} Suppose that $\hat f = (f, \{ \tilde f_a\}_{a\in A}, [P_f,\nu_f])$, $\hat h = (f, \{\tilde h_i\}_{i\in I}, [P_h,\nu_h])$ and $\hat g = (f, \{\tilde g_j\}_{j\in J}, [P_g,\nu_g])$. 
Let \begin{align*} \mathcal W_1 & \mathrel{\mathop:}= \{ (W_{1,i}, H_{1,i}, \psi_{1,i} ) \mid i \in I \}, \text{ indexed by $I$,} \\ \mathcal W'_1 & \mathrel{\mathop:}= \{ (W'_{1,k}, H'_{1,k}, \psi'_{1,k}) \mid k\in K \}, \text{ indexed by $K$,} \\ \mathcal W_2 & \mathrel{\mathop:}= \{ (W_{2,j}, H_{2,j}, \psi_{2,j}) \mid j\in J \}, \text{ indexed by $J$,} \\ \mathcal W'_2 & \mathrel{\mathop:}= \{ (W'_{2,l}, H'_{2,l}, \psi'_{2,l}) \mid l\in L \}, \text{ indexed by $L$,} \end{align*} and let $\alpha_1\colon I\to K$ resp.\@ $\alpha_2\colon J\to L$ be the map such that for each $i\in I$, $\tilde h_i$ is a local lift of $f$ w.r.t.\@ $(W_{1,i}, H_{1,i}, \psi_{1,i})$ and $(W'_{1,\alpha_1(i)}, H'_{1,\alpha_1(i)}, \psi'_{1,\alpha_1(i)})$ resp.\@ for each $j\in J$, $\tilde g_j$ is a local lift of $f$ w.r.t.\@ $(W_{2,j}, H_{2,j}, \psi_{2,j})$ and $(W'_{2,\alpha_2(j)}, H'_{2,\alpha_2(j)}, \psi'_{2,\alpha_2(j)})$. Further let \begin{align*} \delta_1 & = (\id_Q, \{ \lambda_{1,i} \}_{i\in I}, [R_1,\sigma_1]) \in \Orbmap(\mathcal W_1,\mathcal V), \\ \delta_2 & = (\id_Q, \{ \lambda_{2,j} \}_{j\in J}, [R_2,\sigma_2]) \in \Orbmap(\mathcal W_2,\mathcal V) \intertext{be lifts of $\id_{(Q,\mc U)}$ and} \delta'_1 & = (\id_{Q'}, \{ \mu_{1,k} \}_{k\in K}, [R'_1,\sigma'_1]) \in \Orbmap(\mathcal W'_1, \mathcal V'), \\ \delta'_2 & = (\id_{Q'}, \{ \mu_{2,l} \}_{l\in L}, [R'_2, \sigma'_2]) \in \Orbmap(\mathcal W'_2, \mathcal V') \end{align*} be lifts of $\id_{(Q',\mc U')}$ such that $\hat f \circ\delta_1 = \delta'_1\circ \hat h$ and $\hat f\circ\delta_2 = \delta'_2\circ \hat g$. W.l.o.g.\@ we assume that all $\lambda_{1,i}$, $\mu_{1,k}$, $\lambda_{2,j}$ and $\mu_{2,l}$ are open embeddings. We will use Lemma~\ref{onlyinduced} to show the existence of $\hat k$. More precisely, we attach to each $q\in Q$ an orbifold chart $(W_q,H_q,\psi_q) \in \mc U$ with $q\in \psi_q(W_q)$ and an orbifold chart $(W'_q, H'_q, \psi'_q) \in \mc U'$ with $f(q) \in \psi'_q(W'_q)$.
We consider orbifold charts defined for distinct $q$ to be distinct. In this way, we get a representative \begin{equation}\label{defW} \mc W \mathrel{\mathop:}= \{ (W_q,H_q,\psi_q) \mid q\in Q\} \end{equation} of $\mc U$ which is indexed by $Q$, and a subset $\{ (W'_q,H'_q,\psi'_q) \mid q\in Q\}$ of $\mc U'$, indexed by $Q$ as well. Moreover, we will find maps $\beta_1 \colon Q \to I$ and $\beta_2\colon Q\to J$ and open embeddings \begin{align*} \xi_{1,q} & \colon \big(W_q,H_q,\psi_q\big) \to \big(W_{1,\beta_1(q)}, H_{1,\beta_1(q)}, \psi_{1,\beta_1(q)}\big) \\ \xi_{2,q} & \colon \big(W_q,H_q,\psi_q\big) \to \big(W_{2,\beta_2(q)}, H_{2,\beta_2(q)}, \psi_{2,\beta_2(q)}\big) \\ \chi_{1,q} & \colon \big(W'_q,H'_q,\psi'_q\big) \to \big(W'_{1,\alpha_1(\beta_1(q))}, H'_{1,\alpha_1(\beta_1(q))}, \psi'_{1,\alpha_1(\beta_1(q))}\big) \\ \chi_{2,q} & \colon \big(W'_q,H'_q,\psi'_q\big) \to \big(W'_{2,\alpha_2(\beta_2(q))}, H'_{2,\alpha_2(\beta_2(q))}, \psi'_{2,\alpha_2(\beta_2(q))}\big) \end{align*} such that for each $q\in Q$ the local lift $\tilde k_q$ of $f$ induced by $\tilde h_{\beta_1(q)}$, $\xi_{1,q}$ and $\chi_{1,q}$ coincides with the one induced by $\tilde g_{\beta_2(q)}$, $\xi_{2,q}$ and $\chi_{2,q}$. Then Lemma~\ref{onlyinduced} shows that $\hat h$ resp.\@ $\hat g$ induce a charted orbifold map $(f, \{\tilde k_q\}_{q\in Q}, [P_1,\nu_1])$ resp.\@ $(f,\{\tilde k_q\}_{q\in Q}, [P_2,\nu_2])$. It then remains to show that we can choose all the open embeddings $\xi_{1,q}, \xi_{2,q}, \chi_{1,q}, \chi_{2,q}$ such that $[P_1,\nu_1]$ equals $[P_2,\nu_2]$. Let $q\in Q$. We fix $i\in I$ such that $q\in \psi_{1,i}(W_{1,i})$ and we pick $w_1\in W_{1,i}$ with $q=\psi_{1,i}(w_1)$. We set $\beta_1(q) \mathrel{\mathop:}= i$. Further we fix $j\in J$ such that $q\in \psi_{2,j}(W_{2,j})$ and pick an element $w_2\in W_{2,j}$ with $q=\psi_{2,j}(w_2)$. We set $\beta_2(q) \mathrel{\mathop:}= j$. 
By Lemma~\ref{welldefinedll} we find orbifold charts $(W_q, H_q, \psi_q) \in \mathcal U$ with $q\in \psi_q(W_q)$, say $q=\psi_q(w_q)$, and $(W'_q,H'_q,\psi'_q)\in\mathcal U'$ with $f(q) \in \psi'_q(W'_q)$ and open embeddings $\xi_{1,q}$, $\xi_{2,q}$, $\chi_{1,q}$, $\chi_{2,q}$ with $w_1=\xi_{1,q}(w_q)$, $w_2=\xi_{2,q}(w_q)$, and a local lift $\tilde k_q$ of $f$ such that the diagram \[ \xymatrix{ & \lambda_{1,\beta_1(q)}\big(W_{1,\beta_1(q)}\big) \ar[r]^{\tilde f_{\beta_1(q)}} & \mu_{1,\alpha_1(\beta_1(q))}\big(W'_{1,\alpha_1(\beta_1(q))}\big) \\ & W_{1,\beta_1(q)} \ar[u]^{\lambda_{1,\beta_1(q)}} \ar[r]^{\tilde h_{\beta_1(q)}} & W'_{1,\alpha_1(\beta_1(q))} \ar[u]_{\mu_{1,\alpha_1(\beta_1(q))}} \\ W_q \ar[ur]^{\xi_{1,q}} \ar[dr]_{\xi_{2,q}} \ar[rrr]^{\tilde k_q} &&& W'_q \ar[ul]_{\chi_{1,q}} \ar[dl]^{\chi_{2,q}} \\ & W_{2,\beta_2(q)} \ar[r]^{\tilde g_{\beta_2(q)}} \ar[d]_{\lambda_{2,\beta_2(q)}} & W'_{2,\alpha_2(\beta_2(q))} \ar[d]^{\mu_{2,\alpha_2(\beta_2(q))}} \\ & \lambda_{2,\beta_2(q)}\big(W_{2,\beta_2(q)}\big) \ar[r]^{\tilde f_{\beta_2(q)}} & \mu_{2,\alpha_2(\beta_2(q))}\big(W'_{2,\alpha_2(\beta_2(q))}\big) } \] commutes. We may assume that $\xi_{1,q}=\id$ and $\chi_{1,q}=\id$. Now \[ \eta\mathrel{\mathop:}=\lambda_{2,\beta_2(q)}\circ\xi_{2,q}\circ\lambda_{1,\beta_1(q)}^{-1} \colon \lambda_{1,\beta_1(q)}(W_q) \to \lambda_{2,\beta_2(q)}\big(\xi_{2,q}(W_q)\big) \] is an element of $\Psi(\mathcal V)$ with $y\mathrel{\mathop:}= \lambda_{1,\beta_1(q)}(\xi_{1,q}(w_q))$ in its domain. We pick a representative $(P_f,\nu_f)$ of $[P_f,\nu_f]$. Then there is an element $\gamma\in P_f$ with $y\in\dom\gamma$ and an open neighborhood $U$ of $y$ such that $U\subseteq \dom\gamma\cap\dom\eta$ and $\eta\vert_U = \gamma\vert_U$. By (\apref{R}{invariant}{}), \[ \nu_f(\gamma)\circ \tilde f_{\beta_1(q)}\vert_U = \tilde f_{\beta_2(q)}\circ\gamma\vert_U = \tilde f_{\beta_2(q)}\circ \eta\vert_U. 
\] The map \[ \mu \mathrel{\mathop:}= \mu_{2,\alpha_2(\beta_2(q))} \circ \chi_{2,q} \circ \mu^{-1}_{1,\alpha_1(\beta_1(q))} \colon \mu_{1,\alpha_1(\beta_1(q))}(W'_q)\to \mu_{2,\alpha_2(\beta_2(q))}\big(\chi_{2,q}(W'_q)\big) \] is a diffeomorphism as well. Further there exists an open neighborhood $V$ of $y$ such that \[ \tilde f_{\beta_2(q)}\circ \eta\vert_V = \mu\circ \tilde f_{\beta_1(q)}\vert_V. \] Hence \[ \nu_f(\gamma)\circ\tilde f_{\beta_1(q)} = \mu \circ \tilde f_{\beta_1(q)} \] on some neighborhood of $y$. Therefore, after possibly shrinking $W_q$, we can redefine $W'_q$, $\chi_{2,q}$ and $\tilde k_q$ such that \begin{equation}\label{nubeta} \chi_{2,q} = \mu_{2,\alpha_2(\beta_2(q))}^{-1} \circ \nu_f(\gamma) \circ \mu_{1,\alpha_1(\beta_1(q))}\vert_{W'_q}. \end{equation} We remark that this redefinition might be quite serious if $\tilde f_{\beta_1(q)}$ and hence $\tilde h_{\beta_1(q)}$, $\tilde g_{\beta_2(q)}$ and $\tilde f_{\beta_2(q)}$ are highly non-injective. But since these maps all behave in the same way, we may perform the changes without running into problems. Let $\mc W$ be defined by \eqref{defW}. Lemma~\ref{onlyinduced}, more precisely its proof, shows that $\hat h$ resp.\@ $\hat g$ induces the orbifold maps \[ \hat k_1 = (f, \{\tilde k_q\}_{q\in Q}, [P_1,\nu_1]) \quad\text{resp.}\quad \hat k_2 = (f, \{\tilde k_q\}_{q\in Q}, [P_2,\nu_2]) \] with $(\mc W,\mc W')$, where $\mc W'$ is a representative of $\mc U'$ which contains the set \[ \{ (W'_q,H'_q,\psi'_q) \mid q\in Q\} \] (the proof of Lemma~\ref{onlyinduced} shows that we can indeed have the same $\mc W'$ for $\hat k_1$ and $\hat k_2$). It remains to show that $[P_1,\nu_1] = [P_2,\nu_2]$. Recall from Lemma~\ref{onlyinduced} that $[P_1,\nu_1]$ is uniquely determined by $\hat h$, $\{\xi_{1,q}\}_{q\in Q}$ and $\{\chi_{1,q}\}_{q\in Q}$, and analogously for $[P_2,\nu_2]$. Alternatively, we may consider $\hat k_1$ and $\hat k_2$ to be induced by $\hat f$. 
Thus, $[P_1,\nu_1]$ is uniquely determined by $\hat f$, $\{\lambda_{1,\beta_1(q)}\circ\xi_{1,q}\}_{q\in Q}$ and $\{\mu_{1,\alpha_1(\beta_1(q))}\circ \chi_{1,q}\}_{q\in Q}$, and $[P_2,\nu_2]$ is uniquely determined by $\hat f$, $\{\lambda_{2,\beta_2(q)}\circ\xi_{2,q}\}_{q\in Q}$ and $\{\mu_{2,\alpha_2(\beta_2(q))}\circ\chi_{2,q}\}_{q\in Q}$. We fix a representative $(P_f,\nu_f)$ of $[P_f,\nu_f]$. Let $\gamma$ be a change of charts in $\Psi(\mathcal W)$ and $x\in \dom \gamma$. Suppose $\dom\gamma \subseteq W_p$ and $\cod\gamma\subseteq W_q$. Using the same arguments and notation as in the proof of Lemma~\ref{onlyinduced} (without discussing the necessary shrinking of domains, since we are only interested in equality in a neighborhood of $x$) we have \begin{align*} \beta_h & =\lambda_{1,\beta_1(q)}\circ \gamma\circ\lambda_{1,\beta_1(p)}^{-1}, \\ \beta_g & = \lambda_{2,\beta_2(q)}\circ\xi_{2,q}\circ \gamma\circ\xi_{2,p}^{-1}\circ \lambda_{2,\beta_2(p)}^{-1}, \\ \nu_1(\gamma) &= \mu_{1,\alpha_1(\beta_1(q))}^{-1}\circ\nu_f(\beta_h) \circ \mu_{1,\alpha_1(\beta_1(p))}, \\ \nu_2(\gamma) &= \chi_{2,q}^{-1}\circ\mu_{2,\alpha_2(\beta_2(q))}^{-1}\circ\nu_f(\beta_g) \circ \mu_{2,\alpha_2(\beta_2(p))}\circ \chi_{2,p}. \end{align*} Hence \[ \beta_g= \lambda_{2,\beta_2(q)}\circ\xi_{2,q}\circ\lambda_{1,\beta_1(q)}^{-1}\circ\beta_h\circ\lambda_{1,\beta_1(p)} \circ \xi_{2,p}^{-1}\circ\lambda_{2,\beta_2(p)}^{-1}.\] Definition~\eqref{nubeta} shows that \[ \nu_f( \lambda_{2,\beta_2(q)} \circ \xi_{2,q} \circ \xi^{-1}_{1,q} \circ \lambda^{-1}_{1,\beta_1(q)}) = \mu_{2,\alpha_2(\beta_2(q))} \circ \chi_{2,q} \circ \mu^{-1}_{1,\alpha_1(\beta_1(q))}. \] Then \begin{align*} \nu_2(\gamma) & = \mu^{-1}_{1,\alpha_1(\beta_1(q))} \circ \nu_f(\beta_h)\circ \mu_{1,\alpha_1(\beta_1(p))} = \nu_1(\gamma). \end{align*} Hence the induced equivalence classes $[P_1,\nu_1]$ and $[P_2,\nu_2]$ indeed coincide. 
The lift $\varepsilon_1$ of $\id_{(Q,\mc U)}$ is given by the family $\{ \xi_{1,q} \}_{q\in Q}$, the lift $\varepsilon_2$ by $\{\xi_{2,q}\}_{q\in Q}$, the lift $\varepsilon'_1$ of $\id_{(Q',\mc U')}$ is any extension of $\{ \chi_{1,q}\}_{q\in Q}$, and the lift $\varepsilon'_2$ is any extension of $\{ \chi_{2,q}\}_{q\in Q}$. \end{proof} \begin{prop}\label{mapwell} The relation $\sim$ from Definition~\ref{mapequiv} is an equivalence relation. \end{prop} \begin{proof} Let $(Q,\mathcal U)$ and $(Q',\mathcal U')$ be orbifolds. Suppose that for all $i\in\{1,2,3\}$ the orbifold atlases $\mathcal V_i$ are representatives of $\mathcal U$ and $\mathcal V'_i$ are representatives of $\mathcal U'$, and $\hat f_i \in \Orbmap(\mathcal V_i, \mathcal V'_i)$ are charted orbifold maps such that $\hat f_1 \sim \hat f_2\quad\text{and}\quad \hat f_2 \sim \hat f_3$. This means that we find representatives $\mathcal W_1$, $\mathcal W_2$ of $\mathcal U$, representatives $\mathcal W'_1$, $\mathcal W'_2$ of $\mathcal U'$, charted orbifold maps $\hat h_1 \in \Orbmap(\mathcal W_1, \mathcal W'_1)$, $\hat h_2\in \Orbmap(\mathcal W_2,\mathcal W'_2)$ and lifts of the respective identities $\varepsilon_1 \in \Orbmap(\mathcal W_1, \mathcal V_1)$, $\varepsilon_2 \in \Orbmap(\mathcal W_1,\mathcal V_2)$, $\varepsilon'_1 \in\Orbmap(\mathcal W'_1,\mathcal V'_1)$, $\varepsilon'_2\in \Orbmap(\mathcal W'_1, \mathcal V'_2)$, $\eta_1 \in \Orbmap(\mathcal W_2, \mathcal V_2)$, $\eta_2\in \Orbmap(\mathcal W_2,\mathcal V_3)$, $\eta'_1 \in \Orbmap(\mathcal W'_2,\mathcal V'_2)$, $\eta'_2 \in \Orbmap(\mathcal W'_2, \mathcal V'_3)$ such that the diagrams \[ \xymatrix{ & \mathcal V_1 \ar[r]^{\hat f_1} & \mathcal V'_1 &&& \mathcal V_2 \ar[r]^{\hat f_2} & \mathcal V'_2 \\ \mathcal W_1 \ar[rrr]^{\hat h_1} \ar[ru]^{\varepsilon_1} \ar[rd]_{\varepsilon_2} &&& \mathcal W'_1 \ar[lu]_{\varepsilon'_1} \ar[ld]^{\varepsilon'_2} & \mathcal W_2 \ar[rrr]^{\hat h_2} \ar[ru]^{\eta_1} \ar[rd]_{\eta_2} &&& \mathcal W'_2 
\ar[lu]_{\eta'_1} \ar[ld]^{\eta'_2} \\ & \mathcal V_2 \ar[r]^{\hat f_2} & \mathcal V'_2 &&& \mathcal V_3 \ar[r]^{\hat f_3} & \mathcal V'_3 } \] commute. Since $\hat h_1$ and $\hat h_2$ are both induced by $\hat f_2$, Lemma~\ref{fortrans} shows that there are representatives $\mathcal W$ of $\mathcal U$, $\mathcal W'$ of $\mathcal U'$, a charted orbifold map $\hat k\in \Orbmap(\mathcal W,\mathcal W')$ and lifts of identity $\delta_1 \in \Orbmap(\mathcal W,\mathcal W_1)$, $\delta_2\in \Orbmap(\mathcal W,\mathcal W_2)$, $\delta'_1\in\Orbmap(\mathcal W',\mathcal W'_1)$, $\delta'_2\in \Orbmap(\mathcal W',\mathcal W'_2)$ such that the diagram \[ \xymatrix{ && \mathcal V_1 \ar[r]^{\hat f_1} & \mathcal V'_1 \\ & \mathcal W_1 \ar[rrr]^{\hat h_1} \ar[ru]^{\varepsilon_1} &&& \mathcal W'_1 \ar[lu]_{\varepsilon'_1} \\ \mathcal W \ar[rrrrr]^{\hat k} \ar[ru]^{\delta_1}\ar[rd]_{\delta_2} &&&&& \mathcal W' \ar[lu]_{\delta'_1} \ar[ld]^{\delta'_2} \\ & \mathcal W_2 \ar[rrr]^{\hat h_2} \ar[rd]_{\eta_2} &&& \mathcal W'_2 \ar[ld]^{\eta'_2} \\ && \mathcal V_3 \ar[r]^{\hat f_3} & \mathcal V'_3 } \] commutes. Since compositions of lifts of identity remain lifts of identity, it follows that $\hat f_1 \sim \hat f_3$. \end{proof} The equivalence class of a charted orbifold map $\hat f$ with respect to the equivalence from Definition~\ref{mapequiv} is denoted by $[\hat f]$. It will always be clear from context whether $\hat f$ is a charted orbifold map and $[\hat f]$ denotes an equivalence class of charted orbifold maps, or $\hat f$ is a representative of an orbifold map and $[\hat f]$ denotes an equivalence class of representatives, that is a charted orbifold map (cf.\@ Definition~\ref{Pnu_equiv}). \subsection{Groupoids and homomorphisms} A groupoid is a small category in which each morphism is an isomorphism. In the context of orbifolds this concept is most commonly expressed (equivalently) in terms of sets and maps. The morphisms are then called arrows. 
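Two standard examples may serve as orientation (they are well known and not needed in the sequel): each group is a groupoid with a single object whose arrows are the group elements and whose composition is the group multiplication; and each set $X$ yields the \textit{pair groupoid} with base $X$ and arrow set $X\times X$, where an arrow $(y,x)$ runs from $x$ to $y$ and, in the notation of Definition~\ref{set_groupoid} below,
\[
 s(y,x) = x,\qquad t(y,x) = y,\qquad m\big((z,y),(y,x)\big) = (z,x),\qquad u(x) = (x,x),\qquad i(y,x) = (x,y).
\]
In the first example there is only one object; in the second there is exactly one arrow between any two objects.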
\begin{defi}\label{set_groupoid} A \textit{groupoid} $G$ is a tuple $G=(G_0,G_1,s,t,m,u,i)$ consisting of the set $G_0$ of \textit{objects}, or the \textit{base} of $G$, the set $G_1$ of \textit{arrows}, and five \textit{structure maps}, namely the \textit{source map} $s\colon G_1\to G_0$, the \textit{target map} $t\colon G_1\to G_0$, the \textit{multiplication} or \textit{composition} $m\colon G_1\pullback{s}{t} G_1\to G_1$, the \textit{unit map} $u\colon G_0\to G_1$, and the \textit{inversion} $i\colon G_1\to G_1$ which satisfy the following: \begin{enumerate}[(i)] \item\label{groupoidi} for all $(g,f)\in G_1\pullback{s}{t} G_1$ we have $s(m(g,f)) = s(f)$ and $t(m(g,f)) = t(g)$, \item for all $(h,g),(g,f)\in G_1\pullback{s}{t} G_1$ we have $m( h, m(g,f)) = m(m(h,g),f)$, \item for all $x\in G_0$ we have $s(u(x)) = x = t(u(x))$, \item for all $x\in G_0$ and all $(u(x),f), (g,u(x))\in G_1\pullback{s}{t} G_1$ we have $m(u(x),f) = f$ and $m(g,u(x)) = g$, \item for all $g\in G_1$ we have $s(i(g)) = t(g)$ and $t(i(g)) = s(g)$, and $m(g,i(g)) = u(t(g))$ and $m(i(g),g)= u(s(g))$. \end{enumerate} We often use the notations $m(g,f) = gf$, $u(x) = 1_x$, $i(g) = g^{-1}$, and $g\colon x\to y$ or $\stackrel{g}{x\to y}$ for an arrow $g\in G_1$ with $s(g)=x$, $t(g)=y$. Moreover, $G(x,y)$ denotes the set of arrows from $x$ to $y$. A \textit{Lie groupoid} is a groupoid $G$ for which $G_0$ is a smooth Hausdorff manifold, $G_1$ is a smooth (possibly non-Hausdorff) manifold, the structure maps $s,t\colon G_1\to G_0$ are smooth submersions (hence $G_1\pullback{s}{t} G_1$, the domain of $m$, is a smooth, possibly non-Haus\-dorff manifold), and the structure maps $m,u$ and $i$ are smooth. \end{defi} \begin{defi}\label{gr_homom} Let $G$ and $H$ be groupoids. A \textit{homomorphism} from $G$ to $H$ is a functor $\varphi\colon G\to H$, \ie it is a tuple $\varphi=(\varphi_0,\varphi_1)$ of maps $\varphi_0\colon G_0\to H_0$ and $\varphi_1\colon G_1\to H_1$ which commute with all structure maps.
If $G$ and $H$ are Lie groupoids, then $\varphi$ is a homomorphism between them if it is a homomorphism of the abstract groupoids with the additional requirement that $\varphi_0$ and $\varphi_1$ be smooth maps. \end{defi} \begin{defi} Let $G$ be a groupoid. The \textit{orbit} of $x\in G_0$ is the set \[ Gx \mathrel{\mathop:}= t(s^{-1}(x)) = \left\{ y\in G_0\left\vert\ \exists\, g\in G_1\colon x\stackrel{g}{\rightarrow} y\right.\right\}.\] Two elements $x,y\in G_0$ are called \textit{equivalent}, $x\sim y$, if they are in the same orbit. The quotient space $|G|\mathrel{\mathop:}= G_0/_\sim$ is called the \textit{orbit space} of $G$. The canonical quotient map $G_0 \to |G|$ is denoted by $\pr$ or $\pr_G$, and $[x] \mathrel{\mathop:}= \pr(x)$ for $x\in G_0$. \end{defi} \section{The orbifold category in terms of marked atlas groupoids}\label{atlascategory} Proposition~\ref{conclusion} and Remark~\ref{F1_equivariant} show that charted orbifold maps and their composition correspond to homomorphisms between marked atlas groupoids and their composition. By characterizing lifts of identity and equivalence of charted orbifold maps in terms of marked atlas groupoids and their homomorphisms, we construct a category for marked atlas groupoids which is isomorphic to the one of reduced orbifolds. To that end we first show that lifts of identity correspond to unit weak equivalences, a notion we define below. Throughout this section let $\pr_1$ denote the projection to the first component. A homomorphism $\varphi=(\varphi_0,\varphi_1)\colon G\to H$ between Lie groupoids is called a \textit{weak equivalence} if \begin{enumerate}[(i)] \item the map \[ t\circ \pr_1\colon H_1\pullback{s}{\varphi_0} G_0 \to H_0 \] is a surjective submersion, and \item the diagram \[ \xymatrix{ G_1 \ar[r]^{\varphi_1}\ar[d]_{(s,t)} & H_1 \ar[d]^{(s,t)} \\ G_0\times G_0 \ar[r]^{\varphi_0\times\varphi_0} & H_0\times H_0 } \] is a fibered product.
\end{enumerate} Two Lie groupoids $G,H$ are called \textit{Morita equivalent} if there is a Lie groupoid $K$ and weak equivalences \[ \xymatrix{ G & K \ar[l]_{\varphi} \ar[r]^{\psi} & H. } \] \begin{defi} Let $(G_1,\alpha_1,X_1)$ and $(G_2,\alpha_2,X_2)$ be marked atlas groupoids. A homomorphism \[ \varphi=(\varphi_0,\varphi_1)\colon (G_1,\alpha_1, X_1) \to (G_2,\alpha_2, X_2) \] is called a \textit{unit weak equivalence} if $\varphi\colon G_1 \to G_2$ is a weak equivalence and $\alpha_2\circ |\varphi| \circ\alpha_1^{-1} = \id_{X_1}$. Necessarily we have $X_1=X_2\mathrel{=\mkern-4.5mu{\mathop:}} X$. A \textit{unit Morita equivalence} between $(G_1,\alpha_1, X)$ and $(G_2,\alpha_2, X)$ is a pair $(\psi_1, \psi_2)$ of unit weak equivalences \[ \psi_j\colon (G,\alpha, X) \to (G_j,\alpha_j, X) \] where $(G,\alpha, X)$ is some marked atlas groupoid. If such a unit Morita equivalence exists, then the marked atlas groupoids $(G_1,\alpha_1, X)$ and $(G_2,\alpha_2, X)$ are called \textit{unit Morita equivalent}. \end{defi} In contrast to Morita equivalence of Lie groupoids, unit Morita equivalence of marked atlas groupoids requires the third (marked) Lie groupoid to be an atlas groupoid. In Proposition~\ref{Morequivgr} below we will show that unit Morita equivalence of marked atlas groupoids is indeed an equivalence relation. The following proposition identifies lifts of identity with unit weak equivalences. \begin{prop}\label{charuwe} Let $\mathcal U$ and $\mathcal U'$ be orbifold structures on the topological space $Q$. Further let $\mc V$ resp.\@ $\mc W'$ be a representative of $\mathcal U$ resp.\@ of $\mathcal U'$. \begin{enumerate}[{\rm (i)}] \item\label{charuwei} Suppose that $\mc U = \mc U'$. If $\hat f\in \Orbmap(\mathcal V,\mathcal W')$ is a lift of $\id_{(Q,\mc U)}$, then $F_1(\hat f)$ is a unit weak equivalence. \item\label{charuweii} Let $\varepsilon \in \Hom(\Gamma(\mathcal V), \Gamma(\mathcal W'))$ be a unit weak equivalence. 
Then $\mc U = \mc U'$, and $F_2(\varepsilon)$ is a lift of $\id_{(Q,\mc U)}$. \end{enumerate} \end{prop} \begin{proof} Let \[ \mathcal V = \{ (V_i, G_i, \pi_i) \mid i\in I\}\quad \text{resp.\@}\quad \mathcal W' = \{ (W'_j, H'_j, \psi'_j)\mid j\in J\},\] indexed by $I$ resp.\@ by $J$, and let $G\mathrel{\mathop:}= \Gamma(\mathcal V)$ and $H\mathrel{\mathop:}= \Gamma(\mathcal W')$. We will first prove \eqref{charuwei}. Suppose $\hat f = (\id_Q, \{ \tilde f_i \}_{i\in I}, [P,\nu])$. By Proposition~\ref{orb_gr} it suffices to show that $\varepsilon=(\varepsilon_0,\varepsilon_1)\mathrel{\mathop:}= F_1(\hat f)$ is a weak equivalence. We first show that \[ t\circ \pr_1 \colon \left\{ \begin{array}{ccc}H_1 \pullback{s}{\varepsilon_0} G_0 & \to & H_0 \\ (h,x) & \mapsto & t(h) \end{array} \right. \] is a submersion. Let $(h,x)\in H_1\pullback{s}{\varepsilon_0} G_0$. Recall from Proposition~\ref{induceslifts} that $\varepsilon_0$ is a local diffeomorphism, and from Special Case~\ref{atlasgroupoid} that $G$ and $H$ are \'etale groupoids. Choose open neighborhoods $U_x$ of $x$ in $G_0$ and $U_h$ of $h$ in $H_1$ such that $\varepsilon_0\vert_{U_x}$ and $s\vert_{U_h}$ are open embeddings with $s(U_h) = \varepsilon_0(U_x)$. Then $U_h\pullback{s}{\varepsilon_0}U_x$ is open in $H_1\pullback{s}{\varepsilon_0}G_0$. Further \begin{align*} U_h\pullback{s}{\varepsilon_0}U_x & = \left\{ (k,y)\in U_h\times U_x \left\vert\ s(k) = \varepsilon_0(y) \right.\right\} \\ & = \left\{ (k, \varepsilon_0^{-1}(s(k)) \left\vert\ k\in U_h \right.\right\}. \end{align*} Therefore, \[ \pr_1 \colon U_h\pullback{s}{\varepsilon_0}U_x \to U_h \] is a diffeomorphism. Since $t$ is a local diffeomorphism, $t\circ\pr_1$ is a submersion. Now we prove that $t\circ\pr_1$ is surjective. Let $y\in H_0$, say $y\in W'_j$, and set $\psi'_j(y) \mathrel{=\mkern-4.5mu{\mathop:}} q\in Q$. Then there is an orbifold chart $(V_i, G_i, \pi_i)\in \mathcal V$ such that $q\in \pi_i(V_i)$, say $q=\pi_i(x)$. 
\[ \xymatrix{ V_i \ar[rr]^{\tilde f_i}\ar[dr]_{\pi_i} && W'_i \ar[dl]^{\psi'_i} \\ & Q } \] Set $z\mathrel{\mathop:}= \tilde f_i(x)$, hence $\psi'_i(z) = q = \psi'_j(y)$. Hence, there are a restriction $(S',K',\chi')$ of $(W'_i,H'_i,\psi'_i)$ with $z\in S'$ and an open embedding \[ \lambda\colon (S', K', \chi') \to (W'_j, H'_j, \psi'_j)\] such that $\lambda(z) = y$. Then $\lambda\in \Psi(\mathcal W')$ and $(\germ_z \lambda, x)\in H_1 \pullback{s}{\varepsilon_0} G_0$ with \[ t\circ \pr_1(\germ_z\lambda, x) = t(\germ_z\lambda) = y. \] This means that $t\circ\pr_1$ is surjective. Set \[ K \mathrel{\mathop:}= (G_0\times G_0) \pullback{(\varepsilon_0,\varepsilon_0)}{(s,t)} H_1.\] It remains to show that the map \[ \beta\colon\left\{ \begin{array}{ccc} G_1 & \to & K \\ \germ_x g & \mapsto & (x, g(x), \varepsilon_1(\germ_x g)) \end{array}\right. \] is a diffeomorphism. Note that $\beta = (s, t, \varepsilon_1)$. Let $(x,y,\germ_{\varepsilon_0(x)}h)$ be in $K$, hence $\germ_{\varepsilon_0(x)} h\colon \varepsilon_0(x) \to \varepsilon_0(y)$. By the definition of $H_1$ there are open neighborhoods $U'_1$ of $\varepsilon_0(x)$ and $U'_2$ of $\varepsilon_0(y)$ in $W'\mathrel{\mathop:}= \coprod_{j\in J} W'_j$ such that $h\colon U'_1 \to U'_2$ is an element of $\Psi(\mathcal W')$. Since $\varepsilon_0$ is a local diffeomorphism, there are open neighborhoods $U_1$ of $x$ and $U_2$ of $y$ in $V\mathrel{\mathop:}= \coprod_{i\in I} V_i$ such that $\varepsilon_0\vert_{U_k}$ is an open embedding with $\varepsilon_0(U_k) \subseteq U'_k$ ($k=1,2$). After shrinking $U'_k$ we can assume that $\varepsilon_0(U_k) = U'_k$. Let $\gamma_k\mathrel{\mathop:}= \varepsilon_0\vert_{U_k}$. Then \[ g\mathrel{\mathop:}= \gamma_2^{-1}\circ h \circ \gamma_1 \colon U_1 \to U_2 \] is a diffeomorphism, hence $g\in \Psi(\mathcal V)$. Note that \[ \varepsilon_1(\germ_x g) = \germ_{\varepsilon_0(x)} h \] by Proposition~\ref{extending}. 
Hence \[ \beta(\germ_x g) = (x, g(x), \varepsilon_1(\germ_x g)) = (x,y, \germ_{\varepsilon_0(x)} h). \] Therefore $\beta$ is surjective. Since $\germ_x g$ does not depend on the choice of $U_k$ and $U'_k$, the map $\beta$ is also injective. Finally, we will show that $\beta$ is a local diffeomorphism. Since $s$ and $t$ are local diffeomorphisms, we only have to prove that $\varepsilon_1$ is one as well. Let $\germ_x f\in G_1$. Choose an open neighborhood $U$ of $x$ such that $U\subseteq \dom f$ and $\varepsilon_0\vert_U\colon U\to \varepsilon_0(U)$ is a diffeomorphism. By the germ topology, the set \[ \tilde U \mathrel{\mathop:}= \{ \germ_y f\mid y\in U \}\] is open in $G_1$, and the set \[ \tilde V \mathrel{\mathop:}= \{ \germ_z \nu(f) \mid z\in \varepsilon_0(U)\} \] is open in $H_1$. Further the diagrams \[ \xymatrix{ \tilde U \ar[r]^{\varepsilon_1} \ar[d] & \tilde V\ar[d] &&& \germ_yf \ar@{|->}[r]^{\varepsilon_1} \ar@{|->}[d] & \germ_{\varepsilon_0(y)} \nu(f)\ar@{|->}[d] \\ U \ar[r]_{\varepsilon_0} & \varepsilon_0(U) &&& y \ar@{|->}[r]_{\varepsilon_0} & \varepsilon_0(y) } \] commute. Since the vertical arrows are diffeomorphisms by definition, $\varepsilon_1\vert_{\tilde U} \colon \tilde U \to \tilde V$ is a diffeomorphism as well. This completes the proof of \eqref{charuwei}. We will now prove \eqref{charuweii}. Proposition~\ref{backorb} shows that the orbifold atlases $\mc V$ and $\mc W'$ are determined completely by the marked atlas groupoids $\Gamma(\mc V)$ and $\Gamma(\mc W')$, resp. Hence we can apply Proposition~\ref{inducedorbgr}, which shows that $F_2(\varepsilon)$ is well-defined. Suppose that \[ F_2(\varepsilon) = \big( f, \{\tilde f_i\}_{i\in I}, [P,\nu]\big). \] Proposition~\ref{inducedorbgr} yields $f=\id_Q$. By \cite[Exercises~5.16(4)]{Moerdijk_Mrcun} $\varepsilon_0$ is a local diffeomorphism. Thus, Proposition~\ref{inducedorbgr} implies that each $\tilde f_i$ is a local diffeomorphism.
The domain atlas of $F_2(\varepsilon)$ is $\mc V$, its range family is $\mc W'$. From Proposition~\ref{samestructure} it follows that $\mc U = \mc U'$. By Definition~\ref{liftiddef} $F_2(\varepsilon)$ is a lift of $\id_{(Q,\mc U)}$. \end{proof} The combination of Propositions~\ref{backorb}, \ref{charuwe} and Remark~\ref{F1_equivariant} now allows us to identify each step in the construction of the category of reduced orbifolds and each intermediate object in terms of marked atlas groupoids. For an orbifold $(Q,\mc U)$ we define \[ \Gamma(Q,\mc U) \mathrel{\mathop:}= \big\{ \big(\Gamma(\mc V), \alpha_{\mc V}, Q \big) \ \big\vert\ \text{$\mc V$ is a representative of $\mc U$} \big\}. \] Then Proposition~\ref{equivclassid} gives rise to the following proposition. \begin{prop}\label{Morequivgr} Unit Morita equivalence of marked atlas groupoids is an equivalence relation. Further, if $\mc V$ is a representative of the orbifold structure $\mc U$ of the orbifold $(Q,\mc U)$, then the unit Morita equivalence class of $(\Gamma(\mc V), \alpha_{\mc V}, Q)$ is $\Gamma(Q,\mc U)$. \end{prop} Equivalence of charted orbifold maps translates to marked atlas groupoids as follows. \begin{defi} Let $(G_1,\alpha_1,X)$, $(G_2,\alpha_2,X)$, as well as $(H_1,\beta_1,Y)$ and $(H_2,\beta_2,Y)$ be marked atlas groupoids. For $j=1,2$ let \[ \psi_j \colon (G_j,\alpha_j, X) \to (H_j,\beta_j, Y) \] be a homomorphism of marked Lie groupoids.
We call $\psi_1$ and $\psi_2$ \textit{unit Morita equivalent} if there exist marked atlas groupoids $(G,\alpha,X)$ and $(H,\beta,Y)$, a homomorphism $\chi\colon (G,\alpha, X) \to (H,\beta, Y)$, and unit weak equivalences $\varepsilon_j\colon (G,\alpha, X) \to (G_j,\alpha_j, X)$, $\delta_j\colon (H,\beta, Y) \to (H_j,\beta_j, Y)$ such that the diagram \[ \xymatrix{ & (G_1,\alpha_1,X) \ar[r]^{\psi_1} & (H_1,\beta_1,Y) \\ (G,\alpha, X) \ar[ur]^{\varepsilon_1} \ar[dr]_{\varepsilon_2} \ar[rrr]^{\chi} &&& (H,\beta, Y) \ar[ul]_{\delta_1} \ar[dl]^{\delta_2} \\ & (G_2,\alpha_2,X) \ar[r]^{\psi_2} & (H_2,\beta_2,Y) } \] commutes. \end{defi} Proposition~\ref{mapwell} in terms of atlas groupoids means the following. \begin{prop} Unit Morita equivalence of homomorphisms between marked atlas groupoids is an equivalence relation. \end{prop} We define the category $\Agr$ of marked atlas groupoids as follows: Its class of objects consists of all $\Gamma(Q,\mathcal U)$. The morphisms from $\Gamma(Q,\mc U)$ to $\Gamma(Q',\mc U')$ are the unit Morita equivalence classes $[\varphi]$ of homomorphisms $\varphi\colon (G,\alpha, Q) \to (G',\alpha', Q')$ where $(G,\alpha, Q)$ is any representative of $\Gamma(Q,\mc U)$ and $(G',\alpha',Q')$ is any representative of $\Gamma(Q',\mc U')$. The composition of two morphisms $[\varphi]\in \Morph\big(\Gamma(Q,\mc U),\Gamma(Q',\mc U')\big)$ and $[\psi]\in\Morph\big(\Gamma(Q',\mc U'),\Gamma(Q'',\mc U'')\big)$ is defined as follows: Choose representatives \[ \varphi\colon (G,\alpha,Q)\to (G',\alpha',Q')\quad\text{of}\ [\varphi] \] and \[ \psi\colon(H',\beta',Q')\to (H'',\beta'',Q'')\quad \text{of}\ [\psi]. 
\] Then find representatives $(K,\gamma,Q)$, $(K',\gamma',Q')$, $(K'',\gamma'',Q'')$ of the classes $\Gamma(Q,\mc U)$, $\Gamma(Q',\mc U')$, $\Gamma(Q'',\mc U'')$, resp., and unit Morita equivalences \begin{align*} \varepsilon & \colon (K,\gamma, Q) \to (G,\alpha, Q), \\ \varepsilon'_1 & \colon (K',\gamma', Q') \to (G',\alpha',Q'), \\ \varepsilon'_2 & \colon (K',\gamma',Q') \to (H',\beta',Q'), \\ \varepsilon'' & \colon (K'',\gamma'',Q'') \to (H'',\beta'', Q''), \end{align*} and homomorphisms of marked Lie groupoids \begin{align*} \chi & \colon (K,\gamma,Q) \to (K',\gamma', Q'), \\ \kappa & \colon (K',\gamma', Q') \to (K'',\gamma'', Q'') \end{align*} such that the diagram \[\def\scriptstyle{\scriptstyle} \xymatrix{ (G,\alpha,Q) \ar[r]^\varphi & (G',\alpha',Q') & & (H',\beta',Q') \ar[r]^\psi & (H'',\beta'', Q'') \\ (K,\gamma,Q) \ar[u]^\varepsilon \ar[rr]^\chi && (K',\gamma', Q') \ar[ul]_{\varepsilon'_1} \ar[ur]^{\varepsilon'_2} \ar[rr]^\kappa && (K'',\gamma'', Q'') \ar[u]_{\varepsilon''} } \] commutes. Then the composition of $[\varphi]$ and $[\psi]$ is defined as \[ [\psi] \circ [\varphi] \mathrel{\mathop:}= [\kappa\circ\chi]. \] Invoking Lemmas~\ref{onlyinduced}, \ref{compwd} and Proposition~\ref{compositiongood} we deduce the following proposition. \begin{prop} The composition in $\Agr$ is well-defined. \end{prop} We define an assignment $F$ from the orbifold category $\Orbmap$ to the category of marked atlas groupoids $\Agr$ as follows. On the level of objects, $F$ maps the orbifold $(Q,\mc U)$ to $\Gamma(Q,\mc U)$. Suppose that $[\hat f]$ is a morphism from the orbifold $(Q,\mc U)$ to the orbifold $(Q',\mc U')$. Then $F$ maps $[\hat f]$ to the morphism $[F_1(\hat f)]$ from $\Gamma(Q,\mc U)$ to $\Gamma(Q',\mc U')$. \begin{thm} The assignment $F$ is a covariant functor from $\Orbmap$ to $\Agr$. Even more, $F$ is an isomorphism of categories. The functor $F$ and its inverse are constructive. 
\end{thm} To conclude, we show in the following example that the representatives of orbifold maps from Example~\ref{Pfinite} define different orbifold maps. In this example we use $G(x,y)$ to denote the set of arrows $g$ of the groupoid $G$ with $s(g)=x$ and $t(g) = y$. \begin{example} Recall the representatives of orbifold maps \[ \hat f_1 = (f,\widetilde f, P, \nu_1)\quad\text{and}\quad \hat f_2 = (f, \widetilde f, P, \nu_2) \] from Examples~\ref{Pfinite} and \ref{differenthomoms}. We claim that $\hat f_1$ and $\hat f_2$ are representatives of different orbifold maps. Assume for contradiction that $\hat f_1$ and $\hat f_2$ define the same orbifold map on $(Q,\mc U_1)$. This means that the groupoid homomorphisms $\varphi$ and $\psi$ from Example~\ref{differenthomoms} are Morita equivalent. Hence there exist marked atlas groupoids $K$ and $H$, unit weak equivalences \begin{align*} \alpha = (\alpha_0,\alpha_1) & \colon K \to \Gamma, & \gamma = (\gamma_0,\gamma_1) & \colon H \to \Gamma, \\ \beta = (\beta_0,\beta_1) & \colon K \to \Gamma, & \delta = (\delta_0,\delta_1) & \colon H \to \Gamma, \end{align*} and a homomorphism $\chi= (\chi_0,\chi_1) \colon K \to H$ such that the diagram \[ \xymatrix{ & \Gamma \ar[r]^\varphi & \Gamma \\ K \ar[ur]^{\alpha} \ar[dr]_\beta \ar[rrr]^\chi & & & H \ar[ul]_\gamma \ar[dl]^\delta \\ & \Gamma \ar[r]^\psi & \Gamma } \] commutes. Since $\alpha$ is a (unit) weak equivalence, there exist $x\in K_0$ and $g\in\Gamma_1$ with $s(g) = \alpha_0(x)$ and $t(g) = 0$. Necessarily, $g\in \{ \germ_0 (\pm\id) \}$, and hence $\alpha_0(x) = 0$. In turn, $\alpha_1$ induces a bijection between $K(x,x)$ and $\Gamma(0,0)$. Thus $K(x,x)$ consists of two elements, say $K(x,x) = \{ k_1,k_2 \}$. Let $x' \mathrel{\mathop:}= \chi_0(x)$. Then \[ 0 = \varphi_0( \alpha_0(x) ) = \gamma_0( x' ). \] This shows that $\gamma_1$ induces a bijection between $H(x',x')$ and $\Gamma(0,0)$.
For $j=1,2$ we have \[ \gamma_1(\chi_1(k_j)) = \varphi_1(\alpha_1(k_j)) = \germ_0 \id, \] which implies that $\chi_1(k_1) = \chi_1(k_2)$. Further $\beta_1$ induces a bijection between $K(x,x)$ and $\Gamma(\beta_0(x),\beta_0(x))$. Hence $\beta_0(x) = 0$, and thus \[ \psi_1(\beta_1(k_1)) \not= \psi_1(\beta_1(k_2)). \] But this contradicts \[ \psi_1(\beta_1(k_1)) = \delta_1(\chi_1(k_1)) = \delta_1(\chi_1(k_2))= \psi_1(\beta_1(k_2)). \] In turn, $\varphi$ and $\psi$ are not Morita equivalent. \end{example} \section{Groupoid homomorphisms in local charts}\label{sec_homom} In this section we characterize homomorphisms between marked atlas groupoids on the orbifold side, \ie in terms of local charts. We proceed in a two-step process. First we define a concept which we call representatives of orbifold maps. Each representative of an orbifold map gives rise to exactly one homomorphism between the associated marked atlas groupoids. Since, in general, each groupoid homomorphism corresponds to several such representatives, we then impose an equivalence relation on the class of all representatives for fixed orbifold atlases. The equivalence classes, called charted orbifold maps, turn out to be in bijection with the homomorphisms between the marked atlas groupoids. The constructions in this section are subject to a fixed choice of representatives of the orbifold structures. In the following sections we will use this construction as a basic building block for a notion of maps (or morphisms) between orbifolds which is independent of the chosen representatives. Throughout this section let $(Q,\mathcal U)$, $(Q',\mathcal U')$ denote two orbifolds. \begin{defi} Let $f\colon Q\to Q'$ be a continuous map, and suppose that $(V,G,\pi)\in\mathcal U$, $(V',G',\pi')\in\mathcal U'$ are orbifold charts. A \textit{local lift} of $f$ \wrt $(V,G,\pi)$ and $(V',G',\pi')$ is a smooth map $\tilde f\colon V\to V'$ such that $\pi'\circ\tilde f = f\circ\pi$.
In this case, we call $\tilde f$ a \textit{local lift of $f$ at $q$} for each $q\in \pi(V)$. \end{defi} Recall the pseudogroup $\mathcal A(M)$ from Definition~\ref{def_pseudogroup}. \begin{defi}\label{generatepsgr} Let $M$ be a manifold and $A$ a pseudogroup on $M$ which satisfies the gluing property from Definition~\ref{def_pseudogroup} and is closed under restrictions. The latter means that if $f\in A$ and $U\subseteq \dom f$ is open, then the map $f\vert_U\colon U \to f(U)$ is in $A$. Suppose that $B$ is a subset of $\mathcal A(M)$. Then $A$ is said to be \textit{generated} by $B$ if $B\subseteq A$ and for each $f\in A$ and each $x\in\dom f$ there exists some $g\in B$ with $x\in \dom g$ and an open set $U\subseteq \dom f \cap \dom g$ such that $x\in U$ and $f\vert_U = g\vert_U$. If $B$ is a subset of $\mc A(M)$ and there exists exactly one pseudogroup $A$ on $M$ which satisfies the gluing property from Definition~\ref{def_pseudogroup}, is closed under restrictions and is generated by $B$, then we say that $B$ \textit{generates} $A$. \end{defi} \begin{defi} Let $M$ be a manifold. A subset $P$ of $\mathcal A(M)$ is called a \textit{quasi-pseudogroup} on $M$ if it satisfies the following two properties: \begin{enumerate}[(i)] \item If $f\in P$ and $x\in \dom f$, then there exists an open set $U$ with $x\in U\subseteq \dom f$ and $g\in P$ such that there exists an open set $V$ with $f(x)\in V\subseteq \dom g$ and \[ \big( f\vert_U\big)^{-1} = g\vert_V. \] \item If $f,g\in P$ and $x\in f^{-1}(\cod f \cap \dom g)$, then there exists $h\in P$ with $x\in \dom h$ such that we find an open set $U$ with \[ x\in U \subseteq f^{-1}(\cod f \cap \dom g) \cap \dom h\quad\text{and}\quad g\circ f\vert_U = h\vert_U. \] \end{enumerate} \end{defi} A quasi-pseudogroup is designed to work with the germs of its elements. 
Therefore identities (like inversion and composition) of elements in quasi-pseudo\-groups are only required to be satisfied locally, whereas for (ordinary) pseudogroups these identities have to be valid globally. One easily proves that each quasi-pseudogroup generates a unique pseudogroup which satisfies the gluing property from Definition~\ref{def_pseudogroup} and is closed under restrictions. Conversely, each generating set for such a pseudogroup is necessarily a quasi-pseudogroup. In the following definition of a representative of an orbifold map, the underlying continuous map $f$ is the only entity which is stable under change of orbifold atlases or, in other words, under the choice of local lifts. The pair $(P,\nu)$ should be considered as one entity. It serves as a transport of changes of charts from one orbifold to another. Here we ask for a quasi-pseudogroup $P$ instead of working with all of $\Psi(\mathcal V)$ (recall $\Psi(\mc V)$ from Special Case~\ref{atlasgroupoid}) for two reasons. In general, $P$ is much smaller than $\Psi(\mathcal V)$. Sometimes it may even be finite. In Example~\ref{Pfinite} below we see that for some orbifolds, $P$ may consist of only two elements. Moreover, if the orbifold is a connected manifold, $P$ can always be chosen to be $\{\id\}$. The other reason is that it is much easier to construct a quasi-pseudogroup $P$ and a compatible map $\nu$ from a given groupoid homomorphism than a map $\nu$ defined on all of $\Psi(\mathcal V)$. Examples~\ref{notf} and \ref{Pfinite} below show that the objects requested in the following definition need not exist, nor need they be uniquely determined if they do exist.
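As a concrete illustration of the two axioms of a quasi-pseudogroup (a deliberately minimal case, anticipating the quasi-pseudogroup used in Example~\ref{Pfinite} below), consider the manifold $M=(-1,1)$ and the two-element set $P = \{ \id_M, -\id_M\} \subseteq \mathcal A(M)$. Both elements are defined on all of $M$ and are their own inverses, so axiom~(i) is satisfied with $U=V=M$. For axiom~(ii) it suffices to note that every composition of two elements of $P$ again lies in $P$, since
\[ \id_M\circ\id_M = (-\id_M)\circ(-\id_M) = \id_M \quad\text{and}\quad \id_M\circ(-\id_M) = (-\id_M)\circ\id_M = -\id_M, \]
so one may take $h$ to be this composition and $U$ the full domain. Hence $P$ is a quasi-pseudogroup on $M$; the pseudogroup it generates in the sense of Definition~\ref{generatepsgr} consists of those diffeomorphisms between open subsets of $M$ which locally coincide with $\id_M$ or $-\id_M$.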
\begin{defi}\label{repchartedorbmap} A \textit{representative of an orbifold map} from $(Q,\mathcal U)$ to $(Q',\mathcal U')$ is a tuple \[ \hat f\mathrel{\mathop:}= (f, \{\tilde f_i\}_{i\in I}, P, \nu )\] where \begin{enumerate}[(R1)] \item \label{f} $f\colon Q\to Q'$ is a continuous map, \item \label{liftings} for each $i\in I$, $\tilde f_i$ is a local lift of $f$ \wrt some orbifold charts $(V_i, G_i,\pi_i)\in\mathcal U$, $(V'_i, G'_i, \pi'_i)\in\mathcal U'$ such that \[ \bigcup_{i\in I}\pi_i(V_i) = Q \] and $(V_i, G_i, \pi_i) \not= (V_j, G_j, \pi_j)$ for $i,j\in I$, $i\not= j$, \item \label{P} $P$ is a quasi-pseudogroup which consists of changes of charts of the orbifold atlas \[ \mathcal V \mathrel{\mathop:}= \{ (V_i, G_i, \pi_i)\mid i\in I \}\] of $(Q,\mc U)$ and generates $\Psi(\mathcal V)$. \item \label{nu} Let $\psi\mathrel{\mathop:}= \coprod_{i\in I} \tilde f_i$. Then $\nu\colon P \to \Psi(\mc U')$ is a map which assigns to each $\lambda\in P$ an open embedding \[ \nu(\lambda) \colon (W', H', \chi') \to (V', G', \varphi')\] between some orbifold charts in $\mathcal U'$ such that \begin{enumerate}[(a)] \item \label{invariant} $\psi\circ\lambda = \nu(\lambda)\circ \psi\vert_{\dom \lambda}$, \item \label{welldefined} for all $\lambda,\mu\in P$ and all $x\in\dom\lambda\cap\dom\mu$ with $\germ_x\lambda=\germ_x\mu$, we have \[ \germ_{\psi(x)}\nu(\lambda) = \germ_{\psi(x)}\nu(\mu), \] \item \label{mult_pg} for all $\lambda,\mu\in P$, for all $x\in \lambda^{-1}(\cod\lambda\cap\dom\mu)$ we have \[ \germ_{\psi(\lambda(x))}\nu(\mu) \cdot \germ_{\psi(x)}\nu(\lambda) = \germ_{\psi(x)}\nu(h)\] where $h$ is an element of $P$ with $x\in\dom h$ such that there is an open set $U$ with \[ x\in U\subseteq \lambda^{-1}(\cod\lambda\cap\dom \mu)\cap\dom h \] and $\mu\circ\lambda\vert_U = h\vert_U$, \item \label{unit_pg} for all $\lambda\in P$ and all $x\in \dom \lambda$ such that there exists an open set $U$ with $x\in U\subseteq\dom \lambda$ and $\lambda\vert_U = \id_U$ we have 
\[ \germ_{\psi(x)}\nu(\lambda) = \germ_{\psi(x)}\id_{U'} \] with $U'\mathrel{\mathop:}= \coprod_{i\in I} V'_i$. \end{enumerate} \end{enumerate} The orbifold atlas $\mathcal V$ is called the \textit{domain atlas} of the representative $\hat f$, and the set \[ \{ (V'_i, G'_i, \pi'_i) \mid i\in I\}\] is called the \textit{range family} of $\hat f$. The latter set is not necessarily indexed by $I$. \end{defi} Condition (\apref{R}{mult_pg}{}) is in fact independent of the choice of $h$. The technical (and easily satisfied) condition in (\apref{R}{liftings}{}) that any two orbifold charts in $\mc V$ be distinct is required because we use $I$ as an index set for $\mathcal V$ in (\apref{R}{P}{}) and other places. Example~\ref{notf} below shows that the continuous map $f$ in (\apref{R}{f}{}) cannot be chosen arbitrarily. It is not even sufficient to require $f$ to be a homeomorphism. \begin{example}\label{notf} Recall the orbifold $(Q,\mathcal U_1)$ from Example~\ref{notcompatible}. The map \[ f\colon Q\to Q, \quad f(x) = \sqrt{x}, \] is a homeomorphism on $Q$. We show that $f$ has no local lift at $0$. Each orbifold chart in $\mc U_1$ that uniformizes a neighborhood of $0$ is isomorphic to an orbifold chart of the form $(I, \{\pm \id_I\}, \pr)$ where $I=(-a,a)$ for some $0<a<1$. Seeking a contradiction, assume that $\tilde f$ is a local lift of $f$ at $0$ with domain $I = (-a,a)$. For each $x\in I$, necessarily $\tilde f(x) \in \big\{ \pm \sqrt{|x|} \big\}$. Since $\tilde f$ is required to be continuous, there only remain four possible candidates for $\tilde f$, namely \begin{align*} \tilde f_1(x) &= \sqrt{ |x| },& \tilde f_2 &= -\tilde f_1, \\ \tilde f_3 (x) & = \begin{cases} \sqrt{x} & x\geq 0 \\ -\sqrt{-x} & x\leq 0, \end{cases} & \tilde f_4 & = -\tilde f_3. \end{align*} But none of these is differentiable at $x=0$ (the difference quotients of $\pm\sqrt{|x|}$ are unbounded near $0$), hence there is no local lift of $f$ at $0$.
\end{example} The following example shows that the pair $(P,\nu)$ is not uniquely determined by the choice of the family of local lifts. \begin{example}\label{Pfinite} Recall the orbifold $(Q,\mathcal U_1)$ and the representative $\mathcal V_1 = \{V_1\}$ of $\mathcal U_1$ from Example~\ref{notcompatible}. The map $f\colon Q\to Q$, $q\mapsto 0$, is clearly continuous and has the local lift \[ \tilde f \colon\left\{ \begin{array}{ccc} (-1,1) & \to & (-1,1) \\ x & \mapsto & 0 \end{array}\right. \] with respect to $V_1$ and $V_1$. Consider the quasi-pseudogroup $P=\{ \pm\id_{(-1,1)} \}$ on $(-1,1)$. Proposition~2.12 in \cite{Moerdijk_Mrcun} implies that $P$ generates $\Psi(\mathcal V_1)$. The triple $(f,\tilde f, P)$ can be completed to a representative of an orbifold map on $(Q,\mc U_1)$ in the following two different ways: \begin{enumerate}[(a)] \item $\nu_1(\pm \id_{(-1,1)}) \mathrel{\mathop:}= \id_{(-1,1)}$, \item $\nu_2(\id_{(-1,1)}) \mathrel{\mathop:}= \id_{(-1,1)}$, $\nu_2(-\id_{(-1,1)}) \mathrel{\mathop:}= -\id_{(-1,1)}$. \end{enumerate} We will see in Example~\ref{differenthomoms} below that $(f,\tilde f, P,\nu_1)$ and $(f,\tilde f, P,\nu_2)$ give rise to different groupoid homomorphisms. \end{example} \begin{prop}\label{orb_gr} Let $\hat f = (f, \{\tilde f_i\}_{i\in I}, P, \nu )$ be a representative of an orbifold map from $(Q,\mathcal U)$ to $(Q',\mathcal U')$. Suppose that $\mathcal V= \{ (V_i,G_i,\pi_i) \mid i\in I\}$ is the domain atlas of $\hat f$; it is an orbifold atlas of $(Q,\mc U)$ indexed by $I$. Let $\mathcal V'$ be an orbifold atlas of $(Q',\mc U')$ which contains the range family $\big\{ (V'_i, G'_i, \pi'_i) \ \big\vert\ i\in I \big\}$.
Define the map $\varphi_0\colon \Gamma(\mathcal V)_0 \to \Gamma(\mathcal V')_0$ by \[ \varphi_0 \mathrel{\mathop:}= \coprod_{i\in I}\tilde f_i.\] Suppose that $\varphi_1\colon \Gamma(\mathcal V)_1 \to \Gamma(\mathcal V')_1$ is determined by \[ \varphi_1(\germ_x\lambda) \mathrel{\mathop:}= \germ_{\varphi_0(x)}\nu(\lambda)\] for all $\lambda\in P$, $x\in\dom\lambda$. Then \[ \varphi = (\varphi_0,\varphi_1) \colon \Gamma(\mathcal V) \to \Gamma(\mathcal V') \] is a homomorphism. Moreover, $\alpha_{\mathcal V'}\circ |\varphi| = f\circ \alpha_{\mathcal V}$. \end{prop} \begin{proof} Obviously, $\varphi_0$ is smooth. To show that $\varphi_1$ is a well-defined map on all of $\Gamma(\mc V)_1$, let $g\in\Psi(\mathcal V)$ and $x\in \dom g$. Then there exists $\lambda\in P$ such that $x\in\dom\lambda$ and \[ g\vert_U = \lambda\vert_U\] for some open subset $U\subseteq \dom g \cap \dom \lambda$ with $x\in U$. Hence $\germ_x g = \germ_x\lambda$. So \[ \varphi_1(\germ_x g) = \varphi_1(\germ_x\lambda) = \germ_{\varphi_0(x)}\nu(\lambda).\] If there is $\mu\in P$ such that $x\in\dom\mu$ and $g\vert_W = \mu\vert_W$ for some open subset $W$ of $\dom g \cap \dom \mu$ with $x\in W$, then $\germ_x \mu= \germ_x\lambda$. By (\apref{R}{welldefined}{}), $\germ_{\varphi_0(x)}\nu(\mu) = \germ_{\varphi_0(x)}\nu(\lambda)$ and thus \[ \varphi_1(\germ_x \mu) = \varphi_1(\germ_x\lambda).\] This shows that $\varphi_1$ is indeed well-defined on all of $\Gamma(\mathcal V)_1$. The properties (\apref{R}{invariant}{}), (\apref{R}{mult_pg}{}) and (\apref{R}{unit_pg}{}) yield that $\varphi$ commutes with the structure maps. It remains to show that $\varphi_1$ is smooth. For this, let $\germ_x\lambda\in \Gamma(\mathcal V)_1$ with $\lambda\in P$. 
The definition of $\nu$ shows that $\varphi_1$ maps \[ U\mathrel{\mathop:}= \{ \germ_y\lambda\mid y\in \dom\lambda\}\] to \[ U'\mathrel{\mathop:}= \{ \germ_z\nu(\lambda) \mid z\in\dom\nu(\lambda) \}.\] Now the diagram \[ \xymatrix{ U \ar[d]_s\ar[r]^{\varphi_1} & U'\ar[d]^s && \germ_y\lambda \ar@{|->}[r] \ar@{|->}[d] & \germ_{\varphi_0(y)}\nu(\lambda)\ar@{|->}[d] \\ \dom\lambda \ar[r]^{\varphi_0} & \dom\nu(\lambda) && y \ar@{|->}[r] & \varphi_0(y) } \] commutes, the vertical maps (restriction of source maps) are diffeomorphisms and $\varphi_0$ is smooth, so $\varphi_1$ is smooth. Finally, suppose $x\in V_i$. Then \begin{align*} \left(\alpha_{\mathcal V'}\circ|\varphi|\right)\big([x]\big) & = \alpha_{\mathcal V'}\big([\varphi_0(x)]\big) = \pi'_i(\tilde f_i(x)) = f(\pi_i(x)) = \left(f\circ \alpha_{\mathcal V}\right)\big([x]\big). \end{align*} This completes the proof. \end{proof} \begin{example}\label{differenthomoms} Recall the setting of Example~\ref{Pfinite} and the associated groupoid $\Gamma\mathrel{\mathop:}=\Gamma(\mathcal V_1)$ from Example~\ref{orbexample}. The homomorphism $\varphi=(\varphi_0,\varphi_1)$ of $\Gamma$ induced by $(f,\tilde f, P, \nu_1)$ is $\varphi_0 = \tilde f$ and \[ \varphi_1( \germ_x(\pm \id_{(-1,1)}) ) = \germ_0 \id_{(-1,1)}.\] The homomorphism $\psi=(\psi_0,\psi_1)\colon\Gamma\to \Gamma$ induced by $(f,\tilde f, P,\nu_2)$ is $\psi_0=\tilde f$ and \begin{align*} \psi_1(\germ_x\id_{(-1,1)}) & = \germ_0 \id_{(-1,1)}, \\ \psi_1(\germ_x(-\id_{(-1,1)}) ) & = \germ_0(-\id_{(-1,1)}). \end{align*} \end{example} The following proposition is the converse to Proposition~\ref{orb_gr}. Its proof is constructive. In Section~\ref{atlascategory} we will use this construction to define the functor between the category of orbifolds and that of marked atlas groupoids. 
\begin{prop}\label{inducedorbgr} Let $\mathcal V$ be a representative of $\mc U$, $\mathcal V'$ a representative of $\mc U'$, and \[ \varphi = (\varphi_0,\varphi_1)\colon \Gamma(\mathcal V) \to \Gamma(\mathcal V') \] a homomorphism. Then $\varphi$ induces a representative of an orbifold map \[ (f, \{\tilde f_i\}_{i\in I}, P, \nu) \] with domain atlas $\mathcal V$, range family contained in $\mathcal V'$, and \[ \tilde f_i = \varphi_0\vert_{V_i}\] for all $i\in I$. Moreover, we have $f= \alpha_{\mathcal V'}\circ|\varphi|\circ \alpha_{\mathcal V}^{-1}$. \end{prop} \begin{proof} We start by showing that for each $f\in\Psi(\mathcal V)$ and each $x\in\dom f$ there exist an element $g\in\Psi(\mathcal V')$ and an open neighborhood $U$ of $x$ (which may depend on $g$) with $U\subseteq \dom f$ such that for each $y\in U$ we have \begin{equation}\label{thickidentity} \varphi_1(\germ_y f) = \germ_{\varphi_0(y)} g. \end{equation} Let $f\in\Psi(\mc V)$ and $x\in \dom f$. By definition of $\Gamma(\mathcal V)_1$ and $\varphi_1$, there exists $g\in\Psi(\mathcal V')$ such that \[ \varphi_1(\germ_x f) = \germ_{\varphi_0(x)} g.\] Since $\varphi_1$ is continuous, the preimage of the $\germ_{\varphi_0(x)} g$--neighborhood \[ U'_g = \{ \germ_z g\mid z\in\dom g \} \] is a neighborhood of $\germ_x f$. Hence there exists an open neighborhood $U$ of $x$ with $U\subseteq \dom f$ such that \[ U_{f\vert U} = \{ \germ_y f\mid y\in U\} \subseteq \varphi_1^{-1}\left(U'_g\right).\] Thus, for all $y\in U$ we have \eqref{thickidentity} as claimed. We remark that any two possible choices for $g$ coincide on some neighborhood of $\varphi_0(x)$. For each $f\in\Psi(\mc V)$ and each $x\in\dom f$ we now choose a pair $(g,U)$ where $g\in\Psi(\mc V')$ is an embedding between some orbifold charts in $\mc U'$ and $U$ is an open neighborhood of $x$ such that $f\vert_U$ is a change of charts of $\mc V$. Let $P(f,x)\mathrel{\mathop:}= (g,U)$.
We adjust choices such that for $f_1,f_2\in\Psi(\mc V)$ and $x_1\in\dom f_1$, $x_2\in\dom f_2$ the chosen pairs $P(f_1,x_1) = (g_1,U_1)$ resp.\@ $P(f_2,x_2) = (g_2,U_2)$ are either equal or satisfy $U_1\not=U_2$. Let $P$ denote the family of the changes of charts we have chosen in this way: \[ P = \{ f\vert_U\colon U \to f(U)\mid f\in\Psi(\mc V),\ x\in\dom f,\ P(f,x) = (g,U)\}. \] By construction, $P$ is a quasi-pseudogroup which generates $\Psi(\mathcal V)$. We define the map $\nu\colon P \to \Psi(\mc V')$ by \[ \nu(\lambda) \mathrel{\mathop:}= g\] where $g$ is the unique element in $\Psi(\mc V')$ attached to $\lambda\in P$ by our choices. For $\lambda\in P$ and $x\in\dom\lambda$ we clearly have \begin{equation} \label{welldefinedgerms} \varphi_1(\germ_x \lambda) = \germ_{\varphi_0(x)}\nu(\lambda). \end{equation} Properties~(\apref{R}{nu}{}) are easily checked using the compatibility of $\varphi$ with the structure maps. It remains to show that the image of $\varphi_0\vert_{V_i}$ is contained in $V'_j$ for some orbifold chart $(V'_j, G'_j, \pi'_j)\in\mathcal V'$. Since $V_i$ is connected, the image $\varphi_0(V_i)$ is connected as well. The connected components of $\Gamma(\mc V')_0$ are exactly the sets $W'$ with $(W',G',\varphi')\in \mc V'$. From this the claim follows. \end{proof} Proposition~\ref{inducedorbgr} guarantees that each homomorphism \[ \varphi = (\varphi_0,\varphi_1) \colon \Gamma(\mathcal V) \to \Gamma(\mathcal V') \] induces a representative of an orbifold map $(f, \{\tilde f_i\}_{i\in I}, P,\nu)$ with domain atlas $\mathcal V$, range family contained in $\mathcal V'$, $\tilde f_i = \varphi_0\vert_{V_i}$, and $f=\alpha_{\mc V'}\circ|\varphi|\circ\alpha_{\mc V}^{-1}$. For the pair $(P,\nu)$, Proposition~\ref{inducedorbgr} allows (in general) many different choices. On the other hand, different representatives of an orbifold map may induce the same groupoid homomorphism.
In view of Proposition~\ref{orb_gr} and the proof of Proposition~\ref{inducedorbgr}, the relevant information stored by the pair $(P,\nu)$ consists of the germs of the elements in $P$ and the germs of the elements of $\Psi(\mathcal V')$ associated to them via $\nu$. This observation is the motivation for the equivalence relation in the following definition. \begin{defi}\label{Pnu_equiv} Let \[ \hat f \mathrel{\mathop:}= (f, \{\tilde f_i\}_{i\in I}, P_1, \nu_1)\quad\text{and}\quad \hat g\mathrel{\mathop:}= (g,\{\tilde g_i\}_{i\in I}, P_2,\nu_2) \] be two representatives of orbifold maps with the same domain atlas $\mathcal V$ representing the orbifold structure $\mc U$ on $Q$ and both range families being contained in the orbifold atlas $\mathcal V'$ of $(Q',\mc U')$. Set $\psi\mathrel{\mathop:}=\coprod_{i\in I}\tilde f_i$. We say that $\hat f$ is \textit{equivalent} to $\hat g$ if $f=g$, $\tilde f_i = \tilde g_i$ for all $i\in I$, and \[ \germ_{\psi(x)} \nu_1(\lambda_1) = \germ_{\psi(x)}\nu_2(\lambda_2)\] for all $\lambda_1\in P_1$, $\lambda_2\in P_2$, $x\in\dom \lambda_1 \cap \dom\lambda_2$ with $\germ_x\lambda_1 = \germ_x\lambda_2$. This defines an equivalence relation. The equivalence class of $\hat f$ will be denoted by $[\hat f]$ or \[ (f, \{\tilde f_i\}_{i\in I}, [(P_1, \nu_1)]), \] or even $\hat f$ if it is clear that we refer to the equivalence class. It is called an \textit{orbifold map with domain atlas $\mathcal V$ and range atlas $\mathcal V'$}, in short an \textit{orbifold map with $(\mathcal V,\mathcal V')$} or, if the specific orbifold atlases are not important, a \textit{charted orbifold map}. The set of all orbifold maps with $(\mathcal V,\mathcal V')$ is denoted by $\Orbmap(\mathcal V,\mathcal V')$. For convenience we often denote an element $\hat h\in \Orbmap(\mathcal V,\mathcal V')$ by \[ \mc V \stackrel{\hat h}{\longrightarrow} \mc V'. \] \end{defi} Propositions~\ref{orb_gr} and \ref{inducedorbgr} yield the following statement, whose proof we omit.
\begin{prop}\label{conclusion} Let $\mathcal V$ be a representative of $\mc U$, and $\mathcal V'$ a representative of $\mc U'$. Then the set $\Orbmap(\mathcal V,\mathcal V')$ of all orbifold maps with $(\mathcal V, \mathcal V')$ and the set $\Hom(\Gamma(\mathcal V),\Gamma(\mathcal V'))$ of all homomorphisms from $\Gamma(\mathcal V)$ to $\Gamma(\mathcal V')$ are in bijection. More precisely, the construction in Proposition~\ref{orb_gr} induces a bijection \[ F_1\colon \Orbmap(\mathcal V,\mathcal V') \to \Hom(\Gamma(\mathcal V),\Gamma(\mathcal V')), \] and the construction in Proposition~\ref{inducedorbgr} defines a bijection \[ F_2\colon \Hom(\Gamma(\mathcal V),\Gamma(\mathcal V')) \to \Orbmap(\mathcal V,\mathcal V'),\] which is inverse to $F_1$. \end{prop} \section{Introduction} The purpose of this article is to propose a definition of the category of reduced (smooth) orbifolds, and a definition of an isomorphic category in terms of a certain kind of Lie groupoid. In both categories, the morphisms are given explicitly. In the orbifold category morphisms are defined via local charts and maps between these charts. In the groupoid category morphisms are described as certain equivalence classes of groupoid homomorphisms. Moreover, the isomorphism functor between the two categories is given explicitly. Given a reduced orbifold and an orbifold atlas representing its orbifold structure, it is well known how to explicitly construct a proper effective foliation groupoid (an orbifold groupoid) from these data (see, \eg Haefliger \cite{Haefliger_orbifold} or the book by Moerdijk and Mr\v{c}un \cite{Moerdijk_Mrcun}). Over the years various authors (in particular, Moerdijk \cite{Moerdijk_survey}, Pronk~\cite{Pronk}) used this link to give a definition of a category of orbifolds by proposing a definition of a category of orbifold groupoids, either as a 2-category or as a bicategory of fractions.
Lerman \cite{Lerman_survey} provides a very good discussion of these approaches. They all have in common that the morphisms in the orbifold category are only given implicitly. Moreover, all the proposed groupoid categories are only equivalent, not isomorphic, to the orbifold category. This is caused by the fact that the construction mentioned above assigns the same groupoid to different (but isomorphic) orbifolds, and conversely various (Morita equivalent) groupoids to the same orbifold. Moerdijk and Pronk~\cite{Moerdijk_Pronk} show that isomorphism classes of orbifolds correspond to Morita equivalence classes of orbifold groupoids. For many investigations about orbifolds, an equivalence of categories suffices to translate the problem to groupoids. However, one cannot evaluate e.g.\@ the diffeomorphism group of an orbifold using any of the groupoid categories. Moreover, for any of these categories an intrinsic description of the orbifold morphisms (that is, in terms of local charts) is missing. For this one needs, as a first step, a characterization of (classical) groupoid homomorphisms in local charts. Unfortunately, the characterization given by Lupercio and Uribe \cite{Lupercio_Uribe} (which to the knowledge of the author is the only attempt in the existing literature) is flawed. In this article we provide a correct characterization. After that we use the arising maps between local charts to define a geometrically motivated notion of orbifold maps. Then we characterize orbifolds and orbifold maps in terms of groupoids and groupoid homomorphisms. This enables us to define a category in terms of groupoids (which is not the classical category of groupoids) that is isomorphic to the category formed by reduced orbifolds with orbifold maps as morphisms.
We start by recalling briefly the necessary background material on orbifolds, groupoids, pseudogroups, and the well-known construction of a groupoid from an orbifold and an orbifold atlas representing its orbifold structure. Groupoids which arise in this way will be called \textit{atlas groupoids}. To overcome the problem that different orbifolds are identified with the same atlas groupoid we introduce, in Section~\ref{sec_marked}, a certain marking of atlas groupoids. It consists in attaching to an atlas groupoid a certain topological space and a certain homeomorphism between its orbit space and the topological space. The general concept of marking already appeared in \cite{Moerdijk_survey}. There, however, the relation between a marking of a groupoid and an orbifold atlas (in local charts) is not discussed. The specific marking of an atlas groupoid introduced here allows us to recover the orbifold. There is an obvious notion of homomorphisms between marked atlas groupoids. In Section~\ref{sec_homom} we characterize these homomorphisms in local charts. On the orbifold side, this characterization involves the choice of representatives of the orbifold structures, namely those orbifold atlases which were used to construct the marked atlas groupoids. Hence, at this point we get a notion of orbifold map with fixed representatives of orbifold structures, which we will call \textit{charted orbifold maps}. In Section~\ref{sec_redorbcat} we introduce a natural definition of composition of charted orbifold maps and a geometrically motivated definition of the identity morphism (a certain class of charted orbifold maps), which allows us to establish a natural equivalence relation on the class of charted orbifold maps. An orbifold map (which does not depend on the choice of orbifold atlases) is then an equivalence class of charted orbifold maps.
The leading idea for this equivalence relation is geometric: we consider charted orbifold maps as equivalent if and only if they induce the same charted orbifold map on common refinements of the orbifold atlases. Moreover, using the same idea, we define the composition of orbifold maps. In this way, we construct a category of reduced orbifolds. Finally, in Section~\ref{atlascategory}, we characterize orbifolds as certain equivalence classes of marked atlas groupoids, and orbifold maps as equivalence classes of homomorphisms of marked atlas groupoids. These equivalence relations are natural adaptations of the classical Morita equivalence. In this way, there arises a category of marked atlas groupoids which is isomorphic to the orbifold category. As an additional benefit, the isomorphism functor is constructive. We expect that the constructed category of marked atlas groupoids is isomorphic to a category whose class of objects consists of equivalence classes of all marked proper effective foliation groupoids and whose morphisms are given by certain equivalence classes of groupoid homomorphisms. \textit{Acknowledgments:} This work emerged from a workshop on orbifolds in 2007 which took place in Paderborn in the framework of the International Research Training Group 1133 ``Geometry and Analysis of Symmetries''. The author is very grateful to the participants of this workshop for their abiding interest and enlightening conversations. In particular, she would like to thank Joachim Hilgert for a very careful reading of the manuscript and for many helpful comments. Moreover, she wishes to thank Dieter Mayer and the Institut f\"ur Theoretische Physik in Clausthal, where part of this work was conducted, for their warm hospitality.
The author was partially supported by the International Research Training Group 1133 ``Geometry and Analysis of Symmetries'', the Sonderforschungsbereich/Transregio 45 ``Periods, moduli spaces and arithmetic of algebraic varieties'', the Max-Planck-Institut f\"ur Mathematik in Bonn, and the SNF (200021-127145). \textit{Notation and conventions:} We use $\N_0 = \N \cup \{0\}$ to denote the set of non-negative integers. If not stated otherwise, every manifold is assumed to be real, second-countable, Hausdorff and smooth ($C^\infty$). If $M$ is a manifold, then $\Diff(M)$ denotes the group of diffeomorphisms of $M$. If $G$ is a subgroup of $\Diff(M)$, then $G\backslash M$ denotes the space of orbits $\{ Gx \mid x\in M\}$ endowed with the final topology. If $A_1, A_2, B$ are sets (manifolds) and $f_1\colon A_1\to B$, $f_2\colon A_2\to B$ are maps (submersions), then we denote the fibered product of $f_1$ and $f_2$ by $A_1\pullback{f_1}{f_2}A_2$ and identify it with the set (manifold) \[ A_1\pullback{f_1}{f_2}A_2 = \{ (a_1,a_2) \in A_1\times A_2 \mid f_1(a_1) = f_2(a_2)\}. \] Finally, we say that a family $\mc V = \{ V_i \mid i\in I\}$ is \textit{indexed by $I$} if $I \to \mc V$, $i\mapsto V_i$, is a bijection. \subsection{The identity morphism}\label{liftofid} \begin{defirem} Let $(Q,\mathcal U)$ and $(Q',\mathcal U')$ be orbifolds and let $f\colon Q\to Q'$ be a continuous map. Suppose that $\tilde f$ is a local lift of $f$ \wrt the orbifold charts $(V,G,\pi)\in \mathcal U$ and $(V', G',\pi') \in \mathcal U'$. Further suppose that \[ \lambda \colon (W,K,\chi) \to (V,G,\pi)\quad\text{and}\quad \mu\colon (W',K',\chi')\to (V',G',\pi') \] are open embeddings between orbifold charts in $\mathcal U$ resp.\@ in $\mathcal U'$ such that \[ \tilde f(\lambda (W) ) \subseteq \mu(W'). \] Then the map \[ \tilde g\mathrel{\mathop:}= \mu^{-1}\circ\tilde f\circ \lambda\colon W \to W' \] is a local lift of $f$ \wrt $(W,K,\chi)$ and $(W',K',\chi')$.
We say that $\tilde f$ \textit{induces the local lift $\tilde g$ \wrt $\lambda$ and $\mu$}, and we call $\tilde g$ the \textit{induced lift of $f$ \wrt $\tilde f$, $\lambda$ and $\mu$.} \[ \xymatrix{ & V \ar[r]^{\tilde f} & V' \\ & \lambda(W) \ar[r]^{\tilde f\vert_{\lambda(W)}} \ar@{^{(}->}[u] & \mu(W') \ar@{_{(}->}[u] \\ W \ar[uur]^{\lambda}\ar[rrr]^{\tilde g} \ar@{>->>}[ur]_{\lambda} &&& W' \ar[uul]_{\mu} \ar@{>->>}[ul]^{\mu} } \] \end{defirem} Suppose that $\tilde f$ is a local lift of the identity $\id_Q$ for some orbifold $(Q,\mathcal U)$. Proposition~\ref{induceslifts} below shows that $\tilde f$ induces the identity on sufficiently small orbifold charts. This means that locally $\tilde f$ is related to the identity itself via open embeddings. In particular, $\tilde f$ is a local diffeomorphism. For its proof we need the following lemma, which is easily shown. \begin{lemma}\label{SIN} Let $M$ be a manifold, $G$ a finite subgroup of $\Diff(M)$, and $x\in M$. There exist arbitrarily small open $G$--stable neighborhoods $S$ of $x$. Moreover, one can choose $S$ so small that $G_S=G_x$, the isotropy group of $x$. \end{lemma} \begin{prop}\label{induceslifts} Let $(Q,\mathcal U)$ be an orbifold and suppose that $\tilde f$ is a local lift of $\id_Q$ \wrt $(V,G,\pi)$, $(V',G',\pi')\in \mathcal U$. For each $v\in V$ there exist a restriction $(S, G_S, \pi\vert_S)$ of $(V,G,\pi)$ with $v\in S$ and a restriction $(S', (G')_{S'}, \pi'\vert_{S'})$ of $(V', G', \pi')$ such that $\tilde f\vert_S$ is an isomorphism from $(S, G_S, \pi\vert_S)$ to $(S', (G')_{S'}, \pi'\vert_{S'})$. In particular, $\tilde f\vert_S$ induces the identity $\id_S$ \wrt $\id_S$ and $\left(\tilde f\vert_S\right)^{-1}$. \end{prop} \begin{proof} Let $v\in V$ and set $v'\mathrel{\mathop:}= \tilde f(v)$. Then $\pi(v) = \pi'(v')$.
By compatibility of orbifold charts there exist a restriction $(W,H,\chi)$ of $(V,G,\pi)$ with $v\in W$ and an open embedding $\lambda \colon (W,H,\chi) \to (V',G',\pi')$ such that $\lambda(v) = v'$. Lemma~\ref{SIN} yields an open $H$--stable neighborhood $S$ of $v$ with $S\subseteq \tilde f^{-1}(\lambda(W))\cap W$. Let \[ \tilde g \mathrel{\mathop:}= \lambda^{-1}\circ \tilde f\vert_S \colon S\to W\] denote the induced lift of $\id_Q$. Since $\chi\circ\tilde g = \chi$, \cite[Lemma~2.11]{Moerdijk_Mrcun} shows the existence of a unique $h\in H$ such that $\tilde g = h\vert_S$. Thus $\tilde f\vert_S= \lambda\circ h\vert_S\colon S\to \lambda(h(S))$ is a diffeomorphism. In turn \[\tilde f\vert_S\colon (S, H_S, \chi\vert_S) \to (\tilde f(S), G'_{\tilde f(S)}, \pi'\vert_{\tilde f(S)})\] is an isomorphism of orbifold charts. \end{proof} Not every local lift of the identity is a global diffeomorphism, as the following example shows. \begin{example}\label{localliftnotdiffeom} Let $Q$ be the open annulus in $\R^2$ with inner radius $1$ and outer radius $2$ centered at the origin, \ie \[ Q =\{ w\in \C \mid 1 < |w| < 2 \}. \] The map $\alpha\colon Q \to \C \times \R$, \[ \alpha(w) \mathrel{\mathop:}= \left( \frac{w^2}{|w|^2}, |w| -1 \right) \] maps $Q$ onto the cylinder $Z \mathrel{\mathop:}= S^1 \times (0,1)$. Note that $\alpha(Q)$ covers $Z$ twice. Then the map $\beta\colon Z\to \C$, \[ \beta(z, s) \mathrel{\mathop:}= \frac{2}{2-s}z\] is the linear projection of $Z$ from the point $(0,2)\in \C\times \R$ to the complex plane. The composed map $\tilde f = \beta\circ\alpha\colon Q\to \C$, \[ \tilde f(w) \mathrel{\mathop:}= \frac{2w^2}{(3-|w|) |w|^2} \] is smooth (where we use $\C=\R^2$) and maps $Q$ onto $Q$. Further, it induces a homeomorphism between $Q/\{\pm \id\}$ and $Q$.
Hence, if we endow $Q$ with the orbifold atlas \[ \big\{ \big(Q, \{\pm \id \}, \tilde f\big), \big(Q,\{\id\}, \id \vphantom{\tilde f}\big) \big\},\] then $\tilde f$ is a local lift of $\id_Q$ \wrt $(Q,\{\pm\id\},\tilde f)$ and $(Q,\{\id\}, \id)$ but not a global diffeomorphism. \end{example} \begin{prop}\label{extending} Let $(Q,\mc U)$ be an orbifold and $\{ \tilde f_i \}_{i\in I}$ a family of local lifts of $\id_Q$ which satisfies (\apref{R}{liftings}{}). Then there exists a pair $(P,\nu)$ such that $(\id_Q, \{\tilde f_i\}_{i\in I}, P, \nu)$ is a representative of an orbifold map on $(Q,\mc U)$. The pair $(P,\nu)$ is unique up to equivalence of representatives of orbifold maps. \end{prop} \begin{proof} This follows immediately from Proposition~\ref{induceslifts} in combination with (\apref{R}{invariant}{}). \end{proof} \begin{prop}\label{samestructure} Let $Q$ be a topological space and suppose that $\mathcal U$ and $\mathcal U'$ are orbifold structures on $Q$. Let \[ \hat f = \big( f, \{ \tilde f_i\}_{i\in I}, [P,\nu] \big) \] be a charted orbifold map for which $f=\id_Q$, the domain atlas $\mathcal V$ is a representative of $\mathcal U$, the range family $\mathcal V'$, which here is an orbifold atlas, is a representative of $\mathcal U'$, and for each $i\in I$, the map $\tilde f_i$ is a local diffeomorphism. Then $\mathcal U=\mathcal U'$. \end{prop} \begin{proof} Let $(V_i, G_i, \pi_i) \in\mathcal V$, $(V'_j, G'_j, \pi'_j) \in \mathcal V'$ and $x\in V_i$, $y\in V'_j$ such that $\pi_i(x) = \pi'_j(y)$. Since $\tilde f_i \colon V_i \to V'_i$ is a local diffeomorphism, there are open neighborhoods $U$ of $x$ in $V_i$ and $U'$ of $\tilde f_i(x)$ in $V'_i$ such that $\tilde f_i\vert_U\colon U\to U'$ is a diffeomorphism. We have \[ \pi'_i\big(\tilde f_i(x)\big) = \pi_i(x) = \pi'_j(y).\] Therefore there exist open neighborhoods $W$ of $\tilde f_i(x)$ in $U'$ and $W'$ of $y$ in $V'_j$ and a diffeomorphism $h\colon W\to W'$ satisfying $\pi'_j\circ h= \pi'_i$. 
Shrinking $U$ shows that $(V_i, G_i, \pi_i)$ and $(V'_j, G'_j, \pi'_j)$ are compatible. Thus $\mc U = \mc U'$. \end{proof} The following example shows that the requirement in Proposition~\ref{samestructure} that the local lifts be local diffeomorphisms is essential. \begin{example} Recall the orbifolds $(Q,\mc U_i)$, $i=1,2$, from Example~\ref{notcompatible}, the representatives $\mc V_1 \mathrel{\mathop:}= \{V_1\}$ and $\mc V_2 \mathrel{\mathop:}= \{V_2\}$ of $\mc U_1$ resp.\@ $\mc U_2$, and set $g(x) \mathrel{\mathop:}= x^2$ for $x\in (-1,1)$. Then $g$ is a lift of $\id_Q$ \wrt $V_2$ and $V_1$. Further let \[ P\mathrel{\mathop:}= \{ \pm \id_{(-1,1)}\}\quad\text{and}\quad \nu( \pm \id_{(-1,1)}) \mathrel{\mathop:}= \id_{(-1,1)}. \] Then $(\id_Q, \{g\}, [P,\nu])$ is an orbifold map with $(\mc V_2, \mc V_1)$ from $(Q,\mc U_2)$ to $(Q,\mc U_1)$, but $\mc U_1 \not= \mc U_2$. \end{example} Motivated by Propositions~\ref{extending} and \ref{samestructure} we make the following definition. \begin{defi}\label{liftiddef} Let $(Q,\mc U)$ be an orbifold and let $\hat f = (f, \{\tilde f_i\}_{i\in I}, [P,\nu])$ be a charted orbifold map whose domain atlas is a representative of $\mc U$. We call $\hat f$ a \textit{lift of the identity} $\id_{(Q,\mc U)}$, or a \textit{representative} of $\id_{(Q,\mc U)}$, if and only if $f=\id_Q$ and $\tilde f_i$ is a local diffeomorphism for each $i\in I$. The set of all lifts of $\id_{(Q,\mc U)}$ is the \textit{identity morphism} $\id_{(Q,\mc U)}$ of $(Q,\mathcal U)$. \end{defi} \section{Marked Lie groupoids and their homomorphisms}\label{sec_marked} In Example~\ref{orbexample} we have seen that it may happen that the same atlas groupoid is associated to two different orbifolds. The reason for this is that in the definition of the pseudogroup which is needed for the construction of the atlas groupoid one loses information about the projection maps $\varphi$ of the orbifold charts $(V,G,\varphi)$.
To be able to distinguish atlas groupoids constructed from different orbifolds, we mark the groupoids with a topological space and a homeomorphism. It will turn out that this marking suffices to identify the orbifold one started with. \begin{defi} A \textit{marked Lie groupoid} is a triple $(G, \alpha , X)$ consisting of a Lie groupoid $G$, a topological space $X$, and a homeomorphism $\alpha\colon|G| \to X$. \end{defi} The following proposition proves the existence of a particular marking of an atlas groupoid. This marking is crucial for the isomorphism between the categories. \begin{prop}\label{atlashomeom} Let $(Q,\mathcal U)$ be an orbifold and \[ \mathcal V = \{ (V_i,G_i,\pi_i) \mid i\in I\} \] a representative of $\mc U$ indexed by $I$. Set $V\mathrel{\mathop:}= \coprod_{i\in I} V_i$ and $\pi\mathrel{\mathop:}= \coprod_{i\in I} \pi_i \colon V\to Q$. Then the map \[ \alpha \colon \left\{ \begin{array}{ccc} |\Gamma(\mathcal V)| & \to & Q \\{} [x] & \mapsto & \pi(x) \end{array} \right. \] is a homeomorphism. \end{prop} \begin{proof} To show that $\alpha$ is well-defined, suppose $[x_1] = [x_2]$. Then there is an arrow $x_1 \to x_2$. Hence there exists $f\in \Psi(\mathcal V)$ such that $x_1\in \dom f$ and $f(x_1) = x_2$. From this it follows that $\pi(x_1) = \pi(f(x_1)) = \pi(x_2)$. Obviously, $\alpha$ is surjective. For the proof of injectivity let $\pi(x_1) = \pi(x_2)$ for some $x_1,x_2\in V$. Then there are orbifold charts $(V_i, G_i, \pi_i)\in\mathcal V$ with $x_i\in V_i$ ($i=1,2$). By compatibility of these orbifold charts there is $f\in\Psi(\mathcal V)$ such that $x_1\in\dom f$ and $f(x_1)=x_2$. This means that $\germ_{x_1}f \colon x_1\to x_2$ is an element of $\Gamma(\mathcal V)_1$. Thus, $[x_1] = [x_2]$. Let $\pr\colon V \to |\Gamma(\mc V)|$ be the canonical quotient map on the orbit space. Then $\alpha\circ\pr=\pi$. One easily proves that $\pi$ is continuous and open. From this it follows that $\alpha$ is a homeomorphism. 
\end{proof} Let $(Q,\mathcal U)$ be an orbifold. To each orbifold atlas $\mathcal V$ of $Q$ we assign the marked atlas groupoid $(\Gamma(\mathcal V), \alpha_{\mathcal V}, Q)$ with $\alpha_{\mathcal V}$ being the homeomorphism from Proposition~\ref{atlashomeom}. We often only write $\Gamma(\mathcal V)$ to refer to this marked groupoid. \begin{example} Recall from Example~\ref{orbexample} the orbifolds $(Q,\mathcal U_i)$ for $i=1,2$, their respective orbifold atlases $\mathcal V_i$, and the associated grou\-poids $\Gamma=\Gamma(\mathcal V_i)$. The orbit of $x\in \Gamma_0$ is $\{x,-x\}$. Hence the homeomorphism associated to $(Q,\mc U_i)$ is $\alpha_{\mathcal V_i}\colon |\Gamma| \to Q$ given by $\alpha_{\mathcal V_1}([x]) = |x|$ resp.\@ $\alpha_{\mathcal V_2}([x]) = x^2$. Thus, the associated marked groupoids $(\Gamma, \alpha_{\mc V_1},Q)$ and $(\Gamma,\alpha_{\mc V_2},Q)$ are different. \end{example} \begin{prop}\label{backorb} Let $(Q,\mc U)$ and $(Q',\mc U')$ be orbifolds. Suppose that $\mc V$ is a representative of $\mc U$, and $\mc V'$ a representative of $\mc U'$. If the associated marked atlas groupoids $(\Gamma(\mc V), \alpha_{\mc V}, Q)$ and $(\Gamma(\mc V'), \alpha_{\mc V'}, Q')$ are equal, then the orbifolds $(Q,\mc U)$ and $(Q',\mc U')$ are equal. More precisely, we even have $\mc V = \mc V'$. \end{prop} \begin{proof} Clearly, $Q=Q'$. Suppose that \[ \mc V = \{ (V_i, G_i, \pi_i) \mid i\in I\} \quad\text{and}\quad \mc V' = \{ (V'_j, G'_j, \pi'_j) \mid j\in J\}, \] indexed by $I$ resp.\@ by $J$. From $\Gamma(\mc V) = \Gamma(\mc V')$ it follows that \[ \coprod_{i\in I} V_i = \Gamma(\mc V)_0 = \Gamma(\mc V')_0 = \coprod_{j\in J} V'_j. \] Since each $V_i$ and each $V'_j$ is connected, there is a bijection between $I$ and $J$. We may assume $I=J$ and $V_i = V'_i$ for all $i\in I$. Let $x\in V_i$. Then $\pi_i(x) = \alpha_{\mc V}([x]) = \alpha_{\mc V'}([x]) = \pi'_i(x)$. Therefore $\pi_i=\pi'_i$ for all $i\in I$. Thus $\Psi(\mc V) = \Psi(\mc V')$. 
Moreover \[ G_i = \{ f\in \Psi(\mc V) \mid \dom f= V_i = \cod f\} = G'_i. \] To show that the actions of $G_i$ and $G'_i$ on $V_i$ are equal, let $g\in G_i$. For each $x\in V_i$ we have $\pi'_i( g(x)) = \pi'_i(x)$. This shows that $g(x) \in G'_ix$ for each $x\in V_i$. By \cite[Lemma~2.11]{Moerdijk_Mrcun} there exists a unique element $g'\in G'_i$ such that $g=g'$. From this it follows that $G_i = G'_i$ as acting groups. Thus, $\mc V = \mc V'$. \end{proof} \begin{lemma} Let $(G,\alpha, X)$ and $(H,\beta, Y)$ be marked Lie groupoids and suppose that $\varphi = (\varphi_0,\varphi_1)\colon G\to H$ is a homomorphism of Lie groupoids. Then $\varphi$ induces a unique map $\psi$ such that the diagram \[ \xymatrix{ G_0 \ar[r]^{\pr_G} \ar[d]_{\varphi_0}& |G| \ar[r]^\alpha & X\ar[d]^\psi \\ H_0 \ar[r]^{\pr_H} & |H| \ar[r]^\beta & Y } \] commutes. Moreover, $\psi$ is continuous. \end{lemma} \begin{proof} The map $\varphi$ induces a unique map $|\varphi|\colon |G|\to |H|$ such that $|\varphi|\circ \pr_G = \pr_H\circ \varphi_0$, which is continuous. Then $\psi =\beta\circ |\varphi| \circ \alpha^{-1}$. \end{proof} \subsection{Pseudogroups and groupoids} In this section we recall how to construct a Lie groupoid from an orbifold and a representative of its orbifold structure. This construction is well known, see e.g.\@ the book by Moerdijk and Mr\v{c}un \cite{Moerdijk_Mrcun}. We provide it here for convenience of the reader and to introduce the notations we will use later on. It is a two-step process in which one first assigns a pseudogroup to the orbifold, which depends on the representative of the orbifold structure. Then one constructs an \'{e}tale Lie groupoid from the pseudogroup. For reasons of generality and clarity we start with the second step. \begin{defi}\label{def_pseudogroup} Let $M$ be a manifold. A \textit{transition} on $M$ is a diffeomorphism $f\colon U\to V$ where $U,V$ are open subsets of $M$.
In particular, the empty map $\emptyset\to\emptyset$ is a transition on $M$. The \textit{product} of two transitions $f\colon U\to V$, $g\colon U'\to V'$ is the transition \[ f\circ g\colon g^{-1}(U\cap V') \to f(U\cap V'),\ x\mapsto f(g(x)).\] The \textit{inverse} of $f$ is the inverse of $f$ as a function. If $f\colon U\to V$ is a transition, we use $\dom f$ to denote its \textit{domain} and $\cod f$ to denote its \textit{codomain}. Further, if $x\in \dom f$, then $\germ_x f$ denotes the germ of $f$ at $x$. Let $\mathcal A(M)$ be the set of all transitions on $M$. A \textit{pseudogroup} on $M$ is a subset $P$ of $\mathcal A(M)$ which is closed under multiplication and inversion. A pseudogroup $P$ is called \textit{full} if $\id_U\in P$ for each open subset $U$ of $M$. It is said to be \textit{complete} if it is full and satisfies the following gluing property: Whenever there is a transition $f\in \mathcal A(M)$ and an open covering $(U_i)_{i\in I}$ of $\dom f$ such that $f\vert_{U_i}\in P$ for all $i\in I$, then $f\in P$. \end{defi} A Lie groupoid is called \textit{\'etale} if its source and target maps are local diffeomorphisms. We now recall how to construct an \'etale Lie groupoid from a full pseudogroup. \begin{construction}\label{constr_groupoid} Let $M$ be a manifold and $P$ a full pseudogroup on $M$. The \textit{associated groupoid} $\Gamma\mathrel{\mathop:}= \Gamma(P)$ is given by \[ \Gamma_0 \mathrel{\mathop:}= M,\quad \Gamma_1\mathrel{\mathop:}= \{ \germ_x f\mid f\in P,\ x\in\dom f\},\] and, in particular, \[ \Gamma(x,y) \mathrel{\mathop:}= \{ \germ_x f \mid f\in P,\ x\in\dom f,\ f(x) = y\}.\] For $f\in P$ define $U_f \mathrel{\mathop:}= \left\{ \germ_x f\left\vert\ x\in\dom f \vphantom{\germ_x f}\right.\right\}$.
The topology and differential structure of $\Gamma_1$ are given by the germ topology and germ differential structure, that is, for each $f\in P$ the bijection \[ \varphi_f\colon \left\{ \begin{array}{ccc} U_f & \to & \dom f \\ \germ_x f & \mapsto & x \end{array} \right. \] is required to be a diffeomorphism. The structure maps $(s,t,m,u,i)$ of $\Gamma$ are the obvious ones, namely \begin{align*} s(\germ_x f) & \mathrel{\mathop:}= x \\ t(\germ_x f) & \mathrel{\mathop:}= f(x) \\ m(\germ_{f(x)} g, \germ_x f) & \mathrel{\mathop:}= \germ_x(g\circ f) \\ u(x) & \mathrel{\mathop:}= \germ_x\id_U \text{ for an open neighborhood $U$ of $x$} \\ i(\germ_x f) & \mathrel{\mathop:}= \germ_{f(x)} f^{-1}. \end{align*} Obviously, $\Gamma(P)$ is an \'etale Lie groupoid. \end{construction} \begin{specialcase}\label{atlasgroupoid} Let $(Q,\mathcal U)$ be an orbifold, and let \[ \mathcal V = \{ (V_i, G_i, \pi_i) \mid i\in I \} \] be a representative of $\mc U$ indexed by $I$. We define \[ V\mathrel{\mathop:}= \coprod_{i\in I} V_i\quad\text{and}\quad \pi\mathrel{\mathop:}= \coprod_{i\in I}\pi_i.\] Then \[ \Psi(\mathcal V) \mathrel{\mathop:}= \big\{ \text{$f$ transition on $V$} \ \big\vert\ \pi\circ f = \pi\vert_{\dom f} \big\}\] is a complete pseudogroup on $V$. The associated groupoid \[ \Gamma(\mathcal V)\mathrel{\mathop:}= \Gamma(\Psi(\mathcal V)) \] is the \'etale Lie groupoid we shall associate to $Q$ and $\mathcal V$. Note that this groupoid depends on the choice of the representative of the orbifold structure $\mc U$ of $Q$. A groupoid which arises in this way is called an \textit{atlas groupoid}. \end{specialcase} \begin{example}\label{orbexample} Recall the orbifolds $(Q,\mathcal U_i)$ $(i=1,2)$ from Example~\ref{notcompatible}, and consider the representative $\mathcal V_i \mathrel{\mathop:}= \{V_i\}$ of $\mathcal U_i$.
Proposition~2.12 in \cite{Moerdijk_Mrcun} implies that \[ \Psi(\mathcal V_i) = \big\{ g\vert_U\colon U \to g(U) \ \big\vert\ \text{$U\subseteq (-1,1)$ open, $g\in \{\pm \id\}$} \big\}. \] In both cases the associated groupoid $\Gamma\mathrel{\mathop:}= \Gamma(\mathcal V_i)$ is \begin{align*} \Gamma_0 & = (-1,1) \\ \Gamma(x,y) & = \begin{cases} \big\{ \germ_0 \id, \germ_0 (-\id) \big\} & \text{if $x=0=y$} \\ \big\{ \germ_x \id \big\} & \text{if $x=y\not=0$} \\ \big\{ \germ_x (-\id) \big\} & \text{if $x=-y\not=0$} \\ \emptyset & \text{otherwise.} \end{cases} \end{align*} \end{example} \section{Reduced orbifolds, groupoids, and pseudogroups}\label{sec_redorb} This section has a preliminary character. It recalls definitions and results concerning reduced orbifolds and groupoids. \subsection{Reduced orbifolds} We stick to the definition of reduced orbifolds given by Haefliger \cite{Haefliger_orbifold}. \begin{defi}\label{orb_atlas2} Let $Q$ be a topological space. Let $n\in \N_0$. A \textit{reduced orbifold chart} of dimension $n$ on $Q$ is a triple $(V,G,\varphi)$ where $V$ is an open connected $n$--manifold, $G$ is a finite subgroup of $\Diff(V)$, and $\varphi\colon V\to Q$ is a map with open image $\varphi(V)$ that induces a homeomorphism from $G\backslash V$ to $\varphi(V)$. In this case, $(V,G,\varphi)$ is said to \textit{uniformize} $\varphi(V)$. Two reduced orbifold charts $(V,G,\varphi)$, $(W,H,\psi)$ on $Q$ are called \textit{compatible} if for each pair $(x,y)\in V\times W$ with $\varphi(x) = \psi(y)$ there are open connected neighborhoods $\widetilde V$ of $x$ and $\widetilde W$ of $y$ and a diffeomorphism $h\colon \widetilde V\to \widetilde W$ with $\psi\circ h = \varphi\vert_{\widetilde V}$. The map $h$ is called a \textit{change of charts}. 
A \textit{reduced orbifold atlas} of dimension $n$ on $Q$ is a collection of pairwise compatible reduced orbifold charts \[ \mathcal V \mathrel{\mathop:}= \{ (V_i, G_i, \varphi_i) \mid i\in I\} \] of dimension $n$ on $Q$ such that $\bigcup_{i\in I} \varphi_i(V_i) = Q$. Two reduced orbifold atlases are \textit{equivalent} if their union is a reduced orbifold atlas. A \textit{reduced orbifold structure} of dimension $n$ on $Q$ is a (\wrt in\-clu\-sion) maximal reduced orbifold atlas of dimension $n$ on $Q$, or equivalently, an equivalence class of reduced orbifold atlases of dimension $n$ on $Q$. A \textit{reduced orbifold} of dimension $n$ is a pair $(Q,\mathcal U)$ where $Q$ is a second-countable Hausdorff space and $\mathcal U$ is a reduced orbifold structure of dimension $n$ on $Q$. Let $\mc U$ be a reduced orbifold structure on $Q$. Each reduced orbifold atlas $\mc V$ in $\mc U$ (hence either $\mc V \subseteq \mc U$ from the point of view that $\mc U$ is a maximal reduced orbifold atlas, or $\mc V \in \mc U$ if one interprets $\mc U$ as an equivalence class) is called a \textit{representative} of $\mc U$ or a \textit{reduced orbifold atlas of} $(Q,\mc U)$. \end{defi} Since we are considering reduced orbifolds only, we omit the term ``reduced'' from now on. The neighborhoods $\widetilde V$ and $\widetilde W$ and the diffeomorphism $h$ in Definition~\ref{orb_atlas2} can always be chosen in such a way that $h(x)=y$. Moreover, $\widetilde V$ may be assumed to be open $G$--stable. In this case, $\widetilde W$ is open $H$--stable by \cite[Proposition~2.12(i)]{Moerdijk_Mrcun} (note that the notion of orbifolds used by Moerdijk and Mr{\v{c}}un in \cite{Moerdijk_Mrcun} is equivalent to the one from above). Let $M$ be a manifold and $G$ a subgroup of $\Diff(M)$. A subset $S$ of $M$ is called \textit{$G$--stable} if it is connected and if for each $g\in G$ we either have $gS=S$ or $gS\cap S=\emptyset$.
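The following elementary example, included here for illustration, spells out $G$--stability in the simplest nontrivial case. \begin{example} Let $M = (-1,1)$ and $G = \{\pm\id\} \subseteq \Diff(M)$, and let $0<a<b<1$. The subset $S = (-a,a)$ is $G$--stable with $G_S = G$, since $(-\id)S = S$. The subset $S = (a,b)$ is $G$--stable with $G_S = \{\id\}$, since $(-\id)S = (-b,-a)$ is disjoint from $S$. In contrast, $S = (-a,b)$ is not $G$--stable: here $(-\id)S = (-b,a)$ intersects $S$ in $(-a,a)$ without being equal to $S$. \end{example}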
\begin{defi} Let $(V,G,\varphi)$, $(W,H,\psi)$ be orbifold charts on the topological space $Q$. Then an \textit{open embedding} $\mu\colon (V,G,\varphi) \to (W,H,\psi)$ between these two orbifold charts is an open embedding $\mu\colon V\to W$ between manifolds which satisfies $\psi\circ\mu = \varphi$. If, in addition, $\mu$ is a diffeomorphism between $V$ and $W$, then $\mu$ is called an \textit{isomorphism} from $(V,G,\varphi)$ to $(W,H,\psi)$. Suppose that $S$ is an open $G$--stable subset of $V$ and let $G_S \mathrel{\mathop:}= \{g\in G\mid gS=S\}$ denote the \textit{isotropy group} of $S$. Then $(S,G_S,\varphi\vert_S)$ is an orbifold chart on $Q$, the \textit{restriction} of $(V,G,\varphi)$ to $S$. \end{defi} \begin{remark}\label{groupiso} Suppose that $\mu\colon (V,G,\varphi) \to (W,H,\psi)$ is an open embedding. In \cite[Proposition~2.12(i)]{Moerdijk_Mrcun} it is shown that $\mu(V)$ is an open $H$--stable subset of $W$, and moreover that there is a unique group isomorphism $\overline\mu \colon G \to H_{\mu(V)}$ for which $\mu(gx) = \overline\mu(g)\mu(x)$ for $g\in G$, $x\in V$. \end{remark} In the following example we construct two orbifolds with the same underlying topological space. These orbifolds are particularly simple since both orbifold structures have one-chart representatives. Despite their simplicity they serve as motivating examples for several definitions in this manuscript. \begin{example}\label{notcompatible} Let $Q\mathrel{\mathop:}= [0,1)$ be endowed with the topology induced from $\R$. The map \[ f\colon \left\{ \begin{array}{ccc} Q & \to & Q \\ x & \mapsto & x^2 \end{array}\right. \] is a homeomorphism. Further, the map $\pr\colon (-1,1) \to [0,1)$, $x\mapsto |x|$, induces a homeomorphism ${\{\pm\id \}}\backslash (-1,1) \to Q$. Then \[ V_1\mathrel{\mathop:}= \big( (-1,1), \{\pm \id\}, \pr \big) \quad\text{and}\quad V_2\mathrel{\mathop:}= \big( (-1,1), \{\pm\id\}, f\circ\pr\big) \] are two orbifold charts on $Q$.
We claim that these two orbifold charts are not compatible. To see this, assume for contradiction that they are compatible. Since $f\circ\pr(0) = 0 = \pr(0)$, there exist open connected neighborhoods $\widetilde V_1$, $\widetilde V_2$ of $0$ in $(-1,1)$ and a diffeomorphism $h\colon \widetilde V_2 \to \widetilde V_1$ such that $\pr \circ h = f\circ\pr\vert_{\widetilde V_2}$. Hence for each $x\in \widetilde V_2$ we have $h(x) \in \{\pm x^2\}$. Since $h$ is continuous, it must be one of the four maps \begin{align*} h_1(x) & \mathrel{\mathop:}= x^2 & h_2(x) & \mathrel{\mathop:}= -x^2 \\ h_3(x) & \mathrel{\mathop:}= \begin{cases} x^2 & x\geq 0 \\ -x^2 & x\leq 0\end{cases} & h_4(x) & \mathrel{\mathop:}= \begin{cases} -x^2 & x\geq 0 \\ x^2 & x\leq 0, \end{cases} \end{align*} neither of which is a diffeomorphism (each has vanishing derivative at $0$). This gives the desired contradiction. Let $\mathcal U_1$ be the orbifold structure on $Q$ generated by $V_1$, and $\mathcal U_2$ be the one generated by $V_2$. \end{example} \subsection{The orbifold category}\label{redorbcat} Now we can define the category of reduced orbifolds. \begin{defi}\label{defredorbcat} The category $\Orbmap$ of reduced orbifolds is defined as follows: Its class of objects is the class of orbifolds. For two orbifolds $(Q,\mc U)$ and $(Q',\mc U')$, the morphisms (\textit{orbifold maps}) from $(Q,\mathcal U)$ to $(Q',\mathcal U')$ are the equivalence classes $[\hat f]$ of all charted orbifold maps $\hat f \in \Orbmap(\mathcal V, \mathcal V')$ where $\mathcal V$ is any representative of $\mathcal U$, and $\mathcal V'$ is any representative of $\mathcal U'$, that is \begin{align*} \Morph\big( (Q,\mathcal U), (Q',\mathcal U') \big) \mathrel{\mathop:}= \left\{ \big[\hat f\big] \left\vert\ \hat f\in \Orbmap(\mathcal V, \mathcal V'),\ \text{$\mathcal V$ repr.\@ of $\mathcal U$,\ $\mathcal V'$ repr.\@ of $\mathcal U'$} \right.\right\}. \end{align*} We now describe the composition in $\Orbmap$.
For this let \[ [\hat f] \in \Morph\big( (Q,\mathcal U), (Q',\mathcal U')\big)\quad\text{and}\quad[\hat g] \in \Morph\big( (Q',\mathcal U'), (Q'',\mathcal U'')\big) \] be orbifold maps. Choose representatives $\hat f\in \Orbmap(\mathcal V, \mathcal V')$ of $[\hat f]$ and $\hat g\in \Orbmap(\mathcal W',\mathcal W'')$ of $[\hat g]$. Then find representatives $\mathcal K$, $\mathcal K'$, $\mathcal K''$ of $\mathcal U$, $\mathcal U'$, $\mathcal U''$, \resp, and lifts of the identity $\varepsilon\in \Orbmap(\mathcal K, \mathcal V)$, $\varepsilon'_1\in \Orbmap(\mathcal K',\mathcal V')$, $\varepsilon'_2\in \Orbmap(\mathcal K',\mathcal W')$, $\varepsilon'' \in \Orbmap(\mathcal K'', \mathcal W'')$ and two charted orbifold maps $\hat h \in \Orbmap(\mathcal K, \mathcal K')$, $\hat k \in \Orbmap(\mathcal K',\mathcal K'')$ such that the diagram \[ \xymatrix{ & \mathcal V \ar[r]^{\hat f} & \mathcal V' && \mathcal W' \ar[r]^{\hat g} & \mathcal W'' \\ \mathcal K \ar[ur]^{\varepsilon} \ar[rrr]^{\hat h} &&& \mathcal K' \ar[ul]_{\varepsilon'_1} \ar[ur]^{\varepsilon'_2} \ar[rrr]^{\hat k} &&& \mathcal K'' \ar[ul]_{\varepsilon''} } \] commutes. The composition of $[\hat g]$ and $[\hat f]$ is defined to be \[ [\hat g] \circ [\hat f] \mathrel{\mathop:}= [ \hat k \circ \hat h].\] \end{defi} The following lemma shows that this composition is always possible, and Proposition~\ref{compositiongood} below shows that it is well-defined. \begin{lemma} \label{compwd} Let $(Q,\mathcal U)$, $(Q',\mathcal U')$ and $(Q'',\mathcal U'')$ be orbifolds. Further let $\mathcal V$ be a representative of $\mathcal U$, $\mathcal V'$ and $\mathcal W'$ be representatives of $\mathcal U'$, and $\mathcal W''$ a representative of $\mathcal U''$. Suppose that $\hat f \in \Orbmap(\mathcal V,\mathcal V')$ and $\hat g\in\Orbmap(\mathcal W',\mathcal W'')$.
Then there exist representatives $\mathcal K$ of $\mathcal U$, $\mathcal K'$ of $\mathcal U'$, $\mathcal K''$ of $\mathcal U''$, lifts of the respective identities $\varepsilon \in \Orbmap(\mathcal K,\mathcal V)$, $\eta_1 \in \Orbmap(\mathcal K',\mathcal V')$, $\eta_2\in \Orbmap(\mathcal K',\mathcal W')$, $\delta\in \Orbmap(\mathcal K'',\mathcal W'')$, and charted orbifold maps $\hat h\in \Orbmap(\mathcal K,\mathcal K')$, $\hat k\in \Orbmap(\mathcal K',\mathcal K'')$ such that the diagram \[ \xymatrix{ & \mathcal V \ar[r]^{\hat f} & \mathcal V' && \mathcal W' \ar[r]^{\hat g} & \mathcal W'' \\ \mathcal K \ar[rrr]^{\hat h} \ar[ur]^{\varepsilon} &&& \mathcal K' \ar[rrr]^{\hat k} \ar[ul]_{\eta_1} \ar[ur]^{\eta_2} &&& \mathcal K'' \ar[ul]_{\delta} } \] commutes. \end{lemma} \begin{proof} Let $\hat f = \big(f, \{ \tilde f_i\}_{i\in I}, [P_f,\nu_f] \big)$ and $\hat g = \big(g, \{ \tilde g_j\}_{j\in J}, [P_g,\nu_g] \big)$. Suppose that \begin{align*} \mathcal V & = \{ (V_i, G_i, \pi_i) \mid i\in I\}, \text{ indexed by $I$,} \\ \mathcal V' & = \{ (V'_c, G'_c, \pi'_c) \mid c\in C\}, \text{ indexed by $C$,} \\ \mathcal W' & = \{ (W'_j, H'_j, \psi'_j) \mid j\in J\}, \text{ indexed by $J$,} \\ \mathcal W'' & = \{ (W''_d, H''_d, \psi''_d) \mid d\in D \}, \text{ indexed by $D$.} \end{align*} Let $\tau\colon I\to C$ be the map such that for each $i\in I$, $\tilde f_i$ is a local lift of $f$ w.r.t.\@ $(V_i,G_i,\pi_i)$ and $(V'_{\tau(i)}, G'_{\tau(i)}, \pi'_{\tau(i)})$, and $\nu\colon J\to D$ the map such that for each $j\in J$, $\tilde g_j$ is a local lift of $g$ w.r.t.\@ $(W'_j, H'_j, \psi'_j)$ and $(W''_{\nu(j)}, H''_{\nu(j)}, \psi''_{\nu(j)})$. 
By Lemma~\ref{onlyinduced} it suffices to find \begin{itemize} \item a representative $\mathcal K = \{ (K_a, L_a, \chi_a) \mid a\in A\}$ of $\mathcal U$, indexed by $A$, \item a representative $\mathcal K' = \{ (K'_b, L'_b, \chi'_b) \mid b\in B\}$ of $\mathcal U'$, indexed by $B$, \item a subset $\{ (K''_b, L''_b, \chi''_b) \mid b\in B\}$ of $\mc U''$, indexed by $B$, \item a map $\alpha\colon A\to I$, \item an injective map $\beta\colon A\to B$, \item for each $a\in A$, an open embedding \[ \lambda_a \colon (K_a, L_a, \chi_a) \to (V_{\alpha(a)}, G_{\alpha(a)}, \pi_{\alpha(a)}) \] and an open embedding \[ \mu_a \colon (K'_{\beta(a)}, L'_{\beta(a)}, \chi'_{\beta(a)}) \to (V'_{\tau(\alpha(a))}, G'_{\tau(\alpha(a))}, \pi'_{\tau(\alpha(a))})\] such that \[ \tilde f_{\alpha(a)}\big( \lambda_a(K_a) \big)\subseteq \mu_a (K'_{\beta(a)}), \] \item a map $\gamma\colon B \to J$, \item for each $b\in B$, an open embedding \[ \varrho_b \colon (K'_b, L'_b, \chi'_b) \to (W'_{\gamma(b)}, H'_{\gamma(b)}, \psi'_{\gamma(b)}) \] and an open embedding \[ \sigma_b\colon (K''_b, L''_b, \chi''_b) \to (W''_{\nu(\gamma(b))}, H''_{\nu(\gamma(b))}, \psi''_{\nu(\gamma(b))}) \] such that \[ \tilde g_{\gamma(b)}\big( \varrho_b(K'_b) \big) \subseteq \sigma_b(K''_b).\] \end{itemize} Let $q\in Q$ and set $r\mathrel{\mathop:}= f(q)$. We fix $i\in I$ and $j\in J$ such that $q\in \pi_i(V_i)$ and $r\in \psi'_j(W'_j)$. Further we choose $v'\in V'_{\tau(i)}$ and $w'\in W'_j$ such that $\pi'_{\tau(i)}(v') = r = \psi'_j(w')$. By compatibility of orbifold charts we find a restriction $(K'_q, L'_q, \chi'_q)$ of $(V'_{\tau(i)}, G'_{\tau(i)}, \pi'_{\tau(i)})$ with $v'\in K'_q$ and an open embedding \[ \varrho_q\colon (K'_q, L'_q, \chi'_q) \to (W'_j, H'_j, \psi'_j). \] Since $\tilde f_i$ is continuous, there is a restriction $(K_q, L_q, \chi_q)$ of $(V_i, G_i, \pi_i)$ such that $q\in \chi_q(K_q)$ and $\tilde f_i(K_q) \subseteq K'_q$. 
We set \[ (K''_q, L''_q, \chi''_q) \mathrel{\mathop:}= (W''_j, H''_j, \psi''_j). \] We consider orbifold charts constructed for distinct $q$ to be distinct. Then we set \begin{align*} &A\mathrel{\mathop:}= Q, \quad &\alpha(q) &\mathrel{\mathop:}= i, \quad \lambda_q \mathrel{\mathop:}= \id, \quad \mu_q \mathrel{\mathop:}= \id, \\ &B \mathrel{\mathop:}= Q \sqcup Q'\setminus f(Q), \quad & \beta(q) &\mathrel{\mathop:}= q, \quad \gamma(q) \mathrel{\mathop:}= j, \quad \sigma_q \mathrel{\mathop:}= \id \quad\text{for $q\in Q$.} \end{align*} For $q' \in Q'\setminus f(Q)$ we fix $j\in J$ with $q' \in \psi'_j(W'_j)$ and set $\gamma(q') \mathrel{\mathop:}= j$. Further we set \[ (K'_{q'}, L'_{q'}, \chi'_{q'}) \mathrel{\mathop:}= (W'_j, H'_j, \psi'_j)\quad\text{and}\quad (K''_{q'}, L''_{q'}, \chi''_{q'}) \mathrel{\mathop:}= (W''_j, H''_j, \psi''_j). \] Again we consider orbifold charts built for distinct $q'$ to be distinct and to be distinct from all defined for some $q\in Q$, and define $\varrho_{q'} \mathrel{\mathop:}= \id$ and $\sigma_{q'} \mathrel{\mathop:}= \id$. Then all requirements are satisfied. \end{proof} \begin{prop}\label{compositiongood} The composition in $\Orbmap$ is well-defined. \end{prop} \begin{proof} We use the notation from the definition of the composition. We have to show that the composition of $[\hat g]$ and $[\hat f]$ neither depends on the choice of the induced orbifold maps $\hat h$ and $\hat k$ nor on the choice of the representatives of $[\hat f]$ and $[\hat g]$.
To prove independence of the choice of $\hat h$ and $\hat k$, suppose that we have two pairs $(\hat h_j, \hat k_j)$ of induced orbifold maps $\hat h_j \in \Orbmap(\mathcal K_j, \mathcal K'_j)$, $\hat k_j \in \Orbmap(\mathcal K'_j, \mathcal K''_j)$ ($j=1,2$) such that the diagram \[ \xymatrix{ \mathcal K_1 \ar[rrr]^{\hat h_1} \ar[dr] &&& \mathcal K'_1 \ar[rrr]^{\hat k_1} \ar[dl] \ar[dr] &&& \mathcal K''_1 \ar[dl] \\ & \mathcal V \ar[r]^{\hat f} & \mathcal V' && \mathcal W' \ar[r]^{\hat g} & \mathcal W'' \\ \mathcal K_2 \ar[rrr]^{\hat h_2} \ar[ur] &&& \mathcal K'_2 \ar[rrr]^{\hat k_2} \ar[ul] \ar[ur] &&& \mathcal K''_2 \ar[ul] } \] commutes. The non-horizontal maps are lifts of identity. Lemma~\ref{fortrans} shows the existence of representatives $\mathcal H$ of $\mathcal U$, $\mathcal H', \mathcal I'$ of $\mathcal U'$, $\mathcal I''$ of $\mathcal U''$, and charted orbifold maps $\hat h_3\in \Orbmap(\mathcal H, \mathcal H')$, $\hat k_3\in \Orbmap(\mathcal I',\mathcal I'')$, and appropriate lifts of identity such that the diagrams \[ \xymatrix{ & \mathcal K_1 \ar[r]^{\hat h_1} & \mathcal K'_1 &&& \mathcal K'_1 \ar[r]^{\hat k_1} & \mathcal K''_1 \\ \mathcal H \ar[rrr]^{\hat h_3} \ar[ur] \ar[dr] &&& \mathcal H' \ar[ul] \ar[dl] & \mathcal I' \ar[rrr]^{\hat k_3} \ar[ur] \ar[dr] &&& \mathcal I'' \ar[ul] \ar[dl] \\ & \mathcal K_2 \ar[r]^{\hat h_2} & \mathcal K'_2 &&& \mathcal K'_2 \ar[r]^{\hat k_2} & \mathcal K''_2 } \] commute. By Lemma~\ref{compwd} we find representatives $\mathcal K, \mathcal K', \mathcal K''$ of $\mathcal U, \mathcal U', \mathcal U''$, resp.\@, charted orbifold maps $\hat h\in \Orbmap(\mathcal K, \mathcal K')$, $\hat k\in \Orbmap(\mathcal K',\mathcal K'')$, and appropriate lifts of identity such that \[ \xymatrix{ & \mathcal H \ar[r]^{\hat h_3} & \mathcal H' && \mathcal I' \ar[r]^{\hat k_3} & \mathcal I'' \\ \mathcal K \ar[rrr]^{\hat h} \ar[ur] &&& \mathcal K' \ar[rrr]^{\hat k} \ar[ur] \ar[ul] &&& \mathcal K'' \ar[ul] } \] commutes. 
Hence, altogether we have the commutative diagram \[ \xymatrix{ \mathcal K_1 \ar[r]^{\hat h_1} & \mathcal K'_1 \ar[r]^{\hat k_1} & \mathcal K''_1 \\ \mathcal K \ar[r]^{\hat h} \ar[u] \ar[d] & \mathcal K' \ar[r]^{\hat k} \ar[u] \ar[d] & \mathcal K'' \ar[u] \ar[d] \\ \mathcal K_2 \ar[r]^{\hat h_2} & \mathcal K'_2 \ar[r]^{\hat k_2} & \mathcal K''_2 } \] which shows that $\hat k_1 \circ \hat h_1$ and $\hat k_2\circ\hat h_2$ are equivalent. For the proof of the independence of the choices of the representatives of $[\hat f]$ and $[\hat g]$, let $\hat f_1 \in \Orbmap(\mathcal V_1,\mathcal V'_1)$, $\hat f_2 \in \Orbmap(\mathcal V_2,\mathcal V'_2)$ be representatives of $[\hat f]$, and $\hat g_1 \in \Orbmap(\mathcal W'_1, \mathcal W''_1)$, $\hat g_2\in \Orbmap(\mathcal W'_2,\mathcal W''_2)$ be representatives of $[\hat g]$. Further, for $j=1,2$, let $\hat h_j \in \Orbmap(\mathcal K_j, \mathcal K'_j)$ be induced by $\hat f_j$, and $\hat k_j \in \Orbmap(\mathcal K'_j, \mathcal K''_j)$ be induced by $\hat g_j$. Since $\hat f_1$ and $\hat f_2$ are equivalent, we find representatives $\mathcal V$, $\mathcal V'$ of $\mathcal U$, $\mathcal U'$, resp.\@, a charted orbifold map $\hat f\in \Orbmap(\mathcal V,\mathcal V')$ and appropriate lifts of identities, and analogously for $\hat g_1$ and $\hat g_2$, such that the diagrams \[ \xymatrix{ & \mathcal V_1 \ar[r]^{\hat f_1} & \mathcal V'_1 &&& \mathcal W'_1 \ar[r]^{\hat g_1} & \mathcal W''_1 \\ \mathcal V \ar[rrr]^{\hat f} \ar[ur] \ar[dr] &&& \mathcal V' \ar[ul] \ar[dl] & \mathcal W' \ar[rrr]^{\hat g} \ar[ur] \ar[dr] &&& \mathcal W'' \ar[ul] \ar[dl] \\ & \mathcal V_2 \ar[r]^{\hat f_2} & \mathcal V'_2 &&& \mathcal W'_2 \ar[r]^{\hat g_2} & \mathcal W''_2 } \] commute. 
Lemma~\ref{compwd} yields the existence of $\hat h\in \Orbmap(\mathcal K, \mathcal K')$ and $\hat k\in \Orbmap(\mathcal K', \mathcal K'')$ and appropriate lifts of identities such that \[ \xymatrix{ & \mathcal V \ar[r]^{\hat f} & \mathcal V' && \mathcal W' \ar[r]^{\hat g} & \mathcal W'' \\ \mathcal K \ar[rrr]^{\hat h} \ar[ur] &&& \mathcal K' \ar[rrr]^{\hat k} \ar[ul] \ar[ur] &&& \mathcal K'' \ar[ul] } \] commutes. Since $\hat h$ is induced by $\hat f_1$ and by $\hat f_2$, and likewise, $\hat k$ is induced by $\hat g_1$ and by $\hat g_2$, we conclude as above that $\hat k_1 \circ \hat h_1$ and $\hat k_2\circ \hat h_2$ are both equivalent to $\hat k\circ\hat h$. This yields that the composition map is well-defined. \end{proof} We end this section with a discussion of the equivalence class represented by a lift of identity. The following proposition shows that it is precisely the class of all lifts of identity of the considered orbifold. This justifies the notion ``identity morphism'' in Definition~\ref{liftiddef}. \begin{prop}\label{equivclassid} Let $(Q,\mathcal U)$ be an orbifold and $\varepsilon$ a lift of $\id_{(Q,\mc U)}$. Then the equivalence class $[\varepsilon]$ of $\varepsilon$ consists precisely of all lifts of $\id_{(Q,\mc U)}$. \end{prop} \begin{proof} Let $\varepsilon_1 \in \Orbmap(\mathcal V_1, \mathcal W_1)$ and $\varepsilon_2 \in \Orbmap(\mathcal V_2, \mathcal W_2)$ be two lifts of $\id_{(Q,\mc U)}$. Propositions~\ref{induceslifts} and \ref{extending} imply that there is a representative $\mc V$ of $\mc U$ such that $\varepsilon_1$ and $\varepsilon_2$ both induce the orbifold map \[ \widehat \id_Q \mathrel{\mathop:}= ( \id_Q, \{ \id_{V_i} \}_{i\in I}, [R,\sigma] ) \] with $(\mathcal V,\mc V)$. Thus, any two lifts of $\id_{(Q,\mc U)}$ are equivalent. Let now $\hat f$ be a charted orbifold map which is equivalent to $\varepsilon$. W.l.o.g.\@ we may assume that $\varepsilon=\widehat\id_Q$.
To fix notation let \begin{align*} \mathcal V & = \{ (V_i, G_i, \pi_i) \mid i\in I\}, \text{ indexed by $I$,} \\ \mathcal K_1 & = \{ (K_{1,a}, L_{1,a}, \chi_{1,a} ) \mid a\in A \}, \text{ indexed by $A$,} \\ \mathcal K_2 & = \{ (K_{2,b}, L_{2,b}, \chi_{2,b} ) \mid b\in B \}, \text{ indexed by $B$,} \\ \mathcal W_1 & = \{ (W_{1,j}, H_{1,j}, \psi_{1,j} ) \mid j\in J \}, \text{ indexed by $J$,} \\ \mathcal W_2 & = \{ (W_{2,k}, H_{2,k}, \psi_{2,k} ) \mid k\in K \}, \text{ indexed by $K$,} \end{align*} be representatives of $\mathcal U$. Let \[ \hat f = \big( f, \{ \tilde f_j\}_{j\in J}, [P_f,\nu_f] \big) \in \Orbmap(\mc W_1,\mc W_2). \] Suppose that \[ \hat g = \big( g, \{ \tilde g_a\}_{a\in A}, [P_g,\nu_g] \big) \in \Orbmap(\mc K_1,\mc K_2) \] is a charted orbifold map and \begin{align*} \varepsilon_1 & = \big( \id_Q, \{ \lambda_{1,a} \}_{a\in A}, [P_1,\nu_1] \big) \in \Orbmap(\mc K_1,\mc V) \\ \varepsilon_2 & = \big( \id_Q, \{ \lambda_{2,a} \}_{a\in A}, [P_2,\nu_2] \big) \in \Orbmap(\mc K_1,\mc W_1) \\ \delta_1 & = \big( \id_Q, \{ \mu_{1,b} \}_{b\in B}, [R_1,\sigma_1] \big) \in \Orbmap(\mc K_2, \mc V) \\ \delta_2 & = \big( \id_Q, \{ \mu_{2,b} \}_{b\in B}, [R_2, \sigma_2] \big) \in \Orbmap(\mc K_2,\mc W_2) \end{align*} are lifts of $\id_{(Q,\mc U)}$ such that the diagram (which shows that $\hat f$ and $\widehat\id_Q$ are equivalent) \[ \xymatrix{ & \mathcal V \ar[r]^{\widehat \id_Q} & \mathcal V \\ \mathcal K_1 \ar[ur]^{\varepsilon_1} \ar[dr]_{\varepsilon_2} \ar[rrr]^{\hat g} &&& \mathcal K_2 \ar[ul]_{\delta_1} \ar[dl]^{\delta_2} \\ & \mathcal W_1 \ar[r]^{\hat f} & \mathcal W_2 } \] commutes. Clearly, $g=\id_Q$ and hence $f=\id_Q$. Let $\alpha \colon A\to I$, $\beta\colon A \to J$, $\gamma\colon A\to B$, $\delta \colon B \to I$, $\eta\colon B \to K$ and $\zeta\colon J \to K$ be the induced maps on the index sets as, e.g., in Construction~\ref{constr_comp}.
For each $a\in A$, we have \[ \id_{V_{\alpha(a)}}\circ \lambda_{1,a} = \mu_{1,\gamma(a)}\circ \tilde g_a.\] Since $\id_{V_{\alpha(a)}}$, $\lambda_{1,a}$ and $\mu_{1,\gamma(a)}$ are local diffeomorphisms, so is $\tilde g_a$. Now \[ \tilde f_{\beta(a)} \circ \lambda_{2,a} = \mu_{2,\gamma(a)} \circ \tilde g_a\] for each $a\in A$. Hence $\tilde f_{\beta(a)}$ is a local diffeomorphism. Lemma~\ref{welldefinedll} implies that $\tilde f_j$ is a local diffeomorphism for each $j\in J$. Therefore, $\hat f$ is a lift of $\id_{(Q,\mc U)}$. \end{proof}
\section{Introduction} Silica (SiO$_2$), a key constituent of Earth, terrestrial (``rocky'') and even giant planets, is an important compound for theory, basic science and technology, including as a laboratory standard for high-energy-density (HED) experiments. Its response to dynamic compression helps to determine i) how planets form through giant impacts, and ii) the high pressure–temperature material properties that control, for example, how the deep interior of planets evolves. Starting from $\alpha$-quartz at ambient condition, SiO$_2$ goes through a series of phase transitions as pressure increases~\cite{Gillan2006}: first to coesite at 2 GPa, then to stishovite at 8 GPa, a CaCl$_2$ structure at 50 GPa, an $\alpha$-PbO$_2$ structure at 100 GPa, and a pyrite-type structure at 200 GPa. At higher pressures (700~GPa), simulations predict a cotunnite structure (if the temperature exceeds $\sim$1000 K), or a Fe$_2$P phase, with the latter being stable to 2000 GPa (2 TPa)~\cite{Tsuchiya2011}. In addition to the thermodynamically stable phases, a number of metastable silica polymorphs and their transformations have been studied~\cite{DUBROVINSKY2004231,Cernok2018ncomm,Shelton2018}. The dynamic response of fused silica~\cite{Tracy2018} has recently also been measured with {\it in situ} x-ray diffraction. Developments in dynamic experiments over the past two decades have provided important constraints on the high-temperature phase diagram and properties of SiO$_2$ at 100 GPa and above. Hicks {\it et al.}~\cite{HicksPRL2006} measured temperatures and reflectivities along the Hugoniots of $\alpha$-quartz and fused silica from near the melting curve up to 1~TPa, and reported specific heat capacities that exceed the Dulong--Petit limit. Kraus {\it et al.}~\cite{Kraus2012} performed shock-and-release experiments, and set criteria for vaporization of $\alpha$-quartz.
Millot {\it et al.}~\cite{MillotSci2015} conducted laser shock experiments on stishovite crystals and determined the temperature-pressure-density equation of state (EOS), electronic conductivity, and melting temperature along the Hugoniot of stishovite. McCoy {\it et al.}~\cite{McCoy2016b} used an unsteady wave method for measuring the sound velocity of fused silica shocked up to 1.1~TPa, which relies on an analytic release model for the sound velocity of the $\alpha$-quartz reference. Li {\it et al.}~\cite{Li2018} developed a lateral release approach to continuously measure the sound velocity along the Hugoniot of $\alpha$-quartz, and calculated its Gr\"uneisen parameters to 1.45~TPa. Recently, Guarguaglini {\it et al.}~\cite{Guarguaglini2021} designed double-shock experiments of $\alpha$-quartz and explored the EOS and two-color reflectivity of silica in the temperature-pressure regime between the Hugoniot curves of $\alpha$-quartz and stishovite. To date, magnetically and laser-driven experiments combined with first-principles molecular dynamics simulations~\cite{KD2013_aquartz,QiPoP2015,Knudson2013aerogel,McCoy2016,Marshall2019QuartzMo,Root2019fusedsilica,SjostromAIP2017,MillotSci2015} have produced a large number of Hugoniot data, up to 6.2~TPa for $\alpha$-quartz, 0.2~TPa for silica aerogels, 1.6~TPa for fused silica, and 2.5~TPa for stishovite. These results have established $\alpha$-quartz and fused silica as standards for impedance matching at up to 1.2~TPa. Despite this progress, questions remain about Hugoniot temperatures~\cite{Falk2014,SjostromAIP2017} and reflectivity~\cite{QiPoP2015,ScipioniPNAS2017} estimated in laser shock experiments, as well as about changes in the structure of silica liquids~\cite{HicksPRL2006,Kraus2012,ScipioniPNAS2017,Green2018}.
Based on anomalies (i.e., a minimum) in the observed heat capacity, Hicks {\it et al.}~\cite{HicksPRL2006} proposed that a temperature-induced bonded-to-atomic transition occurs near 37,000 K in liquid silica, with little variation up to pressures of about 1~TPa. In contrast, the heat-capacity variations were interpreted as non-dissociative changes in atomic and electronic structure in a recent computational study~\cite{ScipioniPNAS2017}. Despite overall agreement between previous theoretical results and experimentally measured Hugoniots in the liquid regime of silica~\cite{QiPoP2015,ScipioniPNAS2017,SjostromAIP2017,Root2019fusedsilica}, the atomistic and electronic structure and their changes with temperature and pressure have not been addressed explicitly. Understanding liquid structural changes at extreme conditions, particularly upon bonded-to-atomic transitions (e.g., the molecular-to-atomic transition), not only helps to clarify phase transitions and metallization that generally occur in materials such as hydrogen~\cite{Morales2010,Rillo2019,Celliers2018,Ohta2015,Zaghoo2016,Jiang2020H,McWilliams2016,Hinz2020PRR} and nitrogen~\cite{Nellis1991,Weck2017,Jiang2018,Kim2022}, but can also shed light on material transport properties (e.g., electrical and thermal conductivity) critical to modeling the dynamics of magma oceans and magnetic field generation in early Earth and super-Earth exoplanets~\cite{MillotSci2015,ScipioniPNAS2017,Soubiran2018,Stixrude2020}, as well as for numerical simulations of giant impacts~\cite{Melosh2007,Stewart2020,Kraus2012,Green2018}. The goal of this work is to provide in-depth analysis and theoretical insights about the structural changes and the nature of the bonded-to-atomic transition in liquid silica at extreme conditions by way of first-principles quantum simulations.
The manuscript is outlined as follows: Sec.~\ref{sec:method} provides the computational details; Sec.~\ref{sec:result} shows our results that elucidate the transition from various perspectives; and Sec.~\ref{sec:conclusions} discusses questions of interest for future studies. \section{Methods}\label{sec:method} We conducted molecular dynamics (MD) simulations of silica along five different isochores based on Kohn--Sham density functional theory (DFT)~\cite{ks1965}. The density and temperature ranges that we have considered are between 2.65--7.95~g/cm$^3$ (1--3 times the ambient density of $\alpha$-quartz) and 5000--100,000~K. The corresponding temperature and pressure conditions are around those experimentally probed along the Hugoniots of $\alpha$-quartz and fused silica~\cite{HicksPRL2006}. The simulation cells contained 8 or 24 formula units (f.u.) of SiO$_2$, except for certain cases (indicated with yellow pentagons in Fig.~\ref{fig:hugtrhop}(a)) where we used 64-f.u. (192-atom) cells. By using a Nos\'{e} thermostat~\cite{Nose1984}, we generated a canonical (constant-$NVT$, where $N$, $V$, and $T$ are respectively the number of atoms, volume, and the equilibrium temperature of the system) ensemble at each temperature-density condition of interest that typically consisted of a DFT-MD trajectory of at least 2000 steps for the 8--24~f.u. and 5000--25,000 steps for the 64-f.u. simulations (timestep of 0.2--0.5~fs). When analyzing the EOS, we discarded the initial 20\% of each MD trajectory to ensure the reported EOS represents the system in thermodynamic equilibrium.
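This equilibration cut can be sketched in a few lines (a minimal illustration; the 20\% fraction follows the text, while the function and argument names are our own):

```python
# Discard the initial transient of an MD observable and average the rest,
# mirroring the "drop the first 20% of each trajectory" step in the text.
# Minimal sketch; names are our assumptions.

def equilibrium_mean(samples, discard_frac=0.2):
    """Mean of `samples` after dropping the leading `discard_frac` portion."""
    start = int(len(samples) * discard_frac)
    tail = samples[start:]
    return sum(tail) / len(tail)
```

In a production workflow the same cut would be applied to every thermodynamic observable (energy, pressure, stress) before tabulating the EOS point.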
Ion kinetic contributions to the EOS are manually included by following an ideal gas formula (i.e., internal energy $E_\textrm{ion kin.}=3Nk_\textrm{B}T/2$ and pressure $P_\textrm{ion kin.}=Nk_\textrm{B}T/V$, where $k_\text{B}$ is the Boltzmann constant), while all other contributions (ion-ion, ion-electron, and electron-electron interactions and the electron kinetic term) are calculated explicitly in the Vienna \textit{Ab initio} Simulation Package ({\footnotesize VASP})~\cite{kresse96b}. Electrons are constrained to follow a Fermi-Dirac distribution with the temperature equal to that of the ions~\cite{mermin1965}. DFT calculations were done by using the projector augmented wave (PAW)~\cite{Blochl1994} method, plane-wave basis sets, and exchange-correlation (XC) functionals under the local density approximation (LDA)~\cite{Ceperley1980,Perdew81}, which are implemented in {\footnotesize VASP}. We use the hardest available PAW potentials with a core radius of 1.6 Bohr and four electrons treated as the valence for silicon, and a 1.1-Bohr core and six valence electrons for oxygen. We used the $\Gamma$ point to sample the Brillouin zone during the calculations. We cross-checked the calculations by using Perdew-Burke-Ernzerhof (PBE)-type or GW-type pseudopotentials and XC functionals based on the generalized gradient approximation (GGA, such as the PBE~\cite{Perdew96} or Armiento-Mattsson (AM05)~\cite{AM05a} types) at selected conditions, in order to determine the methodological error of our results. We use a denser 4$\times$4$\times$4 $k$-point mesh to check the finite-size effects on our results. \section{Results}\label{sec:result} \subsection{Equation of state, shock Hugoniot, and thermodynamic properties} \begin{figure}[ht] \centering \includegraphics[width=0.9\linewidth]{fig1.pdf} \caption{(a) Temperature-density and (b) pressure-density plots of the shock Hugoniot of SiO$_2$ in the initial form of fused silica (red), $\alpha$-quartz (blue) and stishovite (green).
In (a), black symbols denote the conditions of the first-principles simulations for the EOS (`+' and `x' represent 24-f.u. and 8-f.u. simulations, respectively). In (b), experimental Hugoniots from Refs.~\onlinecite{KD2013_aquartz,Root2019fusedsilica,Furnish2017_stishovite} are shown for comparison. Yellow pentagons denote the near-Hugoniot conditions (``hug00--05'') at which 192-atom simulations were performed.} \label{fig:hugtrhop} \end{figure} Based on conservation of mass, momentum, and energy across the shock front, the state of a material under steady shock is generally related to its initial state through the Rankine--Hugoniot equation $E-E_i+(P+P_i)(V-V_i)/2=0$, where $(E, P, V)$ denote the internal energy, pressure, and volume of the material in the shocked state and $(E_i, P_i, V_i)$ are the corresponding values at the initial unshocked state. This equation defines the locus of states that the material can reach when being shocked, which is known as the Hugoniot. Numerically, one way to determine the Hugoniot is by starting from the EOS calculations on a temperature-density grid. In this work, the initial states are estimated by starting from DFT calculations at the desired densities (2.20, 2.65, and 4.29~g/cm$^3$ for fused silica, $\alpha$-quartz, and stishovite, respectively)~\footnote{We perform ground-state DFT calculations under LDA for $\alpha$-quartz at the fixed density of 2.65~g/cm$^3$ and use the resultant values in internal energy (-26.089~eV/SiO$_2$) and pressure (-0.825 GPa) for $E_i$ and $P_i$ when calculating the Hugoniot of $\alpha$-quartz. We use the same $E_i$ value and $P_i=0$ to respectively approximate the initial internal energy and pressure of fused silica at 2.2~g/cm$^3$. 
This is a reasonable guess because $\alpha$-quartz and fused silica are common polymorphs of SiO$_2$ at ambient condition ($P\approx0$), indicating that the minima of their respective $E(V)$ cold curves are similar to each other (so that their common tangent, if it exists, has zero slope). We have also tried a slightly higher value (by 20 meV/SiO$_2$) for $E_i$ of fused silica to approximate possible differences from other sources (e.g., vibration and nuclear quantum effects), and the resultant Hugoniots remain similar. The good agreement with experimental Hugoniots (Fig.~\ref{fig:hugtrhop}(b)) also suggests that our initial conditions are reasonable for estimating the Hugoniots of liquid silica. For stishovite with the initial density of 4.29~g/cm$^3$, we use the same method to estimate its initial energy and note that we get a similar value (-26.079~eV/SiO$_2$) from zero-pressure DFT calculations; using this value as $E_i$ leaves the stishovite Hugoniot essentially unchanged (the difference is less than 0.2\%).}, then we consider each isotherm with temperature $T$ and fit the pressure and energy data along the isotherm as functions of density by using cubic splines~\footnote{We use at least three data points in each fitting.}, and the density $\rho$ at which the energy term $[E-E_i]$ equals the pressure term $[(P+P_i)(V_i-V)/2]$ defines the Hugoniot, which has definite values of $T$, $\rho$, $P$, and $E$. We also calculate the shock velocity $u_s$ and particle velocity $u_p$, relevant to shock experiments, by $u_s^2=\xi/\eta$ and $u_p^2=\xi\eta$, where $\xi=(P-P_i)/\rho_i$ and $\eta=1-\rho_i/\rho$.
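The isotherm procedure and the velocity relations above can be sketched numerically as follows. This is an illustrative sketch only: `P_of_rho` and `E_of_rho` stand in for the cubic-spline fits described in the text, the bisection solver assumes the supplied interval brackets a root, and all names and units are our own.

```python
# Solve the Rankine-Hugoniot condition along one isotherm and derive the
# shock and particle velocities. Per-unit-mass volumes V = 1/rho are used.

def hugoniot_density(P_of_rho, E_of_rho, rho_i, E_i, P_i, lo, hi, tol=1e-10):
    """Bisection on H(rho) = (E - E_i) - (P + P_i)*(V_i - V)/2 = 0."""
    def H(rho):
        V, V_i = 1.0 / rho, 1.0 / rho_i
        return (E_of_rho(rho) - E_i) - 0.5 * (P_of_rho(rho) + P_i) * (V_i - V)
    a, b = lo, hi          # assumed to bracket the root: H(a)*H(b) < 0
    fa = H(a)
    for _ in range(200):
        m = 0.5 * (a + b)
        fm = H(m)
        if abs(fm) < tol or (b - a) < tol:
            return m
        if fa * fm <= 0.0:
            b = m
        else:
            a, fa = m, fm
    return 0.5 * (a + b)

def shock_velocities(P, P_i, rho, rho_i):
    """u_s^2 = xi/eta, u_p^2 = xi*eta, with xi = (P-P_i)/rho_i, eta = 1-rho_i/rho."""
    xi = (P - P_i) / rho_i
    eta = 1.0 - rho_i / rho
    return (xi / eta) ** 0.5, (xi * eta) ** 0.5
```

A quick consistency check on the two velocity formulas is that $u_s u_p = \xi = (P-P_i)/\rho_i$, i.e., the momentum jump condition is recovered.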
This approach has been used previously for calculating the Hugoniot of several other materials~\cite{Zhang2017ch,Zhang2018ch,Zhang2018b,Zhang2019bn,Zhang2020b4c1,Zhang2020b4c2,Millot2020}, and was found to produce Hugoniots consistent with those from other computational methods, such as progressive determination by running a large number of EOS calculations around the Hugoniot curve~\cite{Shamp2017,lepape2013}. In order to cross-check the validity of the Hugoniot results based on the relatively sparse temperature-density grid, we have recalculated the Hugoniot of $\alpha$-quartz by performing 2D interpolation of the pressure and energy data as functions of $(T,\rho)$ and then determined the conditions at which the function $\mathcal{H}(\rho,T)=E-E_i+(P+P_i)(V-V_i)/2$ equals zero. We have also made tests by using a partial set of our EOS data (by excluding the 6.62~g/cm$^3$ isochore). The Hugoniots obtained from the different methods differ by no more than 3.5\%. Such small differences in the Hugoniot do not affect our comparisons with experiments or other discussions in the following sections. {\color{black}We have performed additional EOS calculations to 41,000~K along the 9.27-g/cm$^3$ isochore by using settings similar to the lower-density ones. Adding these data to our EOS table does not affect the calculated Hugoniots but extends our stishovite Hugoniot from 9500 to 27,400~K.} Figure~\ref{fig:hugtrhop} shows our shock Hugoniots on an EOS grid at 4.5--8.5~g/cm$^3$. The overall agreement with experimentally measured $P$--$\rho$ Hugoniots for fused silica, $\alpha$-quartz, and stishovite~\cite{KD2013_aquartz,Root2019fusedsilica,Furnish2017_stishovite} suggests our calculations and results on liquid silica are reliable. \begin{figure}[ht] \centering \includegraphics[width=0.9\linewidth]{fig2.pdf} \caption{(a) Internal energy $E(T)$ and (b) specific heat capacity $C_V(T)$ along four different isochores. In (a), `+' and `x' symbols denote results from 24-f.u. and 8-f.u.
simulations, respectively; $E(T)$ profiles along two different Hugoniots are shown for comparison. In (b), the minimum of each curve (used to define the condition of anomaly in heat capacity) is shown with a cross.} \label{fig:ecv} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=0.9\linewidth]{fig3.pdf} \caption{Density of states (DOS, solid curves) and Fermi occupancy (multiplied by a factor of 10, dashed curves) of electronic states from 192-atom-cell simulations of liquid silica at different near-Hugoniot conditions (hug01--03 as labeled in Fig.~\ref{fig:hugtrhop}). Results from a solidified structure (hug00 in Fig.~\ref{fig:hugtrhop}) are shown (grey curve and symbols) for comparison. All profiles are aligned at 0 eV (the highest occupied state for ``hug00'' or the Fermi level for other cases). A 2.5-eV bandgap exists in the solidified structure (evidently shown by the discontinuity in Fermi occupancy) and is gradually filled in the liquid states with increasing temperature. A smearing technique (with the broadening parameter equal to the corresponding temperature) was employed to ensure smoothness of the DOS results.} \label{fig:dos} \end{figure} In order to elucidate the bonded-to-atomic transition, we first examined the thermodynamic properties by plotting the internal energies and the heat capacities as functions of temperature. Figure~\ref{fig:ecv} shows the results along four different isochores (solid curves) and along the Hugoniots (dashed curves). The energy smoothly increases with temperature, while the heat capacity curve has an anomaly (i.e., a local minimum), around which the values of $C_V$ are larger. These observations imply that the transition is likely of higher than first order. $C_V$ anomalies were similarly reported in previous experimental~\cite{HicksPRL2006} and theoretical~\cite{ScipioniPNAS2017,Green2018} studies of liquid silica.
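The location of this anomaly can be extracted directly from tabulated $E(T)$ data along an isochore; a minimal sketch (centered finite differences for $C_V = (\partial E/\partial T)_V$; array names and units are our assumptions, not the production analysis):

```python
# Locate the heat-capacity anomaly from E(T) along one isochore:
# C_V = dE/dT via centered finite differences, then take the minimum,
# as marked by the crosses in Fig. 2(b). Illustrative sketch only.

def heat_capacity(temps, energies):
    """Centered-difference C_V at the interior temperature points."""
    cv, t_mid = [], []
    for k in range(1, len(temps) - 1):
        cv.append((energies[k + 1] - energies[k - 1]) /
                  (temps[k + 1] - temps[k - 1]))
        t_mid.append(temps[k])
    return t_mid, cv

def anomaly_temperature(temps, energies):
    """Temperature at which the finite-difference C_V is minimal."""
    t_mid, cv = heat_capacity(temps, energies)
    return t_mid[min(range(len(cv)), key=cv.__getitem__)]
```

With the sparse temperature grids used here, a spline fit of $E(T)$ before differentiation would sharpen the estimate; the finite-difference version conveys the idea.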
Such high values of $C_V$ were also reported for other materials (e.g., hexagonal close-packed iron at high temperatures and pressures~\cite{Alfe2001} and shock-melted magnesium oxide~\cite{McWilliamsScience2012} or diamond~\cite{EggertNphys2010}) and explained~\cite{Kraus2012,ScipioniPNAS2017} by the larger number of degrees of freedom in the melt relative to the solid~\footnote{Note that the excess heat capacity was mistakenly interpreted in~\cite{Green2018} as a result of anharmonic vibration by reference to a previous study on iron~\cite{Alfe2001}. Actually, Alf\`e {\it et al}.~\cite{Alfe2001} clearly stated that this is due to the electronic thermal excitation and that the anharmonic contribution to heat capacity is small.}. {\color{black}The anomaly in $C_V$ is a result of joint ion and electron thermal effects. First, the emergence of large $C_V$ (higher than $3k_\text{B}/\text{atom}$, the ``Dulong--Petit'' limit) upon melting is because of the combined vibration, rotation, and translational motion of local atomic pairs and clusters that form due to the loss of crystal symmetry. With increasing temperature, bonds break more frequently as kinetic motion increases, which effectively reduces the energy cost of compressing them. Therefore, the ion thermal contribution to $C_V$ decreases, and eventually approaches the ideal-gas value of $1.5k_\text{B}/\text{ion}$ in the limit of infinitely high temperatures~\footnote{This trend is in accord with various theories for ion thermal free energies such as the Cowan model~\cite{leos1qeos,Benedict_2014,Zhang2018ch}.}; meanwhile, the electron contribution to $C_V$ increases with temperature due to thermal excitation, which exceeds the ion thermal part and induces a turnover in the heat capacity profile (shown with crosses in Fig.~\ref{fig:ecv}(b)). The increasingly significant role of the electron thermal effects is closely related to the conductive nature of liquid silica, as shown in Fig.~\ref{fig:dos}.
While a bandgap exists in SiO$_2$ solids (shown by the discontinuity in Fermi-occupancy between 0--2.5~eV with grey symbols in Fig.~\ref{fig:dos}), it closes and a pseudogap forms at the Fermi level at higher temperatures when the system liquefies (purple curve in Fig.~\ref{fig:dos}, previously also found in MgO and MgSiO$_3$ in a computational study~\cite{Soubiran2018}), which is eventually filled at 28,000~K or above (green and blue curves). We note that a similar behavior was found in hydrocarbons~\cite{Zhang2018ch} and is believed to be associated with metallization of the system, and also in boron carbide (B$_4$C) as a result of the disappearance of mid-range order and molecular motifs within the liquid~\cite{Shamp2017}.} According to our DFT-MD data, the anomaly in $C_V(T)$ occurs at 2--3$\times10^4$~K along the isochores of 4--8~g/cm$^3$. Under the same criterion as that used by Hicks {\it et al.}~\cite{HicksPRL2006}, these are the temperatures of chemical bond dissociation. Our data suggest the bonded-to-atomic transition occurs at lower temperatures and is more sensitive to pressure than the previous estimates based on laser-driven Hugoniot measurements (black line-diamond curve in Fig.~\ref{fig:phase}). In order to understand these differences, a closer examination of the physics at the atomistic and electronic levels is required. We choose five different conditions nearly along the $\alpha$-quartz Hugoniot and perform calculations with much larger simulation cells and more in-depth analysis of the structural, electronic, and thermodynamic properties. The results are presented in the following sections.
\subsection{Structural evolution at Hugoniot conditions} \begin{figure}[ht] \centering \includegraphics[width=0.9\linewidth]{fig4.pdf} \caption{Pair correlation functions of liquid silica at different conditions near the Hugoniot of $\alpha$-quartz: four 192-atom simulations (colored curves, corresponding to hug01--04 as labeled in Fig.~\ref{fig:hugtrhop}) and a 24-atom simulation (`+'). } \label{fig:gr} \end{figure} \begin{figure*}[ht] \centering \includegraphics[width=0.8\linewidth]{fig5.pdf} \caption{Interatomic distances between a Si atom (``Si0'') and all O atoms (``O1--O128'') in a 192-atom cell, as a function of time (in windows of 200 fs) during simulations at four near-Hugoniot conditions (hug01--04 as labeled in Fig.~\ref{fig:hugtrhop}).} \label{fig:bond} \end{figure*} \begin{figure}[ht] \centering \includegraphics[width=1.0\linewidth]{fig6.pdf} \caption{Electron density distributions in planes around Si--O pairs from 192-atom simulations at five near-Hugoniot conditions (hug01--05 as labeled in Fig.~\ref{fig:hugtrhop}). The blue and red spheres denote the Si and O atoms, respectively, whose distance is 1.35~\AA~in all cases. The colormap displays the difference between the calculated valence electron density and the superposition of proto-atomic values.} \label{fig:rhoe} \end{figure} A useful way to describe the structure of a condensed fluid is through the pair correlation function $g(r)$, which is defined by the ratio between the time-averaged number density of atoms at distances $r$ from a given atom and that in an ideal gas of the same density (i.e., the average number density of the system)~\cite{AllenTildesley1987}. Figure~\ref{fig:gr} shows the $g(r)$ results for Si--O, Si--Si, and O--O from 192-atom simulations at several different conditions along the $\alpha$-quartz Hugoniot, in comparison with those from simulations using a smaller 24-atom cell. 
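For reference, the $g(r)$ defined above can be estimated from a single snapshot of a cubic periodic cell by histogramming minimum-image pair distances and normalising by the ideal-gas expectation at the same number density. A minimal single-species sketch (illustrative, not the production analysis used for Fig.~\ref{fig:gr}):

```python
import numpy as np

def pair_correlation(positions, box, r_max, n_bins):
    # g(r) for one snapshot of a cubic periodic cell: histogram the
    # minimum-image pair distances and divide by the expected count in
    # an ideal gas of the same density (requires r_max < box / 2).
    n = len(positions)
    rho = n / box**3
    diff = positions[:, None, :] - positions[None, :, :]
    diff -= box * np.round(diff / box)            # minimum-image convention
    dist = np.sqrt((diff**2).sum(-1))[np.triu_indices(n, k=1)]
    hist, edges = np.histogram(dist, bins=n_bins, range=(0.0, r_max))
    r = 0.5 * (edges[1:] + edges[:-1])
    shell = 4.0 * np.pi * r**2 * (edges[1] - edges[0])
    return r, hist / (0.5 * n * rho * shell)      # each pair counted once

# Sanity check: a uniform random configuration should fluctuate about g = 1.
rng = np.random.default_rng(0)
pos = rng.random((1500, 3)) * 10.0
r, g = pair_correlation(pos, box=10.0, r_max=3.0, n_bins=30)
```

For an ideal (uniform random) configuration the estimator hovers near $g(r)=1$, the baseline against which the peak--valley structure of the liquid discussed below is measured; a per-species version would restrict the pair loop to Si--O, Si--Si, or O--O pairs.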
All $g(r)$ results show peak-valley features with tails approaching unity, typical of fluids, except for the lowest temperature and density condition ``hug00'' (indicated in Fig.~\ref{fig:hugtrhop}) where we observe more structure that originates from crystallization of the simulated structure~\footnote{{\color{black}At 5000~K and 5.30~g/cm$^3$, we found the system stabilizes into a structure that is dominated by chains of edge-shared octahedrons, with each nearby pair of SiO$_6$ units from neighboring chains sharing an O atom.}} near the solid-liquid phase boundary (Fig.~\ref{fig:phase}). With increasing temperature and density, the primary peak in $g(r)$ drops in height and sharpness and shifts closer to $r=0$, as a result of thermal broadening in spatial distributions and increased compression of the system. The features are fully captured by the 192-atom simulations but not by the 24-atom ones, indicating the importance of using large cells for detailed structural analysis. Although the small cells are sufficient for producing converged EOS data at high temperatures, much larger ones are required to understand the structure of the fluids. Moreover, our results show clear differences between the $g(r)$ profile at 14,500 K (``hug01'') and higher-temperature ones (``hug02--05'') in appearance and values at $r<5$~\AA, while variations among the higher-temperature profiles between 28,000 and 70,000 K are small. This suggests the microscopic structure of liquid silica changes qualitatively over the range of 14,500--28,000 K. This temperature (approximately 1--3 eV) is comparable to typical chemical bond energies (1.5--11.1 eV)~\cite{chembondenergy}, which implies that the chemical bonds in liquid silica are susceptible to breaking because of the large kinetic energy. We have therefore calculated the interatomic distances $d_\text{Si-O}$ between a randomly selected Si atom and all O atoms in the 192-atom simulation cell and monitored their changes with time. 
The results at four different conditions (``hug01--04'') and in a window of 200~fs are shown in Fig.~\ref{fig:bond}(a)--(d). At 14,500 K, only a few (less than 10) oxygen atoms enter the window and some of them stay for long durations, whereas at $T\ge28,000$~K a much larger (by more than 2$\times$) number of oxygen atoms come in and out of the window, more so at higher temperatures. This suggests that chemical bonds between Si and O both break and form more readily (or have a shorter lifetime) in liquid silica at higher temperatures and pressures. \begin{figure}[ht] \centering \includegraphics[width=1.0\linewidth]{fig7.pdf} \caption{(a--b) Electron localization function and (c) the negative of the crystal orbital Hamilton population integrated to the Fermi level (-iCOHP, which characterizes the bond strength) for atomic pairs as functions of inter-atomic distances for structure snapshots from 192-atom simulations at different near-Hugoniot conditions (hug01 and 05 as labeled in Fig.~\ref{fig:hugtrhop}). In (a) and (b), black circles denote examples of covalent bonds between homo-species (an O--O pair and a Si--Si--Si cluster). In (c), the results from a solidified structure (hug00 in Fig.~\ref{fig:hugtrhop}) that is dominated by ionic Si--O bonds are denoted by grey symbols for comparison.} \label{fig:bonding} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=1.0\linewidth]{fig8.pdf} \caption{Lifetime of Si--O bonds as a function of (a) temperature or (b) pressure along five different isochores. Open and filled circles denote simulations using 8-f.u. and 24-f.u. cells, respectively. 
Lifetime values of 100, 50, and 25 fs are shown with dashed horizontal lines.} \label{fig:bondlifetime} \end{figure} In order to further clarify these points, we have taken a near-equilibrium snapshot of each of the simulations at different near-Hugoniot conditions ``hug01--05'', performed a self-consistent field calculation of the valence electron density at the corresponding temperature, and compared with the proto-atomic values~\footnote{The proto-atomic values of the electron density are reconstructed within the PAW method in the {\footnotesize VASP} code.}. The resultant colormaps of the electron density difference $\Delta n_\text{electron}=n^\text{SCF}-n^\text{proto}$ at the five conditions are shown in Fig.~\ref{fig:rhoe}(a)--(e), highlighting a planar region surrounding a local Si--O structural unit in each scenario. {\color{black}The results show that the electron density in the region between Si and O is slightly higher than or similar to the proto-atomic values at 14,500--28,000 K (yellowish green--green colors in (a)--(b)), and then becomes increasingly depleted at higher temperatures (more bluish from (c) to (e)). The absence of a significant gain in electron density between Si and O is indicative of the ionic nature of the bonds, and the increasing depletion of density between the atoms at higher temperatures is suggestive of a facile transition of the system from a bonded to an atomic fluid. The electron localization function (ELF)~\cite{ELF1994Natur.371..683S} identifies regions of space that can be associated with electron pairs. Therefore, high values of the ELF are characteristic of covalent bonds or lone pairs. Figure~\ref{fig:bonding}(a)--(b) show ELF results for selected snapshots (same as the corresponding ones shown in Fig.~\ref{fig:rhoe} for electron densities) at two different conditions (hug01 and hug05 as labeled in Fig.~\ref{fig:hugtrhop}). The plot shows large ELF values around oxygen, but not around silicon or between Si--O. 
A few regions possess large ELF values between pairs of oxygen atoms or within silicon clusters. This suggests the Si--O bonding in silica is ionic, whereas some covalent homonuclear bonds are formed at the conditions studied here. The strength of the bonds, defined by the -iCOHP (the negative of the crystal orbital Hamilton population integrated to the Fermi level)~\cite{pCOHP.Deringer2011,lobster2016,lobster2020}, follows the same trends at different temperature-pressure conditions and ranges up to 12~eV/bond and 27~eV/bond for Si--O and O--O, respectively, depending on the interatomic distance (see Fig.~\ref{fig:bonding}(c)). In comparison, the covalent bonds between silicon atoms are weaker (up to 8--10~eV/bond), particularly at lower temperatures and pressures, but they are still stronger than typical Si--Si bonds at ambient conditions (approximately 3.5~eV~\cite{chembondenergy} for single bonds with an average length of 2.34~\AA~\cite{singleSibond}) because atoms are more likely to come close to each other at higher temperatures and densities. Figure~\ref{fig:bonding}(c) also shows that the bond strengths decrease with temperature. This is consistent with our calculated Mulliken and L\"owdin charges~\cite{Ertural2019}, which decrease from approximately $+2$ for Si and $-1$ for O in the solidified structure (``hug00'') to around $+1$ for Si and $-0.5$ for O in the liquid states at higher temperatures and densities (``hug01--05''). The decrease in charge therefore weakens the bonds formed by electrostatic forces, lowers the bond energy, and also contributes to the decrease in $C_V$ before electron thermal effects take over. } We have also calculated the lifetime of Si--O bonds from the DFT-MD simulations in order to understand the kinetic effects. 
The calculation is done by defining a function $F(t, r_\text{cutoff})$, which represents the probability that Si--O bonds (defined by Si--O pairs that satisfy $d_\text{Si-O}<r_\text{cutoff}$, where $r_\text{cutoff}$ is set to 2.4~\AA, the approximate position of the first valley in the pair correlation function for Si--O shown in Fig.~\ref{fig:gr}(a)) persist at time $t$. The probability function is generally observed to decay exponentially with time, following $F(t, r_\text{cutoff})=\exp(-t/\tau)$, where $\tau$ is the bond lifetime. We can therefore calculate the value of $\tau$ by fitting this exponential form to the simulation data at each of the temperature-density conditions at which we have performed DFT-MD calculations. Figure~\ref{fig:bondlifetime} shows our results for the Si--O bond lifetime along five different isochores~\footnote{Here, since we only need to count the nearest Si--O pairs ($d$ $\approx$ 1--2 \AA), our 8- and 24-f.u. simulations are useful for the bond lifetime analysis.}. Within the whole range in density (2.65--7.95~g/cm$^3$) and temperature (5,000--100,000~K) that we have considered, the Si--O bond lifetime is in general longer at lower temperatures and gradually drops from approximately 1000 to 20 fs as temperature increases. This trend is similar for all densities, while the value of $\tau$ is typically larger at higher densities (i.e., when the system is more squeezed). Remarkably, the temperature corresponding to a lifetime of 50~fs increases from 15,000~K at 40 GPa to 38,000 K at 1000 GPa (green solid line-triangle curve in Fig.~\ref{fig:phase}), which is similar to the bonded-to-atomic transition defined by using the anomaly in heat capacity. This indicates that we may choose the 50-fs lifetime as another criterion for defining the bonded-to-atomic transition, as it corresponds to a uniform length of time for which Si--O bonds can stably exist. 
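The lifetime extraction described above reduces to two steps: estimating the survival probability $F(t, r_\text{cutoff})$ from the trajectory, and fitting the exponential $\exp(-t/\tau)$. A minimal sketch (the array layout and function names are illustrative, not the actual analysis code):

```python
import numpy as np

def survival_probability(bonded):
    # bonded: boolean array of shape (n_frames, n_pairs); an entry is
    # True while a Si-O pair satisfies d < r_cutoff.  F(t) is the
    # fraction of bonds present in frame 0 that have persisted,
    # unbroken, through every frame up to t.
    alive = np.logical_and.accumulate(bonded, axis=0)
    return alive.sum(axis=1) / max(int(bonded[0].sum()), 1)

def fit_lifetime(t, F):
    # Fit F(t) = exp(-t / tau) by least squares on log F through the
    # origin: the slope of log F versus t equals -1/tau.
    t, logF = np.asarray(t, float), np.log(np.asarray(F, float))
    return -1.0 / ((t @ logF) / (t @ t))

# Toy trajectory: 4 bonds at t = 0, one breaks per later frame.
bonded = np.array([[True, True, True, True],
                   [True, True, False, True],
                   [True, False, False, True]])
F = survival_probability(bonded)   # -> [1.0, 0.75, 0.5]
```

For an ideal exponential survival curve the fit recovers $\tau$ exactly; for simulation data one would restrict the fit to times where $F(t)$ is still well sampled.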
The two approaches consistently suggest a transition boundary that is lower in temperature and more dependent on pressure than previous findings (37,000 K and weakly dependent on pressure), which were based on an analysis of the shock temperature to infer approximate values for the specific heat $C_V$ along the fused silica and $\alpha$-quartz Hugoniots~\cite{HicksPRL2006}. We note that, if choosing $\tau_\text{Si-O}=25$ (or 100)~fs as the criterion for bond dissociation, the corresponding temperatures are higher (or lower) by 1--2$\times$10$^4$~K than when choosing $\tau_\text{Si-O}=50$~fs, while the sensitivity to pressure remains similar, as shown with dotted (or dashed) line-triangles in Fig.~\ref{fig:phase}. Figure~\ref{fig:phase} shows overall consistency between our simulations and the measurements in the $T$--$P$ Hugoniots for $\alpha$-quartz and fused silica, while slightly larger differences are noticed for stishovite near melting (similar to that found in a previous EOS and DFT-MD study of SiO$_2$ in the fluid regime~\cite{SjostromAIP2017}). This picture remains similar in the plot of $T$ {\it vs} $u_s$ (Fig.~\ref{fig:t-us}); both quantities are measurable in the experiments and do not rely on the choice of $u_s$--$u_p$ relations (in contrast to $P$, which is calculated as $\rho_0u_su_p$ and thus depends on the $u_s$--$u_p$ relation, for which data are lacking along the stishovite Hugoniot at 0.2--1.2~TPa).\footnote{{\color{black} We also note differences between computation and experiment along the fused silica Hugoniot at temperatures above 40,000~K, similar to that shown in a $T$--$P$ Hugoniot plot in Ref.~\onlinecite{SjostromAIP2017}. This does not affect the conclusions in this work and can be worthwhile to study in the future.}} {\color{black}This implies that the discrepancy in pressure dependence for the bonded-to-atomic transition may be related to the limited thermodynamic space that was probed in the experiment. 
This places a more stringent requirement on the resolution needed to determine the transition temperature and pressure than is achievable with the relatively simple models used for estimating temperature and isochoric heat capacity. In contrast, our DFT-MD simulations provide a more complete sampling of temperature conditions along different isochores, which allows straightforward diagnosis of the structure and thermodynamic properties. However, the accuracy of DFT-MD results is also known to depend on the exchange-correlation (XC) functional, pseudopotential, and finite size of the simulation cell, which we address in Sec.~\ref{subsec:resultc} below. } \begin{figure}[ht] \centering \includegraphics[width=0.9\linewidth]{fig9.pdf} \caption{Phase diagram of SiO$_2$ featuring the bonded-to-atomic liquid transition determined in this work (black curve with diamond symbols) as compared to a previous estimate (Hicks {\it et al.}~\cite{HicksPRL2006}, grey dashed curve). Also shown are the conditions for three values of the Si--O bond lifetime (green line-triangles), Hugoniots of silica from this work (darker-colored curves in red, blue, and turquoise for fused silica, $\alpha$-quartz, and stishovite, respectively) in comparison to experiments~\cite{HicksPRL2006,MillotSci2015} (lighter-colored symbols), the melting curve (solid: measured; dashed: extrapolated) from Millot {\it et al.}~\cite{MillotSci2015}, and the conditions of interest (blue shaded area) to giant impacts~\cite{CanupIracus2004}. Yellow pentagons correspond to the near-Hugoniot conditions ``hug00--04'' labeled in Fig.~\ref{fig:hugtrhop}. 
} \label{fig:phase} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=0.8\linewidth]{fig10.pdf} \caption{Temperature-shock velocity plot of SiO$_2$ Hugoniots in initial forms of fused silica (red), $\alpha$-quartz (blue) and stishovite (turquoise) from our simulations (line-diamonds) compared to experiments (light-colored symbols)~\cite{HicksPRL2006,MillotSci2015}.} \label{fig:t-us} \end{figure} \subsection{Validity of the DFT-MD results}\label{subsec:resultc} \begin{figure}[ht] \centering \includegraphics[width=0.9\linewidth]{fig11.pdf} \caption{Effect of different XC functionals, simulation cell sizes, and pseudopotentials on (a) the internal energy $E(T)$ and (b) the corresponding heat capacity $C_V(T)$ profiles. Notation in (a): `+' denotes a 24-f.u. cell simulation; `x' denotes an 8-f.u. cell simulation; `$\square$' denotes a 4$\times$4$\times$4 k-mesh simulation; `h' denotes hard pseudopotentials; `gw' denotes GW-type pseudopotentials. In (b), the blue dashed curve is obtained based on mixed 24-f.u. (at low $T$) and 8-f.u. (at high $T$) simulations.} \label{fig:XCpsp} \end{figure} We have performed additional calculations by using GGA functionals (AM05 or PBE) and GW-type pseudopotentials to cross-check our findings on the bonded-to-atomic transition based on LDA. The thermodynamic results in $E(T)$ and $C_V(T)$ along two different isochores are shown in Fig.~\ref{fig:XCpsp} and compared to the previous results based on the LDA XC functional and hard PAW method. Our results show that, within GGA (AM05 or PBE), the transition temperature (as defined by the anomaly in heat capacity) decreases by 3000~K at 5.30~g/cm$^3$ and by 800~K at 7.95~g/cm$^3$, relative to LDA. At the minimum, $C_V$ is higher than in LDA because the slopes of the $E(T)$ curves are larger; this indicates the electron thermal contribution to energies under GGA is larger than that within LDA. We also find that, when using GW-type pseudopotentials, the slope of $E(T)$ decreases. 
Therefore, $C_V$ decreases relative to hard-type pseudopotentials, and the transition temperature obtained using PBE is consistent with that obtained using hard pseudopotentials under LDA. Moreover, we have tried switching from 24-f.u. to 8-f.u. cells or from a $\Gamma$-point to a $4\times4\times4$ k-mesh for the calculations but observed no meaningful difference in the EOS or the transition temperature (see results at 5.30~g/cm$^3$ shown in blue in Fig.~\ref{fig:XCpsp}), which shows our findings are robust and not affected by finite-size effects. \section{Conclusions}\label{sec:conclusions} We have performed extensive simulations from first principles and in-depth analysis of the structure, electron density, and thermodynamic properties of liquid silica, and provided insights into the nature of the bonded-to-atomic transition. Our results show smooth internal energy curves as a function of temperature, indicating the transition is likely second-order. The heat capacity anomaly, which defines the bonded-to-atomic transition, occurs at 2--3$\times10^4$~K (1.5--2.5~eV) over the pressure range of 0.1--1~TPa. The transition temperature is lower and more sensitive to pressure than previous estimates~\cite{HicksPRL2006}. These results yield a new bonded-to-atomic boundary of liquid silica that overlaps the conditions of interest to giant-impact simulations~\cite{CanupIracus2004}, which indicates more complex variations in heat capacity (i.e., a decrease and then an increase with temperature) than considered previously~\cite{HicksPRL2006}. This can rebalance the dissipation of irreversible work into temperature and entropy in giant-impact events, necessitating reconsideration of predictions by simulations that are based on empirical EOS models~\cite{Melosh2007,Kraus2012}. 
Furthermore, even though the temperature-density grid considered for the EOS calculations in this work is relatively sparse, our calculated Hugoniots show overall agreement with experimental results and are similar to previous calculations using similar methods~\cite{ScipioniPNAS2017,SjostromAIP2017}. {\color{black}The discrepancies between theory and experiment in the stishovite temperature-pressure Hugoniot near melting, together with the previously shown inconsistencies at 1.0--2.5 TPa~\cite{SjostromAIP2017}, also emphasize the need for further development in both numerical simulations and dynamic compression experiments to improve constraints on the phase diagram, EOS, and properties of SiO$_2$ in regions off the Hugoniots of $\alpha$-quartz and fused silica, and to elucidate the exotic behaviors of matter at extreme conditions. These include simulations that overcome the limitations imposed by pseudopotentials and the computational cost of reaching convergence at high density-temperature conditions, or that go beyond LDA/GGA for the XC functional, as well as more in-depth experimental studies, which currently lack benchmark $u_s$--$u_p$ data for stishovite between 0.2--1.2 TPa and rely on pyrometry and a grey-body approximation~\cite{QiPoP2015,Falk2014} for temperature estimation.} \section*{Acknowledgements} We thank Dr. M. Li, Dr. A. Samanta, and Dr. H. Whitley for helpful discussions and assistance during the research. This material is based upon work supported by the Department of Energy National Nuclear Security Administration under Award Number DE-NA0003856, the University of Rochester, and the New York State Energy Research and Development Authority. The Flatiron Institute is a division of the Simons Foundation. Part of this work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under contract number DE-AC52-07NA27344. M.M. acknowledges support from LLNL LDRD project 19-ERD-031. R. J. and E. 
Z. thank the Center for Matter at Atomic Pressures (CMAP), a National Science Foundation (NSF) Physics Frontier Center, under Award PHY-2020249. Any opinions, findings, conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect those of the National Science Foundation. This report was prepared as an account of work sponsored by an agency of the U.S. Government. Neither the U.S. Government nor any agency thereof, nor any of their employees, makes any warranty, express or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise does not necessarily constitute or imply its endorsement, recommendation, or favoring by the U.S. Government or any agency thereof. The views and opinions of authors expressed herein do not necessarily state or reflect those of the U.S. Government or any agency thereof.
\section{Introduction} Much of the theory of random processes has been driven by the study of evolving physical systems that clearly either have no memory, or can be assumed forgetful to a good approximation. This leads naturally to models that require the Markov property in one of its forms. However, many real-world systems exhibit marked long-range dependence, together with the phenomenon of `lock-in' (or `self-organization'), and other manifestations of non-ergodic behaviour. The customary assumption of the Markov property, though extraordinarily convenient, limits the extent to which such properties can feature in the development of the process. A natural next step therefore is to abandon that restriction, and allow the short-term development of the process to have an explicit dependence on any part, or all, of its history. Applications include: history-dependent (HD) quantum dynamics \citep{BL}, HD dynamic random utility \citep{FIS}, HD predator-prey models \citep{GE}, HD materials \citep{MBE}, HD social networks \citep{CBHT,PS}, and so on. Models for describing such phenomena fall into several types, and for a broad survey see \citet{P}. Among the various types of HD random processes, perhaps the most developed strand comprises the HD random walks (RW); these are known in general as self-exciting RW, or self-reinforcing RW, or self-interacting RW, or self-avoiding RW, and so on. Further sub-divisions arise according to whether the walk is edge-reinforced, vertex-reinforced, step-reinforced, etc. See for example \citet{KM,B}. Notable special cases include: first the Shark Random Swim \citep{BU}; second the Elephant Random Walk (ERW), first introduced by \citet{SGT}, and see also \citet{BE}; and third, the Reverting Random Walk \citep{BR,CS1}. The field now extends to HD Brownian motion and HD L\'{e}vy processes \citep{BERT}, and HD spatio-temporal processes \citep{R}. 
Note that in many applications, especially the social sciences, such processes are called path-dependent, but this term is also used in stochastic analysis to denote the solution of a stochastic differential equation whose coefficients depend on the paths of another process, such as the Wiener process. Here, we consider a type of HD growth process, first suggested by Stanislaw Ulam, in which the next step depends on the entire past of the process. Specifically, every member of the sequence of values is the sum of two values chosen from the previous history. This type of sequence was studied in discrete time by \citet{BSU}, thus \begin{equation}\label{eq:BSU} X_{n+1} = X_{U(n)} + X_{V(n)}, \ \ n \geqslant 2, \end{equation} where $X_1=x_1$ and $X_2=x_2$ are given, and $(U(n),V(n); n \geqslant 1)$ comprise a sequence of independent random variables such that for given $n$, $U(n)$ and $V(n)$ are each uniformly distributed on $\{1,\dots,n\}$. They note that $E X_n = \frac13 (x_1+x_2)n,$ for $n\geqslant 3,$ and they conjectured from computer simulations that $E X^2_n$ grows quadratically with $n$ as $n \to \infty$. (They made 5000 simulations, each with 100 steps.) They also note that since the process $(X_n; n \geqslant 1)$ does not enjoy the Markov property, or similar simplifications, it is not straightforward to analyse. The sequence defined in \eqref{eq:BSU} was later considered by \citet{BNK}, with the initial condition $X_1=x_1.$ They note that in this case $E X_n = n x_1, n \geqslant 1.$ On the basis of further simulations ($10^8$ realisations, each of $1000$ steps), they conjectured that $E X^2_n$ grows quadratically and also that $E X^3_n$ grows with the cube of $n$. We shall verify these conjectures, and identify a martingale that further elucidates the behaviour of $(X_n; n \geqslant 1)$. We will then consider a new randomised adding sequence, in which changes occur randomly with probability $p$. 
Finally we consider a related adding process in continuous time, in which changes are regulated by a Poisson process. Such processes have previously been introduced in the context of history-dependent growth processes \citep{CS}. The process is shown to reproduce, in continuous time, the essential properties of Ulam's discrete time sequence. Furthermore, similar analyses can be made of many more general processes, which we briefly outline. \section{The adding process in discrete time}\label{sec:2} Consider the process defined in \eqref{eq:BSU} with initial fixed sequence $(X_1=x_1,\dots,X_r=x_r)$ and let $s_r = \sum_{k=1}^r x_k$ and $t_r=\sum_{k=1}^r x_k^2$. Denote the mean of $X_n$ by $m_n = E X_n$. By conditional expectation, $$m_{n+1} = \frac{2}{n} \sum_{k=1}^n m_k, \quad n \geqslant r, $$ and an easy induction gives \begin{equation} \label{eq:martmean} m_n = \frac{2 n}{r(r+1)} \sum_{k=1}^r x_k, \quad n > r. \end{equation} For the second moment we have this: \begin{theorem} \label{thm:qlimit} \begin{equation} \label{eq:qlimit} \frac{E X^2_n}{n^2} \to K(x_1,\dots,x_r),\quad \text{as $n \to \infty$}, \end{equation} where \begin{equation} \label{eq:K} K(x_1,\dots,x_r) = \frac{\sinh(\pi)}{2\pi W_r} \left\{2 (r+1)\left(\frac{t_r}{r} + \frac{s_r^2}{r^2}\right) -(r+2)x_r^2\right\}, \end{equation} and \begin{equation} W_r=\frac12 \left\{(r+1)^2+1\right\} \prod_{k=1}^r\left(1+\frac{1}{k^2}\right). \end{equation} \end{theorem} \begin{proof} Define $S_n = \sum_{k=1}^n X_k$ and let $p_n = E S_n^2$ and $q_n= E X^2_n$. By conditional expectation, for $n \geqslant r$, \begin{equation} \label{eq:qn+1} q_{n+1} = \frac{2}{n} \sum_{k=1}^n q_k + \frac{2}{n^2} p_n. \end{equation} Also by conditional expectation, \begin{equation} \label{eq:an+1} p_{n+1} = E (S^2_n + 2 S_n X_{n+1} + X^2_{n+1}) = \frac{n+4}{n} p_n + q_{n+1}. 
\end{equation} Eliminating $p_n$, we have \begin{equation} \label{eq:LRadding} (n+1)^2 q_{n+2} - 2(n+1)(n+2)q_{n+1} + \{(n+2)^2+1\} q_n = 0, \end{equation} with initial conditions \begin{equation} \label{eq:ICadding} q_r = x_r^2 \quad \text{and} \quad q_{r+1} = \frac{2t_r}{r} + \frac{2s^2_r}{r^2}. \end{equation} By inspection, a particular solution of \eqref{eq:LRadding} is $q^*_n = n+1$. From the theory of difference equations \citep{Elaydi}, a second, linearly independent, solution $q^\circ_n$ is given by $$ q^\circ_n = q^*_n \sum_{k=0}^{n-1} \frac{W_k}{(k+1)(k+2)},\quad n>r,$$ where $W_k$ is the Casoratian associated with the difference equation. We may set $W_0 = 1$ and in this instance $W_n$ is given by the recursion \begin{eqnarray*} W_{n+1} &=& \frac{(n+2)^2+1}{(n+1)^2} W_n\\ &=& \frac12 \left\{(n+2)^2+1\right\} \prod_{k=1}^{n+1} \left(1+ \frac{1}{k^2}\right)\\ &\sim& n^2 (2 \pi)^{-1} \sinh(\pi), \end{eqnarray*} where we have used the product limit of \citet[\S 156]{Euler} and where the notation $f(n) \sim g(n)$ indicates that $\lim_{n\to \infty} f(n)/g(n) =1.$ The general solution of \eqref{eq:LRadding} is given by $q_n = A(n+1) + Bq^{\circ}_n$, where the constants $A$ and $B$ are determined by the initial conditions \eqref{eq:ICadding}. Hence for $n> r$, $$q_n = \frac{(n+1)x_r^2}{r+1} + \frac{(n+1)}{W_r}\left\{2(r+1)\left(\frac{t_r}{r} + \frac{s_r^2}{r^2}\right) -(r+2)x_r^2\right\}\sum_{k=r}^{n-1}\frac{W_k}{(k+1)(k+2)},$$ and since $W_n \sim n^2 (2\pi)^{-1} \sinh(\pi)$, the limit \eqref{eq:qlimit} follows. For the special case considered by \citet{BNK}, the constant $K$ becomes $\sinh(\pi)/(2\pi) \doteq 1.83804$, in good agreement with the approximate value of $1.84$ that they obtained by simulation.\end{proof} Higher moments can be obtained in a similar fashion. For simplicity we restrict attention to the special case with initial condition $x_1=1$, as in \citet{BNK}. 
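Before turning to higher moments, we note that \eqref{eq:LRadding} can be iterated forward numerically as an independent check of theorem \ref{thm:qlimit}. A short sketch for the case $r=1$, $x_1=1$:

```python
import math

def second_moments(n_max):
    # Iterate (n+1)^2 q_{n+2} - 2(n+1)(n+2) q_{n+1} + ((n+2)^2+1) q_n = 0
    # forward from the r = 1, x_1 = 1 initial conditions q_1 = 1, q_2 = 4.
    q = [0.0, 1.0, 4.0]            # q[0] is an unused placeholder
    for n in range(1, n_max - 1):
        q.append((2*(n+1)*(n+2)*q[n+1] - ((n+2)**2 + 1)*q[n]) / (n+1)**2)
    return q

q = second_moments(20000)
print(q[3])                    # 9.5, matching E X_3^2 computed by hand
print(q[20000] / 20000**2)     # settles near sinh(pi)/(2*pi) = 1.83804...
```

The forward iteration is numerically stable because the quadratically growing solution dominates, and $q_n/n^2$ approaches $\sinh(\pi)/(2\pi)$ with a slowly vanishing correction.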
For the third moment $t_n = E(X_n^3)$ we define $$ a_n^{[j,k]} = E\left\{X_n^j S^k_{n-1}\right\} \quad \text{and} \quad b_n^{[j,k]} = E\left\{\left(\textstyle{\sum_{\nu=1}^n X^j_\nu}\right) S^k_n\right\}. $$ By the usual conditional expectation arguments we find \begin{eqnarray*} a_{n+1}^{[0,3]} &=& a_{n}^{[0,3]} + 3 a_{n}^{[1,2]} + 3 a_{n}^{[2,1]} + a_{n}^{[3,0]},\quad a_{n+1}^{[1,2]} = \frac{2}{n} a_{n+1}^{[0,3]},\\ a_{n+1}^{[2,1]} &=& \frac{2}{n^2} a_{n+1}^{[0,3]} + \frac{2}{n} b_{n}^{[2,1]}, \quad a_{n+1}^{[3,0]} = \frac{2}{n} b_{n}^{[3,0]} + \frac{6}{n^2} b_{n}^{[2,1]},\\ b_{n+1}^{[2,1]} &=& a_{n}^{[0,3]}\left(1+\frac{2}{n}\right) + a_{n+1}^{[2,1]} + a_{n+1}^{[3,0]}, \quad b_{n+1}^{[3,0]} = b_{n}^{[3,0]} + a_{n+1}^{[3,0]}, \end{eqnarray*} with initial conditions $a_1^{[j,k]} = 0$ and $b_1^{[j,k]} = 1$. Reducing this system to a single recurrence for $t_n = a_{n}^{[3,0]}$ yields \begin{equation}\label{eq:3rdmoment} \begin{split} (4 n-3)(n+1)^2(n+2)^2t_{n+3} - 3(4n^3+17n^2+14n-21)(n+1)^2t_{n+2}\\ +(12n^5 + 87n^4+234n^3+177n^2-84n-126)t_{n+1}\\ -(n^3+5n^2+11n-5)(4n+1)(n+3)t_{n}=0, \end{split} \end{equation} with initial conditions $t_1=1, t_2 = 8, t_3 = 63/2$. Applying the methods of \citet{Adams,Birkhoff}, we substitute trial solutions of the form $n^\sigma \delta^n$ and then $n^\rho$ into \eqref{eq:3rdmoment} and determine the values of $\sigma$, $\delta$ and $\rho$ for which the leading term is zero. We find that $\delta= 1$ and then $\rho = 1,2,3$ and therefore conclude that $t_n$ grows asymptotically with the cube of $n$. Solving the recurrence numerically for the given initial conditions we find that $\lim_{n \to \infty} n^{-3}t_n \doteq 5.7946$. This can be compared with the estimate $5.76$ obtained by simulation in \citet{BNK}. 
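The limiting constant quoted above can be reproduced by iterating \eqref{eq:3rdmoment} forward in the same way:

```python
def third_moments(n_max):
    # Iterate the four-term recurrence for t_n = E(X_n^3) forward from
    # the x_1 = 1 initial conditions t_1 = 1, t_2 = 8, t_3 = 63/2.
    t = [0.0, 1.0, 8.0, 31.5]      # t[0] is an unused placeholder
    for n in range(1, n_max - 2):
        c3 = (4*n - 3) * (n+1)**2 * (n+2)**2
        c2 = 3 * (4*n**3 + 17*n**2 + 14*n - 21) * (n+1)**2
        c1 = 12*n**5 + 87*n**4 + 234*n**3 + 177*n**2 - 84*n - 126
        c0 = (n**3 + 5*n**2 + 11*n - 5) * (4*n + 1) * (n+3)
        t.append((c2 * t[n+2] - c1 * t[n+1] + c0 * t[n]) / c3)
    return t

t = third_moments(50000)
print(t[4])                  # 87.0, matching E X_4^3 computed by hand
print(t[50000] / 50000**3)   # settles near the quoted constant 5.7946
```

Since the cubically growing solution dominates, the forward iteration is stable, and $t_n/n^3$ converges as the subdominant $\rho = 1, 2$ solutions are washed out.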
For the fourth moment $f_n = E(X^4_n) = a_{n}^{[4,0]}$ we have corresponding equations \begin{eqnarray*} a_{n+1}^{[0,4]} &=& a_{n}^{[0,4]} + 4a_{n}^{[1,3]} + 6a_{n}^{[2,2]} + 4a_{n}^{[3,1]} + a_{n}^{[4,0]}, \quad a_{n+1}^{[4,0]} = \frac{2}{n} b_{n}^{[4,0]} + \frac{8}{n^2} b_{n}^{[3,1]} + \frac{6}{n^2} c_n^{[2]},\\ a_{n+1}^{[1,3]} &=& \frac{2}{n} a_{n+1}^{[0,4]}, \quad a_{n+1}^{[2,2]} = \frac{2}{n} b_{n}^{[2,2]} + \frac{2}{n^2} a_{n+1}^{[0,4]},\quad a_{n+1}^{[3,1]} = \frac{2}{n} b_{n}^{[3,1]} + \frac{6}{n^2} b_{n}^{[2,2]},\\ b_{n+1}^{[2,2]} &=& b_{n}^{[2,2]}\left(1 + \frac{4}{n} + \frac{2}{n^2}\right) + \frac{2}{n}c_n^{[2]} + a_{n+1}^{[2,2]} + 2a_{n}^{[3,1]} + a_{n}^{[4,0]}, \quad b_{n+1}^{[4,0]} = b_{n}^{[4,0]} + a_{n+1}^{[4,0]},\\ c^{[2]}_{n+1} &=& c^{[2]}_{n}\left(1 + \frac{4}{n}\right) + \frac{4}{n^2}b_{n}^{[2,2]} + a_{n+1}^{[0,4]}, \quad b_{n+1}^{[3,1]} = b_{n}^{[3,1]}\left(1+ \frac{2}{n}\right) + a_{n+1}^{[3,1]} + a_{n+1}^{[4,0]}, \end{eqnarray*} where $c_n^{[2]} = E\left\{\left(\sum_{k=1}^n X^2_k\right)^2\right\}$. Again, reducing the system to a single recurrence for $f_n$ and applying the methods of \citet{Adams,Birkhoff}, we find that $f_n$ grows asymptotically with the fourth power of $n$. Solving the recurrence numerically we have $\lim_{n \to \infty} n^{-4}f_n \doteq 31.585$. An understanding of further properties of the process $(X_n; n \geqslant 1)$ is greatly aided by the content of the following: \begin{lemma} \label{thm:martingale} Let $M_n = S_n/(n(n+1))$ where $S_n = \sum_{k=1}^n X_k$, then $(M_n; n \geqslant r)$ is a martingale with respect to the increasing sequence of $\sigma$-fields $(\mathcal{F}_n; n \geqslant r)$ generated by the sequence $(X_n)$, or equivalently $(S_n)$. 
Furthermore, there exists a non-degenerate random variable $M$, such that $M_n$ converges to $M$ almost surely and in mean-square as $n \to \infty$, where $$E M = \frac{1}{r(r+1)} \sum_{k=1}^r x_k \quad \text{and} \quad E (M^2) = \frac16 K(x_1,\dots,x_r).$$ \end{lemma} \begin{proof} By conditional expectation $$E \left(M_{n+1} | \mathcal{F}_n\right) = \frac{E(S_n + X_{n+1}| \mathcal{F}_n) }{(n+1)(n+2)} = \frac{S_n+2S_n/n}{(n+1)(n+2)} = M_n.$$ The mean of $M_n$, and hence of $M$, follows from \eqref{eq:martmean}. Dividing \eqref{eq:qn+1} by $n^2$, allowing $n \to \infty$, and noting \eqref{eq:qlimit}, yields \begin{equation} \label{eq:alimit} \lim_{n\to \infty}n^{-4} E S^2_n = \frac16 K(x_1,\dots,x_r). \end{equation} The existence of this limit ensures that $E M_n^2$ is uniformly bounded for all $n$. The probabilistic limit results then follow from the martingale convergence theorem \citep{Doob}. \end{proof} As an immediate corollary, we remark that \begin{equation} \label{eq:Xmart} E (X_m M_n | \mathcal{F}_m) = \frac{X_m S_m}{m(m+1)}, \quad \text{for $n \geqslant m$}. \end{equation} We now turn to consider the auto-covariance properties of the process $(X_n)$, where we have the following result. \begin{theorem} \label{thm:2} Let $m,n \to \infty$, with $m \leqslant n$ then, \begin{equation} n^{-2} E(X_m X_n) \to \begin{cases}\frac{2}{3} \theta K & \text{if $m/n \to \theta \in (0,1)$},\\ K& \text{if $m=n$}, \end{cases} \end{equation} where $K=K(x_1,\dots,x_r)$ is defined in \eqref{eq:K} above. \end{theorem} \begin{proof} By conditional expectation, we have $$ E (X_{n+1}S_{n+1}) = \frac{2}{n}p_n + q_{n+1}, $$ so that \eqref{eq:alimit} and theorem \ref{thm:qlimit} give \begin{equation} \label{eq:xslimit} \lim_{n\to \infty}n^{-3} E(X_{n+1}S_{n+1}) = \textstyle{\frac13} K.
\end{equation} Again by conditional expectation, we have \begin{equation} \label{eq:autocov} E(X_m X_{n+1}) = \frac{2}{n} E (X_m S_n), \quad n \geqslant m, \end{equation} so that with $m=n$ \begin{equation} \lim_{n\to \infty}n^{-2} E(X_{n}X_{n+1}) = \lim_{n\to \infty}2 n^{-3} E(X_{n}S_{n}) = \textstyle{\frac23} K. \end{equation} Finally, considering $m,n \to \infty$ with $m/n \to \theta$, for some fixed $\theta \in (0,1)$, by \eqref{eq:autocov} and \eqref{eq:Xmart} we have $$\frac{E (X_m X_n)}{n(n+1)} = \frac{2 E(X_mS_{n-1})}{(n-1)n(n+1)} = \frac{2 E(X_m S_m)}{(n-1)m(m+1)} \to \textstyle{\frac23} \theta K.$$ \end{proof} \subsection{Sample paths}\label{sec:samplepaths} We can now give an informal but quite precise description of a typical trajectory of the process $(X_n, n \geqslant 1)$ for large $n$. From lemma \ref{thm:martingale} we see that each trajectory has its own limiting value of $M_n =S_n/(n(n+1))$. These values vary from trajectory to trajectory with $E M^2_n \to K/6$ as $n \to \infty$. Computer simulations show that, when scaled by $S_n/(n+1)$, the variables $X_{U(n)} + X_{V(n)}$ have approximate probability density $f_W(w) = w e^{-w}, w >0$, as illustrated in figure \ref{fig:1}. This is not unexpected since a related energy splitting model (also due to Ulam, see \citet{BM}) has $f_W(w)$ as its fixed point density. Specifically, the process $(X_n, n \geqslant 1)$ can be reformulated as follows. At stage $n+1$, sample from the collection $ \{Y_k, k=1,\dots,n\}$ where $Y_k=X_k/k$, by selecting an index $k$ uniformly from $\{1,\dots,n\}$. Then multiply $Y_k$ by $k/(n+1)$, repeat independently and add the results to obtain $Y_{n+1}$. The analogous splitting model is defined by the distributional equality $W \overset{d}{=} U W_1 + V W_2 $ where $U$ and $V$ are independent variables uniform on $(0,1)$ and $W_1$ and $W_2$ are independent copies of $W$. 
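As a numerical illustration of the splitting model, the map $W \mapsto UW_1 + VW_2$ can be iterated by Monte Carlo, and the moments $E(W)=2$ and $E(W^2)=6$ of the fixed-point density $f_W$ emerge. In the Python sketch below, the sample size, the iteration count, and the use of a shuffled copy of the sample as a stand-in for an independent copy of $W$ are all our own arbitrary choices.

```python
import random

random.seed(42)
N = 100_000

# Start from a degenerate sample with the correct mean E(W) = 2,
# then iterate the splitting map W -> U*W1 + V*W2.
w = [2.0] * N
for _ in range(20):
    w2 = w[:]                 # shuffled copy approximates an independent copy of W
    random.shuffle(w2)
    w = [random.random() * a + random.random() * b for a, b in zip(w, w2)]

mean = sum(w) / N
m2 = sum(x * x for x in w) / N
print(mean, m2)   # close to E(W) = 2 and E(W^2) = 6
```

The second moment contracts towards its fixed point at geometric rate $2/3$, since $E(UW_1+VW_2)^2 = \frac23 E(W^2) + 2$ for independent inputs with mean $2$.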
It is straightforward to show that $f_W(w)$ is the fixed point density and it also follows that $UW_1$ and $VW_2$ are independently exponentially distributed. More formally we have the following: \begin{theorem}\label{thm:weak} \begin{equation}\label{eq:weak1}\lim_{n \to \infty} P(X_n/(M n)\leqslant x) = \int_0^x w e^{-w} dw, \quad x \geqslant 0. \end{equation} Furthermore \begin{equation}\label{eq:weak2}\lim_{n \to \infty} P(X_{U(n)}/(M n) \leqslant x) = \int_0^x e^{-w} dw, \quad x \geqslant 0. \end{equation} \end{theorem} \begin{proof} Our approach follows that of \citet{rosler1991limit} and \citet{geiger2000new} in their analysis of the Quicksort algorithm and Yaglom's exponential limit law. We make use of the Mallows distance (also known as Vassershtein distance) between the distribution of random variables $X$ and $Y$, namely $$ d_2(X,Y) = \inf_{\mathscr{C}} \sqrt{E(X-Y)^2},$$ where the infimum is over all couplings of $X$ and $Y$, i.e.\ over all joint distributions with the specified margins. For background see \citet[\S 8]{bickel1981some}. In particular, note that the infimum is always attained. Let $\tilde{Y}(k) = X_k/(k M_{k-1}), k = 1,2,\dots$ and let $U',V'$ be independently uniform on $(0,1)$ then \eqref{eq:BSU} can be recast as \begin{equation}\label{eq:Y} \tilde{Y}(n+1) = \frac{X_{\lceil U'n \rceil}}{(n+1) M_n} + \frac{X_{\lceil V' n \rceil}}{(n+1) M_n}. \end{equation} Now let $\beta_k$ be the squared Mallows distance between the distribution of $\tilde{Y}(k)$ and the distribution of $W$, $k \geqslant 1$. We will always choose a representation $\tilde{Y}(k)$ that attains the infimum of $E(\tilde{Y}(k)-W)^2$ over all couplings $\mathscr{C}$, so that $$\beta_k = d^{\,2}_2(\tilde{Y}(k),W) = E(\tilde{Y}(k)-W)^2, \quad k = 1, 2, \dots.$$ From lemma 8.3 in \citet{bickel1981some} for \eqref{eq:weak1}, it is sufficient to show $\beta_{n} \to 0$ as $n \to \infty$.
From \eqref{eq:Y} and using $W \overset{d}{=} U W_1 + V W_2 $ we have \begin{align}\label{eq:beta1}\beta_{n+1} &= d^{\,2}_2(\tilde{Y}(n+1),W) = d^{\,2}_2(\tilde{Y}(n+1),W_1 U + W_2 V)\nonumber\\ &= d^{\,2}_2\left(\frac{X_{\lceil U' n \rceil}}{(n+1) M_n} + \frac{X_{\lceil V' n \rceil}}{(n+1) M_n},W_1 U + W_2 V\right)\nonumber\\ &\leqslant d^{\,2}_2\left(\frac{X_{\lceil U' n \rceil}}{(n+1) M_n},W_1 U\right) + d^{\,2}_2\left(\frac{X_{\lceil V' n \rceil}}{(n+1) M_n},W_2 V\right)\nonumber\\ &= 2d^{\,2}_2\left(\frac{X_{\lceil U' n \rceil}}{(n+1) M_n},W_1 U\right), \end{align} using lemma 8.7 of \citet{bickel1981some} for the inequality, since $X_{\lceil U' n \rceil}/((n+1)M_n)$ and $X_{\lceil V' n \rceil}/((n+1)M_n)$ are iid conditional on $\mathcal{F}_n$, each with the same mean as $W_1 U$ and $W_2 V$. With the coupling $U' = U$ we then have \begin{align}\label{eq:beta2}\beta_{n+1} &\leqslant 2 E\Big(\frac{X_{\lceil U n \rceil}}{(n+1) M_n}-W_1 U\Big)^2\nonumber\\ &= 2E\big((\tilde{Y}(\lceil U n \rceil)-W_1)\phi_n(U) + W_1 (\phi_n(U) - U)\big)^2\nonumber\\ &\leqslant 2E\big((\tilde{Y}(\lceil U n \rceil)-W_1)\phi_n(U)\big)^2 + 2E\big(W_1^2(\phi_n(U) - U)^2\big) \nonumber\\ &=2 \sum_{k=1}^n E(\tilde{Y}(k)-W)^2 \int_{(k-1)/n}^{k/n}\phi_n^2(u)du + 2E(W^2) E(\phi_n(U)-U)^2\nonumber\\ &=2 \sum_{k=1}^n \beta_k \frac{k^2 M^2_{k-1}}{n(n+1)^2M^2_n}+ 12E\big(\phi_n(U)-U\big)^2, \end{align} where $\phi_n(u) = \lceil u n \rceil M_{\lceil u n \rceil - 1}/((n+1) M_n), 0\leqslant u \leqslant 1.$ We can write the first term in \eqref{eq:beta2} as the weighted average of $(\beta_k M^2_{k-1})$ multiplied by a term converging to $2/(3M^2)$ as $n \to \infty$, so that for the $\limsup$ we have $$ \limsup \sum_{k=1}^n \frac{6 k^2 \beta_k M^2_{k-1}}{n(n+1)(2 n+1)} \frac{n(n+1)(2 n+1)}{3n(n+1)^2M_n^2} \leqslant \limsup \{\beta_n M^2_{n-1}\} \frac{2}{3M^2} = \frac23 \beta,$$ where $\beta = \limsup \beta_n$.
Since the second term in \eqref{eq:beta2} converges to $0$ as $n \to \infty$, by taking the $\limsup$ of both sides of \eqref{eq:beta2} we have $\beta \leqslant \frac23 \beta$ and hence $\beta_n \to 0$. The assertion \eqref{eq:weak2} follows in the same way via \eqref{eq:beta2} since $W_1 U$ has the standard exponential distribution. \end{proof} Informally it can then be argued that $Y$, the limiting value of $Y_n$, is of the form $WM$ where $M$ is the limiting value of $M_n$. As a consequence, the moments of $Y$ should have a simple relation to those of $M$, for example $E(Y^2) = 6 E(M^2)$. \begin{figure}[h] \centering \includegraphics[width=0.45\linewidth,angle=90,trim=5cm 1cm 7cm 1cm]{adding1a.pdf} \includegraphics[width=0.45\linewidth,trim=1cm 4.7cm 1cm 7cm]{adding1b.pdf} \caption{Left: simulated density of $\log(2 M_{100000})$ (---) with fitted normal density (...) and gamma density (- - -). Right: simulated density of $X_{100000}/M_{99999}$ (---) with fitted density $f_W(w)$ (- - -); $10^4$ independent realisations.} \label{fig:1} \end{figure} Furthermore, we can write $$M_{n+1}=\frac{S_{n+1}}{(n+1)(n+2)} = \frac{S_n + X_{U(n)} +X_{V(n)}}{(n+1)(n+2)} = M_n\left\{1+ \frac{W_n-2}{n+2}\right\},$$ where $W_n = \left(X_{U(n)}+X_{V(n)}\right)/((n+1)M_n),$ and since we have empirical evidence that the $(W_n)$ are independent and identically distributed, we can anticipate that the limiting distribution of $M_n$ will be approximately log-normal or, more generally, in the log-gamma family. Figure \ref{fig:1} shows the estimated density of $\log (2 M_n)$, simulated from the initial condition $x_1=1$, compared with fitted gamma and normal densities. (The factor of $2$ is introduced for convenience, so that the mean of $2M_n$ is 1 with this initial condition.) The log-gamma density is seen to provide an excellent fit. As a more rigorous test, we can compare the numerically determined moments of $2M$ with those of the candidate distributions.
Using $\mu_k = E\{(2M)^k\} = E\{(2Y)^k\}/ E(W^k)$ for $k=1,2,3,4$ and the moments of $Y$ obtained in the previous section we find $\mu_1=1, \mu_2 \doteq 1.225,\mu_3 \doteq 1.932,\mu_4 \doteq 4.211$. The fourth moment of a log-gamma distribution fitted by the first three of these moments is $4.194$ which is within half a percent of the value $\mu_4$. \section{The \texorpdfstring{$p$}{p}-adding process in discrete time}\label{sec:3} We now consider a simple modification of the process defined in \eqref{eq:BSU}, where history-dependent updates occur randomly and independently with probability $p$, where $p<1$. The new process is as follows. \medskip {\sc Definition 3.1}. Let $(J_n,n\geqslant 1)$ be a sequence of independent Bernoulli variables each with success probability $p$ and let $(U(n))$ and $(V(n))$ be sequences of independent variables (also independent of $(J_n)$) such that for any given $n$, $U(n)$ and $V(n)$ are each uniformly distributed on $\{1,\dots,n\}$. The $p$-adding process with fixed initial condition $(X_1=x_1,\dots,X_r=x_r)$ is defined by \begin{equation}\label{eq:p_add} X_{n+1} = J_n[X_{U(n)} +X_{V(n)}] +(1-J_n)X_n, \quad n\geqslant r. \end{equation} \begin{theorem}\label{thm:paddingmean} The $p$-adding process has mean \begin{equation} \label{eq:paddingmean} E X_n = (\nu + p n)\left\{\frac{x_r}{\nu +p r} + \frac{C p r}{\nu^{r-1}}\sum_{k=r}^{n-1} \frac{\nu^{k-1}}{k(\nu+p k)(1 + pk)}\right\},\quad n \geqslant r, \end{equation} where $\nu=1-p$, $C= 2 s_r (\nu+pr)/r - (1+\nu+pr)x_r$ and $s_r= \sum_{k=1}^r x_k$, with the convention that the summation in \eqref{eq:paddingmean} is zero when the upper limit is less than the lower. 
\end{theorem} \begin{proof} Let $m_n= E X_n$, then by the usual conditioning arguments \begin{equation} \label{eq:paddingint} m_{n+1} = \nu m_n + \frac{2 p}{n} \sum^n_{k=1} m_k,\quad n\geqslant r, \end{equation} which can be recast as the difference equation \begin{equation} \label{eq:paddingDE} (n+1)m_{n+2} -[n(1+\nu)+ p+1]m_{n+1} + n\nu m_n = 0, \quad n\geqslant r. \end{equation} By inspection, $\nu+pn$ is seen to be a solution and the Casoratian can be shown to be $\nu^{k-1}/k$. The general solution is then $$ (\nu+pn)\left\{A + B \sum_{k=1}^{n-1} \frac{\nu^{k-1}}{k(\nu+p k)(1 + pk)}\right\},n\geqslant r, $$ where $A$ and $B$ are arbitrary constants. The solution \eqref{eq:paddingmean} then follows from the initial conditions $m_r = x_r$ and $m_{r+1} = \nu x_r + 2p s_r/r$. \end{proof} From \eqref{eq:paddingmean}, since the partial sum has a finite limit, we see that $m_n$ grows linearly with $n$ as $n \to \infty$. For the second moment we have \begin{theorem} \label{thm:qpaddinglimit} \begin{equation}\label{eq:qpaddinglimit} \frac{E X_n^2}{n^2} \to K(p,x_1,\dots,x_r), \quad \text{as $n\to \infty,$} \end{equation} where $K$ is a function of $p$ and $x_1,\dots,x_r$; equal to $K$ in \eqref{eq:K} when $p=1$. \end{theorem} \begin{proof} Define $S_n = \sum_{k=1}^n X_k$ and let $p_n = E S_n^2$, $q_n= E X^2_n$, $w_n = E X_n S_{n-1}$ and $t_n=\sum_{k=1}^n q_k$. Using the usual conditional expectation arguments, we have \begin{equation} \label{eq:qw} q_{n+1} = (1-p) q_n +p\left[\frac2n t_n +\frac2{n^2}p_n\right], \quad w_{n+1} = (1-p)(w_n +q_n) +\frac{2p}n p_n,\quad n \geqslant r, \end{equation} with the additional identities $t_{n+1} = t_{n} + q_n$ and $p_n = p_{n-1} + 2 w_n + q_n$. Omitting the details for the sake of brevity, this system of recurrences can be reduced to the single fourth order linear difference equation for $q_n$. 
\begin{equation}\label{eq:qpadding} \begin{split} (n+3)^2 q_{n+4} + [(2p-4)n^2+(4p-18)n-3p-21]q_{n+3} \\ +[(6-6p+p^2)n^2+(18-8p-2p^2)n+15+2p^2]q_{n+2}\\ - (1-p)[(4-2p)n^2+(2p+6)n+3]q_{n+1} +(1-p)^2 n^2q_n =0. \end{split} \end{equation} As before, we refer to \citet{Adams,Birkhoff} and substitute trial solutions of the form $n^\sigma \delta^n$ and then $n^\rho$ into \eqref{eq:qpadding}. By considering the leading terms in the resulting expressions, we find that $\delta= 1, 1-p$ and then $\rho = 1,2$ and therefore conclude that $q_n$ grows quadratically as $n \to \infty$. \end{proof} \subsection{Numerical results} We have investigated the behaviour of $K(p,x_1,\dots,x_r)$ numerically for various values of $p$ in the case $x_r=r=1$. For comparison purposes, we rescale time so that for each $p$ jumps occur at mean rate 1. On this time scale the limiting constant is $K(p,1)/p^2$. The results are illustrated in figure \ref{fig:2}. The exact value at $p=1$ is given in theorem \ref{thm:qlimit}. Theorem \ref{thm:8} for the continuized model provides the limiting value as $p\to0$, namely $\cosh(\pi \sqrt{7}/2)/(4 \pi) \doteq 2.53961.$ Note that a simple lower bound for $K(p,1)/p^2$ in all cases can be obtained from the observation that $E S^2_n \geqslant (E S_n )^2$. Then, since $n^{-2}E S_n \to \frac12 p$ from \eqref{eq:paddingmean} and $n^{-4}E S^2_n \to \frac16 K(p,1)$, as will be shown in theorem \ref{thm:5} below, it follows that $K(p,1)/p^2 \geqslant 1.5$. A similar calculation in terms of $E X^2_n$ yields the uniformly worse lower bound of $1$. Numerical values of the product moment $n^{-2} E(X_m X_n)$ for the $p$-adding process are shown in figure \ref{fig:3}. A simple limiting pattern emerges with a discontinuity at $m=n$, at which the value drops by one third. This phenomenon is explained in theorem \ref{thm:5} below.
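The closed form \eqref{eq:paddingmean} for the mean of the $p$-adding process is easily checked against the defining recursion \eqref{eq:paddingint}. In the Python sketch below, the function names, the value $p=1/2$, and the initial condition $r=2$, $x_1=x_2=1$ are our own arbitrary choices.

```python
def mean_recursive(n_max, p, x):
    """Iterate m_{n+1} = (1-p) m_n + (2p/n) sum_{k<=n} m_k from initial values x."""
    nu = 1 - p
    m = list(x)               # m[k-1] = E X_k
    for n in range(len(x), n_max):
        m.append(nu * m[n - 1] + 2 * p * sum(m[:n]) / n)
    return m

def mean_closed(n, p, x):
    """Closed form for E X_n, n >= r, with the empty-sum convention for n = r."""
    nu = 1 - p
    r = len(x)
    sr = sum(x)
    C = 2 * sr * (nu + p * r) / r - (1 + nu + p * r) * x[-1]
    tail = sum(nu**(k - 1) / (k * (nu + p * k) * (1 + p * k)) for k in range(r, n))
    return (nu + p * n) * (x[-1] / (nu + p * r) + C * p * r / nu**(r - 1) * tail)

p, x = 0.5, [1.0, 1.0]
m = mean_recursive(60, p, x)
err = max(abs(m[n - 1] - mean_closed(n, p, x)) for n in range(2, 61))
print(err)   # agreement to floating-point accuracy
```

For this initial condition the two computations agree term by term; for example both give $E X_3 = 3/2$.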
\begin{figure}[h] \centering \begin{minipage}[t]{0.47\linewidth} \centering \includegraphics[width=0.92\linewidth,angle=90,trim=5cm 1cm 7cm 1cm]{adding2.pdf} \caption{Asymptotic growth rate of the second moment, $q_n$.} \label{fig:2} \end{minipage}% \hspace{0.5cm}% \begin{minipage}[t]{0.47\linewidth} \centering \includegraphics[width=0.92\linewidth,angle=90,trim=5cm 1cm 7cm 1cm]{adding3.pdf} \caption{Convergence of the product moment, $n=50,100,\dots,1000$; $p=0.2$.} \label{fig:3} \end{minipage} \end{figure} In considering the product moment of the basic adding process we are aided by the existence of the martingale that yields \eqref{eq:Xmart}. Similar conclusions can be drawn for the $p$-adding process, as follows. \begin{theorem}\label{thm:5} Let $m,n \to \infty$, with $m \leqslant n$ then the limiting product moment of the $p$-adding process $(X_n)$ with $p<1$ is given by \begin{equation} n^{-2} E(X_m X_n) \to \begin{cases}\frac{2}{3} \theta K & \text{if $m/n \to \theta \in (0,1)$},\\ K& \text{if $m=n$}, \end{cases} \end{equation} where $K=K(p,x_1,\dots,x_r)$ is defined in \eqref{eq:qpaddinglimit} above. \end{theorem} \begin{proof} First note that as a consequence of theorem \ref{thm:qpaddinglimit}, we have $t_n/n^3 \to \frac13 K$. Dividing the first equation in \eqref{eq:qw} by $n^2$, we then have $p_n/n^4 \to \frac16 K$ and dividing the second equation by $n^3$ we have $w_n/n^3 \to \frac13 K$.
By the usual conditioning arguments we also have $E (X_{n+1}S_{n+1}) = \nu [E(X_n S_n) + q_n] +2p n^{-1} p_n$ and, dividing both sides by $n^3$, we have $n^{-3} E(X_n S_n) \to \frac13 K$ as $n \to \infty$. Now let $c_{m,n} = E (X_m X_n)$ then, as in \eqref{eq:paddingint}, we have \begin{equation}\label{eq:paddingcov} c_{m,n+1} = \nu c_{m,n} + \frac{2 p}{n} \sum^n_{k=1} c_{m,k},\quad n \geqslant m \geqslant r, \end{equation} with solution $$ c_{m,n}= (\nu+pn)\left\{A + B \sum_{k=r}^{n-1} \frac{\nu^{k-1}}{k(\nu+p k)(1 + pk)}\right\},\quad n\geqslant m \geqslant r, $$ where $A$ and $B$ can be determined from the initial conditions $c_{m,m} = q_m$ and $c_{m,m+1} = \nu q_m +2 p m^{-1}\sum_{k=1}^{m} c_{m,k}$. Thus \begin{equation} \label{eq:paddingcovsol} c_{m,n}= (\nu+pn)\left\{\frac{c_{m,m+1}}{\nu+p(m+1)} \frac{H_{n-1}}{H_{m}} - \frac{q_m}{\nu+p m}\frac{H_{n-1}-H_{m}}{H_m}\right\}, \quad n\geqslant m \geqslant r, \end{equation} where $H_n = \sum_{k=r}^{n} \nu^{k-1}/[k(\nu+p k)(1 + pk)].$ Returning to \eqref{eq:paddingcov} with $n=m$ we have $c_{m,m+1} = \nu q_m + 2 p m^{-1} E(X_m S_m)$. Dividing by $m^2$ and using the earlier limit results we thus have $c_{m,m+1}/m^2 \to \frac23 K$. Finally dividing \eqref{eq:paddingcovsol} by $n^2$, letting both $m \to \infty$ and $n \to \infty$ so that $m/n \to \theta < 1$ and noting that $\lim_{n \to \infty} H_n < \infty$, we have $c_{m,n}/n^2 \to \frac23 \theta K$, as claimed. \end{proof} For the discrete time $p$-adding process we have the following: \begin{lemma} \label{thm:p-martingale} Let $$M_n = \frac{p A_n}{n(1+np)}S_n + \frac{\nu B_n}{(n+1)(2+np)}X_n, \quad \nu = 1-p,$$ where $A_n = \sum_{k=n}^\infty a_n/a_k$ with $a_k = \nu^{-k} k(\nu+kp)(1+kp)$ and $B_n = \sum_{k=n}^\infty b_n/b_k$ with $b_k = \nu^{-k} k(k+1)(\nu+kp+1)(2+kp)$ then $(M_n; n\geqslant r)$ is a martingale with respect to the increasing $\sigma$-fields $(\mathcal{F}_n;n\geqslant r)$ generated by $(X_n)$ or equivalently $(S_n)$.
Furthermore, there exists a non-degenerate random variable $M$ with finite variance, such that $M_n$ converges to $M$ almost surely and in mean-square as $n \to \infty$, where $$E M = \frac{pA_r}{r(1+rp)}\sum_{k=1}^r x_k + \frac{\nu B_r x_r}{(r+1)(2+rp)}.$$ Note that both $A_n$ and $B_n$ converge to $1/p$ as $n \to \infty$. \end{lemma} \begin{proof} Consider the sequence $(\alpha_n S_n + \beta_n X_n, n\geqslant r)$. In order for this to be a martingale we require that $E (\alpha_{n+1} S_{n+1} + \beta_{n+1} X_{n+1} | \mathcal{F}_n) = \alpha_n S_n + \beta_n X_n, n\geqslant r$. Referring to \eqref{eq:p_add} we have $$\alpha_{n+1}\left(S_n + \frac{2p}{n}S_n + (1-p)X_n \right) + \beta_{n+1}\left(\frac{2p}{n}S_n + (1-p)X_n\right) = \alpha_n S_n + \beta_n X_n.$$ Equating the coefficients of $X_n$ and $S_n$ gives the pair of difference equations: $$\beta_n= (1-p)(\alpha_{n+1} + \beta_{n+1}), \quad \alpha_n = \alpha_{n+1}(n+2p)/n + \beta_{n+1} 2p/n.$$ Eliminating $\beta_n$ to produce a second order difference equation for $\alpha_n$ and proceeding as in the solution of \eqref{eq:paddingDE} we have \begin{align*} \alpha_n &= \nu^{-n} (\nu+np)\left\{C_1+C_2 \sum_{k=1}^{n-1} \frac{\nu^k}{k(\nu+kp)(1+kp)}\right\}\\ &=\nu^{-n} (\nu+np)\left\{C_1+C_2\Phi_p - C_2 \sum_{k=n}^\infty \frac{\nu^k}{k(\nu+kp)(1+kp)}\right\}, \end{align*} where $\Phi_p = \sum_{k=1}^\infty \nu^k/(k(\nu+kp)(1+kp))$. To find a positive solution that decreases with $n$ we start by setting $C_1 = -C_2 \Phi_p$. Rearranging terms then gives $$\alpha_n = \frac{-C_2}{n(1+np)}\sum_{k=n}^\infty \frac{\nu^{k-n}n(\nu+np)(1+np)}{ k(\nu+kp)(1+kp)} = \frac{-C_2}{n(1+np)} A_n,$$ and we can now see that $A_n \to 1/p$ as $n \to \infty$. The solution for $\beta_n$ follows similarly. Taken together the constants in the solutions can be determined to solve the original pair of difference equations. 
Finally from the calculations in the proof of theorem \ref{thm:qpaddinglimit}, we see that $E M_n^2$ is uniformly bounded and so the probabilistic limit follows from the martingale limit theorem \citep{Doob}.\end{proof} It follows easily that we have this result, paralleling that of lemma \ref{thm:martingale}: \begin{corollary} $ S_n/(n(n + 1))$ converges to $pM$ almost surely and in mean-square as $n \to \infty$, where $M$ is as defined in lemma \ref{thm:p-martingale}. \end{corollary} \begin{proof} From theorem \ref{thm:qpaddinglimit}, we have that $E X_n^2/n^2 \to K(p, x_1, \dots, x_r)$, as $n \to \infty$, and hence $E X_n^2/n^4 \to 0$ as $n \to \infty$, so that $X_n/n^2$ converges in m.s.\ to $0$. Also, by Chebyshov's inequality, $P(X_n/n^2 > a) < EX_n^2/(a^2 n^4) \sim K/(a^2 n^2)$ for any $a>0$ as $n \to \infty$; since $\sum n^{-2}$ converges, the first Borel-Cantelli lemma then gives the a.s. convergence of $X_n/n^2$ to $0$, as $a$ is arbitrarily small. Since $B_n$ converges to $1/p$ as $n \to \infty$, it follows that the second term $\nu B_n X_n/{(n + 1)(2 + n p)}$ in the definition of $M_n$ converges a.s. and in m.s.\ to $0$ and since the sum of two convergent sequences of random variables converges to the sum of the limiting variables both in m.s.\ and almost surely, the assertion of the corollary follows immediately, when we note that $A_n$ converges to $1/p$ as $n \to \infty$.\end{proof} The conclusions of section \ref{sec:samplepaths}, about the sample paths of Ulam's base process, are now seen to transfer in just the same way to the $p$-adding process, {\it mutatis mutandis}. The existence of the convergent martingale was crucial in this.
In this case the underlying idea is that the jumps of the discrete process $(X_n)$ should take place at the jump instants of a Poisson process $(N(t))$; the process $(X_n)$ is then said to be subordinate to $(N(t))$. Such continuized (or Poisson-regulated) processes have been used previously in analysing other history dependent random sequences \citep{CS} and are also discussed by \citet{Feller}. We define the continuized random adding process thus: \medskip {\sc Definition 4.1}. Let $(T_r; r \geqslant 1)$ be the successive jump times of a Poisson process $(N(t), t > \tau )$ where $\tau \geqslant 0 $ and $N(\tau) =0$. For notational convenience let $T_0= \tau$. Without essential loss of generality, we will take the Poisson intensity $\lambda$ to be $1$. Let $(U_r; r \geqslant 1)$ and $(V_r; r \geqslant 1)$ be independent sequences of independent random variables, such that $U_r$ and $V_r$ are uniformly distributed on $[0,T_r]$. With initial conditions $X(t) = x(t), 0 \leqslant t \leqslant \tau$, the process is defined by \begin{align} X(t) &= X(T_{r-1}) \quad \text{for} \quad T_{r-1} \leqslant t < T_r, \nonumber\\ X(T_r) &= X(U_r) + X(V_r). \end{align} Note that many, more general, constructions are possible, in that \begin{itemize} \item[(a)] we could permit $U_r$ and $V_r$ to have a distribution other than uniform, \item[(b)] we could consider weighting factors so that $$X(T_r) = A X(U_r) + B X(V_r),$$ where $A$ and $B$ are constants, or even random variables, \item[(c)] the regulating Poisson process could be non-homogeneous, of rate $\lambda(t)$. \end{itemize} We return later to consider some of these more general problems. 
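Before proceeding, we note that definition 4.1 is straightforward to simulate. The following Python sketch (the run count, the horizon $t=5$, and the seed are our own arbitrary choices) estimates $E X(t)$ in the base case $\tau = 0$, $x(0)=1$, for comparison with the exact mean $m(t) = 1+t$ obtained in theorem \ref{thm:7} below.

```python
import random
from bisect import bisect_right

def simulate(t_end, rng):
    """One path of the continuized adding process (definition 4.1), tau = 0, x(0) = 1."""
    # Piecewise-constant path: X(t) = values[i] on [times[i], times[i+1]).
    times, values = [0.0], [1.0]
    t = rng.expovariate(1.0)          # first jump of the rate-1 Poisson process
    while t < t_end:
        u, v = rng.uniform(0.0, t), rng.uniform(0.0, t)
        xu = values[bisect_right(times, u) - 1]   # X(U_r)
        xv = values[bisect_right(times, v) - 1]   # X(V_r)
        times.append(t)
        values.append(xu + xv)                    # X(T_r) = X(U_r) + X(V_r)
        t += rng.expovariate(1.0)
    return values[-1]                 # X(t_end)

rng = random.Random(1)
n_runs, t_end = 5000, 5.0
mean = sum(simulate(t_end, rng) for _ in range(n_runs)) / n_runs
print(mean)   # near m(5) = 6 for tau = 0, x(0) = 1
```

The piecewise-constant storage makes the lookup of $X(U_r)$ and $X(V_r)$ a binary search over past jump times.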
For the process $(X(t))$ of definition 4.1, we have this \begin{theorem} \label{thm:7} Let $m(t) = E X(t)$ be the mean of $X(t)$, then for $t \geqslant \tau > 0$ \begin{equation}\label{meanDEsoln} m(t) = (1+t) \left(\frac{x(\tau)}{1+\tau} + C \int_\tau^t \frac{e^{-y}}{y(1+y)^2} dy\right),\end{equation} where $$\frac{C e^{-\tau}}{\tau(1+\tau)} + \frac{2+\tau}{1+\tau} x(\tau) = \frac{2}{\tau} \int_0^\tau x(u) du.$$ When $\tau=0$, and $x(0)=1$, this yields \begin{equation}\label{meanDEspecial} m(t)=1+t. \end{equation} \end{theorem} \begin{proof} For small $h>0$, let $\mathcal{I}_{h,t}$ be the indicator of the event that $N(t+h) = N(t).$ Then by conditional expectation, for $t \geqslant \tau$, $$m(t+h) = E\{E[X(t+h)|\mathcal{I}_{h,t}]\} = (1-h)m(t) + h E\{X(U)+X(V)\} + o(h),$$ where $U$ and $V$ are uniformly and independently distributed over $[0,t]$. Hence $$m' + m = \frac{2}{t} \int_0^t m(u) du.$$ It follows that \begin{equation} \label{eq:mddot} t m'' + (1+t) m' - m =0, \end{equation} where $m'$ and $m''$ are the first and second derivatives of $m(t)$. By inspection, $m(t) = 1+t$ is a particular solution of \eqref{eq:mddot}. The complete solution \eqref{meanDEsoln} follows routinely, on applying the initial conditions $$m(\tau) = x(\tau) \quad \text{and} \quad m'(\tau) = - x(\tau) + \frac{2}{\tau} \int_0^{\tau} x(u) du.$$ \end{proof} We observe that the special case \eqref{meanDEspecial} essentially reproduces the behaviour of the discrete adding process started at $X_1=1$. For the second moment $q(t) = E(X^2(t))$, we have \begin{theorem} \label{thm:8} As $t \to \infty$, $q(t)$ grows quadratically with $t$. In particular, when $\tau=0$ and $x(0)=1$, we have $q(t)/t^2 \to \cosh(\pi \sqrt{7}/2)/(4 \pi) \doteq 2.53961,$ as $t \to \infty$. The second moment is seen to have the same quadratic asymptotic growth behaviour as that of the discrete time processes, but with a larger constant; as perhaps is to be expected intuitively.
\end{theorem} \begin{proof} Conditioning on events of the Poisson process $(N(t))$ during the interval $(t, t+h)$, as above, gives \begin{equation} q' + q = E\{X^2(U)\} + E\{X^2(V)\} + 2 E\{X(U) X(V)\}, \end{equation} where $U$ and $V$ are independently uniform, so that \begin{equation} \label{eq:qdot} q' + q = \frac{2}{t} \int^t_0 q(u) du +\frac{2}{t^2} \int_0^t \int_0^t c(u,v) du dv, \end{equation} where $c(u,v) = E\{X(u) X(v)\}.$ In addition, for $u<t$, by similar conditioning, we have \begin{equation} \label{eq:cdot1} \frac{\partial c(u,t)}{\partial t} + c(u,t) = \frac{2}{t} \int_0^t c(u,y) dy, \end{equation} and, for $v < t$, \begin{equation} \label{eq:cdot2} \frac{\partial c(t,v)}{\partial t} + c(t,v) = \frac{2}{t} \int_0^t c(x,v) dx. \end{equation} Now define $Q(t) = \int_0^t \int_0^t c(u,v) du dv$, with first derivative \begin{equation} \label{eq:Qdot} Q' = \int_0^t c(u,t) du + \int_0^t c(t,v) dv. \end{equation} Differentiating again and substituting from \eqref{eq:cdot1},\eqref{eq:cdot2} and \eqref{eq:Qdot}, we obtain \begin{equation} \label{eq:Qddot} Q'' + Q' = 2 q + \frac{4}{t} Q. \end{equation} Eliminating $Q$ from \eqref{eq:Qddot} and \eqref{eq:qdot} gives \begin{equation}\label{eq:contqDE} t^2 q^{(\text{iv})} + (6 t + t^2) q''' +(6+4t +t^2) q'' - (6+2t) q' + 2 q =0. \end{equation} Following \citet{Erdelyi}, we determine the asymptotic growth rate of $q(t)$, as $t \to \infty$, by substituting trial solutions of the form $q_1 = t^\sigma e^{\delta t}$ and $q_2 = t^\rho$; this procedure yields the leading term in an asymptotic expansion developed in inverse powers of $t$. For $q_1$, we find that the leading term is zero when $\delta^4 + 2 \delta^3 + \delta^2 = 0$ whence $\delta = 0$ or $\delta = -1$. We therefore consider substitutions of the form $q_2$, which then gives $(\rho-2)(\rho -1) = 0$. Thus $q(t) \sim K t^2$, as asserted, where $K$ is a constant depending on the initial conditions $\{x(t), 0 \leqslant t \leqslant \tau\}$ . 
For the base case where $\tau=0$ and $x(0)=1$, we can determine the coefficient $K$ explicitly. We start by defining the transform $g(s) = s^{-1} \int_0^\infty e^{-t/s} q(t) dt$ for $s>0$. The function $g$ is well defined since we have established that $q(t) \sim K t^2$. The asymptotic behaviours of $g$ and $q$ are related by a Tauberian theorem \citep[page 220]{Feller}, namely \begin{equation}\label{eq:Tauber} q(t) \sim K t^\alpha,\; \text{as $t \to \infty$} \quad \text{if and only if} \quad g(s) \sim K s^\alpha \Gamma(\alpha+1),\; \text{as $s \to \infty$}. \end{equation} For the base case, using \eqref{eq:qdot} and \eqref{eq:Qdot}, the initial conditions for $q$ are found to be $q(0) = 1$, $q'(0) =3$, $q''(0) = 8/3$ and $q'''(0) = 4/9$. Applying the transform to \eqref{eq:contqDE}, after some reduction, we have \begin{equation}\label{eq:contgDE} s(s+1)^2 g''(s) + 2(1-s^2) g'(s) + 2(s-3)g(s) = 0, \end{equation} with $g(0)=1$ and $g'(0) = 3$. The method of Frobenius provides solutions for $g(s)$ of the form $C_1 P(s) + C_2 Q(s)$ where \begin{eqnarray*} P(s) &=& 1 + 3 s + 8s^2/3 + \dots,\\ Q(s) &=& \log(s)[2 + 6 s + 16 s^2/ 3 + \dots] + s^{-1} [ 1+ 4 s + 2 s^2 + \dots]. \end{eqnarray*} Clearly $P(s)$ is the required solution of \eqref{eq:contgDE} but, expressed as a power series, it provides no immediate access to the asymptotic growth of $g(s)$. An alternative pair of solutions can be found by shifting to the singular point $s=-1$, i.e.\ by defining $g(s) = u(1+s)(1+s)^{2+\beta}$ and considering the differential equation satisfied by $u$. Taking $\beta$ to be $\frac12(1-i \surd 7)$ or its complex conjugate, we find \begin{equation} w(w-1) u''(w) + 2[(\beta+1)w - \beta]u'(w) + 2\beta^2 u(w) = 0, \end{equation} a hypergeometric differential equation \citep[\S 15.5.1]{AS} with a solution $G(\beta,w) = F(\beta,\beta+1, 2\beta, w)$ where $F$ is the hypergeometric function defined in \citet[\S 15.1.1]{AS}.
It follows that \eqref{eq:contgDE} has solution \begin{equation}\label{eq:gsoln} g(s) = B_1(1+s)^{2+\beta} G(\beta,1+s) +B_2(1+s)^{2+\bar{\beta}} G(\bar{\beta},1+s), \end{equation} where $\bar{\beta}$ is the complex conjugate of $\beta$ and $(B_1,B_2)$ are complex constants chosen so that $g(s) = P(s)$. First note that the general Frobenius solution has the property that $s[C_1P(s)+C_2Q(s)]$ converges to $C_2$ as $s \to 0$. So, in order that $C_2 = 0$ we must have $s g(s) \to 0$ as $s\to 0$ in \eqref{eq:gsoln}. Furthermore using \citet[\S 15.3.3]{AS} we have $- s G(\beta ,1+s)= F(\beta,\beta-1, 2\beta,1+s),$ and using \citet[\S 15.1.20]{AS}, the limiting value of the right-hand side of this equation, as $s \to 0$, is given by $A(\beta) = \Gamma(2\beta)/[\Gamma(\beta)\Gamma(\beta+1)].$ It follows that as $s \to 0$, $- s g(s) \to B_1 A(\beta) +B_2 A(\bar{\beta}) = 0$ and hence $B_2/B_1 = -A(\beta)/A(\bar{\beta})$. The solution we require is then \begin{equation}\label{eq:B0} B_0[A(\bar{\beta})(1+s)^{2+\beta} G(\beta,1+s) - A(\beta)(1+s)^{2+\bar{\beta}} G(\bar{\beta},1+s)], \end{equation} where $B_0$ has to be found so that $g(s)$ satisfies the initial condition $g(0) = 1$. From \citet[\S 15.3.12]{AS} the constant term in the expansion of $G(\beta, 1+s)$ about $s=0$, i.e.\ the term with $n=0$, is given by $$\frac{\Gamma(2 \beta)}{\Gamma(\beta-1)\Gamma(\beta)} [\psi(\beta) + \psi(\beta+1) - \psi(1) - \psi(2)] = H(\beta) \quad \text{(say)},$$ where $\psi(z) = d/{dz}\log \Gamma(z).$ Thus the constant term on expanding $(1+s)^{2+\beta} G(\beta, 1+s)$ is $H(\beta) - (2+\beta) A(\beta)$, and after some simplification, the constant term in \eqref{eq:B0} is $$B_0\{A(\bar{\beta})[H(\beta)- \beta A(\beta)] - A(\beta) [ H(\bar{\beta}) - \bar{\beta} A(\bar{\beta})]\} = B_0 i \surd 7,$$ from which it follows that $B_0 = (i \surd 7)^{-1}.$ We now have the required solution explicitly in the form \eqref{eq:B0}. It remains to determine the asymptotic behaviour as $s \to \infty$. 
From \citet[\S 15.3.4]{AS} we have $G(\beta, 1+s) = (-s)^{-\beta} F(\beta,\beta-1,2 \beta, s^{-1}(1+s))$, so that, from the definition of $A(\beta)$, $$ \lim_{s \to \infty}(1+s)^\beta G(\beta, 1+s) = \lim_{s \to \infty} \left(\frac{1+s}{-s}\right)^\beta A(\beta) = i e^{\frac12 \pi \surd 7} A(\beta).$$ Consequently using the form \eqref{eq:B0} \begin{equation} \lim_{s \to \infty} \frac{g(s)}{(1+s)^2} = \frac{A(\bar{\beta}) A(\beta)}{ \surd 7}\left[ e^{\frac12 \pi \surd 7} - e^{-\frac12 \pi \surd 7} \right] = \frac{\cosh^2(\frac12 \pi \surd 7)}{2 \pi \sinh(\pi \surd 7 )} \times \textstyle{2 \sinh(\frac12 \pi \surd 7)}. \end{equation} Therefore $g(s)/s^2 \to \cosh( \frac12 \pi \surd 7) /(2 \pi)$ and from the Tauberian relation \eqref{eq:Tauber} with $\alpha = 2$ we have $K = \cosh( \frac12 \pi \surd 7) /(4 \pi)$ as claimed. \end{proof} For the product-moment function $c(s,t) = E\{X(s) X(t)\}$ we have this. \begin{theorem} \label{thm:9} For $\tau \leqslant s < t$ \begin{equation} \label{eq:csoln} c(s,t) = (1+t) \left\{ \frac{q(s)}{1+s} + e ^s\left[(1+s)Q'-(2+s)s q \right] \int_s^t\frac{e^{-y}}{y(1+y)^2} dy\right\}, \end{equation} and if $s, t \to \infty$, with $s \leqslant t$ then, \begin{equation} t^{-2} c(s,t) \to \begin{cases}\frac{2}{3} \theta K & \text{if $s/t \to \theta \in (0,1)$},\\ K& \text{if $s=t$}. \end{cases} \end{equation} \end{theorem} \begin{proof} From \eqref{eq:cdot1} we have $ t c'' + (1+t) c' - c =0,$ where $c = c(u,t)$, $u <t$ and $c'$ indicates that differentiation is with respect to $t$. This is essentially \eqref{eq:mddot}, so that we have as before \begin{equation}\label{eq:generalcsoln} c(s,t) = (1+t) \left[A(s) + B(s) \int_s^t \frac{e^{-y}}{y(1+y)^2}dy\right], \end{equation} for suitable $A(s)$ and $B(s)$. The boundary conditions at $t=s$ are $$ c(s,t)\big| _{t=s} = q(s) \quad \text{and} \quad \frac{\partial c(s,t)}{\partial t}\big| _{t=s} = Q'(s)/s - q(s), $$ the latter following from \eqref{eq:cdot1} and \eqref{eq:Qdot}. 
The required result \eqref{eq:csoln} then follows. Now set $s= \theta t$ in \eqref{eq:csoln}, where $\theta$ is a fixed number between $0$ and $1$. As $s,t \to \infty$, either integrating by parts or by use of $8.215$ in \citet{GR}, we find that the leading term in the asymptotic expansion of the integral term is $(\theta t)^{-3} e^{-\theta t}$. From theorem \ref{thm:8}, we have $q \sim Kt^2$ and hence $\int_0^t q(u) du \sim K t^3/3$, so that from \eqref{eq:qdot} $Q \sim K t^4/6$ and hence $Q' \sim 2K t^3/3$. Substituting these asymptotic results in \eqref{eq:csoln}, after some reduction, we have $$ \lim_{t \to \infty} t^{-2} c(\theta t,t ) = \textstyle{\frac23} \theta K,$$ as required. Once again we note that this is similar to the behaviour of the product-moment in the discrete case. \end{proof} For the third moment $E\{X^3(t)\}$, we remark that a similar asymptotic analysis may be pursued. Introducing the notation $S_j(t) = \int_0^t X^j(u)du$ and $$\alpha_j = E\{X^j(t) [S_1(t)]^{3-j}\},\quad \beta_j = E\{S_j(t) [S_1(t)]^{3-j} \},\quad\gamma_j = E\{X^j(t)S_{3-j}(t)\},$$ and using the usual conditional expectation arguments, we have \begin{eqnarray*} \alpha_0'&=& 3 \alpha_1, \quad \alpha_1' = -\alpha_1 +2 \alpha_2 +\frac{2}{t}\alpha_0, \quad \alpha_2' = -\alpha_2 + \alpha_3 + \frac{2}{t} \beta_2 + \frac{2}{t^2} \alpha_0, \\ \quad \alpha_3' &=& -\alpha_3 +\frac{2}{t} \beta_3 +\frac{6}{t}\beta_2, \quad \beta_2'= \alpha_2 + \gamma_1 , \quad \beta_3' = \alpha_3, \quad \gamma_1' = -\gamma_1 + \alpha_3 + \frac{2}{t} \beta_2. \end{eqnarray*} Reducing this system to a single differential equation for $\beta_3 = E\{\int_0^t X^3(u)du\}$ yields \begin{equation} \begin{split} t^4 \beta_3^\text{(vii)}+4(t+4)t^3 \beta_3^\text{(vi)} +2(3 t^2+21 t+37) t^2 \beta_3^\text{(v)}\\ +2(2t^3+15t^2+40t+54)t \beta_3^\text{(iv)} +(t^4-2t^3-44t^2-80t+36)\beta_3'''\\ -(6t^3+32t^2+72t+112)\beta_3'' +(18t^2+92t+136)\beta_3' -12(2t+3)\beta_3 = 0.
\end{split} \end{equation} Again following \citet{Erdelyi}, we consider the asymptotic expansion of $\beta_3$ developed in inverse powers of $t$ for large $t$, and find the leading term by substituting trial solutions of the form $\beta_3 = t^\sigma e^{\delta t}$ and $t^\rho.$ These show that $\delta$ is either $0$ or $-1$ and $\rho$ is either $2$, $3$ or $4$. We conclude that $E\{\int_0^t X^3(u)du\}$ grows as $t^4$, and hence that $E\{X^3(t)\}$ grows as $t^3$. As with the adding and $p$-adding processes, a martingale is available: \begin{lemma} Let $$M(t) = \frac{A(t)}{t(1+t)} S(t) + \frac{B(t)}{t(2+t)} X(t), \quad t \geqslant \tau,$$ where $S(t) = \int_0^t X(v)dv$ and where $A(t) = \int_t^\infty a(t)/a(v) dv$ with $a(v) = e^{-v} v(1+v)^2$ and $B(t) = \int_t^\infty b(t)/b(v) dv$ with $b(v) = e^{-v} v^2(2+v)^2$. Then $(M(t); t\geqslant \tau)$ is a martingale with respect to the increasing $\sigma$-fields $(\mathcal{F}(t);t\geqslant \tau)$ generated by $(X(t))$ or equivalently $(S(t))$. Furthermore, there exists a non-degenerate random variable $M$ with finite variance, such that $M(t)$ converges to $M$ almost surely and in mean-square as $t \to \infty$, where $$E M = \frac{A(\tau)}{\tau(1+\tau)}\int_{0}^\tau x(v)dv + \frac{B(\tau) x(\tau)}{\tau(2+\tau)}.$$ Note that both $A(t)$ and $B(t)$ converge to $1$ as $t \to \infty$. \end{lemma} \begin{proof} From theorem \ref{meanDEsoln} we know that $E X(t) | \mathcal{F}(u)$ and hence $E S(t) |\mathcal{F}(u)$ depend only on $X(u)$ and $S(u)$ for $\tau \leqslant u \leqslant t$. Now consider the random process $(M(t) = \alpha(t)S(t) + \beta(t)X(t), t \geqslant \tau)$ where $\alpha$ and $\beta$ are differentiable functions. Let $m(t) = E X(t) | \mathcal{F}(u)$ and $\theta(t) = E S(t) |\mathcal{F}(u)$. The conditional expectation $E M(t)| \mathcal{F}(u)$ is then $\alpha(t)\theta(t) + \beta(t)m(t)$.
For $(M(t))$ to be a martingale we require that $$E\, \big(\alpha(t) S(t) + \beta(t) X(t)\big)|\mathcal{F}(u) = \alpha(u)S(u) + \beta(u)X(u), \quad \tau \leqslant u \leqslant t.$$ In particular, the left-hand side should not depend on $t$, and consequently \begin{align*} \frac{d}{dt} \big(\alpha \theta + \beta m \big) &= \alpha'\theta + \alpha\theta' + \beta' m + \beta m'\\ &=\alpha'\theta + \alpha m + \beta' m + \beta \left(\frac{2}{t}\theta-m\right)\\ &=\theta\left(\alpha' +\frac{2}{t}\beta\right) + m(\alpha + \beta' -\beta) =0. \end{align*} Solving the pair of differential equations $(\alpha' +\frac{2}{t}\beta =0; \alpha + \beta' -\beta=0)$, we have $$\alpha = (1+t)e^{t}\int_t^\infty \frac{e^{-v}}{v(1+v)^2} dv = \frac{1}{t(1+t)}\int_t^\infty\frac{e^{t-v}t(1+t)^2}{v(1+v)^2} dv = \frac{A(t)}{t(1+t)}$$ and $$\beta = t(2+t)e^{t}\int_t^\infty \frac{e^{-v}}{v^2(2+v)^2} dv = \frac{1}{t(2+t)}\int_t^\infty\frac{e^{t-v}t^2(2+t)^2}{v^2(2+v)^2} dv = \frac{B(t)}{t(2+t)}.$$ Both of the integrals $A(t)$ and $B(t)$ converge to 1 as $t \to \infty$ by dominated convergence. Finally, from theorem \ref{thm:9}, we see that $E M^2(t)$ is uniformly bounded and so the probabilistic limit follows from the martingale limit theorem. \end{proof} Recalling lemma \ref{thm:p-martingale} and its corollary at the end of section \ref{sec:3}, it is seen, by exactly the same argument, that $S(t)/t^2$ converges to $M$ a.s. and in m.s. as $t \to \infty$. And this implies very similar conclusions for the behaviour of the sample paths of the continuous-time process as that given in section \ref{sec:2} for Ulam's discrete-time process. Once again, we see the great utility of a suitable martingale. \section{Generalized random adding} A natural generalization of the simple adding process is to allow weighting and non-uniform selection from the past. We define such a process thus: \medskip {\sc Definition 5.1}.
With the notation and structure of definition 4.1, at jump times $(T_n)$, set $$X(T_n) = A X(U_n) + B X(V_n), \quad n \geqslant 1, $$ where now $(U_r)$ and $(V_r)$ comprise sequences of independent random variables with respective distribution functions \begin{eqnarray*} P \{U_r \leqslant u| T_r =t\} &=& (u/t)^{\alpha}, \quad 0 < u < t, \\ P \{V_r \leqslant v| T_r =t\} &=& (v/t)^{\beta}, \quad 0 < v < t. \end{eqnarray*} Here $A$ and $B$ are non-zero constants, $\alpha$ and $\beta$ are positive constants. As in section \ref{sec:simplecont}, we assume that the initial values in the process are fixed at $X(t) = x(t)$ for $0\leqslant t \leqslant \tau$. \begin{theorem} \label{thm:10} Let $m(t) = E X(t) $ then $m(t)$ grows asymptotically as $t^\sigma$ as $t \to \infty$, where $\sigma$ is a root of the following equation; in general that root having the larger real part: \begin{equation} \label{eq:sigma} \sigma^2+[\beta(1-B) + \alpha(1-A)]\sigma +[(1-A-B)\alpha\beta] =0. \end{equation} \end{theorem} \begin{proof} Conditioning on the events of the Poisson process $(N(t))$, we have $$ m' + m = \frac{A\alpha}{t^{\alpha}} \int_0^t u^{\alpha-1} m(u) du + \frac{B\beta}{t^{\beta}} \int_0^t v^{\beta-1} m(v) dv. $$ Differentiating with respect to $t$, we obtain, for $t \geqslant \tau$, \begin{equation}\label{eq:genmDE} \begin{split} (1-A-B)\alpha\beta m+ t^2 m''' + [(\alpha+ \beta +1)t + t^2]m''\\ + \{\alpha\beta + [1+\beta(1-B) +\alpha(1-A)]t\} m'=0. \end{split} \end{equation} Following \citet{Erdelyi}, substituting the usual trial solutions in \eqref{eq:genmDE} and equating coefficients of the highest order terms, we find that $m(t)$ grows asymptotically as $t^\sigma$, where $\sigma$ is given by \eqref{eq:sigma}. Note that when $\alpha=\beta=1$ and $A=B=1$ then $\sigma=1$, as we know from \ref{meanDEspecial}. 
\end{proof} We investigate the implications of equation \eqref{eq:sigma} beginning with the question of when the roots are imaginary, corresponding to potentially oscillatory behaviour for $m(t)$. For brevity of notation, we write $x= 1-A$ and $y=1-B$. The roots $\sigma_1$ and $\sigma_2$ of \eqref{eq:sigma} are real or imaginary according as the function $$f(\alpha,\beta,x,y) = \alpha^2x^2 + 2 \alpha\beta x y + \beta^2 y^2 - 4\alpha\beta(x+y-1) $$ is greater than or equal to, or less than, zero. We observe that $f=0$ defines a parabola $\mathcal{P}$ in the $(x,y)$ plane for suitable fixed $\alpha$ and $\beta$. Writing $f$ as $$ f(\alpha,\beta,x,y)= \left[\alpha x + \beta y - \frac{2 \alpha \beta (\alpha + \beta)}{\alpha^2 + \beta^2}\right]^2 - \frac{4 \alpha \beta(\beta-\alpha)}{\alpha^2 + \beta^2}\left[\beta x - \alpha y+ \frac{\alpha^3-\beta^3}{\alpha^2+\beta^2}\right], $$ we see that the axis of $\mathcal{P}$ is $$\alpha x + \beta y = \frac{2 \alpha \beta(\alpha + \beta )}{\alpha^2+\beta^2}, $$ and the tangent $T$ at the vertex is $$\beta x - \alpha y + \frac{\alpha^3 - \beta^3}{\alpha^2 + \beta^2} = 0. $$ Note that the roots $\sigma_1$ and $\sigma_2$ are complex inside the parabola (with an obvious convention). If $\alpha > \beta$, then the parabola is above $T$; if $\alpha< \beta$, then $\mathcal{P}$ lies below $T$; if $\alpha= \beta$ then the case is degenerate and $\mathcal{P}$ is the line $x+y=2$ (corresponding to $A+B=0$). Now let us consider the points $(1,0)$ and $(0,1)$ with respect to $\mathcal{P}$. The polar of $(1,0)$, i.e.\ the chord of contact of the tangents from the point $(1,0)$, is $$ f_1 = \alpha(\alpha x+ \beta y) - 2 \alpha \beta(x+y-1),$$ and the power of $(1,0)$ with respect to $\mathcal{P}$ is $f_{11} = \alpha^2.$ Therefore the tangents to $\mathcal{P}$ meeting at $(1,0)$ are given by the line pair $$ 0 = f_1^2 - f f_{11} = \alpha^2 (\beta- \alpha) (x-1)(x+y-1).
$$ Likewise, the tangents to $\mathcal{P}$ from $(0,1)$ are the line pair $(y-1)(x+y-1) = 0$. From an early result attributed to \citet{Lambert} we know that the locus of the focus of parabolas with three specified tangent lines is the circle through the vertices of the triangle formed by the intersections of the lines; in this case the points $(0,1)$, $(1,0)$ and $(1,1)$. \begin{figure}[h] \centering \includegraphics[width=0.7\textwidth,trim=2cm 7cm 1cm 7cm]{adding4.pdf} \caption{Region of oscillatory behaviour (shaded) in the case $\alpha > \beta$. When $\alpha < \beta$ the oscillatory region is given by reflection in the line $y=x$. \label{fig:4} } \end{figure} As $\alpha$ and $\beta$ run over all positive values, the three points of contact with $x=1$, $y=1$, and $x+y=1$, are seen to trace all points of these lines except those that lie in the region $\{x < 1\} \cap \{y <1\}$. Thus these lines delineate the envelope of the parabolic region in which $m(t)$ is oscillatory; see figure \ref{fig:4}. Secondly, we consider whether $\sigma_1$ or $\sigma_2$ has positive real part (corresponding to potentially unbounded solutions for $m(t)$). If the roots are imaginary, $(x,y)$ lying inside $\mathcal{P}$, then (being conjugate) they have a positive real part if $\alpha x + \beta y < 0.$ If the roots are real, then at least one is positive if either $\alpha x + \beta y <0$, or $x+y<1$. The nature of the asymptotic behaviour of $m(t)$ as $t \to \infty$ is thus given in terms of the parameters $x$ and $y$; see figure \ref{fig:5}. \begin{figure}[h] \centering \includegraphics[width=0.7\textwidth,trim=2cm 7cm 1cm 7cm]{adding5.pdf} \caption{Region in which the roots of \eqref{eq:sigma} are complex with positive real part (shaded). \label{fig:5} } \end{figure} Alternatively, we may regard $A$ and $B$, and hence $x$ and $y$, as fixed, and consider the quadratic form in $\alpha$ and $\beta$ given by $$\phi = \alpha^2 x^2 + 2 \alpha \beta (xy-2x -2y +2) +\beta^2 y^2.
$$ A necessary condition for this to take negative values is that it should be a real line pair, for which a necessary and sufficient condition is that $(xy-2 x - 2 y +2)^2 > x^2 y^2$, which is equivalent to $(x-1)(y-1)(x+y-1) < 0$. Note that the two regions of oscillatory behaviour in figure \ref{fig:5} do indeed satisfy this constraint. The oscillatory region in the $(\alpha,\beta)$ plane then comprises those opposite angles lying between the line pairs in which $\phi$ is negative. In the case when $x<1$, $y<1$ and $x+y<1$, no part of this region lies in $\{\alpha >0\} \cap\{\beta >0\}$, so there are no oscillatory solutions there. Of course, we may also seek a solution of \eqref{eq:genmDE} as a power series in $t$. In the usual way, the indicial equation is found to be $c(c-\alpha+1)(c-\beta+1)= 0$, which supplies the required three linearly independent solutions in the ordinary case when $\alpha$ and $\beta$ are neither equal nor differ by an integer. In the exceptional cases the method of Frobenius may generally be employed to yield the required distinct solutions in series. We refrain from an extended discussion. However we do mention the special case when the boundary condition is $\tau=0$ with $x(0)=1$. In this instance, in general, the power series corresponding to $c=0$, with the form $m(t) = 1 + \sum_{r=1}^\infty a_r t^r $, supplies the solution that is regular at the origin. For example, if $A=B=1$ then it is seen that $m(t) \sim t^\sigma$ with $\sigma = [\alpha\beta]^{1/2}.$ If it is further assumed that $\alpha\beta = 4$ where neither $\alpha$ nor $\beta$ is an integer, then \eqref{eq:genmDE} has the solution $m(t) = 1+ t + 3 t^2/[2(\alpha+\beta) +10],$ by inspection. By the remarks above, this is the required $m(t)$ satisfying the boundary conditions $m(0) = m'(0) =1$ and is such that $m(t)$ grows quadratically as $t \to \infty$.
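The root classification above is easy to check numerically. The following Python sketch is ours, not part of the paper (the function name \texttt{growth\_exponents} is invented): it solves the quadratic \eqref{eq:sigma} for given $A$, $B$, $\alpha$, $\beta$, and reproduces the Ulam case, the quadratic-growth example with $\alpha\beta = 4$, and a point inside the oscillatory region of figure \ref{fig:4}.

```python
import numpy as np

def growth_exponents(A, B, alpha, beta):
    """Roots of sigma^2 + [beta(1-B) + alpha(1-A)] sigma + (1-A-B) alpha beta = 0.
    The mean m(t) grows like t^sigma for the root with the larger real part;
    a complex-conjugate pair signals oscillatory behaviour."""
    b = beta * (1.0 - B) + alpha * (1.0 - A)
    c = (1.0 - A - B) * alpha * beta
    return np.roots([1.0, b, c])

# Ulam's case alpha = beta = A = B = 1: roots +1 and -1, so m(t) ~ t.
assert np.allclose(sorted(growth_exponents(1, 1, 1, 1).real), [-1.0, 1.0])

# A = B = 1 with alpha*beta = 4: the larger root is sqrt(alpha*beta) = 2,
# matching the quadratic growth of the special solution above.
assert np.isclose(max(growth_exponents(1, 1, 0.5, 8.0).real), 2.0)

# A point inside the oscillatory region (x = 1-A = 2, y = 1-B = 0, alpha=1, beta=4,
# where f(alpha,beta,x,y) = -12 < 0): the roots form a complex-conjugate pair.
assert np.iscomplex(growth_exponents(-1, 1, 1, 4)).any()
```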
\subsection{The second moment in the generalized case} In considering the second moment $q(t) = E X^2(t)$ of the process $X(t)$ of definition $5.1$, we will make the assumption that $\alpha=\beta > 0$, thus excluding the oscillatory behaviour. We have this: \begin{theorem} \label{thm:11} As $t \to \infty$, $q(t) \sim K t^\sigma$ where $$ \sigma = \alpha \max \{A^2 +B^2 -1, 2(A+B-1)\},$$ and $K$ is a constant depending on $A$, $B$ and $\alpha$ and initial conditions. \end{theorem} \begin{proof} Let $C_1=A+B$ and $C_2= A^2+B^2$, then by conditioning on the events of the Poisson process during $(t,t+h)$, we find in the usual way that \begin{equation}\label{eq:genQdot} q'+q = \frac{\alpha C_2}{t^{\alpha}} \int_0^t u^{\alpha-1} q(u) du + \frac{2 A B \alpha^2 Q}{t^{2\alpha}}, \end{equation} where $Q = \int_0^t\int_0^t (uv)^{\alpha-1} c(u,v) du dv$ and $c(u,v) = E\{ X(u) X(v)\}$. Likewise $$ \frac{\partial c(u,t)}{\partial t} + c(u,t) = \frac{\alpha C_1}{t^{\alpha}} \int_0^t y ^{\alpha-1} c(u,y) dy, \quad u<t, $$ with a similar equation for $\partial c(t,v)/\partial t$, when $v<t$. Differentiating $Q$ we find $$ \frac{d}{dt} \left(t^{-\alpha+1} \frac{dQ}{dt}\right) + t^{-\alpha+1}\frac{dQ}{dt} - 2t^{\alpha-1} q = 2\alpha C_1 t^{-\alpha} Q, $$ where we have substituted for $\partial c(u,t) /\partial t$ and $\partial c(t,v) /\partial t$, as necessary. Substituting for $Q$ throughout, using \eqref{eq:genQdot}, we have this equation for $q(t)$: \begin{equation} \begin{split} t^3 q^{(\text{iv})} +2 (t+2\alpha +1) t^2q''' +\{t^3+[(7-2 C_1-C_2)\alpha+3] t^2+(5\alpha^2+\alpha)t\}q''\\ + \{[(3-2 C_1-C_2)\alpha+1] t^2+[(7-4 A B-2 C_1-3 C_2)\alpha^2+\alpha]t + 2\alpha^3 -2\alpha^2\}q'\\ + [2(C_2-1)(C_1-1)t\alpha^2-2\alpha^2(\alpha-1)(2AB+C_2-1)]q = 0. 
\end{split} \end{equation} \begin{figure}[ht] \centering \includegraphics[width=0.7\textwidth,trim=2cm 7cm 1cm 7cm]{adding6.pdf} \caption{Outside the circle centred at $(1,1)$, the second moment increases or decreases as $t^{\sigma_1}$ depending on the sign of $\sigma_1 = \alpha(A^2+B^2 -1)$. Within this circle it grows as $t^{\sigma_2}$, where $\sigma_2 = 2\alpha(A+B-1)$; decreasing in the shaded region and otherwise increasing. The Ulam case is at the point $\scriptstyle{\circ}$. \label{fig:6} } \end{figure} Again following \citet{Erdelyi}, we find that $q(t)$ grows as $t^\sigma$ for large $t$, where $\sigma$ is given by $$\sigma(\sigma-1) + [(\alpha+1) + \alpha(2-C_2-2C_1)]\sigma + 2\alpha^2(1-C_1)(1-C_2) = 0. $$ This factorises into $$[\sigma + 2\alpha(1-C_1)][\sigma + \alpha(1-C_2)] =0, $$ giving the two roots as claimed. The leading term is therefore $t^\sigma$ with $\sigma = \alpha(A^2+B^2 -1)$ everywhere in the $(A,B)$ plane, except inside the circle $(A-1)^2 + (B-1)^2=1$. This is illustrated in figure \ref{fig:6}. Numerical solutions of the differential equations, for various initial conditions and parameter values, demonstrate exactly the behaviour described in theorems \ref{thm:10} and \ref{thm:11}. \end{proof} Finally, we briefly discuss the effects on $(X(t))$ if at each jump $X(T_r) = A_r X(U_r) + B_r X(V_r)$ where now $(A_r)$ and $(B_r)$ comprise sequences of independent random variables, also independent of $(U_r, V_r)$, with means $\mu_A= E A_r$ and $\mu_B = E B_r$ respectively. It is easy to see that in \eqref{eq:sigma} and \eqref{eq:genmDE}, one simply replaces $A$ and $B$ by $\mu_A$ and $\mu_B$. The essential conclusions in figure \ref{fig:6} are the same, with some relabelling. For the second moment, we note that the product moment $E (AB)$ is irrelevant to first order. The end result is that $q(t)$ grows with $t^\sigma$ where now $\sigma= \alpha\max\{2(\mu_A+\mu_B-1), E A^2 + E B^2-1\}$.
The nature of the final figure will then be similar, but dependent on the actual distributions of $A$ and $B$, as expressed in their first two moments. \subsection{Generalized adding processes in discrete time} Of course, one can also define such generalized adding processes in discrete time, but we avoid discussing these in detail. We strongly conjecture that they will show essentially the same behaviour as continuous-time generalized processes, and we sketch one example to illustrate this. In the usual way, in the notation of \eqref{eq:BSU} and lemma \ref{thm:martingale}, let $X_{n+1} = X_{U(n)} + 2 X_{V (n)}$, $n \geqslant 2$; and define $M_n = S_n/(n(n + 1)(n + 2))$, where $S_n = \sum_{k=1}^n X_k$, as usual. Then $(M_n; n \geqslant r)$ is a martingale with respect to the increasing sequence of $\sigma$-fields $(F_n; n \geqslant r)$ generated by the sequence $(X_n)$, or equivalently by $(S_n)$. The convergence of this martingale, which we refrain from proving, shows that $E X_n$ grows asymptotically like $n^2$; and we note that this is entirely consistent with the result \eqref{eq:sigma} of theorem \ref{thm:10}, in the case when $A=1, B=2$, and $\alpha = \beta = 1$. Clearly numerous other similar martingales can be recruited to consider the behaviour of $X_n$ and $S_n$ in more general cases. As an illustration, using the same notation, with the recurrence $X_{n+1} = A X_{U(n)} + B X_{V(n)}$, $n \geqslant 2$, for suitable constants $A$ and $B$, we find that $M_n = [(n - 1)!/(n + A + B - 1)!]\,S_n$ satisfies the martingale condition with respect to $(F_n)$; with the usual falling factorial convention for $(n+c)!$, for $n+c$ not an integer. And then $M_n$ is a martingale for those values of $A$ and $B$ such that $E |M_n|$ is finite. The convergence of this martingale, whose proof we omit, entails the convergence of $S_n/n^{A+B}$ to some r.v. as $n \to \infty$, using the Stirling-DeMoivre formula for large $n$.
From this one may deduce that $E X_n$ grows like $n^{A+B-1}$, in agreement with the continuous-time results. The determination of the implicit constraints on $A$ and $B$, analogous to those given by theorems \ref{thm:10} and \ref{thm:11}, is an open problem. \section{Conclusion} We have considered Ulam's random adding process, introduced in \citet{BSU}, and verified the authors' conjecture about the quadratic growth of the process's second moment. We have also introduced a number of new, more general random adding processes, in both discrete and continuous time, showing that their moments exhibit similar behaviour. Furthermore, for the basic simple Ulam process of section \ref{sec:2}, we showed that $M_n = S_n/(n(n+1))$ converges almost surely and in mean-square to a limiting random variable $M$. The result depended crucially on the identification of a martingale. Related martingales were also identified for the $p$-adding and continuous-time processes, which established the a.s. and m.s. convergence of $S_n/n^2$ and $S(t)/t^2$ respectively, leading to similar conclusions about the behaviour of their sample paths. We have been unable to find suitable martingales for the generalized random adding processes of section 5, though it seems likely that similar convergence results will apply. A possible approach is to establish mean-square convergence by showing that the random sequence is Cauchy in mean-square. Our preliminary investigations suggest that limit results of the types given in theorems \ref{thm:5} and \ref{thm:9} are not precise enough for this purpose and that higher order approximations will be necessary. Finally, we note that there are many further obvious and interesting open problems about almost every aspect of this largely unexplored family of random processes.
\paragraph{Remark:} The result \eqref{eq:qlimit} in the special case $r=1=x_r$, was obtained but not published by \citet{Turner}, while working with Mark Kac who analysed another of Ulam's history dependent recurrences \citep{Kac}. \newpage \bibliographystyle{apalike} \input{addingnew.bbl} \vspace{5ex} \hrule \vspace{1ex} \urlstyle{sf} Email address: {\sf [email protected]}\\ URL: \url{https://www.stats.ox.ac.uk/~clifford} Email address: {\sf [email protected]}\\ URL: \url{https://www.sjc.ox.ac.uk/discover/people/professor-david-stirzaker} \vspace{2ex} \hrule \end{document}
\section{Introduction} Risk-averse variants of Dynamic Programming are widely studied. Our work is very close to \cite{ruszczynski2010risk} who, for a very similar setting, proves the convergence of the Value Iteration algorithm. The contribution of our work is two-fold. First, instead of a complicated axiomatic definition, we define the one-stage risk mapping constructively by means of its risk envelope, which is moreover independent of the choice of an underlying probability measure; consequently, the whole exposition, including proofs, is much simpler. Second, instead of the convergence of a particular solution algorithm, we prove the contractive property of the Bellman operator, which can then be plugged into convergence proofs of many different algorithms. Let $(\Omega, \F, P)$ be a probability space and let $\mathbf{F} := (\F_0, \dots ,\F_t,\dots)$ be a filtration, i.e. a sequence of increasing sigma algebras: $\F_0 \subset \F_1 \subset \dots \subset \F$. Consider a process $\left\{Z_t\right\}, t = 0,1,...,$ adapted to the filtration $\mathbf{F}$, specifically $Z_t \in L_2(\F_t)$, $t = 0,1,...$. For such a process at time $t$ we use coherent conditional risk measures: $\sigma_{t|\F_{t-1}}(Z_t)$, $t>0$. By saying that a conditional risk measure is coherent we mean that it is \textcolor{black}{measurable,} monotone ($\sigma_{t|\F_{t-1}}(X)\geq \sigma_{t|\F_{t-1}}(Y)$ for any random variables $X\geq Y$, $X,Y\in L_2$), \textcolor{black}{sub-additive ($\sigma_{t|\F_{t-1}}(X+Y) \leq \sigma_{t|\F_{t-1}}(X) + \sigma_{t|\F_{t-1}}(Y)$ for any random variables $X,Y \in L_2$),} translation invariant ($\sigma_{t|\F_{t-1}}(X+C)=\sigma_{t|\F_{t-1}}(X)+C$ for any $C \in L_2(\F_{t-1})$, $X\in L_2$) and positively homogeneous ($\sigma_{t|\F_{t-1}}(\Lambda X)=\Lambda \sigma_{t|\F_{t-1}}(X)$ for any non-negative $\Lambda \in L_2(\F_{t-1})$, $X \in L_2$).
Next we construct a nested risk measure $\rho_t$ as follows: $$ \rho_t(Z_t) = \textcolor{black}{\sigma_{1|\F_{0}}(\sigma_{2|\F_{1}}(...\sigma_{t|\F_{t-1}}(Z_t)))}.$$ \textcolor{black}{It can be easily seen that, once $\F_0$ is trivial, $\rho_t(Z_t)$ is deterministic} and is a coherent risk measure. We will be interested in the limit version of this measure, $\rho_{\infty}$, defined as: \begin{equation} \rho_{\infty}(Z) = \lim_{t\rightarrow \infty} \rho_t(Z_t) \qquad \textcolor{black}{a.s.} \label{rhoinf} \end{equation} Such a limit, however, may not exist in general and, therefore, we first formulate sufficient conditions for the existence of $\rho_{\infty}$. \begin{definition} \label{def1} \textcolor{black}{We say that the process $\left\{Z_t\right\}, t = 0,1,...,$ has uniformly bounded support if there exist finite $a < b$ such that $\supp(Z_t) \subseteq \left\langle a,b \right\rangle$ for all $t$.} \end{definition} \begin{definition} We say that a conditional risk measure $\textcolor{black}{\sigma_{t|\F_{t-1}}}$ is \textit{support-bounded} if for every $X \in L_2$ we have $\textcolor{black}{\sigma_{t|\F_{t-1}}(X)} \in \left\langle a,b \right\rangle$, where $\textcolor{black}{a=\mathrm{essinf}(X)}$ and $\textcolor{black}{b=\mathrm{esssup}(X)}$. \end{definition} \begin{theorem} Let the process $\left\{Z_t\right\}, t = 0,1,...,$ be adapted to the filtration $\mathbf{F}$, a.s. non-increasing, and let \textcolor{black}{the process have uniformly bounded support.} Assume that the conditional risk measure $\sigma_{t|\F_{t-1}}(Z_t)$ is coherent and support-bounded for all $t$. Then $\rho_{\infty}(Z)$ exists. \label{thm:exists} \end{theorem} \begin{proof} First, thanks to the nested form of $\rho_t$ and the fact that the conditional risk measures $\sigma_{t|\F_{t-1}}$ are support-bounded, the sequence $\{\rho_t(Z_t)\}$, $t=0,1,...$, is bounded.
Second, \begin{eqnarray*} \rho_{t}(Z_{t}) - \rho_{t+1}(Z_{t+1}) &=& \rho_t(Z_t) - \rho_t(\sigma_{t+1|\F_{t}}(Z_{t+1})) \\ &\geq& \rho_t(Z_t) - \rho_t(\sigma_{t+1|\F_{t}}(Z_{t})) \\ &=& \rho_t(Z_t) - \rho_t(Z_t) = 0, \end{eqnarray*} where the inequality follows from (i) the coherence of $\sigma_{t+1|\F_{t}}$ and (ii) \textcolor{black}{the fact that $\left\{Z_t\right\}$ is} a.s. non-increasing. Hence, the sequence $\{\rho_t(Z_t)\}$, $t=0,1,...$, is non-increasing. Since every bounded non-increasing sequence has a limit, $\rho_{\infty}(Z)$ exists, which completes the proof. \end{proof} \noindent For the majority of practical situations, it suffices to assume $\Omega = [0,1] \times [0,1] \times \dots$ and $P=U(0,1)\otimes U(0,1)\otimes \dots$ where $U$ is the uniform distribution; indeed, any process can be made Markov by adding its history to the state space and any coordinate of a Markov process can be expressed as a function of the past and a uniform variable (see \cite{kallenberg2001foundations}, Chp. 8). It is well known (see \cite{ang2018dual}) that every coherent risk measure $\sigma$ can be expressed in a dual form: $\sigma(X)=\sup_{Q\in{\mathcal M}}\int X(\omega) Q(\omega)P(d\omega)$, where ${\mathcal M}=\{Q \in L_2: Q \geq 0, \E_P(Q)=1,\E_P(XQ)\leq \sigma(X),X\in L_2\}$ is a set of probability densities known as the risk envelope. In particular, if $P=U(0,1)$, then $\sigma(X)=\sup_{Q\in{\mathcal M}'}\int_0^1 q_X(\omega) Q(\omega)d\omega$ where ${\mathcal M}'$ is a (different) risk envelope and $q_X$ is the quantile function of $X$. Clearly, as conditional risk measures become coherent risk measures once the conditioning random element is fixed, they can be expressed by means of a dual representation too.
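As a concrete numerical illustration of the dual form (our example, not from the text: we take $\sigma$ to be CVaR at level $\alpha$, whose risk envelope is $\{Q: 0\leq Q \leq 1/(1-\alpha),\ \int_0^1 Q(x)dx=1\}$), the supremum is attained by the density that spends its capped mass on the upper tail of the quantile function:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=200_000)   # samples of X; q_X is its quantile function
alpha = 0.9                    # CVaR level; the envelope cap is 1/(1-alpha) = 10

# Dual form: sup over Q with 0 <= Q <= 1/(1-alpha) and unit integral of
# the integral of q_X(u) Q(u) over (0,1).  The maximizing Q puts density
# 1/(1-alpha) on the top (1-alpha) fraction of quantiles and 0 elsewhere.
q = np.sort(x)                               # empirical quantile function
u = (np.arange(len(q)) + 0.5) / len(q)       # grid on (0,1)
Q = np.where(u >= alpha, 1.0 / (1.0 - alpha), 0.0)
cvar_dual = np.mean(q * Q)                   # Riemann sum of q_X(u) Q(u)

# Primal CVaR: the average of the worst (1-alpha) fraction of outcomes.
cvar_primal = q[int(alpha * len(q)):].mean()

assert abs(cvar_dual - cvar_primal) < 1e-6
```

With this seed both estimates land near the exact value $\varphi(z_{0.9})/(1-0.9) \approx 1.755$ for a standard normal $X$.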
Therefore, we further define $\sigma_{t|\F_{t-1}}$ by means of this representation; however, we do not allow the risk envelope to depend on $t$ and we do not allow it to be random: \begin{equation} \sigma_{t|\F_{t-1}}(X)=\Sigma(\L(X|\F_{t-1})),\quad t\geq 1, \qquad \Sigma(P)=\sup_{Q\in\mathcal{M}}\int_0^1 q_P(x) Q(x) dx, \label{eq:homog} \end{equation} where $\mathcal{M}$ is a deterministic risk envelope and $q_P$ is the quantile function corresponding to $P$. In practice this means that all the conditional measures are ``of the same type'', e.g. once $\sigma_{t|\F_{t-1}}$ is a conditional CVaR with risk level $\alpha$, then all the other conditional measures have to be conditional CVaRs with level $\alpha$, too. Later we shall use the following Proposition. \begin{proposition}\label{prop:homog} Assume (\ref{eq:homog}). Let $\F_0$ be trivial (implying that $Z_0$ is deterministic) and let \begin{equation} \label{eq:eps} 0 \leq Z_{t+1}-Z_t \leq \epsilon_t, \qquad t\geq 0, \end{equation} where $\epsilon_t$ is deterministic with $\sum_t \epsilon_t$ finite. Then $\rho_\infty$ exists and $$ \rho_\infty(Z) = Z_0 + \sigma(\rho_\infty(Z')), $$ where $Z'_t = Z_{t+1}-Z_0$, $t\geq 0$, and $\sigma(X) = \Sigma({\mathcal L}(X))$ (see (\ref{eq:homog})). \end{proposition} \noindent Note that $\sigma$ is an unconditional coherent risk measure. First we prove the following Lemma: \begin{lemma} (i) Let $Z_0$ be bounded and let (\ref{eq:eps}) hold. Let $\sigma_{t|\F_{t-1}}(Z_t)$ be defined by (\ref{eq:homog}) and support-bounded for all $t$. Then $\rho_\infty(Z)$ exists and the convergence in (\ref{rhoinf}) is uniform in the max norm.\\ (ii) Any coherent risk measure is continuous with respect to uniform convergence in the max norm. \label{lem:cauchy} \end{lemma} \begin{proof}[Proof of Lemma] (i) Clearly, $Z_t$ fulfills the assumptions of Theorem \ref{thm:exists}, so $\rho_\infty(Z)$ exists.
For any $t > 0$ and $s >0$, knowing that $\rho_t(Z_t)$ is non-decreasing, we have $$0 \leq \rho_{t+s}(Z_{t+s})-\rho_t(Z_t) \leq \rho_{t+s}(Z_t + e_t) - \rho_t(Z_{t}) = \rho_{t}(Z_t + e_t) - \rho_t(Z_{t}) = e_t $$ where $e_t=\sum_{\tau \geq t} \epsilon_\tau$. \\ (ii) Let $Z_t \rightarrow Z^\star$ uniformly. Then there exists a sequence $e_t$ such that $ Z^\star - e_t \leq Z_t \leq Z^\star + e_t$, so, by coherence, $\sigma(Z^\star) - e_t\leq \sigma(Z_t) \leq \sigma(Z^\star) + e_t $ implying $\sigma(Z_t)\rightarrow \sigma(Z^\star)$. \end{proof} \begin{proof}[Proof of the Proposition] Thanks to Theorem \ref{thm:exists}, the limit defining the l.h.s. exists. From the definition and the coherence $$ \rho_{\infty}(Z) = \lim_{t\rightarrow \infty} \rho_t(Z_t) = Z_0 + \lim_{t\rightarrow \infty} (\sigma(S_t)) $$ where $S_0=0$ and $$S_t = \sigma_{2|\F_1}(\dots\sigma_{t|\F_{t-1}}(Z'_{t-1})\dots)= \Sigma(\L(\Sigma(\dots\Sigma(\L(Z'_{t-1}|\F_{t-1})\dots)|\F_0)),$$ $t\geq 0$. By Lemma \ref{lem:cauchy} (i), $S_{t}$ converges uniformly to $\rho_\infty(Z')$, so, by (ii) of the same Lemma, $ \lim_{t\rightarrow \infty} (\sigma(S_t))=\sigma(\lim_{t\rightarrow \infty}S_t) =\sigma(\rho_\infty(Z')). $ \end{proof} \section{Contractiveness of the Bellman Operator} Consider a dynamic programming problem \[ V(s_0) := \sup_{a_t\in A(s_t), s_{t+1}=T(s_{t},a_{t},W_{t+1}),t\geq 0}\varrho_\infty(\sum_{t=0}^{\infty}\gamma^{t}r(s_{t},a_{t})), \] Here, \begin{itemize} \item $T : S \times A \times X \rightarrow S$ is a measurable mapping, where $S$ is a complete state space, $A$ is a (measurable) action space and $X$ is a measurable space \item $W_{t} \in X$ is a Markov stochastic process (we may assume that it is i.i.d. uniform (see above)). 
\item $r$ is a \textcolor{black}{uniformly bounded non-negative function} \item $\varrho_\infty(Z) = -\rho_\infty(-Z)$, where $\rho_\infty$ is a limit nested risk measure (\ref{rhoinf}) defined by the filtration $\F_{t}\defined\sigma(W_{t})$ and a support-bounded coherent risk measure $\sigma$ as in (\ref{eq:homog}). \item $\gamma<1$ is a discount factor \item $A(\bullet): S \rightarrow A$ is a set mapping. \end{itemize} \noindent Since $r$ is uniformly bounded non-negative and $\gamma<1$, the process $-Z_T \defined \sum_{t=0}^{T}\gamma^{t}r(s_{t},a_{t})$, $T = 0,1,...$, has \textcolor{black}{uniformly bounded support and is non-increasing.} Combined with the assumption of a coherent support-bounded conditional risk measure $\sigma$, this guarantees the existence of $\varrho_\infty$. Hence, the problem is well defined. \begin{proposition} (Bellman Equation) \begin{equation} V(s)=\sup_{a\in A(s)}[r(s,a)+\gamma\varsigma(V(T(s,a,W)))],\qquad s \in S, \label{eq:bellman} \end{equation} where $\varsigma(Z)=-\sigma(-Z)$. \end{proposition} \noindent Note that (\ref{eq:bellman}) may also be understood as a definition of a risk-averse version of a reinforcement learning problem (see \cite{sutton2018reinforcement}). \begin{proof} Thanks to Proposition \ref{prop:homog}, basic properties of the supremum, and Lemma \ref{lem:cauchy} (ii), we get, for any $s_0 \in S$: \begin{multline*} V(s_0) = \sup_{a_t\in A(s_t),s_{t+1}=T(s_{t},a_{t},W_{t+1}),t\geq 0} \left[r(s_0,a_0) + \gamma \varsigma\left( \varrho_\infty\left(\sum_{t=1}^{\infty}\gamma^{t-1}r(s_{t},a_{t})\right) \right) \right] \\ = \sup_{a_0\in A(s_0)} \left[r(s_0,a_0) + \gamma \varsigma\left(\sup_{a_t\in A(s_t), s_{t+1}=T(s_{t},a_{t},W_{t+1}),t\geq 1} \varrho_\infty\left(\sum_{t=1}^{\infty}\gamma^{t-1}r(s_{t},a_{t})\right) \right) \right], \end{multline*} which proves the Proposition. \end{proof} \begin{theorem} The operator \[ B:(BV)(s)\defined\sup_{a\in A(s)}[r(s,a)+\gamma\varsigma(V(T(s,a,W)))] \] is a $\gamma$-contraction w.r.t. the sup norm.
\end{theorem} \begin{proof} Fix $\epsilon >0$ and, for any value function $V$, denote by $a_{V,s}$ an $\epsilon$-optimal solution of $\sup_{a\in A(s)}[r(s,a)+\gamma\varsigma(V(T(s,a,W)))]$. We have \begin{multline*} \|BU-BV\|_{\infty}=\underbrace{\sup_{s\in S_{U}\defined\{s:(BU)(s)>(BV)(s)\}}[(BU)(s)-(BV)(s)]}_{b_{U}}\\ \vee\underbrace{\sup_{s\in S_{V}\defined\{s:(BU)(s)\leq(BV)(s)\}}[(BV)(s)-(BU)(s)]}_{b_{V}} \end{multline*} Further, we have \begin{multline*} b_{U}=\sup_{s\in S_{U}}|\sup_{a\in A(s)}[r(s,a)+\gamma\varsigma(U(T(s,a,W)))]-\sup_{a\in A(s)}[r(s,a)+\gamma\varsigma(V(T(s,a,W)))]|\\ \leq\sup_{s\in S_{U}}[r(s,a_{U,s})+\gamma\varsigma(U(T(s,a_{U,s},W)))-r(s,a_{U,s})-\gamma\varsigma(V(T(s,a_{U,s},W)))] + \epsilon\\ =\gamma\sup_{s\in S_{U}}[\varsigma(U(T(s,a_{U,s},W)))-\varsigma(V(T(s,a_{U,s},W)))] + \epsilon \\ =\gamma\sup_{s\in S_{U}}[-\sup_{Q\in\mathcal{M}}\int_0^1-U(T(s,a_{U,s},w))Q(w)dw+\sup_{Q\in\mathcal{M}}\int_0^1-V(T(s,a_{U,s},w))Q(w)dw] + \epsilon \\ \leq\gamma\sup_{s\in S_{U}}[-\int_0^1 -U(T(s,a_{U,s},w))Q_{V,s}(w)dw+\int_0^1-V(T(s,a_{U,s},w))Q_{V,s}(w)dw] + \epsilon \\ \leq\gamma\sup_{s\in S_{U}}\int_0^1|U(T(s,a_{U,s},w))-V(T(s,a_{U,s},w))|Q_{V,s}(w)dw + \epsilon \leq\gamma\|U-V\|_\infty + \epsilon \end{multline*} where $Q_{V,s}=\arg\max_{Q\in\mathcal{M}}\int_0^1-V(T(s,a_{U,s},w))Q(w)dw$ (the last inequality holds because $Q(w)dw$ is a probability measure). Since $\epsilon>0$ was arbitrary, letting $\epsilon \rightarrow 0$ yields $b_U \leq \gamma\|U-V\|_\infty$. By making analogous steps for $b_V$, we get the Theorem. \end{proof} \noindent Thanks to this theorem, many solution techniques relying on the contractiveness of the Bellman operator continue to work when the expectation is replaced by a coherent risk measure. Here we only demonstrate this for the well-known Value Iteration Algorithm (see \cite{sutton2018reinforcement}). Let $V_0: S \rightarrow [0,\infty)$ be arbitrary and let $\theta$ be a pre-chosen precision level.
The Value Iteration Algorithm may be written as follows: \begin{center} \begin{minipage}{6cm} \begin{framed} \begin{algorithmic} \State $n \leftarrow 0$ \Repeat \State $n \leftarrow n+1$ \State $V_n \leftarrow B V_{n-1}$ \Until $\|V_n-V_{n-1}\|_\infty \leq \theta$ \end{algorithmic} \end{framed} \end{minipage} \end{center} \noindent The following result is a direct consequence of the Banach Fixed Point Theorem (see \cite{granas2003fixed}, 1.1). \begin{theorem} There exists $V_\star$ solving (\ref{eq:bellman}), and $$ \| V_n - V_\star \|_\infty \leq \frac{\gamma^n}{1-\gamma } \| V_1 - V_ 0 \|_\infty $$ for any $n$. \end{theorem} \begin{corollary} The Value Iteration Algorithm stops after a finite number of steps. \end{corollary} \noindent {\bf Acknowledgements.} This work has been supported by grant No. GA19-11062S of the Czech Science Foundation.
\subsection{Defining the Path Integral} The transition element to be analyzed in the remainder of this paper is given in its one-dimensional form by \begin{equation} \label{one} W_{fi} = \langle \, p_f \, | \exp \left ( - i \hat{H} T / \hbar \right ) | \, q_i \, \rangle \, . \end{equation} The final state, $| \, p_f \, \rangle$, is assumed to be an eigenstate of the momentum, $\hat{p}$, while the initial state, $| \, q_i \, \rangle$, is an eigenstate of the position, $\hat{q}$. The two operators satisfy the usual algebra $ [ \hat{q} , \hat{p} ] = i \hbar$. The Hamiltonian $\hat{H}$ is assumed to be a function of some ordering of $\hat{q}$ and $\hat{p}$, and its eigenstates, as well as those of $\hat{q}$ and $\hat{p}$, are determined consistent with any boundary conditions, such as periodicity in $q$. $W_{fi}$ is trivial to evaluate if $\hat{H}$ is cyclic, {\it i.e.}, a function solely of $\hat{p}$. For such a case it reduces to \begin{equation} \label{two} W_{fi} = \langle \, p_f \, | \, q_i \, \rangle \, \exp \left ( - i H ( p_f ) \, T / \hbar \right ) \, . \end{equation} The allowed values of the variables $p_f$ and $q_i$ appearing in the inner product in (\ref{two}) are determined by the boundary conditions of the original problem, although in one dimension the inner product for continuous systems takes the general form \begin{equation} \label{three} \langle \, p_f \, | \, q_i \, \rangle = \frac{1}{\sqrt{2 \pi \hbar}} \exp \left ( - i p_f q_i / \hbar \right ) \, . \end{equation} The propagator of the quantum mechanical problem can be derived from result (\ref{one}). Assuming that the momentum state spectrum is continuous, the propagator is obtained by a Fourier transform, \begin{equation} \label{four} \langle \, q_f \, | \exp \left ( - i \hat{H} T / \hbar \right ) | \, q_i \, \rangle = \int \frac{dp_f}{\sqrt{2 \pi \hbar}} \, e^{i p_f q_f / \hbar } \, \langle \, p_f \, | \exp \left ( - i \hat{H} T / \hbar \right ) | \, q_i \, \rangle \; . 
\end{equation} Obviously, a discrete spectrum of momentum eigenstates leads to a Fourier series. Once the propagator (\ref{four}) is known, results such as the ground state energy can be derived. The Hamiltonian path integral representation of (\ref{one}) may be derived by using the completeness of the position and momentum eigenstates to perform a time-slicing argument. This technique is well documented\cite{Swan1}, and its application here is accomplished by using the unit projection operator given by \begin{equation} \label{five} \hat{\leavevmode\hbox{\small1\kern-3.8pt\normalsize1}} = \int \frac{dp_j}{\sqrt{2 \pi \hbar}} \, dq_{j+1} \, | \, q_{j+1} \, \rangle e^{i q_{j+1} p_j / \hbar} \langle \, p_j \, | \, . \end{equation} There is an important subtlety in (\ref{five}). As it is written it assumes that the spectra of the states are continuous; however, this will not be the case in the event that the configuration space of the system is compact or periodic. Putting aside such a possibility for the moment, the result of time-slicing $T$ into $N$ intervals of duration $\epsilon$, where $\epsilon = T/N$, gives\ \begin{equation} \label{six} \langle \, p_j \, | \exp ( - i \epsilon \hat{H} / \hbar ) | \, q_j \, \rangle = \frac{1}{\sqrt{2 \pi \hbar}} \exp \left \{ - \frac{i}{\hbar} \left [ q_j p_j + \epsilon H ( p_j , q_j ) + O ( \epsilon^2 ) \right ] \right \} \, , \end{equation} where the $O( \epsilon^2 )$ terms arise from commutators occurring in the ordering of the Hamiltonian power series. 
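The $O(\epsilon^2)$ commutator terms can be made concrete with a small numerical experiment (an illustration added here, not part of the original argument), with finite symmetric matrices standing in for the kinetic and potential parts of $\hat{H}$; the grid discretization below is an arbitrary assumption:

```python
import numpy as np

def expm_sym(M):
    """Matrix exponential of a real symmetric matrix via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return (V * np.exp(w)) @ V.T

# Finite-dimensional stand-ins for the kinetic and potential pieces of H
# (a hypothetical grid discretization, chosen only for illustration).
n, h = 8, 0.5
A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / (2 * h**2)  # ~ p^2/2
B = np.diag(np.linspace(-1.0, 1.0, n) ** 2)                          # ~ V(q)

def split_error(eps):
    """|| e^{-eps(A+B)} - e^{-eps A} e^{-eps B} || for one time slice."""
    return np.linalg.norm(expm_sym(-eps * (A + B))
                          - expm_sym(-eps * A) @ expm_sym(-eps * B))

# The defect of a single slice is O(eps^2), with leading coefficient set by
# the commutator [A, B]; halving eps should reduce it by a factor of ~4.
ratio = split_error(1e-3) / split_error(5e-4)
```

The ratio of defects at step sizes $\epsilon$ and $\epsilon/2$ approaches four, the signature of a second-order error driven by the commutator.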
This immediately yields the Hamiltonian path integral recipe for calculating the transition amplitude: \begin{eqnarray} \label{seven} && W_{fi} = \langle \, p_f \, | e^{- i H T / \hbar } | \, q_i \, \rangle = \nonumber \\ && \frac{1}{\sqrt{2 \pi \hbar}} \int \prod_{j=1}^N \frac{dp_j}{2 \pi \hbar} \, dq_j \, \exp \left \{ \frac{i}{\hbar} \sum_{j = 1}^N \left [ - q_j ( p_{j+1} - p_j ) - \rule{0ex}{2.5ex} \epsilon H ( p_j , q_j ) \right ] - \frac{i}{\hbar} q_i p_1 \right \} \, , \end{eqnarray} where $p_{N+1} = p_f$ and the limits $N \rightarrow \infty$ and $\epsilon \rightarrow 0$ are understood. \subsection{\protect The Leading Behavior of $\Delta q$} It is standard practice to assume a continuous form for the path integral by identifying $\epsilon \rightarrow {\rm dt}$ and $q_j ( p_{j+1} - p_j ) = {\rm dt} \, q_j \, \dot{p}_j$. The latter identification is {\em purely formal}, since $p_{j+1}$ and $p_j$ are independent variables of integration unrelated by any time evolution. Even with this formal identification the action density in the path integral (\ref{seven}) can take the (semi)standard form, $- q \dot{p} - H$, only if $p_i = 0$ or $q_i = 0$. For these cases the final term can be written $- \frac{i}{\hbar} q_i ( p_1 - p_i )$. This will be discussed in greater detail in Sec.~IV. Another technicality arises since the argument of the path integral does not satisfy the criteria of a probability measure unless the time is continued to imaginary values, the so-called Wick rotation. Otherwise the oscillatory integrands result in distributions rather than functions. The Wick rotation will be used and assumed to yield a sensible measure for all path integrals considered in the remainder of this paper.
To demonstrate the formal nature of the identification $q_j ( p_{j+1} - p_j ) = {\rm dt} \, q_j \, \dot{p}_j$ as well as derive results that will be important later in this paper, it will be of use to discuss the leading behavior in $\epsilon$ of the expectation value of the element $ \Delta q_j = q_{j+1} - q_j $. For it to be possible to treat $\Delta q_j$ as $ \dot{q} \, {\rm dt}$ its expectation value, $ \langle \Delta q_j \rangle_{fi}$, must be shown to be $O( \epsilon )$. The behavior of $\langle \Delta q_j \rangle$ is of course a function of the specific form of the Hamiltonian. However, if the Hamiltonian is cyclic, then it is always true that $ \langle \Delta q_j \rangle_{fi} $ is $O ( \epsilon )$. This is easy to demonstrate within the operator context, where the operator form for Hamilton's equation gives \begin{equation} \label{eight} \Delta \hat{q} (t) = \hat{q} ( t + \epsilon ) - \hat{q} ( t) = \epsilon \, \frac{i}{ \hbar} [ \hat{H} ( \hat{p} ) , \hat{q} ( t ) ] = \epsilon \, \frac{ \partial \hat{ H} ( \hat{p} ) }{ \partial \hat{p} } \, . \end{equation} Inserting (\ref{eight}) into (\ref{one}) and using (\ref{two}) immediately yields \begin{equation} \label{nine} \langle \Delta \hat{q} (t) \rangle_{fi} = \epsilon \, \langle \, p_f \, | \, q_i \, \rangle \, \frac{ \partial H ( p_f ) }{ \partial p_f } \exp \left ( - i H ( p_f ) \, T / \hbar \right ) \, . \end{equation} Demonstrating the path integral equivalent of result (\ref{nine}) requires adding a source term $ K_j \Delta q_j $ to the action. In order to avoid difficulties with the boundary conditions on $q_j$, the boundary conditions $K_N = K_0 = 0$ are imposed on the source function. The expectation value is then given by \begin{equation} \label{ten} \langle \Delta q_j \rangle_{fi} = - \left . i \hbar \, \frac{ \partial W_{fi} [ K ] }{ \partial K_j } \right |_{K = 0} \, . 
\end{equation} The next step is to perform the path integral version of integrating by parts by using the boundary condition on $K$ to rearrange the sum over $j$: \begin{equation} \label{11} \sum_{j=1}^N K_j \Delta q_j = \sum_{j=1}^N K_j ( q_{j+1} - q_j ) = - \sum_{j=1}^{N} q_j ( K_j - K_{j-1} ) \, \equiv \, - \sum_{j=1}^N q_j \Delta K_j \, . \end{equation} Since $H$ depends only on $p$ all $q$ integrations can now be performed. Assuming that the range of the $q$ integrals is $\pm \infty$, each of the $N$ integrations over $q$ yields a Dirac delta, \begin{equation} \label{12} \int dq_j \, \exp \left \{ - \frac{i}{\hbar} q_j ( p_{j+1} - p_j + \Delta K_j ) \right \} = 2 \pi \hbar \, \delta ( p_{j+1} - p_j + \Delta K_j ) \, . \end{equation} The $p$ variables are now trivial to integrate, giving the result for the transition element \begin{equation} \label{13} W_{fi} [ K ] = \frac{1}{ \sqrt{ 2 \pi \hbar}} \exp \left \{ - \frac{i}{\hbar} \left ( p_f q_i + \sum_{j=1}^N \epsilon \, H ( p_f - K_{j-1} ) \right ) \right \} \, . \end{equation} Using (\ref{13}) in (\ref{ten}), along with the result that \begin{equation} \label{14} \lim_{N \rightarrow \infty} \sum_{j=1}^N \epsilon = T \; , \end{equation} reproduces the operator result (\ref{nine}): \begin{equation} \label{15} \lim_{N \rightarrow \infty} \langle \Delta q_j \rangle_{fi} = \frac{\epsilon}{ \sqrt{ 2 \pi \hbar}} \, \frac{ \partial H ( p_f ) }{ \partial p_f } \exp \left \{ - \frac{i}{\hbar} [ \, p_f q_i + H ( p_f ) \, T ] \right \} \, . \end{equation} Result (\ref{15}) does not necessarily follow for non-cyclic Hamiltonians. The argument used to derive (\ref{15}) can be applied to the harmonic oscillator action to show that $\langle ( \Delta q_j )^2 \rangle_{fi}$ is $O ( \epsilon )$. This is easily seen from the Gaussian nature of the Wick-rotated integrations. 
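The claim that $\langle (\Delta q_j)^2 \rangle$ is $O(\epsilon)$ can be checked directly from the covariance of the discretized Wick-rotated harmonic-oscillator measure (an added numerical sketch; $\hbar = m = 1$ and vanishing endpoints are assumptions of the illustration):

```python
import numpy as np

# Covariance of the Wick-rotated harmonic-oscillator measure exp(-S) with
# S = sum_j [ (q_{j+1}-q_j)^2/(2 eps) + eps * w^2 * q_j^2 / 2 ],
# q_0 = q_N = 0 and hbar = m = 1 (assumptions of this sketch).
T, N, w = 1.0, 64, 1.0
eps = T / N
n = N - 1                                      # interior points q_1..q_{N-1}

A = ((2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / eps
     + eps * w**2 * np.eye(n))                 # precision matrix of exp(-S)
C = np.linalg.inv(A)                           # covariance <q_i q_j>

j = n // 2                                     # a slice in the middle of the path
var_dq = C[j, j] + C[j + 1, j + 1] - 2 * C[j, j + 1]   # <(Delta q_j)^2>
ratio = var_dq / eps                           # stays O(1): <(Dq)^2> = O(eps)
```

The increment variance divided by $\epsilon$ stays of order one as the slicing is refined, so $\Delta q_j$ itself is typically $O(\sqrt{\epsilon})$ while its square is $O(\epsilon)$.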
The $q$ integration results in \begin{equation} \int dq_j \, \exp \left [ - \epsilon \frac{1}{2} {q_j}^2 + i q_j ( \Delta K_j + \Delta p_j ) \right ] = \sqrt{ \frac{2 \pi}{ \epsilon } } \exp \left [ - \frac { ( \Delta p_j + \Delta K_j )^2}{ 2 \epsilon} \right ] \, . \end{equation} The $\Delta K_j $ dependence may be removed from this term by translating the $p_j$ variables according to $p_j \rightarrow p_j + K_j$. Doing so changes the ${p_j}^2$ term in the exponential of the path integral according to \begin{equation} - \epsilon \frac{1}{2} {p_j}^2 \rightarrow - \epsilon \frac{1}{2} {p_j}^2 - \epsilon p_j K_j - \epsilon \frac{1}{2} {K_j}^2 \, . \end{equation} It is then clear that the second derivative of the resulting function with respect to $K_j$ will result in a term $O ( \epsilon )$. In an identical manner it is possible to show that the expectation value of $\Delta p_j = p_{j+1} - p_j $ vanishes if the Hamiltonian is cyclic. This is the quantum mechanical equivalent of the classical Hamilton's equation \begin{equation} \dot{p} = - \frac{ \partial H}{ \partial q} \, . \end{equation} As a result of (\ref{15}) it is possible to use a perturbative argument to show that certain types of terms in the Hamiltonian of the path integral are suppressed in the limit $N \rightarrow \infty$. The Hamiltonian under consideration has the form \begin{equation} \label{16} H = H_{cl} ( p , q ) + H_\Delta ( \Delta q , q ) \, , \end{equation} where all terms in $H_\Delta$ have at least one positive power of $\Delta q$ and $H_{cl}$ is the Hamiltonian inherited from the classical system. Such a Hamiltonian has no classical counterpart, since terms of the form $H_\Delta$ would be suppressed \cite{Prok}. If $H_{cl}$ is cyclic it can be shown that the terms $H_\Delta$ do not contribute to the path integral in the limit $N \rightarrow \infty$. The argument is similar to the one used to demonstrate (\ref{15}). 
The contribution of the terms $H_\Delta$ is written as a perturbation series using $H_{cl}$ as the basis Hamiltonian. This is accomplished by adding the source terms $ \epsilon K_j \Delta q_j$ and $ \epsilon J_j q_j$ to the action without $H_\Delta$ to give the function $W_{fi} [ K , J ]$. The perturbation series representation of the original transition element is then defined as \begin{equation} \label{18} \left . \exp \left \{ - \frac{i}{\hbar} \sum_{j=1}^{N} \epsilon \, H_\Delta \left ( \frac{ \hbar }{ i \epsilon} \frac{ \partial}{ \partial K_j} \, , \frac{ \hbar}{ i \epsilon} \frac{ \partial}{ \partial J_j} \right ) \right \} W_{fi} [ K , J ] \right |_{K,J = 0} \, . \end{equation} The function $W_{fi} [ K , J ]$ is readily evaluated to give \begin{equation} \label{19} W_{fi} [ K , J ] = \frac{1}{ \sqrt{ 2 \pi \hbar }} \exp \left \{ - \frac{i}{\hbar} \left ( p_f q_i + \sum_{j=1}^N \epsilon \, H ( p_f - \epsilon K_{j-1} - \sum_{l = 1}^{j} \epsilon \, J_l ) \right ) \right \} \, . \end{equation} While the $ \sum \epsilon J $ term results in an integral in the limit $\epsilon \rightarrow 0$, it is clear that the term $ \epsilon K$ is suppressed relative to the other terms by a factor of $T/N$. The derivatives with respect to $K$ are also suppressed by this factor, showing that terms of the form $H_\Delta$ do not contribute to the perturbation series. For the case that the basis Hamiltonian is cyclic such terms can therefore be discarded. In effect, this perturbative argument substantiates the general intuition that, for a cyclic Hamiltonian, $\Delta q$ can be replaced by $\epsilon \dot{q}$, where $\dot{q}$ is finite. Any resulting terms with factors of $O( \epsilon^2 )$ or greater can then be suppressed. Since perturbative arguments are fraught with pitfalls and loopholes, it is worth checking this result for exactly integrable cases. 
For example, the path integral whose Hamiltonian is given by \begin{equation} H = \frac{1}{2} p^2 + \lambda q \Delta q \end{equation} can be shown to reduce to the standard cyclic result (\ref{two}) with all terms proportional to $\lambda$ suppressed by an additional factor of $\epsilon^2$. Terms of the form $ q \Delta p$ or $p \Delta p$ can be integrated exactly to find the standard cyclic result in the limit $\epsilon \rightarrow 0$. However, there is at least one important set of cases not covered by this perturbative argument involving terms with quadratic powers of $p$. For example, if the term $ f(q) \Delta q \, p^2$ occurs in the Hamiltonian, its contribution cannot be discarded. It is not difficult to see the mechanism for this by examining the Hamiltonian \begin{equation} \label{20a} H = \frac{1}{2} p^2 + \frac{1}{2} \Delta q \, f(q) p^2 = \frac{1}{2} ( 1 + f(q) \Delta q) p^2 \, . \end{equation} The $p$ integrations in the Wick-rotated path integral are Gaussian, and take the general form \begin{eqnarray} \label{20} &&\int \frac{dp_j}{2 \pi \hbar } \exp \left \{ - \frac{\epsilon}{\hbar} \left [ \frac{1}{2} ( 1 + f(q_j) \Delta q_j ) {p_j}^2 - p_j \frac{\Delta q_j}{\epsilon} \right ] \right \} \nonumber \\ && = \sqrt{ \frac{ 1 }{ 2 \pi \hbar \epsilon ( 1 + f(q_j) \Delta q_j ) } } \; \exp \left [ \frac{ ( \Delta q_j )^2}{ 2 \hbar \epsilon ( 1 + f(q_j) \Delta q_j)} \right ] \, . \end{eqnarray} If $\Delta q$ remains proportional to some positive power or root of $\epsilon$, then the $\Delta q$ term in the denominator of the exponential can be discarded due to the factor of $\epsilon$ present in the denominator. However, the terms in the prefactor may contribute to the path integral. This follows from the fact that the prefactor terms can be written \begin{equation} \label{21} \frac{1}{ \sqrt{ 1 + f(q_j) \Delta q_j } } = \exp \left [ - \frac{1}{2} \ln ( 1 + f(q_j) \Delta q_j ) \right ] \approx \exp \left [ - \frac{1}{2} f(q_j) \Delta q_j \right ] \, . 
\end{equation} Even if $\Delta q_j \approx \epsilon$, the infinite sum in which (\ref{21}) becomes embedded can result in a nontrivial contribution since $N \epsilon \rightarrow T$. The upshot of result (\ref{21}) is to transmute the original interaction term $ f(q) \Delta q \, p^2$ into an effective velocity-dependent potential in the path integral when all momenta have been integrated. This velocity-dependent potential appears proportional to $\hbar$, since (\ref{20}) can be written \begin{equation} \label{22} \sqrt{ \frac{ 1 }{ 2 \pi \hbar \epsilon ( 1 + f(q_j) \Delta q_j ) } } \; \exp \left [ \frac{ ( \Delta q_j )^2}{ 2 \hbar \epsilon ( 1 + f(q_j) \Delta q_j)} \right ] \, \approx \, \sqrt{ \frac{ 1 }{ 2 \pi \hbar \epsilon} } \; \exp \left \{ \frac{ \epsilon }{ \hbar } \left [ \frac{1}{2} {\dot{q}}^2 - \frac{1}{2} \hbar f(q) \dot{q} \right ] \right \} \, , \end{equation} where the standard path integral notation $\Delta q = \epsilon \dot{q} $ has been used. Result (\ref{22}) is consistent with the idea that the classical Hamiltonian (\ref{20a}) would receive no contribution from such a potential. If it is to give a nontrivial contribution to the quantum mechanical theory, it must be equivalent to a term in the action of $O( \hbar )$ or higher. Clearly, similar results can be obtained for other Gaussian-like terms for specific choices of the cyclic Hamiltonian. A discussion of possible terms that may contribute is given by Prokhorov\cite{Prok}. It is worth noting for later reference that, if the Hamiltonian is cyclic, the path integral (\ref{seven}) can be evaluated exactly by translating the variables of integration by the classical solutions to Hamilton's equations consistent with the boundary conditions $q(t = 0) = q_i$ and $p (t = T) = p_f$, given by \begin{equation} \label{23} p_c ( t ) = p_f \, , \; \; q_c (t) = q_i + \frac{ \partial H ( p_f )}{ \partial p_f } t \; . 
\end{equation} This is possible because the difference between adjacent time-slices {\em does} reduce to the derivative for a classical function, {\it i.e.}, $ q_c ( t_{j+1} ) = q_c ( t_j + \epsilon ) \rightarrow q_c (t_j) + \epsilon \dot{q}_c (t_j)$. Performing an integration by parts similar to (\ref{11}) reduces the translated path integral (\ref{seven}) to \begin{eqnarray} && \label{24} \frac{1}{ \sqrt{ 2 \pi \hbar }} \exp \left \{ - \frac{i}{\hbar} [ \, p_f q_i + H ( p_f ) \, T ] \right \} \times \nonumber \\ && \int \prod_{j=1}^N \frac{dp_j}{2 \pi \hbar} \, dq_j \, \exp \left \{ \frac{i}{\hbar} \sum_{j = 1}^N \left [ - q_j ( p_{j+1} - p_j ) - \rule{0ex}{2.5ex} \epsilon \frac{1}{2} {p_j}^2 \frac{ \partial^2 H ( p_c (t_j) )}{ \partial {p_c (t_j)}^2 } - \ldots \right ] \right \} \, , \end{eqnarray} where the ellipsis refers to higher order terms present in the expansion of $H ( p_c (t_j) + p_j ) $ in powers of $p_j$ and, because of the translation of variables, $p_{N+1} = 0$. It is precisely the latter result that reduces the translated path integral appearing in (\ref{24}) to unity when all integrations are performed, a fact that is apparent when result (\ref{12}) is examined for the case $K = 0$. Therefore, the only surviving factor in (\ref{24}) is the exponential of the classical action evaluated along the classical trajectory (\ref{23}). \subsection{Discrete Spectrum Path Integrals} Another relevant point concerns the case where the allowed values of the momentum or energy constitute a denumerably infinite set rather than a continuous variable. Such a result is common in quantum mechanical systems, occurring in bound state spectra and in systems where the configuration space is compact or periodic boundary conditions are enforced. In wave mechanics the discrete spectrum can arise from demanding either that the bound state wave-function is normalizable or that the wave-function or its derivative vanishes on some boundaries. 
It is natural to expect that the measure of the path integral for such a system would differ from its ``free'' counterpart (\ref{seven}). However, it is often the case that the path integral representation of the transition amplitude (\ref{one}) for such a system is identical to the continuous result (\ref{seven}). This outcome is well-known within the context of specific systems \cite{Schul3}. Since this aspect of path integrals is relevant to canonical transformations, the general derivation of the range of integrations will be sketched for the specific case of a {\em free} particle constrained to be in a one-dimensional infinite square well. The position eigenstates range from $-a$ to $a$, while the momentum eigenstates, $ | \, n \, \rangle$, are discrete and indexed by an integer $n$. Unit projection operators are given by \begin{equation} \label{25} \hat{\leavevmode\hbox{\small1\kern-3.8pt\normalsize1}} = \int_{-a}^a dq \, | \, q \, \rangle \langle \, q \, | \, , \; \; \hat{\leavevmode\hbox{\small1\kern-3.8pt\normalsize1}} = \sum_{n = -\infty}^\infty | \, n \, \rangle \langle \, n \, | \, , \end{equation} while the inner product is given by \begin{equation} \label{26} \langle \, q \, | \, n \, \rangle = \frac{1}{ \sqrt{ 2 a } } \exp \left ( \frac{ i \pi n q}{a} \right ) \, . \end{equation} Of course, the physical energy eigenstates are linear combinations of $| \, n \, \rangle$ and $| \, -n \, \rangle$ consistent with the boundary conditions. 
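Orthonormality of the states (26) is easy to confirm numerically (an added check; the well width and grid are arbitrary choices, and the trapezoid rule is exact here because the integrand is periodic over $[-a,a]$):

```python
import numpy as np

# Orthonormality of <q|n> = e^{i pi n q / a} / sqrt(2a) on [-a, a].
# The width a and the grid resolution are arbitrary illustrative choices.
a = 1.3
q = np.linspace(-a, a, 2001)
h = q[1] - q[0]

def inner(n1, n2):
    """<n1|n2> by trapezoid quadrature (exact here: the integrand is periodic)."""
    f = np.exp(1j * np.pi * (n2 - n1) * q / a) / (2 * a)
    return np.sum(f[:-1] + f[1:]) * h / 2

d_same = inner(3, 3)   # Kronecker delta: equal indices give 1
d_diff = inner(3, 5)   # distinct indices give 0
```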
The time-slicing argument that was used to construct (\ref{seven}) can be revisited using (\ref{25}) and (\ref{26}) to obtain \begin{eqnarray} \label{27} W_{fi} & = & \langle \, n_f \, | e^{- i \hat{H} T / \hbar} | \, q_i \, \rangle \nonumber \\ & = & (2a)^{- (N+1)/2 } \sum_{n_1, \ldots, n_N} \int_{-a}^{a} d q_1 \cdots d q_N \, \times \nonumber \\ && \exp \left \{ \frac{i}{\hbar} \sum_{j=1}^{N} \left [ \frac{ n_j \pi \hbar }{ a } ( q_j - q_{j-1} ) - \epsilon H ( n_j ) \right ] - \frac{ i n_f \pi q_N}{a} \right \} \, , \end{eqnarray} where $q_0 = q_i$. Result (\ref{27}) can be rewritten using the Poisson resummation technique, which begins by using the identity \begin{equation} \label{28} \sum_{n = -\infty}^{\infty} f(n) = \sum_{k = -\infty}^\infty \int_{- \infty}^\infty dn \, f(n) \, e^{i 2 \pi k n} \, . \end{equation} Using (\ref{28}) and making the obvious definition $p_j = n_j \pi \hbar / a$ allows (\ref{27}) to be written as \begin{eqnarray} \label{29} W_{fi} & = & \frac{1}{ \sqrt{2a} } \int_{-\infty}^\infty \frac{dp_1}{2 \pi \hbar} \cdots \frac{ dp_N }{ 2 \pi \hbar} \int_{-a}^a dq_1 \cdots dq_N \sum_{k_1,\ldots,k_N} \times \nonumber \\ & & \exp \left \{ \frac{i}{\hbar} \sum_{j=1}^{N} \left [ p_j ( q_j + 2a k_j - q_{j-1} ) - \epsilon H ( p_j ) \right ] - \frac{i}{\hbar} p_f q_N \right \} \, . \end{eqnarray} Because the Hamiltonian is independent of $q$ and the sums over the $k_j$ are infinite, the sums may be absorbed by extending the range of the $q_j$ integrations. However, this is contingent on the fact that \begin{equation} \label{30} \exp \left \{ - \frac{i}{\hbar} p_f ( q_N + k_N 2a) \right \} = \exp \left \{ - \frac{i}{\hbar} p_f q_N \right \} \, , \end{equation} which holds as long as $p_f = n_f \pi \hbar / a$ and $n_f$ is an integer. Because the wave-mechanical solution to the problem was used to derive the path integral form, it is clear that condition (\ref{30}) holds. 
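The resummation identity (28) can be spot-checked with a Gaussian test function, $f(n) = e^{-\alpha n^2}$, whose transform is $\sqrt{\pi/\alpha}\,e^{-\pi^2 k^2/\alpha}$ (an added numerical aside; the value of $\alpha$ is arbitrary):

```python
import numpy as np

# Poisson resummation for f(n) = exp(-alpha n^2):
#   sum_n f(n) = sqrt(pi/alpha) * sum_k exp(-pi^2 k^2 / alpha).
alpha = 0.5
n = np.arange(-60, 61)
lhs = np.sum(np.exp(-alpha * n**2))

k = np.arange(-10, 11)
rhs = np.sqrt(np.pi / alpha) * np.sum(np.exp(-np.pi**2 * k**2 / alpha))
```

The slowly decaying sum over $n$ and the rapidly converging sum over $k$ agree to machine precision, which is the mechanism exploited in passing from (27) to (29).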
The final form of the path integral is given by \begin{equation} \label{31} W_{fi} = \frac{1}{ \sqrt{2a} } \int_{-\infty}^\infty \frac{dp_1}{2 \pi \hbar} \cdots \frac{ dp_N }{ 2 \pi \hbar} \, dq_1 \cdots dq_N \exp \left \{ - \frac{i}{\hbar} \sum_{j=1}^{N} \left [ q_j ( p_{j+1} - p_j ) + \epsilon H ( p_j ) \right ] - \frac{i}{\hbar} p_1 q_i \right \} \, , \end{equation} where the limits on the $q_j$ integrations are now $\pm \infty$. Apart from the overall factor of $ ( 2 a )^{- 1/2 }$, result (\ref{31}), with its ambiguous symbol $p_{N+1} = p_f$, is formally identical in its measure and action to the free case (\ref{seven}). In that sense information about the system has been lost in the transition from (\ref{27}) to (\ref{31}) since the form (\ref{31}) does not specify a discrete spectrum for $p_f$. {\it A priori} knowledge of the momentum spectrum is required in order that the discrete form of the Fourier transform, rather than the continuous form, is employed to obtain the propagator (\ref{four}). \subsection{Fourier Methods for Evaluating Path Integrals} A final aspect of importance regarding the Hamiltonian path integral is the method that does allow the action in the path integral to be manipulated as if the formal time derivative were a true derivative. The $q_j$ and $p_j$ variables are first translated by a classical solution to Hamilton's equations of motion consistent with the boundary conditions. This means using classical solutions for both $p$ and $q$ that satisfy the conditions \begin{equation} p_c ( t = T ) = p_f \, , \; \; q_c ( t = 0) = q_i \, . \end{equation} Because there is no initial condition for $p$ or final condition for $q$ in the original form of the transition element, the translated $p$ and $q$ variables do not necessarily vanish at {\em both} $t = 0$ and $t = T$. 
Consistent with these boundary conditions, the $2N$ fluctuation variables $q_j$ and $p_j$ are written as Fourier expansions in terms of $2N$ new variables $q_n$ and $p_n$, \begin{eqnarray} \label{33} q_j & = & \sum_{n = 0}^{N-1} q_n \sin \left ( \frac{ (2n +1) \pi t_j }{ 2T } \right ) \\ \label{33.2} p_j & = & \sum_{n=0}^{N-1} p_n \left ( \frac{(2 n +1) \pi}{2 T } \right ) \cos \left ( \frac{ (2n+1) \pi t_j }{ 2T } \right ) \, . \end{eqnarray} The expansions of (\ref{33}) satisfy the proper translated quantum boundary conditions $p ( t = T) = 0$ and $q ( t = 0) = 0$, but are arbitrary at the remaining endpoints in order to accommodate the quantum nature of the coordinates. This is an outgrowth of the uncertainty principle for canonical variables, since it forces $q_f$ to be undefined if $p_f$ is exactly known, with a similar relation between $q_i$ and $p_i$. However, the formal derivatives in the path integral now become true derivatives, since to $O (\epsilon)$ \begin{equation} \label{34} q_{j+1} - q_j \rightarrow \epsilon \sum_{n = 0}^{N-1} q_n \frac{( 2n +1) \pi}{2 T} \cos \left ( \frac{ (2n+1) \pi t_j }{2 T } \right ) \, . \end{equation} The measure is rewritten in terms of integrations over the coefficients of the Fourier expansions. This change of variables is accompanied by a Jacobian that is nontrivial, but one that can be inferred by forcing the new path integral to yield the same results as the configuration space measure version discussed in the previous part of this section. In addition to the usual Wick rotation $T \rightarrow - i T$, the Hamiltonian path integral also requires $p_n \rightarrow - i p_n$. 
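The statement (34), that the finite difference of the expansion (33) reduces to the termwise derivative to $O(\epsilon)$, can be verified numerically (an added sketch; the mode coefficients below are arbitrary test values):

```python
import numpy as np

# Finite difference of the mode expansion versus the termwise derivative,
# for a handful of arbitrary test coefficients q_n (not from the paper).
T, N = 1.0, 1000
eps = T / N
t = np.arange(N + 1) * eps
coef = np.array([1.0, -0.7, 0.4, 0.2, -0.1])      # q_n for n = 0..4
Om = (2 * np.arange(5) + 1) * np.pi / (2 * T)     # mode frequencies (2n+1)pi/2T

q = sum(c * np.sin(Om_n * t) for c, Om_n in zip(coef, Om))
dq = sum(c * Om_n * np.cos(Om_n * t) for c, Om_n in zip(coef, Om))

# q(0) = 0 and dq(T) = 0 hold by construction of the modes, while the
# finite difference tracks eps * (termwise derivative) up to O(eps^2) terms.
err = np.max(np.abs((q[1:] - q[:-1]) - eps * dq[:-1]))
```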
The case of an arbitrary cyclic Hamiltonian is particularly easy since the integrations over the $q_n$ variables yield a factor of the form \begin{equation} \label{35} \left ( \frac{ 8 T}{ \pi^2 } \right )^N \prod_{n=0}^{N-1} ( 2n+1)^{-2} \delta( p_n ) \, , \end{equation} from which the Jacobian is inferred to be \begin{equation} \label{36} J = \left ( \frac{ \pi^2 }{ 8 T } \right )^N [ ( 2N - 1)!! ]^2 \, . \end{equation} The validity of this procedure can be tested on the harmonic oscillator transition element. There the classical solutions consistent with the boundary conditions are \begin{equation} \label{37} q_c ( t ) = A \sin ( \omega t + \delta ) \, , \; \; p_c ( t ) = m \omega A \cos ( \omega t + \delta ) \, , \end{equation} where \begin{equation} \label{38} A = q_i \, {\rm csc \,} \delta \, , \; \; \cot \delta = \frac{ p_f {\rm \, sec \,} (\omega T) }{ m \omega q_i } + \tan ( \omega T ) \, . \end{equation} Using (\ref{37}) and (\ref{38}) in the harmonic oscillator action yields the result \begin{equation} \label{39} \int_0^T {\rm dt} \, {\cal L} = - q_i p_f {\rm \, sec \, } ( \omega T ) - \frac{1}{2} m \omega {q_i}^2 \tan ( \omega T ) - \frac{1}{2} \frac{ {p_f}^2 }{ m \omega } \tan ( \omega T ) \, . \end{equation} The remaining translated action reduces to Gaussians in both $p_n$ and $q_n$. Performing the integrations, combining the result with the Jacobian (\ref{36}), and undoing the Wick rotation yields the prefactor \begin{equation} \label{40} \lim_{N \rightarrow \infty} \prod_{n=0}^{N-1} \left ( 1 - \frac{ 4 \omega^2 T^2 }{ (2 n + 1)^2 \pi^2 } \right )^{-\frac{1}{2}} = \frac{ 1 }{ \sqrt{ \cos \omega T } } \; . \end{equation} Combining results (\ref{39}) and (\ref{40}) yields the correct form for the transition element (\ref{one}) for the harmonic oscillator. 
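The prefactor product is Euler's product for the cosine, $\cos z = \prod_{n\geq 0}\bigl(1 - 4z^2/((2n+1)^2\pi^2)\bigr)$ with $z = \omega T$, which can be checked numerically (an added aside; the value of $z$ is arbitrary):

```python
import numpy as np

# Euler's product for the cosine: cos(z) = prod_{n>=0} (1 - 4 z^2 / ((2n+1)^2 pi^2)).
# Truncation at 200000 factors leaves a tail error of order z^2 / (pi^2 N).
z = 0.7                                   # plays the role of omega * T
n = np.arange(200000)
partial = np.prod(1.0 - 4 * z**2 / ((2 * n + 1.0) ** 2 * np.pi**2))
```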
In Sec.~IV the ramifications of the quantum nature of the coordinate fluctuations for the boundary conditions of the canonically transformed coordinates will be discussed, and the results of this subsection will be used to define restrictions on the validity of the canonically transformed path integral. \newsection{Canonical Transformations} A classical canonical transformation is one from the coordinates $(q,p)$ to a new set of coordinates $(Q,P)$ such that the Poisson bracket structure, or equivalently the volume of phase space, is preserved. For convenience and consistency only canonical transformations of the third kind \cite{Gold} will be considered in the remainder of this paper, and these are defined by a choice for the generating function of the general form $F( p , Q , t)$. At the classical level the new variables are determined by solving the system of equations given by \begin{equation} \label{45} q = - \frac{ \partial F ( p , Q , t)}{ \partial p} \, , \; \; P = - \frac{ \partial F (p, Q, t)}{ \partial Q} \, . \end{equation} It is important to remember that $Q$ and $p$ are treated as independent in the definitions of the new coordinates given by (\ref{45}). However, the proof that the Poisson bracket structure is preserved depends on the identities obtained by differentiating (\ref{45}) and using the fact that $Q = Q(q,p)$. For example, it follows that \begin{equation} \label{45.1} 1 = \frac{ \partial q}{ \partial q} = - \frac{ \partial^2 F(p,Q,t)} {\partial Q \, \partial p} \frac{ \partial Q ( q , p , t ) }{ \partial q} \, . \end{equation} It is assumed that the equations of (\ref{45}) are well-defined and can be solved to yield $Q(q,p,t)$ and $P(q,p,t)$, or inverted to obtain $q(Q,P,t)$ and $p(Q,P,t)$. 
The action is transformed according to \begin{eqnarray} \label{46} & & \int_0^T dt \, \left [ - q \dot{p} - H ( p, q) \right ] \nonumber \\ & & = \int_0^T dt \, \left [ P \dot{Q} - \tilde{H} ( P , Q ) + \frac{dF}{dt} \right ] \nonumber \\ & & = F ( p_f , Q_f, t_f ) - F( p_i , Q_i, t_i ) + \int_0^T dt \, \left [ P \dot{Q} - \tilde{H} ( P , Q ) \right ] \, , \end{eqnarray} where \begin{equation} \label{47} \tilde{H} ( P , Q ) = H ( p(Q,P,t) , q(Q,P,t) ) + \frac{ \partial F(p(P,Q),Q,t)}{\partial t} \; . \end{equation} At the classical level there is no difficulty in obtaining initial and final values for both variables $q$ and $p$ since it is assumed that Hamilton's equations can be solved to obtain classical solutions consistent with any possible pair of boundary conditions over the arbitrary time interval $T$. The two unspecified endpoint values of the variables are simply those given by the classical solutions at the respective endpoint times. However, if canonical transformations are to be employed in a path integral setting in a manner similar to the classical result, it is necessary to deal with the quantum mechanical version of this problem, and there is no {\it a priori} reason to expect that the classical definition is consistent with the quantum mechanical transition amplitude (\ref{one}). This will be discussed in detail in Sec.~IV. It is apparent at the classical level that the values of the generating function evaluated at the endpoints, {\it i.e.}, the surface terms, correspond to a piece of the minimized original classical action not determined by the minimized transformed action. This is demonstrated by examining the well-known canonical transformation to cyclic coordinates for the harmonic oscillator. 
Using the generating function \begin{equation} \label{48} F( p , Q ) = - \frac{p^2}{2 m \omega} \tan Q \end{equation} gives \begin{equation} \label{49} Q = \arctan \frac{ m \omega q}{ p } \, , \; \; \omega P = \frac{p^2}{2 m} + \frac{1}{2} m \omega^2 q^2 \, , \end{equation} so that the transformed Hamiltonian is $\tilde{H} = \omega P$. It follows that the transformed action vanishes when evaluated along the classical trajectory $Q_c = \omega t + Q_i$ and $P_c = P_f = P_i$. Therefore, the value of the original action along the classical trajectories $q_c$ and $p_c$ is contained entirely in the endpoint contributions of the generating function. An explicit calculation using the classical solutions (\ref{37}) in (\ref{48}) with $Q_i = \delta$ and $Q_f = \omega T + \delta$ verifies that the generating function endpoint values reproduce result (\ref{39}). Because the value of the action along the classical trajectory is the phase of the quantum transition amplitude (\ref{one}) in the WKB approximation, the value of knowing the form of the generating function for a canonical transformation to cyclic coordinates becomes apparent. Such a classical generating function already gives considerable information regarding the quantum transition amplitude. However, it is important to note that the choice of $\omega$ appearing in the transformed Hamiltonian $\tilde{H}$ is arbitrary. Using the generating function \begin{equation} \label{50} F( p , Q ) = - \frac{p^2}{2 m \omega} \tan \left ( \frac{\omega}{\omega^\prime} Q \right ) \end{equation} transforms the Hamiltonian to $\omega^\prime P$. While the solution for $Q$ becomes $ Q = \omega^\prime t + Q_i$, this has no effect on the solution for the original variable $q$ because of the offsetting factors in the generating function. This is merely a reflection of the fact that scaling $Q$ can be offset by scaling $P$ and $m$ or $\omega$ when the Hamiltonian is cyclic.
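Both statements are easy to verify explicitly; the sketch below checks symbolically that the generator (\ref{48}) renders the oscillator Hamiltonian cyclic, and then checks numerically that the surface terms of the generating function carry the entire on-shell action (the trajectory parameters are arbitrary test values):

```python
import math
import sympy as sp

# --- symbolic check that (48) makes the oscillator Hamiltonian cyclic ---
p, Q, m, w = sp.symbols('p Q m omega', positive=True)
F = -p**2 * sp.tan(Q) / (2*m*w)            # generating function (48)
q = -sp.diff(F, p)                         # q = p tan(Q)/(m omega), as in (49)
P = -sp.diff(F, Q)                         # omega P = p^2/2m + m omega^2 q^2/2
H = p**2/(2*m) + m*w**2*q**2/2
assert sp.simplify(H - w*P) == 0           # transformed Hamiltonian is omega*P

# --- numerical check that the surface terms carry the whole action ---
mv, wv, A, d, T = 1.0, 1.3, 0.8, 0.4, 2.0  # arbitrary trajectory parameters
qc = lambda t: A*math.cos(wv*t + d)
pc = lambda t: -mv*wv*A*math.sin(wv*t + d)
pdot = lambda t: -mv*wv**2*A*math.cos(wv*t + d)
Fc = lambda t: -pc(t)*qc(t)/2              # (48) on-shell, since tan Q = m w q / p
E = mv*wv**2*A**2/2                        # conserved energy

n = 200000
dt = T/n
# midpoint-rule integral of the original action -q*pdot - H along the path
action = sum((-qc((k + 0.5)*dt)*pdot((k + 0.5)*dt) - E)*dt for k in range(n))
assert math.isclose(action, Fc(T) - Fc(0), rel_tol=1e-6, abs_tol=1e-6)
```

Since the transformed action vanishes along $Q_c = \omega t + Q_i$, $P_c$ constant, the final assertion is exactly the statement that the endpoint values of $F$ reproduce the minimized original action.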
The generating function can also undergo arbitrary translations of the $Q$ variable as well, which are offset simply by choosing a different value for $Q_i$ in the classical solution. The classical harmonic oscillator solution can be generalized to power potentials of the form \begin{equation} \label{51} H = \frac{p^2}{2m} + \frac{1}{n} m \lambda^n q^n \; , \end{equation} where $\lambda$ is a constant with the natural units of inverse length. Hamiltonians of the form (\ref{51}) are rendered cyclic by using the generating function \begin{equation} \label{52} F = - \frac{1}{2 \alpha} \left ( \frac{p^2}{m \lambda} \right )^{\alpha} f^{\gamma} (Q)\, , \end{equation} where $\alpha = (n+ 2)/2n$ and $\gamma = - 2/n$. Using this generating function gives \begin{eqnarray} \label{53} q & = & \frac{p^{2 \alpha -1}}{ (m \lambda)^\alpha} f^{\gamma} (Q) \; , \\ \label{54} P & = & \frac{\gamma}{2 \alpha} \frac{ p^{2 \alpha} }{(m \lambda)^\alpha} f^{(\gamma - 1)} (Q) \frac{\partial f(Q)}{\partial Q} \, . \end{eqnarray} Substituting (\ref{53}) into the original Hamiltonian gives \begin{equation} \label{55} H = p^2 \left [ \frac{1}{2m} + \frac{1}{n \lambda} \left ( \frac{ \lambda }{ m} \right )^{\frac{n}{2}} f^{\gamma n} (Q) \right ] \, . \end{equation} Using (\ref{54}) shows that (\ref{55}) reduces to \begin{equation} \label{56} \tilde{H} = \omega P^\beta \, , \end{equation} where $ \beta = 2n / (n+2) = 1 / \alpha$, if $f(Q)$ is chosen to satisfy the first-order differential equation \begin{equation} \label{57} \frac{ \partial f(Q) }{ \partial Q} = \frac{ 2 \alpha}{ \gamma } \left ( \frac{ \lambda}{ 2 \omega} \right )^\alpha \left [ f^2 (Q) + \frac{ 2m}{ n \lambda } \left ( \frac{ \lambda }{ m } \right )^{\frac{n}{2}} \right ]^\alpha \, . \end{equation} While a particular value for the $\omega$ in (\ref{56}) can be chosen, such a choice is arbitrary in the same way that the $\omega^\prime$ in (\ref{50}) is arbitrary. 
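The chain from (\ref{51}) to (\ref{57}) can be verified for a concrete power; the sympy sketch below uses $n = 4$ (an arbitrary test case) and checks that, with $f$ obeying (\ref{57}), the Hamiltonian (\ref{51}) indeed collapses to $\omega P^\beta$:

```python
import sympy as sp

p, m, lam, w, f = sp.symbols('p m lamda omega f', positive=True)
n = 4                                       # arbitrary test power
alpha = sp.Rational(n + 2, 2*n)
gamma = sp.Rational(-2, n)
beta  = sp.Rational(2*n, n + 2)

zeta2 = (2*m/(n*lam)) * (lam/m)**sp.Rational(n, 2)                 # eq. (59)
fp = (2*alpha/gamma) * (lam/(2*w))**alpha * (f**2 + zeta2)**alpha  # eq. (57)

q = p**(2*alpha - 1) / (m*lam)**alpha * f**gamma                   # eq. (53)
P = (gamma/(2*alpha)) * p**(2*alpha)/(m*lam)**alpha * f**(gamma - 1) * fp

H = p**2/(2*m) + sp.Rational(1, n)*m*lam**n*q**n                   # eq. (51)
# numerical spot-check at arbitrary positive values of all quantities
resid = (H - w*P**beta).subs({p: 1.3, m: 0.7, lam: 1.1, w: 0.9, f: 1.7})
assert abs(float(resid)) < 1e-12
```

Repeating the substitution for other values of $n$ (or other random evaluation points) checks the identity just as easily.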
This arbitrariness in scale is similar to that which appears in equivariant cohomology approaches to the same problem \cite{Dykstra}. The sign for $\omega$ is determined from the range of the original Hamiltonian, and can be either positive or negative if the original Hamiltonian was such that $n <0$. Choosing a negative sign for $\omega$ will affect the final form of expression (\ref{57}). However, it is important to note that if $n$ is odd, the range of the original Hamiltonian is $- \infty$ to $\infty$. This will introduce difficulties in maintaining the range of the Hamiltonian in some cases. This will be demonstrated for the specific case of a linear potential in Sec.~V.B. Equation (\ref{57}) can be formally solved by integration, so that \begin{equation} \label{58} \frac{ 2 \alpha}{ \gamma } \left ( \frac{ \lambda}{ 2 \omega} \right )^\alpha ( Q - Q_i ) = \int \frac{ df }{ \left [ f^2 + \zeta^2 \right ]^\alpha} \, , \end{equation} where \begin{equation} \label{59} \zeta^2 = \frac{ 2m}{ n \lambda } \left ( \frac{ \lambda }{ m } \right )^{\frac{n}{2}} \, . \end{equation} The right hand side of (\ref{58}) is, up to the factor on the left hand side, the functional inverse of $f$, written $g$, so that $g ( f ( Q) ) = Q$. Therefore, inverting (\ref{58}), where possible, yields the function $f(Q)$ appearing in the canonical transformation. However, even if the expression generated by (\ref{58}) cannot be exactly inverted, it can still be used to determine the classical form for $Q(q,p)$ in the following way. Form (\ref{53}) shows that \begin{equation} \label{60} Q = g \left ( \left [ \frac{ m^\alpha \lambda^\alpha q }{ p^{2 \alpha - 1} } \right ]^{\frac{1}{\gamma}} \right ) \, , \end{equation} so that the result of the integral (\ref{58}), written as a function of $f$, must coincide with result (\ref{60}).
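As an illustration of this prescription, the $n = 2$ case can be pushed through explicitly: there $\alpha = 1$ and $\zeta^2 = 1$, the integral in (\ref{58}) is an arctangent, and (with the choices $\omega = \lambda$ and integration constant $Q_i = \pi/2$ that appear in the oscillator discussion) the inversion recovers $Q = \arctan ( m \omega q / p )$ of (\ref{49}). A sympy sketch with a numerical spot-check:

```python
import sympy as sp

q, p, m, lam = sp.symbols('q p m lamda', positive=True)
w = lam                                # the choice omega = lambda for n = 2
x = sp.Symbol('x', positive=True)

g = sp.integrate(1/(x**2 + 1), x)      # integral in (58) for n = 2: arctan(x)
f = p/(m*lam*q)                        # argument of g in (60) for n = 2
Q = sp.pi/2 - g.subs(x, f)             # Q_i = pi/2; left-hand coefficient is -1

# compare with Q = arctan(m*omega*q/p) of (49) at arbitrary positive values
val = (Q - sp.atan(m*w*q/p)).subs({q: 0.7, p: 1.3, m: 0.9, lam: 1.1})
assert abs(float(val)) < 1e-12
```

The numerical comparison is used in the last step because the identity $\arctan x + \arctan (1/x) = \pi/2$, valid for $x > 0$, is not always recognized by symbolic simplification.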
Therefore, substituting \begin{equation} \label{61} f = \left ( \frac{ m^\alpha \lambda^\alpha q }{ p^{2 \alpha - 1} } \right )^{\frac{1}{\gamma}} \end{equation} into the result of the integration in (\ref{58}) gives $Q = Q( q, p)$. It is easy to show that the choices $n = 2$, $\omega = \lambda$, and $Q_i = \pi / 2$ reproduce the harmonic oscillator generating function (\ref{50}). However, the cyclic form (\ref{56}) for the transformed Hamiltonian is not unique since a second transformation using the generating function $F = - f(P) Q^\prime $ results in a Hamiltonian that is an arbitrary function of $P^\prime = f(P)$ alone. Nevertheless, in any cyclic Hamiltonian the remaining variable is some function of the original Hamiltonian, {\it i.e.}, $P = P ( H ( p, q) )$. Any attempts to use these results within the quantum mechanical context are immediately beset by ordering problems. While the classical Poisson bracket of $Q$ and $P$ remains unity, the original algebra of $q$ and $p$, coupled with the transcendental nature of the transformation, results in the commutator of $Q$ and $P$ being poorly defined. To lowest order in $\hbar$ it is true that $ [ Q , P ] = i \hbar$, but additional powers appear that are dependent on the ordering convention chosen for the expansion of the transcendental functions. In order to preserve the commutation relations it is necessary to institute a unitary transformation of the original operator variables, rather than a canonical transformation. Anderson \cite{Anderson} has discussed enlarging the Hilbert space of the original theory to accommodate non-unitary transformations that alter the commutation relations. Although some of the results obtained in such an approach are similar to those of canonical transformations, this is a fundamentally different approach to solving the equations of motion. As a result, it will not be discussed here. 
These ordering ambiguities can be demonstrated by examining the canonical transformation from Cartesian to spherical coordinates in two dimensions. The generating function for this transformation is given by \begin{equation} \label{SC1} F = - p_x r \cos \theta - p_y r \sin \theta \, , \end{equation} and this yields the standard result $x = r \cos \theta$, $y = r \sin \theta$, $ P_r = p_x \cos \theta + p_y \sin \theta$, and $P_\theta = - p_x r \sin \theta + p_y r \cos \theta$. In order to invert these equations it is necessary to choose an ordering convention for the operators. The most reliable of these is Weyl ordering, which symmetrizes all non-commuting operators. The result is \begin{eqnarray} p_x & = & \cos \theta \, P_r - \frac{ \sin \theta}{2r} P_\theta - P_\theta \frac{ \sin \theta}{2r} \\ p_y & = & \sin \theta \, P_r + \frac{\cos \theta}{2r} P_\theta + P_\theta \frac{\cos \theta}{2r} \, . \end{eqnarray} Using the commutators for the spherical coordinates yields \begin{equation} \label{SC2} {p_x}^2 + {p_y}^2 = {P_r}^2 + \frac{1}{r^2} {P_\theta}^2 + \frac{i \hbar}{r} P_r - \frac{\hbar^2}{ 4 r^2} \, , \end{equation} showing that this transformation takes a cyclic Hamiltonian into a non-cyclic Hamiltonian. It is not difficult to see that the $O(\hbar)$ term is essential to maintaining self-adjointness of the Hamiltonian when written in terms of spherical coordinates. This can be seen from an integration by parts for the expectation value of the Hamiltonian in spherical coordinates \cite{Swan2}. Of course, such a term is not generated by the classical transformation, giving further evidence that classical canonical transformations and their quantum counterparts may differ by terms that are functions of $\hbar$. The numerical factor appearing before the $O ( \hbar^2 )$ term in (\ref{SC2}) is a function of the Weyl ordering chosen for the original operator expressions. 
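In the classical limit the ordering terms drop out, and the inversion reduces to the familiar point transformation; a short sympy sketch confirming that the $\hbar$-dependent terms in (\ref{SC2}) are purely ordering artifacts:

```python
import sympy as sp

r, th, Pr, Pt = sp.symbols('r theta P_r P_theta')

# classical (hbar -> 0) limit of the Weyl-ordered inversion quoted above
px = sp.cos(th)*Pr - sp.sin(th)/r * Pt
py = sp.sin(th)*Pr + sp.cos(th)/r * Pt

# the O(hbar) and O(hbar^2) terms of (SC2) are absent classically
assert sp.simplify(px**2 + py**2 - (Pr**2 + Pt**2/r**2)) == 0
```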
Nevertheless, because $[ Q, P] \approx i \hbar$, it is interesting to treat the new variables as if they were canonically conjugate quantum variables and pursue the solution of the transformed system (\ref{56}). Alternatively, one could start with the Hamiltonian (\ref{56}), enforce the exact commutator $[ Q, P] = i \hbar$, and solve for the energy levels of the system. While it is clear that the previously mentioned ordering problems prevent this solution from being that of the original system that led to the cyclic Hamiltonian, such a solution can serve as an approximation to $O(\hbar^2)$ of the original Hamiltonian. This solution can be found in a formal manner by assuming a discrete spectrum, {\it i.e.}, bounded from below, and defining the creation and annihilation operators \begin{equation} \label{62} a^\dagger = e^{i Q \hbar^{\alpha -1} } \sqrt{ (P - \delta) \hbar^{- \alpha} } \, , \; \; a = \sqrt{ (P - \delta)\hbar^{-\alpha }} \, e^{- i Q \hbar^{\alpha -1 } } \, . \end{equation} Using the commutator $ [ Q , P ] = i \hbar $ gives $[ a , a^\dagger] = 1$ regardless of the value of $\delta$. Since $\hbar^\alpha a^\dagger a = P - ( 1 + \delta) \hbar^\alpha$, the Hamiltonian (\ref{56}) becomes $H = \hbar \omega ( a^\dagger a + 1 + \delta )^\beta$. Defining a ground state $ | \, 0 \, \rangle$ by the relation $ a | \, 0 \, \rangle = 0$, it follows that the excitations of the system are obtained by applying suitably normalized factors of $a^\dagger$ to the ground state, leading to an energy spectrum $ E_n = \hbar \omega ( n + 1 + \delta)^\beta$. The arbitrariness of $\delta$ can be used to offset the ordering ambiguities in the canonical transformation generated by the original algebra of $q$ and $p$. The simple harmonic oscillator solution demonstrates this aspect. 
Using (\ref{49}) and ignoring commutators of $q$ and $p$ shows that the annihilation operator of (\ref{62}) contains the factor \begin{equation} \label{63} e^{- i Q} = - i \sqrt{ \frac{ m \omega }{ 2 P } } \left ( q + \frac{ i p}{ m \omega} \right ) \, . \end{equation} The term in the parentheses in (\ref{63}) is, up to a factor, the standard annihilation operator associated with the harmonic oscillator. It is also true that ignoring the ordering ambiguities has resulted in an expression that does not satisfy $e^{-iQ} e^{iQ} = 1$ at the quantum level, exposing the formal nature of the manipulations that led to (\ref{63}). Choosing $\delta = - 1/2$ reproduces the correct harmonic oscillator energy spectrum. It is important to determine if the general form for the bound state spectrum of the Hamiltonian is in any way valid for other systems, since the harmonic oscillator is a notoriously pliable system. Choosing $n = -1$ in (\ref{51}) and restricting to $\lambda, q > 0$ produces the one-dimensional Coulomb potential, whose associated Schr\"odinger equation can be readily solved by standard methods. The eigenvalue equation \begin{equation} \label{64} \left ( - \frac{ \hbar^2}{ 2 m } \frac{d^2}{ dq^2} - \frac{m}{ \lambda q} \right ) \psi_n ( q ) = E_n \psi_n ( q ) \, , \end{equation} possesses the bound state energies $E_n = - m^3 / (2 \hbar^2 \lambda^2 n^2)$. The canonically transformed Hamiltonian (\ref{51}) gives $\beta = - 2$ for this case, so that the choice $\omega = - m^3 / ( 2 \hbar^3 \lambda^2 )$ and $\delta = 0$ reproduces the bound state energy spectrum of the Schr\"odinger equation since $\tilde{H} = - \hbar | \omega | ( a^\dagger a + 1 )^{-2} $. In addition, it is possible to evaluate the integral (\ref{58}) for this case. 
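Both spectral matches above can be spot-checked numerically (a sketch; the unit choices $\hbar = m = \lambda = 1$ and the oscillator frequency below are arbitrary):

```python
import math

hbar, m, lam = 1.0, 1.0, 1.0              # arbitrary unit choices

# harmonic oscillator: beta = 1 (n = 2), delta = -1/2, omega arbitrary
w_ho = 2.0
ho = [hbar*w_ho*(k + 1 - 0.5)**1 for k in range(5)]
assert all(math.isclose(E, hbar*w_ho*(k + 0.5)) for k, E in enumerate(ho))

# one-dimensional Coulomb: beta = -2, delta = 0, omega = -m^3/(2 hbar^3 lam^2)
w_c = -m**3/(2*hbar**3*lam**2)
coulomb = [hbar*w_c*(k + 1)**(-2) for k in range(5)]           # k = a^dag a eigenvalue
schrod = [-m**3/(2*hbar**2*lam**2*n**2) for n in range(1, 6)]  # levels of (64)
assert all(math.isclose(a, b) for a, b in zip(coulomb, schrod))
```

The shift between the oscillator label $k = 0, 1, 2, \ldots$ and the Coulomb quantum number $n = 1, 2, 3, \ldots$ is absorbed by the $+1$ in $(a^\dagger a + 1 + \delta)^\beta$.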
Following the prescription outlined in (\ref{60}) and (\ref{61}) and using the value $\omega$ determined from the differential equation gives the result \begin{equation} \label{65} \hbar^{3/2} ( Q - Q_i ) = - \left ( \frac{ \lambda^2 }{ 2 m^3} \right )^{\frac{1}{2}} pq \sqrt{ \frac{m}{\lambda q} - \frac{p^2}{2m} } - 2 \arcsin \sqrt{ \frac{ \lambda q p^2 }{ 2 m^2 } } \, . \end{equation} The associated annihilation operator (\ref{62}) possesses the factor \begin{eqnarray} \label{66} \exp ( - i Q \hbar^{-3/2} ) & \propto & \exp \left ( 2 i \arcsin \sqrt{ \frac{ \lambda q p^2 }{ 2 m^2 } } \right ) \nonumber \\ & = & - \frac{ 2 \lambda q}{m} \left ( H ( p , q ) + \frac{m}{ 2 \lambda q } \right ) + 2 i \sqrt{ - \frac{\lambda^2 q^2 p^2}{ 2 m^3} H(p,q) } \; . \end{eqnarray} Setting the Hamiltonian equal to its ground state eigenvalue, $E_1 = - m^3 / ( 2 \lambda^2 \hbar^2 ) $, reduces (\ref{66}) to \begin{equation} \label{67} a \, \propto \, \frac{m^2}{ \lambda \hbar^2} q - 1 + \frac{i}{\hbar} q p \, \rightarrow \, \frac{m^2}{ \lambda \hbar^2} q - 1 + q \frac{ \partial }{ \partial q } \, , \end{equation} and this differential operator annihilates the ground state wave function determined from the Schr\"odinger equation, $\psi_0 = C q \exp ( - m^2 q / \hbar^2 \lambda ) $. As another example, letting $n \rightarrow \infty$ in (\ref{51}) produces a potential that is zero for $|q| < 1 / \lambda$ and infinite for $|q| > 1 / \lambda$. This limit therefore corresponds to a particle in an infinite well of width $2 / \lambda$. In this limit $\beta \rightarrow 2$, so that choosing $\omega = \pi^2 \lambda^2 \hbar / 8 m$ and $\delta = -1$ allows (\ref{56}) to reproduce the standard square well energy spectrum $E_n = n^2 \hbar \omega$. The form for $f(Q)$ given by (\ref{58}) for this limit is not useful since it can be shown to correspond to a mapping of the interval $2/ \lambda$ into the whole real line. 
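The infinite-well limit can likewise be spot-checked: with $\omega = \pi^2 \lambda^2 \hbar / 8 m$ and $\delta = -1$, the levels $E_n = n^2 \hbar \omega$ should coincide with the textbook spectrum of a well of width $2 / \lambda$ (the unit choices below are arbitrary):

```python
import math

hbar, m, lam = 1.0, 1.0, 1.0                   # arbitrary unit choices
L = 2.0/lam                                    # well width in the n -> infinity limit
w = math.pi**2 * lam**2 * hbar / (8*m)         # omega quoted in the text

cyclic = [n**2 * hbar * w for n in range(1, 6)]            # E_n = n^2 hbar omega
standard = [n**2 * math.pi**2 * hbar**2 / (2*m*L**2) for n in range(1, 6)]
assert all(math.isclose(a, b) for a, b in zip(cyclic, standard))
```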
It is, however, possible to solve the classical square well problem using a canonical transformation of the form $F ( p , Q ) = - p \, f(Q)$, where the function $f$ is chosen to be \begin{equation} \label{68} f ( Q ) = \frac{8}{ \lambda \pi^2} \sum_{n = 1, 3, 5, \ldots}^{\infty} \frac{ (-1)^{(n-1)/2} }{ n^2} \sin ( n \pi Q ) \, . \end{equation} The Fourier series (\ref{68}) is the triangle wave with period $2$ and maxima and minima of $ \pm \lambda^{-1}$. The derivative of (\ref{68}) gives the square wave with values $ \pm 2 \lambda^{-1}$, so that \begin{equation} \label{69} \left ( \frac{ \partial f ( Q ) }{ \partial Q } \right )^2 = \frac{4}{ \lambda^2} \, . \end{equation} As a result, the free Hamiltonian is mapped into another free Hamiltonian under the action of the canonical transformation, \begin{equation} \label{70} P = p \, \frac{ \partial f ( Q ) }{ \partial Q} \, \Rightarrow \, \frac{p^2}{ 2 m } \, \rightarrow \, \frac{ \lambda^2 P^2}{ 8 m } \, , \end{equation} so that the classical result for the evolution of $Q$ is \begin{equation} \label{71} Q = Q_i + \frac{ \lambda^2 P t }{ 4 m } \, . \end{equation} Whereas the original momentum $p$ oscillates between a positive and negative value, the new variable $P$ is truly a constant of motion. The classical canonical transformation gives \begin{equation} \label{72} q = f \left ( Q_i + \frac{ \lambda^2 P t }{ 4 m } \right ) \, , \; \; p = \frac{ P }{ \frac{\partial f}{ \partial Q}} \, , \end{equation} and this describes the bouncing motion of the classical particle in the square well. \newsection{Quantum Canonical Transformations and Anomalies} In the operator approach to quantum mechanical systems any nontrivial change of variables is complicated by the ordering and noncommutativity of the constituent operators that occur in expressions. Such difficulties are not immediately apparent in the path integral expression (\ref{seven}) due to the $c$-number form of the variables in the action.
However, closer inspection of the action in (\ref{seven}) shows that the formal time derivatives do not behave in a way that allows the classical canonical transformation to be implemented, since $q_{j+1} - q_j$ is not {\it a priori} $O( \epsilon )$. For this reason the implementation of canonical transformations in the path integral formalism cannot in general reproduce the transformed classical action. In fact, it would be an error in most cases if it did, since using the classical result in the action of the transformed path integral would yield a transition element that was inconsistent with the results obtained from the Schr\"odinger equation, operator techniques, or the original untransformed path integral. An alternate approach must be taken, and in this paper a variant of the method of Fukutaka and Kashiwa \cite{Fukutaka} will be used. This approach can be inferred by examining the ramifications of using a new set of canonically conjugate variables to construct the path integral. The phase space of a quantum mechanical system may possess unusual properties, as the following simple argument demonstrates. The standard configuration space transition element can be written \begin{equation} \label{77.2} \langle \, q(0) \, | \, q(T) , T \, \rangle = G ( q(0), q(T), T ) \exp i W ( q(0), q(T), T) \; , \end{equation} and upon taking the modulus squared, integrating over $q(T)$, and using the completeness of the position states, it follows that \begin{equation} \label{77.3} \int dq(T) \, | G ( q(0), q(T), T) |^2 = \langle \, q(0) \, | \, q(0) \, \rangle = \int \frac{ dp(0) }{ 2 \pi \hbar} \; . \end{equation} For a quadratic Hamiltonian, it is well known that the function $G$ is independent of $q(0)$ and $q(T)$ \cite{Feyn}, so that (\ref{77.3}) relates the volumes of quantum phase space components to each other. 
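Since for a quadratic Hamiltonian the prefactor is independent of the endpoint coordinates, its constant value may be written $G_0$ (notation introduced here only for this check); then (\ref{77.3}) relates the two volumes directly,

```latex
\int dq(T) \, | G_0 |^2 = | G_0 |^2 \int dq(T) = \int \frac{ dp(0) }{ 2 \pi \hbar }
\;\; \Longrightarrow \;\;
\int dq(T) = \frac{ 1 }{ 2 \pi \hbar \, | G_0 |^2 } \int dp(0) \; .
```

Inserting the free-particle value $| G_0 |^2 = m / 2 \pi \hbar T$ reproduces (\ref{77.4}).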
For example, the free particle is such that \begin{equation} G ( q(0), q(T) , T ) = \sqrt{ \frac{m }{ 2 \pi i \hbar T } } \; , \end{equation} so that (\ref{77.3}) gives \begin{equation} \label{77.4} \int dq(T) = \frac{T}{m} \int dp(0) \; . \end{equation} Result (\ref{77.4}) is reminiscent of the spreading of a wave-packet for the free particle. A similar analysis for the harmonic oscillator gives \begin{equation} \int dq(T) = \frac{ \sin \omega T }{ m \omega } \int dp(0) \; . \end{equation} Of course, both of the phase space volumes appearing in these expressions are infinite, and the comparison of infinities is a poorly defined endeavor. Nevertheless, these results hint at a richer structure in the quantum mechanical phase space and suggest that this structure is related to the prefactor $G$. If there exist new conjugate operators, $\hat{Q}$ and $\hat{P}$, at the quantum level, it is then natural to construct the path integral using their eigenstates as intermediate states. This means a repetition of the steps used in Sec.~II that led to (\ref{seven}), using as a unit projection operator \begin{equation} \label{78} \hat{\leavevmode\hbox{\small1\kern-3.8pt\normalsize1}} = \int \frac{dP}{ 2 \pi \hbar } \, dQ \, | \, Q \, \rangle e^{i Q P / \hbar } \langle \, P \, | \, . \end{equation} In so doing, two difficulties occur immediately. The first is the evaluation of the matrix elements of the original Hamiltonian, $H ( \hat{p}, \hat{q}, t )$, in the new states. The second problem is the endpoint evaluation. While the intermediate states are the new ones, the endpoint states of the transition element are still eigenstates of the old operators. In the transition element of (\ref{one}) there are two inner products of importance to the final form of the path integral constructed using $N$ copies of the unit projection operator (\ref{78}), and these are $\langle \, p_f \, | \, Q_N \, \rangle$ and $ \langle \, P_1 \, | \, q_i \, \rangle$.
In some simple cases, such as the transformation from Cartesian to polar coordinates, it is possible to obtain exact expressions for these inner products. In most cases it is not. In order to evaluate these inner products, a general form for them will be assumed, and a consistency condition necessary to maintain (\ref{78}) as a unit projection operator will be derived. This result will serve to define a quantum mechanical version of canonical transformations that is similar in structure to that proposed by Fukutaka and Kashiwa \cite{Fukutaka}. If (\ref{78}) is to hold, the form of the inner products must be such that \begin{equation} \label{79} \langle \, p_f \, | \, q_i \, \rangle = \int \frac{ dP_1 }{ 2 \pi \hbar } \, dQ_N \, \langle \, p_f \, | \, Q_N \, \rangle \, e^{i P_1 Q_N / \hbar } \langle \, P_1 \, | \, q_i \, \rangle \, . \end{equation} The new variables and the inner products are defined in the following way. The inner products are written formally in terms of some function $F ( p , Q ) $, \begin{eqnarray} \label{80} \langle \, p_f \, | \, Q_N \, \rangle & = & \exp \left \{ \frac{i}{\hbar} \left [ P_f ( Q_f - Q_N ) + F ( p_f , Q_f ) \right ] \right \} \, , \\ \label{81} \langle \, P_1 \, | \, q_i \, \rangle & = & \exp \left \{ - \frac{i}{\hbar} \left [ P_1 Q_i + F ( p_i , Q_i ) \right ] \right \} \, . \end{eqnarray} Inserting forms (\ref{80}) and (\ref{81}) into (\ref{79}) gives \begin{equation} \label{82} \langle \, p_f \, | \, q_i \, \rangle = \exp \left \{ \frac{i}{\hbar} \left [ P_f ( Q_f - Q_i ) + F ( p_f , Q_f ) - F ( p_i , Q_i ) \right ] \right \} \, . \end{equation} In order that (\ref{82}) reduces to the standard result, it is necessary to identify \begin{eqnarray} \label{83} - P_f ( Q_f - Q_i ) & = & F( p_f , Q_f ) - F ( p_f , Q_i ) \, , \\ \label{84} - q_i ( p_f - p_i ) & = & F( p_f , Q_i ) - F ( p_i , Q_i ) \, . 
\end{eqnarray} For the identifications of (\ref{83}) and (\ref{84}) the inner product of (\ref{82}) reduces to \begin{equation} \label{85} \langle \, p_f \, | \, q_i \, \rangle = \exp \left [ - \frac{i}{\hbar} \, q_i ( p_f - p_i ) \right ] \, , \end{equation} which is the correct result if the restriction $p_i = 0$, familiar from the discussion in Sec.~II.B, is enforced. Although identifications (\ref{83}) and (\ref{84}) result in infinite series definitions of the new variables $P$ and $Q$, the leading term of the expansions reproduces the classical result. For example, (\ref{83}) gives \begin{equation} \label{86} P_f = - \frac{ \partial F ( p_f , Q_f )}{ \partial Q_f } + \frac{1}{2} \frac{ \partial^2 F( p_f, Q_f ) }{ \partial {Q_f}^2} ( Q_f - Q_i ) + \ldots \, , \end{equation} so that the first term coincides with the time independent form of the classical canonical transformation to the new variable $P$. In addition, the quantum counterparts of identities such as (\ref{45.1}) are altered. This will be discussed later in this section. The ``infinitesimal'' versions of (\ref{83}) and (\ref{84}), to be used in defining the path integral variables, are given by \begin{eqnarray} \label{87} P_j & = & - \frac{ F( p_j , Q_j ) - F ( p_j , Q_{j-1} ) } { \Delta Q_j } \, , \\ \label{88} q_j & = & - \frac{ F( p_{j+1} , Q_j ) - F ( p_j , Q_j ) }{ \Delta p_j }\, , \end{eqnarray} where $\Delta Q_j = Q_j - Q_{j-1}$ and $\Delta p_j = p_{j+1} - p_j $. 
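The expansion (\ref{86}) and its infinitesimal analogue (\ref{87}) can be checked symbolically for a concrete generator; the sketch below uses the oscillator generator (\ref{48}) with $m = \omega = 1$ (an arbitrary test case):

```python
import sympy as sp

p, Q, dQ = sp.symbols('p Q dQ')
F = -p**2 * sp.tan(Q) / 2                 # generator (48) with m = omega = 1

Pdef = -(F - F.subs(Q, Q - dQ)) / dQ      # finite-difference definition (87)
series = sp.series(Pdef, dQ, 0, 2).removeO()
expected = -sp.diff(F, Q) + sp.diff(F, Q, 2)*dQ/2   # first two terms of (86)
assert sp.simplify(series - expected) == 0
```

The leading term is the classical transformation $P = - \partial F / \partial Q$, and the correction linear in $\Delta Q$ is the first of the quantum modifications discussed above.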
Using these definitions of the new variables allows the formal time derivatives in the path integral action to be transformed appropriately, since (\ref{87}) and (\ref{88}) give \begin{equation} \label{89} - q_j ( p_{j+1} - p_j ) = P_{j+1} ( Q_{j+1} - Q_j ) + F ( p_{j+1} , Q_{j+1} ) - F ( p_j , Q_j ) \, , \end{equation} and the action sum in the path integral (\ref{seven}) therefore becomes \begin{equation} \label{90} - \sum_{j=0}^{N} q_j ( p_{j+1} - p_j ) = F( p_f , Q_f ) - F(p_i , Q_i ) + \sum_{j=0}^{N} P_{j+1} ( Q_{j+1} - Q_j ) \, . \end{equation} Result (\ref{90}) is similar in form to the standard endpoint terms generated in the action by a canonical transformation. It is important to remember that this result is valid only for the case that $p_i = 0$. The form of the transformed Hamiltonian appearing in the action is complicated by the dependence of the old variables, $q$ and $p$, on $\Delta P$ and $\Delta Q$, as well as $P$ and $Q$. From (\ref{87}) it follows that $p_j$ is a function of $P_j$, $Q_j$, and $Q_{j-1}$, but that the dependence on $Q_{j-1}$ can be expressed in a power series in $\Delta Q_j$, \begin{equation} \label{91} P_j = \sum_{n=1}^{\infty} \frac{1}{n!} \frac{ \partial^n F}{ \partial {Q_j}^n} \, ( - \Delta Q_j )^{n -1} \; . \end{equation} The importance of the leading behavior of $\Delta Q$ in $\epsilon$, discussed in Sec.~II.B, is now apparent. The form of the transformed Hamiltonian will depend critically on whether the terms containing $ \Delta Q$ and $\Delta P$ are suppressed by the overall factor of $\epsilon$ that prefaces the Hamiltonian. It should be clear from the discussion of Sec.~II that these $\Delta$ terms will be suppressed by {\em some} power of $\epsilon$; it is not clear until the specific system and transformation are chosen if they will still contribute to the path integral when the infinite sum is evaluated. 
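The identity (\ref{89}) and the telescoped sum (\ref{90}) are exact consequences of the definitions (\ref{87}) and (\ref{88}), whatever the lattice spacing; a short numerical sketch (the generator and the random lattice values are arbitrary test choices):

```python
import math, random

random.seed(0)
F = lambda p, Q: -p**2 * math.tan(Q) / 2   # generator (48), an arbitrary test case

N = 10
p = [random.uniform(0.1, 1.0) for _ in range(N + 2)]   # lattice values p_0 .. p_{N+1}
Q = [random.uniform(0.1, 1.0) for _ in range(N + 2)]

# definitions (87) and (88)
P = [None] + [-(F(p[j], Q[j]) - F(p[j], Q[j - 1])) / (Q[j] - Q[j - 1])
              for j in range(1, N + 2)]
q = [-(F(p[j + 1], Q[j]) - F(p[j], Q[j])) / (p[j + 1] - p[j])
     for j in range(N + 1)]

# both sides of the transformed action sum (90)
lhs = -sum(q[j] * (p[j + 1] - p[j]) for j in range(N + 1))
rhs = (F(p[N + 1], Q[N + 1]) - F(p[0], Q[0])
       + sum(P[j + 1] * (Q[j + 1] - Q[j]) for j in range(N + 1)))
assert math.isclose(lhs, rhs, abs_tol=1e-9)
```

The telescoping holds for any choice of $F$ and lattice values; all of the $\epsilon$-dependence resides in the transformed Hamiltonian, whose $\Delta$ terms may or may not contribute.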
If they do, then the transformed quantum mechanical Hamiltonian will differ from the classical transformed Hamiltonian. The transformed Hamiltonian will therefore be written $ H ( p_j , q_j ) = \tilde{H} ( P_j , Q_j, \Delta P_j, \Delta Q_j )$, since it is not {\it a priori} obvious that the $\Delta$ terms can be suppressed. From the discussion in Sec.~II.B it is apparent there are cases where such terms can contribute to the evaluation of the path integral, and, at least for one of the cases discussed there, can become $O( \hbar )$ terms. These are the path integral counterparts of the ordering ambiguities in the operator approach to canonical transformations, and, in a loose sense, represent commutators between the old canonical variables $\hat{q}$ and $\hat{p}$. These results can be generalized to the case that the function $F$ or the original Hamiltonian have explicit time-dependence. Denoting the function as $F ( p_j, Q_j, t_j )$, an analysis similar to that which led to (\ref{87}) and (\ref{88}) gives \begin{eqnarray} \label{90a} P_j & = & - \frac{ F(p_j, Q_j, t_j) - F (p_j, Q_{j-1}, t_j )}{\Delta Q_j} \; , \\ \label{90b} q_j &=& - \frac{ F(p_{j+1}, Q_j, t_j) - F (p_j, Q_j, t_j )}{\Delta p_j} \; , \\ \label{90c} H ( p_j , q_j , t_j ) & = & \tilde{H} ( P_j , Q_j , \Delta Q_j , \Delta P_j , t_j ) + \frac{ \partial F ( p_{j+1} , Q_j , t_j ) }{ \partial t_j } \; . \end{eqnarray} The result (\ref{90c}) is valid only in the limit that $t_{j+1} - t_j = \epsilon \rightarrow 0$. However, the identifications of (\ref{90a}) through (\ref{90c}) lead to a result similar to (\ref{90}) \begin{eqnarray} \label{91a} & & \sum_{j=0}^N \left [ - q_j ( p_{j+1} - p_j ) - \epsilon H( p_j , q_j , t_j ) \right ] = F ( p_f , Q_f , t_f ) - F ( p_i , Q_i , t_i ) \nonumber \\ & & + \sum_{j = 0}^N \left [ P_{j+1} ( Q_{j+1} - Q_j ) - \epsilon \tilde{H} ( P_j , Q_j , \Delta P_j , \Delta Q_j , t_j ) - \epsilon \frac{ \partial F ( p_{j+1} , Q_j , t_j )}{ \partial t_j } \right ] \; . 
\end{eqnarray} However, it will now be shown that the Jacobian of a general transformation may contribute terms of $O ( \Delta Q)$ and $ O ( \Delta P ) $ to the action in such a way that they are not prefaced by a factor of $\epsilon$. For that reason they cannot be ignored, since the sum in which they are embedded allows them to contribute a finite amount to the transformed action. These $ O ( \Delta Q )$ and $ O ( \Delta P ) $ contributions are calculated from (\ref{88}) and (\ref{89}) using the implicit dependence of $Q$ on $q$ and $p$. Initially, these contributions will be calculated for the case of a one-dimensional system, and the generalization will be discussed afterward. The starting point is the definition of the inverse Jacobian, \begin{equation} \label{92} J^{-1} = \prod_{j=1}^N \left [ \frac{ \partial Q_j}{ \partial q_j } \frac{ \partial P_j }{ \partial p_j } - \frac{ \partial P_j}{ \partial q_j } \frac{ \partial Q_j }{ \partial p_j } \right ] \; , \end{equation} where it has been assumed that $Q_j$ and $P_j$ depend primarily on $q_j$ and $p_j$, {\it i.e.}, that the dependence on the other variables is suppressed by some power of $\epsilon$. It will be seen that this is a self-consistent assumption. The partial derivatives of $P_j$ can be obtained to $O ( \Delta Q )$ from the expansion (\ref{91}) by using the implicit dependence of $Q_j$ on $q_j$ and $p_j$.
The result is \begin{eqnarray} \label{93} \frac{\partial P_j}{ \partial p_j} & = & - \frac{ \partial^2 F}{ \partial p_j \, \partial Q_j} - \frac{1}{2} \frac{ \partial^2 F}{ \partial {Q_j}^2 } \frac{ \partial Q_j }{ \partial p_j } + \left [ \frac{1}{2} \frac{ \partial^3 F}{ \partial p_j \, \partial {Q_j}^2 } + \frac{1}{6} \frac{ \partial^3 F}{ \partial {Q_j}^3} \frac{\partial Q_j}{ \partial q_j} \right ] \Delta Q_j \\ \label{94} \frac{\partial P_j}{ \partial q_j} & = & - \frac{1}{2} \frac{ \partial^2 F}{ \partial {Q_j}^2} \frac{\partial Q_j}{ \partial q_j} + \frac{1}{6} \frac{ \partial^3 F}{ \partial {Q_j}^3} \frac{ \partial Q_j}{\partial q_j} \Delta Q_j \; . \end{eqnarray} Direct substitution of (\ref{93}) and (\ref{94}) into (\ref{92}) gives \begin{equation} \label{95} J^{-1} = \prod_{j=1}^N \left [ - \frac{ \partial^2 F}{ \partial p_j \, \partial Q_j } \frac{ \partial Q_j }{ \partial q_j } + \frac{1}{2} \frac{ \partial^3 F}{ \partial p_j \, \partial {Q_j}^2 } \frac{ \partial Q_j }{ \partial q_j } \Delta Q_j \right ] \; . \end{equation} Result (\ref{88}) can now be differentiated and combined with the independence of $q_j$ and $p_j$ to obtain, to $O ( \Delta p )$, the quantum counterpart of (\ref{45.1}), \begin{equation} \label{96} 1 = \frac{\partial q_j}{ \partial q_j} = - \frac{ \partial^2 F}{ \partial p_j \, \partial Q_j } \frac{ \partial Q_j} {\partial q_j} - \frac{1}{2} \frac{ \partial^3 F}{ \partial {p_j}^2 \, \partial Q_j} \frac{ \partial Q_j}{ \partial q_j } \Delta p_j \; . \end{equation} The term $\Delta p_j$ can be written in terms of an expansion in $\Delta Q_j$ and $\Delta P_j$, so that \begin{equation} \label{97} \Delta p_j = \frac{ \partial p_j}{ \partial Q_j } \Delta Q_j + \frac{ \partial p_j }{ \partial P_j } \Delta P_j \; . 
\end{equation} Combining (\ref{97}) and (\ref{96}) and inserting the result into (\ref{95}) yields \begin{equation} \label{98} J^{-1} = \prod_{j=1}^{N} \left [ 1 + \frac{1}{2} \left ( \frac{ \partial^3 F}{ \partial {p_j}^2 \, \partial Q_j } \frac{ \partial Q_j }{ \partial q_j} \frac{ \partial p_j}{ \partial Q_j } + \frac{ \partial^3 F}{ \partial p_j \, \partial {Q_j}^2 } \frac{ \partial Q_j }{\partial q_j} \right ) \Delta Q_j + \frac{1}{2} \frac{ \partial^3 F}{ \partial {p_j}^2 \, \partial Q_j} \frac{ \partial Q_j}{ \partial q_j} \frac{ \partial p_j }{ \partial P_j} \Delta P_j \right ] \; . \end{equation} In general the lack of invariance for the measure of a path integral under a transformation, which itself is a symmetry of the action, is referred to as an anomaly \cite{Anomaly}. In the case of (\ref{98}), the anomaly arises due to the formal nature of time-derivatives in the path integral action, and has nothing to do with the behavior of the classical action under a canonical transformation. Nevertheless, (\ref{98}) will be referred to as the anomaly and can be written, to lowest order, as \begin{equation} \label{99} J^{-1} = \prod_{j=1}^N ( 1 + A_j \Delta Q_j + B_j \Delta P_j ) \; , \end{equation} where \begin{eqnarray} \label{99.2} A_j & = & \frac{1}{2} \left ( \frac{ \partial^3 F}{ \partial {p_j}^2 \, \partial Q_j } \frac{ \partial Q_j }{ \partial q_j} \frac{ \partial p_j}{ \partial Q_j } + \frac{ \partial^3 F}{ \partial p_j \, \partial {Q_j}^2 } \frac{ \partial Q_j }{\partial q_j} \right ) \\ \label{99.3} B_j & = & \frac{1}{2} \frac{ \partial^3 F}{ \partial {p_j}^2 \, \partial Q_j} \frac{ \partial Q_j}{ \partial q_j} \frac{ \partial p_j }{ \partial P_j} \; . \end{eqnarray} It is important to note that, even if $\Delta Q_j$ is $O ( \epsilon )$, the cross-terms in (\ref{99}) can contribute finite quantities. 
This follows from the fact that \begin{equation} \label{100} J^{-1} = \lim_{N \rightarrow \infty} \prod_{j=1}^N ( 1 + A_j \Delta Q_j + B_j \Delta P_j) = \lim_{N \rightarrow \infty} \exp \left [ \sum_{j=1}^N \ln ( 1 + A_j \Delta Q_j + B_j \Delta P_j ) \right ] \; . \end{equation} As a result, the expansion of the logarithm creates terms of the form \begin{equation} \label{101} J = \exp \left \{ \frac{ i }{ \hbar } \sum_{j=1}^{N} \left [ i \hbar A_j \Delta Q_j + i \hbar B_j \Delta P_j \right ] \right \} \end{equation} which can be absorbed into the transformed action of the path integral. It should be noted that these terms can contribute a finite quantity to the action even if $\Delta Q$ is $O( \epsilon )$ since $N \epsilon \rightarrow T$. For the same reason, if the $\Delta$ terms are $O ( \epsilon )$ or smaller, then the higher powers in the expansion of the logarithm can be dropped. Because they are proportional to $\hbar$, terms of the form (\ref{101}) are reminiscent of the velocity-dependent potentials (\ref{22}) discussed in detail in Sec.~II.B. Clearly, if the $\Delta$ terms are not suppressed by a factor of $\epsilon$, it will be necessary to retain higher order terms in both the expansion of the Jacobian (\ref{92}) as well as later in the expansion of the logarithm in (\ref{100}). These results may be generalized to the multidimensional case. The multidimensional versions of (\ref{87}) and (\ref{88}) are given by \begin{eqnarray} \label{102} - P^a_j \Delta Q^a_j & = & F ( p^a_j , Q^a_j ) - F ( p^a_j , Q^a_{j-1} ) \; , \\ \label{103} - q^a_j \Delta p^a_j & = & F ( p^a_{j+1} , Q^a_j ) - F ( p^a_j , Q^a_j ) \; , \end{eqnarray} where a sum over the repeated index $a$ is implicit. The definitions (\ref{102}) and (\ref{103}) do not yield a unique expression for each of the $q^a_j$ and $P^a_j$ since the Taylor series expansions can be separated in an arbitrary manner for each of the variables. 
In what follows a symmetrized definition of each of the canonical variables will be used, so that \begin{eqnarray} \label{104} P^a_j & = & - \frac{ \partial }{ \partial Q^a_j } \sum_{n=0}^{\infty} \frac{(-1)^n}{(n+1)!} \frac{\partial^n F_j }{\partial {Q^{a_1}_j} \cdots \partial {Q^{a_n}_j}} \Delta Q^{a_1}_j \cdots \Delta Q^{a_n}_j \; , \\ \label{105} q^a_j & = & - \frac{ \partial }{ \partial p^a_j} \sum_{n=0}^{\infty} \frac{(-1)^n}{(n+1)!} \frac{\partial^n F_j }{\partial {p^{a_1}_j} \cdots \partial {p^{a_n}_j}} \Delta p^{a_1}_j \cdots \Delta p^{a_n}_j \; , \end{eqnarray} where there is an implicit sum over any repeated pair of $a_i$ coordinate indices. It is straightforward to repeat the analysis that led to (\ref{99}) and (\ref{99.2}) and this yields the multidimensional version of the anomaly to $O( \Delta )$, \begin{equation} \label{106} J^{-1} = \prod_{j=1}^N ( 1 + A^a_j \Delta Q^a_j + B^a_j \Delta P^a_j ) \; , \end{equation} where \begin{eqnarray} \label{107} A^a_j & = & \frac{1}{2} \frac{ \partial^3 F_j }{ \partial p^b_j \, \partial p^c_j \, \partial Q^d_j } \, \frac{ \partial p^b_j}{ \partial Q^a_j} \, \frac{ \partial Q^d_j}{ \partial p^c_j} + \frac{1}{2} \frac{ \partial^3 F_j }{ \partial p^b_j \, \partial Q^c_j \, \partial Q^a_j } \, \frac{ \partial Q^b_j}{ \partial q^c_j} \; , \\ \label{108} B^a_j & = & \frac{1}{2} \frac{ \partial^3 F_j }{ \partial p^b_j \, \partial p^c_j \, \partial Q^d_j } \, \frac{ \partial p^b_j}{ \partial P^a_j} \, \frac{ \partial Q^d_j}{ \partial q^c_j} \; . \end{eqnarray} Exponentiation of (\ref{106}) leads to a result similar to (\ref{101}). The $O ( \Delta )$ anomaly takes a particularly simple form when the original generating function is given, for the one-dimensional case, by \begin{equation} \label{109} F = - p^\alpha f ( Q ) \; . \end{equation} {}From the results of Sec.~III it is clear that (\ref{109}) is adequate to transform all arbitrary single power potentials to a cyclic form. 
The anomaly associated with (\ref{109}) will be evaluated using the classical forms for the new variables. Such a procedure is consistent only to $O ( \Delta )$. It follows that these classical forms are given by solving \begin{equation} \label{110} q_j = \alpha ({p_j})^{( \alpha - 1 )} f( Q_j ) \; , \; \; P_j = ({p_j})^\alpha \frac{ \partial f}{ \partial Q_j} \; , \end{equation} and these relations in turn show that \begin{eqnarray} \label{111} \frac{ \partial Q_j }{ \partial q_j } & = & \frac{1}{ \alpha} {p_j}^{( 1 - \alpha ) } \left ( \frac{ \partial f }{ \partial Q_j} \right )^{-1} \; , \\ \label{112} \frac{ \partial p_j }{ \partial Q_j } & = & - \frac{1}{ \alpha } P_j \, {p_j}^{1 - \alpha} \left ( \frac{ \partial f}{ \partial Q_j} \right )^{-2} \frac{ \partial^2 f}{ \partial {Q_j}^2 } \; , \end{eqnarray} where the derivative in (\ref{112}) is taken at fixed $P_j$. Using (\ref{110}) through (\ref{112}) in (\ref{99.2}) and (\ref{99.3}) yields \begin{eqnarray} \label{113} A_j & = & - \frac{1}{2 \alpha} \left ( \frac{ \partial f}{ \partial Q_j } \right )^{-1} \frac{ \partial^2 f}{ \partial {Q_j}^2 } \; , \\ \label{114} B_j & = & \frac{ 1 - \alpha }{ 2 \alpha P_j} \; . \end{eqnarray} Similarly, using the multi-dimensional generating function \begin{equation} F = - p^a f^a ( Q ) \end{equation} results in a vector anomaly solely of the $A$ type, given by \begin{equation} \label{115a} A^a_j = - \frac{1}{2} \frac{ \partial^2 f^b }{ \partial Q^a_j \, \partial Q^c_j } \, \frac{ \partial Q^c_j }{ \partial q^b_j } \; . \end{equation} It is important to note that if it is possible to treat $\Delta Q \approx \epsilon \, \dot{Q}$, then the exponentiated anomaly term of (\ref{113}) becomes \begin{equation} \label{115} - \lim_{N \rightarrow \infty} \sum_{j = 1}^N A_j \Delta Q_j = - \int_0^T {\rm dt} \, A ( Q ) \, \dot{Q} = \frac{1}{2 \alpha} \int_0^T {\rm dt \, \frac{d}{ dt } } \ln \frac{ \partial f(Q) }{ \partial Q} \; . 
\end{equation} For these conditions the entire $A$ anomaly therefore reduces to a prefactor for the path integral, given by \begin{equation} \label{116} A_p = \left ( \frac{ \partial f(Q_f) }{ \partial Q_f } \right )^{1/2\alpha} \left ( \frac{ \partial f( Q_i ) }{ \partial Q_i } \right )^{- 1/2\alpha} \; . \end{equation} Similarly, the $B$ anomaly can be written \begin{equation} - \lim_{N \rightarrow \infty} \sum_{j = 1}^N B_j \Delta P_j = - \frac{1 - \alpha}{ 2 \alpha } \int_0^T {\rm dt} \, \frac{\dot{P}}{P} = - \frac{1 - \alpha }{2 \alpha} \int_0^T {\rm dt \, \frac{d}{ dt } } \ln P \; . \end{equation} As a result, the $B$ anomaly creates a second prefactor, \begin{equation} \label{116a} B_p = \left ( \frac{P_i}{P_f} \right )^{(1-\alpha)/2\alpha} \; . \end{equation} Results (\ref{116}) and (\ref{116a}) show that, even in the case that the canonically transformed Hamiltonian is cyclic and the transformed path integral generates no prefactor, it is still possible for the correct prefactor or van Vleck determinant to be recovered from the anomaly associated with the canonical transformation. However, results (\ref{116}) and (\ref{116a}) also show that the problem of identifying the appropriate boundary conditions for $Q$ and $P$ is of paramount importance to evaluating the anomaly and determining the correct prefactor for the original path integral. In previous sections it has been stressed that the use of a canonical transformation requires suppressing the $p_i$ term that must be inserted into the action to allow the definition of the canonical transformation. On the face of it, simply setting $p_i$ to zero would appear to be sufficient to bypass this problem. However, doing so would create three initial and final conditions for the classical system, thereby overspecifying the classical solution to the equations of motion, a solution that is critical to evaluating the path integral for cyclic coordinates. 
However, if $q_i$ is set to zero, the $p_i$ term is automatically suppressed since it appears in the action as $q_i p_i$. This choice therefore allows the value of $p_i$ to be determined from the classical equations of motion consistent with the boundary conditions $q_i = 0$ and $p_f$ arbitrary. The requirement that $q_i$, rather than $p_f$, be zero for consistency is an outgrowth of choosing to write the action with a term of the form $q \dot{p}$, rather than $p \dot{q}$. This in turn was a result of choosing a canonical transformation of the third kind. Other choices will lead to different consistency requirements. In the case of quantized variables, the problem is yet more subtle. In Sec.~II.D the path integral with an action translated by a classical solution was evaluated and the fluctuation variables $p_j$ and $q_j$, given by (\ref{33}) and (\ref{33.2}), were shown to be arbitrary at their undefined endpoint values, {\it i.e.}, $q(t = T)$ and $p( t = 0)$. While this is a natural consequence of the uncertainty principle, it means that the original quantum variables do not collapse to their classical values at these times, {\it i.e.}, $q ( t = T ) \neq q_c ( t = T )$. Therefore, using the classical definitions for both of the $q$ and $p$ endpoint values is not a reliable method. As in the classical case, if $p_f$ is to be defined and $p_i$ is to be arbitrary, {\it i.e.}, non-zero, it is clear from the discussion in Sec.~II.B that the path integral must be evaluated at $q_i = 0$, since such a choice will suppress the $p_i$ term while still allowing $p_i$ to be arbitrary. The absence of $q_f$ from the action of the path integral of the form (\ref{seven}) allows it to be arbitrary without encountering a similar problem. Thus, the canonically transformed path integral's endpoint values are correct only if $q_i = 0$. For a canonical transformation of the form given by (\ref{109}), this means that $Q_i$ must be a root of $f(Q)$. 
This clearly also suppresses the initial value of the generating function $(p_i)^\alpha f(Q_i)$. Obviously, the $q_i \neq 0$ case can be evaluated by first translating the action everywhere by the classical solutions, as in Sec.~II.D. This leaves a path integral with the effective boundary conditions $q_i = 0$ and $p_f = 0$, allowing a consistent evaluation. A drawback to this technique is that such a translation will create additional terms in the potential in most cases, and the simple canonical transformations introduced in Sec.~III to render power potentials cyclic will no longer be applicable after the translation. However, if the original potential was linear or quadratic this will not be the case, since such a translation induces no additional terms in the fluctuation potential for these two cases. A translation by a classical solution then shows that the prefactor of the form (\ref{116}) must be independent of the endpoint values for the case that the original potential was linear or quadratic, and should be evaluated consistent with the conditions $q_i = 0$ and $p_f = 0$. Apart from these considerations, the transformed action with the anomaly term in it is given by \begin{equation} \sum_{j=1}^N \left [ ( P_j + i \hbar A_j ) \Delta Q_j + i \hbar B_j \Delta P_j - \epsilon H ( P_j , Q_j , \Delta Q_j , \Delta P_j ) \right ] \; . \end{equation} If the range of the $P_j$ integrations is $- \infty$ to $ + \infty$, it is possible to move the anomaly into the Hamiltonian by translating the $P_j$ variables to $P_j - i \hbar A_j$, so that the Hamiltonian becomes formally similar to that of a particle moving in a complex vector potential. The anomaly appears because of the structure of quantum mechanical phase space. The precise role it plays depends on the specific system being evaluated; representative cases will be discussed in Sec.~V. 
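As an aside (not part of the original derivation), the single-power anomaly coefficients (\ref{113}) and (\ref{114}) are straightforward to verify with a computer algebra system. The following {\tt sympy} sketch checks them for the sample choices $\alpha = 2, 3$ and $f(Q) = e^Q, Q^3$, using the classical relations (\ref{110})--(\ref{112}) at a single time slice; the variable names are illustrative only.

```python
# Symbolic check of the anomaly coefficients (113)-(114) for F = -p^alpha f(Q),
# built from the classical relations (110)-(112) at a single time slice.
import sympy as sp

p, P, Q = sp.symbols('p P Q', positive=True)

for alpha in (sp.Integer(2), sp.Integer(3)):
    for fQ in (sp.exp(Q), Q**3):          # sample f(Q) with f'(Q) > 0 for Q > 0
        F = -p**alpha * fQ
        q_expr = -sp.diff(F, p)           # (110): q = alpha p^(alpha-1) f(Q)
        P_expr = -sp.diff(F, Q)           # (110): P = p^alpha f'(Q)

        dQ_dq = 1 / sp.diff(q_expr, Q)    # (111), p held fixed
        p_of_QP = (P / sp.diff(fQ, Q))**(sp.Integer(1)/alpha)
        dp_dQ = sp.diff(p_of_QP, Q).subs(P, P_expr)   # (112), P held fixed
        dp_dP = sp.diff(p_of_QP, P).subs(P, P_expr)

        # Coefficients defined in (99.2)-(99.3)
        A = (sp.diff(F, p, 2, Q)*dQ_dq*dp_dQ + sp.diff(F, p, Q, 2)*dQ_dq)/2
        B = sp.diff(F, p, 2, Q)*dQ_dq*dp_dP/2

        A_exp = -sp.diff(fQ, Q, 2)/(2*alpha*sp.diff(fQ, Q))   # (113)
        B_exp = (1 - alpha)/(2*alpha*P_expr)                  # (114)
        assert sp.simplify(A - A_exp) == 0
        assert sp.simplify(B - B_exp) == 0
```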
\newsection{Examples} In this section the machinery developed in the previous sections will be applied to specific cases to evaluate the path integral by a canonical transformation. In most of the cases the exact form of the path integral is available by other methods, so that the outcome of the canonical transformation may be compared to show that equivalent results are obtained. \subsection{ Transformations of the Free Particle } In this subsection a specific set of canonical transformations of free particle systems will be considered. In Sec.~II.C the path integral (\ref{31}) for the square well was derived. Through Poisson resummation it was shown to possess the same infinite range of integrations for the measure as that of a free particle. The path integral for the square well can therefore be evaluated by the techniques of (\ref{24}) and (\ref{25}) for cyclic Hamiltonians. This shows that the square well path integral reduces to the correct result, {\it i.e.}, the value of the action along the classical trajectory with the additional overall factor of $1 / \sqrt{2a}$. There is no need to perform a canonical transformation on this system. However, since the exact solution of the free particle path integral is available, such a system can serve as a laboratory to investigate the validity of the techniques derived in previous sections. To begin with, the variables in the action will be translated by the classical solution to the equation of motion, so that the endpoint variables are given by $p_{N+1} = 0$ and $q_0 = 0$. Because it is quadratic in the momentum, the action is unaffected in form by this translation. However, the arguments of Sec.~II.C show that the remaining path integral should reduce to a factor of unity, even in the event that it is canonically transformed. In this subsection the effect of canonical transformations associated with the classical generating function $F = - p \, f( Q )$ on such a free particle path integral will be considered. 
Such a canonical transformation at the classical level creates a Hamiltonian that, for most choices of $f$, is velocity-dependent. Such Hamiltonians are typically not self-adjoint, creating difficulties in constructing the Hilbert space of the theory. It is therefore of interest to examine how the transformed path integral sidesteps this problem. This canonical transformation has the general form (\ref{109}), so that, to $O(\Delta)$, the anomaly is given by \begin{equation} \label{117} A_j = - \frac{1}{2} \left ( \frac{ \partial f }{ \partial Q_j } \right )^{-1} \frac{ \partial^2 f }{ \partial {Q_j}^2 } \; , \; \; B_j = 0 \; . \end{equation} It is important to investigate if the approximations used to derive (\ref{117}) are valid, since the exact Jacobian may contain additional terms. The definitions of the new quantum mechanical variables in (\ref{87}) and (\ref{88}) result in \begin{eqnarray} \label{118} q_j & = & f ( Q_j ) \; , \\ \label{119} p_j & = & \frac{ P_j \Delta Q_j }{ f(Q_j) - f(Q_{j-1})} = \left ( \frac{ \partial f ( Q_j ) }{ \partial Q_j } - \frac{1}{2} \frac{ \partial^2 f ( Q_j ) }{ \partial {Q_j}^2 } \Delta Q_j + \ldots \right )^{-1} P_j \; . \end{eqnarray} Because $q_j$ is independent of $P_j$, the exact Jacobian for the $j$th product in the measure is given by \begin{equation} \label{120} dp_j \, dq_j = dP_j \, dQ_j \, J_j = dP_j \, dQ_j \left [ 1 - \frac{1}{2} \left ( \frac{ \partial f ( Q_j ) }{ \partial Q_j } \right )^{-1} \frac{ \partial^2 f ( Q_j ) }{ \partial {Q_j}^2 } \Delta Q_j + \ldots \right ]^{-1} \; . \end{equation} When exponentiated, (\ref{120}) yields the same $O(\Delta)$ result as (\ref{117}). However, it would be misleading to exponentiate this Jacobian for the following reason. 
The Hamiltonian in the path integral action remains quadratic in momentum, since \begin{equation} \label{121} \epsilon \, \frac{ {p_j}^2 }{ 2 m } = \epsilon \, \frac{ {P_j}^2 }{2 m } \left ( \frac{ \partial f }{ \partial Q_j } \right )^{-2} \left [ 1 - \frac{1}{2} \left ( \frac{ \partial f ( Q_j ) }{ \partial Q_j } \right )^{-1} \frac{ \partial^2 f ( Q_j ) }{ \partial {Q_j}^2 } \Delta Q_j + \ldots \right ]^{-2} \; . \end{equation} Even though the $O(\Delta Q)$ terms in the Hamiltonian could be treated as a perturbation, the presence of the $\Delta Q$ terms in the anomaly prevents integrating over the $Q$ variables as in (\ref{24}) to show that this remaining path integral reduces to unity. Instead, the $P$ integrations must be performed first, and this shows that the anomaly in the measure is cancelled as a result of the Gaussian $P$ integrations. Since the action was translated by the classical solution {\em prior}\/ to canonical transformation, the boundary conditions are $P_f = P_i = 0$ and $Q_f = Q_i = 0$. Upon performing the $P$ integrations, the remaining Euclidean path integral reduces to \begin{eqnarray} \label{122} & & \int \prod_{i=1}^N \left [ dQ_i \, \frac{\partial f (Q_i ) }{ \partial Q_i} \sqrt{ \frac{ m }{ 2 \pi \hbar \epsilon }} \right ] \nonumber \times \\ & & \exp \left \{ - \frac{1}{\hbar} \sum_{j=1}^N \frac{ m \Delta {Q_j}^2 }{ 2 \epsilon } \left ( \frac{ \partial f ( Q_j ) }{ \partial Q_j } \right )^2 \left [ 1 - \frac{1}{2} \left ( \frac{ \partial f ( Q_j ) }{ \partial Q_j } \right )^{-1} \frac{ \partial^2 f ( Q_j ) }{ \partial {Q_j}^2 } \Delta Q_j + \ldots \right ]^{2} \right \} \; . \end{eqnarray} It is natural to define the new variables $f_j = f( Q_j)$, and this gives \begin{equation} df_i = dQ_i \frac{ \partial f (Q_i) }{ \partial Q_i } \; . \end{equation} This new variable must have the same range of integration as the original variable $q_j$ by virtue of (\ref{118}). 
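As a consistency check (an aside, not part of the paper), the single-slice Gaussian $P_j$ integration that produces (\ref{122}), and the finite-difference expansion used in (\ref{119}), can be verified symbolically; here $\partial f/\partial Q_j$ is abbreviated by the positive symbol {\tt fprime}.

```python
# Spot check of the single-slice Euclidean P integration behind (122)
# and of the finite-difference expansion used in (119).
import sympy as sp

P, dQ, eps, hbar, m, fp, Q = sp.symbols(
    'P DeltaQ epsilon hbar m fprime Q', positive=True)

# int dP/(2 pi hbar) exp{-(1/hbar)[eps P^2/(2 m fprime^2) - i P DeltaQ]}
integrand = sp.exp(-(eps*P**2/(2*m*fp**2) - sp.I*P*dQ)/hbar)
result = sp.integrate(integrand, (P, -sp.oo, sp.oo)) / (2*sp.pi*hbar)
expected = fp*sp.sqrt(m/(2*sp.pi*hbar*eps)) * sp.exp(-m*fp**2*dQ**2/(2*hbar*eps))
assert sp.simplify(result - expected) == 0

# (119): DeltaQ/(f(Q) - f(Q - DeltaQ)) = (f' - f'' DeltaQ/2 + ...)^(-1)
for fQ in (sp.exp(Q), Q**3 + Q):
    lhs = sp.series(dQ/(fQ - fQ.subs(Q, Q - dQ)), dQ, 0, 2).removeO()
    rhs = sp.series(1/(sp.diff(fQ, Q) - sp.diff(fQ, Q, 2)*dQ/2), dQ, 0, 2).removeO()
    assert sp.simplify(lhs - rhs) == 0
```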
The transformed action simplifies as well since \begin{eqnarray} \label{123} & & \frac{ m \Delta {Q_j}^2 }{ 2 \epsilon } \left ( \frac{ \partial f ( Q_j ) }{ \partial Q_j } \right )^2 \left [ 1 - \frac{1}{2} \left ( \frac{ \partial f ( Q_j ) }{ \partial Q_j } \right )^{-1} \frac{ \partial^2 f ( Q_j ) }{ \partial {Q_j}^2 } \Delta Q_j + \ldots \right ]^{2} \nonumber \\ & & = \frac{ m \Delta {Q_j}^2 }{ 2 \epsilon } \left ( \frac{ \Delta f_j }{ \Delta Q_j } \right )^2 = \frac{ m \Delta {f_j}^2 }{ 2 \epsilon } \; . \end{eqnarray} The resulting path integral is therefore identical to the original path integral written in terms of the $q_j$ variables. The anomaly has been cancelled by contributions from the Hamiltonian. This means that the path integral defined by the measure (\ref{120}) and the action (\ref{121}) maintains a well-defined quantum theory for a velocity-dependent Hamiltonian. In general, it is not difficult to see that a canonical transformation resulting in a transformed Hamiltonian that is quadratic in $P$ will possess $O ( \Delta Q ) $ terms that, upon integration of the $P$ variables, can result in cancellation of the anomaly. \subsection{ The Linear Potential } The case of the linear potential, \begin{equation} \label{125} H = \frac{p^2}{2 m} + m \lambda q \; , \end{equation} allows an exact integration of the path integral, yielding the transition element \begin{equation} \label{126} W_{fi} = \frac{1}{\sqrt{2 \pi \hbar}} \exp \left \{ - \frac{ i }{ \hbar } \left [ \frac{1}{2} T^2 \lambda p_f + \frac{ T {p_f}^2}{ 2m } + q_i m \lambda T + p_f q_i + \frac{1}{6} m \lambda^2 T^3 \right ] \right \} \; . \end{equation} Since the action is linear, the effect of a canonical transformation on the path integral will be analyzed for the case that $p_f = q_i = 0$. 
Result (\ref{126}) shows that the path integral with $p_f = q_i = 0$ must result in \begin{equation} \label{127} W_{fi} = \frac{1}{\sqrt{2 \pi \hbar}} \exp \left \{ - \frac{ i }{ \hbar } \frac{1}{6} m \lambda^2 T^3 \right \} \; . \end{equation} The evaluation of this path integral by canonical transformation can be used as another test of the techniques developed in the previous sections. The classical action has the form (\ref{51}) and can be rendered cyclic by a canonical transformation of the type (\ref{52}). Evaluating the integral (\ref{58}) for the classical generating function yields \begin{equation} \label{128} F(p, Q) = - \frac{ p^3 }{ 6 m^2 \lambda } \left [ \frac{ 8 }{ 9 m \lambda^2 Q^2 } - 1 \right ] \; . \end{equation} However, this generating function suffers from a defect inherited from the parent Hamiltonian, which is not positive-definite due to the odd power of $q$. Using the generating function of (\ref{128}) yields the classical Hamiltonian \begin{equation} \label{129} \tilde{H} = P^{2/3} \; , \end{equation} which is positive-definite since $P$ is assumed to range over real values and the real branch of the 2/3 power is used. In order to match the range of the original Hamiltonian, $P$ would have to range over both pure real and pure imaginary values, rendering the integrations over $P$ undefined. A similar problem exists for the range of the new canonical variable $Q$, since classically it is transformed to \begin{equation} \label{130} Q^2 = \frac{ 8 }{ 9 m \lambda^2 } \frac{ p^2 }{ p^2 + 2 m^2 \lambda q} \; , \end{equation} resulting in imaginary values for the case that the original Hamiltonian is negative. This problem can be remedied by adding the term $p E_0 / m \lambda$ to the generating function (\ref{128}), where the limit $E_0 \rightarrow \infty$ is understood. 
Doing so allows the range for $P$ to be real while still matching the range of the original Hamiltonian, since the transformed Hamiltonian becomes \begin{equation} \label{131} \tilde{H} = P^{2/3} - E_0 \; \Rightarrow \; P = \left ( \frac{p^2}{ 2m } + m \lambda q + E_0 \right )^{3/2} \; , \end{equation} while the range of $Q$ is now real, since \begin{equation} \label{132} Q^2 = \frac{ 8 }{ 9 m \lambda^2 } \frac{ p^2 }{ ( p^2 + 2 m^2 \lambda q + 2 m E_0 ) } \; . \end{equation} The necessary presence of $E_0$ stems from the fact that the Hamiltonian is not bounded from below. Since the transformation does not yield a quadratic Hamiltonian, it will be assumed that the perturbative argument of Sec.~II is valid, and that terms of $O(\Delta Q)$ in the transformed Hamiltonian can be suppressed. The transformed path integral is then given by \begin{equation} \label{133} W_{fi} = \frac{A_p B_p}{ \sqrt{2 \pi \hbar} } \exp \left \{ \frac{ i }{ \hbar } [ F_f - F_i ] \right \} \int \frac{ dP}{ 2 \pi \hbar } \, dQ \, \exp \left \{ \frac{i}{ \hbar } \int_0^T {\rm dt} \, [ P \dot{Q} - P^{2/3} + E_0 ] \right \} \; , \end{equation} where $A_p$ and $B_p$ are the anomaly prefactors (\ref{116}) and (\ref{116a}), $F_i$ and $F_f$ are the generating function evaluated at the initial and final conditions, and all $O(\Delta Q)$ terms have been suppressed in the Hamiltonian. Because the transformed Hamiltonian is cyclic, the results of Sec.~II.C show that the remaining path integral can be evaluated by finding the action along the classical trajectory. The initial and final conditions are determined from the equations of motion for the original variables, with the boundary conditions that $p_f$ and $q_i$ both vanish. The solutions for $p$ and $q$ consistent with these conditions are easily found, with the result that $p_i = m \lambda T$. 
Using (\ref{131}) then gives \begin{equation} \label{134} P_i = \left ( \frac{1}{2} m \lambda^2 T^2 + E_0 \right )^{3/2} \; , \end{equation} while \begin{equation} \label{135} Q_i = \frac{2}{3} \sqrt{ \frac{T^2}{ E_0 + \frac{1}{2} m \lambda^2 T^2 } } \; . \end{equation} The Hamiltonian equations of motion give \begin{eqnarray} \label{136} \dot{P} & = & 0 \; \Rightarrow \; P_f = P_i \; , \\ \label{137} \dot{Q} & = & \frac{2}{3} P^{-1/3} \; \Rightarrow \; Q_f = Q_i + \frac{2}{3} {P_f}^{-1/3} T \; . \end{eqnarray} In the limit that $E_0 \rightarrow \infty$, it follows that $Q_f = Q_i$. Using these results, the action along the classical trajectory becomes \begin{equation} \label{138} \int_0^T {\rm dt} \, [ P\dot{Q} - P^{2/3} + E_0 ] = E_0 T - \int_0^T {\rm dt} \, \frac{1}{3} {P_i}^{2/3} = \frac{2}{3} E_0 T - \frac{1}{6} m \lambda^2 T^3 \; . \end{equation} The generating functions reduce to \begin{eqnarray} \label{139} F_f & = & 0 \; , \\ \label{140} F_i & = & \frac{2}{3} E_0 T \; . \end{eqnarray} Finally, form (\ref{116}) for the anomaly prefactor reduces to \begin{equation} \label{141} A_p = \sqrt{ \frac{ Q_i }{ Q_f} } \; \Rightarrow \; \lim_{E_0 \rightarrow \infty} A_p = 1 \; , \end{equation} while the prefactor (\ref{116a}) becomes \begin{equation} \label{141a} B_p = \left ( \frac{P_f}{P_i} \right )^{1/3} = 1 \; . \end{equation} Combining results (\ref{138} -- \ref{141a}) gives the correct result \begin{equation} \langle \, p_f = 0 \, | e^{- i H T / \hbar} | \, q_i = 0 \, \rangle = \frac{1}{ \sqrt{ 2 \pi \hbar } } \exp \left \{ - \frac{i}{ \hbar } \frac{1}{6} m \lambda^2 T^3 \right \} \; , \end{equation} showing that all reference to $E_0$ has disappeared from the problem. It is not difficult to extend the same analysis to the case that $q_i = 0$ and $p_f \neq 0$ to show that the correct results follow from the canonical transformation. 
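These manipulations can be confirmed symbolically (an aside, not part of the paper). The sketch below checks that the shifted generating function reproduces (\ref{131}) and (\ref{132}), that $F_i = \frac{2}{3} E_0 T$ at the classical endpoint, and that $E_0$ cancels in the total phase.

```python
# Symbolic checks for the linear-potential transformation and the E0 cancellation.
import sympy as sp

m, lam, T, E0, p, Q, qs = sp.symbols('m lam T E_0 p Q q', positive=True)

# (128) plus the regulating term p E0/(m lam)
F = -p**3/(6*m**2*lam)*(8/(9*m*lam**2*Q**2) - 1) + p*E0/(m*lam)
q_expr = -sp.diff(F, p)          # q = -dF/dp
P_expr = -sp.diff(F, Q)          # P = -dF/dQ

# (132): Q^2 = 8 p^2 / [9 m lam^2 (p^2 + 2 m^2 lam q + 2 m E0)]
Q2 = 8*p**2/(9*m*lam**2*(p**2 + 2*m**2*lam*qs + 2*m*E0))
assert sp.simplify(Q2.subs(qs, q_expr) - Q**2) == 0

# (131) in branch-insensitive form: (P^2)^(1/3) = p^2/(2m) + m lam q + E0
assert sp.simplify((P_expr**2)**sp.Rational(1, 3)
                   - (p**2/(2*m) + m*lam*q_expr + E0)) == 0

# Endpoint algebra: p_i = m lam T and q_i = 0 fix Q_i through (132)
p_i = m*lam*T
Q_i = sp.sqrt(Q2.subs({p: p_i, qs: 0}))
F_i = F.subs({p: p_i, Q: Q_i})
assert sp.simplify(F_i - sp.Rational(2, 3)*E0*T) == 0      # (140)

P_i = (m*lam**2*T**2/2 + E0)**sp.Rational(3, 2)            # (134)
S_cl = E0*T - P_i**sp.Rational(2, 3)*T/3                   # (138)
phase = -F_i + S_cl                                        # F_f = 0 by (139)
assert sp.simplify(phase + m*lam**2*T**3/6) == 0           # E0 drops out
```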
\subsection{Polar Coordinates} The transformation from Cartesian to polar coordinates served as the first indication that adapting classical canonical transformations to the path integral was more complicated than expected \cite{Edwards}. In effect, it is a multi-dimensional version of the transformation to a velocity-dependent potential analyzed in Sec.~V.A. As a result, a mechanism similar to (\ref{123}) should occur, allowing the canonically transformed path integral to maintain its equivalence to the original path integral. The starting point is the two-dimensional Hamiltonian \begin{equation} \label{142} H = \frac{1}{2m} ( {p_x}^2 + {p_y}^2 ) \; . \end{equation} The action associated with this Hamiltonian may be transformed into polar coordinates by using the classical generating function \begin{equation} \label{143} F = - p_x r \cos \theta - p_y r \sin \theta \; . \end{equation} Of course, the quantum mechanical version of this transformation results in terms of $O(\Delta)$ and higher. In the following analysis, the $O(\Delta)$ terms will be retained to construct the form of the path integral under this transformation. It is to be remembered throughout that this is a shorthand for the full canonical transformation. For simplicity, the boundary conditions will match those for the case that the original Cartesian action has been translated by the classical solutions to the equations of motion, so that $p_{xf} = p_{yf} = x_i = y_i = 0$. For such a choice the remaining path integral must reduce to a factor of unity. The procedure is tedious but straightforward. 
The new momenta and coordinates are given by \begin{eqnarray} \label{144} P_{rj} & = & p_{xj} \left ( \cos \theta_j + \frac{1}{2} \sin \theta_j \, \Delta \theta_j \right ) + p_{yj} \left ( \sin \theta_j - \frac{1}{2} \cos \theta_j \, \Delta \theta_j \right ) \; , \\ \label{145} P_{\theta j} & = & - p_{xj} \left ( r_j \sin \theta_j - \frac{1}{2} \sin \theta_j \, \Delta r_j - \frac{1}{2} r_j \cos \theta_j \, \Delta \theta_j \right ) \nonumber \\ & & + p_{yj} \left ( r_j \cos \theta_j + \frac{1}{2} \cos \theta_j \, \Delta r_j - \frac{1}{2} r_j \sin \theta_j \, \Delta \theta_j \right ) \; , \\ \label{146} x_j & = & r_j \cos \theta_j \; , \\ \label{147} y_j & = & r_j \sin \theta_j \; . \end{eqnarray} These definitions yield $p_x$ and $p_y$ in terms of the new variables. Substituting them into the Hamiltonian gives the transformed Hamiltonian to $O(\Delta)$: \begin{equation} \label{148} \tilde{H}_j = \frac{1}{2m} \left ( {P_{rj}}^2 + \frac{1}{{r_j}^2} \left ( 1 - \frac{1}{2} \frac{ \Delta r_j}{ r_j} \right )^{-2} {P_{\theta j}}^2 \right ) \; . \end{equation} Retention of the $O(\Delta)$ terms is essential since the transformed Hamiltonian (\ref{148}) is not cyclic and also remains quadratic in the momenta. The anomaly term can be calculated to $O(\Delta)$ directly from the form of the transformations, or by using the multi-dimensional form (\ref{107}). The resulting measure for the path integral transforms according to \begin{equation} \label{149} dx_j \, dy_j \, dp_{xj} \, dp_{yj} \, \rightarrow \, d \theta_j \, dr_j \, dP_{\theta j} \, dP_{rj} \, \left ( 1 - \frac{1}{2} \frac{\Delta r_j}{r_j} \right )^{-1} \; . \end{equation} It is possible to exponentiate the anomaly, resulting in terms in the transformed action with the form \begin{equation} \label{150} \left ( P_{rj} - \frac{i \hbar}{2 r_j} \right ) \Delta r_j \; . 
\end{equation} Since the range of the $P_{rj}$ integrations is infinite, this extra term can be transferred to the Hamiltonian by translating the $P_{rj}$ variables. This results in \begin{equation} \label{151} \frac{1}{2m} {P_{rj}}^2 \, \rightarrow \, \frac{1}{2m} {P_{rj}}^2 + \frac{i \hbar}{2m r_j } P_{rj} - \frac{\hbar^2}{ 8 m {r_j}^2} \; . \end{equation} This is precisely the self-adjoint form (\ref{SC2}) for the Weyl-ordered Hamiltonian in spherical coordinates discussed in Sec.~III. However, as in the case of the velocity-dependent transformation discussed in this section, it is misleading to exponentiate the anomaly term. This is demonstrated by performing the momentum integrations. The integration over $P_\theta$ exactly cancels the anomaly, and the resulting measure in the path integral is $r_j \, dr_j \, d \theta_j \, ( 2 \pi / m \epsilon) $, while the action becomes \begin{equation} \label{152} \sum_{j=1}^N \left ( \frac{m}{2 \epsilon} \Delta {r_j}^2 + \frac{m}{2 \epsilon} {r_j}^2 \left ( 1 - \frac{1}{2} \frac{ \Delta r_j }{ r_j } \right )^2 \Delta {\theta_j}^2 \right ) \; . \end{equation} Using (\ref{146}) and (\ref{147}) it is straightforward to show that (\ref{152}) is, to $O(\Delta)$, the same as \begin{equation} \label{153} \sum_{j=1}^N \left ( \frac{m}{2 \epsilon} {\Delta x_j}^2 + \frac{m}{2 \epsilon} {\Delta y_j}^2 \right ) \; , \end{equation} while the measure is the same as $dx_j \, dy_j ( 2 \pi / m \epsilon)$. Thus, the path integral with $P_r$ and $P_\theta$ integrated generates a path integral and measure exactly equivalent to the original path integral with $p_x$ and $p_y$ integrated. Since the original path integral reduces to a factor of unity, this completes the proof that the path integral with its action constructed using (\ref{148}) and measure given by (\ref{149}) reduces to a factor of unity. 
This is an $O(\Delta)$ proof of equivalence, similar to the all-orders proof for the transformation to a velocity-dependent potential discussed in Sec.~V.A. This is in effect nothing more than a multi-dimensional version of the relationship (\ref{123}), and could be extended to an all-orders proof. \subsection{ The Harmonic Oscillator} The harmonic oscillator has been analyzed by employing the canonical transformation (\ref{48}) \begin{equation} F = - \frac{p^2}{2 m \omega} \tan Q \; , \end{equation} so that, in the nomenclature of Sec.~III, $f(Q) = \tan Q / ( 2 m \omega )$ and $\alpha = 2$. It will be reviewed here for the sake of completeness and because certain results will be used in Sec.~V.E. The results for the quantum version give \begin{eqnarray} \label{154} q_j & = & \frac{p_{j+1} + p_j }{ 2 m \omega } \tan Q_j \; , \\ \label{155} P_j \Delta Q_j & = & \frac{ {p_j}^2 }{ 2 m \omega } \left ( \tan Q_j - \tan Q_{j-1} \right ) \; . \end{eqnarray} The classical canonical transformation leads to the transformed Hamiltonian $\tilde{H} = \omega P$. The quantum version of the transformation, given by (\ref{154}) and (\ref{155}), results in terms of $O(\Delta)$ in the transformed Hamiltonian. However, because the transformed Hamiltonian is not quadratic in $P$ and is cyclic, it will be assumed that suppressing these terms is allowed by the perturbative argument of Sec.~II.B. A mild difference occurs since the range of the $P$ variable is $[ 0 , \infty )$. This prevents the transfer of the anomaly into the Hamiltonian. As a result, the anomaly terms will be evaluated using (\ref{116}) and (\ref{116a}). 
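Before evaluating the transition element, the classical content of this transformation can be spot-checked symbolically (an aside, not part of the paper): $F = -p^2 \tan Q/(2m\omega)$ renders the oscillator cyclic, and the prefactor (\ref{116}) with $Q_i = 0$, $Q_f = \omega T$ reduces to $(\cos \omega T)^{-1/2}$.

```python
# Check that F = -p^2 tan(Q)/(2 m w) maps H = p^2/2m + m w^2 q^2/2 to w P,
# and that the anomaly prefactor (116) with alpha = 2 gives (cos wT)^(-1/2).
import sympy as sp

p, Q, m, w, T = sp.symbols('p Q m omega T', positive=True)

F = -p**2*sp.tan(Q)/(2*m*w)
q = -sp.diff(F, p)                       # q = p tan(Q)/(m omega)
P = -sp.diff(F, Q)                       # P = p^2 sec^2(Q)/(2 m omega)
H = p**2/(2*m) + m*w**2*q**2/2
assert sp.simplify(H - w*P) == 0         # transformed Hamiltonian is cyclic

fp = sp.diff(sp.tan(Q)/(2*m*w), Q)       # f'(Q) with f = tan(Q)/(2 m omega)
A_p4 = fp.subs(Q, w*T) / fp.subs(Q, 0)   # A_p**4 for alpha = 2 in (116)
assert sp.simplify(A_p4 - 1/sp.cos(w*T)**2) == 0   # so A_p = (cos wT)^(-1/2)
```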
Performing the path integral using results (\ref{24}) yields the transition element \begin{equation} \label{156} W_{fi} = \frac{A_p B_p}{ \sqrt{2 \pi \hbar} } \exp \left \{ \frac{i}{\hbar} [ F_f - F_i + S_{cl}] \right \} \end{equation} where $S_{cl}$ is the transformed action evaluated along a classical trajectory, \begin{equation} \label{157} S_{cl} = \int_0^T {\rm dt} \, \left ( P_c \dot{Q}_c - \omega P_c \right ) \; . \end{equation} Hamilton's equations of motion, $\dot{Q} = \omega$ and $\dot{P} = 0$, have the solutions $Q_f = Q_i + \omega T$ and $P_f = P_i$, showing that $S_{cl} = 0$. The restriction to $q_i = 0$ is satisfied by the choice $Q_i = 0$. Using these results in (\ref{116}) and (\ref{116a}) gives the anomalies \begin{eqnarray} \label{158} A_p & = & \left ( \frac{ \partial f(Q_f) }{ \partial Q_f } \right )^{1/2\alpha} \left ( \frac{ \partial f( Q_i ) }{ \partial Q_i } \right )^{- 1/2\alpha} = \frac{1}{ \sqrt{\cos \omega T} } \; , \\ \label{159} B_p & = & \left ( \frac{P_i}{P_f} \right )^{(1-\alpha)/2\alpha} = 1 \; . \end{eqnarray} The product of the anomalies reproduces the correct prefactor (\ref{40}). The generating functions become $F_i = 0$ and \begin{equation} \label{160} F_f = - \frac{ {p_f}^2 }{ 2 m \omega } \tan \omega T \; . \end{equation} Comparison with (\ref{39}) and (\ref{40}) shows that combining these results in (\ref{156}) yields the correct harmonic oscillator transition element for the case $q_i = 0$. \subsection{ The Time-Dependent Harmonic Oscillator} One of the drawbacks to the techniques developed in this paper has been the restriction $q_i = 0$. Of course, it is possible to circumvent this problem by first translating the action by a classical solution to the equations of motion. The remaining path integral will then have the boundary condition $q_i = 0$ automatically. 
Unfortunately, for all but the quadratic and linear potentials, doing so induces additional terms into the action, preventing the use of the generating function (\ref{52}) which was derived to render the simple power potential of (\ref{51}) cyclic. However, it is possible to treat any translated action with a potential involving terms higher than quadratic, to first approximation, as a time-dependent harmonic oscillator. This follows from the fact that the translated action will possess the form \begin{equation} \label{161} {\cal L } = - q \dot{p} - \frac{p^2}{2m} - \frac{1}{2} \frac{ \partial^2 V ( q_c ) }{ {\partial q_c}^2} q^2 - \ldots \end{equation} where $q_c$ is a classical solution to the original equations of motion consistent with the boundary conditions $q_c (t = 0) = q_i$ and $p_c (t = T) = p_f$. The presence of a set of well-defined eigenvalues of the associated eigenvalue problem is of central importance in determining tunneling rates and stability of states in the quantum theory and is intimately related to Morse theory \cite{Morse}. A canonical transformation approach to the remaining quadratic path integral, effectively a time-dependent harmonic oscillator with the boundary conditions $q_i = p_f = 0$, will be used to obtain an approximate evaluation. This begins by defining the time-dependent frequency $\omega ( t )$ by \begin{equation} \label{162} ( \omega ( t ) )^2 = \frac{1}{m} \frac{\partial^2 V( q_c ) }{ {\partial q_c}^2 } \; . \end{equation} The right-hand side of (\ref{162}) can be negative for a wide variety of circumstances. For example, the potential $ V(q) = - \beta q^2 + \lambda q^4$ gives rise to negative values for $\omega^2$ along any trajectory that passes through the range of values $q^2 < \beta / 6 \lambda$. As a result many trajectories will generate an imaginary value for $\omega$ for intervals of $t$.
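As a small numerical illustration of the last point (the parameter values are arbitrary), for the double well $V(q) = -\beta q^2 + \lambda q^4$ one has $m\,\omega(t)^2 = V''(q_c) = -2\beta + 12\lambda q_c^2$, which is negative exactly for $q_c^2 < \beta/6\lambda$:

```python
import math

beta, lam, m = 1.0, 0.5, 1.0     # illustrative double-well parameters

def omega_sq(q):
    # (omega(t))^2 = V''(q_c)/m for V(q) = -beta q^2 + lam q^4
    return (-2.0*beta + 12.0*lam*q*q) / m

q_star = math.sqrt(beta / (6.0*lam))     # boundary of the unstable region

assert omega_sq(0.5*q_star) < 0          # inside:  omega^2 < 0, omega imaginary
assert omega_sq(2.0*q_star) > 0          # outside: omega^2 > 0
assert abs(omega_sq(q_star)) < 1e-12     # vanishes on the boundary
```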
The time-dependent canonical transformation to be used is given by \begin{equation} \label{163} F = - \frac{p^2}{ 2 m \omega (t) } \tan Q \; , \end{equation} where the time-dependent frequency of (\ref{162}) appears in (\ref{163}). Suppressing all terms of $O(\Delta)$ and using result (\ref{90c}), the transformed Hamiltonian for this case is given by \begin{equation} \label{164} \tilde{H} = \omega(t) P + \frac{P}{2 \omega(t) } \frac{\partial \omega(t)} {\partial t} \sin 2 Q \; . \end{equation} Clearly, suppressing the $O(\Delta)$ terms is not valid in this case since the transformed Hamiltonian is no longer cyclic. As a result, the analysis that follows must be considered as an attempt at an approximate but nonperturbative evaluation of the path integral. Hamilton's equations of motion are given by \begin{eqnarray} \label{165} \dot{Q} & = & \omega (t) + \frac{\partial \omega(t)}{\partial t} \frac{\sin 2Q}{2 \omega(t) } \; , \\ \label{166} \dot{P} & = & - \frac{\partial \omega(t)}{\partial t} \frac{\cos 2Q}{ \omega(t) } P \; . \end{eqnarray} The solution to (\ref{165}) depends upon the form of $\omega(t)$, but in general it cannot be formally expressed as an integral. The solution can be obtained by iteration or can be approximated. To lowest order, the form for $Q$ consistent with the boundary condition $q_i = 0$ is given by \begin{equation} \label{167} Q(t) \, \approx \, \int_0^t d \tau \, \omega ( \tau ) \; . \end{equation} It is not difficult to see that (\ref{167}) is accurate for small values of $t$, and hence for $T$ small. Once the form for $Q(t)$ is known, it is straightforward to solve (\ref{166}) by formal integration to obtain \begin{equation} \label{168} \frac{P_f}{P_i} = \exp \left \{ - \int_0^T {\rm dt} \, \left ( \frac{\partial \omega(t)}{\partial t} \frac{\cos 2Q(t)}{ \omega(t) } \right ) \right \} \; . \end{equation} The classical action along the trajectory given by (\ref{165}) vanishes, while by virtue of the boundary conditions, $F_i = F_f = 0$. 
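The lowest-order scheme (\ref{167})--(\ref{168}) is straightforward to implement by quadrature. The sketch below (added for illustration; the trapezoid rule and the numerical derivative of $\omega$ are our choices) computes $Q(t)\approx\int_0^t\omega$ and the ratio $P_f/P_i$ of (\ref{168}); for a constant frequency the integrand of (\ref{168}) vanishes, so the code must return $Q(T)=\omega T$ and $P_f/P_i=1$:

```python
import math

def Q_approx(t, omega, n=400):
    # lowest-order solution (167): Q(t) ~ integral of omega from 0 to t (trapezoid)
    if t == 0.0:
        return 0.0
    h = t / n
    s = 0.5 * (omega(0.0) + omega(t)) + sum(omega(k*h) for k in range(1, n))
    return s * h

def P_ratio(T, omega, n=400, eps=1e-6):
    # formal integration (168): P_f/P_i = exp{ -int_0^T (omega'/omega) cos(2Q) dt }
    def integrand(t):
        dw = (omega(t + eps) - omega(t - eps)) / (2*eps)   # numerical omega'(t)
        return dw * math.cos(2.0 * Q_approx(t, omega)) / omega(t)
    h = T / n
    s = 0.5 * (integrand(0.0) + integrand(T)) + sum(integrand(k*h) for k in range(1, n))
    return math.exp(-s * h)

w0, T = 1.3, 0.8
assert abs(Q_approx(T, lambda t: w0) - w0*T) < 1e-9   # constant frequency: Q = w0 T
assert abs(P_ratio(T, lambda t: w0) - 1.0) < 1e-9     # and P_f = P_i
```

For a slowly varying $\omega(t)$ the same routines give the approximate anomaly data entering the prefactor below.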
The entire translated path integral reduces to the prefactor generated by the anomalies, and this is given by \begin{equation} \label{169} \frac{1}{ \sqrt{2 \pi \hbar \cos Q(T) } } \exp \left \{ - \int_0^T {\rm dt} \, \left ( \frac{\partial \omega(t)}{\partial t} \frac{\cos 2Q(t)}{4 \omega(t) } \right ) \right \} \; . \end{equation} Result (\ref{169}) is, of course, dependent on the original form of the interaction prior to translation as well as the values of $p_f$ and $q_i$. This is because the functional form for $\omega(t)$ depends on the original form of the interaction and the boundary conditions of the trajectory through (\ref{162}). Combining (\ref{169}) with the value of the original action along the classical trajectory gives a new nonperturbative evaluation of the original path integral.
\section{Introduction} Consider the existence and uniqueness of classical solutions for the Dirichlet problem \begin{equation}\label{w1} \left\{\begin{array}{ll} a\, \mbox{div} \dfrac{Du}{\sqrt{1+|Du|^2}} +b\, \dfrac{\mbox{det} D^2u}{(1+|Du|^2)^2}=\phi\left(\dfrac{1}{\sqrt{1+|Du|^2}}\right), & \mbox{in $\Omega$}\\ u=0,& \mbox{on $\partial \Omega$,} \end{array}\right. \end{equation} where $\Omega$ is a bounded smooth domain of $\R^2$, $a,b$ are constants and $\phi\in C^1([-1,1])$. In equation \eqref{w1}, the first term on the left-hand side is a quasilinear operator, while the second one is of Monge-Amp\`ere type. Some equations of paramount importance have already appeared in the literature for particular choices of the constants $a,b$. For example, if $b=0$, then \eqref{w1} falls in the class of prescribed mean curvature equations where the right-hand side depends on the gradient $Du$. This equation has attracted the attention of many researchers, becoming a fruitful topic of interest. Without aiming to collect all this bibliography, we refer the reader to \cite{oo} and references therein. A solution of \eqref{w1} parametrizes a surface in Euclidean space $\R^3$ whose mean curvature $H$ and Gauss curvature $K$ satisfy the relation \begin{equation}\label{w2} 2aH+bK=\phi(\langle N,v\rangle), \end{equation} where $N$ is the Gauss map of the surface and $v=(0,0,1)$. In general, a surface that satisfies a relationship $W(H,K)=0$ between $H$ and $K$ is called a Weingarten surface. The simplest relation $W$ is the linear one, that is, $2aH+bK=c$ for constants $a,b,c\in\R$. Regarding this equation, surfaces with constant mean curvature ($b=0$) and with constant Gauss curvature ($a=0$) are particular examples of linear Weingarten surfaces. From now on, we will suppose that $a,b\not=0$.
The generalization \eqref{w2} is motivated by the theory of the flow by the mean curvature of Huisken, Sinestrari and Ilmanen \cite{hsi,il} and the flow by the Gauss curvature of Andrews and Urbas (\cite{an,ur}). For example, a translating soliton $S$ is a solution of the mean curvature flow when $S$ evolves purely by translations along some direction of the space. If this direction is, say, $v=(0,0,1)$, then $S+t v$, $t\in \R$, satisfies that, for each fixed $t$, the normal component of the velocity vector $v$ at each point is equal to the mean curvature at that point. For the initial surface $S$, this implies that $2H=\langle N,v\rangle$. In nonparametric form, $\langle N,v\rangle$ coincides with $1/\sqrt{1+|Du|^2}$, so the surface satisfies \eqref{w1} for $b=0$ and $\phi$ the identity. Similarly, translating solitons by the Gauss curvature are obtained in the same fashion by taking $a=0$ in \eqref{w1} and $\phi$ the identity. Finally, we point out that the first author, together with G\'alvez and Mira, has developed a theory of complete surfaces whose mean curvature is given as a prescribed function of the Gauss map, generalizing some well-known results of the theory of constant mean curvature surfaces and translating solitons of the mean curvature flow (\cite{bgm,bgm2}). The purpose of this paper is to investigate the radial solutions of \eqref{w1} when $\Omega$ is a round ball $B(0,R)\subset\R^2$ centered at the origin $0$ and of radius $R>0$. It is also desirable that the solutions of \eqref{w1} inherit the symmetries of $\Omega$, so if $\Omega$ is a round ball, a solution of \eqref{w1} must be radial. Our interest is to determine the existence and uniqueness of radial solutions starting orthogonally from the rotation axis.
In the case that $u=u(r)$ is such a radial solution, equation \eqref{w1} transforms into the initial value problem \begin{equation}\label{rot} \left\{\begin{array}{ll}a\left(\dfrac{u''}{(1+u'^2)^{3/2}}+\dfrac{u'}{r\sqrt{1+u'^2}}\right)+b\dfrac{u''u'}{r(1+u'^2)^2}=\phi\left(\dfrac{1}{\sqrt{1+u'^2}}\right), & \mbox{in $(0,R)$}\\ u(0)=0, u'(0)=0. \end{array}\right. \end{equation} Let us notice that the equation in \eqref{rot} is singular at $r=0$, so its solvability is not assured by standard methods. Equivalently, we are asking for the existence of rotational surfaces satisfying the Weingarten relation \eqref{w2} whose generating curve meets the rotation axis orthogonally. In the field of geometry, there is a great interest in the classification of rotational linear Weingarten surfaces (replacing $\phi$ by a constant $c$) in Euclidean space (\cite{lo1,rs,st}) and also in other ambient spaces (\cite{bss,gm,lo1,lo2,mr}). As a consequence of our investigations, we realized that the existence of solutions of \eqref{rot} depends not only on the constants $a$, $b$ and the function $\phi$, but also, strongly, on the character of \eqref{rot} as a partial differential equation. In case the equation is elliptic at $r=0$, we give a positive answer to the problem. \begin{thm} \label{t1} Suppose that \eqref{rot} is elliptic at $r=0$. Then there is $R>0$ such that there exists a solution of \eqref{rot} in $[0,R]$. \end{thm} The elliptic character of \eqref{rot} at $r=0$ depends on the sign of $a^2+b\phi(1)$. For the particular case $2aH+bK=c$, when this relation is elliptic, we provide a proof of the existence of solutions starting orthogonally from the rotation axis. For the other two types of equations, we also settle the existence problem for \eqref{rot}. In the case that the equation is hyperbolic at $r=0$, we obtain: \begin{thm}\label{t2} If \eqref{rot} is of hyperbolic type at $r=0$, then there do not exist radial solutions of \eqref{rot}.
\end{thm} If \eqref{rot} is parabolic, we find all solutions, regardless of whether the intersection with the rotation axis is orthogonal or a cusp, or even whether the solution stays at a positive distance from the rotation axis. \begin{thm}\label{t3} If \eqref{rot} is of parabolic type, then the solutions are parametrizations of suitable circles of fixed radius. \end{thm} This paper is organized as follows. In Section \ref{sec2}, we relate the constants $a,b$ and the prescribed function $\phi$ with the character of the PDE \eqref{w1} as elliptic, hyperbolic and parabolic. We also state the character of equation \eqref{rot} at a single point $r=r_0$. In Section \ref{sec3}, we address the existence of radial solutions of \eqref{w1} for the hyperbolic and parabolic cases. First, we prove that if the equation is hyperbolic, there are no solutions of \eqref{rot} intersecting the rotation axis orthogonally (Theorem \ref{t2}). Second, in the parabolic case we find all solutions, all of them forming a one-parameter family of circles of the same radius (Theorem \ref{t3}). Finally, in Section \ref{sec4} we focus on the elliptic case. We give an affirmative answer to the existence problem of \eqref{rot}, proving Theorem \ref{t1}. Then, we prove uniqueness and symmetry results of the solutions of the Dirichlet problem \eqref{w1}. \section{Types of Weingarten equations}\label{sec2} Let us write \eqref{w1} in nonparametric form. Let $u=u(x,y)$ and suppose that $u$ satisfies \eqref{w1}. If we define the functional $$ \mathfrak{F}(p,q,r,s,t)=a\frac{(1+p^2)s-2pqt+(1+q^2)r}{(1+p^2+q^2)^{3/2}}+b\frac{rs-t^2}{(1+p^2+q^2)^2}-\phi\left(\frac{1}{\sqrt{1+p^2+q^2}}\right), $$ then $\mathfrak{F}(u_x,u_y,u_{xx},u_{yy},u_{xy})=0$. Furthermore, the determinant of the coefficients of second order, evaluated on solutions of $\mathfrak{F}=0$, is $\mathfrak{F}_r\mathfrak{F}_s-\frac{1}{4}\mathfrak{F}_t^2=\dfrac{a^2+b\phi}{(1+p^2+q^2)^{2}}$. Thus, depending on the sign of $a^2+b\phi$, we have a PDE of elliptic, parabolic or hyperbolic type.
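This identity admits a quick numerical sanity check. In the sketch below (added for illustration; all jet values are arbitrary) we pick $(p,q,u_{xx},u_{yy},u_{xy})$, choose the value of $\phi$ that makes $\mathfrak{F}=0$ at that jet, and verify $\mathfrak{F}_r\mathfrak{F}_s-\frac14\mathfrak{F}_t^2=(a^2+b\phi)/(1+p^2+q^2)^2$:

```python
a, b = 0.7, -0.4                                  # illustrative constants
p, q, uxx, uyy, uxy = 0.3, -0.5, 1.1, 0.6, 0.2    # arbitrary second-order jet
W2 = 1.0 + p*p + q*q                              # = 1 + |Du|^2

# choose phi so that F(p, q, uxx, uyy, uxy) = 0 at this jet
phi = (a*((1+p*p)*uyy - 2*p*q*uxy + (1+q*q)*uxx) / W2**1.5
       + b*(uxx*uyy - uxy*uxy) / W2**2)

# partial derivatives of F with respect to r = uxx, s = uyy, t = uxy
Fr = a*(1+q*q)/W2**1.5 + b*uyy/W2**2
Fs = a*(1+p*p)/W2**1.5 + b*uxx/W2**2
Ft = -2*a*p*q/W2**1.5 - 2*b*uxy/W2**2

assert abs(Fr*Fs - 0.25*Ft*Ft - (a*a + b*phi)/W2**2) < 1e-12
```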
Bearing this in mind, the following definition arises: \begin{definition} Let $S$ be a surface satisfying \eqref{w2}. \begin{itemize} \item If $a^2+b\phi>0$, the surface is of \emph{elliptic} type. \item If $a^2+b\phi=0$, the surface is of \emph{parabolic} type. \item If $a^2+b\phi<0$, the surface is of \emph{hyperbolic} type. \end{itemize} \end{definition} \begin{rmk} The sphere of radius $r>0$ satisfies \eqref{w2} for different values of $a,b$ and choices of $\phi$. Indeed, the left-hand side of \eqref{w2} is $(2ar+b)/r^2$. Taking $\phi$ the constant function $\phi=(2ar+b)/r^2$, then $$a^2+b\phi=\frac{(ar+b)^2}{r^2}.$$ Thus the sphere satisfies \eqref{w2} for many values of $a,b$, being elliptic or parabolic depending on whether $ar+b\not=0$ or $ar+b=0$, respectively. \end{rmk} We emphasize that for fixed $a,b\in\R$ and $\phi$, a given surface may have points of the three types, depending on the height of the parallel in $\mathbb{S}^2$ where the Gauss map $N$ lies, and hence on the value of $\phi(\langle N,(0,0,1)\rangle)$. Some of the results achieved in this paper only depend on the local character of the PDE \eqref{w1} as elliptic, hyperbolic or parabolic. For instance, as proved in subsequent sections, the existence of solutions of \eqref{rot} solely depends on the sign of the quantity $a^2+b\phi(1)$. For other results, such as the ones exhibited in Section \ref{sec4}, the elliptic condition $a^2+b\phi>0$ must be everywhere fulfilled. Taking into account these discussions, we fix the notation in the following definition. \begin{definition} Let $u=u(r)$ be a solution of \eqref{rot} and $r_0\geq0$. We say that \eqref{rot} is: \begin{itemize} \item of elliptic type at $r=r_0$ if $a^2+b\phi\left(\frac{1}{\sqrt{1+u'(r_0)^2}}\right)>0$; \item of parabolic type at $r=r_0$ if $a^2+b\phi\left(\frac{1}{\sqrt{1+u'(r_0)^2}}\right)=0$; and \item of hyperbolic type at $r=r_0$ if $a^2+b\phi\left(\frac{1}{\sqrt{1+u'(r_0)^2}}\right)<0$.
\end{itemize} \end{definition} The conditions of being elliptic and hyperbolic are \emph{open}, in the sense that if \eqref{rot} is elliptic or hyperbolic at some $r_0$, there exists some $\varepsilon>0$ such that \eqref{rot} is elliptic or hyperbolic for every $r\in(r_0-\varepsilon,r_0+\varepsilon)\cap[0,\infty)$. We will simply say that \eqref{rot} is of elliptic type if $a^2+b\phi>0$ for every possible value in the argument of $\phi$, and similarly for the hyperbolic and parabolic types. \section{Radial solutions: hyperbolic and parabolic type}\label{sec3} In this section we investigate the existence of classical radial solutions of \eqref{rot} in the hyperbolic and parabolic cases. We first prove Theorem \ref{t2}, which is formulated again for the reader's convenience. \begin{thm}[hyperbolic type] If \eqref{rot} is of hyperbolic type at $r=0$, then there are no solutions of the initial value problem \eqref{rot}. \end{thm} \begin{pf} By contradiction, suppose that $u=u(r)$ is a solution of the initial value problem \eqref{rot}. Taking limits in \eqref{rot} as $r\rightarrow0$ and applying the L'H\^{o}pital rule to the quotient $u'(r)/r$, we have $$ 2au''(0)+bu''(0)^2=\phi(1), $$ because $u'(0)=0$, so the argument of $\phi$ tends to $1$. However, the discriminant of this quadratic equation in $u''(0)$ is, up to a positive factor, $a^2+b\phi(1)$, which is negative, obtaining a contradiction. \end{pf} Now we address the existence problem of \eqref{rot} in the parabolic case. Since $a^2+b\phi=0$ everywhere, $\phi$ is a constant function, say $\phi=c$. Because $a^2+bc=0$, in particular $b\not=0$. If we divide \eqref{rot} by $-b$, we can assume that $b=-1$ and $c=a^2$. Furthermore, after a change of orientation on the surface, if necessary, we can suppose that $a>0$. Note that this change of orientation does not affect the right-hand side of \eqref{w1}, since $\phi=c$. In conclusion, the class of parabolic equations \eqref{w2} reduces to the linear Weingarten relation $2aH-K=a^2$ with $a>0$.
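Two of the computations above admit quick numerical checks (a sketch with arbitrary illustrative values, added here for convenience): the sphere identity $2aH+bK=(2a\rho+b)/\rho^2$ from the remark of Section \ref{sec2}, evaluated through the radial operator of \eqref{rot} on the lower hemisphere $u(r)=\rho-\sqrt{\rho^2-r^2}$, and the discriminant criterion $a^2+b\phi(1)<0$ ruling out real roots of $2au''(0)+bu''(0)^2=\phi(1)$:

```python
import math, cmath

# --- sphere check: u(r) = rho - sqrt(rho^2 - r^2) is a radial Weingarten solution
def radial_lhs(a, b, u1, u2, r):
    # a( u''/(1+u'^2)^{3/2} + u'/(r sqrt(1+u'^2)) ) + b u'' u' / (r (1+u'^2)^2)
    w = 1.0 + u1*u1
    return a*(u2/w**1.5 + u1/(r*math.sqrt(w))) + b*u2*u1/(r*w*w)

a, b, rho = 0.8, -0.3, 2.0
for r in (0.3, 0.9, 1.5):
    u1 = r / math.sqrt(rho*rho - r*r)          # u'(r)
    u2 = rho*rho / (rho*rho - r*r)**1.5        # u''(r)
    assert abs(radial_lhs(a, b, u1, u2, r) - (2*a*rho + b)/rho**2) < 1e-12

# --- discriminant check: roots of b x^2 + 2a x - phi(1) = 0 for x = u''(0)
def roots(a, b, phi1):
    d = cmath.sqrt(a*a + b*phi1)               # quarter discriminant
    return (-a + d)/b, (-a - d)/b

r1, r2 = roots(1.0, -2.0, 1.0)                 # hyperbolic: a^2 + b*phi(1) = -1 < 0
assert abs(r1.imag) > 0 and abs(r2.imag) > 0   # no real u''(0): no radial solution
s1, s2 = roots(1.0, 1.0, 1.0)                  # elliptic: a^2 + b*phi(1) = 2 > 0
assert abs(s1.imag) < 1e-12 and abs(s2.imag) < 1e-12
```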
We formulate again Theorem \ref{t3} and prove it, by finding all radial solutions of \eqref{w1} independently of whether or not the surface meets the rotation axis. See Figure \ref{fig1}. \begin{thm}[parabolic type] The solutions of \begin{equation}\label{p0} a\left(\frac{u''}{(1+u'^2)^{3/2}}+\frac{u'}{r\sqrt{1+u'^2}}\right)-\frac{u''u'}{r(1+u'^2)^2}=a^2, \quad a>0, \end{equation} are circles of radius $1/a$. \end{thm} \begin{pf} From \eqref{p0}, $$ \frac{u''}{(1+u'^2)^{3/2}}\left( a-\frac{u'}{r\sqrt{1+u'^2}}\right)=a\left( a-\frac{u'}{r\sqrt{1+u'^2}}\right). $$ This leads to the discussion of two cases. \begin{enumerate} \item Suppose that there is $r_0>0$ such that $$ a\not=\frac{u'(r_0)}{r_0\sqrt{1+u'(r_0)^2}}. $$ Then in an interval around $r_0$, $$ \frac{u''}{(1+u'^2)^{3/2}}=a.$$ Then it is immediate that \begin{equation}\label{p1} u(r)=\pm\frac{1}{a}\sqrt{1-(ar+k)^2}+m, \end{equation} for some constants $ k,m\in\R$. It is straightforward that $u$ parametrizes a circle of radius $1/a$. \item Suppose $$ a =\frac{u'(r)}{r\sqrt{1+u'(r)^2}} $$ for all $r$. Solving this equation, $$u(r)=\pm\frac{1}{a}\sqrt{1-a^2r^2}+m,\quad m\in\R. $$ Then $u$ parametrizes a circle centered at $r=0$ of radius $1/a$. Let us notice that this solution is a particular case of \eqref{p1} with $k=0$. \end{enumerate} \end{pf} Studying in detail each choice of $k$ in \eqref{p1}, we obtain the following classification of the radial solutions of \eqref{w2} in the parabolic case. \begin{cor}\label{cor-pa} The radial solutions of the equation $$a\, \mbox{div} \dfrac{Du}{\sqrt{1+|Du|^2}} - \dfrac{\mbox{det} D^2u}{(1+|Du|^2)^2}=a^2$$ are: \begin{enumerate} \item The vertical straight-line at $r_0=1/a$. \item From the solutions of \eqref{p1}, the constant $k$ must be less than $1$. Furthermore, \begin{enumerate} \item If $k\in(0,1)$ we obtain a one-parameter family of minor subarcs of the circle of radius $1/a$ that intersect the $z$-axis at two cusp points.
\item If $k=0$ we obtain a half-circle centered at the $z$-axis of radius $1/a$. \item If $k\in(-1,0)$ we obtain a one-parameter family of major subarcs of the circle of radius $1/a$ that intersect the $z$-axis at two cusp points. \item If $k=-1$ we obtain the full circle of radius $1/a$ intersecting the $z$-axis tangentially. \item If $k<-1$ we obtain the full circle of radius $1/a$ strictly contained in the halfplane $r>0$. \end{enumerate} \end{enumerate} \end{cor} \begin{pf} A particular case of radial solutions to consider occurs when the generating curve is not a graph on the $r$-axis, that is, it is a vertical straight-line at $r=r_0$. In such a case, the surface is a circular cylinder, hence $K=0$ and $H=1/(2r_0)$, obtaining the example of item (1). It only remains to notice that from the solutions of \eqref{p1}, we deduce that $k<1$ because $|ar+k|<1$ and $r>0$. With this restriction, the description in item (2) follows by varying $k$ from $1$ to $-\infty$. \end{pf} \begin{figure}[hbtp] \begin{center} \includegraphics[width=.6\textwidth]{parabolicw.png} \end{center} \caption{Radial solutions of the parabolic Weingarten equation.}\label{fig1} \end{figure} In terms of surfaces of revolution, we conclude \begin{cor} The rotational linear Weingarten surfaces of parabolic type are circular cylinders, spheres, embedded tori of revolution and a 1-parameter family of non-complete examples intersecting the rotation axis at cusp points and whose profile curves are arcs of a circle of fixed radius. \end{cor} \section{Existence of radial surfaces: elliptic case}\label{sec4} In this last section we study \eqref{w1} in the elliptic case. First, we prove Theorem \ref{t1}, whose formulation is stated again. \begin{thm}\label{t-elli} If the equation in \eqref{rot} is elliptic at $r=0$, there is $R>0$ such that the initial value problem \eqref{rot} has a solution in $[0,R]$.
\end{thm} \begin{pf} Multiplying \eqref{rot} by $r$, it is immediate that we can write \eqref{rot} as $$ \left(\frac{ru'}{\sqrt{1+u'^2}}\right)'+\frac{b}{2a}\left(\frac{u'^2}{1+u'^2}\right)'=\frac{r}{a}\phi\left(\frac{1}{\sqrt{1+u'^2}}\right).$$ Define the functions $f,g\colon\R \to\R$ by $$f(y)=\frac{y}{\sqrt{1+y^2}}, \quad g(y)=\frac{1}{a}\phi\left(\frac{1}{\sqrt{1+y^2}}\right).$$ Integrating, we write the above equation as $$ rf(u')+\frac{b}{2a}f(u')^2=\int_0^r tg(u'(t))\, dt.$$ Solving this quadratic equation for $f(u')$, and then for $u'$, we define the operator $$ (\mathsf{T}u)(r)=\int_0^rf^{-1}\left(\frac{a}{b}\left(-s+\sqrt{s^2+\frac{2b}{a}\int_0^stg(u'(t))\, dt}\right)\right)\, ds. $$ It can be easily proved that $u$ is a solution of the problem \eqref{rot} if $u$ is a fixed point of the operator $\mathsf{T}$. In this setting, we exhibit the existence of $R>0$ such that $\mathsf{T}$ is a contraction in the space $C^1([0,R])$ endowed with the usual norm $\|u\|=\|u\|_\infty+\|u'\|_\infty$. Denote by $L_{f^{-1}}$ and $L_g$ the Lipschitz constants of $f^{-1}$ and $g$ in $[-\epsilon,\epsilon]$, respectively, provided $\epsilon<1$. For all $u,v\in C^1([0,R])$, we have $$\|\mathsf{T}u-\mathsf{T}v\|=\|\mathsf{T}u-\mathsf{T}v\|_\infty+\|(\mathsf{T}u)'-(\mathsf{T}v)'\|_\infty.$$ We first study the term $\|\mathsf{T}u-\mathsf{T}v\|_\infty$; the term $\|(\mathsf{T}u)'-(\mathsf{T}v)'\|_\infty$ is handled similarly.
Given two functions $u,v$ in the ball $\overline{\mathcal{B}(0,\epsilon)}\subset (C^1([0,R]),\|\cdot\|)$ and for all $r\in [0,R]$, where $R$ will be determined later, we have \begin{equation}\label{e1} \begin{split} &|(\mathsf{T}u)(r)-(\mathsf{T}v)(r)|\leq\\ & \int_0^r\Bigg|f^{-1}\left(\frac{a}{b}\left(-s+\sqrt{s^2+\frac{2b}{a}\int_0^stg(u')\, dt}\right)\right)-\\ &f^{-1}\left(\frac{a}{b}\left(-s+\sqrt{s^2+\frac{2b}{a}\int_0^stg(v')\, dt}\right)\right)\Bigg|\, ds\\ &\leq \frac{a}{b}L_{f^{-1}}\int_0^r\Bigg|\sqrt{s^2+\frac{2b}{a}\int_0^stg(u')\, dt}-\sqrt{s^2+\frac{2b}{a}\int_0^stg(v')\, dt}\Bigg|\, ds, \end{split} \end{equation} where $L_{f^{-1}}$ stands for the Lipschitz constant of the function $f^{-1}$. Using the L'H\^{o}pital rule, the behavior of the function $\int_0^stg(u')\, dt$ at $s=0$ compared with $s^2$ is $$ \lim_{s\rightarrow0}\frac{\int_0^stg(u')\, dt}{s^2}= \lim_{s\rightarrow0}\frac{sg(u'(s))}{2s}=\frac{\phi(1)}{2a}. $$ Therefore $$\int_0^stg(u')\, dt=c_0s^2+o(s^2),\hspace{.5cm} c_0=\frac{\phi(1)}{2a}.$$ Following the argument in \eqref{e1}, \begin{equation*} \begin{split} &|(\mathsf{T}u)(r)-(\mathsf{T}v)(r)|\\ &\leq \frac{a}{b}L_{f^{-1}}\int_0^r\Bigg|\sqrt{s^2+\frac{2b}{a}\int_0^stg(u')\, dt}-\sqrt{s^2+\frac{2b}{a}\int_0^stg(v')\, dt}\Bigg|\, ds\\ &=2L_{f^{-1}}\int_0^r\frac{|\int_0^st(g(u')-g(v'))\, dt|}{\sqrt{s^2+\frac{2b}{a}\int_0^stg(u')\, dt}+\sqrt{s^2+\frac{2b}{a}\int_0^stg(v')\, dt}}\, ds\\ &\leq 2L_{f^{-1}}\int_0^r\frac{\int_0^st|g(u')-g(v')|\, dt}{\sqrt{s^2+\frac{2b}{a}\int_0^stg(u')\, dt}+\sqrt{s^2+\frac{2b}{a}\int_0^stg(v')\, dt}}\, ds.
\end{split} \end{equation*} Now, using the Lipschitz constant $L_g$ of $g$, we have \begin{equation*} \begin{split} &\leq 2L_{f^{-1}}L_g\int_0^r\frac{\int_0^st|u'(t)-v'(t)|\, dt}{\sqrt{s^2+\frac{2b}{a}\int_0^stg(u')\, dt}+\sqrt{s^2+\frac{2b}{a}\int_0^stg(v')\, dt}}\, ds\\ &\leq L_{f^{-1}}L_g\|u-v\|\int_0^r\frac{s^2}{\sqrt{s^2+\frac{2b}{a}\int_0^stg(u')\, dt}+\sqrt{s^2+\frac{2b}{a}\int_0^stg(v')\, dt}}\, ds\\ &\leq L_{f^{-1}}L_g\|u-v\|\int_0^r\frac{s}{\sqrt{1+\frac{2b}{a}\frac{\int_0^stg(u')\, dt}{s^2}}+\sqrt{1+\frac{2b}{a}\frac{\int_0^stg(v')\, dt}{s^2}}}\, ds. \end{split} \end{equation*} Taking into account that $\int_0^stg(u')\, dt=c_0s^2+o(s^2)$, we follow the above expression: \begin{equation*} \begin{split} &= L_{f^{-1}}L_g\|u-v\|\int_0^r\frac{s\, ds}{\sqrt{1+\frac{2b}{a}(c_0+o(1))}+\sqrt{1+\frac{2b}{a}(c_0+o(1))}}. \end{split} \end{equation*} Since $c_0=\phi(1)/(2a)$ and, by the elliptic condition, $$ 1+\frac{2b}{a}c_0=\frac{a^2+b\phi(1)}{a^2}>0, $$ we conclude that for $r$ close to $0$ the denominator in the above integral is bounded away from zero, so the integrand is bounded above by $Cs$ for some constant $C>0$. Hence, $$|(\mathsf{T}u)(r)-(\mathsf{T}v)(r)|\leq CL_{f^{-1}}L_g\|u-v\|\int_0^rs\, ds=CL_{f^{-1}}L_g\frac{r^2}{2}\|u-v\|.$$ Let $R_1$ be sufficiently small such that the constant $K_1=CL_{f^{-1}}L_gR_1^2/2$ is less than $1$. Then $\|\mathsf{T}u-\mathsf{T}v\|_{\infty}\leq K_1\|u-v\|$. As we have said, a similar argument works with $\|(\mathsf{T}u)'-(\mathsf{T}v)'\|_\infty$. In virtue of the definition of $\mathsf{T}$, $$ (\mathsf{T}u)'(r)=f^{-1}\left(\frac{a}{b}\left(-r+\sqrt{r^2+\frac{2b}{a}\int_0^rtg(u'(t))\, dt}\right)\right).$$ Thus \begin{equation*} \begin{split} &|(\mathsf{T}u)'(r)-(\mathsf{T}v)'(r)|\\ &\leq L_{f^{-1}}\frac{a}{b}\left|\sqrt{r^2+\frac{2b}{a}\int_0^rtg(u'(t))\, dt}-\sqrt{r^2+\frac{2b}{a}\int_0^rtg(v'(t))\, dt}\right|\\ &\leq L_{f^{-1}}\frac{2\int_0^rt|g(u'(t))-g(v'(t))|dt}{\sqrt{r^2+\frac{2b}{a}\int_0^rtg(u'(t))\, dt}+\sqrt{r^2+\frac{2b}{a}\int_0^rtg(v'(t))\, dt}}.
\end{split} \end{equation*} Again, since $g$ is Lipschitz, \begin{equation*} \begin{split} &\leq L_{f^{-1}}L_g\|u-v\|\frac{r^2}{\sqrt{r^2+\frac{2b}{a}\int_0^rtg(u'(t))dt}+\sqrt{r^2+\frac{2b}{a}\int_0^rtg(v'(t))dt}}\\ &=L_{f^{-1}}L_g\|u-v\|\frac{r}{\sqrt{1+\frac{2b}{a}\frac{\int_0^rtg(u'(t))\, dt}{r^2}}+\sqrt{1+\frac{2b}{a}\frac{\int_0^rtg(v'(t))\, dt}{r^2}}}\\ &\leq L_{f^{-1}}L_g\|u-v\|\frac{r}{\sqrt{1+\frac{2b}{a}(c_0+o(1))}+\sqrt{1+\frac{2b}{a}(c_0+o(1))}}\\ &\leq CL_{f^{-1}}L_g\|u-v\|r. \end{split} \end{equation*} Let $R_2>0$ be sufficiently small so that the constant $K_2=CL_{f^{-1}}L_gR_2$ is less than $1$. With this constant, if $r\in [0,R_2]$, we have $\|(\mathsf{T}u)'-(\mathsf{T}v)'\|_\infty\leq K_2\|u-v\|$. We now choose $R=\min\{R_1,R_2\}$, shrinking it if necessary so that $K_1+K_2<1$. Thus if $r\in [0,R]$, we have $$\|\mathsf{T}u-\mathsf{T}v\|\leq (K_1+K_2)\|u- v\|,$$ proving that the operator $\mathsf{T}$ is a contraction on $C^1([0,R])$. This proves the existence of a fixed point $u\in C^1([0,R])\cap C^2((0,R])$. Finally, we prove that the solution $u$ extends with $C^2$-regularity at $r=0$. By taking limits as $r\to 0$, and by the L'H\^{o}pital rule again on the quotient $u'(r)/r$, we conclude $$2au''(0)+bu''(0)^2=\phi(1).$$ In this case, $$u''(0)=\frac{-a\pm\sqrt{a^2+b\phi(1)}}{b},$$ which has a real solution by the elliptic condition $a^2+b\phi(1)>0$. \end{pf} \begin{rmk} In the proof of Theorem \ref{t-elli} we can relax the $C^1$-regularity of $\phi(y)$ to just being Lipschitz continuous around $y=1$. Indeed, all the arguments of the proof are local and one of the hypotheses needed is that $g(y)=\frac{1}{a}\phi\left(\frac{1}{\sqrt{1+y^2}}\right)$ be Lipschitz around $y=0$, i.e. that $\phi(y)$ be Lipschitz around $y=1$. \end{rmk} In the remainder of this paper we assume that the equation in \eqref{w1} is elliptic, i.e. $a^2+b\phi>0$ for every possible argument of the function $\phi$.
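The contraction scheme of the proof of Theorem \ref{t-elli} can be simulated directly: discretize $[0,R]$, and iterate the map on $v=u'$ obtained by solving the integrated identity $rf(u')+\frac{b}{2a}f(u')^2=\int_0^r tg(u'(t))\,dt$ for $f(u')$ at each grid point. In the sketch below (added for illustration; the data $a=b=1$ and $\phi(x)=1+x/2$, which make the equation elliptic, and the trapezoid quadrature are our choices) the successive iterates converge to a fixed point with $v(0)=0$:

```python
import math

a, b = 1.0, 1.0
phi = lambda x: 1.0 + 0.5*x                       # illustrative; a^2 + b*phi > 0
g = lambda y: phi(1.0/math.sqrt(1.0 + y*y)) / a

R, N = 0.5, 400
r = [j*R/N for j in range(N+1)]
f = lambda y: y/math.sqrt(1.0 + y*y)
finv = lambda z: z/math.sqrt(1.0 - z*z)

def step(v):
    # solve r f + (b/2a) f^2 = I(r), with I(r) = int_0^r t g(v(t)) dt (trapezoid),
    # for f, then recover the new derivative via f^{-1}
    out, acc = [0.0], 0.0
    for j in range(1, N+1):
        acc += 0.5*(r[j]-r[j-1])*(r[j-1]*g(v[j-1]) + r[j]*g(v[j]))
        z = (a/b)*(-r[j] + math.sqrt(r[j]**2 + (2.0*b/a)*acc))
        out.append(finv(z))
    return out

v, diffs = [0.0]*(N+1), []
for _ in range(8):
    w = step(v)
    diffs.append(max(abs(x-y) for x, y in zip(v, w)))
    v = w

assert v[0] == 0.0 and v[-1] > 0.0
assert diffs[-1] < 1e-9 and diffs[-1] < diffs[0]  # iterates converge geometrically
```

The solution $u$ itself is then recovered by one further quadrature of the converged $v=u'$, matching the definition of $\mathsf{T}$.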
This global elliptic condition will allow us to obtain results concerning the global behavior of the solutions of equation \eqref{w1}. First, we prove the uniqueness of the Dirichlet problem \eqref{w1}. Here we apply the comparison principle for fully nonlinear elliptic PDEs (\cite[Th. 17.1]{gt}) to the functional $\mathfrak{F}$ which, in our context of Weingarten surfaces, asserts that if $u_1$ and $u_2$ are two functions defined in $\Omega$ such that $\mathfrak{F}[u_1]\geq\mathfrak{F}[u_2]$ in $\Omega$ and $u_1\leq u_2$ on $\partial\Omega$, then $u_1\leq u_2$ in $\Omega$. Furthermore, if $\mathfrak{F}[u_1]>\mathfrak{F}[u_2]$ in $\Omega$, then $u_1<u_2$ in $\Omega$. Similarly the functional $\mathfrak{F}$ satisfies a maximum principle in the sense that if $\mathfrak{F}[u_1]=\mathfrak{F}[u_2]$, $u_1=u_2$ at some point $x_0\in\Omega$ and $u_1\geq u_2$ in an open neighborhood of $x_0$, then $u_1=u_2$ in that neighborhood. We now prove the uniqueness of the Dirichlet problem \eqref{w1} assuming arbitrary continuous boundary values. \begin{proposition}\label{pr-uni} Suppose that the equation in \eqref{w1} is elliptic. If the Dirichlet problem \eqref{w1} has a solution for continuous boundary values $u=\varphi$ on $\partial\Omega$, then the solution is unique. \end{proposition} \begin{pf} The argument is standard using the comparison principle and the fact that vertical translations of $\R^3$ are isometries that preserve the solutions of \eqref{w1}. If $u_1$ and $u_2$ are two such solutions, we move the graph $S_1$ of $u_1$ downwards until it does not intersect $S_2$, the graph of $u_2$. This is possible because $S_1$ and $S_2$ are compact surfaces. Now we move $S_1$ upwards until reaching a first contact with $S_2$. Then the (interior or boundary version of the) maximum principle asserts that $S_1=S_2$, that is, $u_1=u_2$ in $\Omega$.
\end{pf} We finish this section proving that, in the case that the equation \eqref{w1} is elliptic, the solutions of the Dirichlet problem \eqref{w1} are radial if $\Omega$ is a Euclidean ball. The method goes back to the well-known technique of moving planes. First, we need the following result, which has interest in its own right. \begin{proposition}\label{pr-no} Assume that the equation \eqref{w1} is elliptic. If the Dirichlet problem \eqref{w1} has a solution $u$, then $u$ has constant sign in $\Omega$. \end{proposition} \begin{pf} By contradiction, suppose that $u$ changes sign in $\Omega$. Let $x_0,x_1\in\Omega$ be the points where $u$ attains its minimum and maximum, and suppose without loss of generality that $u(x_0)\leq 0<u(x_1)$. In particular, $Du(x_0)=Du(x_1)=0$. Let $v_0,v_1\colon\Omega\to\R$ be the constant functions defined by $v_0(x,y)=u(x_0)$ and $v_1(x,y)=u(x_1)$. Since $v_0\leq u$ in a neighborhood of $x_0$ and $u\leq v_1$ around $x_1$, and because the functional $\mathfrak{F}$ is elliptic, the comparison principle implies $$\mathfrak{F}[v_0]< \mathfrak{F}[u],\quad \mathfrak{F}[u]< \mathfrak{F}[v_1].$$ Since $\mathfrak{F}[u]=0$ and $\mathfrak{F}[v_0]=\mathfrak{F}[v_1]=-\phi(1)$, we obtain a contradiction. \end{pf} Once it is proved that the solution has constant sign in $\Omega$, or equivalently, that the surface it determines lies completely on one side of the coordinate plane $z=0$, we can prove that if $\Omega$ is a round disc, then the solution of \eqref{w1} is radially symmetric, or equivalently, the surface is rotational about the $z$-axis. Here we follow the moving plane method of Alexandrov (\cite{al}), see also \cite{gi,se}. The arguments are standard; the key ingredients are the ellipticity of the equation and Proposition \ref{pr-no}. We give a brief proof, stating the result under more general assumptions on the domain $\Omega$. In the next result, we will denote by $(x_1,x_2,x_3)$ the coordinates of $\R^3$.
\begin{thm} Suppose that $\Omega\subset\R^2$ is a bounded smooth domain, convex in the $x_1$-direction and symmetric about the line $x_1=0$. If \eqref{w1} is of elliptic type, then any solution $u\in C^2(\overline{\Omega})$ of \eqref{w1} with Dirichlet condition $u=0$ along $\partial\Omega$ is also symmetric about the line $x_1=0$. \end{thm} \begin{pf} From Proposition \ref{pr-no} we know that $u$ has constant sign. Without loss of generality, we suppose that $u<0$ in $\Omega$. Since the function $\phi$ depends on $1/\sqrt{1+|Du|^2}$ (or $\phi=\phi(\langle N,v\rangle)$ in \eqref{w2}), the reflections about a vertical plane of a surface that satisfies the Weingarten equation \eqref{w2} are surfaces satisfying the same equation. For $t\leq 0$, let $\Omega_t=\Omega\cap \{x_1\leq t\}$. For $A\subset \R^2$, we denote by $A^*$ the reflection of $A$ about the line of equation $x_1=t$, that is, $A^*=\{(2t-x_1,x_2):(x_1,x_2)\in A\}$. Define on $\Omega_t^*$ the function $u_t$ obtained by reflection about the line $x_1=t$, $u_t(x_1,x_2)=u(2t-x_1,x_2)$. Then $u_t$ satisfies \eqref{w1} in $\Omega_t^*$. We begin the method of moving planes by performing reflections about the line $x_1=t$, starting with $t$ very negative. Since $\Omega$ is bounded and after the first touching point $t_1<0$ with $\partial\Omega$, we have $u<u_t$ in $\Omega_t^*$ for $t\in (t_1,t_1+\epsilon)$ for some $\epsilon>0$ sufficiently small. Moving $t\nearrow 0$, and by the compactness of $\Omega$, let $$t_0=\sup\{t<0:u<u_t \mbox{ in }\Omega_t^*\}.$$ If $t_0<0$, then because $(\partial\Omega_{t_0}\cap\partial\Omega)^*\subset\Omega$, $u<0$ in $\Omega$ and the convexity of $\Omega$ in the $x_1$-direction, there is $x_0\in\Omega_{t_0}^*$ such that $u=u_{t_0}$ at $x_0$. Since $u\leq u_{t_0}$ and $\mathfrak{F}[u]=\mathfrak{F}[u_{t_0}]$ in $\Omega_{t_0}^{*}$, we deduce $u=u_{t_0}$ by the maximum principle.
Using a connectedness argument, this implies that the line $x_1=t_0$ is a line of symmetry of $u$, which is false because $\Omega_{t_0}^*\cup\Omega_{t_0}\not=\Omega$. Thus $t_0=0$ and $u<u_t$ in $\Omega_t^*$ for all $t<0$. By the symmetry of $\Omega$ with respect to the line $x_1=0$, there is $x_0\in \partial\Omega_0^*\cap\partial \Omega$ such that $u(x_0)=u_0(x_0)=0$. Using the maximum principle for elliptic equations in its boundary version, we conclude that $u=u_0$ in $\Omega\cap\Omega_0^*$, proving the result. \end{pf} As a consequence of this theorem, together with Theorem \ref{t-elli} and Proposition \ref{pr-uni}, we obtain the following result. \begin{cor} Assume that the equation \eqref{w1} is elliptic. Then there is $R>0$ such that the Dirichlet problem \eqref{w1} in the ball $B(0,R)$ has a unique solution. Moreover, this solution is radial. \end{cor} \section*{Acknowledgments} Rafael L\'opez belongs to the Project I+D+i PID2020-117868GB-I00, supported by MCIN/ AEI/10.13039/501100011033/
\section{Introduction} Quantum data locking (QDL) is a uniquely quantum protocol that provides one of the strongest violations of classical information theory in the quantum setting~\cite{QDL}. In QDL the knowledge of a relatively short key of constant size allows one to (un)lock an arbitrarily long message encoded in a quantum system. Otherwise, without knowledge of the key, only a negligible amount of information about the message can be accessed~\cite{CMP,Buhrman,Leung,Dupuis,Fawzi}. Such an exponential disproportion between the length of the key and that of the message is impossible in classical information theory, according to which the bits of secret key should be at least as many as the bits of encrypted information~\cite{Shannon}. Although cryptographic applications may seem natural, it was recognized early that the security provided by QDL is in general not robust under the leakage of a small fraction of the key or the message. Indeed, as a relatively short key is sufficient to lock a much longer message, it may very well happen that the leakage to the eavesdropper of a few bits is sufficient to unlock a disproportionate amount of information. Here we analyze this issue and show that there exist QDL schemes that can be made resilient against information leakage at the cost of increasing the length of the secret key by a proportional amount. We show that to protect QDL from the leakage of $n$ bits of the key or the message, it is sufficient to add an overhead of about $n$ bits to the secret key. The security of QDL is based on the {\it accessible information} criterion. It is well known that such a criterion does not guarantee composable security~\cite{Renner}. That is, security is in general not granted if QDL is used as a subroutine of another protocol. To avoid this problem, one should make an assumption on Eve's technological capability, and require that the message exchanged by QDL is not used for the next protocol until Eve's quantum memory expires. 
For instance, composable security is granted if the eavesdropper has no quantum memory and is hence forced to measure her share of the quantum system as soon as she obtains it. Such additional assumptions make the accessible information criterion weaker than the commonly accepted security criterion for quantum private communication~\cite{Renner}, which is instead related to the {\it Holevo information}. Interestingly enough, a large gap exists between these two security criteria that may allow for high rate QDL through quantum channels with poor privacy~\cite{QEM,PRX}. As a matter of fact, explicit examples of channels with low or even zero privacy that allow for QDL at high rates have been recently provided~\cite{AW,seplock}. It is hence of fundamental interest to seek a deeper understanding of the phenomenon of QDL, especially in the presence of noise. A number of QDL methods exist. In some of the best known, the codewords are created by applying uniformly distributed random unitary operations to the vectors of a given orthonormal basis~\cite{CMP,Buhrman,Dupuis,Fawzi}. The role of the random unitaries is to scramble the codewords in such a way that an eavesdropper has essentially no information about the message. The crucial feature of strong QDL schemes is that the number of scrambling unitaries is much smaller (in fact exponentially smaller) than the number of different messages~\cite{NOTA-design}. Although a scheme that can be implemented efficiently on a quantum computer exists~\cite{Fawzi}, the realization of QDL with currently available technologies still presents severe experimental difficulties. Moreover, all known QDL protocols are defined for a noiseless quantum system. Hence, a problem of fundamental importance is to design protocols capable of performing QDL through noisy quantum channels. (Explicit protocols for QDL through noisy channels have been recently introduced in~\cite{seplock}.) Here we present two QDL protocols based on random phase shifts. 
The first one is based on random vectors (instead of random bases) of the form~\cite{NOTA-coding} \begin{equation}\label{phase-v} |\psi \rangle = \frac{1}{\sqrt{d}} \sum_{\omega=1}^d e^{i\theta(\omega)} |\omega\rangle \, , \end{equation} where $\{|\omega\rangle\}_{\omega=1,\dots,d}$ is a given orthonormal basis in a $d$-dimensional Hilbert space, and $e^{i\theta(\omega)}$ are i.i.d.\ random phases with zero mean, $\mathbb{E}\left[ e^{i\theta(\omega)} \right] = 0$. Even the choice $e^{i\theta(\omega)} = (-1)^{b(\omega)}$ is sufficient, where $b(\omega)$ are random binary variables. We show that codewords sampled from this ``phase ensemble'' of vectors suffice to build strong QDL schemes. It is worth noticing that, compared to previously known QDL protocols that require the preparation of uniform (Haar distributed) basis vectors, our scheme greatly simplifies the structure of codewords for QDL and represents a major simplification for optical experimental implementations. Moreover, the expansion of the set of QDL codewords given in this paper paves the way to the application of random coding techniques to lock classical information into noisy quantum channels~\cite{QEM,PRX}, which requires the codewords used to hide information to coincide with the codewords used to protect information from noise. Codewords randomly selected from an ensemble that attains the coherent information bound suffice to protect information from noise~\cite{Lloyd,Shor,Devetak,PSW,Horo,Klesse}. This paper shows that such codewords also suffice to keep information secure from an eavesdropper. The paper proceeds as follows. In Sec.~\ref{sec:phase} we describe a new QDL protocol where the codewords are obtained by applying a random phase modulation. Section~\ref{sec:sketch} provides a sketch of the proof of the QDL property of our protocol, while details are provided in the Appendix~\ref{app:proof}. 
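As a concrete illustration of the phase ensemble (a minimal numerical sketch; the dimension and random seed are chosen arbitrarily), the following snippet draws a binary-phase codeword with $e^{i\theta(\omega)} = (-1)^{b(\omega)}$ and checks that it is a unit vector and that the phases are empirically zero-mean:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 1024  # Hilbert-space dimension (illustrative)

def phase_codeword(d, rng):
    """|psi> = d^{-1/2} sum_w (-1)^{b(w)} |w>, with b(w) i.i.d. fair bits."""
    signs = rng.choice([-1.0, 1.0], size=d)  # e^{i theta(w)} = (-1)^{b(w)}
    return signs / np.sqrt(d)

psi = phase_codeword(d, rng)
assert np.isclose(np.linalg.norm(psi), 1.0)  # codewords are normalized
# Empirical mean of the phases is O(1/sqrt(d)), reflecting E[e^{i theta}] = 0.
print(abs(psi.sum() / np.sqrt(d)))
```

Replacing the $\pm 1$ signs with arbitrary zero-mean random phases $e^{i\theta(\omega)}$ leaves the construction unchanged.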
Then, Sec.~\ref{sec:robust} proves the robustness of QDL to loss of information to the eavesdropper. Section~\ref{sec:appl} discusses possible applications and experimental realizations of our protocol in quantum optics. Finally, conclusions are presented in Sec.~\ref{sec:concl}. \section{Quantum data locking from phase modulation}\label{sec:phase} In a typical QDL protocol, the legitimate parties, Alice and Bob, publicly agree on a set of $n = MK$ codewords in a $d$-dimensional quantum system. From this set, they then use a short shared private key of $\log K$ bits to select a set of $M$ codewords that they will use for sending information. If an eavesdropper Eve does not know the private key, then the number of bits --- as quantified by the {\it accessible information} $I_{\text{acc}}$, which is defined as the maximum mutual information between the message and Eve's measurement result --- that she can obtain about the message by intercepting and measuring the state sent by Alice is essentially equal to zero for certain choices of codewords. We consider here two QDL protocols where Alice and Bob are able to communicate via a $d$-dimensional noiseless quantum channel. In the first QDL protocol, to encrypt a message $m \in \{ 1, \dots, M \}$, Alice prepares one of the vectors \begin{equation}\label{codews} |\psi_{mk}\rangle = \frac{1}{\sqrt{d}} \sum_{\omega=1}^d e^{i\theta_{mk}(\omega)} |\omega\rangle \, , \end{equation} where the value of $k \in \{ 1, \dots, K \}$ is determined by the secret key, and the vectors are sampled i.i.d.\ from the phase ensemble defined above. We require that Bob, knowing the key, is able to decode with a probability of success close to $1$. That is, for any $k$ there exists a positive operator-valued measurement (POVM) with elements $\{ \Lambda^{(k)}_m \}$ such that \begin{equation} \bar{p}_\mathrm{succ} = \frac{1}{M} \sum_{m=1}^M \mathrm{Tr} \left( \Lambda^{(k)}_m |\psi_{mk}\rangle\langle\psi_{mk}| \right) \geq 1 - \epsilon \, . 
\end{equation} On the other hand, we require that if Eve intercepts and measures the state $|\psi_{mk}\rangle$, then she will only be able to access a negligible amount of information about the message $m$, as quantified by the accessible information $I_{\text{acc}}$. We require that~\cite{NOTA1} \begin{equation} I_{\text{acc}} \lesssim \delta \log{M} \, . \end{equation} Furthermore, for a key-efficient QDL scheme we demand that $K \ll M$. In particular, we require that $\log{K} = O\left( \log{\log{M}} \right)$ and that $\delta \to 0$ as $K$ increases. Here we show that a set of $MK$ codewords randomly selected from the phase ensemble will define a QDL protocol with probability arbitrarily close to $1$ for $d$ large enough. To show that, we make repeated applications of the following bound on the largest and smallest eigenvalues of a random matrix: \begin{theorem}\label{ThRM}~\cite{BaiYin} Consider a $d \times n$ matrix $W$, whose entries are independent and identically distributed random variables with zero mean, variance $\sigma^2$, and finite fourth moment. Define $X = (1/n) W W^\dagger$. Then almost surely as $d \rightarrow \infty$, the largest eigenvalue of $X$ $\rightarrow (1+\sqrt y)^2 \sigma^2$, where $y=d/n$. In addition, when $d \leq n$, the smallest eigenvalue of $X$ $\rightarrow (1-\sqrt y)^2 \sigma^2$. \end{theorem} To apply this theorem to our case, let \begin{equation} W = \sum_{j=1}^n | \psi_j \rangle \langle e_j | \, , \end{equation} where $| \psi_j \rangle$ are $n$ random vectors from the phase ensemble, and the set $\{|e_j\rangle \}$ is an orthonormal basis for an auxiliary $n$-dimensional Hilbert space. Notice that the elements of $W$ are just the components of the randomly selected codewords $|\psi_j\rangle$ in the basis $\{ |\omega\rangle\}_{\omega=1,\dots,d}$. We have \begin{equation} X = \frac{1}{n} W W^\dagger = \frac{1}{n} \sum_{j=1}^n |\psi_j\rangle\langle \psi_j| \, , \end{equation} and for the phase ensemble $\sigma^2=1/d$. 
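Theorems~1 and~2 can be probed numerically for this ensemble (an illustrative sketch; the sizes $d=200$, $n=2000$ are arbitrary): the extreme eigenvalues of $X=\frac{1}{n}WW^\dagger$ built from random $\pm 1$ phase codewords land close to the limits $(1\pm\sqrt{y})^2\sigma^2$ with $\sigma^2=1/d$:

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 200, 2000  # y = d/n = 0.1 (illustrative sizes)

# Columns of W are n phase-ensemble codewords; entries are +-1/sqrt(d),
# i.e. zero mean, variance sigma^2 = 1/d, and finite fourth moment.
W = rng.choice([-1.0, 1.0], size=(d, n)) / np.sqrt(d)
X = W @ W.T / n  # X = (1/n) W W^dagger
eig = np.linalg.eigvalsh(X)

y, s2 = d / n, 1.0 / d
print(eig.max(), (1 + np.sqrt(y)) ** 2 * s2)  # largest eigenvalue vs. limit
print(eig.min(), (1 - np.sqrt(y)) ** 2 * s2)  # smallest eigenvalue vs. limit
```

Note that $\mathrm{Tr}\,X = 1$ exactly here, since each codeword is normalized.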
For finite $d$, we use another result from random matrix theory: \begin{theorem}\label{ThRM2}~\cite{FS} The probability that the largest eigenvalue of $X$ is larger than $\left[ \left( 1 + \sqrt{y} \right)^2 + \delta \right] \sigma^2$ is no greater than $C \exp{(-d \delta^{3/2}/C)}$, where $C$ is a constant. Similarly, if $d < n$, the probability that the smallest eigenvalue is less than $\left[ \left( 1 - \sqrt{y} \right)^2 - \delta \right] \sigma^2$ is less than $C/\left(1-\sqrt{y}\right) \exp{(-d \delta^{3/2}/C)}$. \end{theorem} This theorem states that the probability that the bounds of Theorem~\ref{ThRM} are violated by more than $\delta$ vanishes exponentially with $d$ and $\delta$. First of all, using these results, we can easily show that for $M \ll d$ Bob's average probability of successful decoding is $\bar{p}_\mathrm{succ} \gtrsim 1-2\sqrt{M/d}$. To see that, assume that Bob applies the ``pretty good measurement'' POVM with elements \begin{equation} \Lambda^{(k)}_m = \Sigma_k^{-1/2} |\psi_{mk} \rangle \langle \psi_{mk}| \Sigma_k^{-1/2} \, , \end{equation} where \begin{equation} \Sigma_k = \sum_{m=1}^M |\psi_{mk} \rangle \langle \psi_{mk}| \, . \end{equation} Then we have, assuming $\delta \ll 1$, \begin{eqnarray} \bar{p}_\mathrm{succ} & = & \frac{1}{M} \sum_{m=1}^M \left| \langle \psi_{mk}| \Sigma_k^{-1/2} |\psi_{mk} \rangle \right|^2 \\ & \geq & \frac{d}{M} \left[ \left( 1 + \sqrt{\frac{d}{M}} \right)^{2} + \delta \right]^{-1} \label{ineq} \\ & \simeq & 1 - 2 \sqrt{\frac{M}{d}} \, \end{eqnarray} where in~(\ref{ineq}) we have applied Theorems~\ref{ThRM} and~\ref{ThRM2} to bound \begin{equation} \Sigma_k^{-1/2} \geq \sqrt{\frac{d}{M}} \left[ \left( 1 + \sqrt{\frac{d}{M}} \right)^2 + \delta \right]^{-1/2} \, . 
\end{equation} On the other hand, the bound on Eve's accessible information is given by the following \begin{theorem}\label{main-r} Select $MK$ i.i.d.\ random codewords $|\psi_{mk}\rangle$ ($m=1,\dots,M$ and $k=1,\dots, K$) from the phase ensemble, with $MK \gg d$ and $M \ll d$. Then, for any $\delta >0$ and \begin{equation}\label{K-condition} K > \frac{4}{\delta^2} \left( \ln{M} + \frac{2d}{\delta M} \ln{ \frac{5}{\delta} }\right) \, , \end{equation} Eve's accessible information satisfies \begin{equation} I_{\text{acc}} = O\left( \delta \log{d} \right) \end{equation} up to a probability \begin{equation}\label{p-fail} p_\mathrm{fail} \leq \exp{\left[ - M \left( \frac{\delta^3 K}{4} - \delta \ln{M} - \frac{2d}{M} \ln{\frac{5}{\delta}} \right) \right]} \end{equation} that vanishes exponentially in $M$. \end{theorem} The sketch of the proof is provided in Sec.~\ref{sec:sketch}, while details are in Appendix~\ref{app:proof}. This theorem states that a set of $MK$ random codewords from the phase ensemble defines a strong QDL protocol. For instance, if we put $\delta \simeq 1/\log{M}$, then $I_{\text{acc}}$ is smaller than a constant with $\log{K} = O\left( \log{\log{M}} \right)$. Otherwise, for $\delta \ll 1/\log{d}$, a secret key of size $\log{K} = O\left( \log{1/\delta} \right)$ is sufficient to guarantee $I_{\text{acc}} = O\left( \delta \log{d} \right)$. \subsection{Quantum data locking from random unitaries}\label{ssec:unitary} The second QDL protocol is defined by a set of random unitaries of a particular form. We define the ``phase ensemble'' of unitaries of the form \begin{equation} U = \sum_{\omega=1}^d e^{i \theta(\omega)} |\omega\rangle\langle\omega| \, , \end{equation} where $\theta(\omega)$ are i.i.d.\ random phases with $\mathbb{E}[e^{i \theta(\omega)}] = 0$. 
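Such diagonal phase unitaries are straightforward to simulate (a sketch with an assumed dimension $d=256$): applied to the Fourier basis vectors used as codewords below, they preserve orthonormality, which is what allows Bob, who knows the key, to decode perfectly:

```python
import numpy as np

rng = np.random.default_rng(2)
d = 256  # illustrative dimension

# Fourier basis: column m is |m> = d^{-1/2} sum_w exp(2 pi i m w / d) |w>.
w = np.arange(d)
F = np.exp(2j * np.pi * np.outer(w, w) / d) / np.sqrt(d)

# One member of the phase ensemble of unitaries: U = sum_w e^{i theta(w)} |w><w|.
theta = rng.uniform(0.0, 2.0 * np.pi, size=d)
U = np.diag(np.exp(1j * theta))

codewords = U @ F  # column m is |psi_{mk}> = U_k |m>
G = codewords.conj().T @ codewords
assert np.allclose(G, np.eye(d))                         # still orthonormal
assert np.allclose(np.abs(codewords), 1.0 / np.sqrt(d))  # flat amplitudes
```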
Given the set of Fourier-transformed basis vectors \begin{equation}\label{phase-U-q} |m\rangle = \frac{1}{\sqrt{d}} \sum_{\omega=1}^d e^{i 2\pi m \omega/d} |\omega\rangle \, , \end{equation} for $m=1,\dots,d$, we define a set of $dK$ codewords as \begin{equation}\label{phase-U} |\psi_{mk}\rangle = U_k |m\rangle = \frac{1}{\sqrt{d}} \sum_{\omega=1}^d e^{i 2\pi m \omega/d + i \theta_k(\omega)} |\omega\rangle \, . \end{equation} Notice that for any $k$, codewords with different $m$ are mutually orthogonal. This implies that Bob can decode with $\bar{p}_\mathrm{succ} = 1$. Furthermore, each codeword in~(\ref{phase-U}) has the same distribution as the codewords selected from the phase ensemble of vectors. The only difference is that $|\psi_{mk}\rangle$ and $|\psi_{m'k}\rangle$ are no longer statistically independent. We thus obtain the following \begin{theorem} Select $K$ i.i.d.\ random unitaries $U_{k}$ ($k=1,\dots, K$) from the phase ensemble and define the codewords $|\psi_{mk}\rangle = U_k |m\rangle$ ($m=1,\dots, d$). Then, for any $\delta >0$ and \begin{equation} K > \frac{4}{\delta^2} \left( \ln{d} + \frac{2}{\delta} \ln{ \frac{5}{\delta} }\right) \, , \end{equation} Eve's accessible information satisfies \begin{equation} I_{\text{acc}} = O\left( \delta \log{d} \right) \end{equation} up to a probability that vanishes exponentially in $d$. \end{theorem} The proof of this theorem can be obtained by a straightforward modification of that of Theorem~\ref{main-r} and is not reported here. It is worth noticing that these unitaries form an abelian group. It is hence somewhat surprising that they yield strong QDL properties. \section{Sketch of the proof of Theorem~\ref{main-r}}\label{sec:sketch} To compute Eve's accessible information about the message, we associate to the sender Alice a dummy $M$-dimensional quantum system carrying the classical variable $m$ as a set of basis vectors $\{ |m\rangle \}_{m=1,\dots,M}$.
We then suppose that Eve intercepts the quantum system carrying the encrypted message. Since Eve does not know the secret key, the correlations between Alice and Eve are described by the following classical-quantum state: \begin{equation} \rho_{AE} = \frac{1}{M} \sum_{m=1}^M |m\rangle\langle m|_A \otimes \frac{1}{K} \sum_{k=1}^K |\psi_{mk}\rangle\langle\psi_{mk}|_E \, , \end{equation} where the codewords $|\psi_{mk}\rangle$ are as in Eq.~(\ref{codews}). The accessible information is by definition the maximum classical mutual information that can be achieved by local measurements on Alice's and Eve's subsystems: \begin{eqnarray} I_{\text{acc}} & := & I_{\text{acc}}(A;E)_{\rho} \nonumber \\ & = & \max_{\mathcal{M}_A,\mathcal{M}_E} H(X) + H(Y) - H(X,Y) \, , \end{eqnarray} where $\mathcal{M}_A \, : A \to X$, $\mathcal{M}_E \, : E \to Y$ are local measurements with output variables $X$ and $Y$ respectively, and $H(\, \cdot \,)$ denotes the Shannon entropy of the measurement results. The optimal measurement on Alice's subsystem is obviously a projective measurement on the basis $\{ |m\rangle \}_{m=1,\dots,M}$. Concerning Eve's measurement, it is sufficient to consider only rank-one POVMs, which are described by measurement operators of the form $\{ \mu_j |\phi_j\rangle\langle\phi_j|\}_{j=1,\dots,J}$, with $\mu_j \geq 0$, and satisfying the normalization condition $\sum_j \mu_j |\phi_j\rangle\langle\phi_j| = \mathbb{I}$.
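The quantity that controls the analysis below is $Q_m(\phi) = \frac{1}{K}\sum_k |\langle\phi|\psi_{mk}\rangle|^2$. Its behavior for random phase-ensemble codewords can be sampled directly (an illustrative sketch with assumed parameters): the sum $\sum_m Q_m(\phi)$ concentrates around $M/d$, and the entropy $H[Q(\phi)]$ stays close to $(M/d)\log d$, the two facts exploited in the proof sketch:

```python
import numpy as np

rng = np.random.default_rng(3)
d, M, K = 128, 16, 512  # M K = 8192 >> d and M << d (illustrative)

# M*K i.i.d. binary-phase codewords, one per (message, key) pair.
psi = rng.choice([-1.0, 1.0], size=(M, K, d)) / np.sqrt(d)

phi = rng.standard_normal(d)
phi /= np.linalg.norm(phi)  # a fixed unit measurement vector |phi>

# Q_m(phi) = (1/K) sum_k |<phi|psi_mk>|^2; note Q is unnormalized (sums to ~M/d).
Q = np.mean(np.abs(psi @ phi) ** 2, axis=1)
H = -np.sum(Q * np.log2(Q))

print(Q.sum(), M / d)           # concentrates near M/d
print(H, (M / d) * np.log2(d))  # close to (M/d) log d
```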
A straightforward calculation then yields \begin{multline} I_{\text{acc}} = \log{M} - \\ \min_{\{\mu_j|\phi_j\rangle\langle\phi_j|\}} \sum_j \frac{\mu_j}{M} \left\{ H[Q(\phi_j)] - \eta\left[ \sum_{m=1}^M Q_m(\phi_j)\right] \right\} \, , \end{multline} where $\eta(x) = -x \log{x}$, $Q(\phi)$ is the $M$-dimensional real vector of non-negative components \begin{equation} Q_m(\phi) = \frac{1}{K} \sum_{k=1}^K |\langle \phi | \psi_{mk} \rangle |^2 \, , \end{equation} and $H[Q(\phi)] = - \sum_{m=1}^M Q_m(\phi) \log{Q_m(\phi)}$ denotes the Shannon entropy of $Q(\phi)$. We now apply a standard convexity argument, first used in~\cite{QDL}. To do that, notice that, assuming $\langle \phi_j |\phi_j\rangle = 1$, we have $\sum_j \mu_j/d = 1$. This implies that the positive quantities $\mu_j/d$ can be interpreted as probability weights. An upper bound on the accessible information is then obtained by using the fact that the average cannot exceed the maximum. We thus obtain \begin{equation}\label{Iacc-0} I_{\text{acc}} \leq \log{M} - \frac{d}{M} \min_{|\phi\rangle} \left\{ H[Q(\phi)] - \eta\left[\sum_{m=1}^M Q_m(\phi)\right] \right\} \, . \end{equation} Notice that now the minimization is no longer over a POVM with elements $\{ \mu_j |\phi_j\rangle\langle\phi_j| \}$ but over a single normalized vector $|\phi\rangle$. Then the proof proceeds along three main conceptual steps: \begin{enumerate} \item (See Appendix~\ref{step-1} for details.) Theorems~\ref{ThRM} and~\ref{ThRM2} imply that the random variable $\sum_{m=1}^M Q_m(\phi)$ is close to $M/d$ with arbitrarily high probability for all vectors $|\phi\rangle$ if $d$ is large enough and $M K \gg d$.
In particular, the inequality \begin{equation} \sum_{m=1}^M Q_m(\phi) \leq \frac{M}{d} \left[ \left( 1 + \sqrt{\frac{d}{MK}}\right)^2 + \delta \right] \end{equation} applied to Eq.~(\ref{Iacc-0}) yields \begin{equation} I_{\text{acc}} \leq \alpha \log{d} + \eta(\alpha) - \frac{d}{M} \min_{|\phi\rangle} H[Q(\phi)] \, , \end{equation} where \begin{equation} \alpha = \left( 1 + \sqrt{\frac{d}{MK}} \right)^2 + \delta \, . \end{equation} \item (See Appendix~\ref{step-2} for details.) We show that for any given $|\phi\rangle$ and $\delta >0$, \begin{equation} \eta\left[Q_m(\phi)\right] \geq - \frac{1-\delta}{d}\log{\left(\frac{1-\delta}{d}\right)} \end{equation} for at least $(1-\delta)M$ values of $m$ (up to a probability exponentially small in $M$). To do that, we show that there is a negligible probability that $Q_m(\phi) <(1-\delta)/d$ for more than $\ell=\delta M$ values of $m$. This result implies \begin{equation}\label{Iacc-2-p} H[Q(\phi)] \geq \frac{M}{d} \left( 1 - 2\delta \right) \log{d} \, . \end{equation} \item (See Appendix~\ref{step-3} for details.) Finally, to account for the minimum over all unit vectors $|\phi\rangle$, we introduce a discrete set $\mathcal{N}_\delta$ of vectors with the property that for any $|\phi\rangle$ there exists $|\phi_i\rangle \in \mathcal{N}_\delta$ such that \begin{equation} \| |\phi\rangle\langle\phi| - |\phi_i\rangle\langle\phi_i| \|_1 \leq \delta \, . \end{equation} A set with this property is called a $\delta$-net. The $\delta$-net is used to approximate the value of $H[Q(\phi)]$ up to an error of the order of $\delta \log{d}$. We show that the inequality in~(\ref{Iacc-2-p}) holds true with high probability for all unit vectors in the $\delta$-net.
\end{enumerate} In conclusion, we obtain that for any $\delta >0$ Eve's accessible information satisfies \begin{equation}\label{Iacc-3-p} I_{\text{acc}} = O\left( \delta \log{d}\right) \, , \end{equation} up to a probability which is exponentially small in $d$ provided \begin{equation} K > \frac{4}{\delta^2} \left( \ln{M} + \frac{2d}{\delta M} \ln{\frac{5}{\delta}} \right) \, . \end{equation} \section{Robustness against information leakage}\label{sec:robust} We consider our protocols based on the phase ensemble of random vectors to assess the robustness of QDL against the leakage to Eve of part of the key or the message. What happens if part of the key is known by Eve? For example, suppose that Eve knows the first $\gamma \log{K}$ bits of the secret key. Since the remaining $(1-\gamma)\log{K}$ bits are still secret and random, it follows that we can still apply Theorem~\ref{main-r} and Eve's accessible information satisfies $I_\mathrm{acc} = O\left( \delta \log{d} \right)$ up to a failure probability \begin{equation}\label{pfail1} p'_\mathrm{fail} \leq \exp{\left[ - M \left( \frac{\delta^3 K^{1-\gamma}}{4} - \delta \ln{M} - \frac{2d}{M}\ln{\frac{5}{\delta}} \right) \right]} \, . \end{equation} To assess the security of the QDL protocol under the leakage of any fraction of the key, we have to take into account all the possible subsets of $\gamma \log{K}$ bits. Each of these subsets determines a corresponding subset of $K^{1-\gamma}$ values of the key that remain secret to Eve. The number of ways in which these values can be chosen is ${ K \choose K^{1-\gamma}}$. Applying the union bound we can hence estimate from~(\ref{pfail1}) the failure probability if any fraction of $\gamma \log{K}$ bits leaks to Eve: \begin{equation} p''_\mathrm{fail} \leq { K \choose K^{1-\gamma}} p'_\mathrm{fail} \leq \exp{\left( K^{1-\gamma}\ln{K} \right)} \, p'_\mathrm{fail} \, .
\end{equation} Putting $M/d = \delta$, this probability is exponentially small in $M$ under the conditions \begin{equation}\label{K-cond-1} K^{1-\gamma} > \frac{4}{\delta^2} \left( \ln{M} + \frac{2}{\delta^2} \ln{\frac{5}{\delta}} \right) \end{equation} and \begin{equation} M > K^{1-\gamma} \ln{K} \left( \frac{\delta^3 K^{1-\gamma}}{4} - \delta \ln{M} - \frac{2}{\delta} \ln{\frac{5}{\delta}} \right)^{-1} \, . \end{equation} In particular, for $\gamma \ll 1$, the condition~(\ref{K-cond-1}) can be replaced by \begin{equation}\label{K-cond-2} K \gtrsim \left[ \frac{4}{\delta^2} \left( \ln{M} + \frac{2}{\delta^2} \ln{\frac{5}{\delta}} \right) \right]^{1+\gamma} \, . \end{equation} Compared to~(\ref{K-condition}), this condition implies that to make QDL robust against the leakage of a fraction of $\gamma\log{K}$ bits of the key, one simply needs to use a longer key of about $(1+\gamma)\log{K}$ bits. This result shows the existence of QDL schemes that are robust to a partial loss of the key. What happens if a small fraction of $n$ bits of the message leaks to Eve? Suppose that Eve knows the first $n$ bits of the message; then one has to require that the key is sufficiently long to lock the remaining $\log{M}-n$ bits of the message. Since the codewords corresponding to the remaining part of the message are still random, we have \begin{equation}\label{I-Eve-1} I_{\text{acc}} = O \left( \delta \left( \log{M} - n \right) \right) \, , \end{equation} up to a probability \begin{align} p'_\mathrm{fail} \leq & \exp{\left\{ - M 2^{-n} \left[ \frac{\delta^3 K}{4} - \delta (\ln{M}-n) \right. \right.} \nonumber \\ & \hspace{3cm} \left. \left. - 2^{n+1} \frac{d}{M} \ln{\frac{5}{\delta}} \right] \right\} \, .
\end{align} We apply again the union bound to estimate the failure probability if any subset of $n$ bits of the message leaks to Eve: \begin{align} p''_\mathrm{fail} \leq & { M \choose M 2^{-n}} \, p'_\mathrm{fail} \leq \exp{\left(M 2^{-n} \ln{M}\right)} \, p'_\mathrm{fail} \\ \leq & \exp{\left\{ - M 2^{-n} \left[ \frac{\delta^3 K}{4} - \ln{M} - \delta (\ln{M}-n) \right. \right. } \nonumber \\ & \hspace{3.5cm} \left. \left. - 2^{n+1} \frac{d}{M} \ln{\frac{5}{\delta}} \right] \right\} \, . \end{align} For any given $n$, the latter is exponentially small in $M$ provided \begin{equation} K > \frac{4}{\delta^3} \left[ \ln{M} + \delta (\ln{M}-n) + 2^{n+1} \frac{d}{M} \ln{\frac{5}{\delta}} \right] \, . \end{equation} Compared to~(\ref{K-condition}), the last condition implies that to protect QDL against the leakage of $n$ bits of message, the key should be enlarged by $\Delta(\log{K}) \simeq n$ bits. This result shows the existence of QDL schemes that are robust to plain-text attack. A similar result can be obtained starting from other QDL protocols, e.g., using the results of~\cite{Omar,Dupuis} based on the min-entropy of the message. \section{Applications}\label{sec:appl} {\it Towards quantum optical realizations.---} The phase ensemble finds natural applications in the context of linear optics. For instance, codewords belonging to the phase ensemble can be realized by coherently splitting a photon over $d$ modes (e.g., spatial, temporal, orbital angular momentum modes, etc.), then by applying independent random phase shifts to each mode. If information is encoded in the arrival time, the codewords can be prepared by first applying a linear dispersion transformation (see, e.g.,~\cite{Dirk}) and then random phase shifts at different times. 
Concerning Bob's decoding, in the case of QDL by the phase ensemble of random unitaries (see Sec.~\ref{ssec:unitary}) it is sufficient to apply the inverse transformation of the one applied by Alice for encoding, then measure by photo-detection. Notice that both encoding and decoding operations can be realized by linear passive optical elements and photodetection (for decoding) as discussed in~\cite{QEM}. For the QDL based on random codewords from the phase ensemble of vectors, Bob should in principle decode by applying the pretty good measurement associated with the set of QDL codewords, though at the moment we do not know an explicit construction. A crucial requirement to realize our strong QDL protocols in quantum optics is the ability to prepare and manipulate quantum states of light in high dimension. The latter is the goal of cutting-edge research, and several important milestones have been achieved so far. See, for example, the recent report of entanglement production between two photons in a $100 \times 100$-dimensional Hilbert space~\cite{Z}. {\it Quantum data locking of noisy channels.---} Although the phenomenon of QDL has been known for about ten years now~\cite{QDL}, only recently has it been reconsidered in the context of noisy quantum channels~\cite{QEM,PRX}. In particular, the locking capacity of a (noisy) quantum channel has been defined in~\cite{PRX} as the maximum rate at which classical information can be locked through $N$ instances of the channel with the assistance of a secret key shared by Alice and Bob which grows sub-linearly in $N$. One can indeed define a weak and a strong notion of locking capacity~\cite{PRX}. In the weak case one assumes that Eve has access to the channel environment (the output of the complementary channel), while in the strong case one gives her access to the quantum system being input to the noisy channel.
Remarkably, there are examples of quantum channels whose locking capacity is much larger than the private capacity~\cite{AW,seplock}. Our result on Eve's accessible information applies directly to the strong notion of locking capacity and can be extended (by a simple application of a data processing inequality) to the weak case. It hence remains to show how and at which rate Bob can decode reliably. One way to do that is by concatenating the QDL protocol with an error-correcting code~\cite{Fawzi,Omar,PRX}. Consider $N \gg 1$ uses of a quantum channel. If the quantum capacity of the channel is $Q$, then one can lock information by choosing codewords in an error-correcting subspace of dimension $d \simeq 2^{N Q}$. Another approach may consist in designing a code which allows for both QDL and error correction at the same time. Our results indicate that random codes exist that can be applied both for protecting against decoherence and for locking classical information. (An explicit example has been recently presented in~\cite{seplock}.) {\it Locking a quantum memory.---} The QDL properties of the phase ensemble can be used to lock information into a quantum memory by applying a local random gauge field. Consider an ideal (noiseless) semiclassical model for a quantum memory consisting of $d$ charged particles on a ring of length $L$. For recording locked information in the quantum memory, Alice applies a random i.i.d.~magnetic field to each particle and encodes a classical message into one of the momentum eigenstates $|p\rangle = d^{-1/2} \sum_{k=1}^d e^{i 2\pi p x/L + i \int_0^x A(x')dx'} |x\rangle$, where $A(x')$ is the vector potential in natural units. Notice that the application of the random local fields corresponds to a random phase in the momentum eigenstates. Then, a legitimate receiver who knows the pattern of the magnetic field applied by Alice can decode the message by simply measuring the momentum.
On the other hand, Eve's accessible information can be made negligibly small by the QDL effect. \section{Conclusion}\label{sec:concl} It is well known that the security provided by QDL can be severely hampered if even a small fraction of the key or the message leaks to Eve. Here we show that, although this is true in general, there exist QDL protocols that are instead robust against the leakage of a small part of the message or the key. Until now, the codewords used in QDL have been restricted to either Haar-distributed random bases~\cite{CMP,Dupuis,Fawzi} or approximate mutually unbiased bases~\cite{Fawzi} (the role of mutually unbiased bases being not yet completely understood~\cite{QDL,Wehner}). This paper has shown that codewords modulated by random phase shifts in a given basis suffice to guarantee strong and robust QDL properties. The fact that random phases suffice to ensure strong QDL properties yields a major simplification for the experimental realization of a QDL protocol. To produce states from the phase ensemble, one only needs to generate $d$ binary phase shifts, instead of the $d^2$ random variables needed to sample the Haar distribution of unitary transformations. The phase ensemble is well-adapted for use in quantum optical channels, where a single photon may be coherently split across different modes (e.g., path or time-bin modes), to which i.i.d.~random phase shifts are applied. Alternatively, one can employ random unitaries from a set of dispersive transformations~\cite{Dirk}. To decode the message, the legitimate receiver can first apply the inverse transformation of the encoding one (both are linear passive transformations), then measure by photo-detection~\cite{QEM}.
Our results suggest that random codes of the type defined here can be used to perform direct QDL over noisy quantum channels~\cite{QEM,PRX}, which requires that the codewords for QDL (encoding for security) also be appropriate codewords for combating noise on the channel (encoding for error correction). A straightforward way to do this is to concatenate the QDL protocol with a quantum error correcting code~\cite{Omar}, allowing for a rate of locked communication at least equal to the quantum capacity of the channel. A fundamental question is whether one can lock information at a rate higher than the quantum (and private) capacity. This question indeed has a positive answer, as shown by results recently presented in~\cite{AW,seplock}, which provide examples of quantum channels with low or zero privacy that nevertheless allow for high-rate QDL. In particular, the phase ensemble was originally proposed in~\cite{Lloyd} as a code for attaining the coherent information rate for reliable quantum communication over a quantum channel (see also~\cite{PSW,Horo,Klesse}). The results proved here suggest that there exist random codes defined from the phase ensemble that allow for robust QDL over a noisy, lossy bosonic channel at a rate equal to, and possibly larger than, the coherent information. {\it Acknowledgment.---} This work was supported by the DARPA Quiness Program through US Army Research Office award W31P4Q-12-1-0019. CL thanks Michele Allegra and Xiaoting Wang for several valuable and enjoyable scientific discussions. The authors are very grateful to Hari Krovi and Saikat Guha for their comments and suggestions.
\subsubsection*{\bibname}} \newtheorem{theorem}{Theorem} \newtheorem{corollary}{Corollary} \newtheorem{lemma}{Lemma} \input{./macros.tex} \begin{document} \twocolumn[ \aistatstitle{Algorithms for the Communication of Samples} \aistatsauthor{Lucas Theis \And Noureldin Yosri} \aistatsaddress{Google \And Google} ] \begin{abstract} We consider the problem of \textit{reverse channel coding}, that is, how to simulate a noisy channel over a digital channel efficiently. We propose two new coding schemes with practical advantages over previous approaches. First, we introduce \textit{ordered random coding} (ORC) which uses a simple trick to reduce the coding cost of previous approaches based on importance sampling. Our derivation also illuminates a connection between these schemes and the so-called \textit{Poisson functional representation}. Second, we describe a hybrid coding scheme which uses dithered quantization to efficiently communicate samples from distributions with bounded support. \end{abstract} \section{INTRODUCTION} Consider a problem where a sender has information $\mathbf{x}$ and wants to communicate a noisy version of it over a digital channel, \begin{align} \mathbf{Z} \sim \mathbf{x} + \mathbf{U}. \end{align} The sender does not care which value the noise $\mathbf{U}$ takes as long as it follows a given distribution. For example, it may be desired that the noise is a fair sample from a Gaussian distribution. Can we exploit the sender's indifference to the exact value of the noise to save bits in the communication? More generally, we may want to send a sample from a given distribution, \begin{align} \mathbf{Z} \sim q_\mathbf{x}. \end{align} How can we communicate such a sample most efficiently? This is the problem of \textit{reverse channel coding}. 
While channel coding tries to communicate digital information over a noisy channel with as few errors as possible, reverse channel coding attempts to do the opposite, namely to simulate a noisy channel over a digital channel. This problem has therefore also been referred to as ``channel simulation'' \citep[e.g.,][]{cuff2008} and is closely related to ``relative entropy coding'' \citep{flamich2020cwq}. The reverse channel coding problem occurs in many applications. In neural compression, differentiable channels enable gradient-based methods to optimize encoders but these are necessarily noisy if we want to limit their capacity. For example, it is common to approximate quantization with uniform noise when training neural networks for lossy compression \citep{balle2017end}. Reverse channel coding allows us to implement such noisy channels at test time \citep{agustsson2020uq} and to use arbitrary distributions in place of uniform noise \citep{havasi2018miracle}. In differential privacy, most mechanisms seek to limit the amount of sensitive information revealed to another party by adding noise to the data \citep{dwork2006dp}. Efficiently communicating such private information is an active area of research \citep[e.g.,][]{chen2020trilemma} and the goal of reverse channel coding. Quantum teleportation can be viewed as another instance of reverse channel coding where classical bits are used to communicate stochastic information in the form of a qubit. Some of the earliest results on reverse channel coding were obtained in quantum mechanics \citep{bennett2002reverse}. The naive approach to our problem would be to let the sender generate a sample and then to encode this noisy data. If the data or the noise stems from a continuous distribution, lossless source coding is impossible as it would require an infinite number of bits. On the other hand, lossy source coding (by first quantizing) leads to further corruption of the data and still wastes bits on encoding noise. 
In contrast, efficient reverse channel coding techniques are able to communicate such stochastic information with a coding cost which is close to the mutual information between the data $\mathbf{X}$ and the sample $\mathbf{Z}$ \citep[e.g.,][]{harsha2007}, \begin{align} I[\mathbf{X}, \mathbf{Z}] = h[\mathbf{Z}] - h[\mathbf{Z} \mid \mathbf{X}] = \mathbb{E}[D_\textnormal{KL}\infdivx{q_\mathbf{X}}{p}], \label{eq:mi} \end{align} where $h$ is the differential entropy and $p$ is the marginal distribution of $\mathbf{Z}$. Unlike the naive approach, the coding cost actually decreases as we introduce more noise, that is, when the (differential) entropy of $q_\mathbf{X}$ increases. The present article aims to provide an overview of several useful algorithms for reverse channel coding and to compare them. For instance, we describe a practical algorithm based on the Poisson functional representation \cite{li2018pfr}. We then introduce new algorithms along with practical advantages and provide results on their theoretical properties. As a further contribution, we present a unifying view of some of these algorithms which helps to clarify the relationship between them and sheds light on their empirical behavior. Finally, we will provide the first direct empirical comparisons between different reverse channel coding algorithms. All proofs and additional empirical results are provided in the appendix. \section{RELATED WORK} The \textit{reverse Shannon theorem} of \cite{bennett2002reverse} shows that a sender who has access to $\mathbf{X}$ can communicate an instance of $\mathbf{Z}$ at a cost which is close to the two random variables' mutual information. Many papers have considered problems related to reverse channel coding and derived bounds on its coding cost \citep[e.g.,][]{cover2007capacity,harsha2007,braverman2014}. 
To our knowledge, the sharpest upper bound so far was provided by \cite{li2021lemma} who showed that an optimal code does not require more than \begin{align} I[\mathbf{X}, \mathbf{Z}] + \log (I[\mathbf{X}, \mathbf{Z}] + 1) + 4.732 \label{eq:bound} \end{align} bits to communicate an exact sample. On the other hand, \cite{li2018pfr} showed that distributions exist for which the coding cost is at least \begin{align} I[\mathbf{X}, \mathbf{Z}] + \log (I[\mathbf{X}, \mathbf{Z}] + 1) - 1. \label{eq:lower_bound} \end{align} That is, the bound in Eq.~\ref{eq:bound} cannot be improved significantly without making additional assumptions about the distributions involved \citep[see also][]{braverman2014}. Note that the communication overhead (the second and third term) becomes relatively less important as the transmitted amount of information increases. Most general reverse channel coding algorithms operate on the same basic principle. First, a potentially large number of candidates is generated from a fixed distribution which is known to the sender and receiver, \begin{align} \mathbf{Z}_n \sim p, \end{align} where $n \in \mathbb{N}$ or $n \in \{1, \dots, N\}$. Both the sender and receiver are able to generate these candidates without communication by using a shared source of randomness. In practice, this will typically be a pseudorandom number generator with a common seed. The sender selects an index $N^*$ according to some distribution such that, at least approximately, \begin{align} \mathbf{Z}_{N^*} \sim q_\mathbf{x}. \end{align} Note that only $N^*$ needs to be communicated and this can be done efficiently if $H[N^*]$ is small. The main difference between algorithms is in how $N^*$ is decided. \cite{li2017dyadic} described an algorithm for communicating samples from distributions with log-concave PDFs without common randomness. 
Without a shared source of randomness, the number of bits required is at least \textit{Wyner's common information} \citep{wyner1975ci,cuff2008}, which can be significantly larger than the mutual information \citep{xu2011wyner}. In the following, we therefore focus on algorithms with access to common randomness. \cite{agustsson2020uq} showed that there is no general algorithm whose computational complexity is polynomial in the communication cost. That is, as the amount of information transmitted increases, general purpose algorithms become prohibitively expensive. One solution to this problem is to split information into chunks and to encode them separately \citep{havasi2018miracle,flamich2020cwq}. However, this reduces statistical efficiency as each chunk will contribute its own overhead to the coding cost. We therefore typically find a tension between the computational efficiency and the coding efficiency of a scheme. A more well-known idea in machine learning is \textit{bits-back coding} \citep{wallace1990bb,hinton1993bb}, which at first sight appears closely related to reverse channel coding. Here, the goal is to losslessly compress a source~$\mathbf{X}$ using a model of its joint distribution with a set of latent variables~$\mathbf{Z}$. Encoding an instance~$\mathbf{x}$ involves sampling $\mathbf{Z} \sim q_\mathbf{x}$ while using previously encoded bits as a source of randomness. The data and latent variables are subsequently encoded using the model's joint distribution \citep{townsend2019bb}. Unlike reverse channel coding, however, bits-back coding necessarily transmits a perfect copy of the data, that is, it is an implementation of source coding. On the other hand, reverse channel coding can be viewed as a generalization of source coding which also supports lossy transmission. Source coding is recovered as a special case when choosing $q_\mathbf{x}(\mathbf{z}) = \delta(\mathbf{z} - \mathbf{x})$. 
\section{ALGORITHMS} We will first continue the discussion of related work by introducing existing algorithms for the simulation of noisy channels. New ideas are presented mainly in Sections~\ref{sec:sis}, \ref{sec:unify}, and \ref{sec:hybrid}. \subsection{Rejection sampling} \label{sec:rs} \textit{Rejection sampling} (RS) is a method for generating a sample from one distribution given samples from another distribution. As an introductory example, we show how RS can be turned into a reverse channel coding scheme. Let $\mathbf{Z}_n$ be candidates drawn from a proposal distribution $p$. Further, let $U_n \sim \text{Uniform}([0, 1))$. RS selects the first index ${N_\textnormal{RS}^*}$ such that \begin{align} U_{N_\textnormal{RS}^*} \leq w_\textnormal{min} \frac{q_\mathbf{x}(\mathbf{Z}_{N_\textnormal{RS}^*})}{p(\mathbf{Z}_{N_\textnormal{RS}^*})} \label{eq:rs} \end{align} where \begin{align} w_\textnormal{min} \leq \inf_\mathbf{z} \frac{p(\mathbf{z})}{q_\mathbf{x}(\mathbf{z})} \label{eq:rs_constant} \end{align} ensures that the right-hand side in Eq.~\ref{eq:rs} is a probability. If $w_\textnormal{min} > 0$, then ${N^*}$ will be finite and, crucially, \begin{align} \mathbf{Z}_{N_\textnormal{RS}^*} \sim q_{\mathbf{x}}. \end{align} A sender could thus communicate a sample from $q_\mathbf{x}$ by sending ${N_\textnormal{RS}^*}$, assuming the receiver already has access to the candidates $\mathbf{Z}_n$. Note that this works even when the distribution is continuous since ${N_\textnormal{RS}^*}$ will still be discrete. While $w_\textnormal{min}$ may depend on $\mathbf{x}$, in the following analysis we assume for simplicity that the same value is used for all target distributions. Let us consider the coding cost of encoding ${N_\textnormal{RS}^*}$. The average probability of accepting a candidate is \begin{align} \int p(\mathbf{z}) w_\textnormal{min} \frac{q_\mathbf{x}(\mathbf{z})}{p(\mathbf{z})} \, d\mathbf{z} = w_\textnormal{min}. 
\end{align} The marginal distribution of ${N_\textnormal{RS}^*}$ is therefore a geometric distribution whose entropy can be bounded by \begin{align} H[{N_\textnormal{RS}^*}] &\leq -\log w_\textnormal{min} + 1/\ln 2. \end{align} Rejection sampling is efficient if $H[{N_\textnormal{RS}^*}]$ is not much more than the information contained in $\mathbf{Z}$. While it is easy to construct examples where RS is efficient, it is also easy to construct examples where $-\log w_\textnormal{min}$ is significantly larger than $I[\mathbf{X}, \mathbf{Z}]$. For instance, the density ratio in Eq.~\ref{eq:rs_constant} may be unbounded. However, if we are willing to accept an approximate sample, there are ways to limit the coding cost. For example, we may unconditionally accept the $N$th candidate if the first $N - 1$ candidates are rejected. In this case, the distribution of $\mathbf{Z}_{N^*}$ will be a mixture distribution with density \begin{align} \beta p(\mathbf{z}) + (1 - \beta) q_\mathbf{x}(\mathbf{z}), \end{align} where $\beta = (1 - w_\textnormal{min})^{N - 1}$ is the probability of rejecting all $N - 1$ candidates. The quality of a sample is often measured in terms of the \textit{total variation distance} (TVD), \begin{align} D_\textnormal{TV}\infdivy{p}{q} = \frac{1}{2} \int |p(\mathbf{z}) - q(\mathbf{z})| \, d\mathbf{z}. \end{align} When we measure the deviation of the mixture distribution from the target distribution $q_\mathbf{x}$, we obtain \begin{align} D_\textnormal{TV}\infdivy{\beta p + (1 - \beta) q_\mathbf{x}}{q_\mathbf{x}} &= \beta D_\textnormal{TV}\infdivy{p}{q_\mathbf{x}}. \end{align} That is, the divergence decays exponentially with~$N$. An alternative approach to limiting the coding cost is to choose an invalid but larger $w_\textnormal{min}$. 
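The rejection-based scheme is short enough to sketch in full. In the snippet below (a sketch of ours; the target density $q(z) = 2z$ on $[0,1)$ and the trial counts are illustrative choices), sender and receiver share only a seed, so the receiver can regenerate candidate number $N_\textnormal{RS}^*$ once that index is transmitted:

```python
import numpy as np

def rs_encode(seed, q, p, draw_p, w_min):
    """Return the index (and value) of the first accepted candidate.
    The shared seed lets the receiver regenerate candidate n_star,
    so only the index needs to be transmitted."""
    rng = np.random.default_rng(seed)
    n = 1
    while True:
        z = draw_p(rng)                            # candidate Z_n ~ p
        if rng.uniform() <= w_min * q(z) / p(z):   # acceptance test
            return n, z
        n += 1

# Toy target of our choosing: p = Uniform[0,1), q(z) = 2z on [0,1).
# Then inf_z p(z)/q(z) = 1/2, so w_min = 0.5 is a valid constant.
q = lambda z: 2.0 * z
p = lambda z: 1.0
draw_p = lambda rng: rng.uniform()

results = [rs_encode(seed, q, p, draw_p, 0.5) for seed in range(5000)]
indices, samples = zip(*results)
print(np.mean(samples))   # ~ 2/3, the mean of q
print(np.mean(indices))   # ~ 1/w_min = 2 (geometric index distribution)
```

The average index confirms the geometric behavior discussed above: the expected coding cost is governed by $-\log w_\textnormal{min}$, not by $I[\mathbf{X}, \mathbf{Z}]$.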
\cite{harsha2007} described a related approach which effectively uses $w_\textnormal{min} = 1$ but is nevertheless able to produce an exact sample by adjusting the target distribution after each candidate rejection (Appendix~A). However, their approach is computationally expensive and generally infeasible for continuous distributions. \subsection{Minimal random coding} \label{sec:is} An approach closely related to \textit{importance sampling} was first considered by \cite{cuff2008} and later dubbed \textit{likelihood encoder} \citep{song2016ld}. The approach was independently rediscovered in machine learning by \cite{havasi2018miracle} who referred to it as \textit{minimal random coding} (MRC) and used it for model compression. It has since also been used for lossy image compression \citep{flamich2020cwq}. Unlike \cite{havasi2018miracle}, \cite{cuff2008} only considered discrete distributions and assumed that $p$ is the true marginal distribution of the data. But \cite{cuff2008} also described a more general approach where the amount of shared randomness between the sender and receiver is limited. In MRC, the sender picks one of $N$ candidates by sampling an index ${N_\textnormal{MRC}^*}$ from the distribution \begin{align} \pi_\mathbf{x}(n) \propto q_\mathbf{x}(\mathbf{Z}_n)/p(\mathbf{Z}_n). \label{eq:is} \end{align} Unlike RS, the distribution of $\mathbf{Z}_{N_\textnormal{MRC}^*}$ (call it $\tilde q_\mathbf{x}$) will in general only approximate $q_\mathbf{x}$. On the other hand, the coding cost of MRC can be significantly smaller. \cite{havasi2018miracle} showed that under reasonable assumptions, samples from $\tilde q_\mathbf{x}$ will be similar to samples from $q_\mathbf{x}$ if the number of candidates is \begin{align} N = 2^{D_\textnormal{KL}\infdivx{q_\mathbf{x}}{p} + t} \label{eq:is_cc} \end{align} for some $t > 0$. 
That is, the number of candidates required to guarantee a sample of high quality grows exponentially with the amount of information gained by the receiver. Since acceptable candidates may appear anywhere in the sequence of candidates, each index is a priori equally likely to be picked. That is, the marginal distribution of ${N_\textnormal{MRC}^*}$ is uniform and its entropy is \begin{align} H[{N_\textnormal{MRC}^*}] = \log N. \end{align} \subsection{Poisson functional representation} \label{sec:pfr} \cite{li2018pfr} introduced the following \textit{Poisson functional representation} (PFR) of a random variable. Let $T_n$ be the arrival times of a homogeneous Poisson process on the non-negative real line such that $T_n \geq 0$ and $T_n \leq T_{n + 1}$ for all $n \in \mathbb{N}$. Let $\mathbf{Z}_n \sim p$ for all $n \in \mathbb{N}$ and \begin{align} {N_\textnormal{PFR}^*} &= \argmin_{n \in \mathbb{N}} T_n \frac{p(\mathbf{Z}_n)}{q_\mathbf{x}(\mathbf{Z}_n)}. \label{eq:pfr} \end{align} Then $\mathbf{Z}_{N_\textnormal{PFR}^*}$ has the distribution $q_\mathbf{x}$. As in RS, the index ${N_\textnormal{PFR}^*}$ picks one of infinitely many candidates and we obtain an exact sample from the target distribution. However, \cite{li2018pfr} provided much stronger guarantees for the coding cost of ${N_\textnormal{PFR}^*}$. In particular, \begin{align} H[{N_\textnormal{PFR}^*}] \leq I[\mathbf{X}, \mathbf{Z}] + \log(I[\mathbf{X}, \mathbf{Z}] + 1) + 4. \end{align} The distribution of ${N_\textnormal{PFR}^*}$ takes a more complicated form than in RS or MRC \citep[][Eq. 29]{li2021lemma}. Nevertheless, a coding cost corresponding to the bound above can be achieved by entropy encoding ${N_\textnormal{PFR}^*}$ while assuming a Zipf distribution $p_\lambda(n) \propto n^{-\lambda}$ with \citep{li2018pfr} \begin{align} \lambda = 1 + 1 / (I[\mathbf{X}, \mathbf{Z}] + e^{-1}\log e + 1). \label{eq:lambda} \end{align} A downside of the PFR is that it depends on an infinite number of candidates. 
Unlike rejection sampling, we cannot consider the candidates in sequence but generally have to consider the scores of all candidates (Eq.~\ref{eq:pfr}). However, if we can bound the density ratio as in rejection sampling (Eq.~\ref{eq:rs_constant}), then we can terminate our search for the smallest score after considering a finite number of candidates. Let \begin{align} S_n^* = \min_{i \leq n} T_i \frac{p(\mathbf{Z}_i)}{q_\mathbf{x}(\mathbf{Z}_i)} \end{align} be the smallest score observed after taking into account $n$ candidates. Since $T_{m} \geq T_n$ for all $m > n$, all further scores will be at least $T_n w_\textnormal{min}$. Hence, if $S_n^* \leq T_n w_\textnormal{min}$, we can terminate the search for the best candidate. Algorithm~\ref{alg:pfr} summarizes this idea. Here, \texttt{simulate($n$, p)} is a function which simulates a distribution \texttt{p} by returning the $n$th pseudorandom sample derived from $n$ and an implicit random seed. Similarly, $\texttt{expon}(n, 1)$ simulates an exponential distribution with rate 1. \begin{algorithm}[t] \caption{PFR} \label{alg:pfr} \begin{algorithmic}[1] \Require $\texttt{p}, \texttt{q}, w_\text{min}$ \State $t, n, s^* \gets 0, 1, \infty$ \Statex \Repeat \State $z \gets \texttt{simulate}(n, \texttt{p})$ \Comment{Candidate generation} \State $t \gets t + \texttt{expon}(n, 1)$ \Comment{Poisson process} \State $s \gets t \cdot \texttt{p}(z) / \texttt{q}(z)$ \Comment{Candidate's score} \Statex \If{$s < s^*$} \Comment{Accept/reject candidate} \State $s^*, n^* \gets s, n$ \EndIf \Statex \State $n \gets n + 1$ \Until{$s^* \leq t \cdot w_\text{min}$} \Statex \State \textbf{return} $n^*$ \end{algorithmic} \end{algorithm} \subsection{Ordered random coding} \label{sec:sis} In typical applications of importance sampling we do not worry about coding costs, so it is perhaps unsurprising that the entropy of ${N_\textnormal{MRC}^*}$ is large. 
We here show that a slight modification reduces the entropy of the selected index while keeping the distribution of the communicated sample exactly the same. To generate a sample from $\pi_\mathbf{x}$ (Eq.~\ref{eq:is}), we can write \begin{align} {N^*} &= \argmax_{n \leq N}\log q_\mathbf{x}(\mathbf{Z}_n) - \log p(\mathbf{Z}_n) + G_n, \label{eq:gumbel_soft_max} \end{align} where the $G_n$ are Gumbel distributed random variables \citep{gumbel1954} with scale parameter 1. This is the well-known Gumbel-max trick for sampling from a categorical distribution \citep{maddison2014astar}. This trick still works if we permute the Gumbel random variables arbitrarily, as long as the permutation does not depend on the values of the candidates $\mathbf{Z}_n$. \\ \begin{theorem} Let $\tilde G_n$ be the result of sorting the Gumbel random variables in decreasing order such that $\tilde G_1 \geq \dots \geq \tilde G_N$ and define \begin{align} {\tilde N^*} &= \argmax_{n \leq N}\log q_\mathbf{x}(\mathbf{Z}_n) - \log p(\mathbf{Z}_n) + \tilde G_n. \end{align} Then $\mathbf{Z}_{{N^*}} \sim \mathbf{Z}_{{\tilde N^*}}$. \label{th:sis} \end{theorem} If the density ratio is bounded, the sample quality improves quickly once the coding cost exceeds the information contained in a sample. In particular, we have the following result. \\ \begin{theorem} Let $\tilde q$ be the distribution of $\mathbf{Z}_{{\tilde N^*}}$. If the number of candidates is $N = 2^{D_\textnormal{KL}\infdivx{q}{p} + t}$ and $p(\mathbf{z}) / q(\mathbf{z}) \geq w_\textnormal{min} > 0$ for all $\mathbf{z}$, then \begin{align} D_\textnormal{TV}\infdivy{\tilde q}{q} = O(2^{-t/8}). \end{align} \label{th:orc_tvd} \end{theorem} \vspace{-20pt} While the distribution of $\mathbf{Z}_{{\tilde N^*}}$ remains the same, the distribution of ${\tilde N^*}$ is no longer uniform but biased towards smaller indices. Since ${N^*}$ is uniform, we must have $H[{\tilde N^*}] \leq H[{N^*}]$. 
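Theorem~1 is easy to probe empirically. The toy check below (our construction; the proposal, target, and sample sizes are arbitrary choices) draws the selected index and sample with unsorted Gumbels (MRC) and with Gumbels sorted in decreasing order (ORC): the output distributions agree, while the ORC index concentrates on small values:

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy discrete setting of our choosing: uniform proposal p over d symbols,
# target q skewed towards larger symbols.
d, N, trials = 16, 256, 20000
p = np.full(d, 1.0 / d)
q = np.arange(1, d + 1, dtype=float)
q /= q.sum()

def select(sort_gumbels):
    Z = rng.integers(0, d, size=N)          # candidates Z_n ~ p
    G = rng.gumbel(size=N)                  # Gumbel(0, 1) noise
    if sort_gumbels:
        G = np.sort(G)[::-1]                # decreasing order (ORC)
    n_star = int(np.argmax(np.log(q[Z] / p[Z]) + G))
    return n_star, int(Z[n_star])

mrc = np.array([select(False) for _ in range(trials)])
orc = np.array([select(True) for _ in range(trials)])

# Same output distribution: both empirical histograms match q ...
tvd = lambda a: 0.5 * np.abs(np.bincount(a, minlength=d) / trials - q).sum()
print(tvd(mrc[:, 1]), tvd(orc[:, 1]))       # both small
# ... but the selected index is strongly biased towards 0 under ORC.
print(mrc[:, 0].mean(), orc[:, 0].mean())
```

The biased index distribution is exactly what makes entropy coding of ${\tilde N^*}$ cheaper than the uniform ${N^*}$ of MRC.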
In the next section, we show that $H[{\tilde N^*}]$ is in fact on a par with $H[{N_\textnormal{PFR}^*}]$. We dub the approach \textit{ordered random coding} (ORC) and pseudocode for ORC is provided in Appendix~B. \subsection{A unifying view} \label{sec:unify} In this section we show a close connection between methods based on importance sampling and the PFR. First, we can rewrite Eq.~\ref{eq:gumbel_soft_max} as follows, \begin{align} {N_\textnormal{MRC}^*} &= \argmin_{n \leq N} S_n \frac{p(\mathbf{Z}_n)}{q_\mathbf{x}(\mathbf{Z}_n)} \end{align} where $S_n$ are exponentially distributed with rate $1$. This is true because $-\log S_n$ is Gumbel distributed. Note the similarity to the PFR, \begin{align} {N_\textnormal{PFR}^*} &= \argmin_{n \in \mathbb{N}} T_n \frac{p(\mathbf{Z}_n)}{q_\mathbf{x}(\mathbf{Z}_n)}, \end{align} where $T_n \sim \sum_{m = 1}^n S_m$. ORC, on the other hand, first sorts the exponential random variables, $\tilde S_1 \leq \dots \leq \tilde S_N$, before choosing \begin{align} {N_\textnormal{ORC}^*} &= \argmin_{n \leq N} \tilde S_n \frac{p(\mathbf{Z}_n)}{q_\mathbf{x}(\mathbf{Z}_n)}. \end{align} It turns out that \citep{renyi1953os} \begin{align} \tilde S_n \sim \sum_{m = 1}^n \frac{S_m}{N - m + 1}. \label{eq:sorted_exponential} \end{align} This allows us to generate the sorted exponential variables in $O(N)$ instead of $O(N \log N)$ time with sorting. More interestingly, Eq.~\ref{eq:sorted_exponential} reveals a close connection to the PFR. Where the PFR uses cumulative sums of exponential random variables, ORC uses weighted sums. This representation allows us to arrive at the following result. \vspace{6pt} \begin{theorem} Let $S_n$ be exponentially distributed RVs and $\mathbf{Z}_n \sim p$ for all $n \in \mathbb{N}$ (i.i.d.), and let \begin{align} T_n &= \sum_{m = 1}^n S_m, & \tilde T_{N,n} &= \sum_{m = 1}^n \frac{N}{N - m + 1} S_m \end{align} for $N \in \mathbb{N}$. 
Further, let \begin{align} {N_\textnormal{PFR}^*} &= \argmin_{n \in \mathbb{N}} T_n \frac{p(\mathbf{Z}_n)}{q(\mathbf{Z}_n)}, \\ {N_\textnormal{ORC}^*} &= \argmin_{n \leq N} \tilde T_{N,n} \frac{p(\mathbf{Z}_n)}{q(\mathbf{Z}_n)}. \end{align} Then ${N_\textnormal{ORC}^*} \leq {N_\textnormal{PFR}^*}$. Further, there exists an $M \in \mathbb{N}$ such that for all $N \geq M$ we have ${N_\textnormal{ORC}^*} = {N_\textnormal{PFR}^*}$. \label{th:sis_pfr} \end{theorem} Note that the additional factor $N$ in the definition of $\tilde T_{N,n}$ compared to $\tilde S_n$ does not change the outcome of the argmin. Using Theorem~\ref{th:sis_pfr}, it is not difficult to see that the bound on the coding cost of the PFR also applies to ORC, yielding the following result. \vspace{6pt} \begin{corollary} Let $C = \mathbb{E}_\mathbf{X}[D_\textnormal{KL}\infdivx{q_\mathbf{X}}{p}]$ and let ${N_\textnormal{ORC}^*}$ be defined as in Theorem~\ref{th:sis_pfr}. Then \begin{align} H[{N_\textnormal{ORC}^*}] < C + \log (C + 1) + 4. \end{align} \label{th:sis_entropy} \end{corollary} \vspace{-12pt} To achieve this bound, a Zipf distribution $p_\lambda(n) \propto n^{-\lambda}$ with $\lambda = 1 + 1/(C + e^{-1}\log e + 1)$ can be used to entropy encode the index, analogous to the PFR (Eq.~\ref{eq:lambda}). The significance of these results is as follows. \cite{li2018pfr} showed that the PFR is near-optimal in the sense that the entropy of ${N_\textnormal{PFR}^*}$ is close to the worst-case coding cost needed for communicating a perfect sample. However, the construction of the PFR relies on an infinite number of candidates and all theoretical guarantees are lost if we naively limit the number of candidates to $N$. In particular, we do not know how quickly the quality of a communicated sample deteriorates as we decrease~$N$, or how large $N$ should be. 
On the other hand, we do have some idea of the quality of a sample obtained via importance sampling from a finite number of candidates \citep[e.g., Theorem~\ref{th:orc_tvd} or the results of][]{cuff2008,chatterjee2018is,havasi2018miracle}. But the coding cost of MRC is relatively large and continues to grow unbounded as $N$ increases. ORC combines the best of both by inheriting the coding cost of the PFR and the sample quality of MRC. Unlike the PFR, the guarantees of ORC hold for a finite number of samples. Unlike MRC, we can make $N$ arbitrarily large without having to worry about an exploding coding cost, which also makes it easier to tune this parameter. \subsection{Dithered quantization} \label{sec:dq} \textit{Dithered quantization}, also known as \textit{universal quantization} \citep{ziv1985universal}, refers to quantization with a randomly shif\-ted lattice. Consider a scalar $y \in \mathbb{R}$ and a random variable $U$ uniform over any interval of length one, such as $[0, 1)$. Then \citep[e.g.,][]{schuchman1964dither} \begin{align} \lfloor y - U \rceil + U \sim y + U_0, \end{align} where $U_0$ is uniform over $[-0.5, 0.5)$. More generally, let $Q$ be a quantizer which maps inputs to the nearest point on a lattice and let $\mathbf{V}$ be a random variable which is uniformly distributed over an arbitrarily placed Voronoi cell of the lattice. Further, let $\mathbf{V}_0$ be uniform over the Voronoi cell which contains the lattice point at zero. Then \citep{zamir2014book} \begin{align} Q(\mathbf{y} - \mathbf{V}) + \mathbf{V} \sim \mathbf{y} + \mathbf{V}_0. \end{align} Dithered quantization has been mainly used as a tool for studying quantization from a theoretical perspective. However, already \cite{roberts1962noise} considered some of its practical advantages over uniform quantization for compressing grayscale images, especially in terms of perceptual quality. 
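The scalar identity $\lfloor y - U \rceil + U \sim y + U_0$ is easy to verify numerically. In this sketch (the value of $y$ and the sample size are arbitrary choices of ours), the sender only ever encodes the integer $K = \lfloor y - U \rceil$:

```python
import numpy as np

rng = np.random.default_rng(2)
y = 3.7                           # arbitrary scalar to transmit
U = rng.uniform(size=100000)      # shared dither, Uniform[0, 1)

K = np.round(y - U)               # the only quantity the sender must encode
Z = K + U                         # receiver's reconstruction

# Z is distributed as y + U0 with U0 ~ Uniform[-0.5, 0.5):
print(Z.mean())                   # ~ y
print(Z.std())                    # ~ 1/sqrt(12) ~ 0.289
print(Z.min(), Z.max())           # contained in [y - 0.5, y + 0.5]
```

The empirical mean, spread, and support all match a uniform distribution of width one centered at $y$, as the identity predicts.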
\cite{theis2021stochastic} showed that universal quantization can outperform vector quantization where a realism constraint is considered. Universal quantization also recently started being used in neural compression \citep{choi2019uq}, in particular to realize differentiable training losses at inference time \citep{agustsson2020uq}. To communicate a sample of a uniform distribution centered around $y$, the sender encodes $K = \lfloor y - U \rceil$. The receiver decodes $K$ and computes $Z = K + U$, which is distributed as $y + U_0$. Like the reverse channel coding schemes discussed so far, this requires a shared source of randomness in the form of $U$. \cite{zamir1992universal} showed that dithered quantization is statistically efficient in the sense that \begin{align} H[K \mid U] = I[Y, Z]. \end{align} That is, the cost of encoding $K$ is as close as can be to the amount of information contained in $Z$. Note that we can condition on $U$ when encoding $K$ since $U$ is known to both sender and receiver. Another advantage of dithered quantization is that it is computationally highly efficient, at least for the simple case of the integer lattice. \subsection{Hybrid coding} \label{sec:hybrid} While dithered quantization is computationally much more efficient than general reverse channel coding schemes, it is also much more limited in terms of the distributions it can simulate. Here we propose a hybrid coding scheme for continuous distributions which retains most of the flexibility of general purpose schemes but is computationally more efficient when the support of the target distribution is small. The general idea is as follows. Instead of drawing candidates from a fixed distribution $p$, candidates $\mathbf{Z}_n$ are drawn from a distribution $r_\mathbf{x}$ which acts as a bridge and more closely resembles the target distribution $q_\mathbf{x}$. Since $r_\mathbf{x}$ is closer to $q_\mathbf{x}$, we will require fewer candidates to find one that is suitable. 
Let \begin{align} {N^*} &= \argmin_{n \leq N} \tilde T_{N,n} \frac{r_\mathbf{x}(\mathbf{Z}_n)}{q_\mathbf{x}(\mathbf{Z}_n)}, \label{eq:idx_hybrid} \end{align} be the index of the selected candidate. Unlike before, only the sender has access to the candidates and so knowing ${N^*}$ alone will not allow us to reconstruct $\mathbf{Z}_{N^*}$. Our hybrid coding scheme relies on two insights. First, the receiver does not require access to all candidates but only to the selected candidate. Second, the missing information can be encoded easily and efficiently if $r_\mathbf{x}$ can be simulated with dithered quantization. We assume for now that there exist vectors $\mathbf{c}_\mathbf{x}$ such that the support of $q_\mathbf{x}$ is contained in the support of \begin{align} r_\mathbf{x}(\mathbf{z}) = \begin{cases} 1 & \text{ if } \mathbf{z} \in \mathbf{c}_\mathbf{x} + [-0.5, 0.5)^D, \\ 0 & \text{ else } \end{cases} \end{align} which can be simulated via dithered quantization, \begin{align} \mathbf{K}_n &= \lfloor \mathbf{c}_\mathbf{x} - \mathbf{U}_n \rceil, & \mathbf{Z}_n &= \mathbf{K}_n + \mathbf{U}_n, \label{eq:bks_hybrid} \end{align} where $\mathbf{U}_n \sim \text{Uniform}([0, 1)^D)$. Define $\mathbf{K}^* = \mathbf{K}_{N^*}$. Hybrid coding transmits the pair (${N^*}, \mathbf{K}^*$), which the receiver uses to reconstruct the selected candidate via \begin{align} \mathbf{Z}_{N^*} = \mathbf{K}^* + \mathbf{U}_{N^*}. \end{align} \begin{theorem} Let ${N^*}$ and $\mathbf{K}^*$ be defined as in Eq.~\ref{eq:idx_hybrid} and below Eq.~\ref{eq:bks_hybrid} and let $p$ be the uniform distribution over $[0, M_1) \times \cdots \times [0, M_D)$ for some $M_i \in \mathbb{N}$. Then \begin{align*} H[{N^*}, \mathbf{K}^*] < C + \log (C - \textstyle\sum_i \log M_i + 1) + 4, \end{align*} where $C = \mathbb{E}_\mathbf{X}[D_\textnormal{KL}\infdivx{q_\mathbf{X}}{p}]$. 
\label{th:hybrid} \end{theorem} Theorem~\ref{th:hybrid} shows that $({N^*}, \mathbf{K}^*)$ is an efficient representation if the marginal distribution of $\mathbf{Z}$ is uniform over some box whose sides have lengths $M_i$, since then $C = I[\mathbf{X}, \mathbf{Z}]$. However, for continuous random variables this can always be achieved through a transformation $\Psi$. If $\tilde q_\mathbf{x}$ is the desired target distribution before the transformation, then \begin{align} q_\mathbf{x}(\mathbf{z}) &= \tilde q_\mathbf{x}(\Psi(\mathbf{z}) ) |D\Psi(\mathbf{z})| \end{align} is the target distribution in transformed space. Note that after the transformation, the support of $q_\mathbf{x}$ is always bounded. Moreover, for small enough $M_i$ the support will be contained in the support of $r_\mathbf{x}$, satisfying our earlier assumption. After transmitting a sample from $q_\mathbf{x}$ via hybrid coding, it is assumed that the receiver applies the transformation $\Psi$ to obtain a sample from $\tilde q_\mathbf{x}$. \begin{figure}[t] \centering \input{figures/hybrid.tex} \caption{Computational cost of communicating a sample from a (truncated) Gaussian whose mean varies with standard deviation $\sigma$. The vertical axis measures the average number of candidates considered before an algorithm terminates. Shaded regions indicate the 10th and 90th percentile.} \label{fig:hybrid} \end{figure} To achieve the bound in Theorem~\ref{th:hybrid}, the sender first encodes ${N^*}$ while assuming a Zipf distribution $p_\lambda(n) \propto n^{-\lambda}$ with \begin{align} \textstyle\lambda = 1 + 1 / (C - \sum_i \log M_i + e^{-1} \log e + 1). \end{align} The $K_i^*$ are subsequently added to the bit stream using a fixed rate of $\log M_i$ bits. Algorithm~\ref{alg:hybrid} describes the algorithm used for encoding. Here, \texttt{q} is the transformed density and \texttt{p} is not needed as it is assumed to be uniform. 
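A Python sketch of this encoder follows. Variable names mirror the pseudocode; as a simplification, the indexed shared randomness is replaced by a seeded generator (the seed is illustrative), and \texttt{q} is the transformed target density:

```python
import numpy as np

def hybrid_encode(c, q, w_min, N, seed=0):
    """Sketch of the hybrid encoder: dithered candidates scored as in ORC.

    c     : center of the unit box containing the support of q (vector)
    q     : target density in transformed space
    w_min : lower bound on the density ratio, used for early termination
    N     : maximum number of candidates
    """
    rng = np.random.default_rng(seed)                  # stands in for shared randomness
    t, s_best = 0.0, np.inf
    n_best, k_best = None, None
    for n in range(1, N + 1):
        u = rng.uniform(0.0, 1.0, size=np.shape(c))    # dither U_n
        k = np.round(c - u)                            # K_n = round(c - U_n)
        z = k + u                                      # candidate Z_n ~ r_x
        t += (N / (N - n + 1)) * rng.exponential(1.0)  # ordered arrival times
        s = t / q(z)                                   # candidate's score
        if s < s_best:                                 # accept/reject candidate
            s_best, n_best, k_best = s, n, k
        if s_best <= t * w_min:                        # no later candidate can win
            break
    return n_best, k_best
```

The pair `(n_best, k_best)` is then entropy-coded as described, with $N^*$ under the Zipf prior and each $K^*_i$ at a fixed rate of $\log M_i$ bits.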
Note that hybrid coding effectively reduces to ORC when the support is unconstrained ($M_i = 1$). For larger $M_i$, the bound on the coding cost improves only slightly but the computational cost reduces significantly. Since the number of candidates required for a sample of high quality grows exponentially in the KLD (Eq.~\ref{eq:is_cc}) and \begin{align} D_\text{KL}[q_\mathbf{x} \mid\mid r_\mathbf{x}] = D_\text{KL}[q_\mathbf{x} \mid\mid p] - \textstyle\sum_i \log M_i, \end{align} we should expect a speedup on the order of $\prod_i M_i$. We thus want to maximize $M_i$ while making sure that the support of $q_\mathbf{x}$ is still contained within that of $r_\mathbf{x}$. \begin{algorithm}[t] \caption{Hybrid coding} \label{alg:hybrid} \begin{algorithmic}[1] \Require $c, \texttt{q}, w_\text{min}, N$ \State $t, n, s^* \gets 0, 1, \infty$ \Statex \Repeat \State $u \gets \texttt{uniform}(n, 0, 1)$ \Comment{Candidate generation} \State $k \gets \texttt{round}(c - u)$ \State $z \gets k + u$ \Statex \State $v \gets N / (N - n + 1)$ \State $t \gets t + v \cdot \texttt{expon}(n, 1)$ \State $s \gets t / \texttt{q}(z)$ \Comment{Candidate's score} \Statex \If{$s < s^*$} \Comment{Accept/reject candidate} \State $s^*, n^*, k^* \gets s, n, k$ \EndIf \Statex \State $n \gets n + 1$ \Until{$s^* \leq t \cdot w_\text{min}$ \textbf{or} $n > N$} \Statex \State \textbf{return} $n^*, k^*$ \end{algorithmic} \end{algorithm} \section{EXPERIMENTS} \label{sec:experiments} \begin{figure*}[t] \centering \input{figures/comparison.tex} \caption{A comparison of various reverse channel coding algorithms for $2^{16}$-dimensional categorical distributions which are themselves randomly distributed according to a Dirichlet distribution. Dashed lines indicate the average KLD between the target distribution and the uniform candidate generating distribution. \textit{Left:} The sample quality as a function of the coding cost.
\textit{Middle:} The sample quality as a function of the computational cost (for which the number of iterations is a proxy, except for RS*). \textit{Right:} The coding cost as a function of the maximum number of candidates considered.} \label{fig:comparison} \end{figure*} We run two sets of empirical experiments to compare the reverse channel coding schemes discussed above. We first investigate the effect of hybrid coding on the computational cost of communicating a (truncated) Gaussian sample. We then compare the performance of a wider set of algorithms for the task of approximately simulating a categorical distribution. \subsection{Gaussian distribution} Consider the task of communicating a sample from a $D$-dimensional Gaussian with random mean, \begin{align} \mathbf{Z} &\sim \mathcal{N}(\mathbf{X}, \mathbf{I}), & \mathbf{X} &\sim \mathcal{N}(0, \sigma^2 \mathbf{I}), \end{align} where $\mathbf{I}$ is the identity matrix and the mean $\mathbf{X}$ itself is Gaussian distributed with covariance $\sigma^2\mathbf{I}$. The marginal distribution of $\mathbf{Z}$ is Gaussian with mean zero and covariance $\sigma^2\mathbf{I} + \mathbf{I}$ and so we use this distribution as our candidate generating distribution $p$. The average information gained by obtaining a sample is \begin{align} I[\mathbf{X}, \mathbf{Z}] &= \textstyle -\frac{D}{2} \log\left( 1 - \frac{\sigma^2}{\sigma^2 + 1} \right). \end{align} To be able to apply the hybrid coding scheme, we slightly truncate the target distribution by assigning zero density to a small fraction $\theta$ of values with the lowest density. The TVD between the truncated Gaussian and the Gaussian distribution is $\theta$. A classifier observing $\mathbf{Z}$ would be able to distinguish between these two distributions with an accuracy of at most $1/2 + \theta/2$. In our experiments, we fix $\theta = 10^{-4}$ so that this accuracy is close to chance. 
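The rate formula above is straightforward to evaluate numerically; a small helper (the function name is illustrative):

```python
import numpy as np

def gaussian_rate(sigma, D=1):
    """I[X, Z] in bits for Z ~ N(X, I) with X ~ N(0, sigma^2 I)."""
    return -0.5 * D * np.log2(1.0 - sigma**2 / (sigma**2 + 1.0))

print(gaussian_rate(1.0))    # 0.5 bit: the posterior halves the variance
print(gaussian_rate(100.0))  # grows roughly like D * log2(sigma) for large sigma
```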
We compare hybrid coding (Algorithm~\ref{alg:hybrid}) to ORC with $N = \infty$, which reduces to the PFR (Algorithm~\ref{alg:pfr}). Using an unlimited number of candidates allows us to avoid any further approximations and to focus on the computational cost. Appropriate values for $w_\text{min}$ and $M$ are provided in Appendix~H. Figure~\ref{fig:hybrid} shows the average number of iterations an algorithm runs before identifying a suitable candidate of a 1-dimensional (truncated) Gaussian. The computational cost of the PFR grows exponentially with the amount of information transmitted, which is approximately $\log\sigma$. On the other hand, the computational cost of the hybrid coding scheme quickly saturates and remains low throughout, allowing for much quicker communication of the Gaussian sample. \subsection{Categorical distribution} As another example we consider $D$-dimensional categorical distributions distributed according to a Dirichlet distribution with concentration parameter $\bm{\alpha}$. We chose $D = 2^{16}$ and $\alpha_i = 3\cdot 10^{-4}$ for all $i$, leading to sparse target distributions and a uniform marginal distribution. We include rejection sampling (RS) with an optimal choice for $w_\text{min}$ (a different value for each distribution) as well as the greedy rejection sampler (RS*) of \cite{harsha2007} in the comparison. For each method and target distribution, we simulate $10^5$ samples and measure the TVD between the resulting histogram and the target distribution. As a measure of the coding cost, we estimate the entropy of the index distribution obtained by averaging index histograms of 20 different target distributions. We explore the effects of limiting the number of candidates available to an algorithm. We find that the sample quality of all methods deteriorates quickly as the coding cost drops below the information contained in exact samples (Fig.~\ref{fig:comparison}, left and middle). 
RS performed surprisingly well in the bit-rate constrained regime (left) but not as well when constraining computational cost (middle). RS* performed even better in the low bit-rate regime but we note that its iterations' computational complexity is larger by a factor $D$ compared to the other methods. PFR and ORC performed best for samples of high quality. MRC \citep{cuff2008,havasi2018miracle} performed worse than the other methods mostly due to the coding and computational cost growing unboundedly with the number of candidates (Fig.~\ref{fig:comparison}, right). ORC addresses this issue such that its coding cost converges to that of the PFR \citep{li2018pfr}. \section{DISCUSSION} We demonstrated a close connection between minimal random coding \citep[MRC;][]{havasi2018miracle}, or likelihood encoding \citep{cuff2008,song2016ld}, and the Poisson functional representation \citep[PFR;][]{li2018pfr}. Ordered random coding (ORC) occupies a space between the two and benefits from the theoretical guarantees of both. In practice, we found that ORC can significantly outperform MRC, especially as the desired sample quality increases (achieving a 20\% reduction in coding cost at a TVD of about 0.02). Our second coding scheme enables much more efficient communication of samples from distributions with concentrated support. When the target distributions' support is unbounded, hybrid coding may still be applied after truncation, as in the Gaussian example. While the hybrid scheme is more efficient than other approaches for the Gaussian example, its cost still grows exponentially with dimensionality. A potential solution is to generate candidates using more sophisticated lattices than the integer lattice considered here \citep[e.g.,][]{leech1967,zamir2014book}. 
While some distributions are known to be hard to simulate \citep{long2010rbm}, an important task for future research is to characterize other distributions which can be simulated efficiently and which are therefore of great interest for practical applications. \subsubsection*{Acknowledgements} We would like to thank Abhin Shah, Johannes Ballé, Eirikur Agustsson, and Aaron B. Wagner for helpful discussions of the ideas presented in this manuscript. \bibliographystyle{abbrvnat}
\section{Introduction} Since the pioneering works of Bahcall \& Soneira (\cite{bah}) and Gilmore \& Reid (\cite{gil83}), the comparison with deep star counts has been a primary method to disentangle the stellar components of the Milky Way. However, the number of parameters and the observational uncertainties involved in this kind of analysis are often prohibitive. In contrast, nearby stars are a well-defined sample of disk stars whose distances are now well known, so one can extract detailed information about the local star formation rate and how this is connected with the whole Galactic disk. In this framework, the Hipparcos catalog provides a unique opportunity. Before Hipparcos, the local stellar population of bright stars, on the main sequence or in the red giant phase, was poorly represented; moreover, the distance uncertainties washed out most of the fine structure of the color-magnitude diagrams (CMDs). After Hipparcos, it became possible to study the color-magnitude diagram for local stars in a statistical sense, beyond the simple comparison between evolutionary tracks and single stars; a method for doing so is the subject of this paper. Published studies based on the Hipparcos local sample have concentrated on two approaches. One is the direct comparison between data and artificial CMDs using a likelihood function, e.g. Bertelli \& Nasi (\cite{bert}) and Schr\"oder \& Pagel (\cite{scho}). The second is the Bayesian approach, e.g. by Hernandez et al. (\cite{her2}) and Vergely et al. (\cite{verg}). Here we suggest a hybrid technique (for a complete description see Cignoni 2006). First, we adopt a {Bayesian treatment of the observational uncertainties}: the CMD is converted into an image (binning process) and a Richardson-Lucy algorithm is used to clean the data (see Cignoni \& Shore 2006 for details). This ``cleaned'' Hipparcos CMD can then be used to recover the local star formation rate (SFR).
This is done in different steps, as described in the following sections: (1) an ensemble of synthetic CMDs is generated using Monte Carlo simulations; (2) a likelihood function is minimized for the comparison between theory and observation; and (3) a confidence limit of the result is evaluated with a bootstrap technique. In section 2 we describe the properties of the selected volume-complete sample and how we removed the known stellar clusters, moving groups, and associations. Sections 3 and 4 describe the method: physical inputs adopted for the stellar evolutionary calculations, Monte Carlo generation of artificial CMDs, likelihood function and bootstrap. In section 5 we apply the algorithm to artificial CMDs, showing which parameters are critical for recovering the star formation rate. Section 6 shows the results for the Hipparcos data. Section 7 discusses the dependence of the recovered SFR on a kinematic selection of the data. Finally, in section 9 the results are discussed and compared with previous works available in the literature. \section{Sample selection} The Hipparcos mission observed objects to a limiting magnitude of about $V=12.5$ mag, with a completeness limit that depends on Galactic latitude $b$ and spectral type (see e.g. Perryman et al. \cite{per}): $V<7.9+1.1\sin |{b}|$ for spectral types earlier than or equal to G5, $V<7.3 +1.1\sin |{b}|$ for spectral types later than G5. For a volume-complete sample, we chose stars within 80 pc of the Sun and brighter than $V=8$ in visual apparent magnitude, corresponding to a limiting absolute visual magnitude $M_V=3.5$. Considering that the formal completeness limit depends on Galactic latitude and spectral type, we checked the sample for completeness against the Malmquist bias (in a magnitude-limited sample, the brighter stars are statistically over-represented). The bias was quantified by comparing the luminosity functions for subsamples with different heliocentric distances.
Figure \ref{malmb} shows the luminosity functions for stars with different distances from the Sun at intervals of 10 pc. \begin{figure} \centering \includegraphics[width=7cm]{5645fig01.eps} \caption{Luminosity functions (in absolute visual magnitude) for stars selected in distance (as labeled). All the stars are brighter than $V=8$.} \label{malmb} \end{figure} Using a Kolmogorov-Smirnov test, the hypothesis that the luminosity functions are realizations of the same distribution (for the range $-1<M_V<3.5$) cannot be rejected at the 10\% significance level. Thus, the sample of Hipparcos stars brighter than $V \sim 8$ and within 80 pc of the Sun should be complete down to $M_V=3.5$. This sample contains about $4000$ objects with a parallax error generally better than 10\% (see Fig. \ref{precision}). \begin{figure} \centering \includegraphics[width=7cm]{5645fig02.eps} \caption{Distribution of the parallax precision ($\Delta \pi/\pi$) for Hipparcos stars within 80 pc and brighter than $V=8$.} \label{precision} \end{figure} \subsection{Cluster contamination} We are interested in the local star formation activity as it reflects that of the whole disk. In contrast, nearby clusters and associations are groups of stars that are not dynamically mixed with the field and which, having been formed together in large numbers at specific times, do not represent the {\it mean properties} of the solar neighborhood. Figure \ref{asso} shows the identified association members within 80 pc and brighter than $V=8$. The most significant contamination is by the Hyades cluster, with about 120 identified members. \begin{figure} \centering \includegraphics[width=7cm]{5645fig03.eps} \caption{The selected Hipparcos sample where members of associations are shown.} \label{asso} \end{figure} Hereafter we call ``the Hipparcos sample'' the set obtained after the identified cluster members have been removed.
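The completeness check of this section amounts to a two-sample Kolmogorov-Smirnov comparison between the magnitude distributions of the distance shells. A minimal sketch with placeholder samples (the real inputs are the Hipparcos subsamples of Fig. \ref{malmb}):

```python
import numpy as np

def ks_statistic(a, b):
    """Two-sample KS statistic: maximum distance between empirical CDFs."""
    a, b = np.sort(np.asarray(a, float)), np.sort(np.asarray(b, float))
    grid = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, grid, side="right") / a.size
    cdf_b = np.searchsorted(b, grid, side="right") / b.size
    return float(np.max(np.abs(cdf_a - cdf_b)))

# Placeholder absolute magnitudes for two distance shells, -1 < M_V < 3.5.
rng = np.random.default_rng(0)
shell_a = rng.uniform(-1.0, 3.5, 400)
shell_b = rng.uniform(-1.0, 3.5, 600)

d = ks_statistic(shell_a, shell_b)
# The common-parent hypothesis is rejected at level alpha when d exceeds
# c(alpha) * sqrt((n + m) / (n * m)); for alpha = 0.10, c ~ 1.22.
threshold = 1.22 * np.sqrt((400 + 600) / (400 * 600))
same_parent = d <= threshold
```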
\section{The model} \subsection{Theoretical inputs} \label{presc} To recover the star formation history from the observational CMD one needs to model a synthetic population with various theoretical ingredients. Using a Monte Carlo algorithm (CERN library), masses and ages are extracted according to the assumed initial mass functions (IMFs) and star formation rates (SFRs). Then, a suitable age-metallicity relation (AMR) is adopted. The extracted synthetic stars are placed in the CMD by interpolation among the adopted stellar evolution tracks. In order to take into account the presence of binary stars, a chosen fraction of these stars are assumed to be binaries and paired with a companion star. The code relies on a set of evolutionary computations covering the mass range $\approx\,0.1$ to $\approx\,7 M_{\odot}$ with a fine grid, with metallicities from $Z=0.001$ to $Z=0.03$. The original helium abundance has been chosen by assuming a primordial helium abundance $Y_P=0.23$ and $\Delta Y/ \Delta Z \sim 2$ (see e.g. Pagel \& Portinari \cite{pag}, Castellani, Degl'Innocenti \& Marconi \cite{cast99a}). The adopted color transformations are from Castelli, Gratton \& Kurucz (1997). For masses greater than $0.5\, M_{\odot}$ we use the Cariulo et al. (\cite{car}), Castellani et al. (\cite{cast03}), Castellani, Degl'Innocenti \& Marconi (\cite{cast99a}) evolutionary tracks (partially available at the URL: http://astro.df.unipi.it/SAA/PEL/Z0.html). The input physics adopted in the models is described in Cariulo et al. (\cite{car}).
Convective regions, identified following the Schwarzschild criterion, are treated with the mixing length formalism in which the mixing length parameter $\alpha$ defines the ratio between the mixing length and the local pressure scale height; we have uniformly adopted $\alpha =1.9$, which has been calibrated so as to reproduce, with the adopted color transformations, the observed RG branch color of the Galactic globular clusters and young globulars in the LMC (Cariulo et al. \cite{car}, Castellani et al. \cite{cast03}, Brocato et al. \cite{broc}). The solar mixture adopted for the calculations is $[Z/X]_{\odot}=0.0245$ (Grevesse \& Noels \cite{greve}, GN93). We use throughout the canonical assumption of inefficient overshooting, and the He burning structures are calculated according to the prescriptions of canonical semiconvection induced by the penetration of convective elements in the radiative region (Castellani et al. \cite{cast85}). Less massive stars ($0.5\, M_{\odot}<\,M\,<\,0.7\, M_{\odot}$), whose evolutionary times are longer than the Hubble time, have been evolved up to central H exhaustion. For very low mass stars ($M<0.5\, M_{\odot}$) we used the Zero Age Main Sequence positions by Baraffe et al. (\cite{bar7},\cite{bar8}). \subsection{Artificial CMDs} \label{stepping} The so-called ``forward'' approach to finding a SFR that can generate an observed photometric sample is to produce artificial CMDs and compare them with the observations. The first technical problem of such an approach is the time required for the Monte Carlo generation of a CMD for each SFR. Both the data and the artificial photometry are stored in a color-magnitude grid, each bin of which contains the number of stars observed or predicted to be in it. The SFR with the highest probability of generating the data is chosen by means of a likelihood test.
So, to explore a sufficiently wide range of star formation histories, it is necessary to construct a basis set of partial CMDs\footnote{In contrast to the isochrones used for cluster simulations, these partial CMDs form a statistically {\it fuzzy} set in the sense that they must span a range of color, luminosity, and abundance depending on the mass function adopted for the model. This appears more than an analogy; it may be possible to apply some of the methods already developed in this field to study even deeper survey fields where the data are less well constrained than the Hipparcos sample.}, each with $\approx 10^5$ stars, generated with a step star formation rate that is uniform in a given time interval and zero elsewhere. The step functions must be exhaustive (the sum covering the whole Hubble time) and they cannot overlap. Thus, for each combination of IMF and AMR, the CMD corresponding to any SFR is computed as a linear combination of the partial CMDs: \begin{eqnarray} &&m_{i}=\sum_{j}r_{j}\,c_{ij}\label{rl3} \end{eqnarray} where $m_{i}$ is the number of stars in the final CMD in bin $i$, $r_{j}$ is the star formation rate for the partial CMD $j$, and $c_{ij}$ is the number of stars in bin $i$ belonging to the partial CMD $j$. The number of Monte Carlo simulations is reduced to the number of partial CMDs. This method has already been applied by several authors (see e.g. Aparicio, Gallart \& Bertelli \cite{apa1}-\cite{apa2}, Gallart et al. \cite{gal}, Bertelli \& Nasi \cite{bert}). The duration of each star formation interval is chosen depending on the timescale of the typical stellar population involved, in order to properly sample even those stars with rapid evolutionary changes. Thus, we have chosen star formation steps of 0.5 Gyr for stars younger than 2 Gyr, while for the later epochs the duration is increased (1 Gyr for stars with age between 2 and 4 Gyr, 2 Gyr for older stars).
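The linear combination of Eq. (\ref{rl3}) is simply a matrix-vector product over the basis of partial CMDs; a toy numpy sketch (the numbers are arbitrary):

```python
import numpy as np

# c[i, j]: stars in CMD bin i contributed by partial CMD j (toy numbers);
# r[j]:    star formation rate coefficient of time step j.
c = np.array([[120.0, 10.0,  0.0],
              [ 40.0, 80.0,  5.0],
              [  2.0, 30.0, 90.0]])
r = np.array([0.5, 1.0, 0.2])

m = c @ r   # m_i = sum_j r_j c_ij: the composite model CMD histogram
```

Only the matrix $c$ requires Monte Carlo simulation; any candidate SFR is then evaluated at the cost of one product.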
\section{The comparison between theory and observation} \subsection{Choosing the grid} Choosing the grid dimension for the CMD binning is an essential step. If the CMD bins are too small, the histogram fluctuations are too large and it is more difficult to recover the underlying SFR. If the bin size is too large, one can lose information. A first rule for choosing the grid follows from the typical masses involved in the sample: massive stars have shorter lifetimes (and the corresponding CMD regions are underpopulated), so it is suitable to use a larger bin size in order to map their history. Another constraint comes from the evolutionary phases of the stars mapping the CMD: after the main sequence, the partial CMDs become nearly degenerate and consequently the grid needs to be finer. The adopted binning must be determined by numerical simulations for the specific problem, checking the sensitivity of the algorithm to the various choices. \subsection{Searching for the ``best model''} To quantify the similarity among CMDs, we chose a Poisson-based likelihood function: \begin{eqnarray} \chi_P= \sum_{i=1}^{N_{\rm bin}} \left( n_{i}\ln\frac{n_{i}}{m_{i}}-n_{i}+m_{i} \right) \label{pois1} \end{eqnarray} where $m_{i}$ and $n_{i}$ are the model and the data histogram in bin $i$. This ``likelihood'' is considered as a mere ``distance'' to be minimized, while \emph{the acceptance level of a solution is estimated using a bootstrap technique}. Our model depends on several parameters (10 coefficients of the star formation rate); thus, the problem is to move within this multi-dimensional parameter space and to search for the combination of parameters that minimizes $\chi_P$. For this task we implement the Nelder-Mead simplex method (Nelder \& Mead, \cite{neld}). The main problem of the simplex method is the efficiency: the presence of many local minima can prevent reaching a real global minimum for $\chi_P$.
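The Poisson statistic of Eq. (\ref{pois1}) is straightforward to implement; a minimal sketch (toy histograms, with the convention that bins with $n_i = 0$ contribute $m_i$):

```python
import numpy as np

def chi_poisson(n, m):
    """Poisson 'distance' of Eq. (2): sum_i n_i ln(n_i/m_i) - n_i + m_i.

    Bins with n_i = 0 contribute m_i (the n ln n term vanishes);
    assumes m_i > 0 wherever n_i > 0.
    """
    n = np.asarray(n, float)
    m = np.asarray(m, float)
    safe_n = np.where(n > 0, n, 1.0)   # avoid log(0); the term vanishes anyway
    safe_m = np.where(m > 0, m, 1.0)
    log_ratio = np.where(n > 0, np.log(safe_n / safe_m), 0.0)
    return float(np.sum(n * log_ratio - n + m))
```

In the fit itself, `chi_poisson(data, c @ r)` would be minimized over the coefficients $r_j$, e.g. with a Nelder-Mead simplex; a perfect model ($m_i = n_i$ everywhere) gives $\chi_P = 0$.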
To improve the efficiency we add a logarithmically distributed random variable to each vertex of the simplex; the minimization algorithm is restarted each time a ``global minimum'' is found, and the new starting point is chosen randomly in the parameter space (in order to avoid dependence on the initial guess), yielding a class of best values. The final best parameters are those giving the smallest value within this class. The restart process is stopped when this ``minimum value'' no longer changes. \subsection{Confidence intervals} The bootstrap method is commonly used to estimate confidence intervals. In empirical bootstrap simulations one processes the original data set $N$ times (copies), so that each of the original $n$ data points is sampled with replacement\footnote{In a random sample with replacement, each observation in the data set has an equal chance of being selected and it can be selected over and over again.} and with equal probability of being sampled. One finally obtains $N$ different data sets, each with $n$ data points. Because of the replacements, some values in each data set are repeated, while others are lacking. This mimics the observational process: if the observational data are representative of the underlying distribution, the data produced with replacements are copies of the original one with local crowding or sparseness. In practice, the method applies to the bootstrapped copies the same minimization procedure as performed on the real data set. The result is a set of ``best'' parameters. The confidence interval is then the interval that contains a defined percentage of this parameter distribution. \section{Sensitivity tests with artificial data} Applying the method to artificial data, we have tested: \begin{enumerate} \item how the completeness limit, which fixes the boundaries in magnitude of the CMD used for the analysis, affects the result. Different zones in the CMD give information about different epochs of star formation.
The completeness limit determines our ``zone of ignorance'' for the underlying SFR. \item how sensitive the recovered SFR is to parametric functions such as the IMF, AMR, and binary population. This procedure may highlight parameters whose values need to be known very accurately in advance; \item how the contamination of accidental intruders (e.g. stellar cluster stars) in the sample could lead to a biased SFR. \end{enumerate} For all tests we have fixed the grid bin size. Young partial CMDs comprise massive stars, so the spanned region is broad (the more massive the star is, the longer its excursion in color to reach the red giant branch) and poorly populated: small bins contain few stars and the recovered SFR suffers from low-number statistics. Old partial CMDs are composed of less massive stars, so a narrower bin size is required to distinguish different SFRs. After some trials we found that a bin size of 0.05 mag, both for the color and absolute magnitude, is the best compromise. {\it Hereafter, we call ``artificial data'' each synthetic CMD mimicking the Hipparcos data, while the theoretical CMD is simply called the ``model''}. \subsection{Completeness limits} The completeness limit of the observed CMD limits our ability to exploit all the information contained in the CMD. In this section we explore how different limits in absolute magnitude can modify the recovered SFR. We analyze three different completeness limits: $M_{V}=2.5, 3.5, 4.5$. During the test we separately select the contribution given by main sequence stars and by later evolutionary phases (RGB and red clump), the latter defined as stars with $B-V>0.8$. The results are drawn in Fig. \ref{zone} (here and in the following all SFR histograms have unit area).
\begin{figure*} \centering \includegraphics[width=6.5cm,height=5.5cm]{5645fig04.eps} \includegraphics[width=6.5cm,height=5.5cm]{5645fig05.eps}\\ \includegraphics[width=6.5cm,height=5.5cm]{5645fig06.eps} \includegraphics[width=6.5cm,height=5.5cm]{5645fig07.eps}\\ \includegraphics[width=6.5cm,height=5.5cm]{5645fig08.eps} \includegraphics[width=6.5cm,height=5.5cm]{5645fig09.eps} \caption{``True'' (solid line) and recovered (dashed line) SFR from stars brighter than $M_V=2.5, 3.5, 4.5$. On the left the figures show the results obtained using only main sequence stars. The figures on the right represent the recovered SFR by including all the evolutionary phases.} \label{zone} \end{figure*} Main sequence stars with $M_V < 2.5$ mag give information about only the recent SFR, while we learn nothing about stars older than 3 Gyr (this is evident from the large error bars on the recovered SFR beyond this age, which reflect the inability of the procedure to recover the SFR in the absence of information). The result is significantly improved by also including later evolutionary phases. The recovered SFR is close to the original one, even if the error bars are quite large. This is due to the fact that one obtains information only from fast evolutionary phases (clump and red giants for the past star formation and upper main sequence for the recent one) and the probability of finding stars in these zones is low. Obviously we are considering a perfect situation where both the chemical composition and the IMF of the stars are well known, so the uncertainty is as small as possible; for real data other sources of uncertainty occur. Including stars down to $M_{V}=3.5$, the precision of the recovered SFR increases, and from the main sequence alone one can obtain the SFR up to $\sim 6$ Gyr. However, to study earlier epochs it is necessary to include later evolutionary phases.
Also in this case the uncertainty in the recovered SFR, for stars older than 6 Gyr, is large. The reason is the same: for $M_V<3.5$, the information on the ancient star formation comes only from late evolutionary phases, which are too rapid to provide a large number of stars. By increasing the magnitude limit to $M_{V}=4.5$, the entire SFR is recovered with small uncertainties. Using the main sequence alone, however, produces a systematic difference between the input and recovered SFR for the oldest epochs, while including late evolutionary phases leads to the right solution. This finding is easier to understand if one looks at how different masses contribute to different epochs of star formation. Figure \ref{grotte} shows this map (for minimum luminosities $M_{V}=3.5$ and $4.5$) for an artificial population built with a constant SFR (0-12 Gyr) and Salpeter IMF. \begin{figure} \centering \includegraphics[width=7.5cm]{5645fig10.eps} \includegraphics[width=7.5cm]{5645fig11.eps} \caption{Theoretical distributions in time of stars generated with a flat SFR (0-12 Gyr), Salpeter IMF and solar composition. Different lines indicate different mass ranges. In the upper panel only stars with visual absolute magnitude below 3.5 are plotted, while in the lower panel the magnitude limit is $M_{V}=4.5$.} \label{grotte} \end{figure} If the minimum luminosity is set to $M_{V}= 3.5$ we cannot see the main sequence for masses below $1.2\,M_{\odot}$. So in this mass range the RGB and He burning stars provide information about the earlier epochs, but not on the recent SFR. At $M_{V}= 4.5$ we see the main sequence down to $1 \,M_{\odot}$, so we can analyze with this mass range each period between now and $10-12$ Gyr ago. These results are \emph{not} linked to the particular SFR parameterization: Fig. \ref{sfrs} shows the recovered SFR for different input SFRs. \begin{figure} \centering \includegraphics[width=8cm]{5645fig12.eps} \caption{As in Fig.
\ref{zone} for $M_V<4.5$ and all evolutionary phases for different input SFR shapes.} \label{sfrs} \end{figure} In the following we present test results involving artificial data adopting the magnitude limit $M_V=4.5$. \subsection{IMF - SFR degeneracy} \label{IMF - SFR degeneracy} Even if the form of the IMF is well defined for masses above $1\,M_{\odot}$ from observations and theoretical analyses (see e.g. Larson et al. \cite{lars}), the precise value of the exponent is still debated. For instance, Kroupa (\cite{krou}) finds the Salpeter exponent 2.3$\pm$0.7 to be the most likely value, and any study of the local SFR must account for this uncertainty. The possibility that changes in the IMF mimic the effects of the SFR on the CMD is a well-known degeneracy. We analyzed the sensitivity of the recovered SFR to the chosen IMF using different IMF exponents ($s=1.3,\, 2.3,\, 3.3,\, 4.3$) for the artificial data, while in the model a \emph{fixed IMF exponent equal to 2.3} was used. The results are shown in Fig. \ref{imfs}. It is noteworthy that the input SFR is always recovered: even if a wrong IMF is adopted (that is, different from the one used for the artificial data), it does not lead to a wrong solution (at least for ``reasonable'' IMF exponents less than 4). In conclusion, for the mass range covered by the Hipparcos sample ($M_V\leq 4.5$), \emph{the IMF exponent alone is not a crucial parameter if the AMR is known in advance}. Figure \ref{grotte} shows why. For ages $ < 6$ Gyr the CMD is populated by the whole mass spectrum (only very massive stars are dead), so a variation of the IMF modifies the population in this age range in the same way (the relative SFR is preserved). In contrast, the old eras (8 to 12 Gyr) include only low mass objects (even intermediate mass stars are already dead); thus the IMF variations mainly alter the ratio between old (older than 8 Gyr) and recent star formation (see Fig. \ref{imfs}).
\begin{figure} \centering \includegraphics[width=8cm]{5645fig13.eps} \caption{The ``true'' (solid line) and the recovered SFR for different values (labeled) of the IMF exponent used in the artificial data. The IMF exponent used in the model is fixed to 2.3. } \label{imfs} \end{figure} \subsection{Binaries - SFR degeneracy} \label{Binaries - SFR degeneracy} Another source of uncertainty, when one looks at the solar neighborhood, is the percentage of stars in unresolved binary systems. Our model does not account for binary evolution with mass exchange: we assume that each binary component evolves as a single star. Our knowledge of binary star populations and evolution in the local disk is far from perfect, so including interacting binaries in the simulations would involve many unknown parameters (such as the mass exchange rate, the evolution of the separation, etc.). However, the mere presence of a given percentage of unresolved binary systems affects the CMD morphology. In order to perform this analysis, we have built the usual artificial data using different prescriptions for the binary population (10\%, 30\%, and 50\% binaries, with either random or equal mass ratios). The partial CMDs used in the model are built with the same composition and IMF adopted for the artificial data, but without binaries. The results of the simulations are shown in Fig. \ref{binarie}. As found for the IMF, if the mass ratio is uniformly distributed, we can recover the correct SFR independently of the presence of binaries. In the extreme case of equal mass ratio ($q=1$), the recovered SFR is severely biased. In particular, the presence of binaries does not affect the recent SFR; only the old SFR is modified. \begin{figure} \centering \includegraphics[width=8cm,height=7cm]{5645fig14.eps}\\ \includegraphics[width=8cm,height=7cm]{5645fig15.eps} \caption{``True'' (solid line) and recovered SFRs.
The artificial data are generated with the indicated percentage of binaries and mass ratio (random number or unity), while the model is without binaries.} \label{binarie} \end{figure} The explanation comes from the displacement that binaries cause relative to the single-star CMD: if the mass ratio is uniform, the smearing effect is random and the main sequence is merely wider (this effect is generally smaller than 0.05 mag, which is the binning size); for the SFR recovery it acts like an increased photometric error. If, however, the mass ratio is unity, the global effect is systematic and the main sequence develops a parallel secondary sequence: in this case the recovered SFR is systematically biased. Thus, to recover a star formation rate, the choice of the binary population is not crucial, but if an extreme prescription is adopted (e.g. equal masses), the recovered SFR may be biased. \subsection{Metallicity - SFR degeneracy} An old, metal-poor stellar population can mimic a younger, metal-rich one. Moreover, knowing that the solar neighborhood is a mix of stars of different chemical compositions (see e.g. Nordstr\"om et al. \cite{nord}), we expect a very complex influence of the composition on the CMD morphology. These simple considerations oblige us to check whether it is still possible to recover the SFR when only partial knowledge of the age-metallicity relation is available. In these calculations we now assume that the IMF and the binary population are the same both in the model and in the artificial data. \subsubsection{Artificial data with single metallicity}\label{Artificial data with single metallicity} As a first step, we built an artificial population with a single metallicity (no AMR); the SFR is then recovered assuming a wrong metallicity. The results are shown in Fig. \ref{metawrong}. \begin{figure} \centering \includegraphics[width=8cm]{5645fig16.eps} \caption{Sensitivity test to metallicity. The adopted composition for the model is solar.
If the data have the same composition, the ``true'' SFR (heavy solid line) is close to the recovered one (heavy dashed line). If the data metallicity is slightly different from the solar value ($\Delta Z= \pm 0.005$), relevant systematic deviations appear in the solution.} \label{metawrong} \end{figure} If we adopt the same metallicity for the artificial data and for the model, the SFR is recovered. \emph{However, if we slightly change the metallicity in the data ($\Delta Z= \pm 0.005$), without changing the composition of the model, systematic discrepancies appear in the recovered SFR}. If the artificial data are metal-poor compared to the model, the recovered old SFR is underestimated (and the recent one is overestimated); the opposite holds for metal-richer artificial data. This result is a strong warning about the widely used assumption of solar composition for nearby stars: \emph{small deviations from the solar value could bias the derived SFR}. Moreover, we know that deviations from solar values exist and are usually much larger than $\Delta Z= \pm 0.005$. Figure \ref{zt} shows the age-metallicity relation and the metallicity distribution by Nordstr\"om et al. (\cite{nord}), the most representative census of ages and metallicities in the solar neighborhood. This is characterized by a constant mean metallicity and a large scatter at all ages (about $\sigma \sim 0.2\,\rm dex$ in $[Fe/H]$). The quoted formal error of $\sim 0.1$ dex in $[Fe/H]$ cannot account for the observed spread. \begin{figure*} \centering \includegraphics[width=8cm]{5645fig17.eps} \includegraphics[width=8cm]{5645fig18.eps} \caption{Left panel: age-metallicity diagram for single stars (within 40 pc from the Sun) with age determination better than 25\% (from Nordstr\"om et al. \cite{nord}).
Right panel: the distribution of $[Fe/H]$ for the same sample.} \label{zt} \end{figure*} \subsubsection{Artificial data with an age-metallicity dispersion} In order to test the sensitivity to a metallicity dispersion, artificial data were generated with the observed mean metallicity plus a variable dispersion (the explored range is from $\sigma=0.01$ dex to $\sigma=0.2$ dex in $[Fe/H]$). The conversion between $[Fe/H]$ and $Z$ that is appropriate for our models, calculated for the GN93 composition ($(Z/X)_{\odot}=0.0245$), is: \begin{eqnarray} \log Z = 0.73\,[Fe/H]-1.61 \label{fehz} \end{eqnarray} The enhancement of $\alpha$ elements is not included because it has been shown to be negligible for disk stars. The SFR was then recovered, adopting in the model the same mean metallicity as the artificial data, but \emph{without metallicity spread}. The results are shown in Fig. \ref{tantedisp}, \begin{figure*} \centering \includegraphics[width=7cm]{5645fig19.eps} \includegraphics[width=7cm]{5645fig20.eps}\\ \includegraphics[width=7cm]{5645fig21.eps} \includegraphics[width=7cm]{5645fig22.eps} \caption{Solid line: SFR assumed for the artificial data. Dashed line: recovered SFR. The $\sigma$ value indicates the dispersion in $[Fe/H]$ used for the artificial data. The model has the same mean metallicity, but no dispersion.} \label{tantedisp} \end{figure*} where the solid line is the input SFR and the dashed line the recovered one. Above $\sigma=0.1$ dex, most of the information contained in the SFR is lost: this numerical experiment shows that the metallicity dispersion can be a critical factor. \emph{A wrong estimate of the dispersion leads to a wrong solution.} For the final test we adopt the same dispersion both in the artificial data and in the model.
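As a quick numerical aside, the size of these metallicity effects can be gauged with Eq. (\ref{fehz}), read as the linear relation $\log Z = 0.73\,[Fe/H]-1.61$ (which returns $Z\simeq 0.0245$ at solar $[Fe/H]$). The sketch below is illustrative only, not the code used for the extraction:

```python
import random

def feh_to_z(feh):
    """[Fe/H] -> Z via the linear relation log Z = 0.73*[Fe/H] - 1.61
    (assumed linear reading of the paper's relation; illustrative only)."""
    return 10 ** (0.73 * feh - 1.61)

print(f"Z([Fe/H]=0) = {feh_to_z(0.0):.4f}")  # close to the (Z/X)_sun calibration

# A sigma = 0.2 dex spread in [Fe/H] around the observed mean of -0.15
# translates into a ~40% spread in Z either way:
random.seed(1)
zs = sorted(feh_to_z(random.gauss(-0.15, 0.2)) for _ in range(10000))
print(f"16th-84th percentile of Z: {zs[1600]:.4f} - {zs[8400]:.4f}")
```

A dispersion of 0.2 dex thus changes $Z$ by roughly 40\% in either direction, which is why neglecting it distorts the partial CMDs so strongly.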
The idea is to check whether, even when the metallicity dispersion of the data is correctly estimated, the spread of the CMD due to the metallicity dispersion still allows the underlying SFR to be recovered. To make the test realistic, we adopt the metallicity distribution by Nordstr\"om et al. (\cite{nord}) (Fig. \ref{zt}). The overall effect is a broadening of the partial CMDs. The SFR extraction is presented in Fig. \ref{halfrec} for an artificial population generated with this AMR. \begin{figure} \centering \includegraphics[width=8cm]{5645fig23.eps} \caption{Input SFR (solid line) compared with the recovered SFR (dashed line). Model and artificial data have the same metallicity dispersion. } \label{halfrec} \end{figure} There are systematic shifts between the recovered and the input SFRs, indicating a limit in our ability to distinguish different stages of star formation (for a comparison with the single-metallicity tests, see section~\ref{Artificial data with single metallicity}), but the trend is preserved. The implication for real data is actually encouraging: if the nearby stars show an age-metallicity relation like the Nordstr\"om et al. result, the application of the model to Hipparcos stars can give information on the real SFR. \subsection{Contamination from clusters and associations} The solar neighborhood includes stellar clusters, or parts of them. We have removed about 80 stars (mainly Hyades stars) within 80 pc and brighter than $M_V=3.5$. This number may appear small ($\sim 2\%$ of the total sample), but these objects are concentrated in time, so they can produce a burst in the SFR (at $\sim 0.5$ Gyr for the Hyades) that does not represent a Galactic field property. However, we cannot exclude that some cluster members remain as yet unidentified, so it is interesting to analyze the impact of clusters on the recovered SFR.
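Before describing the test, a back-of-the-envelope estimate shows why even a small contamination can matter. The sketch assumes an otherwise flat SFR and is purely illustrative:

```python
def burst_enhancement(f, dt, T):
    """Factor by which a single age bin of width dt (Gyr) is enhanced
    when a fraction f of the sample is concentrated there, on top of a
    flat SFR spanning T (Gyr). Purely illustrative arithmetic."""
    flat_level = (1.0 - f) / T          # stars per Gyr from the flat SFR
    bin_level = flat_level + f / dt     # flat share plus all contaminants
    return bin_level / flat_level

for f in (0.02, 0.15):
    r = burst_enhancement(f, dt=0.5, T=12.0)
    print(f"contamination {f:.0%}: bin enhanced by ~{r:.1f}x")
```

With a 0.5 Gyr bin, a 2\% contamination enhances the affected bin by only $\sim$50\% (small enough to stay within the error bars), while 15\% produces a $\sim$5$\times$ spike.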
We have contaminated an artificial sample with a variable percentage of synthetic Hyades-like stars (500 Myr and solar metallicity), from 2\% to 15\%, with the results shown in Fig. \ref{clusterhip}. At 2\% contamination the SFR changes within the error bars. Increasing the cluster percentage, the peak at 500 Myr becomes progressively more evident. At 15\%, the recovered SFR is perturbed on a scale of 5 Gyr. \begin{figure} \centering \includegraphics[width=8cm]{5645fig24.eps} \caption{Recovered star formation rates for different contaminations (the percentage is labeled) of Hyades-like stars. } \label{clusterhip} \end{figure} The same test has been performed with a synthetic cluster at 2 Gyr. Figure \ref{clusterhip2} shows the recovered SFR when 15\% of synthetic cluster stars are added to the artificial data. It is evident that the changes in the recovered SFR are not isolated at 2 Gyr: the whole SFR shape between 2 and 7 Gyr is altered. \begin{figure} \centering \includegraphics[width=8cm]{5645fig25.eps} \caption{The solid and dashed lines are the recovered SFRs when the contamination by cluster stars (2 Gyr old) is 0 and 15\%, respectively. } \label{clusterhip2} \end{figure} \section{Comparison with real data} We are now ready to present the SFR extraction from the real Hipparcos data. Because neither the IMF nor the binary population is a crucial factor (see sections~\ref{IMF - SFR degeneracy} and \ref{Binaries - SFR degeneracy}), we have fixed them: the IMF exponent is Salpeter, there are no binaries, and we have again assumed the Nordstr\"om et al. AMR. Before applying the SFR extraction method to real data, we must treat the observational errors. Cignoni \& Shore (2006) show how the Richardson-Lucy algorithm allows one to restore a CMD corrupted by a point spread function. Here, we apply the SFR extraction to the Hipparcos CMD that was previously deconvolved by this algorithm.
However, the result of an R-L restoration is a two-dimensional histogram, and the information on individual stars is lost. Thus we cannot directly apply the bootstrap technique to determine the variance of the recovered SFR. A way around this problem is to construct bootstrap replicates of the data \emph{before the Richardson-Lucy restoration}. Each bootstrap replicate is then restored with the R-L algorithm, and the SFR is obtained for each restoration. This set furnishes the mean and variance for the final SFR. The R-L algorithm is performed with a PSF built from the distribution of the observational absolute-magnitude errors. Figure \ref{lucy_sfr} shows the results: the different curves represent the SFR recovered after 5, 10, 15, 20, and 25 R-L iterations, respectively. In order to avoid artifacts, the restoration is stopped at the 25th iteration, when the bulk of the restoration is done (see the discussion in Cignoni \& Shore \cite{cign}). For comparison, Fig. \ref{lucy_sfr} (upper panel) shows the recovered SFR (labeled with 0) when our method is applied to the data without R-L restorations. \begin{figure} \centering \includegraphics[width=7cm]{5645fig26.eps}\\ \includegraphics[width=7cm]{5645fig27.eps} \caption{The SFR recovered from the Hipparcos sample. The comparison area involves all stars brighter than $M_V=3.5$. Different lines show the result after the labeled number of R-L iterations.} \label{lucy_sfr} \end{figure} The global effect of the restorations is small \footnote{This means that the uncertainties in the Hipparcos data (at the luminosities of our sample) are small. Cignoni \& Shore (2006) find that this reconstruction is especially useful when the uncertainties are much larger.} and is most evident around 2-3 Gyr. Beyond the 10th iteration, the solution is very stable; the only change is an increase of the estimated uncertainties because of noise amplification.
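The restoration-plus-bootstrap loop described above can be sketched in one dimension (magnitude axis only). This is a generic Richardson-Lucy iteration with a Gaussian stand-in for the Hipparcos magnitude-error PSF, not the actual code used in this work:

```python
import numpy as np

def richardson_lucy(data, psf, n_iter=15):
    """Generic 1-D Richardson-Lucy deconvolution of a binned histogram."""
    psf = psf / psf.sum()
    u = np.full_like(data, data.mean(), dtype=float)   # flat first guess
    for _ in range(n_iter):
        blurred = np.convolve(u, psf, mode="same")
        ratio = data / np.maximum(blurred, 1e-12)
        u *= np.convolve(ratio, psf[::-1], mode="same")
    return u

rng = np.random.default_rng(0)
true_cmd = np.zeros(60)
true_cmd[25] = 1000.0                                   # a sharp feature
psf = np.exp(-0.5 * (np.arange(-5, 6) / 1.5) ** 2)      # stand-in error PSF
observed = np.convolve(true_cmd, psf / psf.sum(), mode="same")

# Bootstrap: resample the observed counts, restore each replicate,
# then take the mean and spread of the restorations.
restored = [richardson_lucy(rng.poisson(observed).astype(float), psf)
            for _ in range(20)]
mean_rec = np.mean(restored, axis=0)
print("peak sharpened:", mean_rec[25] > observed[25])
```

Each Poisson replicate plays the role of one bootstrap CMD; the spread of the restored histograms across replicates gives the variance that is then propagated to the SFR.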
Before discussing the recovered SFR, it is necessary to recall that the implemented Nordstr\"om et al. AMR is uncertain for ages below 1.5 Gyr and above 7 Gyr, so the result for these ages could be unavoidably biased. Moreover, all the information (at $M_V<3.5$) on the SFR older than 7 Gyr comes from evolved stars, with the associated under-population problems. All these points will be discussed further below. For the moment, we describe the results for the SFR: \begin{itemize} \item {A bump in the time interval 10-12 Gyr;} \item {A modest activity in the time interval 7-10 Gyr;} \item {A steep increase from 2 Gyr to 6 Gyr;} \item {A modest activity during the last 1 Gyr.} \end{itemize} In order to check the robustness of our findings, we tried to recover the SFR by adopting a different time resolution. In particular, we tested a power-law stepping $\delta t = (1+\varepsilon)^n \times 0.5\,\mathrm{Gyr}$, with $\delta t$ the step duration, $\varepsilon$ a parameter, and $n$ a running index (integer, starting from 0). Figure \ref{nuova1} shows the recovered SFR for $\varepsilon=1$. \begin{figure} \centering \includegraphics[width=7cm]{5645fig28.eps}\\ \caption{The recovered SFR for a power-law step resolution (dashed line; $\varepsilon=1$, see text) is compared with the result of Fig. \ref{lucy_sfr}. Both SFRs are obtained after one R-L restoration.} \label{nuova1} \end{figure} The coarser time steps reduce the uncertainties, and the overall shape for ages below 8 Gyr is confirmed. As we will discuss later, the results for older ages are affected by many sources of uncertainty. In the following, the temporal resolution will be fixed to the prescription of section \ref{stepping}, and the number of R-L restorations is fixed at 15. \subsection{Warnings about the AMR} \label{nordmeth} Our SFR represents the most probable result provided that the model is not biased.
As already discussed, the observed AMR is chosen mainly because it arises from a very wide observational sample. However, it is still affected by three important biases: \begin{enumerate} \item it was built looking for F-G type stars. This selection was done by choosing stars between suitable blue and red color boundaries (by means of the $(b-y)$ Str\"omgren color, which is almost metallicity independent). However, as a consequence of the blue cut-off, the younger metal-poor stars are under-represented in the final AMR; \item Due to the observational errors, the stellar age determination becomes progressively more difficult the closer a star is to the zero-age main sequence (where the stellar tracks degenerate). Consequently, the Nordstr\"om AMR is poorly known for very young stars, and the AMR we used below 1 Gyr is an extrapolation; \item The age determination is also a problem for stars older than 8 Gyr, because at these ages the main sequence is populated by low-mass stars which evolve in a restricted region of the CMD. As a consequence, the ancient part of the AMR is given with very large uncertainties in the age (e.g. the presence of stars older than 13 Gyr). \end{enumerate} Although we have selected stars with a relative age uncertainty better than 25\%, the previous points lead to doubts about the recovered SFR for stars younger than 1 Gyr and older than 8 Gyr. In particular, the recovered activity during the last 1 Gyr is partially due to the way the AMR is parameterized in our model. Figure \ref{cubicflat} shows the effect of a different parameterization: the solid line is the resulting SFR if the adopted AMR is a polynomial fit (cubic) plus the dispersion, while the dashed line is the result when the dispersion alone is implemented. \begin{figure} \centering \includegraphics[width=7cm]{5645fig29.eps} \caption{Dependence of the result on a different parameterization of the observed AMR.
The dashed line indicates the SFR that is obtained implementing the metallicity spread of the Nordstr\"om et al. AMR. The heavy line indicates the resulting SFR when a cubic interpolation of the Nordstr\"om et al. data plus the same observed dispersion is adopted (see text).} \label{cubicflat} \end{figure} \subsection{Warning about the adopted completeness limit} Another point is the completeness limit $M_V=3.5$: in section 5 we showed that the information on the old SFR is strongly limited by the completeness limit, the full information being available only with a hypothetical sample complete up to $M_V=4.5$. With the $M_V=3.5$ cut-off, the only tracers of the star formation older than 7 Gyr (see Fig. \ref{grotte}) are red giants, clump stars and subgiants. For this reason, at ages older than 7 Gyr the recovered SFR is certainly undersampled (see Fig. \ref{unders}). Thus, the bump between 10 and 12 Gyr could be an artifact. \begin{figure} \centering \includegraphics[width=7cm]{5645fig30.eps} \caption{The highlighted region identifies the time interval where the recovered SFR is undersampled (because of the magnitude cut at $M_V=3.5$, the only tracers at these ages are red giant and helium clump stars).} \label{unders} \end{figure} Only a deeper volume-limited sample will provide a better understanding of the old SFR. \section{Kinematical selection} A genuine star formation rate should represent the number of stars born at each time in our volume; this condition can fail, for example, when: \begin{itemize} \item[1)] Old disk stars may have \emph{diffused} into a larger volume, so the old local SFR may be undersampled: stellar velocities are randomized through chance encounters with interstellar clouds, and the stars gain energy and increase their velocity dispersion. \item[2)] ``Hot'' populations may contaminate the sample. Thick disk and halo stars have kinematical properties that could have been fixed before the disk developed.
These stars sample a much larger volume than the disk stars and are only weakly represented in the solar neighborhood. \end{itemize} In these cases, the recovered SFR would be a mere census of the ages of the stars actually present in the solar neighborhood. In order to avoid thick disk/halo contamination and to check the amount of orbital diffusion of old disk stars, we have evaluated the Galactic velocity components for all stars in the sample and recovered the SFR for different kinematically selected subsamples. The Hipparcos mission measured proper motions which, together with the parallaxes, give tangential velocities $V_T$. For most of the stars in our sample, a measured radial velocity is available (from the SIMBAD database \footnote{The SIMBAD database is available at the following URL: http://simbad.u-strasbg.fr/Simbad}). With these data, we have computed the Galactic velocities $U$, $V$, and $W$ for more than 90\% of the stars in the sample, corrected for the solar motion relative to the Local Standard of Rest (LSR; $U_{\odot}=+10.0$ km/s, $V_{\odot}=+5.2$ km/s, $W_{\odot}=+7.2$ km/s according to Dehnen \& Binney \cite{deh}). Figure \ref{velox} shows the distribution of the $U$, $V$, and $W$ velocities for all stars in the sample with measured radial velocities. \begin{figure} \centering \includegraphics[width=7cm]{5645fig31.eps} \caption{Distribution of the $U$, $V$, and $W$ velocities (referred to the LSR) for all stars in the sample with a measured radial velocity and $M_{V}<3.5$. } \label{velox} \end{figure} In order to search for stars with thin disk properties we need a kinematic criterion. Table \ref{tab} summarizes recent results for the kinematic properties of the thin disk, thick disk and halo (values from Bensby et al. \cite{ben3}).
\begin{table} \centering \begin{tabular}{llcccr} \hline \hline\noalign{\smallskip} & $X$ & $\sigma_{\rm U}$ & $\sigma_{\rm V}$ & $\sigma_{\rm W}$ & $V_{\rm asym}$ \\ & & \multicolumn{4}{c}{---------- [km~s$^{-1}$] ----------} \\ \noalign{\smallskip} \hline\noalign{\smallskip} Thin disk & 0.90 & $~~35$ & 20 & 16 & $-15$ \\ Thick disk & 0.10 & $~~67$ & 38 & 35 & $-46$ \\ Halo & 0.0015 & $160$ & 90 & 90 & $-220$ \\ \hline \end{tabular}\smallskip \caption{Characteristic velocity dispersions ($\sigma_{\rm U}$, $\sigma_{\rm V}$, and $\sigma_{\rm W}$) of the thin disk, thick disk, and halo. $X$ is the estimated observed fraction of stars of the given population in the solar neighborhood, and $V_{\rm asym}$ is the asymmetric drift with respect to the LSR (values taken from Bensby et al. \cite{ben3}). } \label{tab} \end{table} Before applying the SFR extraction to the kinematically selected data, we have performed the same selection on the Nordstr\"om et al. age-metallicity relation (the AMR implemented in the SFR extraction code). The result is presented in Fig. \ref{fehk}: even though the $[Fe/H]$ dispersion decreases slightly for lower stellar velocities, it is still very high, and no trend is recognizable in the AMR. \begin{figure} \centering \includegraphics[width=7cm]{5645fig32.eps} \caption{Normalized $[Fe/H]$ distributions (data by Nordstr\"om et al. \cite{nord}) for stars with the labeled kinematic selection. } \label{fehk} \end{figure} For this reason, we adopted the same AMR used for the full sample without kinematic selection. This is related to the debate over the existence of a distinct chemical history for the thin disk and the thick disk. It is well known that high-velocity stars belong to more extended structures (thick disk and halo). Sandage (\cite{sand}) and Casertano et al.
(\cite{case}), in particular, used kinematics to trace the thick disk population, but it is much less obvious that these stars reveal an age-metallicity relation that can be distinguished from that of the disk. Metallicity distributions of the thick disk and thin disk do not allow for an unequivocal separation. Some authors (see e.g. Gilmore et al. \cite{gilwy}, Bensby et al. \cite{ben}) argue that the thick disk is a kinematically and chemically distinct Galactic component: Bensby et al. (\cite{ben}), for example, determine a specific AMR for it. A different conclusion is reached, e.g., by Norris \& Green (\cite{nor89}) and Norris \& Ryan (\cite{nor91}), who argue that the thick disk is the high-velocity-dispersion tail of the old disk. Figure \ref{lucy_sfr_kin} shows the recovered SFRs after 15 R-L restorations when the sample is kinematically selected. In particular, the result in the upper panel is found by selecting objects with Galactic velocities within the thin-disk velocity ellipse at $2\sigma$; the result in the lower panel is found for objects with velocities within the ellipse at $1\sigma$. The cut at $2\sigma$ excludes essentially all halo and thick disk objects. In this case (Fig. \ref{lucy_sfr_kin}, upper panel), the recovered SFR is almost identical to the one without any selection. One explanation is that the contribution of thick disk and halo stars, for the period 1-8 Gyr, is minimal. This result confirms the general finding that the thick disk, if it exists, seems older than the thin disk. For example, Fuhrman (\cite{fuh}) indicates 8 Gyr for the thick disk age, while Soubiran \& Girard (\cite{soub}) find 7 to 13 Gyr (with an average of $9.6\pm 0.3$ Gyr). In addition, the number density of local thick disk stars is a small fraction ($\sim 8\%$) of that of the thin disk stars. This result is confirmed by many works: Gilmore \& Reid (\cite{gil83}) and Chen (\cite{chen}) find 2\%, Robin et al. (\cite{rob96}) find 6\%, Soubiran et al.
(\cite{soub03}) find 15\%. In contrast, removing stars outside $1\sigma$ should exclude: \begin{itemize} \item{the low-velocity tails of the halo and thick disk populations;} \item{disk stars whose orbits explore large scale heights (200-300 pc);} \end{itemize} In this case, the recovered SFR (Fig. \ref{lucy_sfr_kin}, lower panel) has a slightly lower peak at 3 Gyr, while the activity in the last 1.5 Gyr is increased. However, the variations are within the statistical uncertainties (between 1 and 2 $\sigma$ of acceptance) and the global trend is preserved. \begin{figure} \centering \includegraphics[width=7cm]{5645fig33.eps}\\ \includegraphics[width=7cm]{5645fig34.eps} \caption{Solid line: the recovered SFR using the full sample. Dashed line: the SFR recovered from stars with Galactic velocities within the thin-disk velocity ellipse at $2\sigma$ (upper panel) and at $1\sigma$ (lower panel).} \label{lucy_sfr_kin} \end{figure} In conclusion, the recent SFR (last 6 Gyr) does not seem to suffer from significant dynamical diffusion. In this case, a correction for a possible disk depletion due to fast stars does not really matter: within our level of acceptance, the recovered SFR is a genuine local SFR and not a mere local age distribution. Because of the theoretical difficulties in reproducing the red clump stars (see the discussion in section 4), the analysis was repeated excluding all stars with $B-V>0.8$. In this case, because the excluded region involves stars of all ages, the recovered SFR (see Fig. \ref{cnc}) is slightly different at all ages (but still within the $1\,\sigma$ uncertainties), with the largest effect around 10 Gyr.
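The kinematic selection used in this section can be sketched as follows. The dispersions and asymmetric drift are the thin-disk values of Table \ref{tab}, and the solar motion is the Dehnen \& Binney one quoted above; the example star velocities and the sign convention for the LSR correction are illustrative assumptions:

```python
# Thin-disk velocity-ellipsoid cut (sketch). Dispersions and asymmetric
# drift from the Bensby et al. table; solar motion from Dehnen & Binney.
# Example velocities and the LSR sign convention are assumptions.
U_SUN, V_SUN, W_SUN = 10.0, 5.2, 7.2       # km/s, solar motion wrt LSR
SIG_U, SIG_V, SIG_W = 35.0, 20.0, 16.0     # km/s, thin-disk dispersions
V_ASYM = -15.0                             # km/s, thin-disk asymmetric drift

def within_thin_disk_ellipse(u, v, w, n_sigma=2.0):
    """True if the heliocentric (u, v, w), corrected to the LSR, falls
    inside the thin-disk velocity ellipsoid at n_sigma."""
    du = (u + U_SUN) / SIG_U
    dv = (v + V_SUN - V_ASYM) / SIG_V
    dw = (w + W_SUN) / SIG_W
    return du ** 2 + dv ** 2 + dw ** 2 <= n_sigma ** 2

print(within_thin_disk_ellipse(0.0, -20.0, 5.0))       # disk-like star
print(within_thin_disk_ellipse(-150.0, -200.0, 80.0))  # halo-like star
```

The $2\sigma$ cut keeps essentially all thin-disk stars while rejecting halo-like velocities; tightening to $1\sigma$ also removes the high-velocity tail of the disk itself.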
\begin{figure} \centering \includegraphics[width=7cm]{5645fig35.eps} \caption{The recovered SFR obtained from the full sample (solid line) and from a selection of stars with $B-V<0.8$ (dashed line).} \label{cnc} \end{figure} \section{Sensitivity to the adopted $(Z/X)_{\odot}$ value} Recent analyses of spectroscopic data using three-dimensional hydrodynamic atmosphere models (see Asplund, Grevesse \& Sauval \cite{aspl} and references therein) have reduced the derived abundances of CNO and other heavy elements with respect to previous estimates (Grevesse \& Sauval \cite{greve98}, GS98). Thus the solar $Z/X$ value decreases from the GS98 value $(Z/X)_{\odot}=0.0230$ to $(Z/X)_{\odot}=0.0165$. GS98 had already improved on the GN93 mixture, widely adopted in the literature ($(Z/X)_{\odot}=0.0245$), mainly by revising the CNO and Ne abundances and confirming the very good agreement between the new photospheric and meteoritic results for iron. As already discussed, our tracks are calculated for the GN93 solar mixture. A change of the heavy-element mixture can have two main effects: 1) a change of the theoretical tracks at fixed metallicity (but this has been shown to be negligible; see Degl'Innocenti, Prada Moroni \& Ricci \cite{ric}); 2) a variation of the metallicity inferred from the observed $[Fe/H]$. The latter could be important for our purposes due to the adoption of the observational age-$[Fe/H]$ relation. Figure \ref{asgre} compares the recovered SFRs obtained using the Asplund, Grevesse \& Sauval (\cite{aspl}) and the GN93 solar mixtures; the differences are within the statistical uncertainties.
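The size of the second effect can be gauged directly: at fixed $[Fe/H]$, the inferred $Z$ scales linearly with the adopted solar $Z/X$. A minimal sketch, using only the two calibrations quoted above:

```python
import math

# At fixed [Fe/H], the inferred Z scales with the adopted solar (Z/X);
# only the two calibrations quoted in the text are used here.
ZX_GN93 = 0.0245   # Grevesse & Noels 1993
ZX_AGS = 0.0165    # Asplund, Grevesse & Sauval

shift_dex = math.log10(ZX_AGS / ZX_GN93)
print(f"log Z shift at fixed [Fe/H]: {shift_dex:+.2f} dex")
```

A downward shift of $\sim$0.17 dex (roughly a one-third reduction in $Z$) is therefore the systematic change probed by Fig. \ref{asgre}.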
\begin{figure} \centering \includegraphics[width=7cm]{5645fig36.eps} \caption{Results for the recovered SFR obtained adopting the solar $Z/X$ value by Asplund, Grevesse \& Sauval (\cite{aspl}) (dashed line) and by GN93 (solid line).} \label{asgre} \end{figure} \section{Discussion and Prospects for Further Studies} \label{outlook} Our recovered SFR can now be compared with other recently published investigations. Bertelli \& Nasi (\cite{bert}), using a similar sample (Hipparcos stars within 50 pc and brighter than $M_V=4.5$) and a similar technique, found a local SFR that is independent of the chosen IMF, with the exception of low exponents (the value 1.3 is rejected). Figure \ref{confbert} shows that our derived SFR is consistent with those of Bertelli \& Nasi (\cite{bert}). The small discrepancies could arise from the different inputs of the two models: \begin{figure} \centering \includegraphics[width=7cm]{5645fig37.eps} \caption{Our recovered SFR (dotted line) rebinned for comparison with Bertelli \& Nasi (\cite{bert}) (they used two functionally different assumptions for the star formation history: model a adopts a discontinuous change between two constant intervals of SFR; model b adopts a linearly increasing/decreasing SFR with a discontinuity in the slope at some time).} \label{confbert} \end{figure} \begin{itemize} \item {The adopted evolutionary tracks: Bertelli \& Nasi used the Padua stellar evolutionary tracks (Girardi et al. \cite{gira}), which include overshooting with an efficiency of about $0.12\,H_P$ in the mass range $1.0\,M_{\odot}< M<1.4\,M_{\odot}$ and $\approx 0.25\,H_P$ for higher masses. Our code implements the Pisa stellar tracks (Cariulo et al. \cite{car}, Castellani et al. \cite{cast03}, Castellani, Degl'Innocenti \& Marconi \cite{cast99a}). Even if the red clump region is poorly reproduced by the Pisa stellar tracks, while the Padua tracks match it better, we find (see Fig.
\ref{cnc}) that clump and giant stars have a low impact on the recovered SFR for the last 6 Gyr.} \item{In the Bertelli \& Nasi model the stars are uniformly distributed in the metallicity range $0.008\,<\,Z\,<\,0.03$. In contrast, we adopted the observational AMR by Nordstr\"om et al. (\cite{nord}). Thus, their mean composition (solar) is metal-richer than ours (using Grevesse \& Noels \cite{greve}, the mean $[Fe/H]$ value $\sim -0.15$ corresponds to $Z\approx 0.012$).} \item{Bertelli and Nasi adopt between 30 and 70 percent of binaries (``decreasing from 70 percent for the more massive primaries to about 27 percent at the faint limit $M_V$=4.5''), while our models are without binaries. On the main sequence, the luminosity of a star depends on its mass, so a binary system can mimic a different mass (and a different age); we have already shown that a reasonable binary fraction has no influence on the results (see section \ref{Binaries - SFR degeneracy}); however, the large binary fraction introduced by these authors could lead to some differences.} \end{itemize} Schr\"oder \& Pagel (\cite{scho}) also used Hipparcos stars, within 100 pc and within 25 pc of the Galactic midplane. The SFR and the IMF are inferred by comparing the expected and observed numbers of stars in particular evolutionary phases (upper main sequence, clump, subgiants, etc.). These authors implemented the evolutionary tracks by Eggleton (\cite{egg}) for solar metallicity. The presence of different chemical compositions was taken into account by smearing the single-metallicity CMD with a Gaussian spread. Their result is a local SFR that slowly increases towards recent times. The authors explain this as an effect of the dilution of thin disk stars as they diffuse to larger scale heights by dynamical diffusion. In order to transform to a column-integrated SFR they adopted a dynamical diffusion timescale of about 6 Gyr.
The final result is only slightly different from the local SFR (except for the recent 1 Gyr, where the authors correct for radial mixing). \emph{In practice, this result seems to confirm our finding that the dynamical diffusion of orbits has a low impact up to 6 Gyr before the present.} Within a Bayesian methodology, Hernandez et al. (\cite{her2}) used an inversion method on the Hipparcos stars brighter than $M_V=3.15$, deriving the local SFR for the last 3 Gyr. The implemented evolutionary tracks are the Padua isochrones (Girardi et al. \cite{gira}) with $[Fe/H]=0$. Figure \ref{confher} shows our SFR against their findings for the last 3 Gyr (because of the lower temporal resolution of our SFR, we rebinned the higher-resolution SFR of Hernandez et al.). \begin{figure} \centering \includegraphics[width=7cm]{5645fig38.eps} \caption{Our recovered SFR (dotted line) compared with the Hernandez et al. (\cite{her2}) SFR (dashed line). Both SFRs are rebinned at 1 Gyr.} \label{confher} \end{figure} The two results are compatible, although our time resolution does not allow us to resolve the SFR behavior found by these authors (a cyclic pattern with a period of 0.5 Gyr). Considering that their sample is very similar to ours, the differences in the results can be ascribed to: \begin{itemize} \item{Hernandez et al. (\cite{her2}) implemented a solar value ($[Fe/H]=0$) without spread, while we have adopted the Nordstr\"om et al. age-metallicity relation;} \item{They used the Padua isochrones (Girardi et al. \cite{gira}), the same as Bertelli \& Nasi (\cite{bert});} \item{They implemented a power-law IMF with exponent 2.7 (steeper than our value 2.35).} \end{itemize} Moreover, because their technique is very different from ours, this agreement constitutes an independent verification of our method. Vergely et al. (\cite{verg}) used a similar inversion method.
These authors determined simultaneously the star formation history, the AMR and the IMF from the Hipparcos stars brighter than $V=8$. The authors adopt a much larger sample (not magnitude limited) and the AMR is not constrained. Their result is a column SFR. The surprising feature is the similarity between their result (not local), see Fig. \ref{verga}, and our SFR (local). In particular, their column-integrated SFR decreases with lookback time on a timescale of 4-5 Gyr, essentially the result we obtain. \begin{figure} \centering \includegraphics[width=7cm]{5645fig39.eps} \caption{Vergely et al. (\cite{verg}) recovered SFR (heavy dot-dashed line) compared with our result (dotted line). } \label{verga} \end{figure} This supports our result and can mean that: \begin{itemize} \item the local stellar population is not depleted in the past, but the derived SFR represents a genuinely lower activity; \item no significant dynamical diffusion has taken place on a time scale of 4-5 Gyr. \end{itemize} Rocha-Pinto et al. (\cite{roch}) provided an SFR based on chromospheric emission ages for a sample of solar-like stars within 80 pc. Their result shows enhanced SFR episodes at 0-1 Gyr and 2-5 Gyr, which are roughly consistent with our result, and at 7-9 Gyr (although the latter could be a spurious effect due to the low chromospheric emission at these ages). These authors also find that the effect of dynamical orbit diffusion is not severe and does not affect the general trend of the SFR. In conclusion, our result seems to represent a realistic SFR of the solar neighborhood. The recovered SFR is quite independent of the kinematical selections, suggesting that all the stellar generations (in the last 6 Gyr) are well represented and stars have not diffused into a larger volume. The SFR is consistent with other studies based on similar samples and different techniques. \emph{The result that our local SFR is close to the column SFR (Vergely et al.
\cite{verg}) seems to indicate that our result is not local and may be valid for the whole disk.} Having checked that dynamical diffusion has not been very efficient in the last 5-6 Gyr and that the internal assumptions of the model (IMF, binaries, adopted solar mixture) have a low impact on the result, we can discuss the physical implications of our results. The SFR obtained in the present work is concentrated in the recent 4 Gyr, a timescale longer than the Galactic disk rotation period ($<1$ Gyr). This result essentially rules out the possibility that this phenomenon is local, suggesting a Galactic-scale triggering event. It is difficult to explain our result if the Galactic disk is a ``closed box'' (see e.g. Van den Bergh \cite{van}, Schmidt \cite{schm}): in this case the resulting SFR would be \emph{decreasing} from the disk formation to the present (in opposition to our result), following the normal exhaustion of the gas content and an increased production of inert remnants. Even if the disk is periodically refilled with gas, our result is difficult to explain: the resulting SFR would be nearly \emph{constant} in time (unless the infall is huge, but in this case the age-metallicity relation would change relative to the observational evidence; e.g. Valle et al. \cite{valle}). Thus, the recovered SFR seems to indicate some kind of \emph{induced event}, for example the accretion of a satellite galaxy. However, the tracks left by such an intruder should be recognizable in the age-metallicity relation, while the survey of Nordstr\"om et al. (\cite{nord}) shows practically no change in mean metallicity from 1 to 12 Gyr. An accretion should also be evident from the analysis of the kinematical properties of stars in different age bins, but the methods to obtain stellar ages are still affected by large errors (see discussion in section \ref{nordmeth}).
Much larger surveys of stellar ages and metallicities as a function of galactocentric distance and kinematics are needed to test our hypothesis: comparing results from different regions of the disk could make clear whether the recovered event is really a global one. \section{Conclusions} We have used a selection of the Hipparcos stars to recover the local SFR. The analysis is restricted to the stars within 80 pc and brighter than $M_V=3.5$. Numerical experiments with artificial CMDs show that, at these luminosities, neither the IMF nor the binary fraction is a critical input, while the ability to recover the SFR is strongly influenced by the adopted age-metallicity relation. In particular, this result was checked assuming the observational AMR for the solar neighborhood by Nordstr\"om et al. (\cite{nord}): the simulations with artificial CMDs indicate that most of the information about the underlying SFR is still recoverable. Finally, we applied the algorithm to real Hipparcos data. In contrast to the artificial CMDs, the first problem was the presence of observational uncertainties (due to photometric and parallax errors). To take these uncertainties into account, we applied the Richardson-Lucy technique introduced in Cignoni \& Shore (\cite{cign}) to the data, cleaning the Hipparcos CMD of the observational errors. Then, assuming the observational AMR by Nordstr\"om et al. (\cite{nord}), we found the most probable SFR for our sample. The result indicates that \emph{the recent local SF history of the Galactic disk is increasing from the past to the present with some irregularities}. The mean value increases very steeply from 6-7 Gyr ago up to 2 Gyr ago, in a way qualitatively similar to the findings of Vergely et al. (\cite{verg}) and Bertelli \& Nasi (\cite{bert}).
In particular, this result is quite independent of kinematic selections, suggesting that: \begin{enumerate} \item The local contamination by halo and thick disk stars is negligible in the last 6 Gyr and/or these populations are older than 6 Gyr; \item In the last 5-6 Gyr, all the stellar generations are well sampled; in other words, the recovered local SFR seems not to be biased by dynamical diffusion and the local volume is not ``depleted'' of old disk stars. \end{enumerate} The timescale of the recovered SFR seems too long (larger than the dynamical timescale) to be attributed to local events: the accretion of a satellite galaxy is suspected. \begin{acknowledgements} We warmly thank C. Chiosi and J. K\"oppen for their very useful suggestions regarding the PhD thesis by M. Cignoni, and M. Bertero, G. Bono, P. Franco, M. Martos, and G. Valle for discussions. Financial support for this work was provided by the National Institute of Astrophysics (INAF). We dedicate this paper to the memory of Prof. Vittorio Castellani. \end{acknowledgements}
\section{Introduction} \label{sec:intro} Two ways are currently known to form planetesimals in the solar nebula or, more generally, in a protoplanetary disk. Either pebbles grow via sticking collisions to larger and larger bodies, which can probably only be achieved for very fluffy ice grains, since otherwise fragmentation, bouncing, and radial drift limit pebble sizes to just a few mm \citep[e.g.][]{Birnstiel2012, Kataoka2013}. Or, alternatively, self-gravity forces an entire cloud of pebbles to contract into planetesimals, as originally pointed out by \citet{Safronov1969} and \cite{GoldreichWard1973}. \corrected{\citet{Weidenschilling1980} interjected that the pebble-gas interaction would lead to turbulent diffusion, rendering the densities necessary for gravitational instability unattainable.} Meanwhile, we understand that gas turbulence \corrected{merely} regulates the onset of gravitational collapse by controlling both the pebble sizes and the dust-to-gas ratio in the settled pebble layer around the mid-plane of the disk \citep{Estrada2016, drazkowska2016, drazkowska2017, Lenz2019, Gerbig2019, Stammler2019}. Magnetohydrodynamical and hydrodynamical gas turbulence is in all cases needed to locally concentrate pebbles in a disk, be it by trapping pebbles in flow features like \corrected{in-plane horizontal vortices \citep{BargeSommeria1995}, convection-like vertical cells \citep{KlahrHenning1997} and zonal flows (aka pressure bumps) \citep{Whipple1973}, since the typical dust-to-gas ratio in the solar nebula is too low for gravitational collapse in the presence of the Kelvin-Helmholtz (KHI) and streaming instability (SI) \citep{Johansen2009, Carrera2017, Gerbig2020}.
See \citet{Klahr2018} for a review on the role of turbulence and flow structures for planetesimal formation.} Starting from a mild concentration of pebbles in a pressure bump by a factor of a few (defined as a vertically integrated dust-to-gas ratio of $Z = \SigmaDust/\SigmaGas \approx$ 0.02 - 0.03), a gravitationally unstable pebble cloud (local dust-to-gas ratio of $\varepsilon = \rhoDust/\rhoGas = 10 - 100$ in the solar nebula \citep{KlahrSchreiber2020a}) can then be created by turbulent clustering \citep{Cuzzi2008, Cuzzi2010, HartlepCuzzi2020}, by sedimentation and SI without \citep{Youdin2005,Johansen2009, Gerbig2020} or with additional concentration in zonal flows and vortices \citep{JohansenKlahrHenning2006, 2007Natur.448.1022J, Carrera2020}. What all these gravitational collapse models have in common is that the job is not finished when the collapsing pebble cloud reaches the Hill density $\rho_{\rm Hill}$, i.e., when tidal forces from the central star of mass $M$ can no longer shear apart the pebble cloud orbiting at distance $R$, \begin{equation} \rho_{\rm Hill} = \frac{9}{4 \pi}\frac{M}{R^3}, \label{HillDens} \end{equation}% \corrected{but only when the pebble accumulation reaches the solid density of a comet or an asteroid, which, depending on the distance to the star, is $10^6$ to $10^{12}$ times larger than the Hill density.} \corrected{As mentioned above, pebbles at Hill density correspond to a dust-to-gas ratio of 10 - 100 in the solar nebula at early times \citep{Lenz2020,KlahrSchreiber2020a}, and even though the solids locally dominate the dynamics, the presence of gas can still hamper the gravitational contraction, as discussed by \citet{Cuzzi2008}. There are two limiting factors that determine the fate of the contracting clump.} Specifically, a pebble cloud could experience erosion by the head wind or internal turbulent diffusion.
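The contrast between Hill density and solid density quoted above can be checked with a short back-of-the-envelope script. The sketch below (cgs units; the solar mass and the fiducial solid density of $1\,{\rm g\,cm^{-3}}$ are assumptions for illustration) evaluates Equation \ref{HillDens} at a few heliocentric distances:

```python
# Order-of-magnitude check of the Hill density, rho_Hill = 9 M / (4 pi R^3).
# The solar mass and rho_solid = 1 g/cm^3 are assumed fiducial values.
import math

M_SUN = 1.989e33   # g
AU = 1.496e13      # cm

def rho_hill(M=M_SUN, R_au=1.0):
    """Hill density (g/cm^3) at heliocentric distance R (in au)."""
    R = R_au * AU
    return 9.0 * M / (4.0 * math.pi * R**3)

rho_solid = 1.0  # g/cm^3, typical comet/asteroid bulk density (assumed)
for R_au in (1.0, 10.0, 100.0):
    r = rho_hill(R_au=R_au)
    print(f"R = {R_au:6.1f} au: rho_Hill = {r:.2e} g/cm^3, "
          f"rho_solid/rho_Hill = {rho_solid / r:.1e}")
```

With these numbers the ratio of solid to Hill density runs from a few $10^6$ at 1 au to a few $10^{12}$ at 100 au, consistent with the range quoted in the text.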
\citet{Cuzzi2008} argue that turbulent diffusion is typically weaker than the ram pressure from the head wind and therefore neglect the effect in their further studies \citep{Cuzzi2010, HartlepCuzzi2020}. Yet, in order to efficiently form planetesimals at the desired small sizes of 10 - 100 km, \citet{HartlepCuzzi2020} require a significant reduction of the headwind, up to a factor of 30 in a zonal flow. As such, we argue in \citet{KlahrSchreiber2020a} that internal diffusion cannot be neglected: if the headwind is reduced by a factor of 30, for instance in a zonal flow, then diffusion automatically becomes the limiting factor. \corrected{\citet{KlahrSchreiber2020a} compared the time scale for contraction at Hill density under self-gravity with the turbulent diffusion timescale and derived a critical length $\lcrit$ above which diffusion would be slower than contraction. Based on that paradigm they introduced a critical mass $m_{c}$, i.e.\ the mass of a sphere of radius $\lcrit$ at Hill density needed for gravity to overcome the diffusion of pebbles, with pebble size represented by the Stokes number $\stokes$, for a normalized diffusivity $\delta$:} \begin{equation} m_{c} = \frac{4 \pi}{3} \lcrit^3 \rho_{\rm Hill} = \frac{1}{9} \left(\frac{\delta}{\St}\right)^{\frac{3}{2}} \left(\frac{H}{R}\right)^3 M_\sun. \label{eq:mass} \end{equation} \corrected{$H/R$ is the relative pressure scale height of the protoplanetary disk, reflecting the local gas temperature. The Stokes number is the friction time (or coupling time) of pebbles $\tauS$ \citep{Weidenschilling1977} multiplied by the orbital angular velocity $\Omega$, i.e., $\St = \tauS \Omega$.
It quantifies how well the particles are coupled to the gas and thus how quickly they sediment to the midplane \citep{Dubrulle1995}, drift towards the star \citep[e.g.,][]{Nakagawa1986}, how well they drive instabilities \citep[e.g.,][]{SquireHopkins2018} and how they couple to turbulence \citep{2007Natur.448.1022J}.} \corrected{For a given Stokes number, relative pressure scale height $H/R$, stellar mass $M_\sun$ and normalised strength of the SI, which means removing the actual gas disk profile from the equations, there is no explicit dependence of the critical mass on the distance to the star left in the expression. The reason is that the Hill density drops as $R^{-3}$ with distance to the star, while at the same time the volume of the critical pebble cloud scales as $R^3$ (for constant $H/R$). Thus a dependence of mass on $R$ comes only from the radial profiles of $H(R)/R$, $\delta(R)$ and $\stokes(R)$.} \corrected{The diffusivity $\delta$ generated by the SI appears to scale proportionally with the Stokes number and inversely with the mean dust-to-gas ratios we have to consider here \citep{Schreiber2018,KlahrSchreiber2020a}: \begin{equation} \delta \approx \delta_0 \frac{10}{1 + \varepsilon_{\rm Hill}}\frac{\St}{0.1} \end{equation} so $\stokes$ possibly cancels from the mass prediction (Equation\ \ref{eq:mass}). Thus, ultimately, the pebble-to-gas ratio at Hill density is left as the dominant effect for planetesimal sizes:} \begin{equation} m_{c} \approx 100 \left(\frac{\delta_0}{\varepsilon_{\rm Hill}}\right)^{\frac{3}{2}} \left(\frac{H}{R}\right)^3 M_\sun.
\label{eq:mass_simple} \end{equation} \corrected{Further studies of pebble diffusivity in relation to disk structure, and especially to the pebble size distribution \citep{Schaffer2018}, for a range of pebble-to-gas ratios are therefore needed to further constrain the critical masses for planetesimal formation.} \corrected{As in related work \citep{Nesvorny2010, WahlbergJansson2014}, we represent the mass of a pebble cloud by its equivalent (compressed) diameter; that is, if we compress a cloud of mass $m_c$ from Hill density to solid density $\rhoSolid \approx 1 {\rm g cm^{-3}}$, it has a new diameter $a_c$. The actual range of planetesimal average density may fall between $\rhoSolid \approx 0.5 {\rm g cm^{-3}}$ for comets and $\rhoSolid \approx 2 {\rm g cm^{-3}}$ for some of the asteroids, but we neglect this effect for now, as we make an order-of-magnitude estimate and only $\rhoSolid^{-1/3}$ enters the expression for the compressed size.} \corrected{In \citet{KlahrSchreiber2020a} we find equivalent sizes of gravitationally unstable pebble clouds that range from $ a_c = 60 - 120$ km in a model for the early stages of the solar nebula \citep{Lenz2020}. This size range reflects the varying pebble-to-gas ratio at Hill density for the gas profile of the nebula and the local $H/R$, i.e.\ the temperature profile of the gas.
Both $\varepsilon_{\rm Hill}$ and $H/R$ have a radial profile, yet in effect they can balance each other out in terms of controlling planetesimal masses.} The critical length $\lcrit$ in Equation\ \ref{eq:mass} is \corrected{not only} the minimum radius for a cloud of pebbles with Stokes number $\St$, at Hill density and in the presence of turbulent diffusion of strength $\delta$ acting on this length scale, to collapse, i.e., \begin{equation} \label{eq:collapseCrit} \lcrit = \frac{1}{3} \sqrt{\frac{\delta}{\stokes}} H. \end{equation} Simultaneously, $\lcrit$ is also the scale height of the particle layer if it reaches Hill density \corrected{as its peak value} \citep{KlahrSchreiber2020a, Gerbig2020} and, as we will show in this paper, also the characteristic radius of a Bonnor-Ebert solution for a pebble cloud with a central density of $\rhoHill$. In \citet{KlahrSchreiber2020a}, the collapse criterion was derived under the assumption that turbulent diffusion acts isotropically in all directions. Subsequently we tested the criterion in two-dimensional simulations of the SI. Before turning on self-gravity, we measured the diffusion for different Stokes numbers, radial pressure gradients and different box sizes, i.e.\ different mass quantities of pebbles in the simulation domain at the same dust-to-gas ratio while remaining at Hill density. With the measured diffusion, we then predicted which simulations should gravitationally collapse and which ones should stay stable. In all cases, the prediction that the simulation domain $L$ has to be \konsti{larger} than $2 l_{\mathrm{c}}$ (from Equation \ref{eq:collapseCrit}) for collapse to proceed was satisfied. Roughly speaking, a sphere of radius $l_c$ would have to fit into our simulation domain to allow for collapse.
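As a numerical illustration of Equation \ref{eq:mass} and the equivalent compressed diameter, the following sketch uses assumed fiducial values ($\delta = 10^{-6}$, $\stokes = 0.1$, $H/R = 0.05$, $\rhoSolid = 1\,{\rm g\,cm^{-3}}$), not the specific nebula model of \citet{Lenz2020}:

```python
# Sketch: critical mass m_c = (1/9) (delta/St)^(3/2) (H/R)^3 M_sun and the
# equivalent compressed diameter a_c.  All parameter values below are
# assumed fiducial numbers for illustration only.
import math

M_SUN = 1.989e33       # g
delta = 1.0e-6         # small-scale diffusivity (assumed)
St = 0.1               # Stokes number (assumed)
h_over_r = 0.05        # relative pressure scale height H/R (assumed)
rho_solid = 1.0        # g/cm^3 (assumed)

# Critical mass of the pebble cloud.
m_c = (1.0 / 9.0) * (delta / St) ** 1.5 * h_over_r**3 * M_SUN

# Compress m_c to solid density; a_c is the diameter of the resulting sphere.
a_c = (6.0 * m_c / (math.pi * rho_solid)) ** (1.0 / 3.0)  # cm

print(f"m_c = {m_c:.2e} g, a_c = {a_c / 1e5:.0f} km")
```

With these assumed numbers the critical cloud compresses to a body of roughly 120 km diameter, at the upper end of the 60 - 120 km range quoted above.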
However, all simulations \konsti{in \citet{KlahrSchreiber2020a}} were two-dimensional, and radial and vertical diffusion are known to have unequal relative strengths if driven by the SI \citep{JohansenYoudin2007, Schreiber2018}. Thus, in the present paper, we study the SI in a three-dimensional box, measure radial and vertical diffusion and then switch on self-gravity to check for gravitational collapse for different total mass contents (pebbles plus gas) in the box. \corrected{\citet{Li2019} present the highest-resolution study in a line of papers that determine the size distribution of planetesimals formed via self-gravity in the presence of the streaming instability \citep{Johansen2015,Simon2016,Simon2017,Abod2018}. In at least one of their simulations, using rather large pebbles ($\stokes = 2$), the binned size distribution shows a maximum for objects with a diameter of 100 km. In our interpretation, this turn-over in the size distribution should reflect the critical pebble mass needed for gravitational collapse in the presence of turbulent diffusion. Unfortunately, the strength of diffusion was not determined in those runs. Also, despite the huge resolution in \citet{Li2019}, our boxes are still 25 times higher in resolution, which may be important for resolving the critical length scales sufficiently. In that context we can interpret our numerical experiments as a zoom-in on the densest regions in \citet{Li2019}, to study whether we can explain the turn-over via turbulent diffusion.} In Section \ref{sec:2}, we discuss several concepts necessary for the interpretation of our numerical simulations of self-gravitating pebble clouds: the scale height of the pebble layer subject to diffusion, and the relation and difference between the Toomre and Hill stability criteria for our turbulent pebble cloud. In Section \ref{sec:3}, we present our numerical 3D SI simulations. We show a set of five different simulations, of which only the first is without self-gravity.
In the following four simulations self-gravity is switched on and we increase the total mass in the domain for both pebbles and gas, thereby maintaining the dust-to-gas ratio and thus the potential strength of the SI. To test our baseline assumption of deriving a collapse criterion for a sphere of constant density, we also interpret our simulation with a centrally peaked pebble cloud in Section \ref{subsec:3-4}. There we derive a centrally peaked Bonnor-Ebert solution for the density distribution in a pebble cloud, in which now $l_{\mathrm{c}}$ determines the critical radius of a sphere with central density $\rho_{\mathrm{c}} = \rhoHill$. We also discuss the effect of anisotropic diffusion, which actually creates a Bonnor-Ebert ellipsoid. \corrected{We summarise our results in Section \ref{sec:4}, where we also compare them to our two-dimensional studies \citep{KlahrSchreiber2020a} and our large-scale simulations on the onset of planetesimal formation \citep{Gerbig2020}}. \correctedn{In Appendix A we reiterate the scale height of pebbles under self-gravity and diffusion. In Appendix B we introduce our new concept of diffusive pressure, i.e.\ the treatment of diffusion in the momentum equation rather than in the continuity equation. Thus (angular-) momentum conservation is automatically achieved in our analysis, which we discuss in the context of the secular gravitational instability. As this diffusive pressure leads to a formal speed of sound for the pebbles, we discuss the implications of that concept in Appendix C.} \begin{deluxetable*}{lll}[tb!]
\tabletypesize{\footnotesize} \tablecaption{Used symbols and quantities:\label{tab:usedSym}} \tablehead{\colhead{Symbol} & \colhead{Definition }& \colhead{Description}} \startdata $R$,$M_\odot$ & & heliocentric distance, solar mass \\ $m_c$ & & critical mass of unstable pebble cloud\\ $a_c$ & $a_c \sim m_c^{1/3}$ & equivalent compressed diameter \\ $\Omega$, $\Torb$ & $\Torb = 2\pi/\Omega$ & orbital frequency, orbital period \\ $t$ & & time in orbital periods \\ $G, \Gmod$ & & gravity constant, resp. in code units \\ $\rho, \rho_0 $ & $\rho_0 = \rho(t=0) = \langle\rho\rangle$ & local and initial (mean) pebble density \\ $\rhoGas$ & & gas density \\ $\cs,\vth,\vthoneD$ & $\cs = \vthoneD = \sqrt{\frac{\pi}{8}}\vth$ & isothermal sound speed, and 1D and 3D thermal speed (gas) \\ $\rhoHill$ & $\rhoHill=9M/4\pi R^3$ & Hill density \\ $\rhoSolid $ & & solid body density \\ $\aDust $ & & pebble radius \\ $\tauS$ & $\tauS = \frac{\aDust \rhoSolid}{\rhoGas \vth}$ & stopping/friction time of pebbles \\ $\stokes$ & $\tauS\Omega$ & Stokes number \\ $\tauFF$ & $\tauFF = \sqrt{\frac{3 \pi}{32 G \rho}}$ & free-fall time for density $\rho$ \\ $\tauC$ & $\tauC = \tauFF \left(1 + \frac{8 \tauFF}{3 \pi^2 \tauS}\right)$ & contraction time (incl.\ friction) \\ $\tau_\mathrm{t}$ && correlation time of turbulence\\ $\mathbf{u}$, $\mathbf{v}$ & & gas and dust velocity\\ $\scaleheight$ & $\scaleheight=\cs/\Omega$ & gas disk scale height \\ $\varepsilon $ & $\varepsilon=\rho/\rhoGas$ & local dust-to-gas density ratio \\ $Z$ & $Z=\SigmaDust/\SigmaGas$ & dust-to-gas surface density ratio \\ $\varepsilon_\mathrm{max}$,$\varepsilon_0$ & & maximum and initial dust-to-gas ratio (simulation) \\ $\varepsilon_\mathrm{Hill}$ & & dust-to-gas ratio at reaching Hill density \\ $\rho_c$ & & central density in a pebble layer or a Bonnor-Ebert sphere \\ $\rho_\circ$ & & density at the surface of a Bonnor-Ebert sphere \\ $f$ & $f = \frac{\rho_0}{\rhoHill}$ & initial pebble density in simulation \\ $L$ & $L = 0.001 H$ &
simulation domain size \\ $\nu,\alpha$ & $\nu = \alpha \cs \scaleheight$ & global viscosity / diffusion coefficient \\ $D$,$\delta_{(x,z)}$ & $D =\delta \cs \scaleheight$ & local / small-scale (anisotropic) diffusion coefficient \\ $h_p$ &$h_p = \sqrt{\frac{\delta}{\stokes}} H$ & pebble scale height without self-gravity \\ $a$ & $a = \sqrt{\frac{\delta}{\stokes}}c_s$ & pseudo sound speed of pebbles under diffusion \\ $P$ & $ P = a^2 \rho $ & pseudo pressure for pebbles under diffusion \\ $\lcrit, \lcritx, \lcritz$ &$\lcrit = \frac{1}{3} \sqrt{\frac{\delta}{\stokes}} H$ & critical length / scale height for $\rho_c = \rhoHill$\\ $\hlcrit$ &$\hlcrit = \frac{1}{3} \sqrt{\frac{\delta}{\stokes}}\sqrt{\frac{\rho_{\rm Hill}}{\rho_c}} H$ & same for $\rho_c > \rhoHill$\\ $Q_g, Q_p$ & $Q_p = \sqrt{\frac{\delta}{\stokes}} \frac{Q_g}{Z}$ & Toomre parameter for gas and pebbles\\ $\lambda_{\rm fgm, min, max}$ & & fastest, smallest and largest Toomre wavelengths\\ $\omega$,$\gamma$ & & frequency, growth rates of plane waves\\ $\lambda_{\rm Jeans}$ & & ``Jeans'' wavelength \\ $M_{\rm Jeans}$ & & mass of marginally stable Bonnor-Ebert sphere\\ $\eta, \beta$ & & pressure gradient parameters \enddata \end{deluxetable*} We will follow the notation in Tab. \ref{tab:usedSym} throughout this paper. \section{Self-gravity of particle layers} \label{sec:2} \citet{Safronov1969} and \cite{GoldreichWard1973} considered the gravitational stability of a particle layer in the solar nebula for the case that the gas can be ignored, and derived dispersion relations and the probable planetesimal masses resulting from gravitational fragmentation. Yet, as shown by \citet{Weidenschilling1980}, the interaction with the gas cannot be neglected, as it can drive turbulence. Turbulent diffusion limits sedimentation and thus appears to prevent the necessary concentration of pebbles for self-gravity to become important. But this is only true if one considers turbulence to be a strictly diffusive process.
As we know today, turbulence also concentrates material, either as part of a particle-gas instability \citep{Youdin2005}, via turbulent clustering \citep{Cuzzi2008}, or through trapping in non-laminar flow features \citep{Whipple1973,BargeSommeria1995,KlahrHenning1997}. \citet{Sekiya1983} included gas in the gravitational stability analysis of the particle layer, but he considered a closely coupled dust and gas system, effectively $\stokes \ll 1$. Finite coupling times were introduced to study a secular gravitational instability \citep{Ward1976,Ward2000,Coradini1981, Youdin2005} in which particle rings contract radially, thereby losing excess angular momentum due to friction with the gas \citep{ChiangYoudin2010}. \corrected{In those studies one considers the motion of the pebble swarm at its terminal velocity with respect to the gas, as a consequence of the rotational profile of the nebula \citep{Sekiya1983} and the mutual gravity of the pebbles.} \corrected{Diffusion of pebbles via turbulence has also been added to these studies \citep{Youdin2011} by explicitly adding a diffusion term to the mass transport of pebbles. Recently \citet{Tominaga2019} showed that the diffusive pebble flux should also be treated in the momentum equation to ensure angular momentum conservation.} \corrected{What we do differently in our stability analysis is to treat the diffusion of particles via turbulent mixing in the momentum equation instead of the continuity equation. As derived in appendix \ref{sec:B}, we define the pebble velocity in the continuity equation as the sum of the advective and diffusive fluxes. Rewriting the momentum equation in terms of this new pebble velocity introduces a source term that formally looks like the gradient of a pebble pressure $P$:} \begin{equation} \partial_t \left(\rho \mathbf{v}\right) = - \frac{D}{\tau} \mathbf{\nabla} \rho := -\mathbf{\nabla} P.
\label{eq:Danda} \end{equation} \corrected{The square of the formal speed of sound $a$ related to this pressure gradient is the diffusivity $D$ divided by the stopping time $\tauS$,} \begin{equation} a^2 = \frac{D}{\tauS}, \label{eq:A2anda} \end{equation} \corrected{and not the actual r.m.s.\ velocity of the particles $v_\mathrm{r.m.s.}$, which is proportional to the diffusivity divided by the correlation time of turbulence \citep{YoudinLithwick2007}:} \begin{equation} v_\mathrm{r.m.s.}^2 = \frac{D}{\tau_\mathrm{t}}. \label{eq:YLrms} \end{equation} \corrected{Our derivation uses a balance between diffusion and sedimentation via the momentum equation. This is common practice, for example, when calculating the scale height of the particle layer in the midplane of a turbulent disk.} \corrected{We assume the gas to be turbulent, yet incompressible, which holds even during the gravitational contraction of the pebble cloud, as shown in \citet{KlahrSchreiber2020a}. The particles can move with respect to the local gas velocity fluctuations, which on average are zero. Thus, to first-order approximation, particles sediment and contract with respect to the gas at rest and are diffused by turbulence, which for the evolution of the pebble distribution $\rho$ acts as a gradient of the pebble pressure $P = a^2 \rho$, with the pebble speed of sound $a$ being a fraction of the gas sound speed $\cs$:} \begin{equation} a = \sqrt{\frac{\delta}{\stokes + \delta}} \cs \approx \sqrt{\frac{\delta}{\stokes}} \cs. \label{eq:asound} \end{equation} \corrected{Such a relation between friction $\tauS$, diffusion $D$ and a ``thermal'' velocity $a$ is not new.
It is the same derivation as that of the diffusivity $D_\mathrm{B}$ for a particle of mass $m$ under Brownian motion at temperature $T$ and with the friction parameter $\mathrm{f} = m / \tauS$, found by \citet{Einstein1905}, also based on an equilibrium of diffusion and sedimentation under gravity:} \begin{equation} D_\mathrm{B} = \frac{\mathrm{k} T}{\mathrm{f}} = \tauS \vthoneD^2. \label{eq:Einstein} \end{equation} \corrected{Here $\mathrm{k}$ is the Boltzmann constant and $\vthoneD = \sqrt{\frac{\mathrm{k} T}{m}}$ the one-dimensional thermal velocity of the particle. For a gas, the one-dimensional thermal velocity is also the isothermal speed of sound, $\vthoneD = \cs$. We can thus associate the turbulent gas in the astrophysical environment with the heat bath that drives Brownian motion.} \correctedn{Using expression \ref{eq:YLrms} we can now also relate the r.m.s.\ speed of pebbles $v_\mathrm{r.m.s.}$ to the pseudo sound speed $a$ of the pebbles as} \begin{equation} a^2 = v_\mathrm{r.m.s.}^2 \frac{\tau_\mathrm{t}}{\tau}. \label{eq:a_v_relation} \end{equation} \correctedn{For pebbles with $ \frac{\tau_\mathrm{t}}{\tau} = 1$, the r.m.s.\ speed and the pebble sound speed are identical. For larger pebbles the sound speed decreases, and for smaller pebbles it increases. The latter case is usually limited by the compressibility of the gas and capped at the gas speed of sound, as done by \citet{Dubrulle1995}.
} \corrected{Equation \ref{eq:asound} can easily be transformed into the expression for the thickness of the pebble layer $h_p$ of \citet{Dubrulle1995} by dividing both sides by $\Omega$ and using $c_s = H \Omega$}; it follows: \begin{equation} h_\mathrm{p} := \frac{a}{\Omega} = \sqrt{\frac{\delta}{\stokes + \delta}} H, \label{eq:Dubrulle} \end{equation} \corrected{which will be a handy expression for this paper.} \correctedn{In appendix \ref{sec:C} we show that $a$ is in fact the speed of sound of wave-like perturbations of pebbles under diffusion, but for physically realistic wave numbers those waves are critically damped in less than one oscillation period. Only for unphysically short wavelengths, where the diffusion description would break down, can one mathematically derive oscillatory solutions.} While the SI is just one possible origin of local gas turbulence, it is the easiest to study in small boxes covering a fraction of the gas pressure scale height, and it requires fewer assumptions than introducing additional external $\alpha$ turbulence stemming from large scales, as recently done by \citet{Gole2020}. Additionally, the SI dominates on the collapse scales of pebble clouds \citep{KlahrSchreiber2020a} and is therefore ideally suited for our investigation. \corrected{We thus distinguish between the large-scale turbulence $\alpha$, introduced by \citet{ShakuraSunyaev1973} to parametrise angular momentum transport, which may stem from magnetohydrodynamic instabilities \citep{BalbusHawley1998} or hydrodynamic instabilities \citep{KlahrBodenheimer2003,Nelson2013,Marcus2016} that seem to be relevant in protoplanetary disks \citep{Pfeil2019}.
Even though $\alpha$ may also have a non-turbulent wind component \citep{Bai2013,Bethune2017}, for lack of better knowledge $\alpha$ is also assumed to drive the global diffusion and pebble collisions that determine the conditions for planetesimal formation in terms of dust-to-gas ratio and Stokes number from the large scales \citep{Schaffer2018,Gerbig2020}. On the other hand, we define $\delta$ as the local small-scale diffusivity on the scales of pebble cloud collapse. At large scales, $L \sim H$, the assumed $\alpha$ is typically orders of magnitude larger than $\delta$, but once $\alpha$ is cascaded down \citep{Kolmogorov1941} to the scales relevant to form a planetesimal of less than 100 km from a pebble cloud at Hill density, $L \sim 0.001 H$, then $\delta$ is predominantly produced locally by the SI and other resonant drag instabilities \citep{Squire2018}.} As pointed out by \citet{Gerbig2020}, even in the absence of global $\alpha$ turbulence the SI is not the only effect setting the vertical scale of the particle mid-plane layer, but at the expected high dust-to-gas ratios and small scales it is, at first guess, the most important one. Nevertheless, we will see in Section 3.2 that as soon as self-gravity is included, even in our non-stratified disk the conditions for the Kelvin-Helmholtz instability are met, enhancing the strength of particle diffusion. As such, we will first show how the vertical scale height of the pebble layer in a turbulent disk is modified by the inclusion of self-gravity. \subsection{Dust scale height at Hill density} In our previous numerical experiments \citep{KlahrSchreiber2020a}, we performed two-dimensional, vertically integrated simulations of the SI and self-gravity. This means that the simulations were effectively 2D, i.e.\ the third dimension is entirely in one cell and thus vertically integrated by default. Thus, we did not have to consider the sedimentation of the particles.
In the present three-dimensional work, we do not include vertical stellar gravity either, because it is sufficient for us to study streaming instability (SI) alone, and we also do not cover sufficient height of the disk to induce Kelvin-Helmholtz instability (KHI): the study of KHI modes driven by the sedimentation of dust \citep{Weidenschilling1980} demands larger boxes of about $L = 0.4H$, as seen in e.g.\ \citet{Gerbig2020}. \corrected{The goal of that paper was to understand the dust enhancement $Z$ needed to overcome KHI and create a pebble layer dense enough to trigger streaming instability and self-gravity for planetesimal formation. The question there was whether it is possible to form any planetesimal at all, independent of size. In contrast, for the present paper we assume that we are already in a situation in which planetesimals can in principle form, as Hill density is already reached, and we ask how big a pebble cloud has to be in order to collapse. To answer this question,} we want to identify the smallest possible box at Hill density, or more precisely, the smallest necessary mass in a small box that can undergo gravitational collapse. Thus the box in our numerical experiments is only $L_x = L_y = L_z = 0.001 H$ in size. We have chosen that size because for the Stokes number we picked and the strength of SI \corrected{in terms of measured diffusivity} we found, the expected critical length-scales $l_c$ should also be on the order of $0.001 H$ (see Figure~\ref{fig:3d_prep_simulations}). \corrected{In \citet{Schreiber2018} we experimented with the influence of Stokes number, average dust-to-gas ratio and box size on the strength of streaming instability in terms of pebble density fluctuations and particle diffusion. We found that SI becomes weaker as soon as the fastest growing modes no longer fit into the box, yet SI does not die out and still drives significant diffusion, controlling the onset of gravitational collapse at high dust-to-gas ratios.
Other work usually does not consider such high dust-to-gas ratios or small boxes, yet for the parameters where we approach the simulations of \citet{JohansenYoudin2007} we find agreement in the measured SI properties in terms of r.m.s.\ velocities, diffusion and particle concentration.} Without self-gravity \corrected{ the thickness of the pebble layer around the midplane under turbulence for $\stokes > \delta$ is (see Equation \ref{eq:Dubrulle})} \begin{equation} h_\mathrm{p} = \sqrt{\frac{\delta}{St}} H. \end{equation} However, upon reaching Hill density at the midplane, the vertical acceleration from self-gravity $g_z = -\partial_z \Phi_{\rm dust}$ acting on the dust is nine times stronger than the vertical component of stellar gravity, as seen from the Poisson equation \begin{equation} \nabla^2 \Phi_{\rm Hill} = 4 \pi G \rho_{\rm Hill} = 9 \Omega^2, \end{equation} where we used the definition of the Hill density in Equation~\ref{HillDens}. Thus, around the mid-plane the gravitational acceleration is $g_z = - 9 \Omega^2 z$ in comparison to the stellar gravity $g_\odot = -\Omega^2 z$, and we can neglect the latter. As shown in Appendix \ref{sec:B} this leads to a new pebble scale height at Hill density of \begin{equation} \lcrit = \frac{1}{3} \sqrt{\frac{\delta}{St}} H \label{eq:firstlc} \end{equation} and expanding this to even larger peak densities $\rho_c$ (see Equation\ \ref{eq:firsthlc}) gives: \begin{equation} \hlcrit = \sqrt{\frac{\rho_{\rm Hill}}{\rho_c}} l_c \label{eq:firsthlca} \end{equation} In \citet{KlahrSchreiber2020a} we show that the vertical distribution of pebbles is actually a hyperbolic function, yet it is sufficiently similar to a Gaussian of the same width for small values of $z$, i.e.\ $z < \hlcrit$.
However, integrating $\rho$ vertically from $-\infty$ to $+\infty$ yields $\Sigma_\mathrm{Gauss} = \sqrt{2 \pi} \hlcrit$ for a Gaussian, whereas for the hyperbolic function it is $\Sigma = 2 \sqrt{2} \hlcrit$, which is what we use for our further analysis. The average or initial dust density $\rho_0$ that we choose for our computational domain is defined by multiples $f$ of the Hill density, $\rho_0 = f \rhoHill$, thus the column density is always $\Sigma = L f \rhoHill$. As long as $\hlcrit$ is sufficiently smaller than the box half-height $L/2$, we can use \begin{equation} \Sigma = 2 \sqrt{2} \hlcrit \rho_c = f L \rho_{\rm Hill}. \label{eq:2hat} \end{equation} Thus, by eliminating $\rho_{\rm Hill}/\rho_c$ via Equation\ \ref{eq:firstlc} we find an expression to determine the vertical diffusivity $\delta$ in our numerical experiments (with fixed $L,f,\stokes$ and $H$) as a function of the measured pebble scale height $\hlcrit$: \begin{equation} \delta = \frac{9}{2 \sqrt{2}} f \stokes \frac{L \hlcrit}{H^2}. \label{eq:firstlc3} \end{equation} Note that in the case of self-gravity the diffusivity is proportional to the dust scale height $\hlcrit$, whereas without self-gravity the diffusivity is proportional to the square of the dust scale height, $h_\mathrm{p}^2$ (see Equation \ref{eq:Dubrulle}). \subsection{Toomre stability} As explained in \citet{KlahrSchreiber2020a}, the criterion $\lcrit < \onehalf L$ \corrected{(a sphere of radius $\lcrit$ has to fit into the simulation box with dimensions $L$)} for gravitational collapse describes the stability of a local non-linear density fluctuation. In contrast, the Toomre stability criterion \citep{Toomre1964} applies to the linear growth of infinitesimal perturbations in surface density, so it is worthwhile to reconcile the relation between the two criteria here.
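The elimination step behind Equation \ref{eq:firstlc3} can be checked numerically by a round trip: choose a diffusivity, construct the corresponding scale height from $\Sigma = 2\sqrt{2}\,\hlcrit\,\rho_c = f L \rho_{\rm Hill}$ with $\rho_c = \rho_{\rm Hill}(l_c/\hlcrit)^2$, and recover the same diffusivity. A minimal sketch (code units $H = 1$; function names are ours):

```python
import math

def delta_from_scale_height(h_hat, f, St, L, H=1.0):
    """Vertical diffusivity inferred from the measured pebble scale height:
    delta = 9/(2 sqrt(2)) * f * St * L * h_hat / H**2."""
    return 9.0 / (2.0 * math.sqrt(2.0)) * f * St * L * h_hat / H**2

def l_c(delta, St, H=1.0):
    """Critical length at Hill density: l_c = (1/3) sqrt(delta/St) H."""
    return math.sqrt(delta / St) * H / 3.0

# Round trip: pick a diffusivity, derive h_hat by eliminating rho_c from
# Sigma = 2*sqrt(2)*h_hat*rho_c = f*L*rho_Hill with rho_c = rho_Hill*(l_c/h_hat)**2,
# then recover the same diffusivity from the scale-height relation.
delta0, St, L, f = 7.25e-9, 0.1, 1e-3, 1.0
lc = l_c(delta0, St)
h_hat = 2.0 * math.sqrt(2.0) * lc**2 / (f * L)
assert abs(delta_from_scale_height(h_hat, f, St, L) - delta0) / delta0 < 1e-9
```

The illustrative input values here are chosen to match the magnitudes quoted later in the paper, but the identity holds for any combination of $\delta$, $f$, $\stokes$ and $L$.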
\corrected{The Toomre analysis for planetesimal formation in \citet{Safronov1969} and \citet{GoldreichWard1973} considers a gas-free system in which the random motions of (almost collision-free) particles provide a pressure counteracting gravity. The root mean square of these random particle velocities then defines the ``sound speed'' of the pebbles. This leads to the same approach as when considering the gravitational stability of a gas disk with thermal pressure and the speed of sound of the gas \citep{BinneyTremaine2008}.} \corrected{But note that if we now also derive a ``sound speed'' for pebbles diffused by turbulence, then this is not simply the velocity dispersion of the pebbles.} \corrected{Our particle speed of sound represents the resistance of pebble clouds against compression by gravity. For instance, in the case of negligible turbulent diffusion $\delta \rightarrow 0$, but with a Stokes number still smaller than $\delta$, the ``particle speed of sound'' approaches the speed of sound of the gas (see Equation\ \ref{eq:asound}), whereas the velocity dispersion approaches zero (see Equation\ \ref{eq:YLrms})}. \corrected{This also means that one cannot use the measured r.m.s.\ velocities of the pebbles in our simulation for the Toomre analysis; instead one needs the actual diffusivity on the scales of the accumulations. Therefore we measure the scale height of pebbles in the disk (see Equation\ \ref{eq:firstlc3}) and track the diffusion of individual pebbles \citep{KlahrSchreiber2020a}.} We write the momentum equation for our pebble-gas mixture in a classical fashion \citep[see e.g.,][]{ChiangYoudin2010}, but instead of thermal pressure or a dispersion velocity, turbulent diffusion acts as the stabilizing agent.
As a result, the momentum flux by pressure \corrected{for an ideal gas}, $ - c_s^2 \nabla \rho $, is replaced by \corrected{the ``diffusion pressure'' for closely coupled particles,} $- a^2 \nabla \rho = - \frac{D}{\tau}\nabla \rho$, \corrected{which means that the diffusive flux is generated in the momentum equation and not added to the continuity equation (see our derivation in Appendix \ref{sec:A}). Thus the pebble velocity $\mathbf{v}$ already contains the diffusive flux, and one avoids the problem that neglecting the diffusive flux in the momentum equation can lead to a violation of angular momentum conservation \citep{Tominaga2019}.} Thus, we adopt an only slightly modified set of equations to describe the hydrodynamic behaviour of pebbles under self-gravity in comparison to classical work \citep{Safronov1969,GoldreichWard1973}. \corrected{The background state for our Toomre analysis is a constant surface density of pebbles and gas and Keplerian shear. Note that we neglect azimuthal friction with the gas, which is the driver of the secular gravitational instability \citep{Youdin2011}, because for the high dust-to-gas ratios we consider such damping seems inefficient. As discussed in Appendix \ref{sec:B}, the radial friction could be included, but it only slows down the radial contraction and does not change the resulting Toomre stability criterion itself.
We linearize the equations for continuity and momentum around the background state, as outlined in Chapter 6 of \citet{BinneyTremaine2008}}: \begin{eqnarray} \label{eq:BTrho} \frac{1}{\Sigma}\partial_t \Sigma' + \partial_r v_r' &=& 0,\\ \partial_t v_r' - 2 \Omega v_\phi' &=& -\frac{1}{\Sigma} \frac{D}{\tau} \partial_r \Sigma' - \partial_r \Phi',\label{eq:BTvr}\\ \partial_t v_\phi' + \frac{\kappa^2}{2 \Omega} v_r' &=& 0,\label{eq:BTvphi} \end{eqnarray} and the usual $\Phi' = 2 \pi G \Sigma' / |k|$ for the perturbed potential of a razor-thin disk \citep{BinneyTremaine2008}, where the delta function for the vertical density stratification is implemented as \begin{equation} \rho = \frac{k \Sigma}{2} e^{- |k| |z|}. \label{eq:razor_rho} \end{equation} We adopt Wentzel-Kramers-Brillouin (WKB) waves such that perturbations scale as \begin{equation} \Sigma' = \Sigma_a e^{- i (k r - \omega t)}. \end{equation} The dispersion relation is identical to the one given in \citet{GoldreichWard1973}, except that we replace the random motions $c$ in their equation by \corrected{our pseudo speed of sound for the particles $a$, which, to stress this once more, is not the r.m.s.\ velocity of particles in the flow. This pseudo speed of sound} $a \equiv c_s \sqrt{\delta_x/\stokes}$ reflects the pressure-like resistance against compression generated by diffusion. The dispersion relation is then: \begin{equation} \label{eq:dispersion_relation} \omega^2 = a^2 k^2 - 2 \pi G \Sigma |k| + \kappa^2, \end{equation} where the epicyclic frequency $\kappa$ equals the orbital frequency for a Keplerian rotation profile, $\kappa^2 = \Omega^2$. We find the Toomre value for this system to be \begin{equation} \label{eq:particle_toomre_q} Q_p = \sqrt{\frac{\delta_x}{St}} \frac{c_s \Omega}{\pi G \Sigma_{\rm pebble}} = \sqrt{\frac{\delta_x}{St}} \frac{Q_g}{Z}, \end{equation} allowing us to directly compare the particle Toomre value to the gas Toomre value $Q_g$ by using the metallicity $Z = \Sigma/\Sigma_{\rm gas}$.
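The marginal-stability property of Equation \ref{eq:dispersion_relation} can be verified numerically: for $Q_p = 1$ the minimum of $\omega^2$ over $k$ vanishes exactly at $k_{\rm fgm} = \sqrt{\stokes/\delta_x}\,H^{-1}$, while all other wave numbers remain stable. A minimal sketch (code units $c_s = \Omega = H = 1$ are assumed; function names are ours):

```python
import math

def omega_sq(k, a, G_Sigma, kappa):
    """Dispersion relation omega^2 = a^2 k^2 - 2 pi G Sigma |k| + kappa^2."""
    return a**2 * k**2 - 2.0 * math.pi * G_Sigma * abs(k) + kappa**2

def toomre_Qp(delta_x, St, cs, Omega, G_Sigma):
    """Particle Toomre number Q_p = sqrt(delta_x/St) * cs * Omega / (pi G Sigma)."""
    return math.sqrt(delta_x / St) * cs * Omega / (math.pi * G_Sigma)

delta_x, St, cs, Omega = 1.9e-6, 0.1, 1.0, 1.0
a = cs * math.sqrt(delta_x / St)        # pseudo speed of sound of the pebbles
G_Sigma = a * Omega / math.pi           # surface density chosen so that Q_p = 1

assert abs(toomre_Qp(delta_x, St, cs, Omega, G_Sigma) - 1.0) < 1e-12
k_fgm = math.sqrt(St / delta_x)         # fastest growing mode, in units of 1/H
assert abs(omega_sq(k_fgm, a, G_Sigma, Omega)) < 1e-9   # marginally stable
assert omega_sq(0.5 * k_fgm, a, G_Sigma, Omega) > 0.0   # other modes stable
```

The diffusivity value is only illustrative (chosen to match the radial diffusivity measured later in the paper); the marginal-stability structure is independent of it.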
For $Q_p < 1$ the system is linearly unstable to perturbations. Note that the metallicity in the context of Equation~\ref{eq:particle_toomre_q} quantifies the particle concentration in the background state. A local concentration of pebbles, and thus a local increase of $Z$ on scales $< 1/k$, can still fragment, as shown by \citet{Johansen2009} and more recently also discussed in \citet{Gerbig2020}, but this process is then not triggered by the linear gravitational instability. Still, it is interesting to note that this non-linearly triggered collapse will occur when the $Q_p$ calculated for a local metallicity enhancement falls below $2/3$, which is the \corrected{definition of the collapse criterion $1 > \tilde Q_p = 3/2\, Q_p$ in \citet{Gerbig2020}. Note that $\tilde Q_p$ is based on the assumption of isotropic diffusion and therefore uses the vertical pebble scale height to estimate the diffusivity. Yet the Toomre value $Q_p$ is independent of vertical diffusion, as we have demonstrated.} We deem it instructive to investigate the Toomre parameter for when the particle mid-plane reaches Hill density, $\rho_c = \rhoHill$. \corrected{Yet the Toomre ansatz assumes $\Sigma = 2 \rho_c/k$, which makes $Q_p$ a function of wave number for a given volume density in the midplane, so this question is not straightforward to answer.} We begin by determining what density at the midplane satisfies $Q_p = 1$. Setting $Q_p = 1$ also defines a unique unstable wavelength of the fastest growing mode, with wave number $k_{\rm fgm} = \sqrt{St/\delta} H^{-1}$.
Then, via Equation\ \ref{eq:razor_rho}, we can determine $\Sigma(Q_p = 1) = 2 \rho_{\rm Toomre} H \sqrt{\delta/St}$ and define the Toomre density $\rho_{\rm Toomre}$ for isotropic diffusion $\delta = \delta_x = \delta_z$: \begin{equation} \rho_{\rm Toomre} = \sqrt{\frac{\delta}{St}}\frac{ c_s \Omega }{\pi G 2 H \sqrt{\delta/St}} = \frac{1}{2 \pi} \frac{M_\odot}{R^3}, \end{equation} which is $4.5$ times lower than the Hill density\footnote{This Toomre density also applies to a gas disk, as thermal pressure is generally isotropic.}. This comes as no surprise, as the Toomre criterion is for axisymmetric modes, which are not subject to tidal shear. Therefore, a local (non-axisymmetric) particle cloud with $\rho_\mathrm{c} = \rho_\mathrm{Toomre}$ is not stable against tidal gravity and will be ripped apart. Conversely, the Toomre parameter of a disk with isotropic diffusion at Hill density would fall below 1 \citep{Gerbig2020}. On the other hand, if the dust layer is vertically much thinner than the radial unstable wavelength, because vertical diffusion is much weaker than radial diffusion ($\delta_z < \delta_x$), then Hill density in the midplane can be compatible with Toomre values larger than one. From Equation\ \ref{eq:firstlc}, we know that the vertical thickness of the relevant particle layer is $\lcrit$. Due to potentially different radial and vertical diffusivities ($\delta_{z} \ne \delta_{x}$), we define a new $\lcritz = \frac{1}{3} \sqrt{\delta_z/\St} H$. Thus, $\Sigma = 2 \sqrt{2} \rho_{\rm Hill} \lcritz $ (see Equation\ \ref{eq:2hat}) and the Toomre value would then be \begin{equation} Q_{\rm Hill} = \sqrt{\frac{\delta_x}{St}} \frac{c_s \Omega}{\pi G \rho_{\rm Hill} 2 \sqrt{2} \lcritz } = \frac{\sqrt{2}}{3}\sqrt{\frac{\delta_x}{\delta_z}}.
\end{equation} Thus, as long as radial diffusion is sufficiently larger than vertical diffusion, which seems to be the case for all configurations studied so far, one could even globally ($L > 1 / k$) reach the Hill density in the midplane and still not be linearly unstable in the Toomre sense. In our numerical experiments, we set the dust mass in our domain $M_\mathrm{box} = f L^3 \rhoHill$ to a fixed value, which defines the pebble surface density as $\Sigma = f L_z \rhoHill$. As a result, the Toomre value in the simulation is given via the height of the simulation box $L_z = L$ and the mean density of pebbles expressed in multiples $f$ of the Hill density \begin{equation} Q(f) = \frac{4}{9} \sqrt{\frac{\delta_x}{St}} \frac{H}{L_z}\frac{1}{f}, \label{eq:QF} \end{equation} which is independent of the strength of vertical diffusion. For the sake of completeness we calculate the range of unstable wavelengths from the Toomre parameter, which yields the fastest growing mode at \begin{equation} \lambda_{\rm fgm} = 2 \pi \sqrt{\frac{\delta_x}{St}} H Q_p = 6 \pi Q_p \lcrit, \end{equation} a wavelength considerably larger than the critical length for collapse $\lcrit$. For our numerical experiments, once we have determined the strength of radial diffusion and find Toomre values lower than unity, we can also determine the fastest growing mode for our simulations to be \begin{equation} \lambda_{\rm fgm} = \frac{8 \pi}{9} \frac{\delta_x}{St} \frac{H^2}{f L_z} = 8 \pi \frac{\lcrit^2}{f L_z}. \label{eq:lambda_fgm} \end{equation} For any Toomre parameter $Q_p < 1$ there exist a largest and a smallest unstable wavelength, where the large modes are stabilised by the Coriolis force, i.e.\ by $\kappa^2$: \begin{equation} \lambda_{\rm min, max} = \frac{\lambda_{\rm fgm}}{1 \pm \sqrt{1 - Q_p^2}}.
\end{equation} Interestingly, the Toomre wavelengths are proportional to $\lcrit^2$, whereas the Jeans length for gravitational instability, i.e.\ in the absence of the Coriolis term, is linear in $\lcrit$, \begin{equation} \lambda_{\rm Jeans} = 2 \pi \lcrit, \label{eq:Jeans} \end{equation} as discussed in \citet{KlahrSchreiber2020a}. With all relevant length scales established as a function of the turbulent diffusivity, Stokes number, and the pebble load of our disk, we have all the tools at hand to interpret the following simulations. \begin{figure*} \gridline{ \fig{Figure1b.png}{0.9\textwidth}{(a): Radial-vertical slice for $\varepsilon/\varepsilon_0$.} } \gridline{\fig{Figure1a.png}{0.9\textwidth}{(b): Radial-azimuthal slice for $\varepsilon/\varepsilon_0$.} } \caption{Simulation end-states of our 3D SI study without self-gravity. The colour scale represents the dust-to-gas ratio $\varepsilon/\varepsilon_0$ between 0.1 and 10.0. We compare 3 different box sizes, but always the same physical parameters in terms of particle size and average dust-to-gas ratio. The upper row is a slice in the radial-vertical direction and the lower row in the radial-azimuthal direction. The left column with $L = 0.1 H = 20 \eta R$ is almost a copy of model AB-3D ($L = 40 \eta R$) in \citet{JohansenYoudin2007} and shows the same saturated state as their Figure 9. For our collapse study we increased the resolution in steps of a factor of 10 while reducing the box size. Therefore we essentially see a zoom-in to smaller scales of SI. As smaller scales are less unstable, the overall strength of SI becomes weaker, but simultaneously we were able to find the lower cut-off of the instability, as the smallest structures are clearly resolved.
We chose the $L=0.001 H = 0.2 \eta R$ simulation for our self-gravity study, because based on the measured diffusion, the critical length could be larger than the box size, preventing collapse.} \label{fig:3d_prep_simulations} \end{figure*} \section{Numerical Experiments} \label{sec:3} \corrected{In \citet{KlahrSchreiber2020a} we used in total 15 different two-dimensional simulation setups to study two different Stokes numbers and a range of total box sizes, plus some additional runs for a different radial pressure gradient and different initial dust-to-gas ratios. All simulations confirmed our criterion $L > 2 l_c$, respectively $m > m_c$, to form planetesimals. Such an extended parameter study is currently not possible for three-dimensional simulations. We therefore studied a range of three-dimensional SI simulations (resembling model {AB-3D} with $\stokes = 0.1$ and initial dust-to-gas ratio $\varepsilon_0 = 1$ from \citet{JohansenYoudin2007}) with decreasing box sizes $L$ (see Figure \ref{fig:3d_prep_simulations}) \citep{SchreiberThesis} and picked a particular model for which the measured diffusivity leads to a critical length scale on the order of the box size $L = 0.001 H$; this box is 200 times smaller than the simulation in \citet{JohansenYoudin2007}. The computational cost at the resulting resolution is immense, thus when we found that our initial model for $\rho_0 = \rhoHill$ did not collapse, we did not set up a new simulation with a bigger box, but gradually increased the total mass content in the simulations, thus not changing the dust-to-gas ratio and SI, but only the effect of self-gravity.} \corrected{In that sense the three-dimensional simulations in the present paper test our collapse criterion $m > m_c$ by variation of a different parameter than in \citet{KlahrSchreiber2020a}.
But in both cases, whether we change the box size $L$ and keep $\rho_0 = \rhoHill$ constant as in the two-dimensional case, or keep the box size $L$ constant and increase $\rho_0 = f \rhoHill$ with $f = 2, 4$ and $8$, we effectively change the total mass of pebbles in the box until we find the simulation to produce planetesimals. For the purpose of testing our collapse criterion, we therefore extended the definition of the critical length $\lcrit$, which in \citet{KlahrSchreiber2020a} was originally defined only for pebbles at Hill density, to $\hlcrit = \lcrit f^{-\frac{1}{2}} $ as a function of the increased pebble density $f$. Nevertheless, any size estimates for planetesimals would still be based on the original condition using $\lcrit$ with $f=1$, based on large-scale SI simulations (see Equation\ \ref{eq:mass}). This modification for our numerical experiments is justified because the general criterion $\lcrit < \onehalf L$ has to be valid for arbitrary combinations of diffusivity and pebble mass in the box. We will compare the results from this three-dimensional study with the original two-dimensional study \citep{KlahrSchreiber2020a} in Section \ref{sec:4}.} All our past simulations \citep{Schreiber2018,KlahrSchreiber2020a}, as well as those in the present paper, have been performed with the Pencil Code \citep{Brandenburg2001}, which solves for the gas density $\rho_\mathrm{g}$ with a finite difference version of the following set of equations in the shearing sheet approximation \begin{equation} \frac{\partial \rho_\mathrm{g}}{\partial t} + \nabla \cdot \left(\rho_\mathrm{g} \mathbf{u}\right) + u_{0,y} \frac{\partial \rho_\mathrm{g}}{\partial y} = f_\mathrm{D}\left(\rho_\mathrm{g}\right), \end{equation} where $f_\mathrm{D}\left(\rho_\mathrm{g}\right)$ is a hyper-diffusivity term to stabilise the scheme. Vectors are denoted by bold symbols; $\mathbf{e_x}$ and $\mathbf{e_y}$ are the radial and azimuthal unit vectors.
Gas velocities $\mathbf{u}$ are solved relative to the \konsti{unperturbed local azimuthal velocity} $u_{0,y}$ \konsti{$= -q\Omega x$}, \corrected{with $q = 1.5$ for the Keplerian profile,} via the equation of motion \begin{eqnarray} \label{eq:gas_euler_code} \frac{\partial \mathbf{u}}{\partial t} + \left(\mathbf{u}\cdot\nabla\right)\mathbf{u} + u_{0,y}\frac{\partial \mathbf{u}}{\partial y} = -c_\mathrm{s}^2 \nabla \ln \rho_\mathrm{g} + \Omega h \beta \mathbf{e_{x}}\\ + \left(2\Omega u_y \mathbf{e_{x}} - \frac{1}{2}\Omega u_x \mathbf{e_{y}} \right) - \varepsilon \frac{ \mathbf{u - v}}{\tau} + f_\nu \left(\mathbf{u},\rho_\mathrm{g}\right).\nonumber \end{eqnarray} Our simulation is isothermal, thus we use a fixed speed of sound $c_s$; $\beta$ denotes the radial pressure gradient, see below. Note that, in contrast to \citet{Gerbig2020}, no vertical gravity $- \Omega^2 z\, \mathbf{e_z}$ is included, as discussed before. $\Omega h \beta \mathbf{e_{x}}$ represents the effect of the global pressure gradient in the disk, \konsti{which drives the relative velocity between particles and gas \citep{Nakagawa1986}, and as such leads to radial drift and ultimately to drag instabilities like the SI.} Particles are treated as Lagrangian tracers. Their positions $\mathbf{x}$ and velocities $\mathbf{v}$ in the shear frame are governed by \begin{eqnarray} \frac{\partial \mathbf{x}}{\partial t} = - q \Omega x\, \mathbf{e_{y}} + \mathbf{v} \end{eqnarray} and \begin{eqnarray} \frac{\partial \mathbf{v}}{\partial t} = \left(2\Omega v_y \mathbf{e_{x}} - \frac{1}{2}\Omega v_x \mathbf{e_{y}}\right) - \frac{\mathbf{v - u}}{\tau} \end{eqnarray} with the coupling term $- (\mathbf{v - u})/\tau$ transferring momentum between dust and gas. For additional technical features we refer to \citet{Gerbig2020}.
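The role of the drag term in the particle equation of motion can be illustrated with a minimal explicit-Euler integration (a sketch only, not the Pencil Code's actual scheme; we assume code units $\Omega = 1$ and, for simplicity, a static gas flow $\mathbf{u} = 0$ in the shear frame):

```python
import math

# Integrate dv/dt = (2 Omega v_y, -1/2 Omega v_x) - (v - u)/tau for a single
# particle: the Coriolis terms rotate the velocity residual, while the drag
# term damps it toward the gas velocity on the stopping time tau = St/Omega.
Omega, St = 1.0, 0.1
tau = St / Omega
u = (0.0, 0.0)            # gas velocity in the shear frame (assumed static)
v = [1.0, 0.0]            # initial particle velocity perturbation
dt, t = 1e-4, 0.0
while t < 5.0 * tau:      # integrate for five stopping times
    ax = 2.0 * Omega * v[1] - (v[0] - u[0]) / tau
    ay = -0.5 * Omega * v[0] - (v[1] - u[1]) / tau
    v[0] += ax * dt
    v[1] += ay * dt
    t += dt

# The residual decays roughly as exp(-t/tau); the Coriolis terms only rotate it.
res = math.hypot(v[0] - u[0], v[1] - u[1])
assert 0.003 < res < 2.0 * math.exp(-5.0)
```

This also makes explicit why $\stokes = \Omega \tauS$ is the natural coupling parameter: for $\stokes \ll 1$ the damping acts on a small fraction of an orbit.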
For our experiment, we choose a domain size \konsti{of} $L_x=L_y=L_z={0.001}{H}$, and \konsti{a Stokes number of} $\textrm{St}=\Omega \tauS = 0.1$ for the particles, representing typical \konsti{maximum} pebble sizes in protoplanetary disks \konsti{\citep[see e.g.,][]{Birnstiel2012}}. In \citet{Schreiber2018}, we also performed 2D simulations with $\textrm{St} = 0.01$, which also confirmed our collapse criterion, yet at much higher computational cost. For now, high resolution 3D simulations at these small $\stokes$ numbers are not feasible. The resolution is $128$ cells per dimension. With on average 10 particles per cell, this leads to a total number of $20,971,520$ particles. For the following collapse simulation, we needed 1.4 $\times 10^{6}$ core-hours on 1024 cores in parallel, a total of 58 days of net running time, spread over one year. Larger numbers of cores would not help for such a small system. The 3D parameter study on the SI with various resolutions, initial dust-to-gas ratios and Stokes numbers to identify a suitable setup for our simulation \citep{SchreiberThesis} consumed another 15 $\times 10^{6}$ core-hours, without which the simulations presented here could not have been performed. This is only to justify that we did not do an extended parameter study as we did in our two-dimensional work \citep{KlahrSchreiber2020a}. The pressure gradient was set to $\beta=-0.1$, which translates into the sub-Keplerian speed $dV$ as \begin{equation} dV = \eta v_K = -\frac{1}{2} \beta c_s. \end{equation} \corrected{This pressure gradient is twice as large as the one used in \citet{JohansenYoudin2007}, in order to compensate for the fact that our box is 100 times smaller and does not cover the fastest growing modes of SI, while still driving the SI to a saturated level of diffusion within the given computation time.
In the next Section we will directly compare the diffusivities measured in our simulation with the ones in \citet{JohansenYoudin2007}.} The initial dust-to-gas ratio is set to $\varepsilon_0=1$, the same as in \citet{JohansenYoudin2007}, and as such chosen three times lower than in \citet{KlahrSchreiber2020a}. Although dust-to-gas ratios greater than $\varepsilon_0=1$ do not fundamentally change the nature of SI, they need longer computation time to reach saturated turbulence \citep{Schreiber2018}. \begin{figure*} \plotone{Figure2.pdf} \caption{Timeseries of the maximum dust-to-gas ratio in the 3D collapse simulation with $St=0.1$ particles in units of the average dust-to-gas ratio $\varepsilon_0$. The grey vertical line indicates the time when self-gravity is turned on. The initial gravity parameter is set to $\tilde{G}_0=0.71$, which represents Hill density (blue curve). Since no collapse occurred in the initial simulation ($f=1$), the gravity parameter was increased to $\tilde{G} = f\tilde{G}_0$ (other colored lines) to stepwise double the mass of pebbles and gas in the box, restarting from a gravitoturbulent snapshot. The simulation with eight times larger mass collapsed immediately (yellow), within one contraction time $\tauC$. The one with four times higher gravity took longer, but collapsed after a quarter of an orbit (orange). The run with two times larger mass did not collapse (purple) for more than 30 contraction times $\tauC$. \label{fig:3d_coll_rhopmax_ts}} \end{figure*} \subsection{3D Local SI without Self-Gravity} The initial run \texttt{mod0} without self-gravity took 22 days of effective run time to bring the simulation into a state of saturated SI, even though we had already increased the radial pressure gradient by a factor of two. Hence, all further tests (with increased self-gravity) were performed with this single simulation as their basis. The timeseries of the maximum dust-to-gas ratio is shown in Figure \ref{fig:3d_coll_rhopmax_ts} (blue).
The vertical grey bar indicates when self-gravity is switched on. The particle diffusivity is measured in the saturated SI state in the radial and vertical direction before self-gravity is switched on. To measure the radial and vertical diffusion in our simulations, we trace individual particles and fit their increasing displacement with the diffusion ansatz $dx = \sqrt{D t} $. For details of this procedure, we refer to Section 3 of \citet{Schreiber2018}. The measured and scaled dimensionless diffusivities ($\delta = D / (c_s H)$) in the simulation without self-gravity are in the radial direction \begin{equation} \delta_x = \left( {1.90 \pm 1.22} \right) \cdot 10^{-6}, \end{equation} and in the vertical direction \begin{equation} \delta_z = \left( {7.25 \pm 2.20} \right) \cdot 10^{-9}. \end{equation} \corrected{A similar yet weaker anisotropy was already reported in \citet{JohansenYoudin2007}, albeit for a simulation in a much larger box, $L = 0.2 H$, and using a pressure gradient only half as large as the one chosen here:} \begin{equation} \delta_{x, L = 0.2 H} = \left( {1.6 \pm 0.2} \right) \cdot 10^{-5}, \end{equation} and in the vertical direction \begin{equation} \delta_{z, L = 0.2 H} = \left( {2.7 \pm 0.1} \right) \cdot 10^{-6}. \end{equation} \corrected{See also \citet{Schreiber2018} for extended two-dimensional studies on the anisotropy of diffusion. The overall strength of our diffusivity is thus an order of magnitude smaller in the radial direction, and two orders of magnitude smaller in the vertical direction, than found by \citet{JohansenYoudin2007}, but this effect is unavoidable for the box size that we need to test our stability criterion.
We will discuss the role of the box size and resolution for simulations of planetesimal formation in the presence of SI in the discussion section.} \corrected{Note that our numerical experiments are meant to test the criterion for collapse, which should hold for any diffusivity value, whereas actual critical masses for pebble clouds in the solar nebula use the larger diffusivities inferred from values in the literature \citep{Schreiber2018,JohansenYoudin2007}. Additional determinations of radial diffusion in SI and in the presence of additional turbulence, especially for mixed particle sizes \citep{Schaffer2018}, are unfortunately not available yet.} The fact that the radial diffusivity was found to be more than two orders of magnitude larger than the vertical diffusivity means that the corresponding estimated critical length scales differ by one order of magnitude. For Hill density, our diffusivities translate into a radial critical length scale via Equation \ref{eq:collapseCrit}, i.e.: \begin{equation} \lcritx =1.4 \times 10^{-3}\, H = 1.4\, L, \end{equation} which does not fit into our domain, and a vertical scale of \begin{equation} \lcritz =8.9 \times 10^{-5}\, H = 0.089\, L, \end{equation} which does fit into the box. More importantly, the vertical Jeans length $l_{\rm Jeans} = 2 \pi l_{cz} = 0.56 L$ also fits into the box, and thus we can expect vertical contraction. But note that this does not mean collapse, as there can be no gravitational collapse in one dimension. As both the critical length scale and the Jeans length in the radial direction are larger than the radial box extent, and we have no measure for diffusion in the azimuthal direction, the contraction will possibly not go beyond forming a layer of half-width $l_{c,z}$. \corrected{Even though we argue that azimuthal diffusion will be equally important at the onset of a three-dimensional collapse, we are not able to determine this diffusion before the collapse happens.
Before an azimuthally contracted sheet has formed, the pebble motion is dominated by Keplerian shear, and particle tracking is, to our understanding, impossible. Whether subtracting the Keplerian profile before tracking the pebbles would lead to useful results remains to be shown.} If we now switch on self-gravity, the Toomre parameter for Hill density can be determined from the radial diffusivity (Equation\ \ref{eq:QF}) and is found to be $Q_p = 1.94$, indicating stability against linear self-gravity modes. This means that even if a radial Jeans length would fit into our box, it would be stabilized by the Coriolis force. But note that self-gravity might modify the strength of diffusion, which we are going to study in the next section. \begin{figure*} \gridline{\fig{Figure3a.png}{0.9\textwidth}{(a): $\rho_0 = \rho_{\rm Hill}$ ; $t = {2.94}{\Omega^{-1}}$} } \gridline{\fig{Figure3b.png}{0.9\textwidth}{(b): $\rho_0 = 2\rho_{\rm Hill}$ ; $t = {2.89}{\Omega^{-1}}$} } \caption{Simulation end-states of the 3D collapse study: non-collapsing cases. Each column is a projection in a different direction: vertical $=z$ (left), azimuthal $=y$ (middle), and radial $=x$ axis (right). The color bar shows the average dust-to-gas ratio along the projection axis. The simulations have the same set of parameters; only the total mass, and thus the strength of self-gravity, is altered via the $f$ parameter.\label{fig:3d_coll_endstatesa}} \end{figure*} \subsection{A simulation at Hill density} At 1.6 orbital periods, after the SI has saturated, we switch on self-gravity (see Figure\ \ref{fig:3d_coll_rhopmax_ts}). We set the dimensionless gravity constant in the Pencil Code to $\Gmod \equiv G f \rho_{\rm Hill}/\Omega^2 = 9/(4 \pi) = 0.71$, which by construction means that for $f=1$ the average density in our simulation domain is exactly the Hill density.
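The numbers above follow directly from the definitions in this paper; a quick numeric sketch makes the cross-check explicit (code units; function names are ours):

```python
import math

# With 4 pi G rho_Hill = 9 Omega^2, the code gravity parameter is
# G_tilde = G f rho_Hill / Omega^2 = 9 f / (4 pi), i.e. 0.71 for f = 1.
def G_tilde(f):
    return 9.0 * f / (4.0 * math.pi)

assert abs(G_tilde(1.0) - 0.71) < 0.01
# Doubling the box mass (f = 2) doubles the gravity parameter.
assert abs(G_tilde(2.0) - 2.0 * G_tilde(1.0)) < 1e-12

# Toomre value of the box, Q(f) = 4/9 sqrt(delta_x/St) (H/L_z) / f,
# using the measured radial diffusivity and L_z = 0.001 H.
delta_x, St, H_over_Lz = 1.9e-6, 0.1, 1000.0
Q = lambda f: 4.0 / 9.0 * math.sqrt(delta_x / St) * H_over_Lz / f
assert abs(Q(1.0) - 1.94) < 0.02
```

Note that $Q(f) > 1$ for $f = 1$ reproduces the linear stability quoted above, while $Q(f)$ drops below unity only for $f \gtrsim 2$.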
Over the next orbital period we noticed that the density fluctuations increased (see Figure\ \ref{fig:3d_coll_rhopmax_ts}), but no collapse and planetesimal formation happened in this run with $f=1$. This is of particular significance because the self-gravitating clumps in our simulation exceed Hill density by a factor of 30, yet are still not able to contract against the turbulent diffusion. \corrected{We continued the simulation for a total of 1.4 orbits, which for the average density (Hill density) corresponds to 14 free fall times $\tauFF$, calculated as: \begin{equation} \label{eq:freefalltime} \tauFF = \sqrt{\frac{3 \pi}{32 G \rho}}\, \approx 0.1 \Torb \sqrt{\frac{\rhoHill}{\rho}}. \end{equation} If we consider the average over-densities in the simulation of 30 times Hill density (see Figure \ref{fig:3d_coll_rhopmax_ts}), then we even ran the simulation for 75 free fall times. As we discuss in \citet{KlahrSchreiber2020a}, the contraction time $\tauC$ for a pebble cloud with $\tauFF \Omega > St$ is actually longer than the free fall time because of the friction of the dust with the gas: \begin{equation} \tauC = \tauFF \left(1 + \frac{8 \tauFF}{3 \pi^2 \tauS}\right). \label{eq:full} \end{equation} In that case the 1.4 orbits would correspond to 5.2 contraction times at Hill density, yet if we again use the average clump density of 30 Hill densities, then we find that we ran our simulation for about 58 contraction times without a gravitational collapse happening. We are therefore confident that even for longer run times no collapse would have occurred.} Instead, a vertically contracted particle layer forms. Note that these dust layers are not necessarily at the disk midplane, as we do not include vertical stellar gravity in our simulations that would define such a midplane. The existence of these layers conforms with our prediction in the previous section, where the vertical Jeans length measured from vertical diffusivities was shown to be smaller than our domain size.
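The run-time bookkeeping above follows directly from Equations\ \ref{eq:freefalltime} and \ref{eq:full}. A minimal numeric sketch; the Stokes number $St = \tauS \Omega = 0.1$ is an assumption of this sketch, and $G\rhoHill = (9/4\pi)\,\Omega^2$ follows from the definition of $\Gmod$ above:

```python
import math

Omega = 1.0                                     # Keplerian frequency (code units)
G_rhoHill = 9.0 / (4.0 * math.pi) * Omega**2    # G * rho_Hill from Gmod = 9/(4 pi)
T_orb = 2.0 * math.pi / Omega
St = 0.1                                        # assumed Stokes number
tau_s = St / Omega                              # friction time

def tau_ff(rho_over_hill):
    """Free fall time for a density rho = rho_over_hill * rho_Hill."""
    return math.sqrt(3.0 * math.pi / (32.0 * G_rhoHill * rho_over_hill))

def tau_c(rho_over_hill):
    """Friction-delayed contraction time of a pebble cloud."""
    tff = tau_ff(rho_over_hill)
    return tff * (1.0 + 8.0 * tff / (3.0 * math.pi**2 * tau_s))

run_time = 1.4 * T_orb
print(tau_ff(1.0) / T_orb)        # ~0.10 T_orb, confirming the prefactor
print(run_time / tau_ff(1.0))     # ~14 free fall times at Hill density
print(run_time / tau_ff(30.0))    # ~75 free fall times at 30 rho_Hill
print(run_time / tau_c(1.0))      # ~5 contraction times (text: 5.2; depends on St)
print(run_time / tau_c(30.0))     # ~57 contraction times (text: ~58)
```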
Independent of whether such a vertical contraction was due to sedimentation in the stellar gravity field or to the gravitational potential of the pebbles themselves, such a dust layer will modify the SI \citep{Johansen2009} and trigger additional instabilities, such as the Kelvin-Helmholtz instability (KHI) \citep{JohansenHenningKlahr2006}. While \cite{Bai2010b} suggest that the SI might change its behavior and become stronger with sedimentation, \citet{Gerbig2020} show that the KHI may be as important as the SI in setting the vertical extent of the particle layer. To verify whether or not the KHI is active in our simulation we determine the Richardson number for our particle layer, which expresses the ratio of the stabilizing vertical stratification to the vertical shear. For that, we need to characterise the dust layer. \corrected{The pebble distribution itself is too noisy to directly measure the local Richardson number for the flow \citep{JohansenHenningKlahr2006}. Therefore, we use the variance of the pebble layer density to determine its thickness.} Thus we fit the dust layer with a Gaussian distribution in the spirit of a non-self-gravitating pebble layer, even though we argued that the correct analytic solution for constant diffusion and a vertically infinite domain would be a hyperbolic function (see Equation\ \ref{eq:cosh}), because the scale heights of the Gaussian and the hyperbolic function are quite similar, as shown in \citet{KlahrSchreiber2020a}, and the standard deviation of the particles is a clearly defined value. Hence, we measure the scale height via the standard deviation of the vertical dust-to-gas ratio distribution to be $\hlcritz = 1.31 \cdot 10^{-4} H$. So about $7$ scale heights fit vertically into our simulation domain, about $1.5$ times as thick as $l_z$ without sedimentation.
This scale height can directly be translated into a new vertical diffusivity (see Equation\ \ref{eq:firstlc3}) and we get \begin{equation} \delta_z = St \, \frac{9 }{2 \sqrt{2}} \frac{\hlcritz L}{H^2} = {4.2} \cdot 10^{-8}, \end{equation} with an increased midplane density of about $\rho_c = 2.7 \rhoHill$. We find that the inclusion of self-gravity increases the vertical diffusivity by about a factor of 6. With the vertical structure and the midplane pebble to gas ratio we can now determine the Richardson number \citep{Chandrasekhar1961} as a function of the combined pebble and gas density $\rho^* = \rho + \rho_g$ as: \begin{equation} \mathrm{Ri} \equiv \frac{\left({g_z}/{\rho^*}\right)\left(\partial \rho / \partial z\right)}{\left(\partial v_\phi/\partial z\right)^2}, \label{eq:Ri_general} \end{equation} in which for the vertically isothermal ansatz, i.e.\ vertically constant diffusion, we can replace gravity by stratification, $\rho g_z = - (D/\tau)\, \partial \rho / \partial z$, and $\mathrm{Ri}$ reduces to: \begin{equation} \mathrm{Ri} = \frac{\delta}{St} c_s^2 \frac{\rho}{\rho^*} \frac{\left(\partial \ln \rho / \partial z\right)^2}{\left(\partial v_\phi/\partial z\right)^2}. \label{eq:Ri_general1} \end{equation} The azimuthal velocity from the Nakagawa solution \citep{Nakagawa1986} as a function of the dust-to-gas ratio and the radial pressure gradient parameter $\beta$ is $v_\phi = v_{\phi 0} - (\beta/2) (\rho_g/\rho^*) c_s$, so we find \begin{equation} \partial v_\phi/\partial z = \frac{\beta}{2} \frac{\partial \rho^* / \partial z}{\rho^{*}} \frac{\rho_g}{\rho^*} c_s. \end{equation} Thus, the Richardson number is: \begin{equation} \mathrm{Ri} = \frac{\delta}{St} \frac{4}{\beta^2} \left(1 + \varepsilon\right). \label{eq:Ri_general2} \end{equation} In the case where self-gravity dominates, the pebbles are responsible for each ingredient in the Richardson number and there is no dependence on height in this expression.
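The measured layer thickness indeed reproduces the quoted vertical diffusivity. A minimal numeric check; the Stokes number $St = 0.1$ and the box size $L = 10^{-3}\,H$ are assumptions of this sketch, inferred from the values quoted in this section rather than stated inputs:

```python
import math

St = 0.1            # assumed Stokes number
H = 1.0             # gas scale height (code units)
L = 1.0e-3 * H      # box size, inferred from l_cz = 8.9e-5 H = 0.089 L
h_z = 1.31e-4 * H   # measured scale height of the dust layer

# delta_z = St * 9/(2 sqrt(2)) * h_z * L / H^2
delta_z = St * 9.0 / (2.0 * math.sqrt(2.0)) * h_z * L / H**2
print(delta_z)      # ~4.2e-8, as quoted above
```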
Also, in contrast to the non-self-gravitating case \citep{Chiang_2008, Gerbig2020}, the value is proportional to $\varepsilon + 1$ and not $\left(\varepsilon + 1\right)^3$, because here the dust-to-gas ratio defines the vertical gravity. For an average midplane density of $\rho_0 = 2.7 \rhoHill$, we thus find $\mathrm{Ri}(f=1) = 0.3$, between the hypothetical critical Richardson number for KHI of $\mathrm{Ri}=0.25$ and the numerically determined value of $\mathrm{Ri} \approx 1$ \citep{JohansenHenningKlahr2006}, indicating that KHI is likely active here. Nevertheless, without more simulations of self-gravity including SI and KHI it will be difficult to distinguish the roles of the two instabilities in our chosen scenario. Yet, for the question of how much turbulent diffusivity is needed to prevent planetesimal formation, or more precisely, how much mass is needed to overcome a certain level of turbulent diffusion, this distinction is irrelevant. We also found that the radial particle diffusion increases within the emerging self-gravitating dust layer. In contrast to the vertical diffusion value, the new radial diffusion value $\delta_x$ can be measured with the default method of tracking the particle travel distance over time. The new value for the radial particle diffusion is then an order of magnitude stronger than without self-gravity, i.e.\ \begin{equation} \delta_x = \left( {2.38 \pm 1.38} \right) \cdot 10^{-5}. \end{equation} This means that the anisotropy in diffusion that pre-exists in the absence of self-gravity is preserved (see Table\ \ref{tab:simdata0}). \corrected{The radial diffusivity value is now at the same level as for the larger boxes without self-gravity \citep{Johansen2009}, while the vertical diffusion is still about 64 times weaker, corresponding to a factor of 8 in our length-scale estimates.} For our setup at Hill density, i.e.\ for $f = 1$, we find no gravitational collapse (as seen in Figure\ \ref{fig:3d_coll_rhopmax_ts}).
Instead, the gas and pebble mixture develops stronger turbulence and the amplitude of the density fluctuations increases. \corrected{In comparison to the simulations without self-gravity, where the SI formed multiple filaments in the vertical direction (see the right frame in the upper row of Figure\ \ref{fig:3d_prep_simulations}), now a single, almost plane pebble layer is created as a result of pebble self-gravity.} So far our assumptions about a stability criterion seem to find support in the numerical simulation, i.e.\ that if a box is too small in one direction (here the radial) for a critical length to fit into it, $\lcrit > \onehalf L$, then it cannot collapse, despite being at Hill density. But how can we study the critical box size, or the critical mass, from which on our critical length scales, or respectively the unstable wavelengths, would fit into the domain? \begin{figure*} \gridline{\fig{Figure3c.png}{0.9\textwidth}{(c): $\rho_0 = 4\rho_{\rm Hill}$ ; $t = {2.79}{\Omega^{-1}}$} } \gridline{\fig{Figure3d.png}{0.9\textwidth}{(d): $\rho_0 = 8\rho_{\rm Hill}$ ; $t = {2.69}{\Omega^{-1}}$} } \caption{Simulation end-states of the 3D collapse study: collapsing cases. Each column is a projection along a different direction: vertical $=z$ (left), azimuthal $=y$ (middle), and radial $=x$ axis (right). The color bar shows the average dust-to-gas ratio along the projection axis. These two simulations have the same set of parameters as the ones shown in Figure\ \ref{fig:3d_coll_endstatesa}; only the total mass, and thus the strength of self-gravity, is altered via the $f$ parameter.\label{fig:3d_coll_endstates}} \end{figure*} \subsection{Simulations above Hill density} Ideally, we would repeat the simulation from scratch with a larger box; however, this is currently numerically too expensive. We therefore choose to make the critical length-scale smaller by increasing the total mass in the simulations, i.e.\ by decreasing the overall Toomre $\tilde{Q}$.
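The equivalence between rescaling the total mass and enlarging the box can be quantified with one line of arithmetic; a trivial numeric sketch:

```python
# Halving the Toomre parameter at fixed box size is equivalent to doubling
# the total mass, i.e. to doubling the box volume at fixed density.  The
# corresponding growth of the linear box dimension is 2^(1/3):
growth = 2.0 ** (1.0 / 3.0)   # ~1.26
print(f"box dimension grows by {100 * (growth - 1):.0f}%")
```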
Technically, we achieve a decreased Toomre $\tilde{Q}$ by increasing the gravitational constant, which is equivalent to up-scaling the total mass in the box. As seen in, e.g., \citet{Simon2016, Schaefer2017, Gerbig2020}, who also used this method in their numerical studies of planetesimal formation, this procedure has the advantage of neither directly affecting the strength of the SI nor, as can be seen in the Richardson number, the strength of the KHI, because the average dust-to-gas ratio remains unchanged. As our numerical simulation is scale free, decreasing $\tilde{Q}$ by a factor of two is similar to doubling the box \corrected{volume (increasing the box dimension in terms of the critical length, $L/\lcrit$, by $26\%$), especially as long as we are gravity dominated. In the case of pure SI, the diffusivity would increase with an actually larger box size, and we would have to recalculate the now larger $\lcrit$ \citep{KlahrSchreiber2020a}, which would not be possible for collapsing cases. Thus, by keeping $L$ and $\varepsilon_0$ as in the $f=1$ simulation, we argue that the diffusivity does not change too much with increasing $f=2,4,8$.
We indirectly find support for this assumption as the $\hlcrit(f)$ criterion, based on the diffusivity in the $f=1$ case, successfully describes the outcome of the simulations with $f=2,4,8$.} \begin{deluxetable}{ccccccc} \tablecaption{Simulation results for $f=0$ and $f=1$.\label{tab:simdata0}} \tablehead{ \colhead{model} & \colhead{self gravity}& \colhead{$\delta_x$} & \colhead{$\delta_z$} & \colhead{$Q_p$} & \colhead{$l_{c,x}$} & \colhead{$l_{c,z}$} } \startdata \texttt{mod0} & No &$1.9\times 10^{-6}\,^*$ & $7.3\times 10^{-9}\,^*$ & $1.9$ & $1.5$& $0.09$ \\ \texttt{mod1} & Yes &$2.4\times 10^{-5}\,^*$ & $4.2\times 10^{-8}\,^\#$ & $6.9$ & $5.1$ & $0.2$ \enddata \tablecomments{This table collects the simulation results from our two base line models with initially Hill density ($f=1$): (1) the name of the model, (2) gravity switch, (3) and (4) the normalised radial and vertical diffusivities, (5) $Q_p$ the particle Toomre value according to Equation\ \ref{eq:particle_toomre_q}. The following length-scales are given in units of the box size $L$: (6) $l_{c,x}$ and (7) $l_{c,z}$ are the radial and vertical critical lengths. Diffusivities denoted with $^*$ have been measured using particle tracking, those denoted with $^\#$ by measuring a scale height.} \end{deluxetable} In model \texttt{mod2} we increased the mass in the box by doubling the gravitational constant ($f=2$), leading to a smaller Toomre parameter and shorter critical length scales (see Table\ \ref{tab:simdata1}). Due to the short duration of the run, we were not able to determine a new diffusivity, so we assumed the same diffusivity as in the $f=1$ simulation. While both the radial and the vertical length scales became smaller, the particles did not collapse.
\corrected{The highest pebble concentration was $\varepsilon_\mathrm{max} = 400 \varepsilon_\mathrm{0} = 800 \varepsilon_\mathrm{Hill}$, thus the contraction time would have been $\tauC = 5 \times 10^{-3} \Torb$, and even accounting for the average maximum pebble load of about 100 $\varepsilon_\mathrm{Hill}$ we ran the simulation effectively for 34 contraction times without a collapse happening. Thus we deem \texttt{mod2}, which did not collapse over many contraction times, as stable.} \corrected{Interestingly, the particle filament from the $f=1$ case is now warped in the azimuthal direction, reminiscent of a KHI shape (as seen in the $y-z$ plot), in a similar way as found in simulations including vertical gravity \citep{Gerbig2020}, an indication of the modified SI and especially of the KHI modes, as suggested by the Richardson number.} Only after increasing the mass to $f=4$ (\texttt{mod4}) does the pebble cloud start to fragment, which also happened in the case of $f=8$ (\texttt{mod8}). In both cases we started from the same gravoturbulent snapshot based on $f=1$, to save computational effort and to mimic a gradual increase in mass load, allowing the system to seek a new stable state if possible. If one were to start with self-gravity at these large dust masses in a laminar disk, collapse could occur before turbulence is triggered. \corrected{In Figure\ \ref{fig:3d_coll_endstatesa} we show the end states for the two simulations including self-gravity that did not collapse (\texttt{mod1}, \texttt{mod2}), and in Figure\ \ref{fig:3d_coll_endstates} those that did collapse (\texttt{mod4}, \texttt{mod8}). We plot the averaged dust-to-gas ratio along the line of sight.
Simulations with $f\leq2$ show no fragments: $f=1$ shows a single prominent elongated cloud, diagonally located in the $x-y$ plane (as seen from the top view), and $f=2$ shows two distinct elongated clouds (seen best when comparing the top view $x-y$ with the side view $y-z$), which do not contract further despite strong self-gravity. The two simulations with higher total mass each collapsed into a single planetesimal, though the run with $f=8$ shows some additional overdensities for which it is unclear whether they would also collapse if we could continue the simulation.} \corrected{All these filaments are tilted in the $x-y$ plane in the direction opposite to the shear. Whereas \texttt{mod0} (see Figure\ \ref{fig:3d_prep_simulations}) shows the typical trailing wave behavior in the filaments created by the streaming instability, we now see the effect of the self-gravity. At Hill density and above, the pebble perturbations will not get sheared out by the tidal forces from the sun. This enables the formation of trailing and leading filaments alike, as can also be seen in simulations of gas disks around young stars in which gravitationally bound structures emerge, i.e.\ planet formation via gravitational fragmentation \citep{Durisen2007}.} We had to stop the still ongoing collapse simulations when the density started to diverge, limiting our time-step. For the further evolution \corrected{of the contracting pebble clouds into planetesimals} we refer to simulations like those presented by \citet{Nesvorny2019}, who recently studied the formation of binary planetesimals from collapsing pebble clouds. In neither the $f=4$ nor the $f=8$ case did the radial critical length $\hlcritx$ fit into the box (see Table\ \ref{tab:simdata1}). But in all simulations from $f=1$ to $f=4$ the vertical length-scale $\hlcritz$ easily fit into the box.
So neither asking that at least one direction be gravitationally stable nor asking that all directions be gravitationally unstable seems to be a good criterion for collapse. Likewise, the Toomre parameter that we calculated for each run is not an adequate predictor. As defined above (see Equation\ \ref{eq:QF}), $Q_p$ is independent of the vertical diffusion in our simulation setup, as we have a fixed surface density of pebbles. Thus $Q_p$, being set by the stronger radial diffusion, only falls below unity for the highest mass case (\texttt{mod8}), $Q_p = 0.86$, in which the fastest growing wavelength (see Equation\ \ref{eq:lambda_fgm}) would be $\lambda_{\rm fgm} \approx 0.01 H = 10 L$, certainly not fitting into our domain. Neither did the smallest unstable wavelength $\lambda_{\rm min} = 0.0067 H = 6.7 L$ fit into the domain, which indicates how large a box would have to be in order to study the classical gravitational instability in a simulation. \corrected{Those scales are covered and resolved in simulations of large scale planetesimal formation, yet, as the diffusion was not measured in \citet{Li2019} for the large chosen Stokes number $\stokes = 2$, we cannot determine what actual value their Toomre parameter would have obtained.
We can only speculate that $\lambda_{\rm fgm}^2$ may define the mass of the largest or most abundant planetesimals formed in large scale simulations, if we extrapolate from studies of gas disk fragmentation \citep{Kratter2010}, but this still remains to be shown.} \begin{deluxetable*}{cccccccccccc} \centerwidetable \tabletypesize{\scriptsize} \tablecaption{Simulation results including self-gravity\label{tab:simdata1}} \tablehead{ \colhead{model($f$)} & \colhead{$Q_p$} & \colhead{$\hlcritx$} & \colhead{$\hlcritz$} & \colhead{$\hlcrit(\rho_0)$} & \colhead{$a_{\rm box}[km]$} & \colhead{$a_{c}[km]$}& \colhead{$a_{\rm Jeans}[km]$} &\colhead{$\rho_{\rm c}/\rho_0$} & \colhead{$\hlcrit(\rho_c)$} & \colhead{$\rho_{\rm max}/\rho_0$}&\colhead{Collapse?}} \startdata (1) & (2) & (3) & (4) & (5) &(6) & (7) &(8)&(9)&(10)&(11) &(12)\\ \hline \texttt{mod1} & $6.9$ & $5.1$ & $0.2$ & $1$ & $42$ & $71$ & $165$&$52000$ &$0.004$&$90$ & No\\ \texttt{mod2} & $3.4$ & $3.6$ & $0.15$ & $0.74$ & $53$ & $63$ & $147$&$6500$ &$0.009$ &$300$ & No\\ \texttt{mod4} & $1.7$ & $2.6$ & $0.1$ & $0.53$ & $66$ & $56$ & $131$&$800$ &$0.018$&$>1000$ & Yes\\ \texttt{mod8} & $0.8$ & $1.8$ & $0.08$ & $0.37$ & $84$ & $50$ & $117$&$102$ &$0.036$&$>1000$ & Yes \enddata \tablecomments{This table collects all simulation results including self-gravity for different values of the initial average pebble density $f = \rho_0/\rhoHill$ in units of the Hill density. The columns are: (1) the name of the model, with the number being equal to $f$, (2) $Q_p$ the particle Toomre value according to Equation\ \ref{eq:particle_toomre_q}. The following length-scales are given in units of the box-size $L$: (3) $\hlcritx$ and (4) $\hlcritz$ are the radial and vertical critical lengths. (5) $\hlcrit(\rho_0)$ is the dimensional average critical length for the mean density in the box. Note that $\hlcrit(\rhoHill) = \lcrit$.
(6) $a_{\rm box}[km]$ gives the actual pebble mass in the simulation, given as an equivalent compressed diameter (the conversion from mass $m$ to diameter $a[km]$ needs the definition of a nebula model, in particular here a local temperature $H/R = 0.03$, the mass of the central object $M_\star = M_\odot$, and a solid density for a collapsed body $\rhoSolid = 1 \mathrm{g}/\mathrm{cm}^3$), (7) $a_c$ is the equivalent diameter for pebble clouds of mass $m_c$, i.e.\ for mean density $\rho_0$, (8) $a_{\rm Jeans}[km]$ is the equivalent diameter for the mass of a contracted B.E.\ sphere (of Jeans mass and for density at the surface $\rho_\circ = f \rhoHill$), (9) $\rho_{\rm c}/\rho_0$ is the central density peak with respect to the initial density needed to make a B.E.\ sphere of mass $a_{\rm box}[km]$ unstable, which results in (10) a critical length for the sphere of $\hlcrit$ for the given central density. (11) gives the maximum density fluctuation achieved in the individual simulations, $\rho_{\rm max}$, compared to the initial density, and the last column (12) states whether collapse occurred. Collapse happens when the critical length fits about twice into the domain ($\hlcrit \lessapprox \onehalf L$). The central compression needed for collapse is then less than 1000 and can easily be resolved on the grid. The size predicted based on just diffusion and Hill density is $a_c = 71$ km, and falls squarely between the two collapsing models. Note that other nebula parameters, foremost $\varepsilon_\mathrm{Hill}$ and $H/R$, will lead to other equivalent sizes.} \end{deluxetable*} For the case $f=4$ (\texttt{mod4}), the Toomre value exceeds unity, $Q_p = 1.7$, and still the pebble cloud fragments. So as a general outcome, linear Toomre modes are not necessarily indicative of gravitational collapse for our pebble clouds.
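The failure of the Toomre criterion, and the success of the length-scale criterion $\hlcrit \lessapprox \onehalf L$, can be read off Table\ \ref{tab:simdata1} directly. A minimal sketch; the values are copied from the table, and the threshold of $0.55\,L$ is our assumption, encoding the slack in the "lessapprox":

```python
# (name, f, Q_p, hl_crit(rho_0) in units of L, collapsed?) from Table 2
models = [
    ("mod1", 1, 6.9, 1.00, False),
    ("mod2", 2, 3.4, 0.74, False),
    ("mod4", 4, 1.7, 0.53, True),
    ("mod8", 8, 0.8, 0.37, True),
]

for name, f, Qp, hl, collapsed in models:
    toomre_says = Qp < 1.0     # classical linear criterion
    length_says = hl < 0.55    # hl_crit ~< L/2, with some slack
    print(name, toomre_says == collapsed, length_says == collapsed)

# Toomre misclassifies mod4 (Q_p = 1.7 > 1, yet it collapsed); the
# length-scale criterion matches the collapse column for every model.
```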
So we probably have to consider the three-dimensional shape of the pebble cloud as it results from anisotropic diffusion and then see what determines the stability of this body. In the case of isotropic pressure, or in our case isotropic diffusion, we would expect a spherical structure to evolve, in which gravity and diffusion may counteract. In the case of non-isotropic diffusion one would then expect an ellipsoidal structure. This idea has been championed in the context of elliptical galaxies with anisotropic r.m.s.\ velocities \citep{1981gask.book.....M} and even goes back to \citet{Schwarzschild1908}. As in our simulations diffusion and gravity define one length per dimension, which are our $\hlcritx$, $\hlcritz$ and a so far unknown $\hlcrity$, we can construct an ellipsoid of the following shape: \begin{equation} 1 = \frac{x^2}{\hlcritx^2} + \frac{y^2}{\hlcrity^2}+ \frac{z^2}{\hlcritz^2}, \end{equation} with the volume \begin{equation} V = \frac{4 \pi}{3} \hlcritx \hlcrity \hlcritz = \frac{4 \pi}{3} \hlcrit^3. \end{equation} Thus an elliptic pebble cloud has the same mass as a sphere of radius $\hlcrit$, which would be the result of the individual diffusivities combined in the following way: \begin{equation} \delta = \sqrt[3]{\delta_x \delta_y \delta_z} \approx \sqrt{\delta_x \delta_z}, \end{equation} with the assumption $\delta_y \approx \sqrt{\delta_x \delta_z}$, for lack of ways to obtain this value otherwise. For the diffusivities as measured for $f=1$ this results in a value of $\delta = 10^{-6}$. In that case, the resulting critical length $\hlcrit$ roughly fits into our box for the collapsing cases $f=4$ and $f=8$, but not in the stable cases $f=2$ and $f=1$ (see Table \ref{tab:simdata1}).
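The combined diffusivity for the $f=1$ run follows directly from the measured components. A quick numeric check; the azimuthal $\delta_y \approx \sqrt{\delta_x \delta_z}$ is the assumption stated above:

```python
import math

# Measured for f = 1 with self-gravity (Table 1)
delta_x = 2.38e-5
delta_z = 4.2e-8

# With the assumption delta_y = sqrt(delta_x * delta_z), the geometric
# mean (delta_x * delta_y * delta_z)^(1/3) reduces to sqrt(delta_x * delta_z):
delta_y = math.sqrt(delta_x * delta_z)
delta_mean = (delta_x * delta_y * delta_z) ** (1.0 / 3.0)
print(delta_mean)                        # ~1.0e-6, as quoted in the text
print(math.sqrt(delta_x * delta_z))      # identical by construction
```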
Furthermore, if we calculate the mass of this ellipsoid, $m_c$ (see Equation\ \ref{eq:mass}), and express the result as an equivalent radius $a_c$ for the individual simulations (see Table \ref{tab:simdata1}), then we notice that those cases collapsed in which the total amount of pebbles, expressed as the equivalent size $a_\textrm{box}$, is clearly larger than the critical size $a_c$. \corrected{This is a direct confirmation of our collapse criterion $m > m_c$ as defined in Equation\ \ref{eq:mass_simple}.} \corrected{It may seem ad hoc that we base our collapse criterion on the average density of the simulations and on the box size, when on the other hand clearly much smaller and much denser structures do form in the simulations. This can be justified by checking how the radius $\lcrit$ of an unstable pebble cloud (see Equation\ \ref{eq:mass_simple}) scales with its density in units of Hill density, i.e.\ $f$:} \begin{equation} \lcrit(f) \sim f^{-\frac{1}{2}}, \end{equation} \corrected{thus a pebble concentration 10 times smaller than $\lcrit$ has to be on average more than 100 times denser ($f = 100$) than the average density to fulfill the collapse criterion. More importantly, the critical mass in a clump scales as} \begin{equation} m_c(f) \sim \lcrit(f)^3 f = f^{-\frac{1}{2}}, \label{eq:mcf} \end{equation} \corrected{and thus the resulting compressed diameter scales as} \begin{equation} a_c(f) \sim f^{-\frac{1}{6}}. \label{eq:acf} \end{equation} \corrected{In other words, unstable fragments smaller than the box size at several times the Hill density do not represent a significantly smaller compressed size than the box at Hill density itself.} \corrected{Following our numerical simulations, it seems difficult to create such massive and compact clumps on small scales (more than $10\%$ of the pebbles in $0.1\%$ of the volume) if the large scales are not already unstable.
Here we have not yet considered that the actual pebble concentrations are not of constant density, but have an internal stratification, which we will investigate in the next step.} \section{A Bonnor-Ebert solution for pebble clouds}\label{subsec:3-4} The original derivation of the $l_c < \onehalf L$ criterion stemmed from a time scale argument, namely that contraction is faster than diffusion. The assumption was a sphere of constant density with sharp cut-off boundaries. A different approach to derive a critical mass would be to study the local equilibrium between gravity and diffusion, in a similar fashion as we did for the midplane layer of the particles. But since collapse needs more than one dimension, we are also motivated to ask what the three dimensional shape and profile of a self-gravitating pebble cloud would be under the influence of internal turbulent diffusion. As above, we replace our ellipsoid with a spheroid of equivalent mass and the same central density, yet with the spatially averaged diffusivity. This can be studied using Equation \ref{eq:plane}, but now in spherical coordinates, assuming spherical symmetry, where $r$ is the distance from the clump center, which leads to the isothermal Lane-Emden equation: \begin{equation} \rho = - \frac{D}{\tau} \frac{1}{4 \pi G } \frac{1}{r^2}\partial_r r^2 \partial_r\ln{\rho}. \label{eq:sphere} \end{equation} One can rewrite this in terms of the critical length $\hlcrit$, which is equivalent to the normalisation value for the dimensionless radius of a Bonnor-Ebert sphere (see Equation\ 9.6 in \citet{StahlerPalla2008}): \begin{equation} \frac{\rho}{\rho_c} = - \frac{\hlcrit^2}{r^2} \partial_r r^2 \partial_r\ln{\rho}. \label{eq:BE} \end{equation} The resulting radial profile is the Bonnor-Ebert (BE) solution, which can be found by solving the Lane-Emden equation numerically for an isothermal equation of state with the characteristic radius $\hlcrit$, following \citet{StahlerPalla2008}.
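Such a numerical integration is straightforward. A minimal sketch, integrating the dimensionless isothermal Lane-Emden equation $(1/\xi^2)\,\mathrm{d}/\mathrm{d}\xi\,(\xi^2\,\mathrm{d}\psi/\mathrm{d}\xi) = e^{-\psi}$ with a simple RK4 stepper; this illustrates how the marginally stable density contrast of about 14 at dimensionless radius $\xi \approx 6.5$ arises, and is not the production solver used for the figures:

```python
import math

def lane_emden_isothermal(xi_max, h=1e-3):
    """Integrate the isothermal Lane-Emden equation with RK4 and return the
    density contrast rho_c / rho(xi_max) = exp(psi(xi_max))."""
    def rhs(xi, psi, u):                 # u = xi^2 * dpsi/dxi
        return u / xi**2, xi**2 * math.exp(-psi)
    xi, psi, u = 1e-6, 0.0, 0.0          # regular solution: psi(0) = psi'(0) = 0
    while xi < xi_max:
        k1p, k1u = rhs(xi, psi, u)
        k2p, k2u = rhs(xi + h/2, psi + h/2*k1p, u + h/2*k1u)
        k3p, k3u = rhs(xi + h/2, psi + h/2*k2p, u + h/2*k2u)
        k4p, k4u = rhs(xi + h, psi + h*k3p, u + h*k3u)
        psi += h/6 * (k1p + 2*k2p + 2*k3p + k4p)
        u += h/6 * (k1u + 2*k2u + 2*k3u + k4u)
        xi += h
    return math.exp(psi)

print(lane_emden_isothermal(6.5))   # ~14, the marginally stable contrast
```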
Obviously this BE sphere has the same characteristic scale $\hlcrit$ as the plane layer, which is the same critical length $\lcrit$ from the time scale criterion for $\rho_c=\rhoHill$. For a given temperature, or in our case for a given diffusivity and Stokes-number combination, a family of different solutions is possible, depending only on the ratio of the sphere's central density $\rho_c$ to the Hill density. If the radially decreasing density falls below $\rho_\circ = \rho_c/14.1$, then the cloud will be linearly unstable to collapse. This maximal density ratio of $\rho_c/\rho_\circ = 14.1$ has to be determined numerically and is a general property of the isothermal BE solutions. The mass of a BE sphere is given in terms of the numerically obtained dimensionless cloud mass $m$, a function of the density contrast, and the pressure at the surface $P_\circ$: \begin{equation} M_{\rm BE} = \frac{m a^4}{P_\circ^{1/2} G^{3/2}} = m \frac{\left(\delta/St\right)^{3/2} c_s^3}{\rho_\circ^{1/2} G^{3/2}}, \label{eq:BE_Mass} \end{equation} where we use that our pressure is $P_\circ = a^2 \rho_\circ$ with the equivalent speed of sound $a = \left(\delta/St\right)^{1/2} c_s$ (see Equation\ \ref{eq:asound}). The critical dimensionless cloud mass was numerically determined as $m_1=1.18$, which then defines the critical BE mass, also known as the Jeans mass \citep{StahlerPalla2008}. For the surface reaching Hill density, $\rho_\circ = \rhoHill$, this leads to \begin{equation} M_{\rm Jeans} = m_1 \sqrt{\frac{4 \pi}{9}} \left(\frac{\delta}{St}\right)^{3/2} \left(\frac{H}{R}\right)^3 \left(\frac{\rhoHill}{\rho_\circ}\right)^{1/2} M_\sun, \label{eq:BE_Mass3} \end{equation} which already shows the same functional dependence on $H/R$ and $\delta / St$ as Equation\ \ref{eq:mass} for $m_c$, which was derived for a sphere of constant density at the Hill value.
Expressed in units of $m_c$ this yields \begin{equation} M_{\rm Jeans} = 12.5 \left(\frac{\rhoHill}{\rho_\circ}\right)^{1/2} m_c. \label{eq:BE_MassX} \end{equation} This means the equivalent compressed size $a_c$ would be 2.3 times larger for a BE \corrected{solution with a central density of $\rho_c = 14.1 \rhoHill$ when compared to a sphere of constant Hill density}. \corrected{The radius of the BE solution beyond which it is unstable is $l_\circ = 6.5 \hlcrit$ \citep{StahlerPalla2008}. At this distance the local density drops to $\rho(l_\circ) = \rho_\circ$. Thus the minimum extent of the BE solution required to become unstable is $l \ge l_\circ$. But then the BE sphere (see Equation\ \ref{eq:BE_MassX}) would have a size of $l_\circ = 1.7 \lcrit$ and thus not fit into boxes of $L = 2 \lcrit$.} \corrected{With increasing central density, both the size and the mass of the BE solution decrease. A BE sphere that fits perfectly inside our constant density cloud ($l_\circ = \lcrit$) would then need a central density of $\rho_c = 42 \rhoHill$ and as a result have a mass of about $7.2 m_c$, correspondingly still about twice as large in equivalent size as described by $a_c$. Therefore, the unstable BE solutions in our numerical experiments with a limited mass reservoir will need even higher central densities.} If we compare the Jeans masses (expressed as equivalent sizes $a_\mathrm{Jeans}$) for our four simulations $f=1$ to $f=8$ (see Table \ref{tab:simdata1}) with the mass in the box, $m_\mathrm{box} = f \rhoHill L^3$ and $a_\mathrm{box} = \left({m_\mathrm{box}}/{\rhoSolid}\right)^{1/3}$, we see that it is always larger than the total mass of available pebbles in our simulation domain ($a_{\rm Jeans} > a_{\rm Box}$). But if we express the Jeans mass as a function of its central density, then we can calculate a critical density at which a new Jeans mass $\hat{M}_{\rm Jeans}$ equals the mass of the pebbles in the box.
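Equating the available pebble mass with a Jeans mass of matching central density predicts the over-densities required for collapse. A numeric sketch using the equivalent sizes of Table \ref{tab:simdata1}; the relation $\rho_c = (47\,m_c/M_{\rm Box})^2\,\rhoHill$ used here is the one derived next in the text, and masses are converted from the tabulated equivalent diameters via $m \propto a^3$:

```python
# Equivalent sizes in km from Table 2; m ~ a^3 at fixed solid density
a_c_hill = 71.0     # critical cloud at Hill density (f = 1)
models = [          # (f, a_box, rho_c / rho_0 as listed in the table)
    (1, 42.0, 52000.0),
    (2, 53.0, 6500.0),
    (4, 66.0, 800.0),
    (8, 84.0, 102.0),
]

for f, a_box, table_value in models:
    m_c_over_box = (a_c_hill / a_box) ** 3         # m_c / M_Box
    rho_c_over_hill = (47.0 * m_c_over_box) ** 2   # central density for instability
    rho_c_over_rho0 = rho_c_over_hill / f          # rho_0 = f * rho_Hill
    print(f, round(rho_c_over_rho0), table_value)
```

The predicted over-densities reproduce the tabulated $\rho_c/\rho_0$ column to within about ten percent for all four models.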
Replacing the density at the surface $\rho_\circ$ with the central density $\rho_c = 14.1 \rho_\circ$ we find: \begin{equation} \hat{M}_{\rm Jeans} = 47 \left(\frac{\rhoHill}{\rho_c}\right)^{1/2} m_c. \label{eq:BE_Mass33} \end{equation} If we now set $M_{\rm Box} = \hat{M}_{\rm Jeans}$ we can calculate the central density necessary to make this cloud unstable, \begin{equation} \rho_c = \left(\frac{47 m_c}{M_{\rm Box}}\right)^2 \rhoHill, \label{eq:BE_Mass4} \end{equation} and compare this value to the average density in the box $\rho_0$. For the cases $f=1$ and $f=2$, over-densities $\rho_c / \rho_0$ of order $10^4$ are needed to make a BE sphere of the given mass $M_{\rm Box}$ collapse. The necessary resolution for the central peak at size $\hlcrit(\rho_c)$ would hardly be reached. Yet for the models $f=4$ and $f=8$ a concentration of only about 1000 times the initial density is needed to trigger collapse, and these are the peaks in pebble concentration that we observe in the collapsing runs (see Figure\ \ref{fig:3d_coll_rhopmax_ts}) when converting the plotted $\varepsilon / \varepsilon_0$ into $\varepsilon / \varepsilon_\textrm{Hill}$ via \begin{equation} \varepsilon = f \varepsilon_0. \label{eq:BE_Mass5} \end{equation} Based on Equation\ \ref{eq:BE_Mass33} we can now see that one needs to increase the central density fluctuation by $10^6$ to decrease the Jeans mass by a factor of 1000 and thus produce an equivalent size $a_c$ that is 10 times smaller. This is what makes it so hard to form small planetesimals. \corrected{We can illustrate this effect in Figure \ref{fig:4}. We first calculate a BE solution for $\hat{M}_{\rm Jeans} = m_c$, i.e.\ the mass as predicted by the time scale argument. It then follows that $\hlcrit = \lcrit / 47$ with a central density of $\rho_c = 47^2 \rhoHill = 2209 \rhoHill$. This BE sphere then has a size of $l_0 = 6.5 \hlcrit = 0.14 \lcrit$ and contains the same mass as a sphere of constant density $\rhoHill$ and radius $\lcrit$.
In other words, if we reshape our pebble cloud to create a central density increase of less than $2 \times 10^3$, then it will still be stabilized by diffusion. We also add a profile (dotted line) where the central density peak is 10 times lower than the critical value. This BE sphere would then also fit into our original constant density cloud, but now the BE sphere would be about three times more massive. A weaker density bump can only be unstable for larger and more massive pebble clumps. We can also show how strong the density peak would have to be ($\rho_c = 2.2 \times 10^4 \rhoHill$) in order to make a cloud of three times lower mass unstable (dashed line). Finally we also convert those cloud masses into equivalent radii and find that varying the central density by a factor of 10 changes the equivalent diameter only by a factor of $1.4$ with respect to the prediction based on a constant density sphere.} \corrected{This strong correlation between the available pebble mass and the necessary over-density for collapse is the reason that our simple time-scale based argument for the critical mass of a constant density sphere at Hill level holds even for numerical simulations with density fluctuations of several thousand and associated length-scales of less than $10\%$ of the simulation domain.} \begin{figure*} \centering \includegraphics{Figure4.pdf} \caption{Bonnor-Ebert solutions for pebble clouds. The solid lines depict a solution that has the same mass $m_c$ as a constant density pebble cloud at $\rhoHill$ of radius $\lcrit$ (depicted as dash-dotted line in the left figure). The x-axis is always given in units of that critical length. Left plot: density profile. The pebble density decreases to the critical value $\rho_\circ = \rho_c/14.1$ within $0.14 \lcrit$. For a 10 times larger (smaller) central density, the profile gets steeper (shallower), as plotted by the dashed (dotted) curves. In the middle plot we show the enclosed mass of the BE solutions.
The fiducial case (solid line) matches the desired mass of the constant density solution $m_c$, but the lower density case (dotted line) exceeds the mass by a factor of three and the increased density case (dashed line) would allow for three times smaller masses to collapse. In the right plot we add the equivalent sizes for the three different central density cases and find all to lie within a narrow range around the nominal size $a_c$.} \label{fig:4} \end{figure*} In Table 2 we summarise the results of our simulations. \corrected{All models cover the same volume and the same solid to gas ratio. Therefore the strength of SI will also be similar in these simulations, even though we are not able to determine actual diffusivities from the models \texttt{mod2}-\texttt{mod8}. Therefore the assumed diffusivity is still the one from \texttt{mod2}. Based on those diffusivities (see Table~\ref{tab:simdata0}) we determine the pebble Toomre value and the critical length-scales $\hlcrit,\hlcritx,\hlcritz$ for the increased average pebble density. We translate the pebble mass in the simulation to an equivalent compressed mass by assuming a central object of solar mass, a local temperature equivalent to $H/R = 0.03$ and a compressed density for planetesimals of $\rhoSolid \approx 1 \mathrm{g}/\mathrm{cm}^3$.} \corrected{We also derive the predicted critical equivalent size $a_c$ of the pebble cloud for the measured diffusivities as well as the associated equivalent size for a Bonnor-Ebert solution with Hill density at its outer edge. We find that those models in which the critical length fits into the simulation box, $\hlcrit \lessapprox \onehalf L$ (\texttt{mod4}, \texttt{mod8}), produce a collapse. In those cases, the density spike of an associated BE sphere made of all available pebbles in the box also needs an amplitude of less than $1000$ with respect to the initial pebble density.
Such a compression to a scale corresponding to $10\%$ of the box size is easily achieved in the simulations and also still well resolved.} \section{Summary, Conclusion and Outlook} \label{sec:4} \corrected{In this paper we tested our criterion for the gravitational collapse of a pebble cloud with internal turbulent diffusion. This criterion, as derived in \citet{KlahrSchreiber2020a}, defines a minimum mass $m_c$ for which the contraction time would be faster than turbulent diffusion. We find that for a given value of diffusion, pebble clouds exceeding this mass can collapse, whereas lower-mass clouds will be dispersed.} \corrected{This collapse criterion is different from asking for the necessary pebble load (``metallicity'' or local pebble-to-gas surface density) in order to trigger streaming instability and planetesimal formation in the first place, as we did in \citet{Gerbig2020}. In that paper we were approaching planetesimal formation from the large scales, asking for sedimentation in the presence of turbulent diffusion sufficient to allow for the formation of regions exceeding the Hill density. In \citet{KlahrSchreiber2020a} and the present paper, by contrast, we ask whether all regions of Hill density will automatically collapse into planetesimals or whether there is a mass threshold, like a Jeans criterion.} \corrected{In the present paper we tested the collapse criterion} \begin{equation} m > m_{c} = \frac{4 \pi}{3} \lcrit^3 \rho_{\rm Hill} = \frac{1}{9} \left(\frac{\delta}{\St}\right)^{\frac{3}{2}} \left(\frac{H}{R}\right)^3 M_\sun, \label{eq:massrep} \end{equation} \corrected{in three-dimensional simulations and found that it successfully predicts which models are too low in mass to produce a collapse.} \corrected{The predicted equivalent sizes $a_c$ of an unstable pebble cloud in the solar nebula or any protoplanetary disk are set by the strength of the streaming instability, which in turn depends mostly on the local pebble-to-gas ratio when reaching Hill density.
The gas mass of a proto-planetary disk is therefore responsible for the resulting planetesimal sizes. In the present paper we have chosen a parameter set in particle size, average pebble to gas ratio, gas mass, and pressure gradient, suited to test our collapse criterion $m > m_c$, and we find diameters of $a_c = 70$ km. Other parameters in terms of local pressure scale height $H/R$, pebble sizes, local gas density and pressure gradient will lead to a wide range of sizes. Nevertheless, for the solar nebula model as derived in \citet{Lenz2020} we find equivalent diameters for the pebble clouds of $60 - 120$ km at early times, and values as low as $6 - 12$ km as the nebula disperses.} Our simulations focused on the interaction of streaming instability, Kelvin-Helmholtz instability and self-gravity on the scales of planetesimal formation. Therefore we simulated only a small section of the disk and ignored vertical gravity and effects of large scale turbulence on small scales (as argued for in \citet{KlahrSchreiber2020a}). \corrected{In agreement with simulations using larger boxes by \citet{JohansenYoudin2007} and \citet{Schreiber2018}, we find that diffusion by SI is anisotropic. Radial diffusion at small scales is about an order of magnitude stronger than the vertical diffusion, already without self-gravity. But for the first time we could show how diffusivity (especially in the radial direction) changes after the inclusion of self-gravity.} The turbulence and diffusion in the self-gravity case are still anisotropic and in both directions significantly larger than in the case without self-gravity. We attribute this increase to a modified streaming instability and to \corrected{potentially} active Kelvin-Helmholtz instability because of vertical self-sedimentation. We measure a Richardson number of $\mathrm{Ri} = 0.3$, well within the possible regime for KHI.
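To make the size prediction of Equation \ref{eq:massrep} concrete, the following sketch evaluates $m_c$ and the equivalent compressed diameter for the parameters quoted in the text ($H/R = 0.03$, solar-mass central object, $\rhoSolid = 1\,\mathrm{g\,cm^{-3}}$). The value $\delta/\St = 10^{-5}$ is an assumption chosen purely for illustration (the measured diffusivities of the actual runs are listed in the tables); with it, the formula returns a diameter of the order of the $70$ km quoted above.

```python
import math

M_sun = 1.989e33          # g
rho_solid = 1.0           # g/cm^3, compressed planetesimal density (from text)
H_over_R = 0.03           # local disk aspect ratio (from text)
delta_over_St = 1e-5      # ASSUMED illustrative diffusion-to-Stokes ratio

# m_c = (1/9) (delta/St)^(3/2) (H/R)^3 M_sun, Equation (eq:massrep)
m_c = (1.0 / 9.0) * delta_over_St ** 1.5 * H_over_R ** 3 * M_sun   # grams

# equivalent compressed diameter from m_c = (4 pi / 3) rho_solid (d_c / 2)^3
d_c_km = 2.0 * (3.0 * m_c / (4.0 * math.pi * rho_solid)) ** (1.0 / 3.0) / 1e5
```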
\corrected{Our results provide an explanation for the turnover in the planetesimal size distribution towards small objects ($a_c < 100 \mathrm{km}$) as found in global simulations \citep{Johansen2009, Simon2016, Simon2017, Abod2018}, for which unfortunately the diffusion was not determined. Future simulations will have to clarify the range of diffusivities that can be expected for realistic pebble sizes and pebble size distributions \citep{Schaffer2018}.} \corrected{The highest resolution studies of SI and planetesimal formation to our knowledge are those by \citet{Li2019}, who found at a resolution of $\delta x = 1.9 \times 10 ^{-4} H$ a turnover of the size distribution at 100 km diameter for $\stokes = 2$ particles. As diffusivity (especially in the radial direction) was not measured in these runs, a determination of the critical length scales, and thus an application of our pebble cloud collapse criterion, is not possible. Yet, if we use diffusion measurements from \citet{JohansenYoudin2007} and assume that larger particles produce stronger diffusion and thus the ratio of $\delta/\stokes$ may remain relatively constant, we would only have to look at the dust to gas ratio at Hill density in their simulation for the chosen constant of gravity in code units $\Gmod = 0.05$ to make a guess at the expected critical length scales. Following \citet{Li2019}, the Hill density in their simulation is determined by the constant of gravity in code units as:} \begin{equation} \varepsilon_\mathrm{Hill} = \frac{9}{\Gmod} = 180, \end{equation} \corrected{for which our predicted equivalent diameter would be $a_c = 25$ km. Yet it has been reported that the inclusion of vertical gravity can increase diffusivity; thus, if the turnover corresponds to $a_c = 100$ km, this would require a radial diffusion 16 times stronger than in the unstratified case of \citet{JohansenYoudin2007}.
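The factor of 16 follows from the scaling implied by Equation \ref{eq:massrep}: $m_c \propto (\delta/\St)^{3/2}$ and $a_c \propto m_c^{1/3}$, hence $a_c \propto (\delta/\St)^{1/2}$. A one-line check of this and of the Hill density above (Python for illustration):

```python
# Hill density in code units for the Li et al. (2019) setup:
G_code = 0.05
eps_Hill = 9.0 / G_code                 # = 180

# a_c ∝ (delta/St)^(1/2), so moving a_c from 25 km to 100 km needs
# a diffusion (100/25)^2 = 16 times stronger.
diffusion_boost = (100.0 / 25.0) ** 2
```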
To clearly relate the turnover size to the diffusion-limited size and thus the strength of turbulent diffusion, the measurement of the diffusivities in such large-scale simulations seems unavoidable.} \corrected{We derived a novel Toomre value for the self-gravitating pebble sub-disk under turbulent diffusion. We could show that diffusion leads to a pebble sound speed $a$ that is not the r.m.s.\ speed $v_\mathrm{rms}$ of pebbles, but represents the ``pressure''-like resistance of pebbles against compression, driven by diffusion:} \begin{equation} P / \rho = a^2 = \frac{D}{\tauS c_s^2 + D} c_s^2. \end{equation} \corrected{Only in the special case that the correlation time $\tau_\mathrm{t}$ of the sufficiently subsonic turbulence is equal to the stopping time $\tauS$ of the particles would one find $a \approx v_\mathrm{rms}$, since with $D \approx v_\mathrm{rms}^2 \tau_\mathrm{t}$ \citep{YoudinLithwick2007} we get} \begin{equation} a^2 = v_\mathrm{rms}^2 \frac{\tau_\mathrm{t}}{\tauS} \frac{1}{1 + \frac{v_\mathrm{rms}^2}{c_s^2} \frac{\tau_\mathrm{t}}{\tauS}} . \end{equation} \corrected{All our analyses were based on the assumption that the gas is quasi incompressible. In fact, with our box size of $L = 0.001 H$ the sound crossing time is the shortest time scale in the system, which made our simulations numerically very expensive. An incompressible code would have performed much better, but we had none available that also incorporated self-gravitating, friction-coupled particles.
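That the second expression is just the first one rewritten with $D = v_\mathrm{rms}^2 \tau_\mathrm{t}$ can be checked numerically (arbitrary subsonic test values, Python for illustration):

```python
# Check that  a^2 = D c_s^2 / (tau_s c_s^2 + D)  equals the v_rms form
# when D = v_rms^2 * tau_t (Youdin & Lithwick 2007 closure).
c_s, tau_s, tau_t, v_rms = 1.0, 0.3, 0.1, 0.2   # arbitrary test values

D = v_rms ** 2 * tau_t
a2_direct = D * c_s ** 2 / (tau_s * c_s ** 2 + D)
a2_vrms = v_rms ** 2 * (tau_t / tau_s) / (
    1.0 + (v_rms / c_s) ** 2 * (tau_t / tau_s)
)
```

Both expressions agree to machine precision for any choice of the parameters.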
We find the gas density to fluctuate on a $1\%$ level, which is close enough to incompressibility.} \corrected{The derived Toomre parameter $Q_p$ for the pebbles is related to the Toomre parameter of the gas disk $Q_g$ as \begin{equation} Q_p = \sqrt{\frac{\delta_x}{\stokes}} \frac{Q_g}{Z}, \end{equation} which is almost identical to the stability parameter as derived in \citet{Gerbig2020}: \begin{equation} \tilde Q_p = \frac{3}{2} \sqrt{\frac{\delta_z}{\stokes}} \frac{Q_g}{Z}. \end{equation} The difference lies in using the radial diffusion $\delta_x$ to study the onset of a linear gravity mode from a constant background density, or using the vertically measured diffusion $\delta_z$ for a collapse criterion in a non-linear state, based on the original $m_c$ criterion. Only for isotropic diffusion would $\tilde Q_p$ be a good approximation to the Toomre value, but in any case $\tilde Q_p\leq 1$ is the condition to get planetesimal formation started, by setting a minimal pebble enhancement $Z$ for a given macroscopic diffusion $\delta_z$ (or even $\alpha$) to reach Hill density in the midplane.} We have shown that the radial diffusion decides on the Toomre value, whereas the vertical diffusion regulates the midplane density. Therefore it is possible to exceed Hill density in the midplane (for vertical diffusion weaker than radial diffusion) without being Toomre unstable. Gravito-turbulence (driven by linear self-gravity modes of the pebbles) may not play a major role, since the Toomre value $Q_p$ for the pebble layer is always very large for the measured diffusion strength; even in the one possibly unstable case ($Q_p < 1$) no unstable modes fit into the box, so the linear instability will be suppressed and outgrown by the non-linear collapse. One should only expect significant gravito-turbulence below $Q_p<1.5$, and according to Table~\ref{tab:simdata1} especially the non-collapsing simulations have larger Toomre values.
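The pebble Toomre value above is straightforward to evaluate; the sketch below encodes it as a function (the parameter values are purely illustrative, not measured values from our runs):

```python
# Pebble Toomre parameter Q_p = sqrt(delta_x / St) * Q_g / Z (sketch).
def Q_pebble(delta_x, St, Q_gas, Z):
    return (delta_x / St) ** 0.5 * Q_gas / Z

# ASSUMED illustrative numbers: weak radial diffusion relative to St,
# a gravitationally stable gas disk, and an enhanced metallicity.
Qp = Q_pebble(delta_x=1e-5, St=0.1, Q_gas=30.0, Z=0.03)
```

For these inputs $Q_p = 10$, i.e.\ far above the gravito-turbulent regime, consistent with the qualitative statement above.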
Therefore it is probably self-sedimentation and the hydrodynamical instabilities that create the finite-amplitude gravitationally unstable overdensities. \corrected{Whereas \citet{KlahrSchreiber2020a} tested this criterion in vertically integrated two-dimensional simulations of the SI, in the present paper we performed three-dimensional simulations. Both two-dimensional and three-dimensional simulations confirm the collapse criterion, if one defines a dimensionally averaged diffusivity $\delta = \sqrt{\delta_x \delta_z}$. In the first paper we performed simulations for different Stokes numbers and different box sizes $L$, and we varied the pressure gradient as well as the initial dust to gas ratio. In the present paper we kept all those parameters fixed, but we changed the total pebble mass and therefore the critical length $\hlcrit$, even though $\delta$ is not changed. As a result we were able to perform simulations for $\hlcrit < L/2$, which collapsed, as well as for $\hlcrit > L/2$, which did not collapse for many contraction times, confirming our stability criterion.} We also derived a Bonnor-Ebert model for pebble clouds in equilibrium between diffusion and contraction. In that case, one expresses the mass of the pebble cloud in terms of central density $\rho_c$ (or outer density $\rho_\circ$) and diffusion per Stokes number. The equivalent size depends only weakly on the central density, $a_c \propto \rho_c^{-1/6}$, explaining why smaller planetesimals are less likely to form, as they need a much stronger density fluctuation before gravity can take over. In the two simulations in which planetesimals formed (\texttt{mod4} and \texttt{mod8}) we found the onset of a $66$ km and $84$ km pebble heap collapse.
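The weak $a_c \propto \rho_c^{-1/6}$ dependence (via $M \propto \rho_c^{-1/2}$ and $a_c \propto M^{1/3}$) and the dimensionally averaged diffusivity can both be illustrated in two lines (Python, for illustration only):

```python
# a_c ∝ rho_c^(-1/6): a factor 10 in central density changes the size by
# only 10^(1/6) ~ 1.47, while a 1000x smaller Jeans mass (10x smaller a_c)
# requires a 10^6 stronger central density fluctuation.
size_factor_for_10x_density = 10.0 ** (1.0 / 6.0)
density_factor_for_10x_smaller_ac = (10.0 ** 3) ** 2   # = 1e6

# dimensionally averaged diffusivity used in the collapse criterion:
def delta_avg(delta_x, delta_z):
    return (delta_x * delta_z) ** 0.5
```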
These sizes are not unrealistic for planetesimals \citep{Morbidelli2009}, but the collapse was not complete in our simulation, as even at pebble densities of $10^3 \rhoHill$ one is still a factor of $10^2$ - $10^9$ below the solid density of a planetesimal, depending on the location in the nebula. During the further contraction, fragmentation into several planetesimals can occur, as a result of the angular momentum of the pebble cloud, as well as erosion by the headwind. Only simulating the further collapse can show how many planetesimals with what size spectrum will form from the collapse of the unstable pebble clouds \citep{Nesvorny2019}. We refer to \citet{KlahrSchreiber2020a} for further discussions on realistic pebble sizes, dust to gas ratios and resulting planetesimal sizes for models of the solar nebula \citep{Lenz2020}, finding a preferred $a_c \approx 100$ km from the asteroid to the Kuiper belt, as argued for by observations \citep{Bottke2005,Nesvorny2011}. \corrected{In \citet{Schreiber2018} we find that for high mass loads the strength of diffusion scales inversely with the dust to gas ratio and proportionally to the Stokes number, at least over a certain range of pebble sizes. This implies a major dependence of the critical masses on the pebble to gas ratio at Hill density, which can vary strongly over the course of planetesimal formation, and a lesser dependence on the pebble Stokes number. If SI and diffusion $\delta$ decrease with $\stokes$, then the ratio $\delta/\stokes$ may stay constant. This effect needs further investigation, especially if one considers a range of Stokes numbers as in \citet{Schaffer2018}. The ultimate goal would be to define a representative $a_c$ for a particle size mixture, gas pressure gradients and the local dust to gas ratio at Hill density.
In a second step, one would then learn how the mass spectrum $dN/dM_{\rm P} \propto M_{\rm P}^{-p}$ (see \citealt{Johansen2015,Simon2017,Abod2018,Li2019}) of forming planetesimals will relate to this representative $a_c$ and the local availability of pebbles. The results can then be fed into models of planetary embryo and planet formation \citep{Mordasini2009,Johansen2019,Emsenhuber2020,Schlecker2020,Voelkel2020a,Voelkel2020b,Voelkel2020c}, and by studying the full model including pebble accretion \citep{KB2006,OrmelKlahr2010,LambrechtsJohansen2012,Lambrechts2019,Bitsch2019,Bitsch2019b,Voelkel2020c} we can ultimately test our paradigm for planetesimal formation in its capability to create the diversity of exoplanets and explain peculiarities of the solar system.} \acknowledgments{We are indebted to Hans Baehr, Konstantin Gerbig, Christian Lenz and Ruth Murray-Clay for many fruitful discussions and technical advice. Many thanks to our anonymous referee as well as Sanemichi Takahashi and Ryosuke Tominaga for asking the right questions that led us to better understand the concepts of diffusion pressure and its implications for current and future work. This research has been supported by the Studienstiftung des deutschen Volkes, the Deutsche Forschungsgemeinschaft Schwerpunktprogramm (DFG SPP) 1385 "The first ten million years of the Solar System" under contract KL 1469/4-(1-3) "Gravoturbulente Planetesimal Entstehung im fr\"uhen Sonnensystem" and by (DFG SPP) 1833 "Building a Habitable Earth" under contract KL 1469/13-1 \& KL 1469/13-2 "Der Ursprung des Baumaterials der Erde: Woher stammen die Planetesimale und die Pebbles? Numerische Modellierung der Akkretionsphase der Erde." This research was supported by the Munich Institute for Astro- and Particle Physics (MIAPP) of the DFG cluster of excellence "Origin and Structure of the Universe" and in part at KITP Santa Barbara by the National Science Foundation under Grant No. NSF PHY11-25915.
The authors gratefully acknowledge the Gauss Centre for Supercomputing (GCS) for providing computing time for a GCS Large-Scale Project (additional time through the John von Neumann Institute for Computing (NIC)) on the GCS share of the supercomputer JUQUEEN \citep{Stephan:202326} at J\"ulich Supercomputing Centre (JSC). GCS is the alliance of the three national supercomputing centres HLRS (Universit\"at Stuttgart), JSC (Forschungszentrum J\"ulich), and LRZ (Bayerische Akademie der Wissenschaften), funded by the German Federal Ministry of Education and Research (BMBF) and the German State Ministries for Research of Baden-W\"urttemberg (MWK), Bayern (StMWFK) and Nordrhein-Westfalen (MIWF). Additional simulations were performed on the THEO and ISAAC clusters of the MPIA and the COBRA, HYDRA and DRACO clusters of the Max-Planck-Society, both hosted at the Max-Planck Computing and Data Facility in Garching (Germany). H.K. also acknowledges additional support from the DFG via the Heidelberg Cluster of Excellence STRUCTURES in the framework of Germany's Excellence Strategy (grant EXC-2181/1 - 390900948).} \newpage
\section{Introduction} As byproducts of Shor's factoring algorithm \cite{shor} and Grover's searching algorithm \cite{grover96}, the quantum phase estimation algorithm \cite{kitaev} and the amplitude amplification technique \cite{brassard-Amplitude-Amplification} play important roles in many quantum algorithms developed in the past. The application of the quantum phase estimation algorithm to Shor's factoring algorithm is a beautiful one. However, other applications, like quantum counting \cite{brassard}, eigenvalue estimation of Hermitian matrices \cite{abrams} and the HHL algorithm to solve linear systems \cite{harrow}, exhibit some ``unclean'' parts (here ``unclean'' does not mean that the algorithms are not good, but that they may contain some restrictions). The reason for these ``unclean'' parts comes from the quantum phase estimation algorithm itself, since it is used to estimate eigenvalues of a unitary transformation, which have the expression $e^{{\bf i} \theta}$ with $0\leq \theta <2\pi$. Consider, for example, the estimation of an eigenvalue $\lambda$ of a Hermitian matrix $H$. Although $e^{{\bf i} Ht}$ is unitary with eigenvalue $e^{{\bf i}\lambda t}$, the argument $\lambda t$ may not lie between 0 and $2\pi$. We should compress $\lambda$ via $t$ into a small number $\lambda t$ that lies in the interval $[0,2\pi)$. By the quantum phase estimation algorithm, we will get a good approximation of $\lambda t$. However, the error on $\lambda$ will be enlarged by the factor $1/t$, and so will the complexity. This means we may not be able to get a good approximation of $\lambda$ efficiently. What is worse, sometimes we do not even know how to choose such a $t$. Eigenvalue estimation of Hermitian matrices is one central step of the HHL algorithm. The above discussion points out one caveat of the HHL algorithm, namely the choice of the compression parameter $t$.
But in the HHL algorithm, the authors assumed that the singular values of the coefficient matrix lie between $1/\kappa$ and 1, where $\kappa$ is the condition number, and so avoided this problem. However, in any given problem, we should consider this before applying the HHL algorithm, in order to make fewer assumptions. A known fact is that the HHL algorithm only returns a good approximation $|\tilde{x}\rangle$ of the quantum state $|x\rangle$ of the exact solution. However, such a good approximation may not induce a good approximation of the classical solution $x=|x||x\rangle$, where $|x|$ is the 2-norm of $x$. This is because, if the error between $|\tilde{x}\rangle$ and $|x\rangle$ is $\epsilon$, then the error between $|x||\tilde{x}\rangle$ and $x$ will be $|x|\epsilon$. This means the error is enlarged by the norm of the classical solution; or, in other words, the complexity of the HHL algorithm is enlarged by the norm of the classical solution. This is another caveat that was not considered in the HHL algorithm. The actual effect of the above two caveats on the efficiency of the HHL algorithm will be discussed in this work. It is well known that the HHL algorithm paves a way to study machine learning on a quantum computer \cite{aaronson}, \cite{biamonte}, \cite{dunjko}, \cite{rebentros14}, \cite{rebentros17}, \cite{schuld}, \cite{wiebe}. With the rapid development of applications of the HHL algorithm in quantum machine learning, these works provide, on the one hand, substantial examples that quantum computers can do better than classical computers. On the other hand, we should ask: what is the practical efficiency of these works, given the restrictions of the HHL algorithm? And under what kind of conditions can these works achieve a higher speedup than their classical counterparts?
The second target of this work is to reconsider the influence of the caveats we discovered in the HHL algorithm on the efficiency of several quantum machine learning algorithms: linear regression \cite{schuld}, \cite{wiebe}, \cite{wang}, supervised classification \cite{lloyd13}, least-squares support vector machines \cite{rebentros14} and Hamiltonian simulation of low-rank matrices \cite{rebentros16}. The structure of this work is as follows: in Section II, we first briefly review the quantum algorithm to estimate eigenvalues of a Hermitian matrix, from which we will discuss some other caveats about the HHL algorithm that were not discussed in the past. Section III is devoted to reconsidering the efficiency of several quantum machine learning algorithms related to the HHL algorithm, which possess the same problems as the HHL algorithm. \section{Reconsidering HHL algorithm} When applying the HHL algorithm \cite{harrow} to solve a linear system $Ax=b$ (assume $A$ is Hermitian), people usually talk about the following four caveats \cite{aaronson}, \cite{childs}: (C1). The condition number $\kappa$ of $A$. (C2). The Hamiltonian simulation of $e^{{\bf i} A t}$. (C3). The preparation of the quantum state $|b\rangle$. (C4). The result of the HHL algorithm is a quantum state $|x\rangle$ of the solution. The caveat (C2) can be solved, for example, in the case when $A$ is sparse and all its entries are efficiently available \cite{berry}, or when $A$ is low rank \cite{rebentros16}, \cite{lloyd14}. There are also some methods to resolve the caveat (C3), for example in the relatively uniform case \cite{lloyd13}, or when all the entries and the norm of $b$ are efficiently computable \cite{grover}. As for (C4), obtaining the quantum state $|x\rangle$ is enough for many quantum machine learning problems \cite{rebentros14}, \cite{rebentros17}, \cite{schuld}, \cite{wiebe}. The influence of the condition number is unavoidable in the algorithm. From these points, these four caveats seem acceptable for the HHL algorithm.
However, the original paper of the HHL algorithm actually contains another caveat, which is related to the first caveat and is easily ignored when using the HHL algorithm: (C5). The singular values of $A$ lie between $1/\kappa$ and 1. The first four caveats may be solvable in some sense as discussed above; however, the fifth caveat is a little difficult to overcome. As discussed in the original paper \cite{harrow}, it can be solved by a scaling. But usually, it is hard to achieve such a scaling, since we do not know the condition number $\kappa$ in advance. The caveat (C5) mainly comes from the quantum phase estimation algorithm used to estimate eigenvalues of a Hermitian matrix. We should compress the eigenvalues of the Hermitian matrix into small numbers that lie between 0 and $2\pi$ before applying the quantum phase estimation algorithm. In the following, we first briefly review this algorithm. Let $A=(a_{ij})$ be an $M\times M$ Hermitian matrix with an eigenvalue $\lambda$ (unknown) and a corresponding eigenvector $|u\rangle$ (known). Then $U=e^{{\bf i} A t}$ is unitary. Now we suppose $U$ can be efficiently simulated in time $O(t^\gamma \textmd{poly}(\log M)/\epsilon)$ with accuracy $\epsilon$. Consider the following quantum phase estimation algorithm to estimate the eigenvalue $\lambda$, where the $t$ and $N$ appearing below are to be determined: \begin{equation}\begin{array}{lll}\label{phase-estimation-alg0} \vspace{.15cm} \hspace{-.3cm}\displaystyle\frac{1}{\sqrt{N}}\sum_{x=0}^{N-1}|x\rangle|u\rangle &\mapsto&\displaystyle\frac{1}{\sqrt{N}}\sum_{x=0}^{N-1}|x\rangle U^x|u\rangle \\ \vspace{.15cm} &=& \displaystyle\frac{1}{\sqrt{N}}\sum_{x=0}^{N-1}e^{{\bf i}\lambda tx}|x\rangle|u\rangle \\ &\mapsto& \displaystyle\frac{1}{N}\sum_{y=0}^{N-1}\Bigg[\sum_{x=0}^{N-1}e^{{\bf i} x(\lambda t-\frac{2\pi y}{N})}\Bigg]|y\rangle|u\rangle.
\end{array}\end{equation} Then \begin{equation}\begin{array}{lll} \vspace{.15cm} \textmd{Prob}(|y\rangle) &=& \displaystyle\frac{1}{N^2}\Bigg|\sum_{x=0}^{N-1}e^{{\bf i} x(\lambda t-\frac{2\pi y}{N})}\Bigg|^2 \\ &=& \displaystyle\frac{1}{N^2}\Bigg|\frac{\sin \frac{N}{2}(\lambda t-\frac{2\pi y}{N})}{\sin \frac{1}{2}(\lambda t-\frac{2\pi y}{N})}\Bigg|^2. \end{array}\end{equation} If $|\lambda t-\frac{2\pi y}{N}|\leq \frac{\pi}{N}$, then $|\frac{N}{2}(\lambda t-\frac{2\pi y}{N})|\leq \frac{\pi}{2}$, and so \begin{equation} \textmd{Prob}(|y\rangle) \geq\frac{4}{\pi^2}\frac{1}{N^2}\Bigg|\frac{\frac{N}{2}(\lambda t-\frac{2\pi y}{N})}{\frac{1}{2}(\lambda t-\frac{2\pi y}{N})}\Bigg|^2 =\frac{4}{\pi^2}. \end{equation} By choosing suitable $t$ and $N$, we can always find a $y$ satisfying $|\lambda t-\frac{2\pi y}{N}|\leq \frac{\pi}{N}$. Therefore $|\lambda-\frac{2\pi y}{tN}|\leq \frac{\pi}{tN}=\delta$ and $N=O(1/t\delta)$. At this point $2\pi y/tN$ will be a good approximation of $\lambda$. The complexity of this algorithm is \begin{equation}\label{complexity0} O((tN)^\gamma\textmd{poly}(\log M)/\epsilon)=O(\textmd{poly}(\log M)/\epsilon \delta^\gamma), \end{equation} where $\delta$ is the accuracy of the estimation of the eigenvalue $\lambda$. The above is a brief overview of the quantum phase estimation algorithm to estimate eigenvalues of a Hermitian matrix. On the whole, when fixing $t$ and the accuracy $\delta$, we can choose a suitable $N$ such that the algorithm is efficient. Note that the result $2\pi y/tN$ of algorithm (\ref{phase-estimation-alg0}) is non-negative, so what if $\lambda<0$? This problem is actually not hard to solve, since we can choose $t$ small enough such that $|\lambda t|<\pi$. So if $2\pi y/N>\pi$, then we conclude that $\lambda t=2\pi y/N-2\pi=-2\pi (N-y)/N$. The good point of quantum phase estimation is that we can estimate all eigenvalues of $A$ even without knowing the eigenvectors.
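The $4/\pi^2$ lower bound on the success probability can be checked directly by evaluating the probability formula above for a sample phase (a minimal sketch; the values of $N$ and the phase are arbitrary):

```python
import cmath
import math

def qpe_prob(phi, N, y):
    """Probability of measuring |y> when the phase is phi = lambda * t."""
    amp = sum(cmath.exp(1j * x * (phi - 2.0 * math.pi * y / N)) for x in range(N))
    return abs(amp) ** 2 / N ** 2

N = 64
phi = 1.3                                      # some phase lambda * t in [0, 2*pi)
y_best = round(phi * N / (2.0 * math.pi)) % N  # grid point closest to phi
p_best = qpe_prob(phi, N, y_best)
```

For the closest grid point the probability always exceeds $4/\pi^2 \approx 0.405$, as the bound promises.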
The algorithm is almost the same as (\ref{phase-estimation-alg0}) and it forms one central step of the HHL algorithm. However, there exists one problem we should consider beforehand in algorithm (\ref{phase-estimation-alg0}). \vspace{.3cm} \textbf{Problem T. How to choose $t$.} It is clear that $t$ should satisfy $|\lambda t|<\pi$ due to $e^{2\pi {\bf i}}=1$ and the sign of $\lambda$. A theoretical choice of $t$ is $t=\pi /|\lambda_{\max}|$, where $|\lambda_{\max}|$ is the maximal singular value of $A$. There are several different types of upper bounds on $|\lambda_{\max}|$; a few are listed below: \begin{equation}\begin{array}{lll}\vspace{.15cm} & \hspace{-.3cm}\displaystyle\sqrt{\textmd{Tr}(AA^\dagger)}, &~~ \|A\|_1=\max_j \sum_i|a_{ij}|, \\ & \hspace{-.3cm}\displaystyle\|A\|_2=\sqrt{\sum_{i,j}|a_{ij}|^2},&~~ M\|A\|_{\max}=M\max_{i,j}|a_{ij}|. \end{array}\end{equation} The above choices of upper bound will affect the complexity of the total algorithm, since the above matrix computations are not easy on a classical computer, or even on a quantum computer. One simple case is when $A$ is $\textmd{poly}(\log M)$ sparse and all the entries of $A$ are bounded by $\textmd{poly}(\log M)$. \vspace{.3cm} Now come back to the HHL algorithm. The authors in \cite{harrow} assume that all the singular values of $A$ lie between $1/\kappa$ and $1$. In this case, $1/\kappa=|\lambda_{\min}|$ equals the smallest singular value of $A$. Under this assumption, the complexity of solving $Ax=b$ is \begin{equation}\label{complexity-hhl-0} O((\log M)s^2\kappa^2/\epsilon)=O((\log M)s^2/\lambda_{\min}^2\epsilon), \end{equation} where $s$ is the sparseness of matrix $A$. Moreover, as discussed in the original paper, the assumption (C5) can be resolved by scaling the linear system in the first step. However, scaling does not affect the whole procedure of the HHL algorithm, since the initial state of the HHL algorithm is $|0\rangle|b\rangle$, and scaling does not affect this initial state.
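That the quantities listed under Problem T indeed bound the largest singular value of a Hermitian matrix can be verified numerically (a minimal sketch on a random test matrix):

```python
import numpy as np

# Random Hermitian test matrix.
rng = np.random.default_rng(0)
B = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))
A = (B + B.conj().T) / 2
M = A.shape[0]

lam_max = np.max(np.abs(np.linalg.eigvalsh(A)))   # largest singular value

bounds = {
    "sqrt_trace": np.sqrt(np.trace(A @ A.conj().T).real),   # = ||A||_2 above
    "one_norm": np.max(np.sum(np.abs(A), axis=0)),          # max column sum
    "scaled_max_entry": M * np.max(np.abs(A)),              # M * ||A||_max
}
```

Every entry of `bounds` is an upper bound on `lam_max`, so any of them yields an admissible (if possibly pessimistic) choice of $t$.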
Actually, scaling only works when estimating the eigenvalues of $A$. It compresses the singular values of $A$ into the interval $[1/\kappa,1]$. More specifically, let $\widetilde{A}$ be a new matrix which does not satisfy the condition (C5). Denote $A=t\widetilde{A}$ such that $A$ satisfies the condition described in (C5) (here we suppose $t$ can be obtained by some method). Then from (\ref{complexity-hhl-0}), if $|\tilde{\lambda}_{\min}|$ is the smallest singular value of $\widetilde{A}$, the complexity of the HHL algorithm will be \begin{equation}\begin{array}{lcl}\label{complexity-hhl-1} \vspace{.2cm} \displaystyle O\Big(\frac{(\log M)s^2}{t^2\tilde{\lambda}_{\min}^2\epsilon}\Big) &\xlongequal[]{\textmd{theoretically}}& \displaystyle O\Big(\frac{(\log M)s^2\tilde{\lambda}_{\max}^2}{\tilde{\lambda}_{\min}^2\epsilon}\Big) \\ &=& \displaystyle O\Big(\frac{(\log M)s^2\kappa^2}{\epsilon}\Big). \end{array}\end{equation} Here ``theoretically'' means the best choice of $t$ is $O(1/|\tilde{\lambda}_{\max}|)$, where $|\tilde{\lambda}_{\max}|$ is the largest singular value of $\widetilde{A}$. It also means we may not be able to obtain the best choice of $t$. Formula (\ref{complexity-hhl-1}) implies that, if we do not know $\kappa$ in advance and just choose a reasonable $t$ based on some method, then the smaller $t$ is, the higher the complexity of the HHL algorithm will be. Another point in (\ref{complexity-hhl-1}) we should pay attention to is that the HHL algorithm only preserves eigenvalues larger than $1/\kappa=1/|\lambda_{\min}|$. However, we do not know $|\lambda_{\min}|$ beforehand. There are lots of upper bounds on the maximal eigenvalue, but few on the smallest eigenvalue. A reasonable idea is to choose a small number $\mu$ instead of $|\tilde{\lambda}_{\min}|$ (for example, see applications in \cite{rebentros14}, \cite{rebentros17}). Then in the procedure of the HHL algorithm, we only keep the eigenvalues larger than $\mu$ and ignore all the eigenvalues smaller than $\mu$.
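The effect of this $\mu$-thresholding can be mimicked classically: invert only the eigenvalues with $|\lambda|\geq\mu$ and drop the rest (a minimal sketch; the diagonal test matrix is our own illustrative choice):

```python
import numpy as np

def truncated_solve(A, b, mu):
    """Solve Ax = b keeping only eigenvalues with |lambda| >= mu."""
    lam, U = np.linalg.eigh(A)
    inv = np.zeros_like(lam)
    keep = np.abs(lam) >= mu
    inv[keep] = 1.0 / lam[keep]
    return U @ (inv * (U.conj().T @ b))

A = np.diag([1.0, 0.5, 1e-4])   # one eigenvalue far below the threshold
b = np.ones(3)
x_trunc = truncated_solve(A, b, mu=1e-2)   # the ill-conditioned component is dropped
```

The returned solution equals $(1, 2, 0)^T$, while the exact solution has $10^4$ in the last component, illustrating how large the discarded part can be.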
At this time, the complexity will be \begin{equation}\label{complexity-hhl-2} O((\log M)s^2/t^2\mu^2\epsilon). \end{equation} However, this makes the solution less accurate if $\mu$ is not close to $|\tilde{\lambda}_{\min}|$. In other words, if we only consider the components of the solution lying in the well-conditioned part, then the error is $\epsilon$, while the error with respect to the exact solution may be larger than $\epsilon$, since the solution depends not only on $A$ but also on $b$. It seems hard to estimate the error of the HHL algorithm in this case. This is reflected more clearly in the simple case when $A=\textmd{diag}\{a_0,\ldots,a_{r-1},a_r,\cdots,a_{M-1}\}$ is a diagonal matrix with \[1\geq|a_0|\geq \cdots\geq|a_{r-1}|\geq \mu>|a_r|\geq \cdots\geq|a_{M-1}|>0.\] Set $b=(b_0,\ldots,b_{r-1},b_r,\ldots,b_{M-1})^T$. Then the HHL algorithm returns a solution of the form \[(b_0/a_0,\ldots,b_{r-1}/a_{r-1},0,\ldots,0)^T,\] whereas the exact solution is \[(b_0/a_0,\ldots,b_{r-1}/a_{r-1},b_r/a_r,\ldots,b_{M-1}/a_{M-1})^T.\] The error will be large if $b$ does not lie in the well-conditioned part of $A$. The final point we should note about the HHL algorithm is the following: the HHL algorithm returns an approximation $|\tilde{x}\rangle$ of the state $|x\rangle$ of the exact solution, which means $||x\rangle-|\tilde{x}\rangle|\leq \epsilon$. But the exact solution is $x=|x||x\rangle$, so $|x-|x||\tilde{x}\rangle|\leq |x|\epsilon$: the error may be enlarged by the norm $|x|$. If we want this error to be small, the complexity of the HHL algorithm becomes $O((\log M)s^2\kappa^2|x|/\epsilon)$. Actually, this phenomenon has already appeared in the quantum counting algorithm \cite{brassard}: suppose there are $K$ marked items in $\{1,2,\ldots,N\}$. The quantum counting algorithm approximates $K$ with relative error $\epsilon$ in time $O(\sqrt{N/K\epsilon^2})$, i.e., we can find a $\widetilde{K}$ such that $|K-\widetilde{K}|\leq K\epsilon$.
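The diagonal example above is easy to verify numerically. In the following sketch the values of $a_i$, $b$ and $\mu$ are arbitrary illustrations:

```python
import numpy as np

# diagonal A with entries straddling the threshold mu
a = np.array([1.0, 0.8, 0.5, 1e-3, 1e-4])   # |a_0| >= ... >= mu > |a_3| >= |a_4| > 0
b = np.ones(5)
mu = 0.1

exact = b / a                                 # exact solution of A x = b

# HHL-style truncation: invert only the well-conditioned eigenvalues
kept = np.abs(a) >= mu
truncated = np.where(kept, b / a, 0.0)

# large error: b has weight on the ill-conditioned components
err = np.linalg.norm(exact - truncated)

# if b is supported only on the well-conditioned part, truncation loses nothing
b2 = np.array([1.0, 1.0, 1.0, 0.0, 0.0])
assert np.allclose(np.where(kept, b2 / a, 0.0), b2 / a)
```

Here `err` is dominated by the discarded components $b_3/a_3$ and $b_4/a_4$, exactly as the formulas above predict.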
Similarly, if we want $K\epsilon=1$ for example, then the final complexity is enlarged to $O(\sqrt{NK})$. The above phenomenon in the HHL algorithm can be stated more precisely as follows. Assume $A$ is invertible, the eigenvalues of $A$ are $\lambda_1,\ldots,\lambda_M$ and the corresponding eigenvectors are $|u_1\rangle,\ldots,|u_M\rangle$. Assume $b=\sum_{j=1}^M\beta_j|u_j\rangle$; then the exact solution of $Ax=b$ is $x=\sum_{j=1}^M \beta_j\lambda_j^{-1}|u_j\rangle$, and so its quantum state is $|x\rangle=\frac{1}{\sqrt{Z}}\sum_{j=1}^M \beta_j\lambda_j^{-1}|u_j\rangle$, where $Z=|x|^2=\sum_{j=1}^M |\beta_j\lambda_j^{-1}|^2$. Assume the solution obtained by the HHL algorithm is $|\tilde{x}\rangle=\frac{1}{\sqrt{\widetilde{Z}}}\sum_{j=1}^M \beta_j\tilde{\lambda}_j^{-1}|u_j\rangle$, where $\widetilde{Z}=\sum_{j=1}^M |\beta_j\tilde{\lambda}_j^{-1}|^2$ and $|\lambda_j^{-1}-\tilde{\lambda}_j^{-1}|\leq\epsilon$. Then the classical solution obtained by the HHL algorithm has the form $\tilde{x}=\sum_j \beta_j\tilde{\lambda}_j^{-1}|u_j\rangle$. Note that $\widetilde{Z}=|\tilde{x}|^2$. By the error analysis of the HHL algorithm \cite{harrow}, we have \[O(\epsilon^2) \geq ||x\rangle-|\tilde{x}\rangle|^2=2-2\langle x|\tilde{x}\rangle.\] This means $1\geq \langle x|\tilde{x}\rangle\geq 1-O(\epsilon^2)$. On one hand, \[|Z-\widetilde{Z}|=\sum_{j=1}^M |\beta_j|^2\Big||\lambda_j^{-1}|^2-|\tilde{\lambda}_j^{-1}|^2\Big|\leq O(\epsilon\kappa|b|^2).\] We wish $\epsilon\kappa|b|^2\leq\epsilon_1$ to be small, so $\epsilon\leq\epsilon_1/\kappa|b|^2$. On the other hand, if we set $\widetilde{Z}=Z+\delta$ with $0\leq\delta\leq \epsilon_1$, then \[\begin{array}{lll} \vspace{.2cm} |x-\tilde{x}|^2 &=& \widetilde{Z}+Z-2\sqrt{Z\widetilde{Z}}\langle x|\tilde{x}\rangle \\ \vspace{.2cm} &\leq& 2Z+\delta-2\sqrt{Z(Z+\delta)}(1-O(\epsilon^2)) \\ \vspace{.2cm} &\leq& \delta+(Z+\delta)O(\epsilon^2) \\ &\approx& O(Z\epsilon^2).
\end{array}\] Similarly, we also wish $Z\epsilon^2\leq \epsilon_2^2$ to be small, which implies $\epsilon\leq\epsilon_2/\sqrt{Z}=\epsilon_2/|x|$. Combining the above analysis, we can choose $\epsilon=\min\{\epsilon_1/\kappa|b|^2,\epsilon_2/|x|\}$. Finally, setting $\epsilon_1=\epsilon_2=:\tilde{\epsilon}$ when they are small, we obtain \begin{equation} \epsilon=\min\{\tilde{\epsilon}/\kappa|b|^2,\tilde{\epsilon}/|x|\} =\frac{\tilde{\epsilon}}{\max\{\kappa|b|^2,|x|\}}. \end{equation} The complexity of the HHL algorithm will then be \begin{equation} O\Big(\frac{(\log M)s^2\kappa^2\max\{\kappa|b|^2,|x|\}}{\tilde{\epsilon}}\Big). \end{equation} So the complexity of the HHL algorithm is influenced by the norm of the solution and by the norm of $b$. It is not hard to check that the error between $A\tilde{x}$ and $b$ is $|A\tilde{x}-b|^2=\sum_{j=1}^M |\beta_j(\lambda_j\tilde{\lambda}_j^{-1}-1)|^2=O(\widetilde{Z}\epsilon^2)$, which again shows the influence of the norm of the solution on the efficiency of the HHL algorithm. \section{Reconsidering some quantum machine learning algorithms} The problems discussed above for the HHL algorithm also appear in some related quantum machine learning algorithms. This section is devoted to reviewing several of them. \emph{Linear regression.} Linear regression is a basic problem in machine learning. Quantum algorithms for the linear regression problem have been considered in \cite{schuld}, \cite{wiebe}, \cite{wang}. It is well known that the linear regression problem is equivalent to solving the linear system $F^\dagger F {\bf x}=F^\dagger {\bf b}$, where $F$ is the data matrix and ${\bf b}$ is a given vector. Prediction on a new datum ${\bf c}$ amounts to evaluating the inner product ${\bf c}\cdot{\bf x}$. By the HHL algorithm, we can find the state $|x\rangle$ of the solution efficiently, and supposing we can also prepare the quantum state $|c\rangle$ of ${\bf c}$ efficiently, the swap test then estimates $\langle c|x\rangle$ efficiently.
However, at least two problems arise. First, as discussed above, the accuracy of the HHL algorithm is related to $|{\bf x}|$, which may destroy the exponential speedup for this problem. Second, note that ${\bf c}\cdot{\bf x}=|{\bf c}||{\bf x}|\langle c|x\rangle$, so a good approximation of $\langle c|x\rangle$ does not imply a good approximation of ${\bf c}\cdot{\bf x}$, especially when $|{\bf c}|,|{\bf x}|$ are large. Therefore, the swap test is generally not suitable for estimating inner products of classical vectors, even when their quantum states can be prepared efficiently. \emph{Supervised classification.} In \cite{lloyd13}, Lloyd et al. provided an efficient quantum algorithm for one type of supervised classification problem. The classification is based on comparing the distances from a given vector ${\bf u}$ to the means of two clusters $V$ and $W$. The main techniques used are the swap test and quantum state preparation, and the paper introduced an elegant technique to prepare the desired quantum state. More specifically, assume $V=\{{\bf v}_1,\ldots,{\bf v}_M\}$; then, based on Hamiltonian simulation, they obtain the following state efficiently \[\begin{array}{lll} && \displaystyle \frac{1}{\sqrt{2}}|0\rangle\Big[\cos(|{\bf u}|t)|0\rangle-\frac{1}{\sqrt{M}}\sum_{j=1}^M\cos(|{\bf v}_j|t)|j\rangle\Big] \\ && \hspace{.5cm} \displaystyle -\frac{{\bf i}}{\sqrt{2}}|1\rangle\Big[\sin(|{\bf u}|t)|0\rangle-\frac{1}{\sqrt{M}}\sum_{j=1}^M\sin(|{\bf v}_j|t)|j\rangle\Big]. \end{array}\] Choosing $t$ so that $|{\bf u}|t,|{\bf v}_j|t\ll 1$, the state associated with $|1\rangle$ is an approximation of \begin{equation} \label{SC-eq1} \frac{1}{\sqrt{Z}}\Big[|{\bf u}||0\rangle-\frac{1}{\sqrt{M}}\sum_{j=1}^M|{\bf v}_j||j\rangle\Big], \end{equation} where $Z=|{\bf u}|^2+\frac{1}{M}\sum_{j=1}^M|{\bf v}_j|^2$. Based on the swap test, we can obtain a good approximation of the probability of getting $|1\rangle$, which is about $Zt^2$.
It is not hard to see that this technique works well when the data $\{|{\bf u}|,|{\bf v}_1|,\ldots,|{\bf v}_M|\}$ are relatively uniformly distributed, due to the choice of $t$; the complexity is affected by $\max\{|{\bf u}|,|{\bf v}_1|,\ldots,|{\bf v}_M|\}$, so the norms of the given vectors are required to be relatively small. Having obtained the state (\ref{SC-eq1}), we can measure the first register of the state \[\frac{1}{\sqrt{2}}\Big(|0\rangle|u\rangle+\frac{1}{\sqrt{M}}\sum_{j=1}^M|j\rangle|v_j\rangle\Big)\] in the basis obtained by extending the state (\ref{SC-eq1}). The probability of getting (\ref{SC-eq1}) is \begin{equation} P=\frac{1}{2Z^2}\Big|{\bf u}-\frac{1}{M}\sum_{j=1}^M{\bf v}_j\Big|^2, \end{equation} which can also be estimated efficiently by the swap test. Finally, we obtain a good approximation of the squared distance from ${\bf u}$ to the mean of $V$, which equals $2PZ^2$. We should note that although the error in estimating $P$ is small, after multiplying by $Z^2$ the error in estimating $2PZ^2$ becomes large; to keep the final error small, the complexity of estimating this distance is therefore enlarged by $Z^2$ as well. The complexity of this classification problem as analyzed in \cite{lloyd13} is $O(\epsilon^{-1}\log(MN))$, where $N$ is the dimension of the vectors; based on the above analysis, the complexity should be $O(Z^2\epsilon^{-1}\log(MN))$. \emph{Least-square support vector machine.} As discussed in \cite{rebentros14}, the least-square support vector machine problem is equivalent to solving the following linear system \[F\left( \begin{array}{c} b \\ \vec{\alpha} \\ \end{array} \right)=\left( \begin{array}{cc} 0 &~ \vec{1}^T \\ \vec{1} &~ K+\gamma^{-1}I_M \\ \end{array} \right)\left( \begin{array}{c} b \\ \vec{\alpha} \\ \end{array} \right)=\left( \begin{array}{c} 0 \\ \vec{y} \\ \end{array} \right),\] where $K$ is the kernel matrix, $\vec{1}^T=(1,\cdots,1)$ and $I_M$ is the identity matrix. In \cite{rebentros14}, Rebentrost et al.
provided a quantum algorithm that solves this linear system via the HHL algorithm. A key point of this work is the Hamiltonian simulation of the matrix $F$; the main technique comes from \cite{lloyd14}. They also use a technique proposed in \cite{lloyd13} to estimate the trace of the kernel matrix $K=({\bf x}_i\cdot{\bf x}_j)_{M\times M}$, which is required by the Hamiltonian simulation technique. Based on the method given in \cite{lloyd13}, they obtain a good approximation of $\textmd{Tr}(K)/M$ in time $\widetilde{O}(1/\epsilon)$. However, a good approximation of $\textmd{Tr}(K)/M$ does not imply a good approximation of $\textmd{Tr}(K)$: the error is enlarged to $M\epsilon$, and to make this error small, the complexity becomes $\widetilde{O}(M/\epsilon)$. So it is somewhat hard to estimate $\textmd{Tr}(K)$ efficiently, and yet this estimate seems necessary for simulating $e^{{\bf i} tF}$. Actually, the trace of a matrix cannot be estimated efficiently on a quantum computer in the general case; otherwise it would contradict the optimality of Grover's algorithm. More specifically, suppose there is a quantum algorithm that can estimate $\textmd{Tr}(B)$ in time $O(\textmd{poly}(\log n)/\epsilon\delta)$ with accuracy $\epsilon$ and probability $1-\delta$ for any $n\times n$ matrix $B$. In the search problem, define $f$ by $f(x)=1$ if $x$ is marked and $f(x)=2$ if $x$ is not marked. Then the trace of the diagonal matrix with diagonal entries $f(x)$ can be estimated in time $O(\textmd{poly}(\log n)/\epsilon\delta)$ with probability $1-\delta$. However, this trace equals the sum of all $f(x)$, which means we can decide whether or not there exist marked items in time $O(\textmd{poly}(\log n)/\epsilon\delta)$ with probability $1-\delta$. The method of bisection then finds the marked items in time $O(\textmd{poly}(\log n)/\epsilon\delta)$ with probability $(1-\delta)^{\log n}$.
If we choose $\delta=1/\log n$, then $(1-\delta)^{\log n}\approx1/e\approx 0.368$, a constant, where $e\approx2.71828$ is Euler's number. This is impossible. Even without applying bisection to the search problem, an efficient algorithm deciding in polynomial time whether marked items exist would imply that quantum computers can solve NP-complete problems, which is a highly implausible result \cite{aaronson05}. \emph{Hamiltonian simulation of low rank matrices.} In \cite{rebentros16}, Rebentrost et al. provided a new method to exponentiate low rank but non-sparse Hermitian matrices. Instead of exponentiating a low rank Hermitian matrix $A$ directly, it exponentiates $A/M$, where $M$ is the size of the matrix. In view of the analysis of the HHL algorithm in (\ref{complexity-hhl-1}), the choice $t=1/M$ seems poor for the linear system solving problem, unless the problem depends not on the eigenvalues but only on the eigenvectors, like the Procrustes problem considered in \cite{rebentros16}. On the other hand, as discussed in \textbf{Problem T}, when $A$ is $\textmd{poly}(\log M)$ sparse and all entries of $A$ are bounded by $\textmd{poly}(\log M)$, the fifth caveat can be resolved easily; in the low rank case, however, it does not seem easy to handle. As we can see from the above analysis, the main problem in the HHL algorithm and its related quantum algorithms comes from compressing a large number into a small one. A good compression helps the complexity considerably, and it should be done in the first step of the whole algorithm, but it does not seem easy to achieve in the general case. In any case, the HHL algorithm is an important quantum algorithm and plays an important role in quantum machine learning, and when applying it to other problems, we should treat its restrictions carefully in order to obtain more practical algorithms. \textbf{Acknowledgements}.
This work was supported by the NSFC Project 11671388, the CAS Frontier Key Project QYZDJ-SSW-SYS022, the China Postdoctoral Science Foundation and the Guozhi Xu Postdoctoral Research Foundation. \nocite{*}
\section{Introduction} \begin{figure}[t] \centering \includegraphics[width=0.95\linewidth]{image/XLM-RA} \caption{The XeroAligned XLM-R model (called XLM-RA) for cross-lingual NLU. The XeroAlign loss is added to the otherwise unaltered training to encourage the sentence embeddings in different languages to be similar, enabling zero-shot reuse of the classifier(s).} \label{fig:xlmra} \end{figure} In just a few years, transformer-based \cite{vaswani2017attention} pretrained language models have achieved state-of-the-art (SOTA) performance on many NLP tasks \cite{wang2019superglue}. Transfer learning enabled the self-supervised pretraining on unlabelled datasets to learn linguistic features such as syntax and semantics in order to improve tasks with limited training data \cite{wang2019glue}. Pretrained cross-lingual language models (PXLMs) have soon followed to learn general linguistic features and properties of dozens of languages \cite{lample2019cross,xue2020mt5}. For multilingual tasks, however, adequate labelled data is usually only available for a few well-resourced languages such as English. Zero-shot approaches were introduced to transfer the task knowledge to languages without the requisite training data. To this end, we introduce \textbf{XeroAlign}, a conceptually simple, efficient and effective method for task-specific alignment of sentence embeddings generated by PXLMs, aimed at effective zero-shot cross-lingual transfer. XeroAlign is an auxiliary loss function, which uses translated data (typically from English) to bring the zero-shot performance in the target language closer to the source (labelled) language, as illustrated in Figure \ref{fig:xlmra}. 
We apply our proposed method to the publicly available XLM-R transformer \cite{conneau2019unsupervised} but instead of pursuing large-scale model alignment with general parallel corpora such as Europarl \cite{koehn2005europarl}, we show that a simplified, task-specific model alignment is an effective and efficient approach to zero-shot transfer for cross-lingual natural language understanding (XNLU). We evaluate our method on 4 datasets that cover 11 unique languages. The XeroAligned XLM-R model (XLM-RA) achieves SOTA scores on three XNLU datasets, exceeds the text classification performance of XLM-R trained with labelled data and performs on par with SOTA models on an adversarial paraphrasing task. \section{Related Work} \label{related} In order to cluster prior work, we formulate an approximate taxonomy in Table \ref{tab:related} for the purposes of positioning our approach in the most appropriate context. The relevant zero-shot transfer methods can generally be grouped by a) whether the alignment is targeted at each task, i.e. is task-specific [TS] or is task-agnostic [TA] and b) whether the alignment is applied to the model [MA] or data [DA]. Our contribution falls mostly into the [MA,TS] category although close methodological similarities are also found in the [MA,TA] group. 
\begin{table}[h] \centering \begin{tabular}{@{}c|c|c@{}}\toprule Groups & \textit{Task-Specific} & \textit{Task-Agnostic} \\ \midrule \textit{Data Align} & [DA,TS] & No relevant work \\ \textit{Model Align} & [MA,TS] & [MA,TA] \\ \bottomrule \end{tabular} \caption{An approximate taxonomy of prior work.} \label{tab:related} \end{table} \paragraph{Transformer-based PXLMs} For transformer-based PXLMs, two basic types of representations are commonly used: 1) A sentence embedding for tasks such as text classification \cite{conneau2018xnli} or sentence retrieval \cite{zweigenbaum2018overview}, which use the \verb|[CLS]| representation of the full input sequence, and 2) Token embeddings, which are used for structured prediction \cite{pan2017cross} or Q\&A \cite{lewis2019mlqa}, requiring each token's contextualised representation for a per-token inference. While our method uses the \verb|[CLS]| embedding, other approaches based on Contrastive Learning have used both types of representations to obtain a sentence embedding. \paragraph{Contrastive Pretraining} \label{contrastive_learning} The closest prior works are related to Contrastive Learning \cite{becker1992self} (CL). CL is a self-supervised framework designed to improve visual representations. Recent examples include Momentum Contrast (MoCo) \cite{he2020momentum} and SimCLR \cite{chen2020simple}, both of which achieved strong improvements on image classification. The essence of CL is to generate representations that are similar for positive examples and dissimilar for negative examples. CL-based methods in cross-lingual NLP replace negative samples, formerly augmented images, with random sentences in the target language, typically thousands of sentences. Positive examples comprise sentences translated into the target language. While CL may be applicable to large-scale, task-agnostic model alignment, large batches of negative samples are infeasible for small labelled datasets. 
Negative samples drawn randomly from a small dataset are likely related (possibly duplicates), which is why our proposed alignment uses only positive samples. The following contrastive alignments are task-agnostic methods aiming to improve generic cross-lingual representations with large parallel datasets. In contrast, we align the PXLM with translated task data, making our approach simpler and more efficient while showing a strong zero-shot transfer on each task. [MA,TA] \citet{hu2020explicit} have proposed two objectives for cross-lingual zero-shot transfer a) sentence alignment and b) word alignment. While CL is not mentioned, the proposed sentence alignment closely resembles contrastive learning with one encoder (e.g. SimCLR). Taking the average of the contextualised token representations as the input representation (as an alternative to the \verb|[CLS]| token), the model predicts the correct translation of the sentence within a batch of negative samples. An improvement is observed for text classification tasks and sentence retrieval but not structured prediction. The alignment was applied to a 12-layer multilingual BERT and the scores are comparable to the translate-train baseline (translate data and train normally). Instead, we use one of the best publicly available models, XLM-R from Huggingface, as our starting point since an improvement in a weaker baseline is not guaranteed to work in a stronger model that may have already subsumed those upgrades during pretraining. Contrastive alignment based on MoCo with two PXLM encoders was proposed by \citet{pan2020multilingual}. Using an L2 normalised \verb|[CLS]| token with a non-linear projection as the input representation, the model was aligned on 250K to 2M parallel sentences with added Translation Language Modelling (TLM) and a code-switching augmentation. 
No ablation for MoCo was provided to estimate its effect although the combination of all methods did provide improvements with multilingual BERT as the base learner. Another model inspired by CL is InfoXLM \cite{chi2020infoxlm}. InfoXLM is pretrained with TLM, multilingual Masked Language Modelling (mMLM) and Cross-lingual Contrastive Learning called XLCo. Like MoCo, they use two encoders that use the \verb|[CLS]| token (or the layer average) as the sentence representation, taken from layers 8 (base model) and 12 (large model). Ablation showed a 0.2-0.3 improvement in accuracy for XNLI and MLQA \cite{lewis2019mlqa}. Reminiscent of earlier work \cite{hermann2014multilingual}, the task-agnostic sentence embedding model \cite{feng2020language} called LaBSe (Language-agnostic BERT sentence embeddings) uses the \verb|[CLS]| representations of two BERT encoders (compared to our single encoder) with a margin loss and 6 billion parallel sentences to generate multilingual representations. While similarities exist, our multi-task alignment is an independently devised, more efficient, task-specific and a simplified version of the aforementioned approaches. [DA,TS] Zero-shot cross-lingual models often use machine translation to provide a training signal. This is a straightforward data transformation for text classification tasks given that adequate machine translation models exist for many language pairs. However, for structured prediction tasks such as Slot Filling or Named Entity Recognition, the non-trivial task of \textit{aligning token/data labels} can lead to an improved cross-lingual transfer as well. One of the most used word alignment methods is fastalign \cite{dyer2013simple}. Frequently used as a baseline, it involves aligning the word indices in parallel sentences in an unsupervised manner, prior to regular supervised learning. 
In some scenarios, fastalign can approach SOTA scores for slot filling \cite{schuster2018cross}; however, the quality of alignment varies between languages and can even degrade performance \cite{li2020mtop} below baseline. An alternative data alignment approach called CoSDA \cite{qin2020cosda} uses code-switching as data augmentation. Random words in the input are translated and replaced to make model training highly multilingual, leading to improved cross-lingual transfer. Attempts were also made to automatically learn how to code-switch \cite{liu2020attention}. While improvements were reported, it is uncertain how much SOTA models would benefit. [MA,TS] Continuing with label alignment for slot filling, \citet{xu2020end} tried to predict and align slot labels jointly during training instead of modifying data labels explicitly before fine-tuning. While soft-align improves on fastalign, the difficulty of label alignment makes it challenging to improve on the SOTA. For text classification tasks such as Cross-lingual Natural Language Inference \cite{conneau2018xnli}, an adversarial cross-lingual alignment was proposed by \citet{qi2020translation}. Adding a self-attention layer on top of multilingual BERT \cite{devlin2018bert} or XLM \cite{lample2019cross}, the model learns the XNLI task while trying to fool the language discriminator in order to produce language-agnostic input representations. While improvements over baselines were reported, the best scores were around 2-3 points behind the standard XLM-R model. \section{Methodology} We introduce \textbf{XeroAlign}, a conceptually simple, efficient and effective method for task-specific alignment of sentence embeddings generated by PXLMs, aimed at effective zero-shot cross-lingual transfer. XeroAlign is an auxiliary loss function that is jointly optimised with the primary task, e.g. text classification and/or slot filling, as shown in Figure \ref{fig:xlmra}.
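As a self-contained illustration of this joint objective, the following NumPy sketch mimics one training step; the encoder outputs, classifier weights and hidden size are hypothetical stand-ins for illustration, not the actual XLM-R implementation:

```python
import numpy as np

def cross_entropy(logits, label):
    # task loss computed on the source-language [CLS] embedding
    z = logits - logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]

def xeroalign_step(cls_source, cls_target, W, label):
    """One training step: task loss + MSE alignment loss, jointly optimised."""
    task_loss = cross_entropy(cls_source @ W, label)
    align_loss = np.mean((cls_source - cls_target) ** 2)  # sim = MSE
    return task_loss + align_loss

# toy example: hidden size 4, two intent classes
rng = np.random.default_rng(0)
cls_en = rng.standard_normal(4)       # [CLS] of an English utterance
cls_de = rng.standard_normal(4)       # [CLS] of its translation
W = rng.standard_normal((4, 2))       # linear classifier weights
loss = xeroalign_step(cls_en, cls_de, W, label=1)
```

In the real model, both losses are backpropagated through the shared PXLM parameters, which is what pulls the source and target [CLS] embeddings together while the classifier is being learnt.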
We use a standard architecture for each task and only add the minimum required number of new parameters. For text classification tasks, we use the \verb|[CLS]| token of the PXLM as our pooled sentence representation. A linear classifier (hidden size \verb|x| number of classes) is learnt on top of the \verb|[CLS]| embedding using cross-entropy as the loss function (TASK A in Figure \ref{fig:xlmra}). For slot filling, we use the contextualised representations of each token in the input sequence. Once again, a linear classifier (hidden size \verb|x| number of slots) is learnt with a cross-entropy loss (TASK B in Figure \ref{fig:xlmra}).\\ Algorithm \ref{alg:xlm-ra} shows a standard training routine augmented with XeroAlign. Let $\mathit{PXLM}$ be a pretrained cross-lingual transformer language model, $X$ be the standard English training data and $U$ be the machine translated parallel utterances (from $X$). Those English utterances were translated into each target language using our internal machine translation service. A public online translator, e.g. Google Translate, can also be used. For the PAWS-X task, we use the public version of the translated data\footnote{\url{https://github.com/google-research-datasets/paws}}. We then obtain the $CLS_S$ and $CLS_T$ embeddings by taking the first token of the $\mathit{PXLM}$ output sequence for the source $x_s$ and target $x_t$ sentences respectively. Using a Mean Squared Error loss function as our similarity function $sim$, we compute the distance/loss between $CLS_S$ and $CLS_T$. The sum of the losses ($total\_loss$) is then backpropagated normally. We have conducted all XeroAlign training as multi-task learning for the following reason. When the $\mathit{PXLM}$ is aligned first, followed by primary task training, the $\mathit{PXLM}$ exhibits poor zero-shot performance. Similarly, learning the primary task first, followed by XeroAlign, fails as the primary task is partially unlearned during alignment.
This is most likely due to the catastrophic forgetting problem in deep learning \cite{goodfellow2013empirical} hence the need for joint optimisation. \begin{algorithm}[t] \begin{algorithmic}[1] \Let{$\mathit{PXLM}$}{Pretrained Cross-lingual LM} \Let{$X$}{Training data tuples in English} \Let{$U$}{Utterances translated into Target Lang.} \Let{$sim$}{similarity function e.g. MSE} \Statex \State \textit{\# training loop} \For{$(x_s, y), x_t \in X, U$} \Let{$task\_loss$}{$task\_loss\_fn(x_s, y)$} \Let{$CLS_S$}{$\mathit{PXLM}(x_s)$} \Let{$CLS_T$}{$\mathit{PXLM}(x_t)$} \Let{$align\_loss$}{$sim(CLS_S, CLS_T)$} \Let{$total\_loss$}{$task\_loss+align\_loss$} \State \textit{\# update model parameters} \EndFor \end{algorithmic} \caption{The XeroAlign algorithm. \label{alg:xlm-ra}} \end{algorithm} \subsection{Experimental Setup} In order to make our method easily accessible and reproducible\footnote{Email \textbf{Milan Gritta} to request code and/or data.}, we use the publicly available XLM-R transformer from Huggingface \cite{wolf2019huggingface} built on top of PyTorch \cite{NEURIPS2019_9015}. We set a single seed for all experiments and a single learning rate for each dataset. No hyperparameter sweep was conducted to ensure a robust, low-resource, real-world deployment and to make a fair comparison with SOTA models. XLM-R was XeroAligned over 10 epochs and optimised using Adam \cite{kingma2014adam} and a OneCycleLR \cite{smith2019super} scheduler. \subsection{Datasets} We evaluate XeroAlign with four datasets covering 11 unique languages (en, de, es, fr, th, hi, ja, ko, zh, tr, pt) across three tasks (intent classification, slot filling, paraphrase detection). \paragraph{PAWS-X} \cite{yang2019paws} is a multilingual version of PAWS \cite{zhang2019paws}, a binary classification task for identifying paraphrases. 
Examples were sourced from Quora Question Pairs\footnote{\url{https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs}} and Wikipedia, chosen to mislead simple `word overlap' models. PAWS-X contains 4,000 random examples from PAWS, for the development and test set, covering seven languages (en, de, es, fr, ja, ko, zh), totalling 48,000 human translated paraphrases. We use the multilingual train sets that contain approximately 49K machine translated examples. \paragraph{MTOD} is a Multilingual Task-Oriented Dataset provided by \citet{schuster2018cross}. It covers three domains (alarm, weather, reminder) and three languages of different sizes: English (43K), human-translated Spanish (8.3K) and Thai (5K). MTOD comprises two correlated NLU tasks, intent classification and slot filling. The SOTA scores are reported by \citet{li2020mtop} and \citet{schuster2018cross}. \paragraph{MTOP} is a Multilingual Task-Oriented Parsing dataset provided by \citet{li2020mtop} that covers interactions with a personal assistant. We use the standard flat version, which has the highest reported zero-shot SOTA scores by \citet{li2020mtop}. A tree-like compositional version of the data designed for nested queries is also provided. MTOP contains 100K+ human-translated examples in 6 languages (en, de, es, fr, th, hi) spanning 11 domains. \paragraph{MultiATIS++} by \citet{xu2020end} is an extension of the Multilingual version of ATIS \cite{upadhyay2018almost}, initially translated into Hindi and Turkish only. Six new (human-translated\footnote{We have encountered some minor issues with slot annotations. Around 60-70 entities across 5 languages (fr, zh, hi, ja, pt) had to be corrected as the number of slot tags did not agree with the number of tokens in the sentence. However, this only concerns a tiny fraction of the $\sim$400k+ tags/tokens covered by those languages. 
We are happy to share the corrections, too.}) languages (de, es, fr, zh, ja, pt) were added with $\sim$4 times as many examples each (around 6K per language) for 9 languages in total. Both of these datasets are based on the original English-only ATIS \cite{price1990evaluation} featuring users interacting with an automated air travel information service (via intent recognition and slot filling tasks). \begin{table*}[t] \centering \setlength{\tabcolsep}{7pt} \begin{tabular}{l|ccccc|c} \toprule \textbf{Model} & \textbf{Spanish} & \textbf{French} & \textbf{German} & \textbf{Hindi} & \textbf{Thai} & \textbf{Average} \\ \midrule XLM-R Target & 95.9 / 91.2 & 95.5 / 89.6 & 96.6 / 88.3 & 95.1 / 89.1 & 94.8 / 87.7 & 95.6 / 89.2 \\ \midrule XLM-R 0-shot & 91.9 / 84.3 & 93.0 / \textbf{83.7} & 87.5 / 80.7 & 91.4 / 76.5 & 87.6 / 55.6 & 90.3 / 76.2 \\ XLM-RA & \textbf{96.6} / 84.4 & \textbf{96.5} / 83.3 & \textbf{95.7} / \textbf{84.5} & \textbf{95.2} / \textbf{80.1} & \textbf{94.1} / \textbf{69.1} & \textbf{95.6} / \textbf{80.3} \\ \citet{li2020mtop} & 96.3 / \textbf{84.8} & 95.1 / 82.5 & 94.8 / 83.1 & 94.2 / 76.5 & 92.1 / 65.6 & 94.5 / 77.9 \\ \bottomrule \end{tabular} \caption{MTOP results as Intent Classification Accuracy / Slot Filling F-Score. Best English scores: 97.3 / 93.9.} \label{tab:mtop_detailed} \end{table*} \subsection{Metrics} We use standard evaluation metrics, that is, accuracy for paraphrase detection and intent classification, F-Score\footnote{\url{https://pypi.org/project/seqeval/}} for slot filling. \section{Results and Analysis} We use `XLM-R Target' to refer to model performance on the labelled target language. We provide zero-shot scores (denoted `XLM-R 0-shot'), the XLM-RA results and the reported SOTA figures. For PAWS-X, we provide a second baseline called `Translate-Train', which comprises the union of Target and English train data. 
Scores are given for the large\footnote{Large=24 layers, 550M par, Base=12 layers, 270M par.} model unless specified otherwise. \\ The XeroAligned XLM-R achieves state-of-the-art scores on three task-oriented XNLU datasets. For MTOP (Table \ref{tab:mtop_detailed}), the intent classification accuracy (+1.1) and slot filling F-Score (+2.4) averaged over 5 languages improved on XLM-R-Large with translated utterances, slot label projection and distant supervision \cite{li2020mtop}. For MultiATIS++ (Table \ref{tab:multi_atis}), XLM-RA shows an improved intent accuracy (+1.1) and slot F-Score (+3.2) over 8 languages, as compared to a large multilingual BERT with translated utterances and slot label softalign \cite{xu2020end}. For MTOD (Table \ref{tab:mtod}), the classification accuracy (+1.3) and slot tagging F-Score (+5.0) on average improved on XLM-R-Large with translated utterances, slot label projection and distant supervision \cite{li2020mtop}. MTOD is the only dataset where the XLM-RA-base model outperforms (albeit marginally) XLM-RA-large. Finally, we also compare our intent classification accuracy (+8.1) and slot filling F-Score (+8.7) for the MTOD dataset to a BiLSTM with translated utterances and slot label projection \cite{schuster2018cross}, which had the SOTA F-Score for Thai.\\ On the adversarial paraphrase task (PAWS-X, Table \ref{tab:paws_x}), averaged over 7 languages, XLM-RA scores marginally higher (+0.1 accuracy) than VECO \cite{luo2020veco}, a variable cross-lingual encoder-decoder, and marginally lower (-0.2 accuracy) than FILTER \cite{fang2020filter}, an enhanced cross-lingual fusion model, which was the SOTA until 01/2021. We now turn our attention to the improvements over `vanilla' zero-shot XLM-R.
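To make the objective under comparison concrete, the following is a minimal sketch of a task-specific alignment loss in the spirit of XeroAlign: a mean squared distance between sentence embeddings of translation pairs, which is driven towards zero during training. The pooling and distance choices below are illustrative assumptions for exposition, not a verbatim description of our implementation:

```python
def alignment_loss(source_embs, target_embs):
    # Mean squared distance between sentence embeddings of translation
    # pairs (e.g. [CLS]-style vectors); smaller values mean the two
    # languages are embedded closer together.
    assert len(source_embs) == len(target_embs)
    total = 0.0
    for u, v in zip(source_embs, target_embs):
        total += sum((a - b) ** 2 for a, b in zip(u, v))
    return total / len(source_embs)

# Identical embeddings of an English/German translation pair: zero loss.
en = [[0.2, 0.4, 0.4]]
de = [[0.2, 0.4, 0.4]]
assert alignment_loss(en, de) == 0.0
```

In practice such a loss would be added to the primary task loss and minimised jointly over batches of parallel utterances.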
\begin{table*}[t] \centering \setlength{\tabcolsep}{8pt} \begin{tabular}{l|ccccccc|c} \toprule \textbf{Model} & \textbf{EN} & \textbf{DE} & \textbf{ES} & \textbf{FR} & \textbf{JA} & \textbf{KO} & \textbf{ZH} & \textbf{Average} \\ \midrule XLM-R Target & 95.6 & 90.9 & 92.5 & 92.4 & 85.1 & 86.4 & 87.2 & 90.0 \\ XLM-R Translate-Train & 95.7 & 91.6 & 92.3 & 92.5 & 85.2 & 85.8 & 87.7 & 90.1 \\ \midrule XLM-R 0-shot & 95.6 & 91.0 & 91.1 & 91.9 & 81.7 & 81.6 & 85.4 & 88.3 \\ \citet{luo2020veco} & \textbf{96.4} & \textbf{93.0} & \textbf{93.0} & 93.5 & 87.2 & 86.8 & 87.9 & 91.1 \\ XLM-RA & 95.8 & 92.9 & \textbf{93.0} & \textbf{93.9} & 87.1 & 87.1 & 88.9 & 91.2 \\ \citet{fang2020filter} & 95.9 & 92.8 & \textbf{93.0} & 93.7 & \textbf{87.4} & 87.6 & \textbf{89.6} & \textbf{91.4} \\ \toprule \multicolumn{9}{c}{\textit{Section \ref{domain-shift} experiment below: aligning with development/test set utterances but no task labels.}} \\ \midrule XLM-RA (Exp) & 95.8 & 94.2 & 94.4 & 94.8 & 91.6 & 92.6 & 92.1 & 93.6 \\ \bottomrule \end{tabular} \caption{PAWS-X results as Paraphrase Classification Accuracy. } \label{tab:paws_x} \end{table*} \subsection{Zero-shot Text Classification} The intent classification accuracy of our XeroAligned XLM-R exceeds that of XLM-R trained with labelled data, averaged across three task-oriented XNLU datasets and 15 test sets (Tables \ref{tab:mtop_detailed}, \ref{tab:multi_atis} and \ref{tab:mtod}). Starting from an already competitive baseline model, XeroAlign improves intent classification by $\sim$5-10 points (larger for XLM-R-base, see Table \ref{tab:base} in Section \ref{smaller}). The benefits of cross-lingual alignment are particularly evident in low-resource languages (tr, hi, th), which is encouraging for real-world applications with limited resources. Zero-shot paraphrase detection is another instance of text classification. 
We report XLM-RA accuracy in Table \ref{tab:paws_x}, which exceeds both the Target and Translate-Train averages by over 1 point, and the zero-shot XLM-R baseline by almost 3 points (even more so for XLM-RA-base). \\ Note that the \textit{amount of training data} is the same for XeroAlign and Target (except MTOD), so there is no advantage from using additional data. The primary task, which is learnt in English, has a somewhat higher average performance ($\sim$1.5 points) than the Target languages. We hypothesise that transferring this advantage from a high-resource language via \textbf{XeroAlign} is the primary reason behind its effectiveness compared to using target data directly. Given that Target performance has recently been exceeded with MoCo \cite{he2020momentum} and the similarities between contrastive learning and XeroAlign, our finding seems in line with recent work, which is subject to ongoing research \cite{zhao2020makes}. \subsection{Zero-shot Structured Prediction} While XLM-RA is able to exceed Target accuracy for text classification tasks, even our best F-Scores for slot filling are 8-19 points behind the Target scores. This is despite a strong average improvement of +4.1 on MTOP, +5.7 on MultiATIS++ and +5.2 on MTOD for the XLM-R-large model (greater for the XLM-RA-base model). We think the gap is primarily down to the difficulty of the sequence labelling task, i.e. zero-shot text classification is `easier' than zero-shot slot filling, which is manifested by a $\sim$10-20 point gap between scores.
Sentences in various languages have markedly different input lengths and token/entity order; thus, word-level inference in cross-lingual zero-shot settings becomes significantly more challenging than sentence-level prediction, because syntax plays a less critical role in sequence classification.\\ A less significant reason, related to XeroAlign's architecture, may be our choice to align the PXLM on the \verb|[CLS]| embedding, which is subsequently used `as is' for text classification tasks. Aligning individual token representations through the \verb|[CLS]| embedding improves structured prediction as well; however, as the token embeddings are not directly used, the parameters in the uppermost transformer layer (following Multi-Head Attention) never receive any gradient updates from XeroAlign. Closing this gap is a challenging opportunity, which we reserve for future work. Once again, the languages with lower NLP resources (th, hi, tr) tend to benefit the most from cross-lingual alignment. \subsection{XeroAlign Generalisation} We briefly investigate the generalisation of XeroAlign, taking the PAWS-X task as our use case. We are interested in finding out whether aligning on \textit{just one language} has any zero-shot benefits for other languages. Table \ref{tab:one_language} shows the XLM-RA results when aligned on a single language (rows) and tested on other languages (columns).
\begin{table}[h] \centering \small \setlength{\tabcolsep}{4pt} \begin{tabular}{c|c|cccccc|c} \toprule - & \textbf{EN} & \textbf{DE} & \textbf{ES} & \textbf{FR} & \textbf{JA} & \textbf{KO} & \textbf{ZH} & \textbf{AVE} \\ \midrule \textbf{DE} & \textbf{96.0} & \textbf{92.9} & 92.3 & 92.6 & 84.0 & 84.5 & 86.5 & 89.8 \\ \textbf{ES} & 95.9 & 92.6 & \textbf{93.0} & 93.1 & 83.9 & 84.1 & 86.4 & 89.9 \\ \textbf{FR} & 95.9 & 92.5 & 92.9 & \textbf{93.9} & 83.9 & 84.1 & 86.9 & 90.0 \\ \textbf{JA} & \textbf{96.0} & 92.6 & 91.8 & 93.1 & \textbf{87.1} & \textbf{87.4} & 87.9 & \textbf{90.8} \\ \textbf{KO} & 95.7 & 92.6 & 92.0 & 92.7 & 80.6 & 87.1 & 87.3 & 90.5 \\ \textbf{ZH} & 95.5 & 92.0 & 92.6 & 92.7 & 86.3 & 86.2 & \textbf{88.9} & 90.6 \\ \midrule \textbf{EU} & 96.2 & 92.5 & 93.0 & 94.1 & 84.9 & 85.2 & 87.1 & 90.4 \\ \textbf{AS} & 96.0 & 93.0 & 92.1 & 92.7 & 85.9 & 87.6 & 88.4 & 90.8 \\\bottomrule \end{tabular} \caption{XLM-RA aligned on \textbf{one} PAWS-X language (rows), evaluated on others (columns). AVE = average. EU = European languages, AS = Asian languages.} \label{tab:one_language} \end{table} We can see that aligning on Asian languages (Japanese in particular) attains the best average improvement compared to aligning with European languages. This seems to reflect the known performance bias of XLM-R towards (high-resource) European languages, all of which show a strong improvement, regardless of the alignment language. Aligning only on European languages (de, es, fr) improves the average to 90.4 but aligning on Asian languages (zh, ko, ja) does not improve over Japanese (90.8). In any case, it is notable that the XLM-R model XeroAligned on \textit{just a single language} is able to carry this advantage well beyond that language, and thus improves average accuracy by 1.5-2.5 points over the baseline (88.3) from Table \ref{tab:paws_x}. This effect is even stronger for MTOP (+4 accuracy, +3 F-Score).
\begin{table*}[t] \centering \small \setlength{\tabcolsep}{3.7pt} \begin{tabular}{l|cccccccc|c} \toprule \textbf{Model} & \textbf{DE} & \textbf{ES} & \textbf{FR} & \textbf{TR} & \textbf{HI} & \textbf{ZH} & \textbf{PT} & \textbf{JA} & \textbf{AVE} \\ \midrule XLM-R Target & 97.0/95.3 & 97.3/87.9 & 97.8/93.8 & 80.6/74.0 & 89.7/84.1 & 95.5/95.9 & 97.2/94.1 & 95.5/92.6 & 93.8/89.7 \\ \midrule XLM-R 0-shot & 96.4/84.8 & 97.0/85.5 & 95.3/81.8 & 76.2/41.2 & 91.9/68.2 & 94.3/82.5 & 90.9/81.9 & 89.8/77.6 & 91.5/75.5 \\ XLM-RA & \textbf{97.6}/84.9 & \textbf{97.8}/\textbf{85.9} & 95.4/\textbf{81.4} & 93.4/\textbf{70.6} & \textbf{94.0}/\textbf{79.7} & \textbf{96.4}/83.3 & \textbf{97.6}/79.9 & \textbf{96.1}/\textbf{83.5} & \textbf{96.0}/\textbf{81.2} \\ \citet{jain2019entity} & 96.0/87.5 & 97.0/84.0 & 97.0/79.8 & \textbf{93.7}/44.8 & 92.4/77.2 & 95.2/\textbf{85.1} & 96.5/\textbf{81.7} & 88.5/82.6 & 94.5/77.8\\ \citet{xu2020end} & 96.7/\textbf{89.0} & 97.2/76.4 & 97.5/79.6 & \textbf{93.7}/61.7 & 92.8/78.6 & 96.0/83.3 & 96.8/76.3 & 88.3/79.1 & 94.9/78.0\\ \bottomrule \end{tabular} \caption{MultiATIS++ as Intent Classification Accuracy / Slot Filling F1. English model: 97.9/97. 
AVE = average.} \label{tab:multi_atis} \end{table*} \begin{table}[t] \centering \setlength{\tabcolsep}{4.3pt} \begin{tabular}{l|cc|c} \toprule \textbf{Model} & \textbf{Spanish} & \textbf{Thai} & \textbf{AVE} \\ \midrule $\mathsection$ Target (B) & 98.7/89.1 & 96.8/93.1 & 97.8/91.1 \\ $\mathsection$ Target (L) & 98.8/89.8 & 97.8/94.4 & 98.3/92.1 \\ \midrule $\mathsection$ 0-shot (B) & 90.7/70.1 & 71.9/53.1 & 81.3/61.6 \\ $\mathsection$ 0-shot (L) & 97.1/85.7 & 82.8/47.7 & 90.0/66.7 \\ XLM-RA (B) & 98.9/86.9 & 97.9/\textbf{60.2} & 98.4/\textbf{73.6} \\ XLM-RA (L) & \textbf{99.2}/\textbf{88.4} & \textbf{98.4}/57.3 & \textbf{98.8}/72.9 \\ \citeauthor{schuster2018cross} & 85.4/72.9 & 95.9/55.4 & 90.7/64.2 \\ \citeauthor{li2020mtop} & 98.0/83.0 & 96.9/52.8 & 97.5/67.9 \\ \bottomrule \end{tabular} \caption{MTOD results as Intent Classification Accuracy / Slot Filling F-Scores. Our best English score: 99.3/96.6. (B) = Base, (L) = Large, $\mathsection$ = XLM-R model.} \label{tab:mtod} \end{table} \subsection{Smaller Language Models} \label{smaller} We observed that the XeroAligned XLM-R-base model shows an even greater improvement than its larger counterpart with 24 layers and 550M parameters. Accordingly, we report the XLM-RA-base results (12 layers, 270M parameters) in Table \ref{tab:base} as the average scores over all languages for MTOP, PAWS-X, MTOD and MultiATIS++. We use relative \% improvement over the baseline XLM-R to compare the models fairly. The paraphrase detection accuracy improves by 3.3\% for the large (L) PXLM versus 6.5\% for the base (B) model.
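The relative improvements quoted here can be reproduced directly from the reported averages; a small sketch of the computation, using the PAWS-X figures from Tables \ref{tab:paws_x} and \ref{tab:base}:

```python
def relative_improvement(baseline, score):
    # Relative % improvement of `score` over the zero-shot `baseline`.
    return 100.0 * (score - baseline) / baseline

# PAWS-X averages: large model 88.3 -> 91.2, base model 81.7 -> 87.0,
# i.e. roughly 3.3% (L) versus 6.5% (B) relative improvement.
large = round(relative_improvement(88.3, 91.2), 1)  # 3.3
base = round(relative_improvement(81.7, 87.0), 1)   # 6.5
```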
\\ \begin{table}[h] \centering \small \setlength{\tabcolsep}{4.5pt} \begin{tabular}{l|cccc} \toprule \textbf{Model} & \textbf{MTOP} & \textbf{PAWS-X} & \textbf{M-ATIS} & \textbf{MTOD}\\ \midrule $\mathsection$ Target & 94.0/88.1 & 85.2 & 89.0/86.3 & 97.6/92.2 \\ \midrule $\mathsection$ 0-shot & 80.8/68.9 & 81.7 & 76.9/65.0 & 80.1/64.8\\ XLM-RA & 93.3/78.9 & 87.0 & 93.0/73.4 & 98.5/74.7\\ \bottomrule \end{tabular} \caption{The XLM-R(A) base model averages as intent classification accuracy / slot filling F-Score (or paraphrase accuracy for PAWS-X). $\mathsection$ = XLM-R model.} \label{tab:base} \end{table} Across three XNLU datasets, XeroAlign improves the standard XLM-R by 9.5\% (L) versus 14.2\% (B) on structured prediction (slot filling) and by 7.1\% (L) versus 19.8\% (B) on text classification (intent recognition). Therefore, applications with lower computational budgets can also achieve competitive performance with our simple cross-lingual alignment method for transformer-based PXLMs. In fact, the base XLM-RA can reach (on average) up to 90-95\% of the performance of its larger sibling using lower computational resources. \subsection{Discussion} \label{domain-shift} The XLM-RA intent classification accuracy is (on average) within $\sim$1.5 points of English accuracy across three task-oriented XNLU datasets. However, the PAWS-X paraphrase detection accuracy is almost 5 points below English models, which is also the case for other state-of-the-art PXLMs in Table \ref{tab:paws_x}. Why does XLM-R struggle to generalise more on this task for languages other than English? We can exclude translation issues since all models used the publicly available PAWS-X machine-translated data. Instead, we think that the greater-than-expected deficit may be caused by a) domain/topic shift within the dataset and b) a possible data leakage for English.
The original PAWS data \cite{zhang2019paws} was sourced from Quora Question Pairs and Wikipedia, with neither being limited to any particular domain. As the English Wikipedia provides a large chunk of the English training data for XLM-R, it is possible that some of the English PAWS sentences may have been seen in training, which could explain the smaller generalisation gap for English. \\ We also want to find out whether this gap will diminish if we artificially remove the domain shift. To this end, we use parallel utterances (but not task labels) from the development and test sets to XeroAlign the XLM-R on an extended vocabulary that may not be present in the train set. We observe that the (Exp) model in Table \ref{tab:paws_x} shows an average improvement of over 2 points compared to the best XLM-RA and other SOTA models, suggesting that the increased generalisation gap may be caused by a domain shift for non-English languages on this task. When that topic shift gets (perhaps artificially) removed, the model is able to bring accuracy back within $\sim$2 points of the English model (in line with XNLU tasks). Note that this effect can be masked for English due to the language biases in data used for pretraining. \\ In Section \ref{contrastive_learning}, we outlined the most conceptually similar methods that conducted large-scale model pretraining with task-agnostic parallel sentence alignment as part of the training routine \cite{hu2020explicit,feng2020language,pan2020multilingual,chi2020infoxlm}. Where ablation studies were provided, the average improvement attributed to contrastive alignment was $\sim$0.2-0.3 points (though the tasks were slightly different). While we do not directly compare XeroAlign to contrastive alignment, it seems that task-specific alignment may be a more effective and efficient technique to improve zero-shot transfer, given the magnitude of our results.
This leads us to conclude that the effectiveness of our method comes primarily from cross-lingual alignment of the task-specific vocabulary. Language is inherently ambiguous, and the semantics of words and phrases shift somewhat from topic to topic; therefore, a cross-lingual alignment of sentence embeddings \textit{within the context of the target task} should lead to better results. Our simplified, lightweight method only uses translated task utterances, a single encoder model and positive samples, the alignment of which is challenging enough without arbitrary negative samples. In fact, this is the main barrier for applying contrastive alignment in task-specific NLP scenarios, i.e. the lack of carefully constructed negative samples. For smaller datasets, random negative samples would mean that the task is either too easy to solve, resulting in no meaningful learning, or that the model would receive conflicting signals by training on false positive examples, leading to degenerate learning. \subsection{Future Work} Our recommendations for avenues of promising follow-up research involve any of the following: i) aligning more tasks such as Q\&A, Natural Language Inference, Sentence Retrieval, etc.; ii) including additional languages, especially low-resource ones \cite{joshi2020state}; and iii) attempting large-scale, task-agnostic alignment of PXLMs followed by task-specific alignment, which is reminiscent of the common transfer learning paradigm of pretraining with Masked Language Modelling before fine-tuning on the target task. To that end, there is already some emergent work on monolingual fine-tuning with an additional contrastive loss \cite{gunel2020supervised}.
For the purposes of multilingual benchmarks \cite{hu2020xtreme,Liang2020XGLUEAN} or other purely empirical pursuits, an architecture or language-specific hyperparameter search should optimise XLM-RA for significantly higher performance, since the large transformer does not always outperform its smaller counterpart and our hyperparameters remained fixed for all languages. Most importantly, follow-up work needs to improve zero-shot transfer for cross-lingual \textit{structured prediction} such as Named Entity Recognition \cite{pan2017cross}, POS Tagging \cite{nivre2016universal} or Slot Filling \cite{schuster2018cross}, which is still lagging behind Target scores. \section{Conclusions} We have introduced \textbf{XeroAlign}, a conceptually simple, efficient and effective method for task-specific alignment of sentence embeddings generated by PXLMs, aimed at effective zero-shot cross-lingual transfer. XeroAlign is an auxiliary loss function that is easily integrated into the unaltered primary task/model. XeroAlign leverages translated data to bring the sentence embeddings in different languages closer together. We evaluated XeroAligned XLM-R models (named XLM-RA) on zero-shot cross-lingual text classification, adversarial paraphrase detection and slot filling tasks, achieving SOTA (or near-SOTA) scores across 4 datasets covering 11 unique languages. Our ultimate vision is a level of zero-shot performance at or near that of Target. The XeroAligned XLM-R partially achieved that goal by exceeding the intent classification and paraphrase detection accuracies of XLM-R trained with labelled data.
\section{Introduction} A system of polynomial equations is called \emph{partition regular} if every finite colouring of the positive integers admits monochromatic non-constant\footnote{A solution $(x_{1},\ldots,x_{s})$ is \emph{non-constant} if $x_i\neq x_j$ holds for some $i\neq j$.} solutions to the system.\footnote{Some authors allow constant monochromatic solutions in the definition of partition regularity.} A foundational result in the field of arithmetic Ramsey theory is \emph{Rado's criterion} \cite[Satz IV]{Rad1933}, which provides necessary and sufficient conditions for a finite system of linear equations to be partition regular. For example, given $s\geqslant 3$ and non-zero integers $a_{1},\ldots,a_s$, Rado's criterion asserts that the linear homogeneous equation \begin{equation}\label{eqn1.1} a_{1}x_{1} + \cdots + a_s x_s = 0 \end{equation} is partition regular if and only if there exists a non-empty set $I\subseteq\{1,\ldots,s\}$ such that $\sum_{i\in I}a_i = 0$. A similar, stronger notion is that of \emph{density regularity}, which refers to systems of equations which have non-constant solutions over all sets of positive integers $A$ satisfying \begin{equation*} \limsup_{N\to\infty}\frac{|A\cap\{1,2,\ldots,N\}|}{N}>0. \end{equation*} Such sets $A$ are said to have \emph{positive upper density}. An influential Fourier analytic argument of Roth \cite{Rot1954} shows that if $s\geqslant 3$, then the linear homogeneous equation (\ref{eqn1.1}) is density regular if and only if $a_1 + \cdots + a_s = 0$. Recent work on partition regularity has focused on generalising the theorems of Rado and Roth by finding necessary \cite{BaLuMo21,DL18} and sufficient \cite{Cha2022,Cho2018,CLP2021,Pre2021,Sch2021} conditions for partition and density regularity for general systems of polynomial equations. 
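For a single linear equation, Rado's condition amounts to a finite search over non-empty subsets of the coefficients; the following brute-force sketch illustrates the criterion:

```python
from itertools import combinations

def rado_condition(coeffs):
    # True iff some non-empty subset of the coefficients sums to zero,
    # i.e. Rado's partition-regularity criterion for the single equation
    # a_1 x_1 + ... + a_s x_s = 0.
    return any(
        sum(subset) == 0
        for r in range(1, len(coeffs) + 1)
        for subset in combinations(coeffs, r)
    )

# Schur's equation x + y - z = 0 is partition regular: 1 + (-1) = 0.
assert rado_condition([1, 1, -1])
# x + y - 3z = 0 admits no vanishing subset sum, so it is not partition regular.
assert not rado_condition([1, 1, -3])
```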
In this paper we consider equations of the form \begin{equation}\label{eqn1.2} a_{1}P(x_{1}) + \cdots + a_{s}P(x_{s}) = 0, \end{equation} where $P$ is a polynomial with integer coefficients, and $a_1,\ldots,a_s$ are non-zero integers. Previous work of the second author with Lindqvist and Prendiville \cite{CLP2021} extended Rado's criterion to equations (\ref{eqn1.2}) for $P(x)=x^{d}$ under the assumption that the number of variables $s$ is sufficiently large in terms of $d$. In this paper, we extend these results further by completely characterising partition and density regularity for equations (\ref{eqn1.2}) in sufficiently many variables. To state our main results, we require the following definition. An integer polynomial $P\in\Z[X]$ is called \emph{intersective} if for every positive integer $n$, there exists an integer $x$ such that $P(x)$ is divisible by $n$. Integer polynomials which admit integer zeros are intersective, however, there exist numerous intersective polynomials which have no rational zeros, such as $P(X)=(X^{3} -19)(X^{2} +X +1)$. Our first theorem shows that Rado's criterion and Roth's theorem hold for equations in intersective polynomials with sufficiently many variables. \begin{thm}\label{thm1.1} Let $d\geqslant 2$ be an integer, and define \begin{equation}\label{eqn1.3} s_1(d) := \begin{cases} 5, &\text{if } d=2; \\ 9, &\text{if } d=3; \\ d^2-d+2\lfloor \sqrt{2d+2} \rfloor + 1, &\text{if } d \ge 4. \end{cases} \end{equation} Let $P$ be an intersective integer polynomial of degree $d$. Let $s\geqslant s_1(d)$ be an integer, and let $a_1,\ldots,a_s$ be non-zero integers. \begin{enumerate} \item[(PR)] The equation (\ref{eqn1.2}) is partition regular if and only if there exists a non-empty set $I\subseteq\{1,\ldots,s\}$ such that $\sum_{i\in I}a_i =0$. \item[(DR)] The equation (\ref{eqn1.2}) is density regular if and only if $a_1 +\cdots +a_s = 0$. 
\end{enumerate} \end{thm} \begin{remark} As we will soon clarify, intersectivity is also necessary. \end{remark} By performing a change of variables, one may interpret Theorem \ref{thm1.1} as generalisations of Rado and Roth's theorems to colourings and dense subsets respectively of the image set $P(\N):=\{P(1),P(2),P(3),\ldots\}$. More precisely, if $s\geqslant s_1(d)$ and $\sum_{i\in I}a_i =0$ for some non-empty $I\subseteq\{1,\ldots,s\}$, then Theorem \ref{thm1.1} asserts that the linear equation (\ref{eqn1.1}) admits non-constant monochromatic solutions with respect to any finite colouring of $P(\N)$. Similarly, if $a_1 + \cdots + a_s =0$, then Theorem \ref{thm1.1} implies that (\ref{eqn1.1}) has non-constant solutions over any set of positive integers $A$ satisfying \begin{equation*} \limsup_{N\to\infty}\frac{|A\cap\{P(1),\ldots,P(N)\}|}{N}>0. \end{equation*} \subsection{Inhomogeneous equations} Rado \cite{Rad1933} also studied inhomogeneous linear equations \begin{equation}\label{eqn1.4} a_{1}x_{1} + \cdots + a_{s}x_{s} = b, \end{equation} where $a_1,\ldots,a_s$ are non-zero integers and $b$ is a fixed integer. Rado showed that every finite colouring of the positive integers admits (possibly constant) monochromatic solutions to (\ref{eqn1.4}) if and only if $(a_{1} + \cdots + a_{s})$ divides $b$. If one does not permit constant solutions, then it was noted by Hindman and Leader \cite[Theorem 3.4]{HL2006} that (\ref{eqn1.4}) is partition regular if and only if $(a_{1} + \cdots + a_{s})$ divides $b$ and $\sum_{i\in I}a_i =0$ for some non-empty $I\subseteq\{1,\ldots,s\}$. Note that, by considering solutions over a non-zero residue class modulo a sufficiently large prime $p$, equation (\ref{eqn1.4}) cannot be density regular if $b\neq 0$. Our second theorem, of which Theorem \ref{thm1.1} is a special case, comprehensively characterises partition and density regularity for arbitrary polynomial analogues of (\ref{eqn1.4}) in sufficiently many variables. 
\begin{thm}\label{thm1.3} Let $d\geqslant 2$ be an integer, and define $s_1(d)$ by (\ref{eqn1.3}). Let $P$ be an integer polynomial of degree $d$, and let $s\geqslant s_1(d)$ be an integer. Let $a_1,\ldots,a_s$ be non-zero integers, and let $b$ be an integer. Consider the equation \begin{equation}\label{eqn1.5} a_{1}P(x_{1}) + \cdots + a_{s}P(x_{s}) = b. \end{equation} \begin{enumerate} \item[(PR)] The equation (\ref{eqn1.5}) is partition regular if and only if there exists a non-empty set $I\subseteq\{1,\ldots,s\}$ such that $\sum_{i\in I}a_i =0$ and an integer $m$ with $b=(a_{1}+\cdots +a_s)m$ such that $P(X)-m$ is an intersective polynomial. \item[(DR)] The equation (\ref{eqn1.5}) is density regular if and only if $b=a_1 +\cdots +a_s = 0$. \end{enumerate} \end{thm} Note that, if $a_1 + \cdots + a_s \neq 0$, then Theorem \ref{thm1.3} implies that (\ref{eqn1.5}) is partition regular only if $P$ is an intersective polynomial. In the case where $a_1 + \cdots + a_s = 0$, we see that the set of solutions to (\ref{eqn1.5}) is unchanged if we replace $P$ by the intersective polynomial $P - P(0)$. Thus, intersectivity is absolutely vital for partition regularity, and is not merely a technical assumption in Theorem \ref{thm1.1}. Our results are definitive in this regard, and also definitive in terms of the coefficients $a_1,\ldots,a_s,b$. In terms of the number of variables required, our results are state of the art in the sense that they match current progress on the asymptotic formula in Waring's problem. In the monomial case, the second author found with Lindqvist and Prendiville \cite{CLP2021} that $(1+o(1))d\log d$ variables suffice to characterise partition regularity. However, reducing the number of variables in that way requires estimates for moments of smooth Weyl sums that depend crucially on the multiplicative structure of the polynomial $P(x) = x^d$. 
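The intersectivity hypothesis can be probed numerically: for each modulus $n$ up to a bound, one searches for a root of $P$ modulo $n$. The sketch below is only a finite necessary check, not a proof (establishing intersectivity for all $n$ requires a $p$-adic argument), but it illustrates the definition on the example $P(X)=(X^{3}-19)(X^{2}+X+1)$ given earlier:

```python
def has_root_mod(P, n):
    # True iff P(x) = 0 (mod n) for some integer x.
    return any(P(x) % n == 0 for x in range(n))

def looks_intersective(P, bound):
    # Finite check only: a root of P modulo every n <= bound.
    return all(has_root_mod(P, n) for n in range(1, bound + 1))

P = lambda x: (x**3 - 19) * (x**2 + x + 1)  # intersective, no rational zero
Q = lambda x: x**2 + 2                       # not intersective: no root mod 4

assert looks_intersective(P, 60)
assert not has_root_mod(Q, 4)
```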
\subsection{Supersaturation} Frankl, Graham, and R\"{o}dl \cite[Theorem 1]{FGR1988} obtained a stronger, quantitative version of Rado's theorem for systems of linear homogeneous equations. More precisely, they showed that, for a given partition regular system of linear equations and for sufficiently large $N$, a positive proportion of all solutions to the system over $\{1,\ldots,N\}$ become monochromatic under any $r$-colouring of $\{1,\ldots,N\}$. They also obtained an analogous result for density regular linear systems \cite[Theorem 2]{FGR1988}. This phenomenon, in which a positive proportion of solutions are found to be monochromatic or lie over an arbitrary dense set, is termed \emph{supersaturation}, in analogy with similar results from extremal combinatorics. One significant corollary of supersaturation results is that one can obtain monochromatic solutions which are \emph{non-trivial}, in the sense that the variables of the solution are distinct. This may be readily deduced from supersaturation if one can first show that the set of trivial solutions is sparse in the set of all solutions. In previous work of the second author with Lindqvist and Prendiville \cite[Theorem 1.4]{CLP2021}, it was shown that partition regular equations of the form (\ref{eqn1.2}) with $P(x)=x^2 -1$ and $s\geqslant 5$ satisfy supersaturation. They also obtained similar results for partition regular linear homogeneous equations in logarithmically smoothed numbers \cite[Theorem 1.5]{CLP2021}. More recently, Prendiville \cite[Theorem 1.7]{Pre2021} has established supersaturation for partition regular equations (\ref{eqn1.2}) in the case where $P(x)=x^2$ and $s\geqslant 5$. Our next theorem demonstrates that partition and density regular equations (\ref{eqn1.2}) in sufficiently many variables satisfy supersaturation. Furthermore, as per the remark above, we can ensure that the solutions we obtain are non-trivial. 
\begin{thm}\label{thm1.4} Let $d\geqslant 2$ be an integer, and define $s_1(d)$ by (\ref{eqn1.3}). Let $P$ be an intersective integer polynomial of degree $d$. Let $s\geqslant s_1(d)$ be an integer, and let $a_1,\ldots,a_s$ be non-zero integers. Given a set of integers $\cA$, write \begin{equation*} \cS(\cA) := \{(x_{1},\ldots,x_s)\in\cA^s : x_i\neq x_j \text{ for all } i\neq j, \text{ and } a_1P(x_1) + \cdots + a_s P(x_s) =0\}. \end{equation*} \begin{enumerate} \item[(PR)] If there exists a non-empty set $I\subseteq\{1,\ldots,s\}$ such that $\sum_{i\in I}a_i =0$, then for any positive integer $r$ there exists a positive real number $c_1(r) = c_1(P;a_1,\ldots,a_s;r)$ and a positive integer $N_1 = N_1(P;a_1,\ldots,a_s;r)$ such that the following is true for any positive integer $N\geqslant N_1$. Given any $r$-colouring $\{1,\ldots,N\}=\cC_1 \cup\cdots\cup \cC_r$, there exists $k\in\{1,\ldots,r\}$ such that $|\cS(\cC_k)|\geqslant c_1(r)N^{s-d}$. \item[(DR)] If $a_1 +\cdots +a_s = 0$, then for any positive real number $\delta>0$ there exists a positive real number $c_2(\delta) = c_2(P;a_1,\ldots,a_s;\delta)$ and a positive integer $N_2 = N_2(P;a_1,\ldots,a_s;\delta)$ such that the following is true for any positive integer $N\geqslant N_2$. Given any set $A\subseteq\{1,\ldots,N\}$ satisfying $|A|\geqslant\delta N$, we have $|\cS(A)|\geqslant c_2(\delta)N^{s-d}$. \end{enumerate} \end{thm} \subsection{Linearised equations} In the course of proving our main theorems, we are led to study certain `linearised' equations. These take the form \begin{equation}\label{eqn1.6} L_1(\bn) = L_2(P(\bz)), \end{equation} where $P$ is an integer polynomial and $P(\bz)=(P(z_1),\ldots,P(z_{t}))$, for some non-degenerate linear forms $L_1$ and $L_2$ such that $L_1(1,\ldots,1)=0$. Here, a \emph{non-degenerate linear form in $t$ variables} $L:\Z^{t}\to\Z$ is a multilinear map of the form $L(\bx)=b_1 x_1 +\cdots +b_t x_t$, where $b_1,\ldots,b_t$ are non-zero integers. 
This naturally provides us with an opportunity to consider partition regularity criteria for such linearised equations (\ref{eqn1.6}). Recently, Prendiville \cite{Pre2021} studied the equation (\ref{eqn1.6}) in the case where $P(z)=z^2$, obtaining necessary and sufficient conditions for partition regularity as well as a counting result for certain partition regular equations of this form. By incorporating Prendiville's `cleaving' strategy into our methods, we obtain a counting result on partition regularity for linearised equations (\ref{eqn1.6}) in sufficiently many variables. \begin{thm}\label{thm1.5} Let $d\geqslant 2$ and $r$ be positive integers, and let $s_1(d)$ be defined by (\ref{eqn1.3}). Let $P$ be an intersective integer polynomial of degree $d$. Let $s$ and $t$ be positive integers satisfying $s+t\geqslant s_1(d)$. Let $L_1$ and $L_2$ be non-degenerate linear forms in $s$ and $t$ variables respectively, and assume that $L_{1}(1,1,\ldots,1)=0$. There exists a positive constant $c_0=c_0(L_1,L_2,P,r)$ and a positive integer $N_{0}=N_{0}(L_1,L_2,P,r)$ such that the following holds for every positive integer $N\geqslant N_0$. For any $r$-colouring $\{1,\ldots,N\}=\cC_{1}\cup\cdots\cup\cC_r$, there exists $k\in\{1,\ldots,r\}$ such that, on writing $M:=N^{d^{-r}}$, we have \begin{equation*} \left|\{ (\bn,\bz) \in \cC_k^{s} \times \cC_k^{t}: L_1(\bn) = L_2(P(\bz)) \}\right| \geqslant c_0 M^{d(s-1)+t}. \end{equation*} \end{thm} As observed by Prendiville \cite{Pre2021}, bounds of this shape are sharp for generic linearised equations (\ref{eqn1.6}). Consider, for example, the equation \begin{equation*} n_1 + \cdots + n_{s-1} - (s-1)n_s = s(z_1^d + \cdots + z_t^d). \end{equation*} If $(\bn,\bz)\in\{1,\ldots,N\}^{s+t}$ is a solution to this equation, then we see that $z_1,\ldots,z_t\leqslant N^{1/d}$.
Consequently, if we colour $\{1,\ldots,N\}=\cC_{1}\cup\cdots\cup\cC_r$ by taking $\cC_r := \{1,\ldots,M^d\}$ and \begin{equation*} \cC_i := \left\lbrace x\in\bN: N^{d^{-i}} <x\leqslant N^{d^{-(i-1)}}\right\rbrace \quad (1\leqslant i<r), \end{equation*} then we find that all monochromatic solutions to our equation come from $\cC_r$. We therefore conclude that there are at most $M^{d(s-1) +t}$ monochromatic solutions, which is within a constant factor of the lower bound given in Theorem \ref{thm1.5}. \subsection{Methods} As in the previous works \cite{Cha2022,Cho2018,CLP2021}, a key step in our argument is the application of a Fourier analytic \emph{transference principle}. The transference principle was originally developed by Green \cite{Gre2005A} to obtain solutions to linear equations in primes, and has subsequently been adapted to finding solutions over numerous different sets of arithmetic interest, such as the $k$th powers \cite[Part 2]{CLP2021}, logarithmically smooth numbers \cite[Part 3]{CLP2021}, and $k$th powers of primes \cite{Cho2018}. If we assume that there exists a non-empty set $I\subseteq\{1,\ldots,s\}$ such that $\sum_{i\in I}a_i =0$, then we may rewrite (\ref{eqn1.2}) as \begin{equation*} \sum_{i\in I}a_i P(x_{i}) = \sum_{j\in \{1,\ldots,s\}\setminus I}b_j P(x_{j}), \end{equation*} where $b_j = -a_j$ for all $j$. We then \emph{linearise} this equation to obtain a new equation \begin{equation*} \sum_{i\in I}a_i n_{i} = \sum_{j\in \{1,\ldots,s\}\setminus I}b_j P_{D}(z_{j}) \end{equation*} in variables $n_i$ and $z_j$, where $P_D$ is some auxiliary intersective polynomial (see (\ref{eqn3.9})). Counting solutions to this linearised equation may be accomplished more easily by using the arithmetic regularity lemma. We then use a transference principle to `transfer' solutions of the linearised equation to the original equation (\ref{eqn1.2}). 
The main contribution of this article is the development of a transference principle for intersective polynomials. Given an intersective polynomial $P$, we consider a $W$-tricked version of the image set $\{P(1),P(2),\ldots\}$, namely, a set of the form \begin{equation*} \cS_W=\left\{ \frac{P(x)-P(b)}{W_2}: x \equiv b \mmod{W_1} \right \}, \end{equation*} for some $1\leqslant b\leqslant W_1$. Here, $W$ is a product of powers of small primes, and $W\mid W_1 \mid W_2$ (see \S\ref{sec4} for further details). The upshot of working with this $W$-tricked set is that the elements of this new set are equidistributed in residue classes for small primes, whereas the original image set is not (consider, for example, the case where $P(X)=X^2$). To establish the desired transference principle, we construct a pseudorandom majorant of the set $\cS_W$ defined above. This is carried out in \S\S\ref{sec4}--\ref{sec7}, and makes use of the Hardy--Littlewood circle method. In particular, we study the properties of exponential sums of the form \begin{equation*} \sum_{y\leqslant q}e_{q}(aP_{D}(y)), \end{equation*} where $P_D$ is some auxiliary intersective polynomial defined in terms of some parameter $D\in\N$ (in our applications, one can take $D=W^2$), and as usual $e_q(ax):=\exp(2\pi i ax/q)$. One difficulty that arises here is that we require restriction estimates that are independent of the parameter $D$, though the coefficients of $P_D$ increase with $D$. To establish these uniform bounds, we exploit a key insight of Lucier~\cite{Luc2006}, that one can nevertheless bound the greatest common divisor of the coefficients of $P_D(X)-P_D(0)$ in terms of $P$ alone. Finally, having applied the transference principle, it remains to prove that the linearised equation (\ref{eqn1.6}) admits many solutions $(\bn,\bz)$ with the $z_j$ lying in a (translated and dilated) colour class and the $n_i$ lying in a dense subset of $\{1,\ldots,N\}$.
This is achieved by appealing to the arithmetic regularity lemma, as in \cite{Cha2022,Pre2021}. We finish this subsection with a brief description of how we deal with the colouring aspect. There is an increasingly popular mantra that every colouring phenomenon is driven by an underlying density phenomenon. In the case of homogeneous equations, this connection was realised in practice by the second author with Lindqvist and Prendiville \cite{CLP2021} using \emph{homogeneous sets}. Subsequently, a full theoretical explanation was provided by the first author \cite{Cha2020}, who showed that homogeneous systems are partition regular if and only if they admit solutions with variables drawn from an arbitrarily given homogeneous set. A fresh hurdle that arises in the present work, compared with \cite{CLP2021}, is that our equation is inhomogeneous, and so the theory of homogeneous sets does not help us. We resolve the issue by choosing the colour class which has the largest intersection with a certain polynomial Bohr set. For supersaturation, we supplement this with Prendiville's new cleaving technique, as alluded to earlier. \subsection{Organisation} We begin in \S\ref{sec2} by swiftly establishing necessary conditions for equations (\ref{eqn1.2}) and (\ref{eqn1.5}) to be partition or density regular. In particular, we prove the `only if' parts of Theorem \ref{thm1.1} and Theorem \ref{thm1.3}. We also give a short proof that Theorem \ref{thm1.4} implies Theorem \ref{thm1.1}, and that Theorem \ref{thm1.1} implies Theorem \ref{thm1.3}. In \S\ref{sec3}, we state the two main results which are the focus of this paper: Theorem \ref{thm3.4} and Theorem \ref{thm3.8}. We show that these two theorems imply all of our results stated above. We also recall some useful properties of intersective polynomials from \cite{Luc2006}, in particular, the notion of \emph{auxiliary intersective polynomials}.
The next four sections, \S\S\ref{sec4}--\ref{sec7}, are used to prove that Theorem \ref{thm3.4} follows from Theorem \ref{thm3.8}. In \S\ref{sec4}, we apply the $W$-trick and introduce the majorant $\nu$, the latter of which is the focus of our investigations in the next two sections. In \S\ref{sec5}, we use the Hardy--Littlewood circle method to establish a Fourier decay estimate for $\nu$. We continue in \S\ref{sec6} by establishing restriction estimates for $\nu$ and for a related majorant $\mu_D$. The conclusions of these three sections are combined in \S\ref{sec7} to apply a transference principle, which is used to complete the proof that Theorem \ref{thm3.8} implies Theorem \ref{thm3.4}. Finally, in \S\ref{sec8}, we prove Theorem \ref{thm3.8} by using a version of Green's arithmetic regularity lemma. We also include a section in the appendix on polynomial congruences, the results of which are used in \S\ref{sec4} to execute the $W$-trick. \subsection*{Notation} Let $\N$ denote the set of positive integers. For each prime $p$, let $\bQ_p$ and $\bZ_p$ denote the $p$-adic numbers and the $p$-adic integers respectively. Given a real number $X>0$, we write $[X] := \{n\in\N:n\leqslant X\}$. Set $\bT = [0,1]$. For $q \in \bN$ and $x \in \bR$, we write $e(x) = e^{2 \pi i x}$ and $e_q(x) = e(x/q)$. For $P(x) \in \bZ[x]$ and $\bx = (x_1,\ldots,x_s)$, where $s \in \bN$, we abbreviate $P(\bx) = (P(x_1),\ldots,P(x_s))$. If $P$ is a polynomial with integer coefficients, we write $\gcd(P)$ for the greatest common divisor of its coefficients. The letter $\eps$ denotes a small, positive constant, whose value is allowed to differ between separate occurrences. We employ the Vinogradov and Bachmann--Landau asymptotic notations, with the implied constants being allowed to depend on $\eps$. For a finitely supported function $f: \bZ \to \bC$, the \emph{Fourier transform} $\hat{f}$ is defined by \begin{equation*} \hat f(\alp) := \sum_{n \in \bZ} f(n) e(\alp n) \qquad (\alp \in \bR). 
\end{equation*} \subsection*{Acknowledgements} This project began when JC visited SC at the University of Warwick as part of a London Mathematical Society Early Career Fellowship, and was completed when SC visited JC at the University of Bristol. We are grateful to these institutions for their hospitality and support. We thank Christopher Frei, Sean Prendiville and Trevor Wooley for helpful conversations. \subsection*{Funding} JC was supported by an LMS Early Career Fellowship (07/2021 - 09/2021), and by the Heilbronn Institute for Mathematical Research (10/2021 - present). SC was supported by EPSRC Fellowship Grant EP/S00226X/2, and by the Swedish Research Council under grant no. 2016-06596. \section{Necessary conditions}\label{sec2} In this section, we establish the necessary conditions for partition and density regularity. In particular, we prove the `only if' directions of Theorem \ref{thm1.1} and Theorem \ref{thm1.3}. We begin by noting that the necessary conditions for equations (\ref{eqn1.2}) and (\ref{eqn1.5}) to be partition or density regular are the same as those for linear homogeneous equations. \begin{prop}\label{prop2.1} Let $s\in\bN$. Let $a_{1},\ldots,a_{s}$ be non-zero integers, and let $P$ be an integer polynomial of positive degree. \begin{enumerate}[\upshape(I)] \item\label{itemPR} If the equation (\ref{eqn1.2}) is partition regular, then there exists a non-empty set $I\subseteq[s]$ such that $\sum_{i\in I}a_{i}=0$. \item\label{itemDR} If the equation (\ref{eqn1.2}) is density regular, then $a_{1} + \cdots + a_{s}=0$. \end{enumerate} \end{prop} \begin{proof} First suppose that (\ref{eqn1.2}) is partition regular. By replacing each $a_{i}$ with $-a_{i}$, we may assume that the leading coefficient of $P$ is positive. Thus, we can find $M\in\N$ such that the restriction of $P$ to the set $\{M,M+1,\ldots\}$ defines a strictly increasing function with image $S=\{P(M),P(M+1),\ldots\}\subseteq\N$. 
By Rado's criterion \cite[Satz IV]{Rad1933}, the conclusion of (\ref{itemPR}) holds if and only if the underlying linear equation $a_{1}x_{1} + \ldots + a_{s}x_{s}=0$ is partition regular. We establish the latter by using a trick of Lefmann \cite[Theorem 2.1]{Lef1991}. Suppose that we have a finite colouring $S=\cC_{1}\cup\cdots\cup \cC_{r}$. By our choice of $M$, this induces a finite colouring $\{M,M+1,\ldots\}=\cC_{1}'\cup\cdots\cup \cC_{r}'$ with $\cC_{i}':=\{x\geqslant M: P(x)\in \cC_{i}\}$. By considering a colouring where each element of $[M-1]$ receives a unique colour, partition regularity guarantees that (\ref{eqn1.2}) admits monochromatic solutions with respect to any finite colouring of the set $\{M,M+1,\ldots\}$. We can therefore find $i\in[r]$ such that (\ref{eqn1.2}) has a solution over $\cC_{i}'$, whence $a_{1}x_{1} + \cdots + a_{s}x_{s}=0$ has a solution over $\cC_{i}$. We have therefore proven that $a_{1}x_{1} + \cdots + a_{s}x_{s}=0$ is partition regular, as required. Now suppose that (\ref{eqn1.2}) is density regular. Let $t\in\{0,1,\ldots,\deg(P)\}$ be such that $P(t)\neq 0$, and let $p$ be a prime satisfying $p>\deg(P) + |a_{1}| + \cdots + |a_{s}|$ and $p\nmid P(t)$. Since (\ref{eqn1.2}) is density regular, it has a solution over the set $\{n\in\N: n\equiv t \mmod{p}\}$. Thus, by reducing (\ref{eqn1.2}) modulo $p$, we observe that $p\mid (a_{1}+\cdots+a_{s})$. Our hypothesis on the size of $p$ therefore delivers the conclusion $a_{1} + \cdots + a_{s}=0$. \end{proof} \begin{prop}\label{prop2.2} Let $s\in\N$, and let $P\in\Z[X]$ have positive degree. Let $a_{1},\ldots,a_{s}\in\Z\setminus\{0\}$ and $b\in\Z$. If the equation (\ref{eqn1.5}) is partition regular, then $b=(a_{1}+\cdots+a_{s})m$ for some $m\in\Z$ such that $P(X)-m$ is an intersective polynomial. Furthermore, if (\ref{eqn1.5}) is density regular, then $b=a_{1}+\cdots+a_{s}=0$. \end{prop} \begin{proof} Suppose that the equation (\ref{eqn1.5}) is partition regular. 
Note that for any $q\in\N$, by partitioning $\N$ into distinct residue classes modulo $q$, the partition regularity of (\ref{eqn1.5}) implies that $b\equiv (a_{1}+\cdots+a_{s})m_{q}\pmod{q}$ for some $m_{q}\in\Z$. In particular, we see that every integer divisor of $(a_{1}+\cdots+a_{s})$ must also divide $b$, whence $b=(a_{1}+\cdots+a_{s})m$ for some $m\in\Z$. Now observe that if $a_{1}+\cdots+a_{s}=0$, then $b=0$ and we could take $m$ to be any integer. In particular, we could choose $m=P(0)$ so that $P(X)-m$ is trivially intersective. Suppose then that $a_{1}+\cdots+a_{s}\neq 0$, whence the integer $m=b(a_{1}+\cdots+a_{s})^{-1}$ is uniquely defined. Assume for a contradiction that $P(X)-m$ is not intersective. By the Chinese remainder theorem, we can find a prime $p$ and a positive integer $k$ such that $P(x)\equiv m \mmod{p^{k}}$ has no integer solutions $x$. Now choose $h\in\N$ such that $p^{h}\nmid(a_{1}+\cdots+a_{s})$. It follows that there does not exist $x\in\Z$ satisfying the congruence \begin{equation*} a_{1}P(x) + \cdots + a_{s}P(x) \equiv b\mmod{p^{h+k}}. \end{equation*} Hence, there are no monochromatic solutions to (\ref{eqn1.5}) with respect to the finite colouring given by partitioning $\N$ into distinct residue classes modulo $p^{h+k}$. This contradicts the assumption that (\ref{eqn1.5}) is partition regular, so $P(X)-m$ must be intersective. Finally, suppose that (\ref{eqn1.5}) is density regular. Since density regularity implies partition regularity, we deduce that $b=(a_1 + \cdots +a_s)m$ for some integer $m$ such that $P(X) - m$ is an intersective polynomial. Subtracting $b$ from both sides therefore reveals that (\ref{eqn1.5}) can be rewritten as \begin{equation}\label{eqn2.1} a_{1}(P(x_1)-m) + \cdots + a_s(P(x_s)-m) = 0. \end{equation} The conclusion that $b=a_1+\cdots +a_s = 0$ now follows from Proposition \ref{prop2.1}.
\end{proof} \begin{remark} Observe that none of the results in this section make any assumptions on the number of variables $s$. The condition $s\geqslant s_1(d)$ introduced in Theorem \ref{thm1.1} is only used to find solutions to our equations, that is, to obtain sufficient conditions for partition or density regularity. \end{remark} With these necessary conditions established, we close this section by noting that Theorem \ref{thm1.4} implies Theorem \ref{thm1.1}, and Theorem \ref{thm1.1} implies Theorem \ref{thm1.3}. \begin{proof}[Proof of Theorem \ref{thm1.1} given Theorem \ref{thm1.4}] The `only if' parts of Theorem \ref{thm1.1} may be inferred from Proposition \ref{prop2.2}, whilst the remaining `if' statements follow from Theorem \ref{thm1.4}. \end{proof} \begin{proof}[Proof of Theorem \ref{thm1.3} given Theorem \ref{thm1.1}] The `only if' parts of Theorem \ref{thm1.3} follow from Proposition \ref{prop2.2}. Similarly, the (DR) statement of Theorem \ref{thm1.3} follows immediately from Theorem \ref{thm1.1} and Proposition \ref{prop2.2}. Finally, it remains to show that, under the hypotheses of Theorem \ref{thm1.3}, if $b=(a_1 + \cdots +a_s)m$ for some integer $m$ such that $P(X) - m$ is an intersective polynomial, then (\ref{eqn1.5}) is partition regular. Rewriting (\ref{eqn1.5}) as (\ref{eqn2.1}), the desired result now follows from Theorem \ref{thm1.1}. \end{proof} \section{Linear form equations}\label{sec3} As we have verified the necessary conditions for partition and density regularity, our focus is now on obtaining solutions to (\ref{eqn1.2}) under the assumption that Rado's condition holds, meaning that there exists $I\subseteq[s]$ with $I\neq\emptyset$ such that $\sum_{i\in I}a_i = 0$. This condition allows us to rewrite (\ref{eqn1.2}) as \begin{equation*} \sum_{i\in I}a_i P(x_{i}) = \sum_{j\in[s]\setminus I}(-a_j) P(x_j). 
\end{equation*} The above equation takes the shape \begin{equation}\label{eqn3.1} L_{1}(P(\bx)) = L_2(P(\by)) \end{equation} for some linear forms $L_1$ and $L_2$. We refer to a linear form $L(\bx)=b_1 x_1 + \cdots + b_t x_t$ in $t$ variables as \emph{non-degenerate} if $b_j\neq 0$ for all $j\in[t]$, and we write $\gcd(L):=\gcd(b_1,\ldots,b_t)$. \begin{remark} Note that in the above paragraph we could have $I=[s]$. We follow the convention that if we have an equation of the form (\ref{eqn3.1}) in which the second linear form $L_2$ has $t=0$ variables, then we replace $L_2$ with $0$. In particular, in this situation, the equation (\ref{eqn3.1}) takes the form \begin{equation*} L_1(P(\bx)) = 0. \end{equation*} \end{remark} To proceed further with our study of equations of the form (\ref{eqn3.1}), we require some notation. Let $T = T(d) \in \bN$ be minimal such that, for every integer polynomial $P$ of degree $d$, the equation \begin{equation} \label{eqn3.2} P(x_1) + \cdots + P(x_T) = P(x_{T+1}) + \cdots + P(x_{2T}) \end{equation} has $O_{P}(X^{2T - d + \eps})$ solutions $\bx \in [X]^{2T}$, and let \begin{equation} \label{eqn3.3} s_0(d) = 2T(d) + 1. \end{equation} The proof of \cite[Corollary 14.7]{Woo2019} yields \[ T(d) \le \frac{d(d-1)}2 + \lfloor \sqrt{2d+2} \rfloor, \] and it follows from Hua's lemma \cite[Equation (1)]{Hua1938} that $T(2) \le 2$ and $T(3) \le 4$. Hence \[ s_0(d) \le s_1(d), \] where $s_1(d)$ is as in (\ref{eqn1.3}). Moreover, by considering solutions with $x_{i}=x_{i+T}$ for all $i\in[T]$, we have \[ T(d) \geqslant d, \qquad s_0(d) \ge 2d + 1. \] By orthogonality, our definition of $T=T(d)$ is equivalent to the statement that \begin{equation}\label{eqn3.4} \int_{\T}\left\lvert \sum_{x\leqslant X}e(\alpha P(x))\right\rvert^{2T}\ll_{P} X^{2T-d+\eps} \end{equation} holds for any integer polynomial $P$ of degree $d$. We now use this observation to bound the number of trivial solutions to (\ref{eqn1.2}) and (\ref{eqn1.5}).
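Before doing so, we note that the defining count for $T(d)$ can be probed empirically in the smallest case $d=2$, $P(z)=z^2$, $T=2$, where the number of solutions to (\ref{eqn3.2}) is the additive energy of the squares in $[X]$. The following brute-force sketch is our own illustration (the function name and parameter choices are not from the paper):

```python
from collections import Counter

def count_quadratic_energy(X):
    """Number of (x1, x2, x3, x4) in [X]^4 with x1^2 + x2^2 = x3^2 + x4^2,
    i.e. the case P(z) = z^2, T = 2 of the count defining T(d)."""
    # Multiplicity of each value of x1^2 + x2^2 over ordered pairs in [X]^2.
    sums = Counter(a * a + b * b
                   for a in range(1, X + 1)
                   for b in range(1, X + 1))
    # Additive energy: sum of squared multiplicities.
    return sum(m * m for m in sums.values())
```

The diagonal solutions alone contribute $2X^2 - X$, and Hua's lemma asserts that the full count is only an $X^{\eps}$ factor larger, consistent with $T(2)\le 2$.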
\begin{lemma} \label{lem3.2} Let $d,s,X\in\N$ with $s\geqslant 2$, and let $P$ be an integer polynomial with degree $d$. Let $a_1,\ldots,a_s,b,c$ be fixed integers, and let $j,k \in [s]$ with $j \ne k$. If $s\geqslant s_{0}(d)$, then \[ \# \{ \bx \in [X]^s: a_1 P(x_1) + \cdots + a_s P(x_s) = b, \quad x_j = c \} \ll_P X^{s-d-1+\eps} \] and \[ \# \{ \bx \in [X]^s: a_1 P(x_1) + \cdots + a_s P(x_s) = b, \quad x_j = x_k \} \ll_P X^{s-d-1+\eps+d/(s-1)}. \] \end{lemma} \begin{proof} For $\alp \in \bT$, write $f(\alp) = \sum_{x \le X} e(\alp P(x))$. By orthogonality, H\"older's inequality, and (\ref{eqn3.4}), we have \begin{align*} & \#\{ \bx \in [X]^s: a_1 P(x_1) + \cdots + a_s P(x_s) = b, \quad x_j = c \} \\ &\qquad =\int_\bT \left( \prod_{i \in [s] \setminus \{j \}} f(a_i \alp) \right) e(\alp(a_j P(c) - b)) \d \alp \le \prod_{i \in [s] \setminus \{j \}} \left( \int_\bT |f(a_i \alp)|^{s-1} \d \alp \right)^{1/(s-1)} \\ & \qquad \le \prod_{i \in [s] \setminus \{j \}} \left( X^{s-1-2T} \int_\bT |f(a_i \alp)|^{2T} \d \alp \right)^{1/(s-1)} \ll_P X^{s-1-2T}X^{2T - d + \eps} = X^{s-d-1+\eps}. \end{align*} Similarly, irrespective of whether $a_j + a_k$ vanishes, we have \begin{align*} &\# \{ \bx \in [X]^s: a_1 P(x_1) + \cdots + a_s P(x_s) = b, \quad x_j = x_k \} \\ &\qquad = \int_\bT \left( \prod_{i \in [s] \setminus \{j,k\}} f(a_i \alp) \right) f((a_j+a_k)\alp) e(-b\alp) \d \alp \\ & \qquad \le \left( \prod_{i \in [s] \setminus \{j,k \}} \int_\bT |f(a_i \alp)|^{s-1} \d \alp \right)^{1/(s-1)} \left( \int_\bT |f((a_j+a_k)\alp)|^{s-1} \d \alp \right)^{1/(s-1)} \\ &\qquad \ll_P (X^{s-1-2T}X^{2T-d+\eps})^{(s-2)/(s-1)}X \le X^{s - d - 1 + \eps + d/(s-1)}. \end{align*} \end{proof} \begin{remark} In our applications of Lemma \ref{lem3.2}, the quantity $X$ is chosen to be sufficiently large relative to $P$. Consequently, the dependence on $P$ of the implicit constants can be absorbed into the factor $X^{\eps}$ and thereby removed.
\end{remark} Let $P$ be an intersective integer polynomial of degree $d \ge 2$. To simplify our forthcoming arguments, we first restrict our attention to polynomials $P$ which are strictly monotone increasing and positive on the real interval $[1,\infty)$. That is, we assume $P$ satisfies \begin{equation} \label{eqn3.5} 1\leqslant P(x) < P(y) \qquad(x,y\in\bR, \; 1\leqslant x<y). \end{equation} To prove the main theorems stated in the introduction, we prove the following counting result for equations of the form (\ref{eqn3.1}) where the $y_j$ variables lie in a particular colour class and the $x_i$ variables are drawn from an arbitrary dense set. \begin{thm} \label{thm3.4} Let $r$ and $d\geqslant 2$ be positive integers, and let $0<\delta<1$ be a real number. Let $P$ be an intersective integer polynomial of degree $d$ which satisfies (\ref{eqn3.5}). Let $s \ge 1$ and $t \ge 0$ be integers such that $s + t \ge s_0(d)$. Let \[ L_1(\bx) \in \bZ[x_1,\ldots,x_s], \qquad L_2(\by) \in \bZ[y_1,\ldots,y_t] \] be non-degenerate linear forms such that $L_1(1,\ldots,1) = 0$. Let $X \in \bN$ be sufficiently large, and suppose $[X] = \cC_1 \cup \cdots \cup \cC_r$. Then there exists $k \in [r]$ with $|\cC_k|\gg_{\delta,r,L_{1},L_2,P} X$ such that the following is true. For all $A\subseteq[X]$ with $|A|\geqslant\delta X$, we have \begin{equation}\label{eqn3.6} \# \{ (\bx, \by) \in A^s \times \cC_k^t: L_1(P(\bx)) = L_2(P(\by)) \} \gg X^{s+t-d}. \end{equation} The implied constant may depend on $L_1, L_2, P, r, \del$. \end{thm} \subsection{Deducing Theorem \ref{thm1.4}} Having introduced Theorem \ref{thm3.4}, we now show how it can be used to prove Theorem \ref{thm1.4}. Given a polynomial $P$ satisfying (\ref{eqn3.5}), we see that the conclusion of Theorem \ref{thm1.4} would follow immediately from Theorem \ref{thm3.4} if we could ensure the colour class $\cC_k$ we obtain has density at least $\delta$, as this would enable us to set $A=\cC_k$. 
Unfortunately, this cannot always be guaranteed. Nevertheless, the conclusion of Theorem \ref{thm3.4} informs us that $|\cC_k|\geqslant \delta_2 X$ for some $\delta_{2}\gg_{L_1,L_2,P,r,\delta}1$. We may therefore apply Theorem \ref{thm3.4} with this new density $\delta_2$ and find another colour class $\cC_{k_2}$. Iterating this argument eventually yields a colour class of sufficient density that our initial strategy of setting $A$ equal to a colour class can now be used to obtain Theorem \ref{thm1.4}. The argument outlined above is termed \emph{cleaving} by Prendiville \cite{Pre2021}, who used this method to obtain a supersaturation result for the diagonal quadratic equations considered in \cite{CLP2021} (see \cite[\S2.1]{Pre2021} for an overview of the cleaving strategy in the context of Schur's theorem). We now use this argument to show that, for $X$ sufficiently large, there is a colour class $\cC_k$ such that the conclusion (\ref{eqn3.6}) of Theorem \ref{thm3.4} holds with $A=\cC_k$. \begin{thm}\label{thm3.5} Let $d$ and $r$ be positive integers, and let $P$ be an intersective integer polynomial of degree $d$ which satisfies (\ref{eqn3.5}). Let $s \ge 1$ and $t \ge 0$ be integers such that $s + t \ge s_0(d)$. Let \[ L_1(\bx) \in \bZ[x_1,\ldots,x_s], \qquad L_2(\by) \in \bZ[y_1,\ldots,y_t] \] be non-degenerate linear forms such that $L_1(1,\ldots,1) = 0$. Let $X \in \bN$ be sufficiently large, and suppose $[X] = \cC_1 \cup \cdots \cup \cC_r$. Then there exists $k \in [r]$ such that \begin{equation*} \# \{ (\bx, \by) \in \cC_k^s \times \cC_k^t: L_1(P(\bx)) = L_2(P(\by)) \} \gg X^{s+t-d}. \end{equation*} The implied constant may depend on $L_1, L_2, P, r$. \end{thm} \begin{proof}[Proof of Theorem \ref{thm3.5} given Theorem \ref{thm3.4}] For each $\delta>0$, let $c_{0}(\delta)$ be the implicit constant appearing in the bound $|\cC_k|\gg_{L_1,L_2,P,r,\delta} X$ in Theorem \ref{thm3.4}.
Since decreasing the value of this constant does not invalidate the conclusion of Theorem \ref{thm3.4}, we may henceforth assume that $0<c_{0}(\delta)\leqslant\delta$ for all $\delta>0$. Now set $\delta_0 = 1/r$ and let $\delta_{i} = c_{0}(\delta_{i-1})$ for all $i\in[r]$. By the pigeonhole principle, we can find $k_{0}\in[r]$ such that $|\cC_{k_{0}}|\geqslant X/r$. For all $i\in[r]$, let $k_{i}\in[r]$ be the index obtained by applying Theorem \ref{thm3.4} with $\delta=\delta_i$. By the pigeonhole principle, we can find $0\leqslant i < j\leqslant r$ such that $k_i = k_j=:k$. We claim that $\cC_k$ satisfies the conclusion of Theorem \ref{thm3.5}. Indeed, since $|\cC_{k}|\geqslant c_{0}(\delta_i)X\geqslant \delta_j X$, our choice of $i$ and $j$ ensures that $\cC_{k}=\cC_{k_{j}}$ satisfies (\ref{eqn3.6}) with $A=\cC_{k_i}=\cC_k$. This completes the proof of Theorem \ref{thm3.5} provided that we assume that $X$ is sufficiently large in terms of $L_{1},L_{2},P,r,$ and $\delta_{r}$, which is permissible as $\delta_{0},\ldots,\delta_{r}$ are all bounded away from $0$ in terms of $L_{1},L_{2},P,r$. \end{proof} \begin{proof}[Proof of Theorem \ref{thm1.4} given Theorem \ref{thm3.4}] In this proof we allow all implicit constants to depend on the parameters $P,a_1,\ldots,a_s,r,\delta$, and assume that $N$ is sufficiently large with respect to these parameters. In view of Lemma \ref{lem3.2}, Theorem \ref{thm1.4} is equivalent to the same statement with the condition `$x_i\neq x_j$ for all $i\neq j$' removed from the definition of $\cS(\cA)$. We therefore proceed to prove this equivalent version of Theorem \ref{thm1.4}. Note that \[ s \ge s_1(d) \ge s_0(d). \] We first consider polynomials $P$ satisfying (\ref{eqn3.5}). Note that the density statement (DR) follows immediately from Theorem \ref{thm3.4}. 
For the colouring statement (PR), observe that the existence of a non-empty set $I\subseteq[s]$ such that $\sum_{i\in I}a_i =0$ implies that we may express the equation (\ref{eqn1.2}) as a linear form equation (\ref{eqn3.1}) with $L_1(1,\ldots,1)=0$. The desired result may therefore be deduced from Theorem \ref{thm3.5}. \bigskip Having proven Theorem \ref{thm1.4} for $P$ satisfying (\ref{eqn3.5}), it remains to treat the general case. By replacing each $a_i$ with $-a_i$ if necessary, it suffices to prove Theorem \ref{thm1.4} under the assumption that the leading coefficient of $P$ is positive. Hence, there exists $b\in\N$ such that the polynomial $\tilde{P}(X):=P(X+b)$ obeys (\ref{eqn3.5}). \bigskip Now, given a colouring $[N]=\cC_{1}\cup\cdots\cup\cC_r$, we define a new colouring $[N-b]=\tilde{\cC}_1\cup\cdots\cup\tilde{\cC}_r$ by setting $\tilde{\cC}_i:=\{x-b:x\in\cC_i\setminus[b]\}$. By our proof of the special case above, for $N$ sufficiently large, we deduce that there exists $k\in[r]$ such that \begin{equation*} \# \{ \bz \in \tilde{\cC}_k^s : a_1 \tilde{P}(z_1) +\cdots + a_s \tilde{P}(z_s)=0 \} \gg N^{s-d}. \end{equation*} The partition result (PR) now follows by adding $b$ to each entry of every solution $\bz\in \tilde{\cC}_k^s$ found above to obtain $\gg N^{s-d}$ solutions to (\ref{eqn1.2}) over $\cC_k$. Similarly, for the density statement (DR), we replace the $\delta$-dense set $A\subseteq[N]$ with the set $\tilde{A}=\{a-b:a\in A\setminus[b]\}$. As in the previous paragraph, we can find $\gg N^{s-d}$ solutions to $a_1 \tilde{P}(z_1) +\cdots + a_s \tilde{P}(z_s)=0$ over $\tilde{A}$, each of which lifts to a solution to (\ref{eqn1.2}) over $A$ by adding $b$ to each entry. \end{proof} \begin{remark} As is clear from the deduction above, we in fact establish our main results under the weaker assumption that $s \ge s_0(d)$. This enables an immediate refinement in the number of variables required if a stronger upper bound for $T(d)$ is found.
We will also see that one can replace $T(d)$ by $T(P)$, this being the least positive integer $T$ such that \eqref{eqn3.2} has $O_P(X^{2T-d+\eps})$ solutions $\bx \in [X]^{2T}$. \end{remark} \subsection{Auxiliary intersective polynomials} Akin to \cite{CLP2021}, we prove Theorem \ref{thm3.4} by using a `linearisation' procedure so that we may obtain solutions to (\ref{eqn3.1}) by transferring solutions from a linearised equation of the form \begin{equation}\label{eqn3.7} L_{1}(\bn) = L_2(P_D(\bz)), \end{equation} where $P_D$ is some auxiliary integer polynomial. The purpose of this subsection is to formally define the auxiliary polynomials that we use, as well as to state the linearised version of Theorem \ref{thm3.4}. Let $P$ be an intersective integer polynomial of degree $d \in \bN$. Recall from the introduction that this means that for each $n\in\N$ there exists $x\in\bZ$ such that $P(x)\equiv 0\mmod{n}$. Furthermore, observe that the property of being intersective is equivalent to having a zero in $\bZ_p$ for every prime $p$. Thus, for each prime $p$, we fix $z_p \in \bZ_p$ such that $P(z_p)=0$. Let $m_p\geqslant 1$ be the multiplicity of $z_p$ as a zero of $P$ over $\Z_p$. This allows us to define a completely multiplicative function $\lambda:\N\to\N$ such that $\lambda(p)=p^{m_{p}}$ for all primes $p$. Explicitly, writing $\ord_p(D)$ for the multiplicity of $p$ in the prime factorisation of $D$, we have \[ \lam(D) := \prod_p p^{m_p \ord_p(D)} \qquad(D\in\N). \] For later use, we record the following fact from \cite[Equation (73)]{Luc2006}: \begin{equation}\label{eqn3.8} D \mid \lam(D) \mid D^d. \end{equation} The Chinese remainder theorem shows that for each positive integer $D$ there is a unique integer $r_D\in (-D,0]$ such that \[ r_D \equiv z_p \mmod {p^{\ord_p(D)} \bZ_p} \] holds for all primes $p$.
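For orientation, the data $z_p$, $m_p$, $\lambda(D)$ and $r_D$ can be made completely explicit in a simple family. A minimal sketch, assuming $P(X)=(X-a)^2$ for a fixed integer $a$, in which case one may take $z_p=a$ with $m_p=2$ for every prime $p$, so that $\lambda(D)=D^2$ and $r_D$ is the unique element of $(-D,0]$ congruent to $a$ modulo $D$; the function names are our own:

```python
def shifted_square_data(D, a):
    """Auxiliary data for P(X) = (X - a)^2: every prime p admits the zero
    z_p = a with multiplicity m_p = 2, so lambda(D) = D^2, and r_D is the
    unique integer in (-D, 0] congruent to a mod D.  (Illustrative sketch.)"""
    lam = D * D
    r_D = (a % D) - D if a % D else 0      # representative of a mod D in (-D, 0]
    assert -D < r_D <= 0 and (r_D - a) % D == 0
    assert lam % D == 0 and (D ** 2) % lam == 0   # D | lambda(D) | D^d with d = 2
    return lam, r_D

def aux_poly_values(D, a, xs):
    """Values of P(r_D + D*x) / lambda(D); integrality at every integer x
    reflects the choice of r_D and lambda(D)."""
    lam, r_D = shifted_square_data(D, a)
    vals = []
    for x in xs:
        num = (r_D + D * x - a) ** 2       # P(r_D + D*x)
        assert num % lam == 0              # lambda(D) divides P(r_D + D*x)
        vals.append(num // lam)
    return vals
```

Since $r_D\equiv a \pmod{D}$, writing $r_D-a=-Dk$ gives $P(r_D+Dx)/\lambda(D)=(x-k)^2$, an integer polynomial vanishing at $k$; this is precisely the divisibility packaged by the auxiliary polynomials introduced next.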
With this notation in place, we can introduce the auxiliary polynomial \begin{equation}\label{eqn3.9} P_D(x) := \frac{P(r_D + Dx)}{\lam(D)} \in \bZ[x]. \end{equation} Observe that our choice of $r_D$ and $\lambda(D)$ ensures that $P_D$ is indeed a polynomial with integer coefficients. These auxiliary polynomials and the surrounding notation were introduced by Lucier \cite{Luc2006} and have subsequently become a standard tool when working with intersective polynomials. The significance of this construction stems from Lucier's result \cite[Lemma 28]{Luc2006} that the greatest common divisor of the coefficients of $P_{D}(X)-P_D(0)$ is uniformly bounded over all $D\in\N$ in terms of $P$ only. This observation is critical in our application of the circle method to exponential sums with intersective polynomial phases (see Lemma \ref{lem6.3}). Before moving on, we note that $P_D$ is also intersective. \begin{lemma}\label{lem3.7} Let $P$ be an intersective integer polynomial of positive degree, and let $D$ be a positive integer. Then the auxiliary polynomial $P_D$ defined by (\ref{eqn3.9}) is intersective. \end{lemma} \begin{proof} It suffices to prove that $P_D$ has a zero over $\bZ_p$ for every prime $p$. Fix a prime $p$ and write $D=p^{k}M$, where $p\nmid M$ and $k\geqslant 0$. Our definition of $r_D$ implies that $r_D = z_p + p^k t$ for some $t\in\bZ_p$, and so $r_D + Dx = z_p + p^k(t+Mx)$ for all $x\in\bZ_p$. Since $M$ is a multiplicative unit in $\bZ_p$, we can find $x\in\bZ_p$ such that $t + Mx =0$, whence $P_D(x)=0$, as required. \end{proof} We now state a `linearised' version of Theorem \ref{thm3.4}. \begin{thm} \label{thm3.8} Let $r$ and $d\geqslant 2$ be positive integers, and let $0<\delta<1$ be a real number. Let $P$ be an intersective integer polynomial of degree $d$ which satisfies (\ref{eqn3.5}). Let $s \ge 1$ and $t \ge 0$ be integers such that $s + t \ge s_0(d)$. 
Let $L_1(\bx) \in \bZ[x_1,\ldots,x_s]$ be a non-degenerate linear form for which $L_1(1,1,\ldots,1)=0$, and let $L_2(\by) \in \bZ[y_1,\ldots,y_t]$ be a non-degenerate linear form. Let $D, Z \in \bN$ satisfy $Z \ge Z_0(D, r, \del, L_1, L_2, P)$, and set $N:=P_D(Z)$. If $[Z] = \cC_1 \cup \cdots \cup \cC_r$, then there exists $k \in [r]$ such that the following is true. For all $\cA\subseteq[N]$ such that $|\cA|\geqslant\delta N$, we have \begin{equation}\label{eqn3.10} \# \{ (\bn,\bz) \in \cA^s \times \cC_k^t: L_1(\bn) = L_2(P_D(\bz)) \} \gg N^{s-1} Z^t. \end{equation} The implied constant may depend on $L_1, L_2, P, r, \del$. \end{thm} \begin{remark} \label{rmk3.9} Observe that, in contrast with Theorem \ref{thm3.4}, we have not specified a lower bound for the density of the colour class $\cC_k$ provided by Theorem \ref{thm3.8}. This is because such a conclusion follows automatically by a simple counting argument. Indeed, for $t\geqslant 1$, the cardinality appearing on the left-hand side of (\ref{eqn3.10}) is bounded above by \begin{equation*} \sum_{z\in\cC_k}|\{ (\bn,\bz) \in [N]^s \times [Z]^t: L_1(\bn) = L_2(P_D(\bz)),\; z_{t} = z \}| \leqslant |\cC_{k}|N^{s-1}Z^{t-1}. \end{equation*} Thus, we see that $|\cC_k|\geqslant cZ$, where $c$ is the implicit constant in (\ref{eqn3.10}). \end{remark} Before moving on, we show that it suffices to prove Theorem \ref{thm3.8} under the assumption that $\gcd(L_1)=1$. In \S\ref{sec8}, we demonstrate the utility of this condition by parameterising solutions to (\ref{eqn3.7}) with $\bz$ fixed. \begin{prop}\label{prop3.10} Assume that Theorem \ref{thm3.8} is true in the cases where $\gcd(L_1)=1$. Then, up to modifying the quantity $Z_0(D,r,\delta,L_1,L_2,P)$ and the implicit constant in (\ref{eqn3.10}), Theorem \ref{thm3.8} holds in general. \end{prop} \begin{proof} Let $M=\gcd(L_1)$, and assume that $M>1$. Using (\ref{eqn3.8}), we can find $\kappa\in\N$ such that $\lambda(M)=M\kappa$. 
By \cite[Lemma 22]{Luc2006}, there exists an integer $m$ in the range $-M < m \leqslant 0$ such that $\lambda(M)P_{DM}(X) = P_{D}(m + MX) \in \bZ[X]$. Let $Z\in\N$ be sufficiently large, let $N=P_D(Z)$, and set \begin{equation*} \tilde{Z} := \frac{Z - m}{M}, \quad \tilde{N}:= P_{DM}(\tilde{Z})=\frac{P_D(m + M\tilde{Z})}{\lambda(M)}=\frac{N}{M\kappa}. \end{equation*} Finally, given an $r$-colouring $[Z]=\cC_1 \cup\cdots\cup \cC_r$, set $\tilde{\cC}_i:=\{z\in[\tilde{Z}]: m+Mz\in\cC_i\}$ for each $i\in[r]$. Let $\delta>0$, and let $L$ be the non-degenerate linear form satisfying $L_1=\gcd(L_1)L$. Let $k\in[r]$ be the index given by applying Theorem \ref{thm3.8} with the $r$-colouring $[\tilde{Z}]=\tilde{\cC}_1 \cup\cdots\cup \tilde{\cC}_r$ and with parameters $(L,DM,\delta/2)$ in place of $(L_1,D,\delta)$. Now given $\cA\subseteq[N]$ such that $|\cA|\geqslant\delta N$, we claim that there exists a set $\tilde{\cA}\subseteq[\tilde{N}]$ of the form $\tilde{\cA}=\{x\in[\tilde{N}]:(\kappa x + h)\in\cA\}$, for some integer $h$, such that $|\tilde{\cA}|\geqslant (\delta/2)\tilde{N}$. Assuming that this is true, we observe that for any $(\tilde{\bn},\tilde{\bz})\in\tilde{\cA}^{s}\times\tilde{\cC}_k^t$ satisfying \begin{equation*} L(\tilde{\bn}) = L_2(P_{DM}(\tilde{\bz})), \end{equation*} the tuple $(\bn,\bz)=(\kappa\tilde{\bn} + h,M\tilde{\bz} +m)\in\cA^{s}\times\cC_k^t$ satisfies \begin{equation*} L_1(\bn) = \lambda(M)L(\tilde{\bn}) = L_2(P_D(m + M\tilde{\bz})) = L_2(P_D(\bz)). \end{equation*} Since this map $(\tilde{\bn},\tilde{\bz})\mapsto(\bn,\bz)$ is injective, the desired bound (\ref{eqn3.10}) follows from our choice of $k$. It only remains to establish the existence of the set $\tilde{\cA}$. By partitioning $[N]$ into residue classes modulo $\kappa$, the pigeonhole principle furnishes an integer $b$ in the range $0 \le b < \kap$ such that the set \[ \cB:=\{x\in[(N+b)/\kappa]: (\kappa x - b)\in\cA \} \] satisfies $|\cB|\geqslant \delta N/\kappa$. 
Note that, provided $N$ is sufficiently large, we have \[ (N+b)/\kappa > N/(M\kappa)=\tilde{N}. \] Hence, by partitioning $[(N+b)/\kappa]$ into intervals of length between $\tilde{N}/2$ and $\tilde{N}$, we deduce from the pigeonhole principle that there exists a translate of $\cB$ with density at least $\delta/2$ on $[\tilde{N}]$. We can therefore find an integer $h$ such that the set $\tilde{\cA}=\{x\in[\tilde{N}]:(\kappa x + h)\in\cA\}$ satisfies $|\tilde{\cA}|\geqslant (\delta/2)\tilde{N}$, completing the proof of the claim. \end{proof} \subsection{Deducing Theorem \ref{thm1.5}} We close this section by demonstrating that Theorem \ref{thm1.5} follows from Theorem \ref{thm3.8}. We first state the following slightly more technical version of Theorem \ref{thm1.5}. \begin{thm}\label{thm3.11} Let $d\geqslant 2$ and $r$ be positive integers, and let $s_0(d)$ be defined by (\ref{eqn1.3}). Let $P$ be an intersective integer polynomial of degree $d$ which has a positive leading coefficient. Let $s$ and $t$ be positive integers satisfying $s+t\geqslant s_0(d)$. Let $L_1$ and $L_2$ be non-degenerate linear forms in $s$ and $t$ variables respectively, and assume that $L_1(1,1,\ldots,1)=0$. There exists a positive constant $c_0=c_0(L_1,L_2,P,r)$ and a positive integer $N_{0}=N_{0}(L_1,L_2,P,r)$ such that the following is true. Let $Z_{0}\geqslant N_0$ be a positive integer and set $Z_{i}=P(Z_{i-1})$ for all $1\leqslant i\leqslant r$. Then given any $r$-colouring $\{1,\ldots,Z_{r}\}=\cC_{1}\cup\cdots\cup\cC_r$, there exist $k,m\in\{1,\ldots,r\}$ and an interval of positive integers $I$ of length $Z_{m}$ such that \begin{equation*} \#\{ (\bn,\bz) \in (\cC_k \cap I)^{s} \times (\cC_k \cap [Z_{m-1}])^{t}: L_1(\bn) = L_2(P(\bz)) \} \geqslant c_0 Z_{m-1}^{d(s-1)+t}.
\end{equation*} \end{thm} \begin{proof}[Proof of Theorem \ref{thm1.5} given Theorem \ref{thm3.11}] By replacing $P$ and $L_2$ with $-P$ and $-L_2$ respectively if necessary, we may assume without loss of generality that the leading coefficient of $P$ is positive. Note that, if $Q$ is an integer polynomial of positive degree, then $Q(x+1)/Q(x)\to 1$ as $x\to\infty$. We therefore deduce that, provided $N$ is sufficiently large, there exists $Z_0\in\N$ such that $N/2 < Z_r \leqslant N$, where $Z_1,\ldots,Z_r$ are as defined in the statement of Theorem \ref{thm3.11}. Moreover, if $N$ (and hence $Z_0$) is sufficiently large relative to $P$ and $r$, then we may assume that $Z_{r-m} \asymp_P N^{d^{-m}}$ for all $0\leqslant m\leqslant r$. Finally, since $M:=N^{d^{-r}}\ll_P Z_{j-1}$ for all $j\in[r]$, applying Theorem \ref{thm3.11} to the colouring $[Z_r]=(\cC_1 \cap [Z_r]) \cup\cdots\cup (\cC_r \cap [Z_r])$ establishes Theorem \ref{thm1.5}. \end{proof} As in our deduction of Theorem \ref{thm1.4} from Theorem \ref{thm3.4}, we prove Theorem \ref{thm3.11} from Theorem \ref{thm3.8} using Prendiville's cleaving method. The particular `multi-scale' cleaving argument we use is a variant of the proof of \cite[Theorem 8.1]{Pre2021}. \begin{proof}[Proof of Theorem \ref{thm3.11} given Theorem \ref{thm3.8}] Let $\eta(\delta)=\eta(L_1,L_2,P,r;\delta)>0$ be the implicit constant in (\ref{eqn3.10}). The conclusion of Theorem \ref{thm3.8} implies that we may assume that $\eta(\delta)$ is non-decreasing in $\delta$, and that $\eta(\delta)<\delta$ for all $0<\delta\leqslant 1$. Let $\delta_r:=1/r$, and for each $i\in[r]$ set $\delta_{r-i} = \eta(\delta_{r-i+1})/2$. Note that $\delta_0 \leqslant \delta_1 \leqslant \ldots\leqslant \delta_r$. Let $Z_0,Z_1,\ldots,Z_r$ be as defined in the statement of Theorem \ref{thm3.11}, and assume that $Z_0$ is sufficiently large in terms of $L_1,L_2,P,r$.
By our construction of the $\delta_i$, we may therefore assume that each $Z_i$ is sufficiently large relative to $\delta_i$. For each $i\in\{0,1,\ldots,r\}$, let $k_i\in[r]$ be the index given by applying Theorem \ref{thm3.8} with parameters $(D,Z,\delta)=(1,Z_{i},\delta_i)$ to the colouring $[Z_i]=(\cC_1 \cap[Z_i])\cup\cdots\cup(\cC_r\cap[Z_i])$. By the pigeonhole principle, we can find $k\in[r]$ and $0\leqslant i<j\leqslant r$ such that $k=k_i = k_j$. Recall from Remark \ref{rmk3.9} that $|\cC_k \cap[Z_j]|\geqslant \eta(\delta_j)Z_j$. Hence, by partitioning $[Z_j]$ into intervals of lengths between $Z_{i+1}/2$ and $Z_{i+1}$, the pigeonhole principle furnishes an interval $I\subseteq[Z_j]$ of length $|I|=Z_{i+1}$ such that \begin{equation*} |\cC_k\cap I|\geqslant (\eta(\delta_j)/2)|I| = \delta_{j-1}Z_{i+1}\geqslant \del_i Z_{i+1}. \end{equation*} Let $h$ be the integer satisfying $I + h = [Z_{i+1}]$, whence $\cA:= h + (\cC_k\cap I) \subseteq[Z_{i+1}]$. By the translation invariance property $L_1(1,\ldots,1)=0$, observe that if $(\bn,\bz)\in \cA^s \times (\cC_k \cap [Z_i])^t$ is a solution to $L_1(\bn)=L_2(P(\bz))$, then $(\bn-h,\bz)\in (\cC_k\cap I)^s \times (\cC_k \cap [Z_i])^t$ is also a solution. Here, for $\bn=(n_1,\ldots,n_s)$, we have written $\bn - h =(n_1 - h,\ldots,n_s - h)$. Setting $m=i+1$, our choice of $k=k_i$ therefore completes the proof. \end{proof} To summarise, we have now shown that all of our main results follow from Theorem \ref{thm3.4} and Theorem \ref{thm3.8}. The focus of the rest of this paper is on first showing how to deduce Theorem \ref{thm3.4} from Theorem \ref{thm3.8} and then, finally, proving Theorem \ref{thm3.8}. \section{Linearisation and the \texorpdfstring{$W$}{W}-trick}\label{sec4} In this section we perform the preliminary manoeuvres needed to deduce Theorem \ref{thm3.4} from Theorem \ref{thm3.8}. 
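As a toy illustration, which plays no role in the arguments to come, consider the model case $P(x)=x^2$, where the two obstacles treated in this section are already visible. For an odd prime $p$, the image set $\{x^2:x\in\bN\}$ occupies only $(p+1)/2$ of the residue classes modulo $p$, so it is far from equidistributed. However, upon restricting to an arithmetic progression $x\equiv b \mmod{p^2}$ and rescaling, we find that
\begin{equation*}
\frac{(p^2 y + b)^2 - b^2}{p^2} = p^2 y^2 + 2by \equiv 2by \mmod{p},
\end{equation*}
which, provided that $p\nmid 2b$, equidistributes modulo $p$ as $y$ varies. The $W$-trick below carries out this manoeuvre simultaneously for all primes $p\leqslant w$, with the additional complication that the common difference $W\kappa$ and the scaling factor $\lambda(D)$ must be chosen compatibly with the auxiliary polynomial $P_D$.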
Henceforth, until the end of \S\ref{sec7}, we fix the parameters \begin{equation}\label{eqn4.1} \del \in (0,1],r,L_1,L_2,P \end{equation} appearing in the statement of Theorem \ref{thm3.4} and allow all implicit constants to depend on these parameters unless specified otherwise. In particular, we assume that $P$ is an intersective integer polynomial satisfying (\ref{eqn3.5}). Finally, let $X$ and $C$ be positive integers, sufficiently large in terms of the parameters (\ref{eqn4.1}). \subsection{The \texorpdfstring{$W$}{W}-trick} There are two main obstacles which need to be overcome when attempting to replace the equation $L_1(P(\bx))=L_2(P(\by))$ appearing in Theorem \ref{thm3.4} with the linearised equation $L_1(\bn)=L_2(P_D(\bz))$ in Theorem \ref{thm3.8}. The first problem concerns the different scales of the variables in the latter equation. This issue is handled by considering a weighted count of solutions, which we address in the next subsection. The second obstacle comes from the fact that, unlike $\N$, the image set $\{P(n):n\in\N\}$ is not equidistributed in residue classes modulo $p$ for arbitrary primes $p$. This problem can be ameliorated for small primes $p\leqslant w$, for some parameter $w$, using the \emph{$W$-trick}. This technique, originally developed by Green \cite{Gre2005A} to solve linear equations in primes, has subsequently become a standard tool for solving Diophantine equations over sparse arithmetic sets \cite{BP2017, Cha2022,Cho2018,CLP2021,Pre2021,Sal2020}. Let $w$ be a positive integer which is large in terms of the quantity $C$, and assume that the positive integer $X$ is large in terms of $w$. Define \[ M = C d^2 10^{2w}, \qquad W = \left(\prod_{p \le w}p \right)^{100dw}, \qquad V = \sqrt{W}. \] Put \[ D = W^2, \] and let $N,Z \ge 1$ be given by \begin{equation}\label{eqn4.2} N = P_D(Z), \qquad Z = \frac{X - r_D}{D}. 
\end{equation} We assume that $Z$ is a positive integer; we will explain in \S \ref{sec7} why we are allowed to make this assumption. Given $A\subseteq[X]$ with $|A|\geqslant\delta X$, for $R \in \bN$ and $b \in [R]$, denote \[ A_{b,R} = \{ x \in A: x \equiv b \mmod R \}. \] Writing $(H,W)_d$ to denote the largest $m \in \bN$ for which $m^d \mid (H,W)$, Lemma \ref{lemA.5} implies that \[ \delta X \leqslant |A| \le \sum_{\substack{b \in [W]: \\ (P'(b),W)_d \le M}} |A_{b,W}| + O(10^w W M^{-1/2}\lceil X/W\rceil). \] Here we have made use of the trivial bound $|A_{b,W}|\leqslant \lceil X/W\rceil$ for all $b$. Note that, since $w$ is large relative to $C$, if $(P'(b),W)_d \le M$ then $(P'(b),W) \mid V$. As $10^w M^{-1/2} \le C^{-1/2}$ and $C$ is large in terms of $\delta$, we therefore have \[ \delta X \ll \sum_{\substack{b \in [W]: \\ (P'(b),W) \mid V}} |A_{b,W}|, \] and maximising yields $b_0$ for which \[ |A_{b_0,W}| \gg \frac{\delta X}{W}, \qquad (P'(b_0),W) \mid V. \] Define $\kap \in \bN$ by \[ W \kap (P'(b_0),W) = \lam(D). \] By pigeonholing, there exists $b\in[W\kappa]$ with $b \equiv b_0 \mmod W$ such that \[ |A_{b,W\kap}| \gg \frac{\delta X}{W \kap}. \] As \[ (P'(b),W) = (P'(b_0),W) \mid V, \] we see that \[ (P'(b),W\kap) = (P'(b),W) = (P'(b_0),W). \] Set \begin{equation*} \cA =\left \{ \frac{P(x)-P(b)}{\lambda(D)}: x \in A_{b,W\kappa} \right \}, \end{equation*} noting from the Taylor expansion that $\cA \subset \bZ$. Now, for a given colouring $[X]=\cC_1\cup\cdots\cup\cC_r$, for each $i \in [r]$ let \begin{equation*} \tilde \cC_i := \{ z \in [Z]: r_D + D z \in \cC_i \}. \end{equation*} Observe that if $(\bn,\bz)\in\cA^s\times \tilde{\cC}_k^t$ satisfies the linearised equation (\ref{eqn3.7}) then $(\bx,\by)\in A^s\times\cC_k^t$ satisfies the original equation (\ref{eqn3.1}), where \begin{equation*} n_{i} = \frac{P(x_i)-P(b)}{\lambda(D)} \quad (1\leqslant i\leqslant s), \qquad y_{j} = r_D + Dz_j \quad (1\leqslant j\leqslant t).
\end{equation*} Moreover, by passing from the set $\{P(x):x\in\N\}$ to the set \begin{equation*} \left\{ \frac{P(x)-P(b)}{\lambda(D)}: x \equiv b \mmod{W\kappa} \right \}, \end{equation*} we have achieved our goal of equidistribution modulo all primes up to $w$. Indeed, writing $x = W\kappa y +b$, our choice of $b$ and the Taylor expansion of $P$ demonstrate that \begin{equation*} \frac{P(x)-P(b)}{\lambda(D)} \equiv \left(\frac{P'(b)}{(P'(b),W)}\right)y \mmod{p} \end{equation*} holds for any prime $p\leqslant w$. The bracketed factor is coprime to $p$, whence, as $y$ varies over residue classes modulo $p$, the polynomial on the left-hand side equidistributes over congruence classes modulo $p$. \subsection{Constructing the weight function} Having resolved the problem of equidistribution, we return to the problem of handling the different scales $N$ and $Z$ in Theorem \ref{thm3.8}. To proceed we construct the following weight function. Given $A\subseteq[X]$ with $|A|\geqslant\delta X$, let $b\in[W\kappa]$ and $\cA\subseteq\bZ$ be as defined above. Define \begin{equation}\label{eqn4.3} \nu=\nu_b: \bZ \to [0,\infty), \qquad \nu(n) = (P'(b), W)^{-1} \sum_{\substack{x \in (b,X] \\ x \equiv b \mmod{W \kap} \\ \frac{P(x) - P(b)}{\lambda(D)} = n}} P'(x). \end{equation} Observe that $\nu$ is supported on $[N]$. \begin{lemma}[Density transfer] \label{lem4.1} If $X$ and $N$ are sufficiently large in terms of $w$ and the fixed parameters in (\ref{eqn4.1}), then \[ \sum_{n \in \cA} \nu(n) \gg N. \] In particular, the implicit constant does not depend on $w$. \end{lemma} \begin{proof} Let $c=c(\delta)$ be a suitably small, positive constant. Then \[ \sum_{x \in A_{b,W\kap}} P'(x) \gg \sum_{y \le \frac{cX}{W\kap}} (W\kap y + b)^{d-1} \gg (W \kap)^{d-1} \left( \frac{X}{W\kap} \right)^d = \frac{X^d}{W\kap}. \] Whence, for $X$ sufficiently large, we have \[ (P'(b), W) \sum_{n \in \cA} \nu(n) = O((W \kap)^{d-1}) + \sum_{x \in A_{b,W\kap}} P'(x) \gg \frac{X^d}{W\kap}. 
\] From the definition of $\kappa$, we therefore conclude that \[ \sum_{n \in \cA} \nu(n) \gg \frac{X^d}{\lambda(D)} \gg N. \] \end{proof} Similarly \[ \lVert\nu \rVert_1 \asymp N. \] \section{Fourier decay}\label{sec5} Having introduced the weight function $\nu$, we study the properties of its Fourier transform $\hat{\nu}$ using the Hardy--Littlewood circle method. Throughout this section, we fix $\nu=\nu_b$ as given by (\ref{eqn4.3}), for some $b\in[W\kappa]$. The main result of this section is the Fourier decay estimate \begin{equation} \label{eqn5.1} \| \hat \nu - \hat 1_{[N]} \|_\infty \ll w^{\eps-1/d} N. \end{equation} \begin{remark} Although we made a judicious choice of $b$ in the previous section to establish Lemma \ref{lem4.1}, the results of this and the next section remain true for arbitrary $b$ satisfying $(P'(b),W) \mid \sqrt W$. In particular, these results do not make reference to any sets $A$ or $\cA$. \end{remark} For all $\alpha \in \bT$, we have \begin{align*} (P'(b), W) \hat \nu(\alp) &= \sum_{\substack{x \in (b,X] \\ x \equiv b \mmod{W \kap}}} P'(x) e \left(\alp \frac{P(x) - P(b)}{\lambda(D)} \right) \\ &= \sum_{W \kap y + b \in (b,X]} P'(W \kap y + b) e \left(\alp \frac{P(W \kap y + b) - P(b)} {\lambda(D)} \right). \end{align*} For $q \in \bN$, $a \in \bZ$ and $\bet \in \bR$, write \[ S(q,a) = \sum_{x \le q} e \left( \frac{a(P(W \kap x + b) - P(b))}{q\lambda(D)} \right), \qquad I(\bet) = \int_0^{N} e( \bet \gam) \d \gam. \] \begin{lemma} [Major arc asymptotic] \label{lem5.2} Let $q \in \bN$, $a \in \bZ$, and suppose $\| q \alp\| = |q \alp - a|$. Then \[ \hat \nu(\alp) = q^{-1} S(q,a) I(\alp - \tfrac{a}{q}) + O(X^{d-1}(q + N \| q \alp \| )). \] \end{lemma} \begin{proof} Put $\beta = \alp - \frac{a}{q}$. 
Breaking the sum into residue classes modulo $q$ yields \begin{align*} (P'(b), W) \hat \nu(\alp) = O((W \kap)^{d-1}) + \sum_{x \le q} \: \sum_{X_0 < z \le Y_0} & P'(W \kap qz + W \kap x + b) \\ & e \left( \left(\frac a q + \bet \right) \frac{P(W \kap qz + W \kap x + b) - P(b)}{\lambda(D)} \right), \end{align*} where \[ X_0 = \frac{-(W \kap x + b)}{W \kap q}, \qquad Y_0 = \frac{X - (W \kap x +b)}{W \kap q}. \] Taylor's theorem yields \[ P(W \kap qz + W \kap x + b) \equiv P(W \kap x + b) \mmod{q\lambda(D)}, \] so \begin{align*} (P'(b), W) \hat \nu(\alp) &= O((W \kap)^{d-1}) + \sum_{x \le q} e \left( \frac{a(P(W \kap x + b) - P(b))}{q\lambda(D)} \right) \sum_{X_0 < z \le Y_0} \phi_x(z), \end{align*} where \[ \phi_x(z) = P'(W \kap qz + W \kap x + b) e \left( \bet \frac{P(W \kap qz + W \kap x + b) - P(b)}{\lambda(D)} \right). \] By Euler--Maclaurin summation \cite[Equation (4.8)]{Vau1997}, we have \[ \sum_{X_0 < z \le Y_0} \phi_x(z) = \int_{X_0}^{Y_0} \phi_x(z) \d z + O(X^{d-1}(1 + N|\bet|)). \] The change of variables \[ \gam = \frac{P(W \kap qz + W \kap x + b) - P(b)} {\lambda(D)} \] now yields \[ \sum_{X_0 < z \le Y_0} \phi_x(z) = (P'(b),W) q^{-1} I(\bet) + O(X^{d-1}(1 + N|\bet|)), \] completing the proof. \end{proof} We have the standard bound \begin{equation} \label{eqn5.2} I(\bet) \ll \min \{ N, \| \bet \|^{-1} \}. \end{equation} Note that \[ S(q,a) = \sum_{x \le q} e_q (a \cP(x)), \] where \[ \cP(x) = \frac{ P(W \kap x + b) - P(b)}{\lambda(D)} =: \sum_{j \le d} v_j x^j \in \bZ[x]. \] \begin{lemma} \label{lem5.3} Suppose $(q,a) = 1$. Then \[ S(q,a) \ll q^{1 + \eps - 1/d}. \] Further, if $(q,W) > 1$ then $S(q,a) = 0$. Finally, if $q \ge 2$ then $q^{-1} S(q,a) \ll w^{\eps - 1/d}$. \end{lemma} \begin{proof} Write $q = q_1 q_2$, where $q_1$ is $w$-smooth and $(q_2, W) = 1$.
Then \begin{align*} S(q,a) &= \sum_{u_1 \le q_1} \sum_{u_2 \le q_2} e_{q_1 q_2} \left(a \cP(q_2 u_1 + q_1 u_2)\right) = \sum_{u_1 \le q_1} \sum_{u_2 \le q_2} e_{q_1 q_2} \left(a \sum_{j \le d} v_j (q_2 u_1 + q_1 u_2)^j \right) \\ &= \sum_{u_1 \le q_1} \sum_{u_2 \le q_2} e_{q_1 q_2} \left(a \sum_{j \le d} v_j ((q_2 u_1)^j + (q_1 u_2)^j) \right) = S(q_1,a_1) S(q_2,a_2), \end{align*} where \[ q_2 a_1 \equiv a \mmod{q_1}, \qquad q_1 a_2 \equiv a \mmod{q_2}. \] Put \[ h = (q_1,W), \qquad q_1 = hq', \qquad W = hW'. \] Then \begin{align*} S(q_1,a_1) &= \sum_{u_1 \le q'} \sum_{u_2 \le h} e_{hq'}(a_1 \cP(u_1 + q' u_2)) = \sum_{u_1 \le q'} \sum_{u_2 \le h} e_{hq'} \left(a_1 \sum_{j \le d} v_j (u_1 + q' u_2)^j \right). \end{align*} As \[ W \mid v_j \qquad (2 \le j \le d), \] we have \[ S(q_1,a_1) = \sum_{u_1 \le q'} e_{q_1}(a_1 \cP(u_1)) \sum_{u_2 \le h} e_h(a_1 v_1 u_2). \] Observe that $v_1 = \frac{P'(b)}{(P'(b),W)}$, and recall that $(P'(b),W) \mid V = \sqrt W$. For each prime $p \le w$, we have $\ord_p(P'(b)) < \ord_p(W)$, so $\ord_p(v_1) = 0$. Thus $(v_1,W) = 1$, and in particular $(h,a_1v_1) = 1$. Hence \[ S(q_1,a_1) = \begin{cases} 1, &\text{if }q_1 = 1 \\ 0, &\text{if } q_1 \ne 1. \end{cases} \] This shows that if $(q,W) > 1$ then $S(q,a) = 0$. Next, we estimate \[ S(q_2,a_2) = \sum_{x \le q_2} e_{q_2} \left(a_2 \sum_{j \le d} v_j x^j \right). \] The binomial theorem tells us that \[ v_d = \frac{\ell_P (W \kap)^d}{\lam(D)} = \frac{\ell_P (W \kap)^{d-1}}{(P'(b),W)}, \] where $\ell_P$ is the leading coefficient of $P$. As $(q_2,W) = 1$, we have in particular $(v_d,q_2) \ll 1$. Thus, by periodicity and \cite[Theorem 7.1]{Vau1997}, we have \[ |S(q,a)| \le |S(q_2,a_2)| \ll q_2^{1+\eps-1/d} \le q^{1+\eps-1/d}. \] If $q \ge 2$ and $S(q,a) \ne 0$ then $q_1 = 1$ and $q_2 \ge 2$, whereupon $q_2 > w$ and \[ q^{-1} S(q,a) = q_2^{-1} S(q_2,a_2) \ll q_2^{\eps-1/d} < w^{\eps-1/d}. \] \end{proof} We establish \eqref{eqn5.1} using the circle method. 
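Before dissecting $\bT$ into major and minor arcs, we pause to record a standard example, included for orientation only, which shows that the exponent $1/d$ in Lemma \ref{lem5.3} cannot be improved in general. In the model case $d=2$, the classical evaluation of quadratic Gauss sums gives
\begin{equation*}
\Big| \sum_{x \le q} e_q(ax^2) \Big| = q^{1/2}
\end{equation*}
whenever $q$ is odd and $(q,a)=1$, which matches the bound $q^{1+\eps-1/d}$ of Lemma \ref{lem5.3} up to the factor $q^{\eps}$.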
Put $\tau = 1/100$ and $Q = X^\tau$. For coprime $q,a \in \bZ$ such that $0 \le a \le q \le Q$, define \[ \fM(q,a) = \{ \alp \in \bT: |\alp - a/q| \le Q/N \}. \] Let $\fM$ be the union of the sets $\fM(q,a)$, and put $\fm = \bT \setminus \fM$. First suppose $\alp \in \fm$. By Dirichlet's approximation theorem (see \cite[Lemma 2.1]{Vau1997}), there exist coprime $q \in \bN$ and $a \in \bZ$ such that $q \le Q$ and $|\alp - a/q| \le (q Q)^{-1}$. As $\alp \in \fm$, we must also have $|q\alp - a| > q Q/N$, so \[ \hat 1_{[N]}(\alp) \ll \| \alp \|^{-1} \le \frac{q}{\| q \alp \|} = \frac{q}{|q \alp - a|} < \frac{N}{Q}. \] By partial summation, we have \[ (P'(b),W) \hat \nu(\alp) \ll X^{d-1} \sup_{Y \le X/(W\kap)} \left| \sum_{y \le Y} e(\alp \cP(y)) \right|. \] By Dirichlet's approximation theorem, there exist coprime $v \in \bN$ and $u \in \bZ$ such that $v \le N/Q$ and $|\alp - u/v| \le Q/(vN)$. As $\alp \in \fm$, we must have $v > Q$. As $v_d \ll_w 1$, Weyl's inequality in the form \cite[Proposition 4.14]{Ove2014} yields \[ \sum_{y \le Y} e(\alp \cP(y)) \ll_w Y^{1+\eps}(Y^{-1} + v^{-1} + vY^{-d})^{2^{1-d}} \ll X^{1+\eps-\tau 2^{1-d}}. \] As $X$ is large in terms of $w$, we thus have \[ \hat \nu(\alp) \ll N^{1+\eps-2^{1-d}/(100d)} \ll w^{\eps-1/d}N, \] wherein the implied constants do not depend on $w$, and hence \begin{equation} \label{eqn5.3} \hat \nu(\alp) - \hat 1_{[N]}(\alp) \ll w^{\eps-1/d} N. \end{equation} Next, suppose $a \in \{0,1\}$ and $\alp \in \fM(1,a)$. Then, by Lemma \ref{lem5.2}, we have \[ \hat \nu(\alp) = I(\alp) + O(X^{d-1+\tau}). \] Euler--Maclaurin summation yields \[ I(\alp) - \hat 1_{[N]}(\alp) \ll 1 + N \| \alp \| \ll Q, \] so by the triangle inequality \[ \hat \nu(\alp) - \hat 1_{[N]}(\alp) \ll X^{d-1 + \tau} \ll w^{\eps-1/d} N. \] Finally, suppose $\alp \in \fM(q,a)$ with $q \ge 2$. Then $\| \alp \| \ge q^{-1} - |\alp - a/q| \gg q^{-1}$, so \[ \hat 1_{[N]}(\alp) \ll \| \alp \|^{-1} \ll Q.
\] By Lemmas \ref{lem5.2} and \ref{lem5.3}, as well as \eqref{eqn5.2}, we have \[ \hat \nu(\alp) \ll w^{\eps-1/d} N. \] Thus, we again have \eqref{eqn5.3}. We have secured \eqref{eqn5.3} in all cases, completing the proof of \eqref{eqn5.1}. We record, for later use, the following bounds that arose above. \begin{lemma} \label{lem5.4} For $\tau=1/100$, we have \[ \hat \nu(\alp) \ll N^{1+\eps-2^{1-d}/(100d)} \qquad (\alp \in \fm) \] and \[ \hat \nu(\alp) \ll q^{\eps-1/d} \min \{N, \| \alp - a/q \|^{-1} \} + X^{d-1+2\tau} \qquad (\alp \in \fM(q,a) \subset \fM). \] \end{lemma} \section{Restriction estimates}\label{sec6} Continuing our study of $\nu=\nu_b$ for fixed $b$, in this section we establish restriction estimates for $\nu$. We also obtain restriction estimates for a related weight function $\mu_D$ corresponding to the auxiliary polynomial $P_D$. \subsection{Restriction for \texorpdfstring{$\nu$}{nu}} Recall that $T=T(d)\in\N$ is as defined in \S\ref{sec3}. \begin{lemma} \label{lem6.1} Let $E > 2T$ be real, and let $\phi: \bZ \to \bC$ with $|\phi| \le \nu$. Then \[ \int_\bT |\hat \phi(\alp)|^E \d \alp \ll_E N^{E-1}. \] \end{lemma} \begin{proof} Note that $\lVert \phi\rVert_{\infty}\leqslant\lVert\nu\rVert_{\infty}\ll X^{d-1}$. Hence, by orthogonality and the triangle inequality, we have \begin{align*} \int_\bT |\hat \phi(\alp)|^{2T} \d \alp &= \sum_{n_1 + \cdots + n_T = n_{T+1} + \cdots + n_{2T}} \phi(n_1) \cdots \phi(n_T) \overline{\phi(n_{T+1}) \cdots \phi(n_{2T})} \\ &\leqslant \lVert \phi\rVert_{\infty}^{2T} \sum_{\substack{x_1,\ldots,x_{2T} \in [X] \\ P(x_1) + \cdots + P(x_T) = P(x_{T+1}) + \cdots + P(x_{2T})}}1 \\ &\ll X^{2T(d-1)+2T-d+\eps} \ll N^{2T-1+\eps}. \end{align*} Let $u = (2T + E)/2$, in order to be sure that $u > 2d$. As \[ \| \hat \phi \|_\infty \le \| \nu \|_1 \ll N, \] we have \[ \int_\bT |\hat \phi(\alp)|^u \d \alp \ll N^{u-1+\eps}.
\] To complete the proof, we insert this almost-sharp moment estimate, together with the ingredients in Lemma \ref{lem5.4}, into the general epsilon-removal lemma \cite[Lemma 25]{Sal2020}. \end{proof} \begin{lemma} \label{lem6.2} Let $E > 2T$ be real, and let $\phi: \bZ \to \bC$ with $|\phi| \le \nu + 1_{[N]}$. Then \[ \int_\bT |\hat \phi(\alp)|^E \d \alp \ll_E N^{E-1}. \] \end{lemma} \begin{proof} We decompose $\phi = \phi_1 + \phi_2$, where $|\phi_1| \le \nu$ and $|\phi_2| \le 1_{[N]}$. Then \begin{align*} \int_\bT |\hat \phi_2(\alp)|^E \d \alp &\le N^{E-2T} \int_\bT |\hat \phi_2(\alp)|^{2T} \d \alp \\ &\le N^{E-2T} \sum_{n_1 + \cdots + n_T = n_{T+1} + \cdots + n_{2T}} \phi_2(n_1) \cdots \phi_2(n_T) \overline{\phi_2(n_{T+1}) \cdots \phi_2(n_{2T})} \\ & \le N^{E-1}. \end{align*} By Lemma \ref{lem6.1} and the triangle inequality, we thus have \[ \int_\bT |\hat \phi(\alp)|^E \d \alp \ll_E \int_\bT |\hat \phi_1(\alp)|^E \d \alp + \int_\bT |\hat \phi_2(\alp)|^E \d \alp \ll_E N^{E-1}. \] \end{proof} \subsection{Restriction for \texorpdfstring{$P_D$}{PD}} In this subsection, we fix some $D\in\N$, and let $N,Z\in\N$ be as in (\ref{eqn4.2}), for some $X$ which is sufficiently large relative to $P$ and $D$. Recalling (\ref{eqn3.5}), define \begin{equation}\label{eqn6.1} \mu_D: \bZ \to \bC, \qquad \mu_D(n) = \frac{N}{Z} \sum_{\substack{z \le Z \\ P_D(z) = n}} 1 =\frac{N}{Z}1_{P_{D}([Z])}(n). \end{equation} Note that $\mu_D$ is supported on $[P_{D}(Z)]=[N]$, and that \[ \| \mu_D \|_1 = N, \qquad \hat \mu_D(\alp) = \frac{N}{Z}\sum_{z \le Z} e(\alp P_D(z)). \] The purpose of this subsection is to establish the following restriction estimate for $\mu_D$, which is analogous to the bound obtained in Lemma \ref{lem6.1} for $\nu$. \begin{lemma} \label{lem6.3} Let $E > 2T$ be real, and let $\phi: \bZ \to \bC$ with $|\phi| \le \mu_D$. Then \[ \int_\bT |\hat \phi(\alp)|^E \d \alp \ll_E N^{E-1}. 
\] \end{lemma} This is more subtle than Lemma \ref{lem6.1}, as it requires us to extract savings depending on the size of the coefficients of $P_D$. Indeed, it would be false if $\gcd(P_D -P_D(0))$ were large. However, we know from \cite[Lemma 28]{Luc2006} that \begin{equation} \label{eqn6.2} \gcd(P_D - P_D(0)) \ll_P 1. \end{equation} Our proof of Lemma \ref{lem6.3} proceeds along similar lines to that of Lemma \ref{lem6.1}. Orthogonality yields \begin{align*} \int_\bT |\hat \phi(\alp)|^{2T} \d \alp &= (N/Z)^{2T} \sum_{\substack{z_1,\ldots,z_{2T} \le Z \\ P_D(z_1) + \cdots + P_D(z_T) = P_D(z_{T+1}) + \cdots + P_D(z_{2T})}} 1 \\ &\le (N/Z)^{2T} \sum_{\substack{x_1,\ldots,x_{2T} \le X \\ P(x_1) + \cdots + P(x_T) = P(x_{T+1}) + \cdots + P(x_{2T})}} 1 \ll (N/Z)^{2T} X^{2T - d + \eps} \\ &\ll (N/Z)^{2T} (DZ)^{2T} (DZ)^{\eps - d} \ll N^{2T} D^{2T} N^{(\eps-d)/d} = D^{2T} N^{2T - 1 + \eps/d}. \end{align*} As $N$ is arbitrarily large compared to $D$, we thus have \[ \int_\bT |\hat \phi(\alp)|^{2T} \d \alp \ll N^{2T - 1 + \eps}, \] and the implied constant does not depend on $D$. Let $u = (2T + E)/2$, in order to be sure that $u > 2d$. Since \[ \| \hat \phi \|_\infty \le \| \mu_D \|_1 = N, \] we have \begin{equation} \label{eqn6.3} \int_\bT |\hat \phi(\alp)|^u \d \alp \ll N^{u-1+\eps}. \end{equation} \bigskip Suppose $\alp \in \fm$. By Dirichlet's approximation theorem, there exist coprime $v \in \bN$ and $c \in \bZ$ such that $v \le N/Q$ and $|\alp - c/v| \le Q/(vN)$. As $\alp \in \fm$, we must also have $v > Q$. The leading coefficient of $P_D$ is \[ \frac{\ell_P D^d}{\lam(D)} \ll_D 1, \] where $\ell_P$ is the leading coefficient of $P$. Hence, Weyl's inequality in the form \cite[Proposition 4.14]{Ove2014} gives \[ \hat \mu_D(\alp) \ll_D N^{1+\eps}(v^{-1} + Z^{-1} + vZ^{-d})^{2^{1-d}} \ll N^{1+\eps} (D^d/Q)^{2^{1-d}}.
\] Since $N$ is arbitrarily large compared to $D$, we have \begin{equation} \label{eqn6.4} \hat \mu_D(\alp) \ll N^{1+\eps} Q^{-2^{1-d}}, \end{equation} and the implied constant does not depend on $D$. \bigskip We come to the major arcs. For $q \in \bN$, $a \in \bZ$ and $\bet \in \bR$, define \[ S_D(q,a) = \sum_{x \le q} e_q(a P_D(x)), \qquad I_D(\bet) = \frac{N}{Z} \int_0^Z e(\bet P_D(z)) \d z. \] \begin{lemma}\label{lem6.4} Let $q \in \bN$, $a \in \bZ$, and suppose $\|q \alp\| = |q \alp - a|$. Let $\bet = \alp - \frac{a}{q}$. Then \[ \hat \mu_D(\alp) = q^{-1} S_D(q,a) I_D(\bet) + O((q + N \| q \alp \|) N/Z). \] \end{lemma} \begin{proof} Breaking the sum into residue classes modulo $q$ yields \[ \hat \mu_D(\alp) = \frac{N}{Z} \sum_{y \le q} \sum_{X_0 < x \le Y_0} e(\alp P_D(qx + y)), \] where \[ X_0 = -y/q, \qquad Y_0 = (Z-y)/q. \] By periodicity, we have $e_q(aP_{D}(qx+y))=e_q(aP_{D}(y))$, whence \[ \hat \mu_D (\alp) = \frac{N}{Z} \sum_{y \le q} e_q(a P_D(y)) \sum_{X_0 < x \le Y_0} e(\bet P_D(q x + y)). \] Using Euler--Maclaurin summation, we find that \begin{align*} &\sum_{X_0 < x \le Y_0} e(\bet P_D(q x + y)) - \int_{X_0}^{Y_0} e(\bet P_D(q x + y)) \d x \\ &\ll 1 + \frac{ Y_0 q |\bet| Z^{d-1} D^d} {\lam(D)} \ll 1 + \frac{Z|\bet| X^{d-1}D} {\lam(D)} \ll 1 + N |\bet|. \end{align*} A change of variables gives \[ \int_{X_0}^{Y_0} e(\bet P_D(q x + y)) \d x = \frac{Z}{Nq} I_D(\bet), \] completing the proof. \end{proof} \begin{lemma}\label{lem6.5} Suppose $(q,a) = 1$. Then \[ S_D(q,a) \ll q^{1+\eps-1/d}. \] \end{lemma} \begin{proof} In view of \eqref{eqn6.2}, this follows from periodicity and \cite[Theorem 7.1]{Vau1997}. \end{proof} \begin{lemma}\label{lem6.6} We have \[ I_D(\bet) \ll N(1 + N \| \bet \|)^{-1/d}. \] \end{lemma} \begin{proof} The leading coefficient of $P_D$ is $\ell_P D^d/\lam(D)$, so by \cite[Theorem 7.3]{Vau1997} we have \[ I_D(\bet) \ll N(1 + Z^d D^d \| \bet \| / \lam(D))^{-1/d}. 
\] The claimed bound follows upon noting that \[ \frac{(ZD)^d}{\lam(D)} \asymp \frac{P(X)}{\lam(D)} = P_D(Z) = N. \] \end{proof} \begin{proof}[Proof of Lemma \ref{lem6.3}] Combining the results of Lemmas \ref{lem6.4}, \ref{lem6.5} and \ref{lem6.6} furnishes \begin{equation} \label{eqn6.5} \hat \mu_D (\alp) \ll \frac{q^{\eps-1/d} N}{(1 + N |\alp - a/q|)^{1/d}} + DNX^{2\tau-1} \qquad (\alp \in \fM(q,a) \subset \fM). \end{equation} Finally, observe from its proof that \cite[Lemma 25]{Sal2020} holds with $\| \alp - a/q \|^\kap$ in place of $\| \alp - a/q \|$ in its third assumption. Inserting \eqref{eqn6.3}, \eqref{eqn6.4} and \eqref{eqn6.5} into this, applied with $\kap = d^{-1} - \eps$, completes the proof. \end{proof} \section{The transference principle}\label{sec7} Let $s \ge 1$ and $t \ge 0$ be integers such that $s + t \ge s_0(d)$, where $s_0(d)$ is as defined in \eqref{eqn3.3}. For finitely supported $f_1 ,\ldots, f_s: \Z \to \R$ and $h_1 ,\ldots, h_t: \Z \to \R$, define \begin{equation}\label{eqn7.1} \Phi(f_1 ,\ldots, f_s;h_1,\ldots,h_t) = \sum_{L_1(\bn) = L_2(\mathbf{m})} f_1(n_1)\cdots f_s(n_s)h_1(m_1)\cdots h_t(m_t). \end{equation} For finitely supported $f,h: \bZ \to \bR$, we abbreviate \[ \Phi(f_1,\ldots, f_s;h) := \Phi(f_1,\ldots,f_s;h,\ldots,h), \qquad \Phi(f;h) := \Phi(f,\ldots,f;h,\ldots,h). \] Given a finite set of integers $A$, we also write $\Phi(A;h):=\Phi(1_A;h)$. We begin by showing that the size of counting operator $\Phi(f_1,\ldots,f_s;h)$ is controlled by the size of the Fourier coefficients of each of the $f_j$. Write \[ L_1(\bx) = a_1 x_1 + \cdots + a_s x_s, \qquad L_2(\bx) = c_1 x_1 + \cdots + c_t x_t. \] \begin{lemma} [Fourier control] \label{lem7.1} Let $f_{1},\ldots,f_s :\Z\to\R$ and $h:\Z\to\R$. If $|h| \leqslant \mu_D$ and $|f_j| \leqslant \nu + 1_{[N]}$ for all $j\in[s]$, then \[ \Phi(f_1,\ldots,f_s;h) \ll N^{s+t-1}\prod_{j\leqslant s}(\| \hat f_j \|_\infty/N)^{1/(2s+2t)} . 
\] \end{lemma} \begin{proof} By orthogonality, H\"older's inequality, and periodicity, we have \begin{align*} |\Phi(f_1,\ldots,f_s;h)| &= \left| \int_\bT \prod_{j \le s} \hat f_j(a_j \alp) \cdot \prod_{\ell \le t} \hat h(-c_\ell \alp) \d \alp \right| \\ &\le \prod_{j \le s} \left( \int_\bT |\hat f_j (a_j \alp)|^{s+t} \d \alp\right)^{1/(s+t)} \cdot \prod_{\ell \le t} \left( \int_\bT |\hat h (-c_\ell \alp)|^{s+t} \d \alp\right)^{1/(s+t)} \\ &= \prod_{j \le s} \left( \int_\bT |\hat f_j(\alp)|^{s+t} \d \alp\right)^{1/(s+t)} \cdot \left( \int_\bT |\hat h (\alp)|^{s+t} \d \alp\right)^{t/(s+t)} \\ &\le \left( \int_\bT |\hat h (\alp)|^{s+t} \d \alp\right)^{t/(s+t)} \prod_{j \leqslant s} \left( \|\hat f_j\|_\infty^{1/2} \int_\bT |\hat f_j (\alp)|^{s+t-1/2} \d \alp\right)^{1/(s+t)}. \end{align*} Lemmas \ref{lem6.2} and \ref{lem6.3} now give \begin{align*} \Phi(f_1,\ldots,f_s;h) &\ll (N^{s+t-1})^{t/(s+t)}\prod_{j\leqslant s}\left(\|\hat f_j\|_\infty^{1/(2s+2t)}N^{(s+t-3/2)/(s+t)} \right) \\ &= N^{s+t-1}\prod_{j\leqslant s}(\| \hat f_j \|_\infty/N)^{1/(2s+2t)}. \end{align*} \end{proof} \begin{proof}[Proof of Theorem \ref{thm3.4} given Theorem \ref{thm3.8}] We fix the parameters $\delta,r,L_1,L_2,P$, as we did at the start of \S\ref{sec4}, and allow all forthcoming implicit constants to depend on these parameters. Let $\tilde{\delta}\in(0,1)$ be sufficiently small in terms of these parameters. We also choose $w\in\N$ to be sufficiently large in terms of the fixed parameters, and define $W$ and $D=W^2$ as in \S\ref{sec4}. Let $Z$ and $N$ be defined by (\ref{eqn4.2}). We begin by addressing the assumption that the quantity $Z$ defined in (\ref{eqn4.2}) is a positive integer, which is equivalent to requiring $D$ to divide $X-r_D$. If this is not the case, then we replace $X$ with $X'=X-m$, where $m\in[D]$ is chosen such that $D$ divides $X'-r_D$. 
Provided that $X$ is sufficiently large relative to $D$ and $\delta$, we have $(X/2)<X'\leqslant X$, and every $A\subseteq[X]$ with $|A|\geqslant\delta X$ satisfies $|A\cap [X']|\geqslant |A|-D\geqslant (\delta/2)X'$. Hence, by replacing $(X,\delta)$ with $(X',\delta/2)$, we may henceforth assume that $Z\in\N$. \bigskip Let $[X]=\cC_1\cup\cdots\cup\cC_r$ and set \begin{equation*} \tilde \cC_i := \{ z \in [Z]: r_D + D z \in \cC_i \} \quad (1\leqslant i\leqslant r). \end{equation*} Let $k \in [r]$ be the index provided by applying Theorem \ref{thm3.8} with respect to the colouring $[Z]=\tilde{\cC}_1 \cup\cdots\cup\tilde{\cC}_r$ and with $\tilde{\delta}$ in place of $\delta$. Our goal is to show that this $k\in[r]$ satisfies the conclusion of Theorem \ref{thm3.4}. In view of the remarks following the statement of Theorem \ref{thm3.8}, and since $D$ is ultimately bounded above in terms of the fixed parameters only, we note that \begin{equation*} |\cC_k|\geqslant |\tilde{\cC}_k| \gg Z \gg X. \end{equation*} Let $A\subseteq[X]$ satisfy $|A|\geqslant \delta X$, and define $\kappa,b,\cA$ and $\nu=\nu_b$ as in \S\ref{sec4}. In particular, recall that $b\in[W\kappa]$ is chosen to ensure that Lemma \ref{lem4.1} holds. Let \[ f = \nu 1_{\cA}, \qquad h_i(n) = \frac{N}{Z}\sum_{\substack{z \in \tilde{\cC}_i \\ P_D(z) = n}} 1 \quad (1\leqslant i\leqslant r). \] In light of \eqref{eqn3.5}, the function $h_i$ is supported on $[N]$. Recalling the Fourier decay estimate (\ref{eqn5.1}), the dense model lemma \cite[Theorem 5.1]{Pre2017} provides a function $g$ such that \[ 0 \le g \le 1_{[N]}, \quad \| \hat f - \hat g \|_\infty \ll (\log w)^{-3/2} N. \] For $\ell \in [s]$, write $\bu^{(\ell)} = (u^{(\ell)}_1,\ldots,u^{(\ell)}_s)$, where \[ u^{(\ell)}_j = \begin{cases} g, &\text{if } j < \ell \\ f - g, &\text{if } j = \ell \\ f, &\text{if } j > \ell.
\end{cases} \] By the telescoping identity and Lemma \ref{lem7.1}, we now have \begin{align*} \Phi(f;h_i) - \Phi(g;h_i) &= \sum_{\ell \le s} \Phi(\bu^{(\ell)}; h_i) \ll (\log w)^{-3/(4s+4t)} N^{s+t-1} \qquad (1 \le i \le r). \end{align*} Recall from Lemma \ref{lem4.1} that \[ \sum_{n \in \bZ} f(n) \gg N. \] As $\hat f(0) - \hat g(0) \ll (\log w)^{-3/2} N$, for $w$ sufficiently large, it follows that \[ \sum_{n \in \bZ} g(n) \gg N. \] Let $c$ be a small positive constant, which depends only on the fixed parameters, and set \[ \tilde \cA = \{ n \in \bZ: g(n) \ge c \}. \] By the popularity principle (see \cite[Exercise 1.1.4]{TV2006}), we have $|\tilde \cA| \gg N$. In particular, provided $\tilde{\delta}$ is sufficiently small, we can ensure that $|\tilde\cA| \geqslant \tilde{\delta}N$. Thus, Theorem \ref{thm3.8} informs us that \[ \Phi(\tilde \cA;h_k) \gg N^{s+t-1}. \] We therefore have $\Phi(g;h_k) \gg N^{s+t-1}$, whence $\Phi(f;h_k) \gg N^{s+t-1}$, and finally \begin{align*} |\{ (\bx,\by) \in A^s \times\cC_k^t: L_1(P(\bx)) = L_2(P(\by)) \}| & \geqslant \lVert f\rVert_\infty ^{-s}\lVert h_k\rVert_\infty ^{-t}\Phi(f; h_k) \\ &\gg_w (X^{1-d})^s (Z/N)^t N^{s+t-1} \\ &\gg_w X^{s+t-d}. \end{align*} Since $w= O_{\delta,r,L_1,L_2,P}(1)$, the proof is complete. \end{proof} \section{Arithmetic regularity}\label{sec8} In this section, we prove Theorem \ref{thm3.8} using the arithmetic regularity lemma. This lemma, originally due to Green \cite{Gre2005B}, allows one to decompose the indicator function $1_\cA$ of a dense set $\cA\subseteq[N]$ as $1_{\cA}=f_{\str}+f_{\sml}+f_{\unf}$, for some `structured' function $f_{\str}:[N]\to[0,1]$ and some `small' functions $f_{\sml},f_{\unf}:[N]\to[-1,1]$. The upshot is that, after some careful analysis, we can count solutions to $L_1(\bn)=L_2(P_D(\bz))$ with $\bn\in\cA^s$ by instead counting solutions with the $n_i$ weighted by $f_{\str}$.
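As an aside, the telescoping identity used in the proof above is a purely formal fact about multilinear forms: $M(f,\ldots,f)-M(g,\ldots,g)=\sum_{\ell}M(\bu^{(\ell)})$, where $\bu^{(\ell)}$ has $g$ in slots $j<\ell$, $f-g$ in slot $j=\ell$, and $f$ in slots $j>\ell$. The following Python sketch (an illustration only; the toy trilinear form $M$ and all parameter choices are ours, not from the paper) verifies the identity exactly in integer arithmetic.

```python
# Illustration of the telescoping identity for a multilinear form M:
#     M(f,...,f) - M(g,...,g) = sum_l M(u^(l)),
# where u^(l) has g in slots j < l, f - g in slot j = l, and f in slots j > l.
import random

random.seed(1)
N, s = 7, 3

def M(f1, f2, f3):
    # A toy trilinear form: sum over n1 + n2 + n3 ≡ 0 (mod N).
    return sum(f1[n1] * f2[n2] * f3[(-n1 - n2) % N]
               for n1 in range(N) for n2 in range(N))

f = [random.randint(-5, 5) for _ in range(N)]
g = [random.randint(-5, 5) for _ in range(N)]
fg = [a - b for a, b in zip(f, g)]  # the difference f - g

telescoped = 0
for l in range(s):
    slots = [g] * l + [fg] + [f] * (s - 1 - l)
    telescoped += M(*slots)

# Exact identity in integer arithmetic.
assert M(f, f, f) - M(g, g, g) == telescoped
```

Each summand cancels against its neighbour, which is why the bound on $\Phi(\bu^{(\ell)};h_i)$ for a single "difference" slot controls the whole difference $\Phi(f;h_i)-\Phi(g;h_i)$.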
This new counting problem can be addressed directly by exploiting the `almost-periodicity' of the function $f_{\str}$. One issue with this approach is that Theorem \ref{thm3.8} requires us to find a colour class $\cC_k$ which delivers the conclusion (\ref{eqn3.10}) for all $\delta$-dense sets $\cA\subseteq[N]$ simultaneously. Unfortunately, the arithmetic regularity lemma is not well-suited to decomposing a potentially unbounded collection of indicator functions $1_{\cA}$ in such a way that we obtain a consistent structure for each of the corresponding functions $f_{\str}$. Instead, as in the work of Prendiville \cite[\S3]{Pre2021}, we fix an arbitrary finite collection of dense sets $\cA_1,\ldots,\cA_r\subseteq[N]$, which can then be decomposed simultaneously, and find a colour class $\cC_k$ for which (\ref{eqn3.10}) holds for all $\cA\in\{\cA_1,\ldots,\cA_r\}$. This delivers the following variation of Theorem \ref{thm3.8}. \begin{thm}\label{thm8.1} Let $d$ and $r$ be positive integers, and let $0<\delta<1$ be a real number. Let $P$ be an intersective integer polynomial of degree $d$ which satisfies (\ref{eqn3.5}). Let $s \ge 1$ and $t \ge 0$ be integers such that $s + t \ge s_0(d)$. Let $L_1(\bx) \in \bZ[x_1,\ldots,x_s]$ be a non-degenerate linear form for which $\gcd(L_1) = 1$ and $L_1(1,\ldots,1)=0$, and let $L_2(\by) \in \bZ[y_1,\ldots,y_t]$ be a non-degenerate linear form. Let $D, Z \in \bN$ satisfy $Z \ge Z_0(D, r, \del, L_1, L_2, P)$, and set $N:=P_D(Z)$. Let \[ \cA_i \subseteq [N], \quad |\cA_i| \ge \del N \qquad (1 \le i \le r). \] If $[Z] = \cC_1 \cup \cdots \cup \cC_r$, then there exists $k \in [r]$ such that \begin{equation}\label{eqn8.1} \# \{ (\bn,\bz) \in \cA_i^s \times \cC_k^t: L_1(\bn) = L_2(P_D(\bz)) \} \gg N^{s-1} Z^t \qquad (1 \le i \le r). \end{equation} The implied constant may depend on $L_1, L_2, P, r,$ and $\del$, but does not depend on $D$. 
\end{thm} Although Theorem \ref{thm8.1} may seem weaker than Theorem \ref{thm3.8}, they are in fact equivalent. This may be proved directly; however, as the argument may be applicable in other contexts, we instead encapsulate the proof strategy in the following combinatorial result, for which we have not found a reference. This lemma is essentially a finite version of the axiom of choice. \begin{lemma}\label{lem8.2} Let $U,V$ be non-empty sets such that $V$ is finite. Let $E\subseteq U\times V$. Suppose that for every $S\subseteq U$ with $|S|\leqslant |V|$ there exists $v\in V$ such that $(s,v)\in E$ for all $s\in S$. Then there exists $x\in V$ such that $(u,x)\in E$ for all $u\in U$. \end{lemma} \begin{proof} Suppose that for each $x\in V$ there exists $s_x \in U$ such that $(s_x , x)\notin E$. Taking $S=\{s_{x}:x\in V\}$ establishes the contrapositive. \end{proof} \begin{proof}[Proof of Theorem \ref{thm3.8} given Theorem \ref{thm8.1}] In view of Proposition \ref{prop3.10}, it suffices to consider only the case where $\gcd(L_1)=1$. Let $V=[r]$ and set $U=\{\cA\subseteq[N]:|\cA|\geqslant\delta N\}$. Let $E$ denote the set of pairs $(\cA,k)\in U\times V$ such that the inequality (\ref{eqn8.1}) holds for $\cC_k$ with $\cA_i=\cA$ for all $i$. We conclude from Theorem \ref{thm8.1} and Lemma \ref{lem8.2} that Theorem \ref{thm3.8} holds with the implicit constant in (\ref{eqn3.10}) equal to the one in (\ref{eqn8.1}), and with the same $Z_{0}(D,r,\delta,L_1,L_2,P)$. \end{proof} \subsection{The arithmetic regularity lemma} We now introduce the version of the arithmetic regularity lemma that we use to prove Theorem \ref{thm8.1}. In the sequel, we write $\bT^K$ for the $K$-dimensional torus $(\bR/\bZ)^K$. This is equipped with a metric $(\balpha,\bbeta)\mapsto\lVert \balpha - \bbeta\rVert$, where \begin{equation*} \lVert \btheta\rVert := \max_{1\leqslant i\leqslant K}\min_{n\in\bZ}|\theta_i - n| \qquad (\btheta=(\theta_1,\ldots,\theta_K) \in\T^K).
\end{equation*} This allows us to define \emph{Lipschitz functions} on $\T^K$. Given a positive real number $H$, a function $F:\T^{K}\to\R$ is \emph{$H$-Lipschitz} if \begin{equation*} |F(\balpha) - F(\bbeta)| \leqslant H\lVert \balpha - \bbeta\rVert \qquad (\balpha,\bbeta \in\T^K). \end{equation*} \begin{lemma}[Arithmetic regularity lemma]\label{lem8.3} Let $r\in\N$, $\sig>0$, and let $\cF:\R_{\geqslant 0}\to\R_{\geqslant 0}$ be a monotone increasing function. Then there exists a positive integer $K_{0}(r;\sig,\cF)\in\N$ such that the following is true. Let $N\in\N$ and $f_{1},\ldots,f_r:[N]\to[0,1]$. Then there is a positive integer $K\leqslant K_{0}(r;\sig,\cF)$ and a phase $\btheta\in\T^{K}$ such that, for every $i\in[r]$, there is a decomposition \begin{equation*} f_{i}=f_{\str}^{(i)}+f_{\sml}^{(i)}+f_{\unf}^{(i)} \end{equation*} of $f_{i}$ into functions $f_{\str}^{(i)},f_{\sml}^{(i)},f_{\unf}^{(i)}:[N]\to[-1,1]$ with the following stipulations. \begin{enumerate}[\upshape(I)] \item\label{itemNon} The functions $f_{\str}^{(i)}$ and $f_{\str}^{(i)}+f_{\sml}^{(i)}$ take values in $[0,1]$. \item The function $f_{\sml}^{(i)}$ obeys the bound $\lVert f_{\sml}^{(i)}\rVert_{L^{2}(\Z)}\leqslant\sig\lVert 1_{[N]}\rVert_{L^{2}(\Z)}$. \item The function $f_{\unf}^{(i)}$ obeys the bound $\lVert \hat{f}_{\unf}^{(i)}\rVert_{\infty}\leqslant\lVert \hat{1}_{[N]}\rVert_{\infty}/\cF(K)$. \item\label{itemSum} The function $f_{\str}^{(i)}$ satisfies $\sum_{m=1}^{N}(f_{i}-f_{\str}^{(i)})(m)=0$. \item\label{itemStr} There exists a $K$-Lipschitz function $F_{i}:\T^{K}\to[0,1]$ such that $F_{i}(x\btheta )=f_{\str}^{(i)}(x)$ for all $x\in[N]$. \end{enumerate} \end{lemma} \begin{proof} This is essentially \cite[Lemma 3.3]{Pre2021} and can be proved using the methods of \cite[Theorem 1.2.11]{Tao2012} or \cite[Theorem 5]{Ebe2016} (see also \cite[Lemma 3]{Sal2020}). 
For the convenience of the reader, with reference to the arguments and notation of \cite{Tao2012}, we outline the minor modifications one needs to make to obtain the required result. Let $\cF_{0}:\R_{\geqslant 0}\to\R_{\geqslant 0}$ be defined by $\cF_{0}(x):=\cF(rx)$. By following the iterative procedure given in the proof of \cite[Theorem 1.2.11]{Tao2012} (with $F$ replaced by $\cF_0$), one obtains a sequence of factors $\cB_1^{(i)}\subset \cB_2^{(i)}\subset\ldots$ for each $i\in[r]$. The energies $\lVert\mathbf{E}(f_i|\cB^{(i)}_{1})\rVert_{L^{2}([N])}^2$, $\lVert\mathbf{E}(f_i|\cB^{(i)}_{2})\rVert_{L^{2}([N])}^2, \ldots$ are monotone increasing between $0$ and $1$, so it follows from the pigeonhole principle that there exists $k\ll r\sig^{-2}$ such that \begin{equation*} \max_{1\leqslant i\leqslant r}\left( \lVert\mathbf{E}(f_i|\cB^{(i)}_{k+1})\rVert_{L^{2}([N])}^2 -\lVert\mathbf{E}(f_i|\cB^{(i)}_{k})\rVert_{L^{2}([N])}^2\right)\leqslant \sig^{2}. \end{equation*} The choice of factors $\cB_j^{(i)}$ delivered by this argument then shows that, upon setting \begin{equation*} f_{\str}^{(i)}:=\mathbf{E}(f_i|\cB_{k}^{(i)}),\quad f_{\sml}^{(i)}:=\mathbf{E}(f_i|\cB_{k+1}^{(i)})-\mathbf{E}(f_i|\cB_{k}^{(i)}),\quad f_{\unf}^{(i)}:=f_i-\mathbf{E}(f_i|\cB_{k+1}^{(i)}), \end{equation*} properties (\ref{itemNon})-(\ref{itemSum}) hold with $\cF_0$ in place of $\cF$. We also have (\ref{itemStr}) but with some $\btheta^{(i)}\in\T^{K}$ for each $i\in[r]$ in place of the desired $\btheta$. To establish (\ref{itemStr}) in the form given above, we set $\btheta:=(\btheta^{(1)},\ldots,\btheta^{(r)})\in\T^{Kr}$. Thus, for each $i\in[r]$, we can define a projection map $\pi_{i}:\T^{Kr}\to \T^{K}$ such that $\pi_{i}(\btheta)=\btheta^{(i)}$, whence $f_{\str}^{(i)}(x)=F_{i}\circ\pi_{i}(x\btheta)$ for all $x\in[N]$. Since each $F_{i}\circ\pi_{i}$ is $Kr$-Lipschitz, and since $\cF_0(K)=\cF(Kr)$, we may replace $K$ with $Kr$ to complete the proof.
\end{proof} To prove Theorem \ref{thm8.1}, we apply the arithmetic regularity lemma above to decompose the indicator functions $1_{\cA_i}$ of our dense sets $\cA_i\subseteq[N]$. As in \S\ref{sec5} and \S\ref{sec6}, where we focused our attention on a single weight function $\nu=\nu_b$, it is convenient for us to first study the consequences of applying the arithmetic regularity lemma to a single function $f$. In such instances, we omit the index $i$ and write $f=f_{\str}+f_{\sml}+f_{\unf}$ for the decomposition provided by Lemma~\ref{lem8.3}. One can think of these results as pertaining to $f=1_{\cA_i}$ for some $i\in[r]$, with the resulting conclusions being uniform in $i$. \bigskip Given finitely supported functions $f_{1},\ldots,f_{s},g_{1},\ldots,g_{t}:\Z\to\bC$, define the counting operator \begin{equation*} \Lambda_{D}(f_{1},\ldots,f_{s};g_{1},\ldots,g_{t}):=\sum_{L_1(\bn)=L_{2}(P_D(\bz)) }f_{1}(n_{1})\cdots f_{s}(n_{s})g_{1}(z_{1})\cdots g_{t}(z_{t}). \end{equation*} As with the counting operator $\Phi$, we make use of the abbreviations \[ \Lambda_D(f_1,\ldots, f_s;h) := \Lambda_D(f_1,\ldots,f_s;h,\ldots,h), \qquad \Lambda_D(f;h) := \Lambda_D(f,\ldots,f;h,\ldots,h), \] and, for finite $A, B \subset \bZ$: \[ \Lambda_D(f_1,\ldots,f_s;B) := \Lambda_D(f_1,\ldots,f_s;1_B), \qquad \Lambda_D(A;B) := \Lambda_D(1_A;1_B). \] By a change of variables, one can relate $\Lambda_D$ to the counting operator $\Phi$ defined by (\ref{eqn7.1}) which we studied in \S\ref{sec7}. In particular, one can adapt Lemma \ref{lem7.1} to $\Lambda_D$ as follows. \begin{lemma}[Fourier control]\label{lem8.4} Let $f_{1},\ldots,f_{s},g_{1},\ldots,g_{s}:\Z\to\bR$ be functions supported on $[N]$. Then for any $B\subseteq[Z]$, where $N=P_D(Z)$, we have \begin{equation*} |\Lambda_{D}(f_{1},\ldots,f_{s};B) -\Lambda_{D}(g_{1},\ldots,g_{s};B)| \ll \max_{1\leqslant i\leqslant s}(\| \hat f_{i}-\hat g_{i} \|_\infty/N)^{1/(2s+2t)} N^{s-1}Z^{t}.
\end{equation*} \end{lemma} \begin{proof} Define the function $h:\Z\to\bR$ by \begin{equation*} h(x):= \begin{cases} 1_{B}(z),\quad &\text{if there exists }z\in[Z]\text{ such that }x=P_{D}(z) \\ 0, &\text{otherwise}. \end{cases} \end{equation*} Now note that, for all finitely-supported $F_{1},\ldots,F_{s}:\Z\to\bR$, we have \begin{equation*} \Lambda_{D}(F_{1},\ldots,F_{s};B)=\Phi(F_{1},\ldots,F_{s};h). \end{equation*} Let $\mu_D$ be given by (\ref{eqn6.1}). Since $|h|\leqslant (N^{-1}Z)\mu_{D}$, we deduce from the telescoping identity and Lemma \ref{lem7.1}, as in \S\ref{sec7}, that \begin{align*} |\Phi(f_{1},\ldots,f_{s};(NZ^{-1})h) - \Phi(g_{1},\ldots,g_{s};(NZ^{-1})h)| \ll \max_{1\leqslant i\leqslant s}(\| \hat f_{i}-\hat g_{i} \|_\infty/N)^{1/(2s+2t)} N^{s+t-1}. \end{align*} Here we have used the trivial bound $\lVert \hat{f}_i\rVert_{\infty}\leqslant N$ for all $i\in[s]$. Multiplying both sides by $(N^{-1}Z)^t$ completes the proof. \end{proof} An immediate consequence of this result is that we can show that $\Lambda_D(f;B)$ is well-approximated by $\Lambda_D(f_\str +f_\sml;B)$. \begin{lemma}[Removing $f_{\unf}$]\label{lem8.5} Let $f:\Z\to[0,1]$ be supported on $[N]$. Let $\sig>0$, and let $\cF:\R_{\geqslant 0}\to\R_{\geqslant 0}$ be a monotone increasing function. Let $f_{\str},f_{\sml},f_{\unf}$ be the functions provided by applying Lemma \ref{lem8.3} to $f$. Then for any $B\subseteq[Z]$, we have \begin{equation*} \lvert\Lambda_{D}(f;B)-\Lambda_{D}(f_{\str}+f_{\sml};B)\rvert\ll_{P}N^{s-1}Z^{t}\cF(K)^{-1/(2s+2t)}. \end{equation*} \end{lemma} \begin{proof} This follows immediately from Lemmas \ref{lem8.3} and \ref{lem8.4} with $f_{i}=f$ and $g_{i}=f_{\str}+f_{\sml}$ for all $i\in[s]$. \end{proof} \subsection{Polynomial Bohr sets} Having removed $f_\unf$, it remains to obtain a lower bound for the quantity $\Lambda_D(f_\str +f_\sml;B)$, thereby producing a lower bound for $\Lambda_D(f;B)$.
As in typical applications of the arithmetic regularity lemma, this is accomplished by exploiting the `almost-periodicity' of the function $f_\str$. Explicitly, this is the observation that, as $F$ is a Lipschitz function, we have $f_\str(n+d)\approx f_\str(n)$ whenever $n,n+d\in[N]$ are such that $\lVert d\btheta\rVert$ is small. The set of such $d$ is known as a \emph{Bohr set}. Since we are interested in the case where $d=P_D(z)$ for some $z\in[Z]$, we therefore need to consider \emph{polynomial Bohr sets}, which are defined as follows. \begin{defn}[Bohr sets] Let $K\in\N$, $\rho>0$, and $\balpha\in\T^{K}$. Let $Q\in\Z[X]$ be an integer polynomial of positive degree. The (\emph{polynomial}) \emph{Bohr set} $\bohr_{Q}(\balpha,\rho)$ is the set \begin{equation*} \bohr_{Q}(\balpha,\rho):=\{n\in\N:\lVert Q(n)\balpha\rVert<\rho\}=\bigcap_{i=1}^{K}\{n\in\N: \lVert Q(n)\alpha_{i}\rVert<\rho\}. \end{equation*} \end{defn} Bohr sets are well-studied objects in additive combinatorics and analytic number theory \cite[\S4.4]{TV2006}. In the classical setting $Q(n) = n$, it is well known that the Bohr set has positive lower density. For our applications, we only need to ensure that \begin{equation*} |\bohr_{Q}(\balpha,\rho)\cap[Z]|\gg_{d,K,\rho} Z \end{equation*} for $Z$ large enough relative to $Q,K,\rho$, where $Q$ is an intersective polynomial of degree $d$. The crucial aspect of this bound which we emphasise is that the implicit constant does not depend on the frequency $\balpha$ nor on the coefficients of $Q$. Note that intersectivity is necessary even to ensure that the Bohr set is non-empty, for otherwise there is a local obstruction. \bigskip We start with the case $Q(0)=0$, which was investigated in \cite{Cha2022}. \begin{lemma}\label{lem8.7} Let $K\in\N$, $\rho>0$ and $\balpha\in\T^{K}$. Let $Q\in\Z[X]$ be a polynomial of degree $d\in\N$ such that $Q(0)=0$. 
Then there exists a positive real number $\Delta_{0}(\rho)=\Delta_{0}(d, K, \rho)$ and a positive integer $Z_{0}(d,K,\rho)$ such that if $Z\geqslant Z_{0}(d,K,\rho)$ then \begin{equation*} |\bohr_{Q}(\balpha,\rho)\cap[Z]|\geqslant \Delta_{0}(\rho)Z. \end{equation*} \end{lemma} \begin{proof} Write $Q(X)=\sum_{i=1}^{d}a_{i}X^{i}$ for some $a_{1},\ldots,a_{d}\in\Z$. We abuse notation and write $\bohr_{i}(\balpha,\rho)$ for $\bohr_{P}(\balpha,\rho)$ when $P(X)=X^{i}$. The triangle inequality implies that \begin{equation*} \bohr_{Q}(\balpha,\rho)\supseteq \bigcap_{i=1}^{d}\bohr_{i}(\bbeta^{(i)},\rho/d), \end{equation*} where $\bbeta^{(i)}:=a_{i}\balpha$. From $d$ applications of \cite[Corollary 6.9]{Cha2022}, we deduce that there exists a positive integer $M\ll_{d,K,\rho} 1$ such that $\{x,2x,\ldots,Mx\}\cap\bohr_{Q}(\balpha,\rho)\neq\emptyset$ holds for all $x\in\N$. Thus, we conclude from \cite[Lemma 4.2]{CLP2021} that $|\bohr_{Q}(\balpha,\rho)\cap[Z]|\geqslant Z/(2M^{2})$ holds for all $Z\geqslant M$. \end{proof} We now consider the general case where $Q$ is an arbitrary intersective polynomial. To deduce the required result from Lemma \ref{lem8.7}, we need to know that \[ \sup_{\balp \in \bT^K} \min_{z \in [Z]} \lVert Q(z)\balpha\rVert \to 0 \qquad (Z \to \infty). \] As previously mentioned, the significant feature is uniformity in $\balpha$. Such a result follows from the much stronger quantitative bound given in \cite[Theorem 1]{LR2015}. Using this, we now establish a lower bound for the density of an arbitrary intersective polynomial Bohr set. \begin{lemma}\label{lem8.8} Let $K\in\N$, $\rho>0$ and $\balpha\in\T^{K}$. Let $Q\in\Z[X]$ be an intersective polynomial of degree $d\in\N$. Then there exists a positive real number $\Delta_1(\rho)=\Delta_1(d,K;\rho)$ and a positive integer $Z_{1}(Q,K,\rho)$ such that if $Z\geqslant Z_{1}(Q,K,\rho)$ then \begin{equation*} |\bohr_{Q}(\balpha,\rho)\cap[Z]|\geqslant \Delta_1(\rho)Z. 
\end{equation*} \end{lemma} \begin{proof} If $Z$ is sufficiently large in terms of $(Q,K,\rho)$, then it follows from \cite[Theorem 1]{LR2015} that there exists $t\in\bohr_{Q}(\balpha,\rho/2)$ with $t<Z/2$. Let $P(X):=Q(X+t)-Q(t)$. Since $P(0)=0$, Lemma \ref{lem8.7} ensures that \begin{equation*} |\bohr_{P}(\balp,\rho/2)\cap[Z/2]|\gg_{d,K,\rho} Z. \end{equation*} By the triangle inequality, we now have \begin{equation*} \left\lbrace t+x: x\in (\bohr_{P}(\balpha,\rho/2)\cap[Z/2])\right\rbrace\subseteq \bohr_{Q}(\balpha,\rho)\cap[Z], \end{equation*} from which the desired bound follows. \end{proof} \subsection{Completing the proof of Theorem \ref{thm8.1}} Recall that the coefficients of $L_1$ are coprime. This implies that there exists $\bv\in\Z^{s}$ whose entries have size $O_{L_1}(1)$ such that $L_1(\bv)=1$. Thus, for any finitely supported $f_{1},\ldots,f_{s},g_{1},\ldots,g_{t}:\Z\to\bC$, we may write \begin{equation*} \Lambda_{D}(f_{1},\ldots,f_{s};g_{1},\ldots,g_{t})=\sum_{{\bz\in\Z^{t}}}g_{1}(z_{1})\cdots g_{t}(z_{t})\Psi_{\bz}(f_{1},\ldots,f_{s}), \end{equation*} where \[ \Psi_{\bz}(f_{1},\ldots,f_{s}):=\sum_{L_1(\bn)=0}\prod_{i=1}^{s}f_{i}(n_{i}+v_{i}L_2(P_D(\bz))). \] For brevity, we write $\Psi_{\bz}(f):=\Psi_{\bz}(f,\ldots,f)$. Following \cite[\S6.1]{Cha2022}, we proceed to study these auxiliary counting operators $\Psi_{\bz}$, with a view towards obtaining a lower bound for $\Lambda_{D}$ by summing over $\bz$ lying in a polynomial Bohr set. \begin{lemma}[Generalised von Neumann for $\Psi$]\label{lem8.9} Let $\bz\in\Z^{t}$, and let $\Psi_{\bz}$ be defined as above. If $f,g:[N]\to[0,1]$, then \begin{equation*} \lvert\Psi_{\bz}(f)-\Psi_{\bz}(g)\rvert\leqslant s N^{s-1}(\lVert f-g\rVert_{2}^{2}/N)^{1/2}. 
\end{equation*} \end{lemma} \begin{proof} For all $f_{1},\ldots,f_{s}:\bZ\to[-1,1]$ supported on $[N]$, we proceed to show that \begin{equation*} \lvert\Psi_{\bz}(f_{1},\ldots,f_{s})\rvert\leqslant (\lVert f_{j}\rVert_{2}^{2}/N)^{1/2}N^{s-1} \end{equation*} for all $j\in [s]$. Once this is established, the lemma then follows from the telescoping identity \begin{equation*} \Psi_{\bz}(f) - \Psi_{\bz}(g) = \sum_{i=1}^{s}\Psi_\bz(h_{1},\ldots,h_{i-1},h_{i}-g_{i},g_{i+1},\ldots,g_{s}), \end{equation*} where $h_{i}=f$ and $g_{i}=g$ for all $i\in[s]$. We demonstrate only the case $j=s$, as the other cases follow by symmetry. Given $\bn=(n_{1},\ldots,n_{s})\in\Z^{s}$, we write $L_1(\bn)=L_1(\widetilde{\bn},n_{s})$, where $\widetilde{\bn}=(n_{1},\ldots,n_{s-1})\in\Z^{s-1}$. Let $u=L_1(0,\ldots,0,v_s L_2(P_D(\bz)))\in\Z$. By the change of variables $n=n_{s}+v_s L_2(P_D(\bz))$, we have \begin{equation*} \Psi_{\bz}(f_{1},\ldots,f_{s}) = \sum_{n\in\Z}f_{s}(n)\sum_{\substack{\widetilde{\bn}\in\Z^{s-1}\\ L_1(\widetilde{\bn},n)=u}}\prod_{i=1}^{s-1}f_{i}(n_{i}+v_{i}L_2(P_D(\bz))). \end{equation*} Note that $f_s(n)$ vanishes if $n\notin[N]$. Hence, by applying Cauchy--Schwarz to the outer sum over $n$, we deduce that \begin{equation*} |\Psi_{\bz}(f_{1},\ldots,f_{s})|^{2} \leqslant \lVert f_{s}\rVert_{2}^{2}\sum_{n=1}^{N}\left(\sum_{\substack{\widetilde{\bn}\in\Z^{s-1}\\ L_1(\widetilde{\bn},n)=u}}\prod_{i=1}^{s-1}f_{i}(n_{i}+v_{i}L_2(P_D(\bz)))\right)^{2}. \end{equation*} Since $|f_{i}|\leqslant 1_{[N]}$ for all $i$, we deduce that the inner sum over $\widetilde{\bn}$ is bounded above by \begin{equation*} \# \{\widetilde{\bn}\in\Z^{s-1}:L_1(\widetilde{\bn},n)=u, \qquad (n_{i}+v_{i}L_2 (P_D (\bz)))\in[N] \quad (i\in[s-1]) \} \leqslant N^{s-2}. 
\end{equation*} Inserting this bound reveals that \begin{equation*} |\Psi_{\bz}(f_{1},\ldots,f_{s})|^{2} \leqslant \lVert f_{s}\rVert_{2}^{2}\sum_{n=1}^{N}N^{2(s-2)}=(\lVert f_{s}\rVert_{2}^{2}/N)N^{2(s-1)}, \end{equation*} and taking square roots completes the proof. \end{proof} Before we use this lemma to obtain a lower bound for $\Psi_{\bz}(f_{\str}+f_{\sml})$, we require two additional lemmas. Firstly, we require a functional version of the supersaturation result of Frankl, Graham, and R\"{o}dl \cite[Theorem 2]{FGR1988} for density regular linear equations. \begin{lemma}\label{lem8.10} Let $\delta>0$, and let $f:[N]\to[0,1]$. If $\lVert f\rVert_{1}\geqslant \delta N$, then \begin{equation}\label{eqn8.2} \sum_{L_1(\bn)=0}f(n_{1})\cdots f(n_s) \gg_{L_1,\delta} N^{s-1}. \end{equation} \end{lemma} \begin{proof} Let $\Omega=\{x\in[N]:f(x)\geqslant \delta/2\}$. The popularity principle \cite[Exercise 1.1.4]{TV2006} implies that $|\Omega|\geqslant (\delta/2)N$, and so \begin{equation*} \sum_{L_1(\bn)=0}\prod_{i=1}^{s}f(n_{i})\geqslant \sum_{L_1(\bn)=0}\prod_{i=1}^{s}\left((\delta/2)1_{\Omega}(n_{i})\right)=(\delta/2)^{s}|\{\bn\in\Omega^{s}: L_1(\bn)=0\}|. \end{equation*} Since $L_1(1,\ldots,1)=0$, the required bound now follows from \cite[Theorem 2]{FGR1988}. \end{proof} \begin{remark} Alternatively, one can prove Lemma \ref{lem8.10} without using \cite[Theorem 2]{FGR1988}. After applying the arithmetic regularity lemma (Lemma \ref{lem8.3}), one can then show that the sum (\ref{eqn8.2}) for $f_\str$ is $\gg_{L_1,\delta}N^{s-1}$ by restricting to a sum over $n_1,\ldots,n_s$ lying in a linear Bohr set (see \cite[\S6]{Cha2022} for further details). \end{remark} As mentioned previously, we intend to make use of the almost periodicity of $f_\str$ to obtain a lower bound for $\Psi_{\bz}(f_{\str}+f_{\sml})$ when $\bz$ lies in an intersective polynomial Bohr set.
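As an aside, the popularity principle invoked in the proof of Lemma \ref{lem8.10} is elementary: if $f:[N]\to[0,1]$ has $\sum_n f(n)\geqslant\delta N$, then $f(n)\geqslant\delta/2$ on at least $(\delta/2)N$ points, since $\sum_n f(n)\leqslant|\Omega|+(\delta/2)N$. The following Python sketch (an illustration only; all parameter choices are ours) checks this on random test functions.

```python
# Sanity check of the popularity principle: if f: [N] -> [0,1] has
# sum(f) >= delta*N, then f(n) >= delta/2 for at least (delta/2)*N values of n.
import random

random.seed(0)
N, delta = 1000, 0.3
for _ in range(50):
    f = [random.random() for _ in range(N)]
    if sum(f) < delta * N:
        continue  # the principle only applies to dense f
    popular = sum(1 for x in f if x >= delta / 2)
    assert popular >= (delta / 2) * N
```

The assertion never fails, because it is a theorem: randomness here only generates test cases.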
However, we have to be conscious of the fact that we are relying on the relation $f_{\str}(n)=F(n\btheta)$, which only holds for $n\in[N]$. To guarantee that quantities of the form $n_{i} + v_{i}L_{2}(P_{D}(z_{j}))$ lie in $[N]$, we restrict our variables according to $(\bn,\bz)\in[c(\eta)N,(1-c(\eta))N]^s\times [\eta Z]^t$, for some sufficiently small $\eta>0$ and some corresponding quantity $c(\eta)>0$ such that $c(\eta)\to 0^{+}$ as $\eta\to 0^{+}$. Moreover, since our final bound (\ref{eqn3.10}) does not depend on $D$, we need to ensure that the decay rate of $c(\eta)$ is independent of $D$. This is accomplished by the following simple lemma on polynomial growth. \begin{lemma}\label{lem8.12} Let $P$ be an intersective integer polynomial of degree $d\in\N$ satisfying (\ref{eqn3.5}). Then there exists $M_0(P) \in \bN$ such that the following is true. Let $\eta\in(0,1)$, let $D\in\N$, and define the auxiliary polynomial $P_D$ by (\ref{eqn3.9}). If $M\geqslant (M_0(P) + 1)/\eta$, then \[ P_D(\eta M)\leqslant (4\eta)^d P_D(M). \] \end{lemma} \begin{proof} Let $\ell_P$ denote the leading coefficient of $P$. Since (\ref{eqn3.5}) holds, we know that $\ell_P\geqslant 1$, and that there exists a positive integer $M_0(P)\geqslant 4$ such that \begin{equation*} \ell_P Y^d \leqslant 2P(Y) \leqslant 3\ell_P Y^d \end{equation*} holds for all real $Y\geqslant M_0(P) $. Since $-D<r_D\leqslant 0$, it follows that if $M\geqslant (M_0(P) + 1)/\eta$ then \begin{equation*} \frac{P_D(\eta M)}{P_D(M)} \leqslant \frac{3(r_D + D\eta M)^d}{(r_D + D M)^d} \leqslant \frac{3(D\eta M)^d}{(D M - D)^d} = 3\eta^d\left(1 + \frac{1}{M-1}\right)^d. \end{equation*} The asserted bound now follows upon noting that $M\geqslant M_0(P)\geqslant 4$. \end{proof} \begin{lemma}[Lower bound for $\Psi_{\bz}(f_{\str}+f_{\sml})$]\label{lem8.13} For all $\delta>0$, there exist positive constants $c_{1}(\delta)=c_{1}(L_1,L_2;\delta)>0$ and $\eta=\eta(d,L_1,L_2,\delta)>0$ such that the following is true.
Suppose $f:\Z\to[0,1]$ is supported on $[N]$ and satisfies $\lVert f\rVert_1\geqslant \delta N$. Given $\sig>0$ and a monotone increasing function $\cF:\R_{\geqslant 0}\to\R_{\geqslant 0}$, let $f_{\str}$, $f_{\sml}$, $K$ and $\btheta$ be as given by applying Lemma~\ref{lem8.3} to $f$. Let $\rho>0$ satisfy $K\rho\leqslant 1$, and let $\bz\in\bohr_{P_D}(\btheta,\rho)^{t}$. If $\bz\in[\eta Z]^{t}$, then \begin{equation*} \Psi_{\bz}(f_{\str}+f_{\sml}) \geqslant \left(c_{1}(\delta)- O_{L_1,L_2}(\sig + K\rho )\right)N^{s-1}. \end{equation*} \end{lemma} \begin{proof} Lemma \ref{lem8.9} informs us that \begin{equation*} \Psi_{\bz}(f_{\str}+f_{\sml}) = \Psi_{\bz}(f_{\str}) + O(\sig N^{s-1}). \end{equation*} It therefore only remains to estimate $\Psi_{\bz}(f_{\str})$. For each $\bn\in\Z^{s}$, define \begin{equation*} I_{\bz}(\bn):= \begin{cases} 1, \quad &\text{if }(n_{i}+v_{i}L_2(P_D(\bz)))\in[N]\text{ for all }i\in [s];\\ 0, &\text{otherwise}. \end{cases} \end{equation*} Since $\bz\in\bohr_{P_D}(\btheta,\rho)^{t}$, we deduce from property (\ref{itemStr}) of Lemma \ref{lem8.3} that if $\bn\in[N]^s$, then \begin{equation*} I_{\bz}(\bn)|f_{\str}(n_{i})-f_{\str}(n_{i}+v_{i}L_2(P_D(\bz)))|\ll_{\bv,L_2}K\rho\quad (1\leqslant i\leqslant s). \end{equation*} Thus, by using property (\ref{itemNon}) to bound $f_{\str}$ by $1$, we find that \begin{align*} \Psi_{\bz}(f_{\str}) & =\sum_{L_1(\bn)=0}I_{\bz}(\bn)\prod_{i=1}^{s}[f_{\str}(n_{i})+O_{\bv,L_2}(K\rho)]\\ &=\left(\sum_{L_1(\bn)=0}I_{\bz}(\bn)f_{\str}(n_{1})\cdots f_{\str}(n_{s})\right) + O_{\bv,L_2}(K\rho N^{s-1}) . \end{align*} In view of (\ref{eqn3.5}) and Lemma \ref{lem8.12}, we see that \begin{equation*} |L_2(P_{D}(\bz))|\ll_{L_2} P_{D}(\eta Z) \leqslant (4\eta)^{d}N \quad (\bz\in[\eta Z]^t). 
\end{equation*} It follows that there exists a constant $c=c(L_1,L_2,d)>0$ such that, for all $\bz\in[\eta Z]^{t}$, the function $I_{\bz}$ is non-zero on the set $\Omega^{s}$, where $\Omega:=\left(c\eta^d N,(1-c\eta^d)N\right]\cap \bZ$. We therefore find that \begin{equation*} \Psi_{\bz}(f_{\str}) \geqslant\left(\sum_{L_1(\bn)=0}g(n_{1})\cdots g(n_{s})\right) - O_{\bv,L_2}(K\rho N^{s-1}), \end{equation*} where $g(n):=1_{\Omega}(n)f_{\str}(n)$. Finally, we infer from property (\ref{itemSum}) of Lemma \ref{lem8.3} that $g(1)+\cdots +g(N)\geqslant (\delta-2c\eta^d) N$. Thus, by taking $\eta$ sufficiently small, we can apply Lemma \ref{lem8.10} to $g$ to obtain the required bound. \end{proof} Combining all of these results finally allows us to prove Theorem \ref{thm8.1}, thereby completing the proof of Theorem \ref{thm3.8}. \begin{proof}[Proof of Theorem \ref{thm8.1}] Fix $r\in\N$ and $\delta\in(0,1)$. Let $c_1 (\delta)$ and $\eta=\eta(\delta,P,L_1,L_2)$ be as given in Lemma \ref{lem8.13}. Notice that the conclusion of Lemma \ref{lem8.13} allows us to assume that $c_1 (\delta)<1$, which we do. Let $\sig =c_{1}(\delta)/M$, where $M=M(L_1,L_2)$ is some suitably large positive integer, and let $\Delta_1$ be a function given by Lemma \ref{lem8.8}. Let $\cF:\R_{\geqslant 0}\to\R_{\geqslant 0}$ be a monotone increasing function which satisfies \begin{equation}\label{eqn8.3} \cF(x)^{-1/(2s+2t)} \leqslant \tau c_{1}(\delta)\left(\eta r^{-1}\Delta_{1}(d,x;x^{-1}\sig)\right)^{t} \end{equation} for all $x\in\N$, where $\tau=\tau(P) > 0$ will be chosen shortly. Let $N,Z\in\N$ be as given in the statement of Theorem \ref{thm8.1}, and assume they are sufficiently large in terms of $(\delta,r,L_1,L_2,P)$. Suppose we have an $r$-colouring $[Z]=\cC_1 \cup\cdots\cup \cC_r$, and sets $\cA_1 ,\ldots,\cA_r\subseteq[N]$ satisfying $|\cA_i|\geqslant\delta N$ for each $i\in[r]$.
Applying Lemma \ref{lem8.3} to each of the functions $f_i:=1_{\cA_i}$ provides a decomposition $f_i=f_{\str}^{(i)}+f_{\sml}^{(i)}+f_{\unf}^{(i)}$, as well as associated parameters $K \leqslant K_{0}(r;\sig,\cF)$ and $\btheta\in\T^{K}$. Let $\rho>0$ be defined by the equality $K\rho=\sig$. By our choices of parameters in the previous paragraph, we can assume that $N$ and $Z$ are also sufficiently large relative to $(\sig,\cF,\rho,\eta)$. Now let $\cC_{i}'=\cC_i \cap [\eta Z]$ for all $i\in[r]$. Recall from Lemma \ref{lem3.7} that $P_D$ is an intersective polynomial of degree $d$. Thus, applying the pigeonhole principle and Lemma \ref{lem8.8} to the colouring $[\eta Z]=\cC_1' \cup\cdots\cup \cC_r '$ yields an index $k\in[r]$ such that \begin{equation}\label{eqn8.4} |\bohr_{P_D}(\btheta,\rho)\cap \cC_{k}'|\geqslant r^{-1}\Delta_1(d,K;\rho)\eta Z. \end{equation} It now remains to establish (\ref{eqn8.1}) for this choice of $k$. Let $B:=\cC'_{k} \cap \bohr_{P_{D}}(\btheta,\rho)$. For any $\bz\in B^{t}$, if $M$ is large enough, then Lemma~\ref{lem8.13} implies that \begin{equation*} 2\Psi_{\bz}(f_{\str}^{(i)}+f_{\sml}^{(i)})\geqslant c_{1}(\delta)N^{s-1} \quad (1\leqslant i\leqslant r). \end{equation*} Summing over $\bz$ yields \[ 2\Lam_D (f_{\str}^{(i)} + f_{\sml}^{(i)}; B) \ge c_1(\del) |B|^t N^{s-1}. \] Incorporating Lemma \ref{lem8.5} and (\ref{eqn8.4}) reveals that \begin{align*} 2\Lambda_{D}(\cA_i;B)&\ge \left(c_{1}(\delta)|B|^{t} - C\cF(K)^{-1/(2s+2t)}Z^{t}\right)N^{s-1} \\ &\geqslant \left(c_{1}(\delta)r^{-t} \Delta_1(d,K;K^{-1}\sig)^{t}\eta^{t} - C\cF(K)^{-1/(2s+2t)}\right)N^{s-1}Z^{t}, \end{align*} for all $i\in[r]$ and some constant $C=C(P)>1$. Setting $\tau^{-1}=2C$ in (\ref{eqn8.3}) now gives \begin{equation*} \Lambda_{D}(\cA_i;\cC_{k})\geqslant \Lambda_{D}(\cA_i;B)\gg_{\delta,r,P,L_1,L_2} N^{s-1}Z^{t} \qquad (1\leqslant i\leqslant r), \end{equation*} as required. \end{proof}
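To make the positive-density phenomenon of Lemmas \ref{lem8.7} and \ref{lem8.8} concrete, here is a small numerical experiment (not part of any proof; the sample choices $Q(n)=n^2$, $\alpha=\sqrt{2}$, $\rho=0.1$, $K=1$ are ours) estimating the density of a polynomial Bohr set in $[Z]$.

```python
# Numerical illustration of the positive density of polynomial Bohr sets:
# we estimate the density in [Z] of {n : ||Q(n) alpha|| < rho} for the
# sample choices Q(n) = n^2, alpha = sqrt(2), rho = 0.1, K = 1.
Z, rho = 20000, 0.1
alpha = 2 ** 0.5

def dist_to_nearest_int(x):
    # ||x||: distance from x to the nearest integer
    frac = x % 1.0
    return min(frac, 1.0 - frac)

bohr = [n for n in range(1, Z + 1) if dist_to_nearest_int(n * n * alpha) < rho]
density = len(bohr) / Z

# By Weyl equidistribution, n^2 * sqrt(2) is equidistributed modulo 1,
# so the observed density should be roughly 2 * rho.
assert density > 0.05
```

Of course, the content of Lemmas \ref{lem8.7} and \ref{lem8.8} is precisely that such a density bound holds uniformly in the frequency and in the coefficients of the intersective polynomial, which no finite experiment can certify.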
\section{Introduction} Let $p$ be a prime integer, and $q$ a power of $p$. We work over an algebraically closed field $\Bbbk$ of characteristic $p$. We consider a plane curve $C$ of degree $q^2+q+1$ defined by a homogeneous polynomial of the form \begin{equation} F=\sum_{i,j,k} a_{ijk} x_i x_j ^q x_k ^{q^2},\label{1} \end{equation} where $a_{ijk}$ are coefficients in $\Bbbk$, and $[x_0 : x_1 : x_2 ]$ is a homogeneous coordinate system in $\mathbb{P} ^2 $. If $a_{ijk} $ are general, then the plane curve $C$ is smooth. The condition that the defining polynomial of $C$ is of the form (\ref{1}) is independent of the choice of homogeneous coordinates of $\mathbb{P}^2$ (see Proposition 2.1). Let $C^{\vee } $ be the dual curve of the plane curve $C$. The Gauss map \begin{equation} \Gamma : C\to C^{\vee } ; [x_0 : x_1 : x_2] \mapsto \left[\frac{\partial F}{\partial x_0} : \frac{\partial F}{\partial x_1} : \frac{\partial F}{\partial x_2}\right] \label{8} \end{equation} is an inseparable morphism. For every $i$, the partial derivative of $F$ with respect to $x_i$ is \begin{equation} \frac{\partial F}{\partial x_i} = \sum_{j, k} a_{ijk} x_j ^q x_k ^{q^2} = \left( \sum_{j,k} \alpha _{ijk} x_j x_{k} ^q \right) ^q,\label{5} \end{equation} where $\alpha _{ijk} = a_{ijk} ^{1/q} $. Thus, if $a_{ijk}$ are general, then the inseparable degree of the Gauss map is $q$. The purpose of this paper is to study singularities of the dual curve $C^{\vee}$ of a plane curve $C$ defined by a polynomial of the form (\ref{1}). We define $\mathscr{C}$ to be the set of all the projective plane curves defined by homogeneous polynomials of the form (\ref{1}). Note that $\mathscr{C}$ is identified with $\mathbb{P}^{26}$. Note that all tangent lines of the curve $C\in \mathscr{C}$ intersect $C$ with multiplicity at least $q$ at the tangent point. In our case, a double tangent and a flex are defined as follows: \begin{definition} Let $m$ be an integer at least 2.
We define an \emph{$m$-ple tangent} to be a tangent line of $C$ which has $m$ distinct tangent points with multiplicity $q$, and a \emph{flex} to be a point at which the tangent line intersects $C$ with multiplicity $q+1$. A 2-ple tangent is called a double tangent. \end{definition} \begin{theorem} Suppose that $C$ is a general member of $\mathscr{C}$. Then \begin{enumerate}[\normalfont (i)] \item the degree of the dual curve $C^{\vee}$ is $(q^2+q+1)(q+1)$, \item the dual curve $C^{\vee}$ has only ordinary nodes as its singularities, \item the number of ordinary nodes of $C^{\vee }$, i.e. the number of double tangent lines of $C$, is \[ \frac{q(q^2+q+1)(q^3+3q^2+3q-1)}{2}, \] and \item the number of flexes of $C$ is \[ (q^3+2q^2-q+1)(q^2+q+1). \] \end{enumerate} \end{theorem} We compare our theorem with the classical situation. Let $\tilde{C}$ be a general \textit{complex} plane curve of degree $d$. Then the degree of the dual curve $\tilde{C} ^{\vee } $ is $d(d-1)$. Moreover, each flex of $\tilde{C}$ corresponds to a cusp of $\tilde{C} ^{\vee}$, whereas each flex of $C\in \mathscr{C}$ corresponds to a smooth point of $C^{\vee}$. The singularities of $\tilde{C} ^{\vee}$ consist of $\frac{1}{2}d(d-2)(d-3)(d+3)$ ordinary nodes and $3d(d-2)$ cusps. As a special case, we consider the singularities of the dual curve of the Fermat curve $C_0\in \mathscr{C}$ of degree $q^2+q+1$. We will show that the dual curve $C_0 ^{\vee}$ is related to the Ballico-Hefez curve. Let $\gamma _d : \mathbb{P} ^2 \to \mathbb{P} ^2 $ be the morphism defined by $\gamma _d ([x_0 : x_1 : x_2]) = [x_0 ^d : x_1 ^d : x_2 ^d ]$, and let $l_0$ be the line $x_0+x_1+x_2=0 $ in $\mathbb{P} ^2 $. \begin{definition} The \emph{Ballico-Hefez curve} is the image of the line $l_0$ under the morphism $\gamma _{q+1} $. \end{definition} In \cite{MR3323512}, Hoang and Shimada define the Ballico-Hefez curve to be the image of the morphism $\mathbb{P} ^1 \to \mathbb{P} ^2 $ defined by \[ [s : t] \mapsto [s^{q+1} : t^{q+1} : st^q + s^q t].
\] Note, however, that the image of this morphism is projectively isomorphic to the image of the line $l_0$ under the morphism $\gamma _{q+1} $. \begin{theorem} Let $B$ be the Ballico-Hefez curve. Let $\gamma _{q^2+q+1} : \mathbb{P} ^2 \to \mathbb{P} ^2 $ be the morphism defined above. If $C_0 \in \mathscr{C}$ is the Fermat curve of degree $q^2+q+1$, then \begin{enumerate}[\normalfont (i)] \item the dual curve $C_0^{\vee }$ is $\gamma _{q^2+q+1} ^{-1} (B) $, and \item the singularities of $C^{\vee}_0$ consist of $(q^2+q+1)^2(q^2-q)/2$ ordinary nodes, and $3(q^2+q+1)$ singular points with Milnor number $q^2(q+1)$. \end{enumerate} \end{theorem} The author is grateful to Professor Ichiro Shimada for helpful comments. Part of this work was done during the author's stay in Vietnam. He is also grateful to Professor Pho Duc Tai at Vietnam National University of Science for many helpful suggestions. Moreover, the author is grateful to the referee for pointing out the author's mistakes and for helpful comments. \section{Preliminaries} From now on, let $\Bbbk$ be an algebraically closed field of characteristic $p>0$. \begin{proposition} Let $C$ be a plane curve. The defining polynomial of $C$ being of the form (\ref{1}) is a property independent of the choice of homogeneous coordinates. \end{proposition} \begin{proof} Under the coordinate change \[ x_i = \sum_{l=0}^{2} t_{il} y_l \ \ (t_{il} \in \Bbbk), \] a homogeneous polynomial $F$ of the form (\ref{1}) is transformed into \begin{equation*} \begin{split} F&=\sum_{i, j, k} a_{ijk} \left( \sum_{l=0}^{2} t_{il} y_l \right) \left( \sum_{m=0}^{2} t_{jm} y_m \right)^q \left( \sum_{n=0}^{2} t_{kn} y_n \right)^{q^2} \\ &=\sum_{i,j,k} \sum_l \sum_m \sum_n a_{ijk} t_{il} t_{jm} ^q t_{kn} ^{q^2} y_l y_m ^q y_n ^{q^2} \\ &=\sum_{l, m, n} b_{lmn} y_l y_m ^q y_n ^{q^2}, \end{split} \end{equation*} where $b_{lmn} =\displaystyle \sum_{i, j, k} a_{ijk} t_{il} t_{jm}^q t_{kn} ^{q^2}$.
\end{proof} \begin{lemma} If the $a_{ijk} $ are general, then the plane curve $C$ is smooth. \end{lemma} \begin{proof} The Fermat curve of degree $q^2+q+1$ is smooth. Being smooth is an open condition. \end{proof} \section{Proof of the first half of Theorem 1} We define the \emph{reduced Gauss map} $\Gamma _{\mathrm{red}} : C \to (\mathbb{P} ^2)^{\vee }$ of $C\in \mathscr{C} $ by \begin{equation*} \begin{split} &\Gamma _{\mathrm{red}} ([x_0 : x_1 : x_2]) \\ &= \left[\left(\frac{\partial F}{\partial x_0}(x_0, x_1, x_2) \right)^{1/q} : \left(\frac{\partial F}{\partial x_1}(x_0, x_1, x_2) \right)^{1/q} : \left(\frac{\partial F}{\partial x_2}(x_0, x_1, x_2) \right)^{1/q} \right]. \end{split} \end{equation*} \begin{claim} The reduced Gauss map $ C \to C^{\vee } $ is a morphism of separable degree $1$. \end{claim} \begin{proof} We will prove that the degree of the dual curve of the Fermat curve of degree $q^2+q+1$ is $d(d-1)/q$ (see Section 5), and hence the reduced Gauss map of the Fermat curve has separable degree 1. Thus the reduced Gauss map $ C \to C^{\vee } $ also has separable degree 1. \end{proof} We denote the degree of a curve $C\in\mathscr{C}$ by $d = q^2+q+1$. If $C\in \mathscr{C}$ is general, then the Gauss map $\Gamma$ is an inseparable morphism of inseparable degree $q$ by (\ref{5}). Thus the degree of $C^{\vee}$ is \[ \frac{d(d-1)}{q} = \frac{(q^2+q+1)(q^2+q)}{q} = (q^2+q+1)(q+1). \] In order to prove (ii) of Theorem 1, we first prove the following: \begin{claim} If $C\in \mathscr{C}$ is general, then the curve $C$ has no $m$-ple tangent line for $m\geq 3$. \end{claim} \begin{proof} We define a variety $\mathscr{X} _1$ by \begin{eqnarray*} \mathscr{X} _1=\left\{ (Q_0, Q_1, Q_2, l)\in \mathbb{P} ^2 \times \mathbb{P} ^2 \times \mathbb{P} ^2 \times (\mathbb{P} ^2 )^{\vee } \middle| \begin{array}{ll} Q_0 \in l,\ Q_1 \in l,\ Q_2 \in l\\ \mathrm{and}\ Q_i\neq Q_j\ \mathrm{for}\ i\neq j \end{array} \right\} .
\end{eqnarray*} Then the action of $\mathrm{PGL}_3(\Bbbk)$ on $\mathscr{X}_1$ is transitive. Let $(P_0, P_1, P_2, l_0)$ be a point of $\mathscr{X} _1$ and let $[x_0 : x_1 : x_2] $ be a homogeneous coordinate system such that \[ P_0 = [0 : 0 : 1],\ P_1=[0 : 1 : 0],\ P_2=[0 : 1 : 1]\ \mathrm{and}\ l_0 = \{ x_0 = 0 \}. \] Let $C$ be a plane curve in $\mathscr{C}$. We define an algebraic subset $\mathscr{D} _1$ of $\mathscr{C}$ by \[ \mathscr{D}_1 = \left\{ Y\in \mathscr{C}\ \middle| \begin{array}{ll} P_0, P_1\ \mathrm{and}\ P_2\ \mathrm{are\ smooth\ points\ of}\ Y,\\ \mathrm{and}\ T_{P_0} Y = T_{P_1} Y = T_{P_2} Y = l_0 \end{array} \right\}. \] Then $C\in \mathscr{C}$ is in $\mathscr{D} _1$ if and only if \begin{gather*} a_{222} = 0,\ a_{122} = 0,\ a_{111} = 0,\ a_{211} = 0,\ a_{212} + a_{221} = 0,\ a_{112} + a_{121} = 0,\\ a_{022} \neq 0,\ a_{011} \neq 0\ \mathrm{and}\ a_{011} + a_{012} + a_{021} + a_{022} \neq 0. \end{gather*} Therefore $\mathscr{D} _1$ is of codimension 6 in $\mathscr{C} $. Since $\dim \mathscr{X} _1 = 5$, we have \[ \dim\mathscr{X} _1 + \dim\mathscr{D} _1 < \dim\mathscr{C}. \] Thus if the curve $C$ is general in $\mathscr{C}$, then $C$ does not have any $m$-ple tangent line for $m\geq 3$. \end{proof} Next we prove the following: \begin{claim} If $C \in \mathscr{C}$ is general, then $\Gamma _{\mathrm{red}} $ is an immersion at every point of $C$. \end{claim} \begin{proof} Let $P_0$ be the point $[0 : 0 : 1]$, and let $l_0$ be the line $\{ x_0 = 0\}$. By a linear change of coordinates, we can assume that $P_0 \in C$ and $T_{P_0} C = l_0$. Let $(x, y)$ be affine coordinates such that $[x_0 : x_1 : x_2] = [x : y : 1]$. Then, up to a multiplicative constant, the polynomial $F$ can be written as \begin{equation*} \begin{split} F(x, y, 1) = f(x, y) =&\ x+a_{202} x^q + a_{212} y^q + a_{002} x^{q+1} + a_{102} x^q y + a_{012} x y^q \\ &+ a_{112} y^{q+1} + (\mathrm{terms\ of\ degree} \geq q^2).
\end{split} \end{equation*} Then we have a local parametrization $x = \phi (t),\ y=t$ of $C$ at $P_0$ such that the power series $\phi (t) $ is written as \[ \phi (t) = -a_{212} t^q -a_{112} t^{q+1} +a_{012} a_{212} t^{2q} + \cdots. \] We consider the Gauss map given by (\ref{8}). Let $(\eta , \zeta )$ be the affine coordinates of $(\mathbb{P} ^2)^{\vee}$ with the origin $l_0 \in (\mathbb{P} ^2) ^{\vee}$ such that the point $(\eta , \zeta )$ corresponds to the line $x+ \eta y + \zeta = 0$. Then the tangent line of $C$ at $P_t = [\phi (t) : t : 1]$ is \[ \frac{\partial f}{\partial x} (P_t) x + \frac{\partial f}{\partial y} (P_t) y - \frac{\partial f}{\partial x} (P_t) \phi (t) -\frac{\partial f}{\partial y} (P_t) t = 0. \] Therefore the Gauss map locally around $P_0$ is written as \begin{equation*} \begin{split} \Gamma (P_t) &= \left(\frac{f_y (P_t) }{f_x (P_t) } , -\frac{f_y (P_t) }{f_x (P_t) } t - \phi (t) \right) \\ &=\left( -\frac{d\phi}{dt} (t) , t\frac{d\phi}{dt}(t) -\phi (t) \right). \end{split} \end{equation*} Since \[ -\frac{d\phi}{dt}(t) = a_{112} t ^q + (\mathrm{terms\ of\ degree}>q) \] and \[ t\frac{d\phi}{dt} (t) -\phi (t) = a_{212} t^q + (\mathrm{terms\ of\ degree}>q), \] the reduced Gauss map $\Gamma _{\mathrm{red}} $ locally around $P_0$ is \begin{equation} t \mapsto (\alpha _{112} t + (\mathrm{terms\ of\ degree}>1) , \alpha _{212} t + (\mathrm{terms\ of\ degree}>1) ) \label{3}, \end{equation} where $\alpha _{ijk} = a_{ijk} ^{1/q}$. The reduced Gauss map $\Gamma _{\mathrm{red}} $ fails to be an immersion at the point $P_0$ if and only if $\alpha _{112} = \alpha _{212} = 0$. Since the codimension of the subset \[ \{ C \in \mathscr{C} \ |\ \alpha _{112} = \alpha _{212} =0 \} \] is 2 in $\mathscr{C}$, the reduced Gauss map $\Gamma _{\mathrm{red}} $ is an immersion at every point of a general member $C$ of $\mathscr{C}$. \end{proof} Suppose that $C\in\mathscr{C}$ is general.
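The characteristic-$p$ computation behind the expansion of the Gauss map above can be sanity-checked numerically. The following sketch (an assumption-laden illustration, not part of the proof: it takes $p=q=3$, truncates $\phi$ after its first three terms, and picks arbitrary nonzero values for $a_{212}$, $a_{112}$, $a_{012}$) confirms that both coordinates $-\phi'(t)$ and $t\phi'(t)-\phi(t)$ vanish to order exactly $q$, with leading coefficients $a_{112}$ and $a_{212}$ modulo $p$:

```python
# Sanity check of the local expansion of the Gauss map in characteristic p.
# With phi(t) = -a212*t^q - a112*t^(q+1) + a012*a212*t^(2q) (truncated),
# terms of degree divisible by p drop out of the derivative, so
# -phi'(t) and t*phi'(t) - phi(t) both start with a nonzero term of degree q.
# The values of a212, a112, a012 below are arbitrary nonzero choices.

p = q = 3
a212, a112, a012 = 1, 2, 1

# a polynomial is stored as {degree: coefficient mod p}
phi = {q: (-a212) % p, q + 1: (-a112) % p, 2 * q: (a012 * a212) % p}

def deriv(poly):
    # formal derivative, dropping coefficients that vanish mod p
    return {d - 1: (d * c) % p for d, c in poly.items() if (d * c) % p}

def sub(f, g):
    # difference of two polynomials mod p
    out = dict(f)
    for d, c in g.items():
        out[d] = (out.get(d, 0) - c) % p
    return {d: c for d, c in out.items() if c}

dphi = deriv(phi)
neg_dphi = {d: (-c) % p for d, c in dphi.items()}      # -phi'(t)
t_dphi = {d + 1: c for d, c in dphi.items()}           # t * phi'(t)
second = sub(t_dphi, phi)                              # t*phi'(t) - phi(t)

# both coordinates of the Gauss map vanish to order exactly q at P_0
assert min(neg_dphi) == q and neg_dphi[q] == a112 % p
assert min(second) == q and second[q] == a212 % p
print("lowest-degree coefficients:", neg_dphi[q], second[q])
```

The assertions reflect exactly the two displayed expansions: the $t^q$ term of $-\phi'$ comes from $-a_{112}t^{q+1}$ (since $(q+1)\equiv 1 \bmod p$), while the $t^q$ terms of degree divisible by $p$ disappear on differentiation.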
We prove that the only singular points of the dual curve $C^{\vee } $ are ordinary nodes. Let $P_0 = [0 : 0 : 1]$ and $P_1 = [0 : 1 : 0]$ be as above, and let $l_0$ be the line $\{ x_0=0\}$. Suppose that $P_0$ and $P_1$ are smooth points of $C$ and $T_{P_0} C =T_{P_1} C = l_0$. Let $(x', y')$ be affine coordinates such that $[x_0 : x_1 : x_2] = [x' : 1 : y']$. Similarly to the proof of the previous claim, up to a multiplicative constant, the polynomial $F$ can be written as \begin{equation*} \begin{split} F(x', 1, y') = g(x', y') =&\ x'+a_{101} x'^q + a_{121} y'^q + a_{001} x'^{q+1} + a_{201} x'^q y' + a_{021} x' y'^q \\ &+ a_{221} y'^{q+1} + (\mathrm{terms\ of\ degree} \geq q^2 ). \end{split} \end{equation*} Then we have a local parametrization $x' = \psi (t),\ y'=t$ of $C$ at $P_1$ such that the power series $\psi (t) $ is written as \[ \psi (t) = -a_{121} t^q -a_{221} t^{q+1} +a_{021} a_{121} t^{2q} + \cdots. \] Let $(\eta, \zeta )$ be the affine coordinates of $(\mathbb{P} ^2)^{\vee}$ with the origin $l_0 \in (\mathbb{P} ^2) ^{\vee}$ such that the point $(\eta , \zeta )$ corresponds to the line $x'+ \eta y' + \zeta = 0$. The tangent line of $C$ at $P'_t = [\psi (t) : 1 : t] $ is \[ \frac{\partial g}{\partial x'} (P'_t) x' + \frac{\partial g}{\partial y'} (P'_t) y' - \frac{\partial g}{\partial x'} (P'_t) \psi (t) -\frac{\partial g}{\partial y'} (P'_t) t = 0. \] Therefore the Gauss map $\Gamma $ locally around $P_1$ is written as \begin{equation*} \begin{split} \Gamma (P_t') &= \left(\frac{g_{y'} (P'_t) }{g_{x'} (P'_t) } , -\frac{g_{y'} (P'_t) }{g_{x'} (P'_t) } t - \psi (t) \right) \\ &=\left( -\frac{d\psi}{dt}(t) , t\frac{d\psi}{dt} (t) -\psi (t) \right).
\end{split} \end{equation*} Since \[ -\frac{d\psi}{dt} (t) = a_{221} t ^q + (\mathrm{terms\ of\ degree}>q) \] and \[ t\frac{d\psi}{dt} (t) -\psi (t) = a_{121} t^q + (\mathrm{terms\ of\ degree}>q), \] the reduced Gauss map locally around $P_1$ is \begin{equation} t \mapsto (\alpha _{221} t + (\mathrm{terms\ of\ degree}>1) ,\ \alpha _{121} t + (\mathrm{terms\ of\ degree}>1) ). \label{4} \end{equation} We define a variety $\mathscr{X}_2$ by \[ \mathscr{X}_2 = \{ (Q_0, Q_1, l) \in \mathbb{P}^2 \times \mathbb{P}^2 \times (\mathbb{P}^2)^{\vee} \ |\ Q_0 \in l, Q_1 \in l\ \mathrm{and}\ Q_0 \neq Q_1\}. \] Then the action of $\mathrm{PGL}_3(\Bbbk)$ on $\mathscr{X}_2$ is transitive and $\dim\mathscr{X}_2 = 4$. Let $(P_0, P_1, l_0)$ be the point of $\mathscr{X}_2$ such that $P_0 = [0 : 0 : 1]$, $P_1 = [0 : 1 : 0]$ and $l_0 = \{x_0=0\}$. We define a subset $ \mathscr{D}_2$ of $\mathscr{C}$ by \[ \mathscr{D} _2 = \{ Y\in \mathscr{C}\ |\ P_0\ \mathrm{and}\ P_1\ \mathrm{are\ smooth\ points\ of}\ Y,\ \mathrm{and}\ T_{P_0} Y = T_{P_1} Y = l_0\}. \] Then $C\in \mathscr{D}_2$ if and only if \[ a_{222} = 0,\ a_{122} = 0,\ a_{111} = 0,\ a_{211} = 0,\ a_{022} \neq 0,\ a_{011} \neq 0. \] Thus the codimension of $\mathscr{D}_2$ is 4. For $C\in \mathscr{D}_2$, by (\ref{3}) and (\ref{4}), the singularity of $C^{\vee}$ at the point $l_0$ is not an ordinary node if and only if \[ \begin{vmatrix} \alpha_{112} & \alpha_{212} \\ \alpha_{221} & \alpha_{121} \\ \end{vmatrix} =0. \] We define a subset $\mathscr{D}_2'$ of $\mathscr{C}$ by \[ \mathscr{D}_2' = \left\{ Y\in \mathscr{C}\ \middle| \begin{array}{ll} P_0\ \mathrm{and}\ P_1\ \mathrm{are\ smooth\ points\ of}\ Y,\\ T_{P_0} Y = T_{P_1} Y = l_0,\ \mathrm{and}\ Y^{\vee}\ \mathrm{does\ not\ have\ an\ ordinary\ node\ at}\ l_0 \end{array} \right\}. \] Since the codimension of $\mathscr{D}_2'$ is 5, \[ \dim\mathscr{D}_2' + \dim\mathscr{X}_2 < \dim\mathscr{C}.
\] Therefore, since the $a_{ijk} $ are general, the dual curve $C^{\vee }$ has only ordinary nodes as its singularities. \section{Proof of the second half of Theorem 1} \subsection{Number of the ordinary nodes of $C^{\vee}$} Let $g$ and $g^{\vee }$ be the genera of a general curve $C\in \mathscr{C}$ and its dual curve $C^{\vee}$, respectively. Let $\delta$ be the number of the ordinary nodes of $C^{\vee}$. Then \[ g=\frac{(d-1)(d-2)}{2} = \frac{\{ (q^2+q+1)-1\}\{ (q^2+q+1)-2\} }{2} \] and \begin{equation*} \begin{split} g^{\vee} =& \frac{(d^{\vee}-1)(d^{\vee}-2)}{2} - \delta \\ =& \frac{\{(q^2+q+1)(q+1)-1\} \{(q^2+q+1)(q+1)-2\}}{2} - \delta , \end{split} \end{equation*} where $d$ and $d^{\vee}$ are the degrees of $C$ and $C^{\vee}$, respectively, because, by the previous section, $C^{\vee}$ has only ordinary nodes. By Section 3, the reduced Gauss map $\Gamma _{\mathrm{red}} $ is birational onto its image. Thus $g = g^{\vee}$ and hence we have \begin{equation*} \begin{split} \delta =& \frac{\{(q^2+q+1)(q+1)-1\} \{(q^2+q+1)(q+1)-2\}}{2} \\ &- \frac{\{ (q^2+q+1)-1\}\{ (q^2+q+1)-2\} }{2} \\ =& \frac{q(q^2+q+1)(q^3+3q^2+3q-1)}{2}. \end{split} \end{equation*} \subsection{Number of the flexes} We denote by $\mathrm{mult}_{P} (D_1, D_2)$ the intersection multiplicity of projective plane curves $D_1$ and $D_2$ at a point $P \in D_1 \cap D_2$. \begin{lemma}\label{y} Suppose that $C$ is a general member of $\mathscr{C}$. If the intersection multiplicity $\mathrm{mult}_u (T_u C, C)$ at a point $u\in C$ is more than $q$, then it is equal to $q+1$, and no other intersection point of $T_u C$ and $C$ is a tangent point. \end{lemma} \begin{proof} We use the same notation as in Section 3. We define a variety $\mathscr{X}_0$ by \[ \mathscr{X}_0 = \{ (Q, l) \in \mathbb{P} ^2 \times (\mathbb{P} ^2 ) ^{\vee }\ |\ Q\in l \} .
\] Then the action of $\mathrm{PGL}_3(\Bbbk)$ on $\mathscr{X}_0$ is transitive and $\mathrm{dim} \mathscr{X}_0 = 3$. We recall that $[x_0 : x_1 : x_2 ]$ is a homogeneous coordinate system, $P_0 = [ 0 : 0 : 1]$, $P_1 = [ 0 : 1 : 0 ]$ and $l_0 = \{ x_0 = 0\} $. We define two subsets $\mathscr{D}_0 $ and $\widetilde{\mathscr{D} } _0$ of $\mathscr{C}$ by \[ \mathscr{D} _0 = \left\{ Y\in \mathscr{C} \middle|\ \begin{array}{ll} P_0\ \mathrm{is\ a\ smooth\ point\ of}\ Y,\ T_{P_0} Y = l_0\\ \mathrm{and}\ \mathrm{mult}_{P_0} (T_{P_0} Y, Y) = q+1 \end{array} \right\} \] and \[ \widetilde{\mathscr{D} } _0 = \left\{ Y\in \mathscr{C} \middle|\ \begin{array}{ll} P_0\ \mathrm{is\ a\ smooth\ point\ of}\ Y,\ T_{P_0} Y = l_0\\ \mathrm{and}\ \mathrm{mult}_{P_0} (T_{P_0} Y, Y) > q+1 \end{array} \right\} . \] Then the curve $C \in \mathscr{D}_0$ if and only if \[ a_{222} = 0,\ a_{122} = 0,\ a_{212} = 0,\ a_{112} \neq 0\ \mathrm{and}\ a_{022}\neq 0, \] and $C\in \widetilde{\mathscr{D} } _0$ if and only if \[ a_{222} = 0,\ a_{122} = 0,\ a_{212} = 0,\ a_{112} = 0\ \mathrm{and}\ a_{022}\neq 0. \] Therefore the codimension of $\mathscr{D} _0$ is 3 and that of $\widetilde{\mathscr{D} } _0$ is more than 3 in $\mathscr{C}$. Thus we have \[ \mathrm{dim} \mathscr{X}_0 + \mathrm{dim} \widetilde{\mathscr{D} } _0 < \mathrm{dim} \mathscr{C}. \] This proves the first half of the lemma. We define a subset $\widetilde{\mathscr{D} } _2$ of $\mathscr{C}$ by \[ \widetilde{\mathscr{D} } _2 = \left\{ Y\in \mathscr{C} \middle|\ \begin{array}{ll} P_0\ \mathrm{and}\ P_1\ \mathrm{are\ smooth\ points\ of}\ Y,\ T_{P_0} Y = l_0,\\ T_{P_1} Y= l_0\ \mathrm{and}\ \mathrm{mult}_{P_0} (T_{P_0} Y, Y) = q+1 \end{array} \right\} . \] Then the curve $C\in \widetilde{\mathscr{D} } _2$ if and only if \[ a_{222} = 0,\ a_{122} = 0,\ a_{111} = 0,\ a_{211} = 0,\ a_{212} = 0,\ a_{112} \neq 0,\ a_{022} \neq 0,\ a_{011} \neq 0.
\] Therefore the codimension of $\widetilde{\mathscr{D} } _2$ is 5, and we recall that $\mathrm{dim} \mathscr{X}_2 = 4$. Thus, since we have \[ \mathrm{dim} \mathscr{X}_2 + \mathrm{dim} \widetilde{\mathscr{D} } _2 < \mathrm{dim} \mathscr{C}, \] the second half of the lemma is proved. \end{proof} Let $g$ be the genus of a general curve $C\in \mathscr{C} $. We use the notions and notation concerning correspondences of curves introduced in \cite[Chap. 2, Section 5]{MR1288523}. Let $T : C \to C$ be the correspondence defined by $T(u) = T_{u} C.C-qu $, and let $D\subset C\times C$ be its curve of correspondence, i.e. $D =\overline{ \{ (u,v)\ |\ u\neq v,\ v \in T_u C \} }$. Then the degree of $T$ is \[ \deg T = (q^2+q+1)-q = q^2+1. \] Let $\pi _2 : C \times C \to C$ be the projection onto the second factor. In order to find the degree of $T^{-1}$, we have to calculate the number of tangent lines to $C$ (counted with the intersection multiplicities of $D$ and $\pi_2 ^{-1} (v)$) other than $T_v C$ passing through a general point $v\in C$. We consider the projection $\pi _v : C\to \mathbb{P}^1$ with center $v\in C$ onto a line. Let $\Omega _{C / \mathbb{P}^1 }$ be the sheaf of relative differentials of $C$ over $\mathbb{P} ^1$. By the Hurwitz formula \cite[Chap. I\hspace{-.1em}V, Corollary 2.4]{MR0463157}, \[ 2g-2 = -2(q^2+q) + \deg R , \] where the divisor $R$ is the ramification divisor of $\pi _v$, i.e. $R = \sum_{u\in C} \mathrm{length} (\Omega _{C / \mathbb{P}^1 })_u u$. Hence \[ \deg R = q^4+2q^3+2q^2+q-2. \] Moreover, the length of $(\Omega _{C / \mathbb{P}^1 })_v$ is $q-2$. Hence, we have \begin{equation*} \begin{split} \deg T^{-1} &= (q^4+2q^3+2q^2+q-2)-(q-2) \\ &= q^4+2q^3+2q^2. \end{split} \end{equation*} \begin{lemma} Let $\pi _1,\ \pi _2 : C\times C\to C$ be the projections onto the first and second factors, respectively.
The divisor $D$ on $C\times C$ is algebraically equivalent to \[ (q^4+2q^3+2q^2+q)E_u +(q^2+q+1)F_v -q\Delta, \] where $E_u = \pi _1 ^{-1} (u),\ F_v = \pi _2 ^{-1} (v)$ and $\Delta \subset C\times C$ is the diagonal. \end{lemma} \begin{proof} For some $u_0,\ v_0\in C$, we write \[ T(u_0) + qu_0 = \sum b_i v_i \] and \[ T^{-1} (v_0)+qv_0 = \sum a_iu_i. \] Let $L$ be the line bundle associated to the divisor \[ D-\sum a_i E_{u_i} - \sum b_i F_{v_i} + q\Delta. \] For any $x\in C$, the restriction of $L$ to $E_x$ is trivial because the divisor $T(x) + qx$ is linearly equivalent to $T(u_0) + qu_0$. By \cite[Chap. III, Exercise 12.4]{MR0463157}, there is a line bundle $M$ on $C$ such that $L \cong \pi_1 ^{\ast} (M)$. Since the restriction of $L$ to $F_{v_0}$ is trivial, the line bundle $L$ is also trivial. Thus $D$ is linearly equivalent to \[ \sum a_i E_{u_i} + \sum b_i F_{v_i} - q\Delta. \] For any $u,\ v\in C$, the divisors $E_{u_i}$ (resp. $F_{v_i} $) are algebraically equivalent to $E_u$ (resp. $F_v$). Note that the degrees of $T(u_0) + qu_0$ and $T^{-1} (v_0) + qv_0$ are \[ \deg (T(u_0) + qu_0 ) = q^2+q+1 \] and \[ \deg (T^{-1} (v_0)+qv_0 ) = q^4+2q^3+2q^2+q, \] and hence the result is proved. \end{proof} \begin{lemma}\label{w} If $C$ is a general member of $\mathscr{C}$, then $D$ and $\Delta$ intersect transversally at any point $(u, v) \in D \cap \Delta $. \end{lemma} \begin{proof} We use the same notation as in Section 3 and Lemma \ref{y}. We recall that $[x_0 : x_1 : x_2]$ is a homogeneous coordinate system, $P_0 = [0 : 0 : 1]$, $l_0 = \{ x_0 = 0 \}$ and \[ \mathscr{D} _0 = \left\{ Y\in \mathscr{C} \middle|\ \begin{array}{ll} P_0\ \mathrm{is\ a\ smooth\ point\ of}\ Y,\ T_{P_0} Y = l_0\\ \mathrm{and}\ \mathrm{mult}_{P_0} (T_{P_0} Y, Y) = q+1 \end{array} \right\}. \] By a change of coordinates, we may assume that $C \in \mathscr{D}_0$. Let $(x, y)$ be affine coordinates such that $[x_0 : x_1 : x_2] = [x : y : 1]$.
Then, up to a multiplicative constant, the polynomial $F$ can be written as \begin{equation*} \begin{split} F(x, y, 1) =\ & x + a_{202} x^q + a_{002} x^{q+1} + a_{102} x^q y + a_{012} xy^q + a_{112} y^{q+1} \\ & + a_{220}x^{q^2} + a_{221} y^{q^2} + (\mathrm{terms\ of\ degree} > q^2). \end{split} \end{equation*} Then we have a local parametrization $x=\phi_1 (t)$, $y = t$ of $C$ at $P_0$ such that the power series $\phi_1 (t)$ is written as \[ \phi_1 (t) = -a_{112} t^{q+1} +a_{012} a_{112}t^{2q+1} + \cdots - a_{221} t^{q^2} + (\mathrm{terms\ of\ degree} > q^2). \] Let $(P_{t_1} , P_{t_2})$ be a point of $D$ in a small neighborhood of $(P_0, P_0)$ such that \[P_{t_1} = [\phi_1 (t_1) : t_1 : 1]\ \mathrm{and}\ P_{t_2} = [\phi_1 (t_2) : t_2 : 1]. \] The tangent line of $C$ at $P_{t_1}$ is \[ x = \frac{d\phi_1}{dt} (t_1)y-t_1\frac{d\phi_1}{dt} (t_1) + \phi_1(t_1), \] and hence \begin{equation*} \begin{split} x = &\ (-a_{112} t_1^q +a_{012}a_{112}t_1^{2q} + (\mathrm{terms\ of\ degree} >2q ))y \\ & + (-a_{221}t_1^{q^2} + (\mathrm{terms\ of\ degree}>q^2 )). \end{split} \end{equation*} Therefore $t_2$ is the solution of the equation \begin{equation}\label{z} \frac{d\phi_1}{dt} (t_1)y-t_1\frac{d\phi_1}{dt} (t_1) + \phi_1(t_1) - \phi_1(y) = 0 \end{equation} in $y$ which is different from $t_1$ and approaches 0 as $t_1$ tends to 0. We can express the left hand side of (\ref{z}) as \begin{equation*} \begin{split} & (-a_{112} t_1^q +a_{012}a_{112}t_1^{2q} + (\mathrm{terms\ of\ degree} >2q\ \mathrm{in}\ t_1))y \\ &+ (-a_{221}t_1^{q^2} + (\mathrm{terms\ of\ degree}>q^2\ \mathrm{in}\ t_1)) \\ &+ a_{112} y^{q+1} -a_{012} a_{112}y^{2q+1} + \cdots + a_{221} y^{q^2} + (\mathrm{terms\ of\ degree} > q^2\ \mathrm{in}\ y) \\ = &\ (y-t_1)^qf_{t_1}(y), \end{split} \end{equation*} where the power series $f_{t_1}(y)$ is written as \[ f_{t_1}(y) = a_{112}y + a_{221} t_1^q + a_{221} y^q + \cdots. \] Since $C\in\mathscr{D}_0$, we have $a_{112} \neq 0$.
Thus a solution of $f_{t_1}(y) = 0$ is \[ y = -\frac{a_{221}}{a_{112}} t_1^q + (\mathrm{terms\ of\ degree} > q). \] Therefore we have \[ t_2 = -\frac{a_{221}}{a_{112}} t_1^q + (\mathrm{terms\ of\ degree} > q). \] If $(P_{t_1} , P_{t_2})$ is a point in $\Delta $, then $t_1 = t_2$. Therefore, if $(P_{t_1} , P_{t_2}) \in D \cap \Delta$, then \[ t_1 = -\frac{a_{221}}{a_{112}} t_1^q + (\mathrm{terms\ of\ degree} > q). \] Thus $D$ and $\Delta$ intersect transversally at $(P_0, P_0) \in D \cap \Delta $. \end{proof} By Lemma \ref{w}, the number of the flexes is equal to the intersection number $(D\cdot \Delta)$ for a general member $C$ of $\mathscr{C}$. Since the self-intersection number of $\Delta$ is $2-2g$, the intersection number $(D\cdot \Delta )$ is \begin{equation*} \begin{split} (D\cdot \Delta ) &= (\{ (q^4+2q^3+2q^2+q)E_u +(q^2+q+1)F_v -q\Delta \} \cdot \Delta ) \\ &= q^4+2q^3+3q^2+2q+1-q(2-2g ) \\ &= q^5+3q^4+2q^3+2q^2+1\\ &= (q^3+2q^2-q+1)(q^2+q+1). \end{split} \end{equation*} \section{Fermat curve} For any formal power series $f \in \Bbbk [[x, y]]$, we define the {\em Milnor number} $\mu (f)$ by \[ \mu (f) = \dim _{\Bbbk } \Bbbk [[x, y]] / \left(\frac{\partial f}{\partial x}, \frac{\partial f}{\partial y} \right). \] The method for calculating the Milnor number of a formal power series in characteristic zero is well known (see, for example, \cite{MR2107253}). However, in positive characteristic, both the method of calculation and the resulting Milnor number differ in general from the characteristic-zero case. In the case of the following lemma, however, the Milnor number is the same as in the characteristic-zero case. \begin{lemma} Let $a$ and $b$ be elements in $\Bbbk\setminus \{ 0\}$, and let $f\in \Bbbk[[x,y]]$ be a formal power series defined by \[ f(x, y) = ax^{\alpha } + by^{\beta } + \sum_{\alpha \beta < \alpha s + \beta r} c_{r, s} x^r y^s, \] where $\alpha $ and $\beta $ satisfy $p\mathrel{\not | } \alpha$, $p\mathrel{\not | } \beta$ and are relatively prime.
Then the Milnor number $\mu (f) $ of $f$ is \[ \mu (f) = (\alpha -1)(\beta -1). \] \end{lemma} \begin{proof} We use the notation of \cite{MR3839793}. The $(\beta, \alpha)$-order of $f$ is \[ \mathrm{ord} _{(\beta, \alpha)} (f) = \alpha \beta. \] The $(\beta, \alpha)$-initial of $f$ is \[ \mathrm{in} _{(\beta, \alpha)} (f) =ax^{\alpha} + by^{\beta}. \] Thus the formal power series $f$ is semi-quasihomogeneous with respect to $(\beta, \alpha)$. By the Appendix of \cite{MR3839793}, \[ \mu (f) = (\alpha -1)(\beta -1). \] \end{proof} \begin{proof}[Proof of Theorem 2] The morphisms $\gamma _{q^2+q+1}$ and $\gamma _{q+1} $ satisfy \[ \gamma _{q^2+q+1} \circ \gamma _{q+1} = \gamma _{q+1} \circ \gamma _{q^2+q+1} = \gamma _{(q^2+q+1)(q+1)}. \] Since the line $l := \gamma _{q^2+q+1} (C_0)$ coincides with $l_0$, the definition of the Ballico-Hefez curve gives \[ B=\gamma _{q+1} (l) = \gamma _{q+1} (\gamma _{q^2+q+1} (C_0) ) = \gamma _{q^2+q+1} (\gamma _{q+1} (C_0) ) = \gamma _{q^2+q+1} (C^{\vee } _0), \] and hence (i) is proved. We define $X\subset \mathbb{P} ^2$ by \[ X= \{x_0=0 \} \cup \{x_1=0 \} \cup \{x_2=0 \} . \] The Ballico-Hefez curve $B$ has $\frac{q^2-q}{2} $ ordinary nodes on $\mathbb{P} ^2\setminus X$ (see \cite[Theorem 2.2]{MR2961398}), and no singular points on $X$. Let $H$ and $h$ be the defining polynomials of $C^{\vee } _0$ and $B$, respectively. By Proposition 1.6 of \cite{MR3323512}, if $p=2$, then \begin{equation} \begin{split} h &= x_0 ^{q+1} + x_1^{q+1} + x_2^{q+1} +x_0^q x_2+x_1^q x_2 + x_0 x_2^q + x_1 x_2^q \\ &\quad + \sum_{i=0}^{\nu -1} x_0^{2^i} x_1^{2^i} (x_0 + x_1+ x_2) ^{q+1-2^{i+1} } , \label{6} \end{split} \end{equation} whereas if $p$ is odd, then \begin{equation} \begin{split} h &= x_0^{q+1} + x_1^{q+1} + x_2^{q+1} -x_0^q x_1 - x_0^q x_2 -x_0 x_1^q - x_1^q x_2 -x_0 x_2^q -x_1 x_2^q \\ &\quad + (x_0^2 + x_1^2 + x_2 ^2 - 2x_0x_1 - 2x_1x_2 - 2x_2x_0)^{\frac{q+1}{2} } .
\label{7} \end{split} \end{equation} By (i), the polynomial $H$ satisfies $H(x_0 , x_1, x_2 ) = h(x_0^{q^2+q+1} , x_1^{q^2+q+1}, x_2^{q^2+q+1} ) $, and the two polynomials $H$ and $h$ are symmetric under the permutation of the coordinates $x_0$, $x_1$ and $x_2$. First we consider the singularities of $C^{\vee } _0$ on $\mathbb{P} ^2 \setminus X$. The morphism $\gamma _{q^2+q+1} : \mathbb{P} ^2 \setminus X \to \mathbb{P} ^2 \setminus X $ is \'{e}tale of degree $(q^2+q+1)^2$. Thus the number of ordinary nodes of $C^{\vee} _0$ on $\mathbb{P} ^2 \setminus X$ is $(q^2+q+1)^2(q^2-q)/2 $. Next, we consider the singularities of $C^{\vee } _0$ on $X$. By (\ref{6}) and (\ref{7}), $h(0, x_1, x_2) = 0$ if and only if $x_1 = x_2$. Moreover, the polynomial $H$ and its partial derivatives $\partial H/\partial x_i = x_i^{q^2+q} (\partial h/\partial x_i )$ vanish at every point of $C_0^{\vee} \cap \{ x_0 = 0 \} $. Thus all the points on $C_0^{\vee} \cap \{ x_0 = 0 \} $ are singular points of $C_0^{\vee}$. The restriction $\gamma _{q^2+q+1} |_{\{ x_0=0\} } $ of $\gamma _{q^2+q+1}$ to $\{x_0=0\} $ has degree $q^2+q+1$. Thus the number of the singular points of $C^{\vee } _0$ on $\{ x_0=0 \} $ is $q^2+q+1$. Therefore, since the polynomial $H$ is symmetric, the number of the singular points of $C^{\vee } _0$ on $X$ is $3(q^2+q+1)$. Finally, since all the Milnor numbers at points in $\gamma_{q^2+q+1} ^{-1} ([ 0 : 1 : 1 ]) $ are equal, it suffices to calculate the Milnor number at the point $[0 : 1 : 1 ] \in C^{\vee } _0$.
If $p=2$, \begin{equation*} \begin{split} h(x_0^{q^2+q+1}, x_1+1, 1) &= x_0^{q^2+q+1} + x_1^{q+1} + x_0^{q(q^2+q+1)} +x_0^{(q+1)(q^2+q+1)} \\ &\quad + \sum_{i=0}^{\nu -1} (x_0^{q^2+q+1} )^{2^i} (x_1+1)^{2^i} (x_0^{q^2+q+1} + x_1 )^{q+1-2^i }, \end{split} \end{equation*} whereas if $p$ is odd, \begin{equation*} \begin{split} h(x_0^{q^2+q+1}, x_1+1, 1) &= -2x_0^{q^2+q+1} + x_1^{q+1} + x_0^{(q^2+q+1)(q+1)} \\ &\quad - x_0^{q(q^2+q+1)} x_1 - 2x_0 ^{q(q^2+q+1)} - x_0^{q^2+q+1} x_1^q \\ &\quad + (x_0^{2(q^2+q+1)} + x_1^2 -2x_0^{q^2+q+1} x_1 - 4x_0^{q^2+q+1} )^{\frac{q+1}{2} }. \end{split} \end{equation*} By Lemma 3.1, the Milnor number of $h(x_0^{q^2+q+1}, x_1+1, 1)$ is \[ q(q^2+q)=q^2(q+1). \] \end{proof} We confirm that the genus of the Fermat curve agrees with the genus of its dual curve. The genus $g$ of the Fermat curve $C_0$ of degree $d=q^2+q+1$ is \[ g=\frac{(d-1)(d-2)}{2} = \frac{(q^2+q)(q^2+q-1)}{2}. \] Let $\mu _P$ be the Milnor number and let $r_P$ be the number of branches at a singular point $P$ of the dual curve $C_0^{\vee}$. If a point $P\in C_0^{\vee}$ is an ordinary node, then $\mu_P = 1$ and $r_P =2$, whereas if a point $P$ is in $C_0^{\vee} \cap X$, then $\mu_P = q^2(q+1)$ and $r_P =1$. Thus the degree $d^{\vee}$ of $C_0^{\vee}$ is $ (q+1)(q^2+q+1)$, and the genus $g^{\vee}$ of $C_0^{\vee}$ is \begin{equation*} \begin{split} g^{\vee} &= \frac{(d^{\vee} -1)(d^{\vee} -2)}{2} - \frac{1}{2} \sum_{P\in \mathrm{Sing} C_0^{\vee} } (\mu_P + r_P -1) \\ &= \frac{\{(q^2+q+1)(q+1)-1\} \{(q^2+q+1)(q+1)-2\}}{2} \\ &\quad -\frac{1}{2} \{ (q^2+q+1)^2(q^2-q)+3(q^2+q+1)q^2(q+1)\} \\ &= \frac{(q^2+q)(q^2+q-1)}{2}. \end{split} \end{equation*} \bibliographystyle{abbrv}
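The closed-form counts appearing above (the number of nodes of $C^{\vee}$, the number of flexes, and the genus balance for the Fermat curve) are purely polynomial identities in $q$, so they can be cross-checked symbolically. A short sketch with sympy, treating $q$ as an indeterminate (this is only a verification aid, not part of the argument):

```python
from sympy import symbols, expand

q = symbols('q')
d = q**2 + q + 1            # degree of C
dv = d * (q + 1)            # degree of the dual curve C^vee
g = (d - 1) * (d - 2) / 2   # genus of C

# number of ordinary nodes of the dual curve (Theorem 1 (iii)):
# delta = genus defect between the smooth plane-curve genus of C^vee and g
delta = (dv - 1) * (dv - 2) / 2 - g
assert expand(delta - q * d * (q**3 + 3*q**2 + 3*q - 1) / 2) == 0

# number of flexes (Theorem 1 (iv)): the intersection number (D.Delta)
# with D ~ (q^4+2q^3+2q^2+q)E + (q^2+q+1)F - q*Delta and (Delta.Delta) = 2-2g
flexes = (q**4 + 2*q**3 + 2*q**2 + q) + d - q * (2 - 2*g)
assert expand(flexes - (q**3 + 2*q**2 - q + 1) * d) == 0

# genus balance for the Fermat curve (Section 5): nodes with (mu,r)=(1,2)
# and 3(q^2+q+1) points with (mu,r)=(q^2(q+1),1) give back the genus of C_0
gv = (dv - 1)*(dv - 2)/2 - (d**2*(q**2 - q) + 3*d*q**2*(q + 1))/2
assert expand(gv - (q**2 + q)*(q**2 + q - 1)/2) == 0
print("all three identities verified")
```

For a concrete instance, at $q=2$ the three quantities evaluate to $175$ nodes, $105$ flexes, and a common genus of $15$, in agreement with the displayed formulas.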
\section{Introduction} Reflected stochastic differential equations have been introduced in the pioneering work of Skorokhod (see \cite{Sko61}), and their numerical approximations by Euler schemes have been widely studied (see \cite{Slo94}, \cite{Slo01}, \cite{Lep95}, \cite{Pet95}, \cite{Pet97}). Reflected stochastic differential equations driven by a Lévy process have also been studied in the literature (see \cite{MR85}, \cite{KH92}). More recently, reflected backward stochastic differential equations with jumps have been introduced and studied (see \cite{HO03}, \cite{EHO05}, \cite{HH06}, \cite{Ess08}, \cite{CM08}, \cite{QS14}), as well as their numerical approximation (see \cite{DL16a} and \cite{DL16b}). The main particularity of our work comes from the fact that the constraint acts on the law of the process $X$ rather than on its paths. The study of such equations is linked to mean field games theory, which has been introduced by Lasry and Lions (see \cite{LL07a}, \cite{LL07b}, \cite{LL06a}, \cite{LL06b}) and whose probabilistic point of view is studied in \cite{CD18a} and \cite{CD18b}. Stochastic differential equations with mean reflection have been introduced by Briand, Elie and Hu in their backward forms in \cite{BEH16}. In that work, they show that mean reflected stochastic processes exist and are uniquely defined by the associated system of equations of the following form: \begin{equation}\label{eq:main2} \begin{cases} \begin{split} & X_t =X_0+\int_0^t b(X_{s}) ds + \int_0^t \sigma(X_{s}) dB_s + K_t,\quad t\geq 0, \\ & \mathbb{E}[h(X_t)] \geq 0, \quad \int_0^t \mathbb{E}[h(X_s)] \, dK_s = 0, \quad t\geq 0.
\end{split} \end{cases} \end{equation} Due to the fact that the reflection process $K$ depends on the law of the position, the authors of \cite{BCGL17}, inspired by mean field games, study the convergence of a numerical scheme based on particle systems to compute numerically solutions to \eqref{eq:main2}.\\ In this paper, we extend previous results to the case of jumps, i.e. we study existence and uniqueness of solutions to the following mean reflected stochastic differential equation (MR-SDE in the sequel) \begin{equation}\label{eq:main} \begin{cases} \begin{split} & X_t =X_0+\int_0^t b(X_{s^-}) ds + \int_0^t \sigma(X_{s^-}) dB_s + \int_0^t\int_E F(X_{s^-},z) \tilde{N}(ds,dz) + K_t,\quad t\geq 0, \\ & \mathbb{E}[h(X_t)] \geq 0, \quad \int_0^t \mathbb{E}[h(X_s)] \, dK_s = 0, \quad t\geq 0, \end{split} \end{cases} \end{equation} where $E=\mathbb{R}^*$, $\tilde{N}$ is a compensated Poisson measure $\tilde{N}(ds,dz)=N(ds,dz)-\lambda(dz)ds$, and $B$ is a Brownian motion independent of $N$. We also propose a numerical scheme based on a particle system to compute numerically solutions to \eqref{eq:main} and study the rate of convergence of this scheme. \medskip Our main motivation for studying~\eqref{eq:main} comes from financial problems subject to risk measure constraints. Given any position $X$, its risk measure $\rho(X)$ can be seen as the amount of own funds needed by the investor to hold the position. For example, we can consider the following risk measure: $\rho(X) = \inf\{m:\ \mathbb{E}[u(m+X)]\geq p\}$ where $u$ is a utility function (concave and increasing) and $p$ is a given threshold (we refer the reader to~\cite{ADEH99} and to~\cite{FS02} for more details on risk measures). Suppose that we are given a portfolio $X$ of assets whose dynamics, when there is no constraint, follow the jump diffusion model \begin{equation*} d X_t = b(X_t) d t + \sigma(X_t) d B_t +\int_E F(X_{t-},z) \tilde{N}(dt,dz), \qquad t\geq 0.
\end{equation*} Given a risk measure $\rho$, one can require that $X_t$ remain an acceptable position at each time $t$. The constraint rewrites as $\mathbb{E} \left[h(X_t)\right] \geq 0$ for $t\geq 0$, where $h=u-p$. In order to satisfy this constraint, the agent has to add some cash to the portfolio over time, and the dynamics of the portfolio's wealth become \begin{equation*}\label{eq:exempleport2} d X_t = b(X_t) d t + \sigma(X_t) d B_t +\int_E F(X_{t-},z) \tilde{N}(dt,dz)+d K_t, \qquad t\geq 0, \end{equation*} where $K_t$ is the amount of cash added up to time $t$ to the portfolio to balance the ``risk'' associated to $X_t$. Of course, the agent wants to cover the risk in a minimal way, adding cash only when needed: this leads to the Skorokhod condition $\mathbb{E}[h(X_t)] d K_t = 0$. Putting all the conditions together, we end up with dynamics of the form \eqref{eq:main} for the portfolio. \medskip The paper is organized as follows. In Section \ref{sec:EU}, we show that, under Lipschitz assumptions on $b$, $\sigma$ and $F$ and a bi-Lipschitz assumption on $h$, the system admits a unique strong solution, \emph{i.e.} there exists a unique pair of processes $(X,K)$ satisfying system \eqref{eq:main} almost surely, the process $K$ being increasing and deterministic. Then, we show that, by adding some regularity on the function $h$, the Stieltjes measure $dK$ is absolutely continuous with respect to the Lebesgue measure, and we obtain an explicit expression for its density. In Section \ref{sec:PMRSDE} we show that the system \eqref{eq:main} can be seen as the limit of an interacting particle system with oblique reflection of mean field type. This result allows us to define in Section \ref{sec:NSMRSDE} an algorithm based on this interacting particle system together with a classical Euler scheme which gives a strong approximation of the solution of \eqref{eq:main}.
When $h$ is bi-Lipschitz, this leads to an approximation error in the $L^2$-sense proportional to $n^{-1} + N^{-\frac{1}{2}}$, where $n$ is the number of points of the discretization grid and $N$ is the number of particles. When $h$ is smooth, we get an approximation error proportional to $n^{-1} + N^{-1}$. In passing, we improve the rate of convergence obtained in \cite{BCGL17}. Finally, we illustrate these results numerically in Section \ref{sec:NI}. \section{Existence, uniqueness and properties of the solution.}\label{sec:EU} Throughout this paper, we consider the following set of assumptions. \begin{customassumption}{(A.1)} \label{lip} {\color{white} \rule{\linewidth}{0.5mm} } (i) Lipschitz assumption: for each $p>0$ there exists a constant $C_p>0$ such that for all $x,x'\in\mathbb{R}$, \begin{equation*} \mid b(x)-b(x')\mid^p+\mid \sigma(x)-\sigma(x')\mid^p+\int_E\mid F(x,z)-F(x',z)\mid^p\lambda(dz)\leq C_p\mid x-x'\mid^p. \end{equation*} (ii) The random variable $X_0$ is square integrable and independent of $B$ and $N$. \end{customassumption} \begin{customassumption}{(A.2)} \label{bilip} {\color{white} \rule{\linewidth}{0.5mm} } (i) The function $h:\mathbb{R} \longrightarrow \mathbb{R}$ is increasing and there exist $0<m\leq M$ such that $$\forall x\in\mathbb{R},~\forall y\in\mathbb{R},~m|x-y|\leq|h(x)-h(y)|\leq M|x-y|.$$ (ii) The initial condition $X_0$ satisfies: $\mathbb{E} [h(X_0)]\geq0$. \end{customassumption} \begin{customassumption}{(A.3)} \label{int} There exists $p>4$ such that $X_0$ belongs to $\mathrm{L}^p$: $\mathbb{E} [|X_0|^p]<\infty$. \end{customassumption} \begin{customassumption}{(A.4)} \label{reg} The mapping $h$ is twice continuously differentiable with bounded derivatives.
\end{customassumption} \subsection{Preliminary results} Define the function \begin{equation} \label{H} {H} : \mathbb{R} \times \mathcal{P}(\mathbb{R}) \ni (x,\nu) \mapsto \int h(x+z) \nu(dz), \end{equation} and the inverse function in space of $H$ evaluated at $0$, namely: \begin{equation} \bar{G}_0 : \mathcal{P}(\mathbb{R}) \ni \nu \mapsto \inf \{x \in \mathbb{R} : H(x,\nu) \ge 0 \} , \end{equation} as well as ${G}_0$, the positive part of $\bar{G}_0$: \begin{equation} {G}_0 : \mathcal{P}(\mathbb{R}) \ni \nu \mapsto \inf \{x \ge 0 : H(x,\nu) \ge 0 \}. \end{equation} We start by studying some properties of $H$ and $G_0$. \begin{lemma} Under \ref{bilip}, we have: \begin{enumerate} \item[(i)] For all $\nu$ in $\mathcal{P}(\mathbb{R})$, the mapping $H(\cdot,\nu): \mathbb{R}\ni x \mapsto H(x,\nu)$ is a bi-Lipschitz function, namely: \begin{equation} \label{Hbilip} \forall x,y \in \mathbb{R}, \quad m|x-y| \le |H(x,\nu)-H(y,\nu)|\le M|x-y|. \end{equation} \item[(ii)] For all $x$ in $\mathbb{R}$, the mapping $H(x,\cdot) : \mathcal{P}(\mathbb{R})\ni \nu \mapsto H(x,\nu)$ satisfies the following Lipschitz estimate: \begin{equation} \label{Hlip} \forall \nu,\nu' \in \mathcal{P}(\mathbb{R}), \quad |H(x,\nu)-H(x,\nu')|\le \left|\int h(x+\cdot) (d\nu-d\nu')\right|. \end{equation} \end{enumerate} \end{lemma} \begin{proof} The proof is straightforward from the definition of $H$ (see $\eqref{H}$). \end{proof} Note that, thanks to the Monge--Kantorovich duality theorem, assertion $\eqref{Hlip}$ implies that for all $x$ in $\mathbb{R}$, the function $H(x,\cdot)$ is Lipschitz continuous w.r.t. the Wasserstein-1 distance.
Indeed, for two probability measures $\nu$ and $\nu'$, the Wasserstein-1 distance between $\nu$ and $\nu'$ is defined by: $$W_1(\nu,\nu')=\sup_{\varphi~1\text{-Lipschitz}}\bigg|\int\varphi(d\nu-d\nu')\bigg|=\inf_{X\sim\nu~;~Y\sim\nu'}\mathbb{E}[|X-Y|].$$ Therefore \begin{equation} \label{Hwiss} \forall\nu,\nu'\in\mathcal{P}(\mathbb{R}),~|H(x,\nu)-H(x,\nu')|\leq MW_1(\nu,\nu'). \end{equation} Then, we have the following result about the regularity of $G_0$: \begin{lemma} \label{wiss} Under \ref{bilip}, the mapping ${G}_0 :\mathcal{P}(\mathbb{R}) \ni \nu \mapsto {G}_0(\nu)$ is Lipschitz continuous in the following sense: \begin{equation} |{G}_0(\nu)-{G}_0( \nu') | \le {1\over{m}}\left|\int h(\bar{G}_0(\nu)+\cdot) (d\nu-d\nu')\right|, \end{equation} where $\bar{G}_0(\nu)$ is the inverse of $H(\cdot,\nu) $ at point $0$. In particular \begin{equation} |{G}_0(\nu)-{G}_0( \nu') | \le {M\over{m}}W_1(\nu, \nu'). \end{equation} \end{lemma} \begin{proof} The proof is given in \cite[Lemma 2.5]{BCGL17}. \end{proof} \subsection{Existence and uniqueness of the solution of \eqref{eq:main}} We emphasize that existence and uniqueness hold under only \ref{lip}, which is the standard assumption for SDEs, and \ref{bilip}, which is the assumption used in \cite{BEH16}. The convergence of particle systems requires only an additional integrability assumption on the initial condition, namely \ref{int}. We sometimes add the smoothness assumption \ref{reg} on $h$ in order to improve some of the results. We first introduce the notion of solution before stating the existence and uniqueness result. \begin{definition} A couple of processes $(X,K)$ is said to be a flat deterministic solution to \eqref{eq:main} if $(X,K)$ satisfies \eqref{eq:main} with $K$ a non-decreasing deterministic function such that $K_0=0$. \end{definition} Given this definition, we have the following result.
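Before stating it, a brief computational aside: the operator ${G}_0$ is directly computable on an empirical measure. Since $h$ is increasing, $x\mapsto\frac{1}{N}\sum_i h(x+U_i)$ is increasing, so a simple bisection yields $G_0$. The following Python sketch is ours and purely illustrative (the function name and the toy choices of $h(x)=x-1$, which is bi-Lipschitz with $m=M=1$, and a standard Gaussian sample are assumptions, not part of the paper):

```python
import numpy as np

def G0_empirical(h, sample, x_hi=100.0, tol=1e-8):
    """inf{x >= 0 : (1/N) sum_i h(x + U_i) >= 0}, computed by bisection.
    Bisection is legitimate because h is increasing (Assumption (A.2))."""
    if np.mean(h(sample)) >= 0:          # constraint already satisfied
        return 0.0
    lo, hi = 0.0, x_hi
    while np.mean(h(hi + sample)) < 0:   # enlarge the bracket if needed
        lo, hi = hi, 2 * hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if np.mean(h(mid + sample)) >= 0:
            hi = mid
        else:
            lo = mid
    return hi

rng = np.random.default_rng(0)
h = lambda x: x - 1.0                    # toy bi-Lipschitz h (m = M = 1)
U = rng.normal(loc=0.0, scale=1.0, size=100_000)
print(G0_empirical(h, U))                # close to 1, since E[h(x+U)] = x - 1
```

Here $\mathbb{E}[h(x+U)]=x-1$, so the exact value of $G_0$ for the law of $U$ is $1$; the empirical value deviates only by the Monte Carlo error on the sample mean.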
\begin{theorem} \label{thrm_exacte} Under Assumptions $\ref{lip}$ and $\ref{bilip}$, the mean reflected SDE \eqref{eq:main} has a unique deterministic flat solution $(X, K)$. Moreover, \begin{equation} \label{K_t} \forall t\geq 0,~K_t=\sup_{s\leq t} \inf\{x\geq0:\mathbb{E}[h(x+U_s)] \geq 0\}=\sup\limits_{s\le t} {G}_0(\mu_{s}), \end{equation} where $(U_t)_{0\leq t\leq T}$ is the process defined by: \begin{equation} \label{U_t} U_t=X_0+\int_0^t b(X_{s^-}) ds + \int_0^t \sigma(X_{s^-}) dB_s + \int_0^t\int_E F(X_{s^-},z) \tilde{N}(ds,dz), \end{equation} and $(\mu_{t})_{0\le t\le T}$ is the family of marginal laws of $(U_{t})_{0\le t\le T}$. \end{theorem} \begin{proof} The proof for the case of continuous backward SDEs is given in \cite{BEH16}. For the ease of the reader, we sketch the proof for the forward case with jumps. Let $\hat{X}$ be a given process such that, for all $t>0$, $\mathbb{E}\big[\sup_{s\leq t}|\hat X_s|^2\big]<\infty$. We set $$\hat U_t=X_0+\int_0^t b(\hat X_{s^-}) ds + \int_0^t \sigma(\hat X_{s^-}) dB_s + \int_0^t\int_E F(\hat X_{s^-},z) \tilde{N}(ds,dz),$$ and define the function $K$ by setting \begin{equation} \label{K} K_t=\sup_{s\leq t} \inf\{x\geq0:\mathbb{E}[h(x+\hat U_s)] \geq 0\}=\sup\limits_{s\le t} {G}_0(\hat\mu_{s}). \end{equation} The function $K$ being given, let us define the process $X$ by the formula $$X_t=X_0+\int_0^t b(\hat X_{s^-}) ds + \int_0^t \sigma(\hat X_{s^-}) dB_s + \int_0^t\int_E F(\hat X_{s^-},z) \tilde{N}(ds,dz)+K_t.$$ Let us check that $(X, K)$ satisfies the constraints of \eqref{eq:main}. By definition of $K$, $\mathbb{E}[h(X_t)] \geq 0$; moreover, $dK$-almost everywhere the running supremum increases, that is, $$K_t=\inf\{x\geq0:\mathbb{E}[h(x+\hat U_t)] \geq 0\}>0,$$ so that $\mathbb{E}[h(X_t)]=\mathbb{E}[h(\hat U_t+K_t)]=0~~~dK$-a.e.
since $h$ is continuous and nondecreasing.\\ \\ Next, we consider the set $\mathcal{C}^2=\{X~ \mbox{càdlàg},~\mathbb{E}(\sup_{t\leq T}|X_t|^2)<\infty\}$ and the map $\Xi:\mathcal{C}^2\longrightarrow \mathcal{C}^2$ which associates to $\hat X$ the process $X$. Let us show that $\Xi$ is a contraction. Let $\hat X,~\hat X'\in \mathcal{C}^2$ be given, and define $K$ and $K'$ as above, using the same Brownian motion and the same Poisson measure. From Assumption $\ref{lip}$ and the Cauchy--Schwarz and Doob inequalities, we have \begin{equation*} \begin{aligned} \mathbb{E}\bigg[\sup_{t\leq T}|X_t-X'_t|^2\bigg] & \leq 4 \mathbb{E}\bigg[\sup_{t\leq T}\Bigg\{\bigg|\int_0^t \bigg(b(\hat X_{s^-})-b(\hat X'_{s^-})\bigg) ds\bigg|^2 + \bigg|\int_0^t \bigg(\sigma(\hat X_{s^-})-\sigma(\hat X'_{s^-})\bigg) dB_s\bigg|^2 \\&~~~~ + \bigg|\int_0^t\int_E \bigg(F(\hat X_{s^-},z)-F(\hat X'_{s^-},z)\bigg) \tilde{N}(ds,dz)\bigg|^2 + |K_t-K'_t|^2\Bigg\}\bigg]\\ & \leq 4 \Bigg\{\mathbb{E}\bigg[\sup_{t\leq T}t\int_0^t \Big|b(\hat X_{s^-})-b(\hat X'_{s^-})\Big|^2 ds\bigg] + \mathbb{E}\bigg[\sup_{t\leq T}\bigg|\int_0^t \bigg(\sigma(\hat X_{s^-})-\sigma(\hat X'_{s^-})\bigg) dB_s\bigg|^2\bigg] \\&~~~~ + \mathbb{E}\bigg[\sup_{t\leq T}\bigg|\int_0^t\int_E \bigg(F(\hat X_{s^-},z)-F(\hat X'_{s^-},z)\bigg) \tilde{N}(ds,dz)\bigg|^2\bigg] + \sup_{t\leq T}|K_t-K'_t|^2\Bigg\} \\ & \leq C \Bigg\{T\mathbb{E}\bigg[\int_0^T \Big|b(\hat X_{s^-})-b(\hat X'_{s^-})\Big|^2 ds\bigg]+ \mathbb{E}\bigg[\int_0^T \Big|\sigma(\hat X_{s^-})-\sigma(\hat X'_{s^-})\Big|^2ds\bigg] \\&~~~~ + \int_0^T\int_E\mathbb{E}\bigg[\Big|F(\hat X_{s^-},z)-F(\hat X'_{s^-},z)\Big|^2\bigg] \lambda(dz)ds + \sup_{t\leq T}|K_t-K'_t|^2\Bigg\}\\ & \leq C\Bigg\{T^2C_1\mathbb{E}\bigg[\sup_{t\leq T}|\hat X_{t^-}-\hat X'_{t^-}|^2\bigg]+ TC_1\mathbb{E}\bigg[\sup_{t\leq T}|\hat X_{t^-}-\hat X'_{t^-}|^2\bigg] \\&~~~~ + TC_1\mathbb{E}\bigg[\sup_{t\leq T}|\hat X_{t^-}-\hat X'_{t^-}|^2\bigg] + \sup_{t\leq T}|K_t-K'_t|^2\Bigg\}\\ & \leq C\bigg(T^2C_1+TC_2\bigg)\mathbb{E}\bigg[\sup_{t\leq T}|\hat X_{t}-\hat X'_{t}|^2\bigg]+C\sup_{t\leq T}|K_t-K'_t|^2. \end{aligned} \end{equation*} From the representation $\eqref{K}$ of the process $K$ and Lemma \ref{wiss}, we have that \begin{equation*} \begin{aligned} \sup_{t\leq T}|K_t-K'_t|^2 & \leq \frac{M^2}{m^2} \mathbb{E}\bigg[\sup_{t\leq T}|\hat U_t-\hat U'_t|^2\bigg]\\ &\leq C(T^2C_1+TC_2)\mathbb{E}\bigg[\sup_{t\leq T}|\hat X_{t}-\hat X'_{t}|^2\bigg]. \end{aligned} \end{equation*} Therefore, \begin{equation*} \begin{aligned} \mathbb{E}\bigg[\sup_{t\leq T}|X_t-X'_t|^2\bigg] & \leq C(1+T)T\mathbb{E}\bigg[\sup_{t\leq T}|\hat X_{t}-\hat X'_{t}|^2\bigg]. \end{aligned} \end{equation*} Hence, there exists a positive $\mathcal{T}$, depending on $b$, $\sigma$, $F$ and $h$ only, such that for all $T <\mathcal{T}$, the map $\Xi$ is a contraction. We first deduce the existence and uniqueness of the solution on $[0, \mathcal{T}]$ and then on $\mathbb{R}^+$ by iterating the construction. \end{proof} \subsection{Regularity results on $K$, $X$ and $U$} \begin{remark} \label{rem} Note that from this construction, we deduce that for all $0\leq s< t$: \begin{align*} &K_{t}-K_{s}\\ &= \sup\limits_{s\le r\le t} \inf \left\{x\ge 0 : \mathbb{E}\left[ h \left( x+X_s+\int_s^{r} b(X_{u^-}) du +\int_s^{r} \sigma(X_{u^-}) dB_u + \int_s^{r}\int_E F(X_{u^-},z) \tilde{N}(du,dz) \right)\right] \ge 0 \right\}. \end{align*} \end{remark} \begin{proof} From the representation $\eqref{K_t}$ of the process $K$, we have \begin{equation*} \begin{aligned} K_t&=\sup_{r\leq t} \inf\bigg\{x\geq0:\mathbb{E}[h(x+U_r)] \geq 0\bigg\}\\& =\sup_{r\leq t} G_0(U_r)\\& =\max\bigg\{\sup_{r\leq s}G_0(U_r),\sup_{s\leq r\leq t} G_0(U_r)\bigg\}\\& =\max\bigg\{K_s,\sup_{s\leq r\leq t} G_0(U_r)\bigg\}\\& =\max\bigg\{K_s,\sup_{s\leq r\leq t} G_0(X_s-K_s+U_r-U_s)\bigg\}\\& =\max\bigg\{K_s,\sup_{s\leq r\leq t} \bigg[\bar{G_0}(X_s-K_s+U_r-U_s)^+\bigg]\bigg\}.
\end{aligned} \end{equation*} By the definition of $\bar{G_0}$, we can observe that for all $y\in\mathbb{R}$, $\bar{G_0}(X+y)=\bar{G_0}(X)-y$, so we get \begin{equation*} \begin{aligned} K_t&=\max\bigg\{K_s,\sup_{s\leq r\leq t} \bigg[\bigg(K_s+\bar{G_0}(X_s+U_r-U_s)\bigg)^+\bigg]\bigg\}\\& =K_s+\max\bigg\{0,\sup_{s\leq r\leq t} \bigg[\bigg(K_s+\bar{G_0}(X_s+U_r-U_s)\bigg)^+-K_s\bigg]\bigg\}. \end{aligned} \end{equation*} Note that $\sup_r (f(r)^+)=(\sup_r f(r))^+=\max(0,\sup_r f(r))$ for every function $f$; hence \begin{equation*} \begin{aligned} K_t&=K_s+\sup_{s\leq r\leq t} \bigg[\bigg\{\bigg(K_s+\bar{G_0}(X_s+U_r-U_s)\bigg)^+-K_s\bigg\}^+\bigg]\\& =K_s+\sup_{s\leq r\leq t}\bigg[\bigg(\bar{G_0}(X_s+U_r-U_s)\bigg)^+\bigg]\\& =K_s+\sup_{s\leq r\leq t}G_0(X_s+U_r-U_s), \end{aligned} \end{equation*} and so \begin{equation*} \begin{aligned} K_t-K_s=\sup_{s\leq r\leq t}G_0(X_s+U_r-U_s). \end{aligned} \end{equation*} \end{proof} In what follows, we make intensive use of this representation formula for the process $K$.\\ Let $(\m{F}_s)_{s\geq0}$ be a filtration on $(\Omega,\m{F},\mathbb{P})$ such that $(X_s)_{s\geq0}$ is an $(\m{F}_s)_{s\geq0}$-adapted process. \begin{proposition} \label{propriete_1} Suppose that Assumptions $\ref{lip}$ and $\ref{bilip}$ hold.
Then, for all $p\geq 2$, there exists a positive constant $K_p$, depending on $T$, $b$, $\sigma$, $F$ and $h$, such that \begin{enumerate} \item[(i)] $\mathbb{E}\big[\sup_{t\leq T}|X_t|^p\big]\leq K_p\big(1+\mathbb{E}\big[|X_0|^p\big]\big).$ \item[(ii)]$\forall~0\leq s\leq t\leq T,~~~~\mathbb{E}\big[\sup_{s\leq u\leq t}|X_u|^p|\m{F}_s\big]\leq C\big(1+|X_s|^p\big).$ \end{enumerate} \end{proposition} \begin{remark} \label{rem_U_1} Under the same conditions, we conclude that $$\mathbb{E}\big[\sup_{t\leq T}|U_t|^p\big]\leq K_p\big(1+\mathbb{E}\big[|X_0|^p\big]\big).$$ \end{remark} \begin{proof}[Proof of (i)]\renewcommand{\qedsymbol}{} We have \begin{equation*} \begin{aligned} \mathbb{E}\bigg[\sup_{t\leq T}|X_t|^p\bigg]&\leq 5^{p-1}\Bigg\{\mathbb{E}|X_0|^p+\mathbb{E} \sup_{t\leq T}\bigg(\int_0^t |b(X_{s^-})| ds\bigg)^p + \mathbb{E} \sup_{t\leq T}\bigg|\int_0^t \sigma(X_{s^-}) dB_s\bigg|^p \\&~~+ \mathbb{E} \sup_{t\leq T}\bigg|\int_0^t\int_E F(X_{s^-},z) \tilde{N}(ds,dz)\bigg|^p + K_T^p\Bigg\}. \end{aligned} \end{equation*} Let us first consider the last term $K_T =\sup_{t\leq T}G_0(\mu_t)$. From the Lipschitz property of $G_0$ (Lemma $\ref{wiss}$) and the definition of the Wasserstein metric we have $$\forall t\geq 0,~|G_0(\mu_t)|\leq \frac{M}{m}\mathbb{E}[|U_t-U_0|],$$ since $G_0(\mu_0)=0$ as $\mathbb{E}[h(X_0)]\geq 0$, and where $U$ is defined by $\eqref{U_t}$.
Therefore \begin{equation*} \begin{aligned} |K_T|^p=|\sup_{t\leq T}G_0(\mu_t)|^p&\leq 3^{p-1}\bigg(\frac{M}{m}\bigg)^p\Bigg\{\mathbb{E} \sup_{t\leq T}\bigg(\int_0^t |b(X_{s^-})| ds\bigg)^p + \mathbb{E} \sup_{t\leq T}\bigg|\int_0^t \sigma(X_{s^-}) dB_s\bigg|^p \\&~~+ \mathbb{E} \sup_{t\leq T}\bigg|\int_0^t\int_E F(X_{s^-},z) \tilde{N}(ds,dz)\bigg|^p\Bigg\}, \end{aligned} \end{equation*} and so \begin{equation*} \begin{aligned} \mathbb{E}\big[\sup_{t\leq T}|X_t|^p\big]&\leq C(p,M,m)\mathbb{E}\bigg[|X_0|^p+\sup_{t\leq T}\bigg(\int_0^t |b(X_{s^-})| ds\bigg)^p + \sup_{t\leq T}\bigg|\int_0^t \sigma(X_{s^-}) dB_s\bigg|^p \\&~~+ \sup_{t\leq T}\bigg|\int_0^t\int_E F(X_{s^-},z) \tilde{N}(ds,dz)\bigg|^p\bigg]. \end{aligned} \end{equation*} Hence, using Assumption $\ref{lip}$ together with the Cauchy--Schwarz, Doob and Burkholder--Davis--Gundy inequalities, we get \begin{equation*} \begin{aligned} \mathbb{E}\bigg[\sup_{t\leq T}|X_t|^p\bigg]&\le C\Bigg\{\mathbb{E}\bigg[|X_0|^p\bigg] + T^{p-1}\mathbb{E}\bigg[\int_0^T (1+|X_{s^-}|)^p ds\bigg] + C_1\mathbb{E}\bigg[\int_0^T (1+|X_{s^-}|)^2 ds\bigg]^\frac{p}{2} \\&~~+ C_2\mathbb{E}\bigg[\int_0^T (1+|X_{s^-}|)^p ds\bigg]\Bigg\}\\ &\le C_1\bigg(1+\mathbb{E}|X_0|^p\bigg)+C_2\int_{0}^{T}\mathbb{E}\bigg[\sup_{t\leq r}|X_t|^p\bigg]dr, \end{aligned} \end{equation*} and from Gronwall's Lemma, we can conclude that for all $p\geq 2$, there exists a positive constant $K_p$, depending on $T$, $b$, $\sigma$, $F$ and $h$, such that $$\mathbb{E}\big[\sup_{t\leq T}|X_t|^p\big]\leq K_p\big(1+\mathbb{E}\big[|X_0|^p\big]\big).$$ \end{proof} \begin{proof}[Proof of (ii)] We first decompose $X_u$ as \begin{equation*} \begin{aligned} X_u &=U_u+K_u\\&=X_s+(U_u-U_s)+(K_u-K_s)\\&=X_s+\int_s^u b(X_{r^-}) dr + \int_s^u \sigma(X_{r^-}) dB_r + \int_s^u\int_E F(X_{r^-},z) \tilde{N}(dr,dz)\\&~~~~ + (K_u-K_s).
\end{aligned} \end{equation*} Writing $\mathbb{E}_s[\,\cdot\,]=\mathbb{E}[\,\cdot\,|\m{F}_s]$, we then get \begin{equation*} \begin{aligned} \mathbb{E}_s\bigg[\sup_{s\leq u\leq t}|X_u|^p\bigg]&\leq 5^{p-1}\Bigg\{\mathbb{E}_s\bigg[|X_s|^p\bigg]+\mathbb{E}_s\bigg[ \sup_{s\leq u\leq t}\bigg|\int_s^u b(X_{r^-}) dr\bigg|^p\bigg] + \mathbb{E}_s \bigg[\sup_{s\leq u\leq t}\bigg|\int_s^u \sigma(X_{r^-}) dB_r\bigg|^p\bigg] \\&~~+ \mathbb{E}_s \bigg[\sup_{s\leq u\leq t}\bigg|\int_s^u\int_E F(X_{r^-},z) \tilde{N}(dr,dz)\bigg|^p\bigg] +\bigg|K_t-K_s\bigg|^p\Bigg\}\\& \leq C\Bigg\{|X_s|^p+T^{p-1}\int_s^t \mathbb{E}_s\bigg[\bigg|b(X_{r^-})\bigg|^p\bigg] dr+ \int_s^t \mathbb{E}_s\bigg[\bigg|\sigma(X_{r^-})\bigg|^p\bigg] dr \\&~~+ \int_s^t\int_E \mathbb{E}_s\bigg[\bigg|F(X_{r^-},z)\bigg|^p\bigg]\lambda(dz) dr +2\bigg|K_T\bigg|^p\Bigg\}\\& \leq C(T)\Bigg\{|X_s|^p+C_1\int_s^t \mathbb{E}_s\bigg[1+|X_{r^-}|^p\bigg]dr+2\bigg|K_T\bigg|^p\Bigg\}\\& \leq C_1(1+|X_s|^p)+C_2\int_s^t \mathbb{E}_s[\sup_{s\leq u \leq r} |X_{u^-}|^p]dr. \end{aligned} \end{equation*} Finally, from Gronwall's Lemma, we deduce that for all $0\leq s\leq t\leq T$, there exists a constant $C$, depending on $p$, $T$, $b$, $\sigma$, $F$ and $h$, such that $$\mathbb{E}\big[\sup_{s\leq u\leq t}|X_u|^p|\m{F}_s\big]\leq C\big(1+|X_s|^p\big).$$ \end{proof} \begin{proposition} \label{propriete_2} Let $p\geq2$ and let Assumptions \ref{lip}, \ref{bilip} and \ref{int} hold.
There exists a constant $C$ depending on $p$, $T$, $b$, $\sigma$, $F$ and $h$ such that \begin{enumerate} \item [(i)] $\forall~0\leq s<t\leq T,~~~~|K_t-K_s|\leq C|t-s|^{1/2}.$ \item [(ii)] $\forall~0\leq s\leq t\leq T,~~~~\mathbb{E}\big[|U_t-U_s|^p\big]\leq C|t-s|.$ \item [(iii)] $\forall~0\leq r<s<t\leq T,~~~~\mathbb{E}[|U_s-U_r|^p|U_t-U_s|^p]\leq C|t-r|^2.$ \end{enumerate} \end{proposition} \begin{remark} \label{rem_X} Under the same conditions, we conclude that $$\forall~0\leq s\leq t\leq T,~~~~\mathbb{E}\big[|X_t-X_s|^p\big]\leq C|t-s|.$$ \end{remark} \begin{proof}[Proof of (i)]\renewcommand{\qedsymbol}{} Let us recall that $$\bar{G_0}(X)=\inf\{x\in\mathbb{R}:\mathbb{E}[h(x+ X)] \geq 0\},$$ $$G_0(X)=(\bar{G_0}(X))^+=\inf\{x\geq 0:\mathbb{E}[h(x+X)] \geq 0\},$$ for any random variable $X$ ($G_0$ and $\bar{G_0}$ being applied, with a slight abuse of notation, to the law of $X$).\\ From Remark \ref{rem}, we have \begin{equation} \label{Kt-Ks} \begin{aligned} K_t-K_s=\sup_{s\leq r\leq t}G_0(X_s+U_r-U_s). \end{aligned} \end{equation} From this representation of $K_t-K_s$, we deduce the $\frac{1}{2}$-Hölder continuity of the function $t\longmapsto K_t$.
Indeed, since by definition $G_0(X_s)=0$, if $s<t$, using Lemma \ref{wiss}, \begin{equation*} \begin{aligned} |K_t-K_s|&=\sup_{s\leq r\leq t}G_0(X_s+U_r-U_s)\\& =\sup_{s\leq r\leq t}[G_0(X_s+U_r-U_s)-G_0(X_s)]\\& \leq\frac{M}{m}\sup_{s\leq r\leq t}\mathbb{E}[|U_r-U_s|], \end{aligned} \end{equation*} and so \begin{equation*} \begin{aligned} |K_t-K_s|&\leq C\Bigg\{\mathbb{E} \bigg[\sup_{s\leq r\leq t}\bigg|\int_s^r b(X_{u^-}) du\bigg|\bigg] + \bigg(\mathbb{E} \bigg[\sup_{s\leq r\leq t}\bigg|\int_s^r \sigma(X_{u^-}) dB_u\bigg|^2\bigg]\bigg)^{1/2}\\&~~~~ + \bigg(\mathbb{E} \bigg[\sup_{s\leq r\leq t}\bigg|\int_s^r\int_E F(X_{u^-},z) \tilde{N}(du,dz)\bigg|^2\bigg]\bigg)^{1/2}\Bigg\}\\& \leq C\Bigg\{\int_s^t\mathbb{E}\bigg[\bigg|b(X_{u^-})\bigg|\bigg]du+\bigg(\mathbb{E}\bigg[\int_s^t\bigg|\sigma(X_{u^-})\bigg|^2du\bigg]\bigg)^{1/2}\\&~~~~+\bigg(\mathbb{E} \bigg[\int_s^t\int_E \bigg|F(X_{u^-},z) \bigg|^2\lambda(dz)du\bigg]\bigg)^{1/2}\Bigg\}\\& \leq C\Bigg\{|t-s|\mathbb{E}\bigg[1+\sup_{u\leq T}|X_u|\bigg]+|t-s|^{1/2}\bigg(\mathbb{E}\bigg[1+\sup_{u\leq T}|X_u|^2\bigg]\bigg)^{1/2}\Bigg\}. \end{aligned} \end{equation*} Therefore, if $X_0\in \mathrm{L}^p$ for some $p\geq2$, it follows from Proposition \ref{propriete_1} that \begin{equation*} \begin{aligned} |K_t-K_s|\leq C|t-s|^{1/2}.
\end{aligned} \end{equation*} \end{proof} \begin{proof}[Proof of (ii)]\renewcommand{\qedsymbol}{} \begin{equation*} \begin{aligned} \mathbb{E}\bigg[|U_t-U_s|^p\bigg]&\leq 4^{p-1}\mathbb{E}\Bigg[\bigg(\int_s^t |b(X_{r^-})| dr\bigg)^p + \bigg|\int_s^t \sigma(X_{r^-}) dB_r\bigg|^p \\&~~+\bigg|\int_s^t\int_E F(X_{r^-},z) \tilde{N}(dr,dz)\bigg|^p \Bigg]\\ &\leq C\sup\limits_{0\le r\le t}\mathbb{E}\bigg[\bigg(\int_s^r|b(X_{u^-})| du\bigg)^p + \bigg|\int_s^r \sigma(X_{u^-}) dB_u\bigg|^p \\&~~+\bigg|\int_s^r\int_E F(X_{u^-},z) \tilde{N}(du,dz)\bigg|^p\bigg]\\ &\leq C\Bigg\{|t-s|^{p-1}\mathbb{E}\bigg[\int_s^t (1+|X_{u^-}|)^p du\bigg] + C_1\mathbb{E}\bigg[\bigg(\int_s^t (1+|X_{u^-}|)^2 du\bigg)^{p/2}\bigg] \\&~~+ C_2\mathbb{E}\bigg[\int_s^t (1+|X_{u^-}|)^p du\bigg]\Bigg\}\\ & \leq C_1\mathbb{E}\bigg[1+\sup_{t\leq T}|X_t|^p\bigg]|t-s|^p+C_2\mathbb{E}\bigg[\bigg(1+\sup_{t\leq T}|X_t|^2\bigg)^{p/2}\bigg]|t-s|^{p/2}\\&~~+C_3\mathbb{E}\bigg[1+\sup_{t\leq T}|X_t|^p\bigg]|t-s|. \end{aligned} \end{equation*} Finally, if $X_0\in \mathrm{L}^p$ for some $p\geq2$, we can conclude that there exists a constant $C$, depending on $p$, $T$, $b$, $\sigma$, $F$ and $h$, such that $$\forall~0\leq s\leq t\leq T,~~~~\mathbb{E}\big[|U_t-U_s|^p\big]\leq C|t-s|.$$ \end{proof} \begin{proof}[Proof of (iii)] Let $0\leq r<s<t\leq T$; we have \begin{equation*} \begin{aligned} \mathbb{E}\bigg[|U_s-U_r|^p|U_t-U_s|^p\bigg]& = \mathbb{E}\bigg[|U_s-U_r|^p\mathbb{E}_s[|U_t-U_s|^p]\bigg]\\& \leq C\mathbb{E}\Bigg[|U_s-U_r|^p\Bigg\{\mathbb{E}_s\bigg[\bigg|\int_s^t b(X_{u^-}) du\bigg|^p\bigg]+\mathbb{E}_s\bigg[\bigg|\int_s^t \sigma(X_{u^-}) dB_u\bigg|^p\bigg]\\&~~~~+\mathbb{E}_s\bigg[\bigg|\int_s^t\int_E F(X_{u^-},z) \tilde{N}(du,dz)\bigg|^p\bigg]\Bigg\}\Bigg].
\end{aligned} \end{equation*} Then, from the Burkholder--Davis--Gundy inequality, we get \begin{equation*} \begin{aligned} \mathbb{E}\bigg[|U_s-U_r|^p|U_t-U_s|^p\bigg]& \leq C\mathbb{E}\Bigg[|U_s-U_r|^p\Bigg\{\mathbb{E}_s\bigg[\bigg|\int_s^t b(X_{u^-}) du\bigg|^p\bigg]+\bigg(\mathbb{E}_s\bigg[\int_s^t \bigg|\sigma(X_{u^-})\bigg|^2 du\bigg]\bigg)^{p/2}\\&~~~~+\mathbb{E}_s\bigg[\int_s^t\int_E \bigg|F(X_{u^-},z)\bigg|^p \lambda(dz)du\bigg]\Bigg\}\Bigg]\\& \leq C\mathbb{E}\Bigg[|U_s-U_r|^p\Bigg\{|t-s|^p\bigg(1+\mathbb{E}_s\bigg[\sup_{s\leq u\leq t}|X_u|^p\bigg]\bigg)\\&~~~~+|t-s|^{p/2}\bigg(1+\mathbb{E}_s\bigg[\sup_{s\leq u\leq t}|X_u|^2\bigg]^{p/2}\bigg)+|t-s|\bigg(1+\mathbb{E}_s\bigg[\sup_{s\leq u\leq t}|X_u|^p\bigg]\bigg)\Bigg\}\Bigg]\\& \leq C\mathbb{E}\Bigg[|U_s-U_r|^p\Bigg\{|t-s|\bigg(1+\mathbb{E}_s\bigg[\sup_{s\leq u\leq t}|X_u|^p\bigg]\bigg)\Bigg\}\Bigg], \end{aligned} \end{equation*} thus, from (i) and Proposition \ref{propriete_1}, we obtain \begin{equation*} \begin{aligned} \mathbb{E}\bigg[|U_s-U_r|^p|U_t-U_s|^p\bigg]& \leq C_1|t-s|\mathbb{E}\Bigg[|U_s-U_r|^p\Bigg]+C_2|t-s|\mathbb{E}\Bigg[|U_s-U_r|^p|X_s|^p\Bigg]\\& \leq C_1|t-s||s-r|+C_2|t-s|\mathbb{E}\Bigg[|U_s-U_r|^p\bigg(|X_s-X_r|^p+|X_r|^p\bigg)\Bigg]\\& \leq C_1|t-r|^2+C_2|t-s|\mathbb{E}\Bigg[2^{p-1}|U_s-U_r|^{p}\bigg(|U_s-U_r|^p+|K_s-K_r|^p\bigg)\Bigg]\\&~~~~+C_3|t-s|\mathbb{E}\Bigg[|U_s-U_r|^p|X_r|^p\Bigg]\\& \leq C_1|t-r|^2+C_2|t-s|\mathbb{E}\Bigg[|U_s-U_r|^{2p}\Bigg]+C_3|t-s||s-r|^{p/2}\mathbb{E}\Bigg[|U_s-U_r|^{p}\Bigg]\\&~~~~+C_4|t-s|\mathbb{E}\Bigg[|U_s-U_r|^p|X_r|^p\Bigg]\\& \leq C_1|t-r|^2+C_4|t-s|\mathbb{E}\Bigg[|X_r|^p\mathbb{E}_r[|U_s-U_r|^p]\Bigg]. \end{aligned} \end{equation*} Following the proof of (ii), we also get \begin{equation*} \begin{aligned} \mathbb{E}_r[|U_s-U_r|^p]\leq C|s-r|\bigg(1+\mathbb{E}_r\bigg[\sup_{r\leq u\leq s}|X_u|^p\bigg]\bigg).
\end{aligned} \end{equation*} Then \begin{equation*} \begin{aligned} \mathbb{E}\bigg[|U_s-U_r|^p|U_t-U_s|^p\bigg]& \leq C_1|t-r|^2+C_2|t-s||s-r|\mathbb{E}\Bigg[|X_r|^p\bigg(1+\mathbb{E}_r\bigg[\sup_{r\leq u\leq s}|X_u|^p\bigg]\bigg)\Bigg]\\& \leq C_1|t-r|^2+C_2|t-r|^2\mathbb{E}\Bigg[|X_r|^p\bigg(1+\sup_{r\leq u\leq s}|X_u|^p\bigg)\Bigg]. \end{aligned} \end{equation*} Under \ref{int}, we conclude that \begin{equation*} \mathbb{E}[|U_s-U_r|^p|U_t-U_s|^p]\leq C|t-r|^2,~~~~\forall~0\leq r<s<t\leq T. \end{equation*} \end{proof} \subsection{Density of $K$} Let $\m{L}$ be the second order integro-differential operator defined by \begin{equation} \label{L} \m{L}f(x):=b(x)\frac{\partial}{\partial x}f(x)+\frac{1}{2}\sigma\sigma^*(x)\frac{\partial^2}{\partial x^2}f(x)+\int_E\bigg(f\big(x+F(x,z)\big)-f(x)-F(x,z)f'(x)\bigg)\lambda(dz), \end{equation} for any twice continuously differentiable function $f$. \begin{proposition} \label{propdensite} Suppose that Assumptions $\ref{lip}$, $\ref{bilip}$ and $\ref{reg}$ hold and let $(X,K)$ denote the unique deterministic flat solution to \eqref{eq:main}. Then the Stieltjes measure $dK$ is absolutely continuous with respect to the Lebesgue measure (see Proposition \ref{propriete_2}) with density \begin{equation} \label{densite} k:\mathbb{R}^+\ni t\longmapsto\frac{(\mathbb{E}[\m{L}h(X_{t^-})])^-}{\mathbb{E}[h'(X_{t^-})]}{\bf{1}}_{\mathbb{E}[h(X_t)]=0}. \end{equation} \end{proposition} Let us temporarily admit the following results, which will be useful for the proof. \begin{lemma} \label{lemmacont} The functions $t\longmapsto\mathbb{E}\left[h(X_t)\right]$ and $t\longmapsto\mathbb{E} \left[\m{L}h(X_t)\right]$ are continuous. \end{lemma} \begin{lemma} \label{lemmacont_generale} If $\varphi$ is a continuous function such that, for some $C\geq 0$ and $p\geq 1$, $$ \forall x\in\mathbb{R}, \quad |\varphi(x)|\leq C(1+|x|^p), $$ then the function $t\longmapsto\mathbb{E} [\varphi(X_t)]$ is continuous.
\end{lemma} The proof of Lemma \ref{lemmacont_generale} is given in Appendix A.1. We may now proceed to the proof of Proposition \ref{propdensite}. \begin{proof} Let $t$ be in $[0,T]$. For all positive $r$, we have \begin{equation*} \begin{aligned} X_{t+r} &=X_t+\int_t^{t+r} \bigg(b(X_{s^-})-\int_E F(X_{s^-},z)\lambda(dz)\bigg) ds + \int_t^{t+r} \sigma(X_{s^-}) dB_s + \int_t^{t+r}\int_E F(X_{s^-},z) N(ds,dz) \\&~~~~+ K_{t+r}-K_t. \end{aligned} \end{equation*} Under \ref{reg} and thanks to It\^o's formula we get \begin{equation*} \begin{aligned} h(X_{t+r})-h(X_t)&=\int_t^{t+r}b(X_{s^-})h'(X_{s^-}) ds+\int_t^{t+r} \sigma(X_{s^-})h'(X_{s^-}) dB_s+\int_t^{t+r}\int_E F(X_{s^-},z)h'(X_{s^-}) \tilde{N}(ds,dz)\\&~~~~+\int_t^{t+r} h'(X_{s^-}) dK_s+\frac{1}{2}\int_t^{t+r} \sigma^2(X_{s^-})h''(X_{s^-}) ds\\&~~~~+\int_t^{t+r}\int_E\bigg(h\big(X_{s^-}+F(X_{s^-},z)\big)-h(X_{s^-})-F(X_{s^-},z)h'(X_{s^-})\bigg)N(ds,dz)\\& = \int_t^{t+r}b(X_{s^-})h'(X_{s^-}) ds+\int_t^{t+r} \sigma(X_{s^-})h'(X_{s^-}) dB_s+\int_t^{t+r}\int_E F(X_{s^-},z)h'(X_{s^-}) \tilde{N}(ds,dz)\\&~~~~+\int_t^{t+r} h'(X_{s^-}) dK_s+\frac{1}{2}\int_t^{t+r} \sigma^2(X_{s^-})h''(X_{s^-}) ds\\&~~~~+\int_t^{t+r}\int_E\bigg(h\big(X_{s^-}+F(X_{s^-},z)\big)-h(X_{s^-})-F(X_{s^-},z)h'(X_{s^-})\bigg)\lambda(dz)ds\\&~~~~+\int_t^{t+r}\int_E\bigg(h\big(X_{s^-}+F(X_{s^-},z)\big)-h(X_{s^-})-F(X_{s^-},z)h'(X_{s^-})\bigg)\tilde{N}(ds,dz)\\& = \int_t^{t+r}\m{L}h(X_{s^-}) ds+\int_t^{t+r} h'(X_{s^-}) dK_s+\int_t^{t+r} \sigma(X_{s^-})h'(X_{s^-}) dB_s\\&~~~~+\int_t^{t+r}\int_E\bigg(h\big(X_{s^-}+F(X_{s^-},z)\big)-h(X_{s^-})\bigg)\tilde{N}(ds,dz), \end{aligned} \end{equation*} where $\m{L}$ is given by $\eqref{L}$. Thus, we obtain \begin{equation} \label{ito} \mathbb{E}\bigg(\int_t^{t+r} h'(X_{s^-}) dK_s\bigg)=\mathbb{E} h(X_{t+r})-\mathbb{E} h(X_t)-\int_t^{t+r}\mathbb{E}\m{L}h(X_{s^-}) ds. \end{equation} Suppose first that $\mathbb{E} h(X_t)>0$.
Then, by the continuity of $t\longmapsto\mathbb{E} h(X_t)$, there exists a positive $\m{R}$ such that for all $r\in[0,\m{R}]$, $\mathbb{E} h(X_{t+r})>0$. This implies in particular, from the definition of $K$, that $dK([t,t+r])=0$ for all $r$ in $[0,\m{R}]$. Suppose now that $\mathbb{E} h(X_t)=0$; then two cases arise. Let us first assume that $\mathbb{E} \m{L}h(X_t)>0$. Hence, we can find a positive $\m{R}'$ such that for all $r\in[0,\m{R}']$, $\mathbb{E} \m{L}h(X_{t+r})>0$. We thus deduce from our assumptions and $\eqref{ito}$ that $\mathbb{E} h(X_{t+r})>0$ for all $r\in(0,\m{R}']$. Therefore, $dK([t,t+r])=0$ for all $r\in[0,\m{R}']$. Suppose next that $\mathbb{E} \m{L}h(X_t)\leq0$. By continuity of $t\longmapsto\mathbb{E} \m{L}h(X_t)$, there exists a positive $\m{R}''$ such that for all $r\in[0,\m{R}'']$ it holds $\mathbb{E} \m{L}h(X_{t+r})\leq0$. Since $\mathbb{E} h(X_{t+r})$ must remain non-negative on this set, the process $K$ may have to compensate, and $K_{t+r}$ is then positive for all $r\in[0,\m{R}'']$. Moreover, the compensation must be minimal, i.e. such that $\mathbb{E} h(X_{t+r})=0$. Equation $\eqref{ito}$ becomes: \begin{equation*} \mathbb{E}\bigg(\int_t^{t+r} h'(X_{s^-}) dK_s\bigg)=-\int_t^{t+r}\mathbb{E}\m{L}h(X_{s^-}) ds, \end{equation*} on $[0,\m{R}'']$. Dividing both sides by $r$ and letting $r\longrightarrow 0$ gives (by continuity): \begin{equation*} dK_t=-\frac{\mathbb{E}[\m{L}h(X_{t^-})]}{\mathbb{E}[h'(X_{t^-})]}dt. \end{equation*} Thus, $dK$ is absolutely continuous w.r.t. the Lebesgue measure with density: \begin{equation*} k_t=\frac{(\mathbb{E}[\m{L}h(X_{t^-})])^-}{\mathbb{E}[h'(X_{t^-})]}{\bf{1}}_{\mathbb{E}[h(X_t)]=0}. \end{equation*} \end{proof} \begin{remark} This justifies, at least under the smoothness Assumption $\ref{reg}$ on the constraint function $h$, the non-negativity assumption imposed on $h'$.
\end{remark} \begin{proof}[Proof of Lemma \ref{lemmacont}] Under Assumption \ref{bilip}, and by using Lemma \ref{lemmacont_generale}, we obtain the continuity of the function $t\longmapsto\mathbb{E} h(X_t)$. Under Assumptions \ref{lip}, \ref{bilip} and \ref{reg}, we observe that $x\longmapsto\m{L}h(x)$ is a continuous function such that, for all $x\in\mathbb{R}$, there exist constants $C_1$, $C_2$ and $C_3 > 0$ with $$|b(x)h'(x)|\leq C_1(1+|x|),$$ $$|\sigma^2(x)h''(x)|\leq C_2(1+|x|^2),$$ and \begin{equation*} \begin{aligned} \bigg|\int_E\bigg(h\big(x+F(x,z)\big)-h(x)-F(x,z)h'(x)\bigg)\lambda(dz)\bigg|& \leq C_3\int_E|F(x,z)|\lambda(dz)\\& \leq C_3\bigg(\int_E|F(x,z)-F(0,z)|\lambda(dz)+\int_E|F(0,z)|\lambda(dz)\bigg)\\& \leq C_3\int_E|x|\lambda(dz)+C'_3\\& \leq C_3(1+|x|). \end{aligned} \end{equation*} Finally, using Lemma \ref{lemmacont_generale}, we conclude that $t\longmapsto\mathbb{E} \m{L}h(X_t)$ is continuous. \end{proof} \section{Mean reflected SDE as the limit of an interacting reflected particle system.}\label{sec:PMRSDE} With the notation introduced at the beginning of Section \ref{sec:EU} in mind, and especially equation $\eqref{K_t}$, we can write the unique solution of the SDE \eqref{eq:main} as: \begin{equation} X_t =X_0+\int_0^t b(X_{s^-}) ds + \int_0^t \sigma(X_{s^-}) dB_s + \int_0^t\int_E F(X_{s^-},z) \tilde{N}(ds,dz)+\sup\limits_{s\le t} {G}_0(\mu_{s}), \end{equation} where $\mu_t$ stands for the law of $$U_t =X_0+\int_0^t b(X_{s^-}) ds + \int_0^t \sigma(X_{s^-}) dB_s + \int_0^t\int_E F(X_{s^-},z) \tilde{N}(ds,dz).$$ We are interested here in the particle approximation of such a system.
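Before introducing the particle system formally, its mechanics can be previewed numerically: an Euler step for the unreflected particles followed by a reflection increment that pushes the empirical mean of $h$ back above $0$. The Python sketch below is a toy illustration of ours, not the scheme analysed later: the choices $b=-1$, $\sigma=0.5$, $F(x,z)=0.2z$ with centred Gaussian jump sizes and intensity $1$, $h(x)=x+0.5$ and $X_0=0$ are illustrative assumptions. For this affine $h$, the reflection operator has the closed form used in the loop (a general increasing $h$ would require a bisection).

```python
import numpy as np

def euler_particles(n=200, N=5000, T=1.0, seed=1):
    """Toy particle/Euler sketch of the MR-SDE (our illustrative choices):
    b(x) = -1, sigma(x) = 0.5, F(x, z) = 0.2*z with z ~ N(0,1) and jump
    intensity lambda = 1, h(x) = x + 0.5, X_0 = 0.  Since E[0.2*z] = 0,
    the compensator term vanishes.  The unconstrained mean drifts like -t,
    so the constraint E[h(X_t)] >= 0 activates near t = 0.5."""
    rng = np.random.default_rng(seed)
    dt = T / n
    X = np.zeros(N)
    K = 0.0
    for _ in range(n):
        dB = rng.normal(0.0, np.sqrt(dt), N)
        jumps = rng.poisson(1.0 * dt, N)            # Poisson(1) increments
        dJ = 0.2 * rng.normal(0.0, 1.0, N) * jumps  # centred jump part
        U = X - 1.0 * dt + 0.5 * dB + dJ            # one Euler step of U^i
        # reflection increment: G0 of the empirical law; for h(x) = x + 0.5,
        # G0 reduces to max(0, -(0.5 + mean)) in closed form
        dK = max(0.0, -(0.5 + U.mean()))
        X = U + dK
        K += dK
    return X, K

X, K = euler_particles()
print(round(K, 2), round(X.mean(), 2))
```

In this toy setting one expects $K_T$ close to $T-0.5=0.5$ and the empirical mean of the particles pinned near $-0.5$ once the constraint saturates, up to Monte Carlo error.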
Our candidates are the particles, for $1\leq i\leq N$, \begin{equation} \label{particle_system} X^i_t = \bar{X}^i_0+\int_0^t b(X^i_{s^-}) ds + \int_0^t \sigma(X^i_{s^-}) dB^i_s + \int_0^t\int_E F(X^i_{s^-},z) \tilde{N}^i(ds,dz)+\sup\limits_{s\le t} {G}_0(\mu_{s}^N), \end{equation} where $B^i$ are independent Brownian motions, $\tilde{N}^i$ are independent compensated Poisson measures, $\bar{X}^i_0$ are independent copies of $X_0$ and $\mu_s^N$ denotes the empirical distribution at time $s$ of the particles \begin{equation*} U^i_t =\bar{X}^i_0+\int_0^t b(X^i_{s^-}) ds + \int_0^t \sigma(X^i_{s^-}) dB^i_s + \int_0^t\int_E F(X^i_{s^-},z) \tilde{N}^i(ds,dz),\quad 1\leq i\leq N, \end{equation*} namely $\displaystyle\mu^N_s=\frac{1}{N}\sum^N_{i=1}\delta_{U^i_s}$. It is worth noticing that \begin{equation*} G_0(\mu_s^N)=\inf\Bigg\{x\geq0:\frac{1}{N}\sum_{i=1}^{N}h(x+U^i_s) \geq 0\Bigg\}. \end{equation*} In order to prove that there is indeed a propagation of chaos effect, we introduce the following independent copies of $X$ \begin{equation*} \bar X^i_t =\bar X^i_0+\int_0^t b(\bar X^i_{s^-}) ds + \int_0^t \sigma(\bar X^i_{s^-}) dB^i_s + \int_0^t\int_E F(\bar X^i_{s^-},z) \tilde{N}^i(ds,dz)+\sup\limits_{s\le t} {G}_0(\mu_{s}),\quad 1\leq i\leq N, \end{equation*} and we couple these particles with the previous ones by choosing the same Brownian motions and the same Poisson random measures.\\ In order to do so, we introduce the decoupled particles $\bar U^i$, $1\leq i\leq N$: \begin{equation*} \bar U^i_t =\bar X^i_0+\int_0^t b(\bar X^i_{s^-}) ds + \int_0^t \sigma(\bar X^i_{s^-}) dB^i_s + \int_0^t\int_E F(\bar X^i_{s^-},z) \tilde{N}^i(ds,dz). \end{equation*} Note that, in particular, the particles $\bar U^i$ are i.i.d., and let us denote by $\bar\mu^N$ the empirical measure associated with this system of particles.\\ \begin{remark} \begin{itemize} \item[(i)] Under our assumptions, we have $\mathbb{E}\left[h\left(\bar{X}^i_0\right)\right]=\mathbb{E}\left[h(X_0)\right]\geq 0$.
However, there is no reason to have \begin{equation*} \frac{1}{N}\, \sum_{i=1}^N h\left(\bar{X}^i_0\right) \geq 0, \end{equation*} even if $N$ is large. As a consequence, \begin{equation*} {G}_0(\mu_{0}^N)=\inf\left\{x\geq 0:\frac{1}{N}\sum_{i=1}^{N}h\left(x+\bar{X}^i_0\right) \geq 0\right\} \end{equation*} is not necessarily equal to 0. As a byproduct, we have $X^i_0 = \bar{X}^i_0 + {G}_0(\mu_{0}^N)$ and the non decreasing process $\sup_{s\le t} {G}_0(\mu_{s}^N)$ is not equal to 0 at time $t=0$. Written in this way, the particles defined by \eqref{particle_system} cannot be interpreted as the solution of a reflected SDE. To view the particles as the solution of a reflected SDE, instead of \eqref{particle_system} one has to solve \begin{gather*} X^i_t = \bar{X}^i_0 + {G}_0(\mu_{0}^N) +\int_0^t b(X^i_{s^-}) ds + \int_0^t \sigma(X^i_{s^-}) dB^i_s + \int_0^t\int_E F(X^i_{s^-},z) \tilde{N}^i(ds,dz)+ K^N_t, \\ \frac{1}{N}\sum_{i=1}^{N}h\left(X^i_t\right) \geq 0, \qquad \int_0^t \frac{1}{N}\sum_{i=1}^{N}h\left(X^i_s\right)\, dK^N_s = 0, \end{gather*} with $K^N$ non decreasing and $K^N_0=0$. Since we do not use this point in the sequel, we will work with the form \eqref{particle_system}. \item[(ii)] Following the same lines as the proof of Theorem $\ref{thrm_exacte}$, it is easy to prove existence and uniqueness of a solution to the approximating particle system $\eqref{particle_system}$. \end{itemize} \end{remark} We have the following result concerning the approximation of \eqref{eq:main} by the interacting particle system. \begin{theorem} \label{cv_part} Let Assumptions \ref{lip} and \ref{bilip} hold and $T>0$. \begin{enumerate} \item [(i)] Under Assumption \ref{int}, there exists a constant $C$ depending on $b$, $\sigma$ and $F$ such that, for each $j\in\{1,\ldots,N\}$, \begin{equation*} \mathbb{E}\bigg[\sup_{s\leq T}|X^j_s-\bar X^j_s|^2\bigg]\leq C\exp\bigg(C\bigg(1+\frac{M^2}{m^2}\bigg)(1+T^2)\bigg)\frac{M^2}{m^2}N^{-1/2}.
\end{equation*} \item [(ii)] If Assumption \ref{reg} is in force, then there exists a constant $C$ depending on $b$, $\sigma$ and $F$ such that, for each $j\in\{1,\ldots,N\}$, \begin{equation*} \mathbb{E}\bigg[\sup_{s\leq T}|X^j_s-\bar X^j_s|^2\bigg]\leq C\exp\bigg(C\bigg(1+\frac{M^2}{m^2}\bigg)(1+T^2)\bigg)\frac{1+T^2}{m^2}\bigg(1+\mathbb{E}\bigg[\sup_{s\leq T}|X_T|^2\bigg]\bigg)N^{-1}. \end{equation*} \end{enumerate} \end{theorem} \begin{proof} Let $t>0$. We have, for $r\leq t$, \begin{equation*} \begin{aligned} \big|X^j_r-\bar{X}^j_r\big|& \leq \bigg|\int_0^r b(X^j_{s^-})- b(\bar{X}^j_{{s}^-})ds\bigg| + \bigg|\int_0^r \bigg(\sigma(X^j_{s^-})-\sigma(\bar{X}^j_{{s}^-})\bigg) dB^j_s\bigg| \\&~~ ~~+ \bigg|\int_0^r\int_E\bigg(F(X^j_{s^-},z) -F(\bar{X}^j_{{s}^-},z)\bigg) \tilde{N}^j(ds,dz)\bigg| + \big|\sup_{s\le r}{G}_0(\mu_s^N)-\sup_{s\le r}{G}_0({\mu}_{s})\big|. \end{aligned} \end{equation*} Taking into account the fact that \begin{equation*} \begin{aligned} \big|\sup_{s\le r}{G}_0(\mu_s^N)-\sup_{s\le r}{G}_0({\mu}_{s})\big|& \leq \sup_{s\le r}\big|{G}_0(\mu_s^N)-{G}_0({\mu}_{s})\big| \leq \sup_{s\le t}\big|{G}_0(\mu_s^N)-{G}_0({\mu}_{s})\big|\\& \leq \sup_{s\le t}\big|{G}_0(\mu_s^N)-{G}_0({\bar\mu}^N_{s})\big|+\sup_{s\le t}\big|{G}_0(\bar\mu_s^N)-{G}_0({\mu}_{s})\big|, \end{aligned} \end{equation*} we get the inequality \begin{equation} \label{inq1_proof_part} \begin{aligned} \sup_{r\le t}\big|X^j_r-\bar{X}^j_r\big|& \leq I_1+\sup_{s\le t}\big|{G}_0(\mu_s^N)-{G}_0({\bar\mu}^N_{s})\big|+\sup_{s\le t}\big|{G}_0(\bar\mu_s^N)-{G}_0({\mu}_{s})\big|, \end{aligned} \end{equation} where we have set \begin{equation*} \begin{aligned} I_1&=\int_0^t \big|b(X^j_{s^-})- b(\bar{X}^j_{{s}^-})\big|ds + \sup_{r\le t}\bigg|\int_0^r \bigg(\sigma(X^j_{s^-})-\sigma(\bar{X}^j_{{s}^-})\bigg) dB^j_s\bigg|\\&~~~~+ \sup_{r\le t}\bigg|\int_0^r\int_E\bigg(F(X^j_{s^-},z) -F(\bar{X}^j_{{s}^-},z)\bigg) \tilde{N}^j(ds,dz)\bigg|.
\end{aligned} \end{equation*} On the one hand we have, using Assumption \ref{lip}, Doob and Cauchy-Schwarz inequalities, \begin{equation*} \begin{aligned} \mathbb{E}\big[\big|I_1\big|^2\big]& \leq C\Bigg\{\mathbb{E} \bigg[t\int_0^t\bigg|b(X^j_{s^-})- b(\bar{X}^j_{{s}^-}) \bigg|^2ds\bigg] + \mathbb{E}\bigg[\int_0^t \bigg|\sigma(X^j_{s^-})-\sigma(\bar{X}^j_{{s}^-})\bigg|^2 ds\bigg]\\&~~ ~~+\mathbb{E} \bigg[\int_0^t\int_E\bigg|F(X^j_{s^-},z) -F(\bar{X}^j_{{s}^-},z)\bigg|^2 \lambda(dz)ds\bigg]\Bigg\} \\& \leq C\Bigg\{tC_1\int_0^t \mathbb{E}\bigg[\big|X^j_s-\bar{X}^j_{{s}}\big|^2\bigg] ds+ C_1\int_0^t \mathbb{E}\bigg[\big|X^j_s-\bar{X}^j_{{s}}\big|^2\bigg] ds\\&~~ ~~+C_1\int_0^t \mathbb{E}\bigg[\big|X^j_s-\bar{X}^j_{{s}}\big|^2\bigg] ds\Bigg\}\\& \leq C(1+t)\int_0^t \mathbb{E}\bigg[\big|X^j_s-\bar{X}^j_{{s}}\big|^2\bigg] ds, \end{aligned} \end{equation*} where $C$ depends only on $b$, $\sigma$ and $F$ and may change from line to line. On the other hand, by using Lemma \ref{wiss}, \begin{equation*} \sup_{s\le t}\big|{G}_0(\mu_s^N)-{G}_0({\bar\mu}^N_{s})\big| \leq \frac{M}{m}\sup_{s\le t}\frac{1}{N}\sum_{i=1}^{N}\big|U^i_s-\bar{U}^i_{{s}}\big| \leq \frac{M}{m}\frac{1}{N}\sum_{i=1}^{N}\sup_{s\le t}\big|U^i_s-\bar{U}^i_{{s}}\big|, \end{equation*} and the Cauchy-Schwarz inequality gives, since the variables are exchangeable, \begin{equation*} \mathbb{E}\bigg[\sup_{s\le t}\big|{G}_0(\mu_s^N)-{G}_0({\bar\mu}^N_{s})\big|^2\bigg] \leq \frac{M^2}{m^2}\frac{1}{N}\sum_{i=1}^{N}\mathbb{E}\bigg[\sup_{s\le t}\big|U^i_s-\bar{U}^i_{{s}}\big|^2\bigg] = \frac{M^2}{m^2}\mathbb{E}\bigg[\sup_{s\le t}\big|U^j_s-\bar{U}^j_{{s}}\big|^2\bigg].
\end{equation*} Since \begin{equation*} U^j_s-\bar{U}^j_{{s}}=\int_0^s (b(X^j_{r^-})-b(\bar X^j_{r^-})) dr + \int_0^s (\sigma(X^j_{r^-})-\sigma(\bar X^j_{r^-})) dB^j_r + \int_0^s\int_E (F(X^j_{r^-},z)-F(\bar X^j_{r^-},z)) \tilde{N}^j(dr,dz), \end{equation*} the same computations as before lead to \begin{equation*} \mathbb{E}\bigg[\sup_{s\le t}\big|{G}_0(\mu_s^N)-{G}_0({\bar\mu}^N_{s})\big|^2\bigg] \leq C \frac{M^2}{m^2}(1+t)\int_0^t \mathbb{E}\bigg[\big|X^j_s-\bar{X}^j_{{s}}\big|^2\bigg] ds. \end{equation*} Hence, with the previous estimates we get, coming back to $\eqref{inq1_proof_part}$, \begin{equation*} \begin{aligned} \mathbb{E}\bigg[\sup_{r\le t}\big|X^j_r-\bar{X}^j_r\big|^2\bigg]& \leq K\int_0^t \mathbb{E}\bigg[\big|X^j_s-\bar{X}^j_{{s}}\big|^2\bigg] ds+4\mathbb{E}\bigg[\sup_{s\le t}\big|{G}_0(\bar\mu_s^N)-{G}_0({\mu}_{s})\big|^2\bigg]\\& \leq K\int_0^t \mathbb{E}\bigg[\sup_{r\le s}\big|X^j_r-\bar{X}^j_{{r}}\big|^2\bigg] ds+4\mathbb{E}\bigg[\sup_{s\le t}\big|{G}_0(\bar\mu_s^N)-{G}_0({\mu}_{s})\big|^2\bigg], \end{aligned} \end{equation*} where $K=C(1+t)(1+M^2/m^2)$. Thanks to Gronwall's lemma, \begin{equation*} \begin{aligned} \mathbb{E}\bigg[\sup_{r\le t}\big|X^j_r-\bar{X}^j_r\big|^2\bigg]& \leq Ce^{Kt}\mathbb{E}\bigg[\sup_{s\le t}\big|{G}_0(\bar\mu_s^N)-{G}_0({\mu}_{s})\big|^2\bigg]. \end{aligned} \end{equation*} By Lemma \ref{wiss} we know that \begin{equation*} \begin{aligned} \mathbb{E}\bigg[\sup_{s\le t}\big|{G}_0(\bar\mu_s^N)-{G}_0({\mu}_{s})\big|^2\bigg] \leq {1\over{m^2}}\mathbb{E}\bigg[\sup_{s\le t}\left|\int h(\bar{G}_0(\mu_s)+\cdot) (d\bar\mu_s^N-d\mu_s)\right|^2\bigg], \end{aligned} \end{equation*} from which we deduce that \begin{equation} \label{inq2_proof_part} \begin{aligned} \mathbb{E}\bigg[\sup_{r\le t}\big|X^j_r-\bar{X}^j_r\big|^2\bigg]& \leq Ce^{Kt}{1\over{m^2}}\mathbb{E}\bigg[\sup_{s\le t}\left|\int h(\bar{G}_0(\mu_s)+\cdot) (d\bar\mu_s^N-d\mu_s)\right|^2\bigg].
\end{aligned} \end{equation} \begin{proof}[Proof of (i)]\renewcommand{\qedsymbol}{} Since the function $h$ is at least Lipschitz, the rate of convergence follows from the convergence of the empirical measure of i.i.d. diffusion processes. The crucial point here is that we consider uniform (in time) convergence, which could degrade the usual rate of convergence. We will see, however, that the optimal rate is preserved here. Indeed, in full generality (i.e. if we only suppose that \ref{bilip} holds) we get that: \begin{equation*} \begin{aligned} {1\over{m^2}}\mathbb{E}\bigg[\sup_{s\le t}\left|\int h(\bar{G}_0(\mu_s)+\cdot) (d\bar\mu_s^N-d\mu_s)\right|^2\bigg] \leq \frac{M^2}{m^2}\mathbb{E}\bigg[\sup_{s\le t}W_1^2(\bar{\mu}_s^N,\mu_s)\bigg]. \end{aligned} \end{equation*} Thanks to the additional Assumption \ref{int}, together with Remark \ref{rem_U_1} and Proposition \ref{propriete_2}, we adapt and simplify the proof of Theorem 10.2.7 of \cite{RR98}, using the recent results of \cite{FG15} on the control of the Wasserstein distance between the empirical measure of an i.i.d. sample and the true law, to obtain \begin{equation*} \mathbb{E}\bigg[\sup_{s\le 1}W_1^2(\bar{\mu}_s^N,\mu_s)\bigg] \leq CN^{-1/2}. \end{equation*} We refer the reader to (\cite{BCGL17}, Theorem 3.2, proof of (i)) for this result. \end{proof} \begin{proof}[Proof of (ii)]\renewcommand{\qedsymbol}{} In the case where $h$ is a twice continuously differentiable function with bounded derivatives (i.e. under \ref{reg}), we can take advantage of the fact that $\bar\mu^N$ is the empirical measure associated with i.i.d. copies of a diffusion process; in particular, we can get rid of the supremum in time. In view of $\eqref{inq2_proof_part}$, we need a sharp estimate of \begin{equation} \label{more order} \mathbb{E}\bigg[\sup_{s\le t}\left|\int h(\bar{G}_0(\mu_s)+\cdot) (d\bar\mu_s^N-d\mu_s)\right|^2\bigg].
\end{equation} Let us first observe that $s\longmapsto\bar{G}_0(\mu_{s})$ is locally Lipschitz continuous. Indeed, since by definition $H(\bar{G}_0(\mu_{t}),\mu_t)=0$, if $s<t$, using $\eqref{Hbilip}$, \begin{equation*} \begin{aligned} |\bar{G}_0(\mu_{s})-\bar{G}_0(\mu_{t})|&\leq \frac{1}{m}|H(\bar{G}_0(\mu_{s}),\mu_t)-H(\bar{G}_0(\mu_{t}),\mu_t)|\\&=\frac{1}{m}|H(\bar{G}_0(\mu_{s}),\mu_t)|\\&=\frac{1}{m}|\mathbb{E}[h(\bar{G}_0(\mu_{s})+U_t)]|\\& =\frac{1}{m}\bigg|\mathbb{E}\bigg[h\bigg(\bar{G}_0(\mu_{s})+U_s+\int_{s}^{t}b(X_{r^-})dr+\int_{s}^{t}\sigma(X_{r^-})dB_r+\int_{s}^{t}\int_E F(X_{r^-},z)\tilde{N}(dr,dz)\bigg)\bigg]\bigg|. \end{aligned} \end{equation*} Setting \begin{equation*} \bar{\m{L}}_yf(x):=b(y)\frac{\partial}{\partial x}f(x)+\frac{1}{2}\sigma\sigma^*(y)\frac{\partial^2}{\partial x^2}f(x)+\int_E\bigg(f\big(x+F(y,z)\big)-f(x)-F(y,z)f'(x)\bigg)\lambda(dz), \end{equation*} we get from It\^o's formula \begin{equation*} \begin{aligned} h(\bar{G}_0(\mu_{s})+U_t)&=h(\bar{G}_0(\mu_{s})+U_s)+\int_s^tb\big(X_{r^-}\big)h'\big(\bar{G}_0(\mu_{s})+U_{r^-}\big)dr+\int_s^t\sigma\big(X_{r^-}\big)h'\big(\bar{G}_0(\mu_{s})+U_{r^-}\big)dB_r\\&~~~~+\int_s^t\int_E F\big(X_{r^-},z\big)h'\big(\bar{G}_0(\mu_{s})+U_{r^-}\big)\tilde{N}(dr,dz)+\frac{1}{2}\int_s^t\sigma^2\big(X_{r^-}\big)h''\big(\bar{G}_0(\mu_{s})+U_{r^-}\big)dr\\&~~~~+\int_s^t\int_E m(r,z)\lambda(dz)dr+\int_s^t\int_E m(r,z) \tilde{N}(dr,dz), \end{aligned} \end{equation*} with $$m(r,z)=\bigg(h\big(\bar{G}_0(\mu_{s})+U_{r^-}+F(X_{r^-},z)\big)-h\big(\bar{G}_0(\mu_{s})+U_{r^-}\big)-F\big(X_{r^-},z\big)h'\big(\bar{G}_0(\mu_{s})+U_{r^-}\big)\bigg).$$ Thus, we obtain \begin{equation*} \begin{aligned} h(\bar{G}_0(\mu_{s})+U_t)& =h(\bar{G}_0(\mu_{s})+U_s)+\int_s^t\bar{\m{L}}_{X_{r^-}}h(\bar{G}_0(\mu_{s})+U_{r^-})dr+\int_s^t\sigma(X_{r^-})h'(\bar{G}_0(\mu_{s})+U_{r^-})dB_r\\&~~~~+\int_s^t\int_E\bigg(h\big(\bar{G}_0(\mu_{s})+U_{r^-}+F(X_{r^-},z)\big)-h\big(\bar{G}_0(\mu_{s})+U_{r^-}\big)\bigg)\tilde{N}(dr,dz),
\end{aligned} \end{equation*} and so \begin{equation*} \begin{aligned} \mathbb{E}[h(\bar{G}_0(\mu_{s})+U_t)]&=\mathbb{E}[h(\bar{G}_0(\mu_{s})+U_s)]+\int_s^t\mathbb{E}[\bar{\m{L}}_{X_{r^-}}h(\bar{G}_0(\mu_{s})+U_{r^-})]dr\\& =H(\bar{G}_0(\mu_{s}),\mu_s)+\int_s^t\mathbb{E}[\bar{\m{L}}_{X_{r^-}}h(\bar{G}_0(\mu_{s})+U_{r^-})]dr\\& =\int_s^t\mathbb{E}[\bar{\m{L}}_{X_{r^-}}h(\bar{G}_0(\mu_{s})+U_{r^-})]dr. \end{aligned} \end{equation*} Since $h$ has bounded derivatives and $\sup_{s\leq T}|X_s|$ is a square integrable random variable for each $T > 0$ (see Proposition \ref{propriete_1}), the result follows easily. Now, we deal with \eqref{more order}.\\ Let us denote by $\psi$ the Radon-Nikodym derivative of $t\longmapsto\bar{G}_0(\mu_t)$ with respect to the Lebesgue measure, which exists since this map is locally Lipschitz continuous. By definition, we have, denoting by $V^i$ the semi-martingale $s\longmapsto \bar{G}_0(\mu_s)+\bar{U}_s^i$, since $\bar{U}^i$ are independent copies of $U$, \begin{equation*} \begin{aligned} R_N(s):=\int h(\bar{G}_0(\mu_s)+\cdot) (d\bar\mu_s^N-d\mu_s)& =\frac{1}{N}\sum_{i=1}^N h\big(\bar{G}_0(\mu_s)+\bar{U}_s^i\big)-\mathbb{E}\big[h\big(\bar{G}_0(\mu_s)+{U}_s\big)\big]\\& =\frac{1}{N}\sum_{i=1}^N \big\{h\big(\bar{G}_0(\mu_s)+\bar{U}_s^i\big)-\mathbb{E}\big[h\big(\bar{G}_0(\mu_s)+\bar{U}_s^i\big)\big]\big\}\\& =\frac{1}{N}\sum_{i=1}^N \big\{h\big(V^i_s\big)-\mathbb{E}\big[h\big(V_s^i\big)\big]\big\}. \end{aligned} \end{equation*} It follows from It\^o's formula that \begin{equation*} \begin{aligned}
h\big(V^i_s\big)&=h\big(V^i_0\big)+\int_{0}^{s}h'\big(V^i_{r^-}\big)d\bar{G}_0\big(\mu_r\big)+\int_{0}^{s}b\big(\bar{X}^i_{r^-}\big)h'\big(V^i_{r^-}\big)dr+\int_{0}^{s}\sigma\big(\bar{X}^i_{r^-}\big)h'\big(V^i_{r^-}\big)dB^i_r\\&~~~~+\int_{0}^{s}\int_EF\big(\bar{X}^i_{r^-},z\big)h'\big(V^i_{r^-}\big)\tilde{N}^i(dr,dz)+\frac{1}{2}\int_{0}^{s}\sigma^2\big(\bar{X}^i_{r^-}\big)h''\big(V^i_{r^-}\big)dr\\&~~~~+\int_0^{s}\int_E\bigg(h\big(V^i_{r^-}+F(\bar{X}^i_{r^-},z)\big)-h(V^i_{r^-})-F(\bar{X}^i_{r^-},z)h'(V^i_{r^-})\bigg)\lambda(dz)dr\\&~~~~+\int_0^s\int_E\bigg(h\big(V^i_{r^-}+F(\bar{X}^i_{r^-},z)\big)-h(V^i_{r^-})-F(\bar{X}^i_{r^-},z)h'(V^i_{r^-})\bigg)\tilde{N}^i(dr,dz)\\& =h\big(V^i_0\big)+\int_{0}^{s}h'\big(V^i_{r^-}\big)\psi_r dr+\int_{0}^{s}b\big(\bar{X}^i_{r^-}\big)h'\big(V^i_{r^-}\big)dr+\frac{1}{2}\int_{0}^{s}\sigma^2\big(\bar{X}^i_{r^-}\big)h''\big(V^i_{r^-}\big)dr\\&~~~~+\int_0^{s}\int_E\bigg(h\big(V^i_{r^-}+F(\bar{X}^i_{r^-},z)\big)-h(V^i_{r^-})-F(\bar{X}^i_{r^-},z)h'(V^i_{r^-})\bigg)\lambda(dz)dr\\&~~~~+\int_{0}^{s}\sigma\big(\bar{X}^i_{r^-}\big)h'\big(V^i_{r^-}\big)dB^i_r+\int_0^s\int_E\bigg(h\big(V^i_{r^-}+F(\bar{X}^i_{r^-},z)\big)-h(V^i_{r^-})\bigg)\tilde{N}^i(dr,dz)\\& =h\big(V^i_0\big)+\int_{0}^{s}h'\big(V^i_{r^-}\big)\psi_r dr+\int_{0}^{s}\bar{\m{L}}_{\bar{X}^i_{r^-}}h\big(V^i_{r^-}\big)dr+\int_{0}^{s}h'\big(V^i_{r^-}\big)\sigma\big(\bar{X}^i_{r^-}\big)dB^i_r\\&~~~~+\int_{0}^{s}\int_E\bigg(h\big(V^i_{r^-}+F(\bar{X}^i_{r^-},z)\big)-h\big(V^i_{r^-}\big)\bigg)\tilde{N}^i(dr,dz)\\& =h\big(V^i_0\big)+\int_{0}^{s}\big\{h'\big(V^i_{r^-}\big)\psi_r +\bar{\m{L}}_{\bar{X}^i_{r^-}}h\big(V^i_{r^-}\big)\big\}dr+\int_{0}^{s}h'\big(V^i_{r^-}\big)\sigma\big(\bar{X}^i_{r^-}\big)dB^i_r\\&~~~~+\int_{0}^{s}\int_E\bigg(h\big(V^i_{r^-}+F(\bar{X}^i_{r^-},z)\big)-h\big(V^i_{r^-}\big)\bigg)\tilde{N}^i(dr,dz), \end{aligned} \end{equation*} Taking expectation gives \begin{equation*} \begin{aligned} \mathbb{E}\bigg[h\big(V^i_s\big)\bigg]& 
=\mathbb{E}\bigg[h\big(V^i_0\big)\bigg]+\int_{0}^{s}\mathbb{E}\big[h'\big(V^i_{r^-}\big)\psi_r +\bar{\m{L}}_{\bar{X}^i_{r^-}}h\big(V^i_{r^-}\big)\big]dr\\& =H(\bar{G}_0(\mu_{0}),\mu_0)+\int_{0}^{s}\mathbb{E}\big[h'\big(V^i_{r^-}\big)\psi_r +\bar{\m{L}}_{\bar{X}^i_{r^-}}h\big(V^i_{r^-}\big)\big]dr\\& =0+\int_{0}^{s}\mathbb{E}\big[h'\big(V^i_{r^-}\big)\psi_r +\bar{\m{L}}_{\bar{X}^i_{r^-}}h\big(V^i_{r^-}\big)\big]dr. \end{aligned} \end{equation*} We deduce immediately that \begin{equation*} \begin{aligned} R_N(s)&=\frac{1}{N}\sum_{i=1}^N h(V^i_0)+\frac{1}{N}\sum_{i=1}^N \int_{0}^{s}C^i(r) dr +M_N(s)+L_N(s)\\& =\frac{1}{N}\sum_{i=1}^N h(V^i_0)+ \int_{0}^{s}\bigg(\frac{1}{N}\sum_{i=1}^N C^i(r)\bigg) dr +M_N(s)+L_N(s), \end{aligned} \end{equation*} where we have set $$C^i(r)=h'\big(V^i_{r^-}\big)\psi_r +\bar{\m{L}}_{\bar{X}^i_{r^-}}h\big(V^i_{r^-}\big)-\mathbb{E}\big[h'\big(V^i_{r^-}\big)\psi_r +\bar{\m{L}}_{\bar{X}^i_{r^-}}h\big(V^i_{r^-}\big)\big],$$ $$M_N(s)=\frac{1}{N}\sum_{i=1}^N\int_{0}^{s}h'\big(V^i_{r^-}\big)\sigma\big(\bar{X}^i_{r^-}\big)dB^i_r,$$ $$L_N(s)=\frac{1}{N}\sum_{i=1}^N\int_{0}^{s}\int_E\Big(h\big(V^i_{r^-}+F(\bar{X}^i_{r^-},z)\big)-h\big(V^i_{r^-}\big)\Big)\tilde{N}^i(dr,dz).$$ As a byproduct, \begin{equation*} \begin{aligned} \sup_{s\le t}|R_N(s)|& \leq\bigg|\frac{1}{N}\sum_{i=1}^N h(V^i_0)\bigg|+ \sup_{s\le t}\int_{0}^{s}\bigg|\frac{1}{N}\sum_{i=1}^N C^i(r)\bigg| dr +\sup_{s\le t}|M_N(s)|+\sup_{s\le t}|L_N(s)|\\& \leq \bigg|\frac{1}{N}\sum_{i=1}^N h(V^i_0)\bigg|+\int_{0}^{t}\bigg|\frac{1}{N}\sum_{i=1}^N C^i(r)\bigg| dr +\sup_{s\le t}|M_N(s)|+\sup_{s\le t}|L_N(s)|. 
\end{aligned} \end{equation*} We get, using the Cauchy-Schwarz inequality, since $\bar{U}^i$ and $\bar{X}^i$ are i.i.d., \begin{equation*} \begin{aligned} \mathbb{E}\bigg[\sup_{s\le t}|R_N(s)|^2\bigg]& \leq 4\Bigg\{\mathbb{V}\bigg[\frac{1}{N}\sum_{i=1}^N h(V^i_0)\bigg]+\mathbb{E}\bigg[\bigg(\int_{0}^{t}\bigg|\frac{1}{N}\sum_{i=1}^N C^i(r)\bigg| dr\bigg)^2\bigg] \\&~~~~+\mathbb{E}\bigg[\sup_{s\le t}|M_N(s)|^2\bigg]+\mathbb{E}\bigg[\sup_{s\le t}|L_N(s)|^2\bigg]\Bigg\}\\& \leq 4\Bigg\{\mathbb{V}\bigg[\frac{1}{N}\sum_{i=1}^N h(V^i_0)\bigg]+t\mathbb{E}\bigg[\int_{0}^{t}\bigg|\frac{1}{N}\sum_{i=1}^N C^i(r)\bigg|^2 dr\bigg] \\&~~~~+\mathbb{E}\bigg[\sup_{s\le t}|M_N(s)|^2\bigg]+\mathbb{E}\bigg[\sup_{s\le t}|L_N(s)|^2\bigg]\Bigg\}\\& = 4\Bigg\{\mathbb{V}\bigg[\frac{1}{N}\sum_{i=1}^N h(V^i_0)\bigg]+t\int_{0}^{t}\mathbb{V}\bigg(\frac{1}{N}\sum_{i=1}^N C^i(r)\bigg) dr \\&~~~~+\mathbb{E}\bigg[\sup_{s\le t}|M_N(s)|^2\bigg]+\mathbb{E}\bigg[\sup_{s\le t}|L_N(s)|^2\bigg]\Bigg\}. \end{aligned} \end{equation*} Thus, we get \begin{equation*} \begin{aligned} \mathbb{E}\bigg[\sup_{s\le t}|R_N(s)|^2\bigg]& \leq \frac{4}{N} \mathbb{V}[h(V_0)]+\frac{4t}{N}\int_{0}^{t}\mathbb{V}(C(r)) dr +4\mathbb{E}\bigg[\sup_{s\le t}|M_N(s)|^2\bigg]+4\mathbb{E}\bigg[\sup_{s\le t}|L_N(s)|^2\bigg]\\& =\frac{4}{N} \mathbb{V}[h(V_0)]+\frac{4t}{N}\int_{0}^{t}\mathbb{V}(h'\big(V_{r^-}\big)\psi_r +\bar{\m{L}}_{{X}_{r^-}}h\big(V_{r^-}\big)) dr \\&~~~~+4\mathbb{E}\bigg[\sup_{s\le t}|M_N(s)|^2\bigg]+4\mathbb{E}\bigg[\sup_{s\le t}|L_N(s)|^2\bigg].
\end{aligned} \end{equation*} Since $M_N$ is a martingale with $$\langle M_N\rangle_t=\frac{1}{N^2}\sum_{i=1}^N\int_{0}^t\big(h'(V^i_{r^-})\sigma(\bar{X}^i_{r^-})\big)^2dr,$$ Doob's inequality leads to \begin{equation*} \begin{aligned} \mathbb{E}\bigg[\sup_{s\le t}|M_N(s)|^2\bigg]&\leq 4\mathbb{E}\big[|M_N(t)|^2\big]\\&=\frac{4}{N^2}\sum_{i=1}^N\int_{0}^t\mathbb{E}\bigg[\big(h'(V^i_{r^-})\sigma(\bar{X}^i_{r^-})\big)^2\bigg]dr\\&=\frac{4}{N}\int_{0}^t\mathbb{E}\bigg[\big(h'(V_{r^-})\sigma({X}_{r^-})\big)^2\bigg]dr. \end{aligned} \end{equation*} Then, using Doob's inequality for the last martingale $L_N$, we obtain \begin{equation*} \begin{aligned} \mathbb{E}\bigg[\sup_{s\le t}|L_N(s)|^2\bigg]&\leq 4\mathbb{E}\big[|L_N(t)|^2\big]\\&=\frac{4}{N^2}\sum_{i=1}^N\mathbb{E}\bigg[\bigg(\int_{0}^t\int_E\Big(h\big(V^i_{r^-}+F(\bar{X}^i_{r^-},z)\big)-h\big(V^i_{r^-}\big)\Big)\tilde{N}^i(dr,dz)\bigg)^2\bigg]\\&~~~~+\frac{8}{N^2}\sum_{1\leq i<j\leq N}\mathbb{E}\bigg[\int_{0}^t\int_E\Big(h\big(V^i_{r^-}+F(\bar{X}^i_{r^-},z)\big)-h\big(V^i_{r^-}\big)\Big)\tilde{N}^i(dr,dz)\\&~~~~\int_{0}^t\int_E\Big(h\big(V^j_{r^-}+F(\bar{X}^j_{r^-},z)\big)-h\big(V^j_{r^-}\big)\Big)\tilde{N}^j(dr,dz)\bigg]\\&=\frac{4}{N^2}\sum_{i=1}^N\int_{0}^t\int_E\mathbb{E}\bigg[\Big(h\big(V^i_{r^-}+F(\bar{X}^i_{r^-},z)\big)-h\big(V^i_{r^-}\big)\Big)^2\bigg]\lambda(dz)dr+0\\&=\frac{4}{N}\int_{0}^t\int_E\mathbb{E}\bigg[\Big(h\big(V_{r^-}+F({X}_{r^-},z)\big)-h\big(V_{r^-}\big)\Big)^2\bigg]\lambda(dz)dr. \end{aligned} \end{equation*} Finally, using the fact that $h$ has bounded derivatives and that $b$, $\sigma$ and $F$ are Lipschitz, we get $$\mathbb{E}\bigg[\sup_{s\le t}|R_N(s)|^2\bigg]\leq C(1+t^2)\bigg(1+\mathbb{E}\bigg[\sup_{s\le t}|X_s|^2\bigg]\bigg)N^{-1}.$$ This gives the result coming back to $\eqref{inq2_proof_part}$. \end{proof} \end{proof} \section{A numerical scheme for MRSDE.}\label{sec:NSMRSDE} We are interested in the numerical approximation of the SDE \eqref{eq:main} on $[0,T]$.
Here are the main steps of the scheme. Let $0=T_0<T_1<\cdots<T_n=T$ be a subdivision of $[0,T]$. Given this subdivision, we denote by ``$\_$'' the mapping $s\mapsto\underline{s}=T_k$ if $s\in[T_k,T_{k+1})$, $k\in \{0,\cdots ,n-1\}$. For simplicity, we consider only the case of regular subdivisions: for a given integer $n$, $T_k=kT/n$, $k=0,\ldots,n$.\\ Let us recall that we proved in the previous section that the particle system, for $1\leq i\leq N$, $$ X^i_t=\bar{X}^i_0+\int_0^t b(X^i_{s^-}) ds + \int_0^t \sigma(X^i_{s^-}) dB^i_s + \int_0^t\int_E F(X^i_{s^-},z) \tilde{N}^i(ds,dz) + \sup_{s\le t} {G}_0(\mu^N_{s}),$$ where we have set \begin{align*} \mu^N_t & =\frac{1}{N}\sum^N_{i=1}\delta_{U^i_t}, \\ U^i_t & =\bar{X}^i_0+\int_0^t b(X^i_{s^-}) ds + \int_0^t \sigma(X^i_{s^-}) dB^i_s + \int_0^t\int_E F(X^i_{s^-},z) \tilde{N}^i(ds,dz),\quad 1 \leq i\leq N, \end{align*} $B^i$ being independent Brownian motions, $\tilde{N}^i$ being independent compensated Poisson random measures and $\bar{X}^i_0$ being independent copies of $X_0$, converges toward the solution to \eqref{eq:main}. Thus, the numerical approximation is obtained by an Euler scheme applied to this particle system. We introduce the following discrete version of the particle system: for $1\leq i\leq N$, $$ \tilde{X}^i_t=\bar{X}^i_0+\int_0^t b(\tilde{X}^i_{\underline{s}^-}) ds + \int_0^t \sigma(\tilde{X}^i_{\underline{s}^-}) dB^i_s + \int_0^t\int_E F(\tilde{X}^i_{\underline{s}^-},z) \tilde{N}^i(ds,dz) + \sup_{s\le t} {G}_0(\tilde{\mu}^N_{\underline{s}}), $$ with the notation \begin{align*} \tilde{\mu}^N_{\underline{t}} & =\frac{1}{N}\sum^N_{i=1}\delta_{\tilde{U}^i_t}, \\ \tilde{U}^i_t & =\bar{X}^i_0+\int_0^t b(\tilde{X}^i_{\underline{s}^-}) ds + \int_0^t \sigma(\tilde{X}^i_{\underline{s}^-}) dB^i_s + \int_0^t\int_E F(\tilde{X}^i_{\underline{s}^-},z) \tilde{N}^i(ds,dz),\quad 1 \leq i\leq N.
\end{align*} \subsection{Scheme.} Using the notation introduced above, the result of Section 3 on the interacting system of mean-reflected particles and Remark $\ref{rem}$, we deduce the following algorithm for the numerical approximation of the MR-SDE: \begin{remark} We emphasize that, at each step $k$ of the algorithm, we approximate the increment of the reflection process $K$ by the increment of the approximation: \begin{equation} \label{increment} \Delta_k \hat{K}^N:=\sup_{l\leq k}G_0(\tilde{\mu}^N_{T_l})-\sup_{l\leq k-1}G_0(\tilde{\mu}^N_{T_l}). \end{equation} \end{remark} First, we consider the special case when the SDE is defined by \begin{equation*} \begin{cases} \begin{split} & X_t =X_0+\int_0^t b(X_{s^-}) ds + \int_0^t \sigma(X_{s^-}) dB_s + \int_0^t F(X_{s^-}) d\tilde{N}_s + K_t,\quad t\geq 0, \\ & \mathbb{E}[h(X_t)] \geq 0, \quad \int_0^t \mathbb{E}[h(X_s)] \, dK_s = 0, \quad t\geq 0, \end{split} \end{cases} \end{equation*} where $N$ is a Poisson process with intensity $\lambda$, and $\tilde{N}_t=N_t-\lambda t.$ \\ As suggested in Remark \ref{rem}, the increment \eqref{increment} can be approximated by:\\ \\ $\widehat{\Delta_k K}^N:=$ \begin{equation*} \begin {aligned} \inf\Bigg\{x\geq 0:\frac{1}{N}\sum_{i=1}^N h\Bigg(&x+\Big(\tilde{X}_{T_{k-1}}^{\tilde{\mu}^N}\Big)^i+\frac{T}{n} \Bigg(b\bigg(\Big(\tilde{X}_{T_{k-1}}^{\tilde{\mu}^N}\Big)^i\bigg)-\lambda F\bigg(\Big(\tilde{X}_{T_{k-1}}^{\tilde{\mu}^N}\Big)^i\bigg)\Bigg)+\frac{\sqrt{T}}{\sqrt{n}}\sigma\bigg(\Big(\tilde{X}_{T_{k-1}}^{\tilde{\mu}^N}\Big)^i\bigg)G^i\\&+F\bigg(\Big(\tilde{X}_{T_{k-1}}^{\tilde{\mu}^N}\Big)^i\bigg)H^i\Bigg)\geq0\Bigg\}, \end {aligned} \end{equation*} where the $G^i\sim \mathcal{N}(0,1)$ and $H^i\sim \mathcal{P}(\lambda(T/n))$, $1\leq i\leq N$, are i.i.d.
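To make the scheme concrete, here is a self-contained Python sketch of one run of the particle Euler scheme just described, in the constant-mark case $F(X_{s^-})\,d\tilde{N}_s$: the unreflected particles $\tilde{U}^i$ are updated with drift, diffusion, compensated Poisson increments, and the reflected particles are recovered through $\tilde{X}^i_{T_k}=\tilde{U}^i_{T_k}+\sup_{l\le k}G_0(\tilde{\mu}^N_{T_l})$, with $G_0$ computed by bisection. The coefficients $b$, $\sigma$, $F$, the constraint $h$ and all numerical values below are illustrative assumptions, not taken from the text.

```python
import numpy as np

def euler_mr_sde(b, sigma, F, h, lam, x0, T, n, N, rng):
    """Particle Euler scheme for the mean-reflected jump SDE
    (constant mark case). Returns the reflected particles at time T
    and the reflection amount K_T = sup_{l<=n} G_0(mu^N_{T_l})."""
    dt = T / n
    U = np.full(N, x0, dtype=float)   # unreflected particles tilde U^i
    X = U.copy()                      # reflected particles tilde X^i
    K = 0.0                           # running sup of G_0 over past grid times

    def g0(sample):                   # G_0 of an empirical law, by bisection
        if np.mean(h(sample)) >= 0.0:
            return 0.0
        lo, hi = 0.0, 1.0
        while np.mean(h(hi + sample)) < 0.0:   # enlarge bracket
            hi *= 2.0
        for _ in range(60):                    # bisection on [lo, hi]
            mid = 0.5 * (lo + hi)
            lo, hi = (lo, mid) if np.mean(h(mid + sample)) >= 0.0 else (mid, hi)
        return hi

    for _ in range(n):
        G = rng.standard_normal(N)             # Gaussian increments
        H = rng.poisson(lam * dt, size=N)      # Poisson increments
        # Euler increment with Poisson compensator -lam*F*dt
        incr = dt * (b(X) - lam * F(X)) + np.sqrt(dt) * sigma(X) * G + F(X) * H
        U = U + incr
        K = max(K, g0(U))                      # sup over the past of G_0
        X = U + K
    return X, K

# Illustrative (assumed) coefficients: mean-reverting drift, constant
# volatility and jump size, and the constraint E[h(X_t)] >= 0 with
# h(x) = x - 0.5, i.e. the mean is kept above 1/2.
rng = np.random.default_rng(1)
X, K = euler_mr_sde(b=lambda x: -x, sigma=lambda x: 0.3 + 0 * x,
                    F=lambda x: 0.1 + 0 * x, h=lambda x: x - 0.5,
                    lam=1.0, x0=0.5, T=1.0, n=100, N=5000, rng=rng)
```

Since the drift pushes the particles below the constraint, the empirical mean of `X` is held at the threshold by a strictly positive reflection amount `K`, which mirrors the behaviour of the increments $\Delta_k\hat{K}^N$ in the algorithm.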
\begin{algorithm} \caption{Particle approximation}\label{alg_part} \begin{algorithmic}[1] \sFor {$1\leq j\leq N$}{ \State$\bigg(\Big(\tilde{X}_{0}^{\tilde{\mu}^N}\Big)^j,\Big(\tilde{U}_{0}^{\tilde{\mu}^N}\Big)^j,\hat{\mu}^N_0\bigg)=(x,x,\delta_x)$} \sFor {$1\leq k\leq n$}{ \sFor {$1\leq j\leq N$}{\State $G^j\sim \mathcal{N}(0,1)$ \State $H^j\sim \mathcal{P}(\lambda(T/n))$ \State $\Big(\tilde{U}_{T_{k}}^{\tilde{\mu}^N}\Big)^j=\Big(\tilde{U}_{T_{k-1}}^{\tilde{\mu}^N}\Big)^j+(T/n)\Bigg(b\bigg(\Big(\tilde{X}_{T_{k-1}}^{\tilde{\mu}^N}\Big)^j\bigg)-\lambda F\bigg(\Big(\tilde{X}_{T_{k-1}}^{\tilde{\mu}^N}\Big)^j\bigg)\Bigg)$ \State ~~~~~~~~~~~~~~~$+\sqrt{(T/n)}\sigma\bigg(\Big(\tilde{X}_{T_{k-1}}^{\tilde{\mu}^N}\Big)^j\bigg)G^j+F\bigg(\Big(\tilde{X}_{T_{k-1}}^{\tilde{\mu}^N}\Big)^j\bigg)H^j$} \State$\tilde{\mu}^N_{T_k}=N^{-1}\sum_{j=1}^{N}\delta_{(\tilde{U}_{T_k}^{\tilde{\mu}^N})^j}$ \State$\Delta_k \hat{K}^N=\sup_{l\leq k}G_0(\tilde{\mu}^N_{T_l})-\sup_{l\leq k-1}G_0(\tilde{\mu}^N_{T_l})$ \sFor {$1\leq j\leq N$}{\State $\Big(\tilde{X}_{T_{k}}^{\tilde{\mu}^N}\Big)^j=\Big(\tilde{X}_{T_{k-1}}^{\tilde{\mu}^N}\Big)^j+(T/n)\Bigg(b\bigg(\Big(\tilde{X}_{T_{k-1}}^{\tilde{\mu}^N}\Big)^j\bigg)-\lambda F\bigg(\Big(\tilde{X}_{T_{k-1}}^{\tilde{\mu}^N}\Big)^j\bigg)\Bigg)$ \State ~~~~~~~~~~~~~~~$+\sqrt{(T/n)}\sigma\bigg(\Big(\tilde{X}_{T_{k-1}}^{\tilde{\mu}^N}\Big)^j\bigg)G^j+F\bigg(\Big(\tilde{X}_{T_{k-1}}^{\tilde{\mu}^N}\Big)^j\bigg)H^j+\Delta_k\hat{K}^N$} } \end{algorithmic} \end{algorithm} Indeed, using the same kind of arguments as in the sketch of the proof of Theorem $\ref{thrm_exacte}$, one can show that the increments of the approximated reflection process are equal to the approximation of the increments: \begin{equation*} \forall k\in\{1,\cdots, n\}:~~ \widehat{\Delta_k K}^N=\Delta_k\hat{K}^N.
\end{equation*} \newpage Returning to the general case \eqref{eq:main}, as explained in \cite{YS12}, $N=\{N(t):=N(E\times[0,t])\}$ is a stochastic process with intensity $\lambda$ that counts the number of jumps up to a given time. The Poisson random measure $N(dz,dt)$ generates a sequence of pairs $\{(\iota_i,\xi_i),i\in \{1,2,\cdots,N(T)\}\}$ for a given finite positive constant $T$ if $\lambda<\infty$. Here $\{\iota_i,i\in \{1,2,\cdots,N(T)\}\}$ is a sequence of increasing nonnegative random variables representing the jump times of a standard Poisson process with intensity $\lambda$, and $\{\xi_i,i\in \{1,2,\cdots,N(T)\}\}$ is a sequence of independent identically distributed random variables with common density $f$, where $\lambda(dz)dt=\lambda f(z)dzdt$. Then the numerical approximation can equivalently be written in the following form \begin{equation*} \begin{aligned} \bar{X}^j_{T_k}&=\bar{X}^j_{T_{k-1}}+\frac{T}{n}\bigg(b(\bar{X}^j_{T_{k-1}})-\int_E\lambda F(\bar{X}^j_{T_{k-1}},z)f(z)dz\bigg)+\sqrt{\frac{T}{n}}\sigma(\bar{X}^j_{T_{k-1}})G^j\\&~~~~+\sum_{i=H^j_{T_{k-1}}+1}^{H^j_{T_{k}}}F(\bar{X}^j_{T_{k-1}},\xi_i)+\Delta_k \hat{K}^N, \end{aligned} \end{equation*} $\Delta_k\hat{K}^N=\widehat{\Delta_k K}^N=$ \begin{equation*} \begin {aligned} \inf\Bigg\{x\geq 0:\frac{1}{N}\sum_{j=1}^N h\Bigg(&x+\bar{X}^j_{T_{k-1}}+\frac{T}{n}\bigg(b(\bar{X}^j_{T_{k-1}})-\int_E\lambda F(\bar{X}^j_{T_{k-1}},z)f(z)dz\bigg)+\sqrt{\frac{T}{n}} \sigma(\bar{X}^j_{T_{k-1}})G^j\\&~~~~+\sum_{i=H^j_{T_{k-1}}+1}^{H^j_{T_{k}}}F(\bar{X}^j_{T_{k-1}},\xi_i)\Bigg)\geq0\Bigg\}, \end {aligned} \end{equation*} where the $G^j\sim \mathcal{N}(0,1)$ are i.i.d. and the $H^j$ are independent Poisson processes with intensity $\lambda$, so that the increments $H^j_{T_k}-H^j_{T_{k-1}}\sim \mathcal{P}(\lambda T/n)$ are i.i.d. \subsection{Scheme error.} \begin{proposition} \label{cv_num} \begin{enumerate} \item[(i)] Let $T>0$, $N$ and $n$ be two positive integers and let Assumptions $\ref{lip}$, $\ref{bilip}$ and $\ref{int}$ hold.
There exists a constant $C$, depending on $T$, $b$, $\sigma$, $F$, $h$ and $X_0$ but independent of $N$ and $n$, such that: for all $i=1,\ldots,N$ $$\mathbb{E}\bigg[\sup_{s\le T}\big|X^i_s-\tilde{X}^i_s\big|^2\bigg]\leq C\bigg(n^{-1}+N^{-1/2}\bigg).$$ \item[(ii)] Moreover, if Assumption \ref{reg} is in force, there exists a constant $C$, depending on $T$, $b$, $\sigma$, $F$, $h$ and $X_0$ but independent of $N$ and $n$, such that: for all $i=1,\ldots,N$ $$\mathbb{E}\bigg[\sup_{s\le T}\big|X^i_s-\tilde{X}^i_s\big|^2\bigg]\leq C\bigg(n^{-1}+N^{-1}\bigg).$$ \end{enumerate} \end{proposition} \begin{proof} Let us fix $i\in \{1,\ldots,N\}$ and $T>0$. We have, for $s\leq t\leq T$, \begin{equation*} \begin{aligned} \bigg|X^i_s-\tilde{X}^i_s\bigg|& \leq \bigg|\int_0^s b(X^i_{r^-})- b(\tilde{X}^i_{\underline{r}^-})dr \bigg| + \bigg|\int_0^s \bigg(\sigma(X^i_{r^-})-\sigma(\tilde{X}^i_{\underline{r}^-})\bigg) dB^i_r\bigg| \\&~~ ~~+ \bigg|\int_0^s\int_E\bigg(F(X^i_{r^-},z) -F(\tilde{X}^i_{\underline{r}^-},z)\bigg) \tilde{N}^i(dr,dz)\bigg| + \sup_{r\le s} \big|{G}_0(\mu_r^N)-{G}_0(\tilde{\mu}^N_{\underline{r}})\big|.
\end{aligned} \end{equation*} Hence, using Assumption \ref{lip}, Cauchy-Schwarz, Doob and BDG inequalities, we get: \begin{equation} \label{X1} \begin{aligned} \mathbb{E}\bigg[\sup_{s\le t}\big|X^i_s-\tilde{X}^i_s\big|^2\bigg] & \leq 4\mathbb{E}\bigg[\sup_{s\le t}\Bigg\{\bigg|\int_0^s\bigg(b(X^i_{r^-})- b(\tilde{X}^i_{\underline{r}^-})\bigg) dr\bigg|^2 + \bigg|\int_0^s \bigg(\sigma(X^i_{r^-})-\sigma(\tilde{X}^i_{\underline{r}^-})\bigg) dB^i_r\bigg|^2 \\&~~ ~~+ \bigg|\int_0^s\int_E\bigg(F(X^i_{r^-},z) -F(\tilde{X}^i_{\underline{r}^-},z)\bigg) \tilde{N}^i(dr,dz)\bigg|^2 + \sup_{r\le s} \big|{G}_0(\mu_r^N)-{G}_0(\tilde{\mu}^N_{\underline{r}})\big|^2\Bigg\}\bigg] \\& \leq C\Bigg\{\mathbb{E} \bigg[t\int_0^t\bigg|b(X^i_{s^-})- b(\tilde{X}^i_{\underline{s}^-}) \bigg|^2ds\bigg] + \mathbb{E}\bigg[\int_0^t \bigg|\sigma(X^i_{s^-})-\sigma(\tilde{X}^i_{\underline{s}^-})\bigg|^2 ds\bigg]\\&~~ ~~+\mathbb{E} \bigg[\int_0^t\int_E\bigg|F(X^i_{s^-},z) -F(\tilde{X}^i_{\underline{s}^-},z)\bigg|^2 \lambda(dz)ds\bigg] + \mathbb{E}\bigg[\sup_{s\le t} \big|{G}_0(\mu_s^N)-{G}_0(\tilde{\mu}^N_{\underline{s}})\big|^2\bigg]\Bigg\} \\& \leq C\Bigg\{TC_1\int_0^t \mathbb{E}\bigg[\big|X^i_s-\tilde{X}^i_{\underline{s}}\big|^2\bigg] ds+ C_1\int_0^t \mathbb{E}\bigg[\big|X^i_s-\tilde{X}^i_{\underline{s}}\big|^2\bigg] ds\\&~~ ~~+C_1\int_0^t \mathbb{E}\bigg[\big|X^i_s-\tilde{X}^i_{\underline{s}}\big|^2\bigg] ds+ \mathbb{E}\bigg[\sup_{s\le t} \big|{G}_0(\mu_s^N)-{G}_0(\tilde{\mu}^N_{\underline{s}})\big|^2\bigg]\Bigg\}\\& \leq C\int_0^t \mathbb{E}\bigg[\big|X^i_s-\tilde{X}^i_{\underline{s}}\big|^2\bigg] ds +4\mathbb{E}\bigg[\sup_{s\le t} \big|{G}_0(\mu_s^N)-{G}_0(\tilde{\mu}^N_{\underline{s}})\big|^2\bigg].
\end{aligned} \end{equation} Denoting by $(\mu^i_{t})_{0\le t\le T}$ the family of marginal laws of $(U^i_{t})_{0\le t\le T}$ and by $(\tilde{\mu}^i_{\underline{t}})_{0\le t\le T}$ the family of marginal laws of $(\tilde{U}^i_{\underline{t}})_{0\le t\le T}$, we have \begin{equation*} \begin{aligned} \mathbb{E}\bigg[\sup_{s\le t} \big|{G}_0(\mu_s^N)-{G}_0(\tilde{\mu}^N_{\underline{s}})\big|^2\bigg]& \leq3\Bigg\{\mathbb{E}\bigg[\sup_{s\le t} \big|{G}_0(\mu_s^N)-{G}_0(\mu_s^i)\big|^2\bigg] +\sup_{s\le t} \big|{G}_0(\mu_s^i)-{G}_0(\tilde{\mu}^i_{\underline{s}})\big|^2\\&~~ ~~+\mathbb{E}\bigg[\sup_{s\le t} \big|{G}_0(\tilde{\mu}^i_{\underline{s}})-{G}_0(\tilde{\mu}^N_{\underline{s}})\big|^2\bigg]\Bigg\}, \end{aligned} \end{equation*} and from Lemma \ref{wiss}, \begin{equation*} \begin{aligned} \mathbb{E}\bigg[\sup_{s\le t} \big|{G}_0(\mu_s^N)-{G}_0(\tilde{\mu}^N_{\underline{s}})\big|^2\bigg]& \leq3\Bigg\{{1\over{m^2}}\mathbb{E}\bigg[\sup_{s\le t}\left|\int h(\bar{G}_0(\mu^i_s)+\cdot) (d\mu_s^N-d\mu_s^i)\right|^2\bigg] +\bigg(\frac{M}{m}\bigg)^2\sup_{s\le t} W_1^2(\mu_s^i,\tilde{\mu}^i_{\underline{s}})\\&~~ ~~+{1\over{m^2}}\mathbb{E}\bigg[\sup_{s\le t}\left|\int h(\bar{G}_0(\tilde{\mu}^i_{\underline{s}})+\cdot) (d\tilde{\mu}^N_{\underline{s}}-d\tilde{\mu}^i_{\underline{s}})\right|^2\bigg]\Bigg\}\\& \leq C\Bigg\{\mathbb{E}\bigg[\sup_{s\le t}\left|\int h(\bar{G}_0(\mu^i_s)+\cdot) (d\mu_s^N-d\mu_s^i)\right|^2\bigg] +\sup_{s\le t} \mathbb{E}\bigg[\bigg|U_s^i-\tilde{U}^i_{\underline{s}}\bigg|^2\bigg]\\&~~ ~~+\mathbb{E}\bigg[\sup_{s\le t}\left|\int h(\bar{G}_0(\tilde{\mu}^i_{\underline{s}})+\cdot) (d\tilde{\mu}^N_{\underline{s}}-d\tilde{\mu}^i_{\underline{s}})\right|^2\bigg]\Bigg\}.
\end{aligned} \end{equation*} \begin{proof}[Proof of (i)]\renewcommand{\qedsymbol}{} Following the proof of (i) in Theorem \ref{cv_part}, we obtain $$\mathbb{E}\bigg[\sup_{s\le t}\left|\int h(\bar{G}_0(\mu^i_s)+\cdot) (d\mu_s^N-d\mu_s^i)\right|^2\bigg]\leq C\mathbb{E}\bigg[\sup_{s\le t} W_1^2(\mu_s^N,\mu_s^i)\bigg]\leq CN^{-1/2},$$ $$\mathbb{E}\bigg[\sup_{s\le t}\left|\int h(\bar{G}_0(\tilde{\mu}^i_{\underline{s}})+\cdot) (d\tilde{\mu}^N_{\underline{s}}-d\tilde{\mu}^i_{\underline{s}})\right|^2\bigg]\leq C\mathbb{E}\bigg[\sup_{s\le t} W_1^2(\tilde{\mu}^i_{\underline{s}},\tilde{\mu}^N_{\underline{s}})\bigg]\leq CN^{-1/2}.$$ From this, we derive the inequality \begin{equation*} \begin{aligned} \mathbb{E}\bigg[\sup_{s\le t} \big|{G}_0(\mu_s^N)-{G}_0(\tilde{\mu}^N_{\underline{s}})\big|^2\bigg]& \leq C_1\sup_{s\le t} \mathbb{E}\bigg[\bigg|U_s^i-\tilde{U}^i_{\underline{s}}\bigg|^2\bigg]+C_2N^{-1/2}\\& \leq C_1\bigg\{\sup_{s\le t} \mathbb{E}\bigg[\bigg|U_s^i-\tilde{U}^i_{s}\bigg|^2\bigg]+\sup_{s\le t} \mathbb{E}\bigg[\bigg|\tilde{U}^i_{s}-\tilde{U}^i_{\underline{s}}\bigg|^2\bigg]\bigg\} +C_2N^{-1/2}.
\end{aligned} \end{equation*} For the first term of the right-hand side, we observe that \begin{equation*} \begin{aligned} \sup_{s\le t}\mathbb{E}\bigg[\bigg|U_s^i-\tilde{U}^i_{s}\bigg|^2\bigg]& \leq \mathbb{E}\bigg[\sup_{s\le t}\bigg|U_s^i-\tilde{U}^i_{s}\bigg|^2\bigg]\\& \leq 3\mathbb{E}\bigg[\sup_{s\le t}\Bigg\{\bigg|\int_0^s\bigg(b(X^i_{r^-})- b(\tilde{X}^i_{\underline{r}^-})\bigg) dr\bigg|^2 + \bigg|\int_0^s \bigg(\sigma(X^i_{r^-})-\sigma(\tilde{X}^i_{\underline{r}^-})\bigg) dB^i_r\bigg|^2 \\&~~+ \bigg|\int_0^s\int_E\bigg(F(X^i_{r^-},z) -F(\tilde{X}^i_{\underline{r}^-},z)\bigg) \tilde{N}^i(dr,dz)\bigg|^2\Bigg\}\bigg] \\& \leq C\Bigg\{\mathbb{E} \bigg[t\int_0^t\bigg|b(X^i_{s^-})- b(\tilde{X}^i_{\underline{s}^-}) \bigg|^2ds\bigg] + \mathbb{E}\bigg[\int_0^t \bigg|\sigma(X^i_{s^-})-\sigma(\tilde{X}^i_{\underline{s}^-})\bigg|^2 ds\bigg]\\&~~ ~~+\mathbb{E} \bigg[\int_0^t\int_E\bigg|F(X^i_{s^-},z) -F(\tilde{X}^i_{\underline{s}^-},z)\bigg|^2 \lambda(dz)ds\bigg]\Bigg\} \\& \leq C\Bigg\{TC_1\int_0^t \mathbb{E}\bigg[\big|X^i_s-\tilde{X}^i_{\underline{s}}\big|^2\bigg] ds+ 2C_1\int_0^t \mathbb{E}\bigg[\big|X^i_s-\tilde{X}^i_{\underline{s}}\big|^2\bigg] ds\Bigg\}\\& \leq C\int_0^t \mathbb{E}\bigg[\big|X^i_s-\tilde{X}^i_{\underline{s}}\big|^2\bigg] ds.
\end{aligned} \end{equation*} Using Assumption \ref{lip}, the second term $\sup_{s\le t} \mathbb{E}\bigg[\bigg|\tilde{U}^i_{s}-\tilde{U}^i_{\underline{s}}\bigg|^2\bigg]$ becomes \begin{equation*} \begin{aligned} \sup_{s\le t} \mathbb{E}\bigg[\bigg|\tilde{U}^i_{s}-\tilde{U}^i_{\underline{s}}\bigg|^2\bigg]& \leq 3\sup_{s\le t}\Bigg\{\mathbb{E}\bigg[\bigg|\int_{\underline{s}}^s b(\tilde{X}^i_{\underline{r}^-}) dr\bigg|^2 + \bigg|\int_{\underline{s}}^s \sigma(\tilde{X}^i_{\underline{r}^-}) dB^i_r\bigg|^2 + \bigg|\int_{\underline{s}}^s \int_E F(\tilde{X}^i_{\underline{r}^-},z) \tilde{N}^i(dr,dz)\bigg|^2\bigg]\Bigg\}\\& \leq 3\sup_{s\le t}\Bigg\{\mathbb{E}\bigg[\bigg| b(\tilde{X}^i_{\underline{s}}) \bigg|^2 \big|s-\underline{s}\big|^2 + \bigg|\sigma(\tilde{X}^i_{\underline{s}}) \bigg|^2 \big|B^i_s-B^i_{\underline{s}}\big|^2 + \int_{\underline{s}}^s \int_E \bigg|F(\tilde{X}^i_{\underline{r}^-},z)\bigg|^2 \lambda(dz)dr\bigg]\Bigg\}\\& \leq 3\sup_{s\le t}\Bigg\{\mathbb{E}\bigg[\bigg| b(\tilde{X}^i_{\underline{s}}) \bigg|^2 \big|s-\underline{s}\big|^2 + \bigg|\sigma(\tilde{X}^i_{\underline{s}}) \bigg|^2 \big|B^i_s-B^i_{\underline{s}}\big|^2 + C\int_{\underline{s}}^s(1+|\tilde{X}^i_{\underline{r}^-}|^2) dr\bigg]\Bigg\}\\& \leq 3\sup_{s\le t}\Bigg\{\bigg(\frac{T}{n}\bigg)^2\mathbb{E}\bigg[\big|\sup_{\underline{s}\leq r\leq s} b(\tilde{X}^i_{\underline{r}^-}) \big|^2\bigg] + \mathbb{E}\bigg[\big|B^i_s-B^i_{\underline{s}}\big|^2\bigg] \mathbb{E}\bigg[\big|\sigma(\tilde{X}^i_{\underline{s}}) \big|^2\bigg] \\&~~~~+C\bigg(\frac{T}{n}\bigg)\mathbb{E}\bigg[\sup_{\underline{s}\leq r\leq s} (1+|\tilde{X}^i_r|^2)\bigg]\Bigg\}\\& \leq C_1\bigg(\frac{T}{n}\bigg)^2\mathbb{E}\bigg[\sup_{s\le T} \big|b(\tilde{X}^i_s) \big|^2\bigg] + C_2 \sup_{s\le t}\mathbb{E}\bigg[\big|B^i_s-B^i_{\underline{s}}\big|^2\bigg] \mathbb{E}\bigg[\sup_{s\le T}\big|\sigma(\tilde{X}^i_s) \big|^2\bigg]\\&~~~~ + C_3 \bigg(\frac{T}{n}\bigg)\mathbb{E}\bigg[\sup_{s\le T} (1+|\tilde{X}^i_s|^2)\bigg]\\& \leq 
C_1\bigg(\frac{T}{n}\bigg)^2\bigg(1+\mathbb{E}\bigg[\sup_{s\le T} \big|\tilde{X}^i_s \big|^2\bigg]\bigg) + C_2 \sup_{s\le t}\mathbb{E}\bigg[\big|B^i_s-B^i_{\underline{s}}\big|^2\bigg] \bigg(1+\mathbb{E}\bigg[\sup_{s\le T} \big|\tilde{X}^i_s \big|^2\bigg]\bigg)\\&~~~~ + C_3 \bigg(\frac{T}{n}\bigg)\bigg(1+\mathbb{E}\bigg[\sup_{s\le T} \big|\tilde{X}^i_s \big|^2\bigg]\bigg), \end{aligned} \end{equation*} and from Proposition \ref{propriete_1}, we get \begin{equation*} \begin{aligned} \sup_{s\le t} \mathbb{E}\bigg[\bigg|\tilde{U}^i_{s}-\tilde{U}^i_{\underline{s}}\bigg|^2\bigg]& \leq C_1\bigg(\frac{T}{n}\bigg)+ C_2 \sup_{s\le t}\mathbb{E}\bigg[\big|B^i_s-B^i_{\underline{s}}\big|^2\bigg]. \end{aligned} \end{equation*} Then, by It\^o's isometry, we obtain \begin{equation*} \begin{aligned} \sup_{s\le t}\mathbb{E}\bigg[\big|B^i_s-B^i_{\underline{s}}\big|^2\bigg] =\sup_{s\le t}\mathbb{E}\bigg[\bigg(\int_{\underline{s}}^s dB^i_u\bigg)^2\bigg] =\sup_{s\le t}|s-\underline{s}| \leq \frac{T}{n}. \end{aligned} \end{equation*} Therefore, we conclude \begin{equation} \label{U} \begin{aligned} \sup_{s\le t} \mathbb{E}\bigg[\bigg|\tilde{U}^i_{s}-\tilde{U}^i_{\underline{s}}\bigg|^2\bigg]& \leq C_1n^{-1}+ C_2 n^{-1}\\& \leq Cn^{-1}, \end{aligned} \end{equation} from which we derive the inequality \begin{equation} \label{G} \begin{aligned} \mathbb{E}\bigg[\sup_{s\le t} \big|{G}_0(\mu_s^N)-{G}_0(\tilde{\mu}^N_{\underline{s}})\big|^2\bigg]& \leq C\Bigg\{n^{-1}+N^{-1/2}+\int_0^t \mathbb{E}\bigg[\big|X^i_s-\tilde{X}^i_{\underline{s}}\big|^2\bigg] ds\Bigg\}, \end{aligned} \end{equation} and taking into account $\eqref{X1}$ we get \begin{equation} \label{X2} \begin{aligned} \mathbb{E}\bigg[\sup_{s\le t}\big|X^i_s-\tilde{X}^i_s\big|^2\bigg] & \leq C\Bigg\{n^{-1}+N^{-1/2}+\int_0^t \mathbb{E}\bigg[\big|X^i_s-\tilde{X}^i_{\underline{s}}\big|^2\bigg] ds\Bigg\}.
\end{aligned} \end{equation} Since \begin{equation*} \begin{aligned} \mathbb{E}\bigg[\big|X^i_s-\tilde{X}^i_{\underline{s}}\big|^2\bigg]& \leq 2\mathbb{E}\bigg[\big|X^i_s-\tilde{X}^i_{s}\big|^2\bigg]+2\mathbb{E}\bigg[\big|\tilde{X}^i_s-\tilde{X}^i_{\underline{s}}\big|^2\bigg]\\& =2\mathbb{E}\bigg[\big|X^i_s-\tilde{X}^i_{s}\big|^2\bigg]+2\mathbb{E}\bigg[\big|\tilde{U}^i_s-\tilde{U}^i_{\underline{s}}\big|^2\bigg], \end{aligned} \end{equation*} it follows from $\eqref{U}$ and $\eqref{X2}$ that \begin{equation*} \begin{aligned} \mathbb{E}\bigg[\sup_{s\le t}\big|X^i_s-\tilde{X}^i_s\big|^2\bigg] & \leq C\Bigg\{n^{-1}+N^{-1/2}+\int_0^t \mathbb{E}\bigg[\big|X^i_s-\tilde{X}^i_s\big|^2\bigg] ds\Bigg\}. \end{aligned} \end{equation*} Finally, we conclude the proof of (i) with Gronwall's lemma. \end{proof} \begin{proof}[Proof of (ii)]\renewcommand{\qedsymbol}{} Following the proof of (ii) in Theorem \ref{cv_part}, we obtain $$\mathbb{E}\bigg[\sup_{s\le t}\left|\int h(\bar{G}_0(\mu^i_s)+\cdot) (d\mu_s^N-d\mu_s^i)\right|^2\bigg]\leq CN^{-1},$$ $$\mathbb{E}\bigg[\sup_{s\le t}\left|\int h(\bar{G}_0(\tilde{\mu}^i_{\underline{s}})+\cdot) (d\tilde{\mu}^N_{\underline{s}}-d\tilde{\mu}^i_{\underline{s}})\right|^2\bigg]\leq CN^{-1}.$$ Following the same strategy as in the proof of (i) of Proposition \ref{cv_num}, the result follows easily: $$\mathbb{E}\bigg[\sup_{s\le t}\big|X^i_s-\tilde{X}^i_s\big|^2\bigg]\leq C\bigg(n^{-1}+N^{-1}\bigg).$$ \end{proof} \end{proof} \begin{theorem} \label{cv} Let $T > 0$, and let $N$ and $n$ be two positive integers. Let Assumptions \ref{lip}, \ref{bilip} and \ref{int} hold.
\begin{enumerate} \item[(i)] There exists a constant $C$, depending on $T$, $b$, $\sigma$, $F$, $h$ and $X_0$ but independent of $N$, such that: for all $i=1,\ldots,N$, $$\mathbb{E}\bigg[\sup_{t\le T}\big|\bar{X}^i_t-\tilde{X}^i_t\big|^2\bigg]\leq C\bigg(n^{-1}+N^{-1/2}\bigg).$$ \item[(ii)] If in addition Assumption \ref{reg} holds, there exists a positive constant $C$, depending on $T$, $b$, $\sigma$, $F$, $h$ and $X_0$ but independent of $N$, such that: for all $i=1,\ldots,N$, $$\mathbb{E}\bigg[\sup_{t\le T}\big|\bar{X}^i_t-\tilde{X}^i_t\big|^2\bigg]\leq C\bigg(n^{-1}+N^{-1}\bigg).$$ \end{enumerate} \end{theorem} \begin{proof} The proof is straightforward by writing $$\big|\bar{X}^i_t-\tilde{X}^i_t\big|\leq \big|\bar{X}^i_t-{X}^i_t\big|+\big|{X}^i_t-\tilde{X}^i_t\big|,$$ and using Theorem \ref{cv_part} and Proposition \ref{cv_num}. \end{proof} \section{Numerical illustrations.}\label{sec:NI} Throughout this section, we consider, on $[0,T]$, the following type of process: \begin{equation} \label{eq-exacte} \begin{cases} \begin{split} & X_t =X_0-\int_0^t (\beta_s+a_s X_{s^-}) ds + \int_0^t (\sigma_s+\gamma_s X_{s^-}) dB_s + \int_0^t \int_E c(z)(\eta_s+\theta_s X_{s^-}) \tilde{N}(ds,dz) + K_t,\quad t\geq 0, \\ & \mathbb{E}[h(X_t)] \geq 0, \quad \int_0^t \mathbb{E}[h(X_s)] \, dK_s = 0, \quad t\geq 0, \end{split} \end{cases} \end{equation} where $(\beta_t)_{t\geq0}$, $(a_t)_{t\geq0}$, $(\sigma_t)_{t\geq0}$, $(\gamma_t)_{t\geq0}$, $(\eta_t)_{t\geq0}$ and $(\theta_t)_{t\geq0}$ are bounded adapted processes. This class of processes allows us to make explicit computations with which to illustrate the algorithm. Our results are presented for the different diffusions and functions $h$ summarized below.
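Before turning to the examples, the way such a constrained process is simulated in practice can be sketched in a few lines. The following Python sketch is only illustrative (it is not the code behind the figures below): it propagates $N$ particles by an Euler step for the constant-coefficient jump diffusion of case (i) below, with linear constraint $h(x)=x-p$, and approximates the increment of $K$ at each step by the smallest upward shift restoring the empirical-mean constraint.

```python
import numpy as np

def simulate_mean_reflected(N=10_000, n=500, T=1.0, beta=2.0, sigma=1.0,
                            lam=5.0, eta=1.0, x0=1.0, p=0.5, seed=0):
    """Particle Euler sketch for the mean-reflected jump SDE with constant
    coefficients and linear constraint h(x) = x - p (case (i) of the text).
    The reflection term dK is approximated at each step by the shift that
    restores the empirical mean constraint (1/N) sum_j h(X^j) >= 0."""
    rng = np.random.default_rng(seed)
    dt = T / n
    comp = lam * np.sqrt(np.e)   # compensator lambda * E[xi], xi ~ Lognormal(0,1)
    X = np.full(N, float(x0))
    K = np.zeros(n + 1)
    for k in range(n):
        dB = rng.normal(0.0, np.sqrt(dt), size=N)
        counts = rng.poisson(lam * dt, size=N)
        jumps = np.zeros(N)
        m = counts.max()
        if m > 0:                # vectorized sum of lognormal jump marks
            marks = rng.lognormal(0.0, 1.0, size=(N, m))
            jumps = (marks * (np.arange(m) < counts[:, None])).sum(axis=1)
        X = X - (beta + comp) * dt + sigma * dB + eta * jumps
        dK = max(p - X.mean(), 0.0)   # mean reflection: push E[h(X)] back to 0
        X += dK
        K[k + 1] = K[k] + dK
    return X, K
```

With these parameters, the closed-form compensator of case (i) below is $K_T=(p+\beta T-x_0)^+=1.5$, which the empirical $K$ tracks up to Monte Carlo error.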
\textbf{Linear constraint.} We first consider cases where $h:\mathbb{R}\ni x \longmapsto x-p\in\mathbb{R}.$ \begin{itemize} \item [Case (i)] Drifted Brownian motion and compensated Poisson process: $\beta_t=\beta>0$, $a_t=\gamma_t=\theta_t=0$, $\sigma_t=\sigma>0$, $\eta_t=\eta>0$, $X_0=x_0\geq p$, $c(z)=z$ and $$f(z)=\frac{1}{\sqrt{2\pi z}}\exp\bigg(-\frac{(\ln z)^2}{2}\bigg)\mathbf{1}_{\{z>0\}}.$$ We have $$K_t=(p+\beta t-x_0)^+,$$ and $$X_t=X_0-(\beta+\lambda\sqrt{e})t+\sigma B_t+\sum_{i=1}^{N_t}\eta \xi_i+K_t,$$ where $N_t\sim \mathcal{P}(\lambda t)$ and $\xi_i\sim \mathrm{Lognormal}(0,1).$ \item [Case (ii)] Black--Scholes process: $\beta_t=\sigma_t=\eta_t=0$, $a_t=a>0$, $\gamma_t=\gamma>0$, $\theta_t=\theta>0$, $c(z)=\delta_1(z)$. Then\\ \\ $K_t=ap(t-t^*)\mathbf{1}_{t\geq t^*}$, where $t^*=\frac{1}{a}(\ln(x_0)-\ln(p))$,\\ and $$X_t=Y_t+Y_t\int_0^tY_s^{-1}dK_s,$$ where $Y$ is the process defined by: $$Y_t=X_0\exp\Big(-(a+\gamma^2/2+\lambda\theta)t+\gamma B_t\Big)(1+\theta)^{N_t}.$$ \end{itemize} \textbf{Nonlinear constraint.} Secondly, we illustrate the case of a nonlinear function $h$: $$h:\mathbb{R}\ni x\longmapsto x+\alpha \sin(x)-p\in\mathbb{R},~~-1<\alpha<1,$$ and we illustrate this case with \begin{itemize} \item [Case (iii)] Ornstein--Uhlenbeck process: $\beta_t=\beta>0$, $a_t=a>0$, $\gamma_t=\theta_t=0$, $\sigma_t=\sigma>0$, $\eta_t=\eta>0$, $X_0=x_0$ with $x_0>|\alpha|+p$, $c(z)=\delta_1(z)$.
We obtain $$\mathrm{d}K_t=e^{-at}\mathrm{d}\sup_{s\leq t}(F^{-1}_s(0))^+,$$ where for all $t$ in $[0,T],$ \begin{equation*} \begin{aligned} F_t:\mathbb{R}\ni x\longmapsto &\Bigg\{e^{-at}\bigg(x_0-\beta\bigg(\frac{e^{at}-1}{a}\bigg)+x\bigg) +\alpha\exp\bigg(-e^{-at}\frac{\sigma^2}{2a}\sinh(at)\bigg) \\&\times\Bigg[\frac{1}{2}\bigg(\exp(\lambda t(e^{i\eta}-1))+\exp(\lambda t(e^{-i\eta}-1))\bigg)\sin\bigg(e^{-at}\bigg(x_0-(\beta+\lambda\eta)\bigg(\frac{e^{at}-1}{a}\bigg)+x\bigg)\bigg) \\&+\frac{1}{2i}\bigg(\exp(\lambda t(e^{i\eta}-1))-\exp(\lambda t(e^{-i\eta}-1))\bigg)\cos\bigg(e^{-at}\bigg(x_0-(\beta+\lambda\eta)\bigg(\frac{e^{at}-1}{a}\bigg)+x\bigg)\bigg)\Bigg] \\&-p\Bigg\}. \end{aligned} \end{equation*} \end{itemize} \begin{remark} These examples have been chosen in such a way that we are able to give an analytic form of the reflecting process $K$. This enables us to compare numerically the ``true'' process $K$ and its empirical approximation $\hat{K}$. When an exact simulation of the underlying process is available, we compute the approximation rate of our algorithm. \end{remark} \subsection{Proofs of the numerical illustrations.} In order to have a closed, or almost closed, expression for the compensator $K$, we introduce the process $Y$ solution to the non-reflected SDE \begin{equation*} Y_t =X_0-\int_0^t (\beta_s+a_s Y_{s^-}) ds + \int_0^t (\sigma_s+\gamma_s Y_{s^-}) dB_s + \int_0^t\int_E c(z)(\eta_s+\theta_s Y_{s^-}) \tilde{N}(ds,dz).
\end{equation*} Letting $A_t=\int_0^t a_s\,ds$ and applying It\^o's formula to $e^{A_t}X_t$ and $e^{A_t}Y_t$, we get \begin{equation*} \begin{aligned} e^{A_t}X_t&=X_0+\int_0^t e^{A_s}X_s a_s ds+\int_0^t e^{A_s}(-\beta_s-a_s X_{s^-}) ds + \int_0^t e^{A_s}(\sigma_s+\gamma_s X_{s^-}) dB_s \\&~~~~+ \int_0^t\int_E e^{A_s} c(z)(\eta_s+\theta_s X_{s^-}) \tilde{N}(ds,dz)+\int_0^te^{A_s}dK_s\\& =X_0-\int_0^t e^{A_s}\beta_s ds + \int_0^t e^{A_s}(\sigma_s+\gamma_s X_{s^-}) dB_s + \int_0^t\int_E e^{A_s} c(z)(\eta_s+\theta_s X_{s^-}) \tilde{N}(ds,dz)+\int_0^te^{A_s}dK_s. \end{aligned} \end{equation*} In the same way, \begin{equation*} \begin{aligned} e^{A_t}Y_t=X_0-\int_0^t e^{A_s}\beta_s ds + \int_0^t e^{A_s}(\sigma_s+\gamma_s Y_{s^-}) dB_s + \int_0^t\int_E e^{A_s} c(z)(\eta_s+\theta_s Y_{s^-}) \tilde{N}(ds,dz), \end{aligned} \end{equation*} and so \begin{equation*} \begin{aligned} X_t=Y_t+e^{-A_t}\int_0^te^{A_s}dK_s+ e^{-A_t}\int_0^t e^{A_s}\gamma_s (X_{s^-}-Y_{s^-}) dB_s +e^{-A_t} \int_0^t\int_E e^{A_s} c(z)\theta_s(X_{s^-}-Y_{s^-}) \tilde{N}(ds,dz). \end{aligned} \end{equation*} \begin{remark} \label{E(Y)} In all cases, we have $a_t=a$, i.e.\ $A_t=at$, so we get \begin{equation*} \begin{aligned} \mathbb{E}[Y_t]&=\mathbb{E}\bigg[e^{-at}\bigg(x_0-\int_0^t e^{as}\beta ds+ \int_0^t e^{as}(\sigma_s+\gamma_s Y_{s^-}) dB_s + \int_0^t\int_E e^{as} c(z)(\eta_s+\theta_s Y_{s^-}) \tilde{N}(ds,dz)\bigg)\bigg]\\& =e^{-at}\bigg(x_0-\int_0^t e^{as}\beta ds\bigg)\\& =e^{-at}\bigg(x_0-\beta \bigg(\frac{e^{at}-1}{a}\bigg)\bigg).
\end{aligned} \end{equation*} \end{remark} \begin{proof}[Proof of assertion (i)] From Proposition \ref{propdensite} and Remark \ref{E(Y)}, we have \begin{equation*} \begin{aligned} k_t&=\beta \mathbf{1}_{\mathbb{E}(X_t)=p}\\& =\beta \mathbf{1}_{\mathbb{E}(Y_t)+K_t-p=0}\\& =\beta \mathbf{1}_{x_0-\beta t+K_t-p=0}, \end{aligned} \end{equation*} so we obtain \begin{equation*} \begin{aligned} K_t&=\int_0^t k_s ds\\& =\int_0^t \beta \mathbf{1}_{K_s=p+\beta s-x_0} ds, \end{aligned} \end{equation*} and as $K_t\geq 0$, we conclude that $$K_t=(p+\beta t-x_0)^+.$$ Next, $$f(z)=\frac{1}{\sqrt{2\pi z}}\exp\bigg(-\frac{(\ln z)^2}{2}\bigg)$$ is the density function of a lognormal random variable, so we obtain $$\int_E \eta z \lambda(dz)=\lambda\eta\int_E z f(z) dz=\lambda\eta \mathbb{E}(\xi)$$ where $\xi\sim \mathrm{Lognormal}(0,1)$, and we conclude that $$\int_E \eta z \lambda(dz)=\lambda\eta\sqrt{e}.$$ Finally, we deduce the exact solution $$X_t=X_0-(\beta+\lambda\sqrt{e}) t+\sigma B_t+\sum_{i=1}^{N_t}\eta \xi_i+K_t,$$ where $N_t\sim \mathcal{P}(\lambda t)$ and $\xi_i\sim \mathrm{Lognormal}(0,1).$ \end{proof} \begin{proof}[Proof of assertion (ii)] In this case, using the same proposition and remark, we have \begin{equation*} \begin{aligned} k_t&=(\mathbb{E}(-aX_t))^- \mathbf{1}_{\mathbb{E}(X_t)=p}, \end{aligned} \end{equation*} where \begin{equation*} \begin{aligned} \mathbb{E}(X_t)=p&\Longleftrightarrow \mathbb{E}(Y_t)-p+e^{-at}\int_0^t e^{as}dK_s=0\\& \Longleftrightarrow -x_0e^{-at}+p=e^{-at}\int_0^t e^{as}dK_s\\& \Longleftrightarrow k_t=ap, \end{aligned} \end{equation*} and \begin{equation*} \begin{aligned} K_t\geq0&\Longleftrightarrow -x_0e^{-at}+p\geq0\\& \Longleftrightarrow e^{-at}\leq\frac{p}{x_0}\\& \Longleftrightarrow t\geq\frac{1}{a}(\ln(x_0)-\ln(p)):=t^*.
\end{aligned} \end{equation*} So, we conclude that $K_t=ap(t-t^*)\mathbf{1}_{t\geq t^*}$, where $t^*=\frac{1}{a}(\ln(x_0)-\ln(p))$.\\ Next, by the definition of the process $Y_t$ in this case, \begin{equation*} dY_t =-a Y_{t^-} dt + \gamma Y_{t^-} dB_t + \theta Y_{t^-} d\tilde{N}_t, \end{equation*} we have $$Y_t=X_0\exp\Big(-(a+\gamma^2/2+\lambda\theta)t+\gamma B_t\Big)(1+\theta)^{N_t}.$$ Thanks to It\^o's formula, we get \begin{equation*} \begin{aligned} d\bigg(\frac{1}{Y_t}\bigg)&=-\frac{1}{Y_t^2}dY_t+\frac{1}{2}\bigg(\frac{2}{Y_t^3}\bigg)\gamma^2Y_t^2dt+d\sum_{s\leq t}\bigg(\frac{1}{Y_{s^-}+\Delta Y_s}-\frac{1}{Y_{s^-}}+\frac{1}{Y_{s^-}^2}\Delta Y_s\bigg)\\& =\frac{a}{Y_t}dt-\frac{\gamma}{Y_t}dB_t-\frac{\theta}{Y_{t^-}}d\tilde{N}_t+\frac{\gamma^2}{Y_t}dt+d\sum_{s\leq t}\bigg(\frac{1}{(1+\theta)Y_{s^-}}-\frac{1}{Y_{s^-}}+\frac{\theta}{Y_{s^-}}\bigg), \end{aligned} \end{equation*} and so \begin{equation*} \begin{aligned} dY_t^{-1}& =(a+\gamma^2)Y_t^{-1}dt-\gamma Y_t^{-1}dB_t-\theta Y_{t^-}^{-1}d\tilde{N}_t+\bigg(\frac{\theta^2}{1+\theta}\bigg)d\sum_{s\leq t}Y_{s^-}^{-1}\\& =\bigg(a+\gamma^2+\frac{\lambda\theta^2}{1+\theta}\bigg)Y_t^{-1}dt-\gamma Y_t^{-1}dB_t-\bigg(\frac{\theta}{1+\theta}\bigg) Y_{t^-}^{-1}d\tilde{N}_t. \end{aligned} \end{equation*} Then, using the integration by parts formula, we obtain \begin{equation*} \begin{aligned} d(X_t Y_t^{-1})& =X_{t^-} dY_t^{-1}+Y_{t^-}^{-1}dX_t+d[X,Y^{-1}]_t \\& =(a+\gamma^2)X_tY_t^{-1}dt-\gamma X_tY_t^{-1}dB_t-\theta X_{t^-}Y_{t^-}^{-1}d\tilde{N}_t+\bigg(\frac{\theta^2}{1+\theta}\bigg)d\sum_{s\leq t}X_{s^-}Y_{s^-}^{-1}\\&~~~~-aX_t Y_t^{-1}dt+\gamma X_t Y_t^{-1}dB_t+\theta X_{t^-} Y_{t^-}^{-1}d\tilde{N}_t+Y_t^{-1}dK_t\\&~~~~-\gamma^2X_tY_t^{-1}dt-\bigg(\frac{\theta^2}{1+\theta}\bigg)d\sum_{s\leq t}X_{s^-}Y_{s^-}^{-1}\\& =Y_t^{-1}dK_t.
\end{aligned} \end{equation*} Finally, we deduce that $$X_t=Y_t+Y_t\int_0^tY_s^{-1}dK_s.$$ \end{proof} \begin{proof}[Proof of assertion (iii)] In that case, we have \begin{equation*} \begin{aligned} Y_t&=e^{-at}\bigg(x_0-\beta \bigg(\frac{e^{at}-1}{a}\bigg)\bigg)+ \sigma e^{-at}\int_0^t e^{as}dB_s + e^{-at}\int_0^t \eta e^{as} d\tilde{N}_s \\&=e^{-at}\bigg(x_0-(\beta+\lambda\eta) \bigg(\frac{e^{at}-1}{a}\bigg)\bigg)+ \sigma e^{-at}\int_0^t e^{as}dB_s + e^{-at}\int_0^t \eta e^{as} dN_s \\&:=f_t+G_t+F_t, \end{aligned} \end{equation*} and $$X_t=Y_t+e^{-at}\bar{K}_t,~~~~\bar{K}_t=\int_0^te^{as}dK_s.$$ Hence \begin{equation*} \begin{aligned} h(X_t)&=Y_t+e^{-at}\bar{K}_t+\alpha\sin(Y_t+e^{-at}\bar{K}_t)-p\\& =Y_t+e^{-at}\bar{K}_t+\alpha\Big(\sin(Y_t)\cos(e^{-at}\bar{K}_t)+\cos(Y_t)\sin(e^{-at}\bar{K}_t)\Big)-p\\& =Y_t+e^{-at}\bar{K}_t+\alpha\Big[\cos(e^{-at}\bar{K}_t)\Big\{\sin(f_t)\cos(G_t)\cos(F_t)+\cos(f_t)\sin(G_t)\cos(F_t)\\&~~~~+\cos(f_t)\cos(G_t)\sin(F_t)-\sin(f_t)\sin(G_t)\sin(F_t)\Big\}+\sin(e^{-at}\bar{K}_t)\Big\{\cos(f_t)\cos(G_t)\cos(F_t)\\&~~~~-\sin(f_t)\sin(G_t)\sin(F_t)-\sin(f_t)\cos(G_t)\sin(F_t)-\cos(f_t)\sin(G_t)\sin(F_t)\Big\}\Big]-p. \end{aligned} \end{equation*} On the one hand, since $G_t$ is a centered Gaussian random variable with variance $V$ given by \begin{equation*} V=\sigma^2\frac{1-e^{-2at}}{2a}=\sigma^2e^{-at}\frac{\sinh(at)}{a}, \end{equation*} we obtain that $$\mathbb{E}[e^{iG_t}]=e^{-V/2},$$ $$\mathbb{E}[\sin(G_t)]=\mathbb{E}\bigg[\frac{e^{iG_t}-e^{-iG_t}}{2i}\bigg]=0,$$ and $$\mathbb{E}[\cos(G_t)]=\mathbb{E}\bigg[\frac{e^{iG_t}+e^{-iG_t}}{2}\bigg]=\mathbb{E}(e^{iG_t})=\exp\bigg(-e^{-at}\frac{\sigma^2}{2a}\sinh(at)\bigg)=:g(t).$$ On the other hand, \begin{equation*} \begin{aligned} \mathbb{E}[e^{iF_t}]=\mathbb{E}\bigg[\exp\bigg(i\eta e^{-at}\int_0^t e^{as}dN_s\bigg)\bigg], \end{aligned} \end{equation*} and by taking \textquoteleft $a$\textquoteright\ small, we get \begin{equation*} \begin{aligned}
\mathbb{E}[e^{iF_t}]&\approx\mathbb{E}\bigg[\exp\bigg(i\eta \int_0^t dN_s\bigg)\bigg]\\& \approx\mathbb{E}\Big[\exp\Big(i\eta N_t\Big)\Big]\\& \approx\exp\Big(\lambda t(e^{i\eta}-1)\Big), \end{aligned} \end{equation*} and so \begin{equation*} \mathbb{E}[\sin(F_t)]\approx\frac{\exp\Big(\lambda t(e^{i\eta}-1)\Big)-\exp\Big(\lambda t(e^{-i\eta}-1)\Big)}{2i}=:m(t), \end{equation*} \begin{equation*} \mathbb{E}[\cos(F_t)]\approx\frac{\exp\Big(\lambda t(e^{i\eta}-1)\Big)+\exp\Big(\lambda t(e^{-i\eta}-1)\Big)}{2}=:n(t). \end{equation*} Using Remark \ref{E(Y)}, we conclude that, for small \textquoteleft $a$\textquoteright, \begin{equation*} \begin{aligned} \mathbb{E}[h(X_t)]&\approx\mathbb{E}[Y_t]+e^{-at}\bar{K}_t+\alpha\Big(g(t)m(t)\cos(f_t+e^{-at}\bar{K}_t)+g(t)n(t)\sin(f_t+e^{-at}\bar{K}_t)\Big)-p\\&:=F_t(\bar{K}_t). \end{aligned} \end{equation*} Therefore, $$\bar{K}_t=\sup_{s\leq t}\Big(F_s^{-1}(0)\Big)^+\quad\text{and}\quad dK_t=e^{-at}d\sup_{s\leq t}\Big(F_s^{-1}(0)\Big)^+.$$ \end{proof} \subsection{Illustrations.} This computation works as follows. Let $0 = T_0 < T_1 <\cdots< T_n = T$ be a subdivision of $[0,T]$ of step size $T/n$, $n$ being a positive integer, let $X$ be the unique solution of the MRSDE \eqref{eq-exacte} and let, for a given $i$, $(\tilde{X}^i_{T_k})_{0\leq k\leq n}$ be its numerical approximation given by Algorithm 1. For a given integer $L$, we draw $(\bar{X}^l)_{1\leq l\leq L}$ and $(\tilde{X}^{i,l})_{1\leq l\leq L}$, $L$ independent copies of $X$ and $\tilde{X}^i$. We then approximate the $\mathbb{L}^2$-error of Theorem~\ref{cv} by: \begin{equation*} \hat{E}=\frac{1}{L}\sum_{l=1}^{L}\max_{0\leq k\leq n}\left|\bar{X}^l_{T_k}-\tilde{X}^{i,l}_{T_k}\right|^2. \end{equation*} \begin{figure}[h!] \centering \includegraphics[scale=0.5]{fig1.pdf} \caption{Case (i). $n = 500$, $N = 100000$, $T = 1$, $\beta = 2$, $\sigma = 1$, $\lambda=5$, $x_0 =1$, $p = 1/2$.} \label{fig:un} \end{figure} \begin{figure}[h!]
\centering \includegraphics[scale=0.5]{fig2.pdf} \caption{Case (i). Regression of $\log(\hat{E})$ w.r.t. $\log(N)$. Data: $\hat{E}$ when $N$ varies from $100$ to $2200$ with step size $300$. Parameters: $n = 100$, $T = 1$, $\beta =2$, $\sigma= 1$, $\lambda=5$, $x_0 = 1$, $p = 1/2$, $L = 1000$.} \label{fig:deux} \end{figure} \begin{figure}[h!] \centering \includegraphics[scale=0.5]{fig3.pdf} \caption{Case (ii). Parameters: $n = 500$, $N = 10000$, $T = 1$, $\beta=0$, $a = 3$, $\gamma=1$, $\eta =1$, $\lambda=2$, $x_0 = 4$, $p = 1$.} \label{fig:trois} \end{figure} \begin{figure}[h!] \centering \includegraphics[scale=0.5]{fig4.pdf} \caption{Case (ii). Regression of $\log(\hat{E})$ w.r.t. $\log(N)$. Data: $\hat{E}$ when $N$ varies from $100$ to $800$ with step size $100$. Parameters: $n = 1000$, $T = 1$, $\beta =0$, $a=3$, $\gamma=1$, $\eta= 1$, $\lambda=2$, $x_0 = 4$, $p = 1$, $L = 1000$.} \label{fig:quatre} \end{figure} \begin{figure}[h!] \centering \includegraphics[scale=0.5]{fig5.pdf} \caption{Case (iii). Parameters: $n = 1000$, $N = 100000$, $T = 15$, $\beta =10^{-2}$, $\sigma = 1$, $p =\pi/2$, $\alpha = 0.9$, $a=10^{-2}$, $x_0$ is the unique solution of $x+\alpha\sin (x)-p=0$ plus $10^{-1}$.} \label{fig:cinq} \end{figure}
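The regressions of $\log(\hat{E})$ against $\log(N)$ shown in the figures reduce to a least-squares fit of the log-log data. The following Python sketch of that post-processing step is illustrative only; the error values fed to it are synthetic, generated from the theoretical rate $N^{-1/2}$, and are not the simulation output behind the figures.

```python
import numpy as np

def fit_rate(N_vals, E_hat):
    """Least-squares slope of log(E_hat) against log(N); the slope
    estimates the convergence rate exponent (about -1/2 in general,
    about -1 under the extra regularity assumption)."""
    slope, intercept = np.polyfit(np.log(np.asarray(N_vals, dtype=float)),
                                  np.log(np.asarray(E_hat, dtype=float)), 1)
    return slope, intercept

# synthetic sanity check, NOT simulation output: E proportional to N^{-1/2}
N_vals = np.arange(100, 2201, 300)   # N = 100, 400, ..., 2200 as in the figure
E_hat = 0.7 * N_vals ** (-0.5)
slope, _ = fit_rate(N_vals, E_hat)   # slope = -0.5 for this exact power law
```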
\section{Motivation} Grand Unified Theories (GUTs) of quarks and leptons have played an important role for almost 40 years in theoretical attempts to make sense of the apparent group structure and mass spectra of the quark and lepton matter fields and coupling strengths of the gauge and Higgs fields observed in Nature. Although many varieties of models have been constructed, the most popular unified ones are based on the unitary, orthogonal, or exceptional groups \SU{5}, \SO{10}, or E$_6$, respectively. But in each of these cases, the chiral irreducible representations (irreps) can uniquely describe only one family of quarks and leptons: to wit, ${\bf 10,\ \overline{5}}$, and ${\bf 1}$ for \SU{5}, ${\bf 16}$ for \SO{10}, and ${\bf 27}$ for E$_6$. In order to accommodate the three families observed to date, it has been conventional to introduce, in addition to one of the above $G_{\rm family}$ groups, a $G_{\rm flavor}$ symmetry group which also distinguishes the families. While continuous flavor symmetries such as \U{1}, \SU{2}, \SU{3} and their products have been considered in the past, discrete symmetry groups such as $A_4,\ T'$ and $S_4$, etc.\ have been fashionable in the past 10 years \cite{af,ikoso}. In either case, the GUT model then involves the direct product group $G_{\rm family} \times G_{\rm flavor}$. True family and flavor unification requires the introduction of a higher rank simple group. Some earlier studies along this line have been based on \SO{18}~\cite{GellMann:1980vs,Fujimoto:1981bv}, \SU{11}~\cite{Georgi:1979md,Kim:1981bb}, and \SU{9}~\cite{Frampton:1979cw,Frampton:1979fd}. More recently, models have been based on \SU{7}~\cite{Barr:2008gz}, \SU{8}~\cite{Barr:2008pn}, and \SU{9} again~\cite{Frampton:2009ce,Dent:2009pd}, the latter reference by two of us (RF and TWK), but none has been totally satisfactory due to a huge number of unwanted states and/or unsatisfactory mass matrices.
Here we describe an \SU{12} unification model~\cite{afksu12} with interesting features that was constructed with the help of a Mathematica computer package called LieART written by two of us (RPF and TWK)~\cite{fgLieART}. This program allows one to compute tensor products, branching rules, etc., and to perform detailed searches for satisfactory models in a timely fashion. While other smaller and larger rank unitary groups were examined, a model based on \SU{12} appeared to be the most satisfactory minimal one for our purpose. We sketch here the model construction and point out that further details can be found in~\cite{afksu12}. \section{\SU{12} Unification Model and Particle Assignments} While the three popular GUT groups cited earlier each have just one useful chiral irrep, \SU{12} has 11 totally antisymmetric irreps: $\irrep{12},\ \irrep{66},\ \irrep{220},\ \irrep{495}, \ \irrep{792},\ \irrep{924},\ \irrepbar{792}, \ \irrepbar{495},\ \irrepbar{220},\ \irrepbar{66}$, and $\irrepbar{12}$, of which 10 are complex (while \irrep{924} is real), allowing the three \SU{5} families to be assigned to different chiral irreps. For this purpose, one chooses an anomaly-free set of \SU{12} irreps which contains three chiral \SU{5} families and pairs of fermions which will become massive at the \SU{5} scale. One such suitable set consists of \begin{equation}\label{eq:AnomFreeSet} 6\irrepvar{495} + 4\irrepbarvar{792} + 4\irrepbarvar{220} + \irrepbarvar{66} + 4\irrepbarvar{12} \rightarrow 3(\irrep{10} + \irrepbar{5} + \irrep{1}) + 238(\irrep{5} + \irrepbar{5}) + 211(\irrep{10} + \irrepbar{10}) + 484(\irrep{1}) \end{equation} \noindent where the decomposition to anomaly-free \SU{5} states has been indicated.
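Both the anomaly cancellation and the dimension count of the set in \eqref{eq:AnomFreeSet} can be verified independently of LieART with a few lines of code, using the standard formula for the anomaly coefficient of the rank-$k$ antisymmetric irrep of \SU{N} (normalized to $+1$ for the fundamental); the sketch below is a consistency check, not part of the LieART package itself:

```python
from math import comb, factorial

def anomaly(N, k):
    """Anomaly coefficient of the rank-k antisymmetric irrep of SU(N),
    A = (N-3)!(N-2k) / ((N-k-1)!(k-1)!), so A = 1 for the fundamental."""
    return factorial(N - 3) * (N - 2 * k) // (factorial(N - k - 1) * factorial(k - 1))

# (multiplicity, antisymmetric rank k, +1 for the irrep / -1 for its conjugate)
su12_set = [(6, 4, +1),   # 6 x 495
            (4, 5, -1),   # 4 x 792-bar
            (4, 3, -1),   # 4 x 220-bar
            (1, 2, -1),   # 1 x 66-bar
            (4, 1, -1)]   # 4 x 12-bar
total_anomaly = sum(m * s * anomaly(12, k) for m, k, s in su12_set)
total_dim = sum(m * comb(12, k) for m, k, _ in su12_set)
# dimension of the SU(5) decomposition quoted on the right of the equation
su5_dim = 3 * (10 + 5 + 1) + 238 * (5 + 5) + 211 * (10 + 10) + 484
```

Here `total_anomaly` comes out to 0, and both dimension counts agree at 7132, as required for the decomposition to be consistent.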
The latter follows from the \SU{12} $\rightarrow$ \SU{5} branching rules: \begin{eqnarray}\label{eq:branchrules} \irrep{495} &\rightarrow & 35\irrepvar{5} + 21\irrepvar{10} + 7\irrepbarvar{10} + \irrepbar{5} + 35\irrepvar{1},\\ \irrepbar{792} &\rightarrow& 7\irrepvar{5} + 21\irrepvar{10} + 35\irrepbarvar{10} + 35\irrepbarvar{5} + 22\irrepvar{1},\\ \irrepbar{220} &\rightarrow& \irrep{10} + 7\irrepbarvar{10} + 21\irrepbarvar{5} + 35\irrepvar{1},\\ \irrepbar{66} &\rightarrow& \irrepbar{10} + 7\irrepbarvar{5} + 21\irrepvar{1},\\ \irrepbar{12} &\rightarrow& \irrepbar{5} + 7\irrepvar{1} \end{eqnarray} A search through the possible assignments of the three light chiral families to the \SU{12} irreps appearing in the anomaly-free set of Eq. (1) reveals the following selection for a satisfactory low scale phenomenology: \begin{equation}\label{eq:famassign} \begin{array}{lrcl} {\rm 1st\ Family:} & \irrepvar{10}\irrep{495_1} & \supset & u_L,\ u^c_L,\ d_L,\ e^c_L\\ & \irrepbarvar{5}\irrepbar{66_1} & \supset & d^c_L,\ e_L,\ \nu_{1L}\\ & \irrepvar{1}\irrepbar{792_1} & \supset & N^c_{1L}\\ {\rm 2nd\ Family:} & \irrepvar{10}\irrepbar{792_2} & \supset & c_L,\ c^c_L,\ s_L,\ \mu^c_L\\ & \irrepbarvar{5}\irrepbar{792_2} & \supset & s^c_L,\ \mu_L,\ \nu_{2L}\\ & \irrepvar{1}\irrepbar{220_2} & \supset & N^c_{2L}\\ {\rm 3rd\ Family:} & \irrepvar{10}\irrepbar{220_3} & \supset & t_L,\ t^c_L,\ b_L,\ \tau^c_L\\ & \irrepbarvar{5}\irrepbar{792_3} & \supset & b^c_L,\ \tau_L,\ \nu_{3L}\\ & \irrepvar{1}\irrepbar{12_3} & \supset & N^c_{3L} \end{array} \end{equation} \noindent Here the subscripts on the \SU{12} irreps refer to the family in question, while the numbers in parentheses are just the \SU{5} irreps chosen. Note that each \SU{5} family multiplet can be uniquely assigned to a different \SU{12} multiplet in the anomaly-free set according to (1). 
On the other hand, the remaining \SU{5} multiplets are unassigned but form conjugate pairs which become massive and decouple at the \SU{5} scale and are of no further interest. \section{Effective Theory Approach and Leading Order Tree Diagrams} We start with the \SU{12} model sketched above and take it to be supersymmetric. With a $\irrep{143_H}$ adjoint Higgs field present, the breaking of \SU{12} to \SU{5} can occur via \SU{12} $\rightarrow$ \SU{5} $\times$ \SU{7} $\times$ \U{1}, and in steps down to \SU{5} via a set of antisymmetric chiral superfield irreps appropriately chosen to preserve supersymmetry \cite{pFtwK1,pFtwK2}. Unbroken supersymmetry at the \SU{5} GUT scale allows us to deal only with tree diagrams in order to generate higher dimensional operators, for loop corrections are much suppressed. For this purpose, we introduce massive $\irrep{220} \times \irrepbar{220}$ and $\irrep{792} \times \irrepbar{792}$ fermion pairs at the \SU{12} scale. In addition, we introduce $\irrepvar{1}\irrep{66_H},\ \irrepvar{1}\irrepbar{66_H}$, and $\irrepvar{1}\irrep{220_H},\ \irrepvar{1}\irrepbar{220_H}$ conjugate Higgs pairs which acquire \SU{5} singlet VEVs at the SUSY \SU{5} GUT scale. Finally, doublets in $\irrepvar{5}\irrep{924_H}$ and $\irrepbarvar{5}\irrep{924_H}$ Higgs fields effect the electroweak symmetry breaking at the electroweak scale. The list then comprises the following: \begin{equation} \begin{array}{llc} \multicolumn{2}{c}{\rm Higgs\ Bosons} & {\rm Massive\ Fermions}\\ (\irrep{5})\higgs{924},& (\irrepbar{5})\higgs{924}, & \irrep{220}{\times} \irrepbar{220},\\ (\irrep{1})\higgs{66}, & (\irrep{1})\higgsbar{66}, & \irrep{792}{\times} \irrepbar{792}\\ (\irrep{1})\higgs{220},& (\irrep{1})\higgsbar{220}, & \\ (\irrep{24})\higgs{143} &\\ \end{array} \end{equation} \noindent For each element of the quark and lepton mass matrices, tree diagrams can then be constructed from three-point vertices which respect the \SU{12} and \SU{5} multiplication rules.
For illustration we present the lowest order tree diagram contributions to the 33 elements of the up and down quark mass matrices, taking into account the family assignments in \eqref{eq:famassign}. These are listed as ${\bf U33}$ and ${\bf D33}$, respectively, in \eqref{eq:figs}. The convention is adopted that the left-handed fields appear on the left and the left-handed conjugate fields appear on the right. \begin{equation} \label{eq:figs} \textbf{U33:}\quad\raisebox{-.38\height}{\includegraphics[scale=0.85]{TopQuarkMassTermDiagram-crop.pdf}}\qquad\qquad \textbf{D33:}\quad\raisebox{-.45\height}{\includegraphics[scale=0.85]{BottomQuarkMassTermDiagram-crop.pdf}} \end{equation} \noindent For convenience we introduce the following short-hand notation to describe each of these diagrams: \begin{equation}\label{eq:33diagrams} \begin{array}{llcll} {\bf U33}: & \irrepvar{10}\irrepbar{220_3}.\irrepvar{5}\irrep{924_H}.\irrepvar{10} \irrepbar{220_3}, & \qquad & {\bf D33}: & \irrepvar{10}\irrepbar{220_3}.\irrepbarvar{5}\irrep{924_H}.\irrepbarvar{5}\irrepbar{220} \times \irrepvar{5}\irrep{220}.\irrepvar{1}\irrep{66_H}.\irrepbarvar{5} \irrepbar{792_3}.\\ \end{array} \end{equation} \noindent The leading order term for {\bf U33} is seen to have dim-4, while that for {\bf D33} has dim-5, due to the $\irrepvar{1}\irrep{66_H}$ \SU{5} Higgs singlet insertion resulting in one extra external Higgs field. The full sets of leading order up- and down-quark diagrams for each matrix element are presented in Table I, while those for the Dirac and Majorana neutrino diagrams are listed in Table II. It is rather remarkable that only one diagram for each matrix element appears at leading order for all four mass matrices. \input{QuarkMassTermDiagramsTablec.tex} \section{Mass Matrices and Mixings} Given the leading-order diagrams for each matrix element in Tables 1 and 2, we can then construct the quark and lepton mass matrices as follows.
To each diagram corresponds a coupling constant or prefactor, $h^u_{ij},\ h^d_{ij},\ h^{dn}_{ij}$ or $h^{mn}_{ij}$ for the $ij$th element of the appropriate mass matrix, which is assumed to be of order one at the \SU{12} unification scale, as naturalness predicts. Every \SU{5} Higgs singlet insertion in higher-order tree diagrams introduces one power of $\epsilon \equiv M_{\SU{5}}/M_{\SU{12}} \sim 1/50$ through the appearance of the ratio of the singlet Higgs VEV to the mass of the conjugate fermion fields after the latter are integrated out. Finally, as a result of the electroweak spontaneous symmetry breaking, the ${\bf 924_H}$ acquires a weak scale VEV, ${\rm v}$. Hence for the two quark diagrams illustrated, the matrix element contributions are \begin{equation} \label{eq:33contributions} {\bf U33}: h^u_{33}{\rm v}\ t^T_Lt^c_L, \hspace{1in} {\bf D33}: h^d_{33}\epsilon {\rm v}\ b^T_L b^c_L. \end{equation} \noindent Note that as a result of the chiral \SU{5} irrep structure, the lowest order tree diagram contribution to the 33 element of the charged lepton mass matrix is just the reflection of the diagram for the down quark 33 mass matrix element about the center of the diagram. Thus its 33 matrix element contribution is just the transpose of {\bf D33}. More generally, the prefactors are related by $h^\ell_{ij} = h^d_{ji}$. By the same reasoning, it is clear that the up quark mass matrix elements are symmetric under interchange of $i$ and $j$.
\input{NeutrinoMassTermDiagramsTablec.tex} From Table I we then see that the two quark and charged lepton mass matrices are given by \begin{equation} \begin{array}{rl} M_U =& \left(\matrix{ h^u_{11}\epsilon^4 & h^u_{12}\epsilon^3 & h^u_{13}\epsilon^2 \\ h^u_{12}\epsilon^3 & h^u_{22}\epsilon^2 & h^u_{23}\epsilon \\ h^u_{13}\epsilon^2 & h^u_{23}\epsilon & h^u_{33} \\}\right) \!{\rm v}\:,\\[0.3in] M_D =& \left(\matrix{ h^d_{11}\epsilon^4 & h^d_{12}\epsilon^3 & h^d_{13}\epsilon^3 \\ h^d_{21}\epsilon^3 & h^d_{22}\epsilon^2 & h^d_{23}\epsilon^2 \\ h^d_{31}\epsilon^2 & h^d_{32}\epsilon & h^d_{33}\epsilon \\}\right) \!{\rm v}\:,\\[0.3in] M_L =& \left(\matrix{ h^\ell_{11}\epsilon^4 & h^\ell_{12}\epsilon^3 & h^\ell_{13}\epsilon^2 \\ h^\ell_{21}\epsilon^3 & h^\ell_{22}\epsilon^2 & h^\ell_{23}\epsilon \\ h^\ell_{31}\epsilon^3 & h^\ell_{32}\epsilon^2 & h^\ell_{33}\epsilon \\} \right)\!{\rm v} = M_D^T. \end{array} \end{equation} \noindent While the up-quark matrix is symmetric, the down-quark and charged-lepton mass matrices are doubly lopsided in that the terms with $h^d_{23}$ and $h^\ell_{32}$ are suppressed by one extra power of $\epsilon$ compared with the $h^d_{32}$ and $h^\ell_{23}$ terms, respectively. For $M_D$, for example, this implies that a larger right-handed rotation than left-handed rotation is needed to bring the down quark matrix into diagonal form, while the opposite is true for $M_L$.
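The effect of the lopsided structure on the diagonalizing rotations can be checked numerically. The following sketch is purely illustrative: it fills the $\epsilon$-power pattern of $M_D$ with arbitrary order-one prefactors of our own choosing (not fitted values), with $\epsilon = 0.02 \sim 1/50$, and extracts the dominant left- and right-handed singular vectors. The second-generation admixture comes out $\order{\epsilon}$ on the left but $\order{1}$ on the right, as claimed.

```python
import numpy as np

eps = 0.02  # epsilon = M_SU(5)/M_SU(12) ~ 1/50
# Doubly lopsided down-quark pattern with illustrative O(1) prefactors
# (hypothetical numbers, NOT the fitted ones)
h = np.array([[1.0, 0.9, 1.1],
              [0.8, 1.2, 0.7],
              [1.3, 0.6, 1.0]])
powers = np.array([[4, 3, 3],
                   [3, 2, 2],
                   [2, 1, 1]])
M_D = h * eps**powers

# M_D = U_L diag(s) V_R^dagger; numpy orders singular values descending,
# so column 0 corresponds to the heaviest (third-generation) state
U_L, s, V_Rh = np.linalg.svd(M_D)
print(abs(U_L[1, 0]))   # 2nd-generation admixture on the left: O(eps)
print(abs(V_Rh[0, 1]))  # same admixture on the right: O(1), the lopsided feature
```

The asymmetry between $|U_L|$ and $|V_R|$ is exactly the statement that a larger right-handed than left-handed rotation diagonalizes $M_D$.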
With the heavy right-handed neutrinos assigned to \SU{5} singlets in \eqref{eq:famassign}, the resulting Dirac and Majorana neutrino 33 mass matrix elements receive the following dim-4 contributions as seen from Table II: \begin{equation} \label{eq:neutrinocont} \begin{array}{rl} {\bf DN33}: h^{dn}_{33} {\rm v} \overline{\nu}_{3L}N^c_{3L}, \qquad & \qquad {\bf MN33}: h^{mn}_{33} \Lambda_R N^{c^T}_{3L} N^c_{3L}.\\ \end{array} \end{equation} \noindent Here $\Lambda_R$ represents the right-handed mass scale, typically of $\order{10^{14}}$ GeV, whereas the \SU{5} SUSY GUT scale is $2 \times 10^{16}$ GeV, as required for gauge coupling unification. Again, a factor of $\epsilon$ enters for every singlet Higgs insertion in higher-order diagrams. The two neutrino mass matrices can then be read off from Table II, and we find \begin{equation} \begin{array}{rl} M_{DN} =& \left(\matrix{ h^{dn}_{11}\epsilon^3 & h^{dn}_{12}\epsilon^2 & h^{dn}_{13}\epsilon \\ h^{dn}_{21}\epsilon^2 & h^{dn}_{22}\epsilon & h^{dn}_{23} \\ h^{dn}_{31}\epsilon^2 & h^{dn}_{32}\epsilon & h^{dn}_{33} \\}\right)\!{\rm v}\:,\\[0.3in] M_{MN} =& \left(\matrix{ h^{mn}_{11} & h^{mn}_{12}\epsilon & h^{mn}_{13}\epsilon^2 \\ h^{mn}_{12}\epsilon & h^{mn}_{22}\epsilon^2 & h^{mn}_{23}\epsilon^3 \\ h^{mn}_{13}\epsilon^2 & h^{mn}_{23}\epsilon^3 & h^{mn}_{33} \\}\right)\!\Lambda_R,\\ \end{array} \end{equation} \noindent where ${M_{DN}}$ is also doubly lopsided, while ${M_{MN}}$ is complex symmetric as usual. The symmetric light-neutrino mass matrix is obtained via the Type I seesaw mechanism: \begin{equation} M_\nu = -M_{\rm DN}M_{\rm MN}^{-1}M_{\rm DN}^T.
\end{equation} Keeping only the leading-order terms in $\epsilon$ for each matrix element, we find \begin{equation} M_\nu \approx \frac{{\rm v}^2}{\Lambda_R}\times\!\! \left(\begin{array}{ccc} \epsilon ^2 \left(\frac{h^{dn^2}_{12} h^{mn}_{11}}{h^{mn^2}_{12}{-}h^{mn}_{11} h^{mn}_{22}}{-}\frac{h^{dn^2}_{13}}{h^{mn}_{33}}\right) & \epsilon \left(\frac{h^{dn}_{12} h^{dn}_{22} h^{mn}_{11}}{h^{mn^2}_{12}{-}h^{mn}_{11} h^{mn}_{22}}{-}\frac{h^{dn}_{13} h^{dn}_{23}}{h^{mn}_{33}}\right) & \epsilon \left(\frac{h^{dn}_{12} h^{dn}_{32} h^{mn}_{11}}{h^{mn^2}_{12}{-}h^{mn}_{11} h^{mn}_{22}}{-}\frac{h^{dn}_{13} h^{dn}_{33}}{h^{mn}_{33}}\right)\\[0.1in] \epsilon \left(\frac{h^{dn}_{12} h^{dn}_{22} h^{mn}_{11}}{h^{mn^2}_{12}{-}h^{mn}_{11} h^{mn}_{22}}{-}\frac{h^{dn}_{13} h^{dn}_{23}}{h^{mn}_{33}}\right) & \frac{h^{dn^2}_{22} h^{mn}_{11}}{h^{mn^2}_{12}{-}h^{mn}_{11} h^{mn}_{22}}{-}\frac{h^{dn^2}_{23}}{h^{mn}_{33}} & \frac{h^{dn}_{22} h^{dn}_{32} h^{mn}_{11}}{h^{mn^2}_{12}{-}h^{mn}_{11} h^{mn}_{22}}{-}\frac{h^{dn}_{23} h^{dn}_{33}}{h^{mn}_{33}} \\[0.1in] \epsilon \left(\frac{h^{dn}_{12} h^{dn}_{32} h^{mn}_{11}}{h^{mn^2}_{12}{-}h^{mn}_{11} h^{mn}_{22}}{-}\frac{h^{dn}_{13} h^{dn}_{33}}{h^{mn}_{33}}\right) & \frac{h^{dn}_{22} h^{dn}_{32} h^{mn}_{11}}{h^{mn^2}_{12}{-}h^{mn}_{11} h^{mn}_{22}}{-}\frac{h^{dn}_{23} h^{dn}_{33}}{h^{mn}_{33}} & \frac{h^{dn^2}_{32} h^{mn}_{11}}{h^{mn^2}_{12}{-}h^{mn}_{11} h^{mn}_{22}}{-}\frac{h^{dn^2}_{33}}{h^{mn}_{33}}\\ \end{array} \right) \end{equation} which does not involve the prefactors $h^{dn}_{11}$, $h^{dn}_{21}$, $h^{dn}_{31}$, $h^{mn}_{13}$ and $h^{mn}_{23}$. The light-neutrino mass matrix exhibits a much milder hierarchy than the up-type and down-type quark mass matrices, as can be seen from the pattern of powers of $\epsilon$. A mild or flat hierarchy of $M_\nu$ is conducive to obtaining large mixing angles and similar light neutrino masses.
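As a consistency check on this seesaw structure, one can fill the $\epsilon$-power patterns of $M_{DN}$ and $M_{MN}$ with arbitrary order-one prefactors (again illustrative numbers, not the fitted ones, with $\epsilon = 0.02$) and form $M_\nu$ numerically: the result is symmetric, its 11 element is $\order{\epsilon^2}$, and its 2-3 block is $\order{1}$, i.e.\ the hierarchy is indeed mild. A minimal sketch, in units of ${\rm v}^2/\Lambda_R$:

```python
import numpy as np

eps = 0.02  # epsilon = M_SU(5)/M_SU(12) ~ 1/50
# Illustrative O(1) prefactors (hypothetical; NOT the fitted values)
hdn = np.array([[ 0.7,  0.9, -1.1],
                [ 1.2, -0.8,  0.6],
                [-0.5,  1.3,  1.0]])
hmn = np.array([[ 0.9, -1.2,  0.7],
                [-1.2,  1.1,  0.8],
                [ 0.7,  0.8, -0.6]])   # symmetric, as for a Majorana matrix
M_DN = hdn * eps**np.array([[3, 2, 1], [2, 1, 0], [2, 1, 0]])
M_MN = hmn * eps**np.array([[0, 1, 2], [1, 2, 3], [2, 3, 0]])

# Type I seesaw, in units of v^2/Lambda_R
M_nu = -M_DN @ np.linalg.inv(M_MN) @ M_DN.T
print(M_nu)   # symmetric; 11 element ~ eps^2, 2-3 block ~ O(1)
```

The flat 2-3 block of $M_\nu$ is what makes large atmospheric mixing natural in this setup.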
Furthermore, one observes that the light neutrino mass matrix obtained via the seesaw mechanism involves the doubly lopsided Dirac neutrino mass matrix twice. The lopsided feature of $M_{DN}$ is such as to require a large left-handed rotation to bring $M_\nu$ into diagonal form. \section{Numerical Results} From the above up and down quark, charged lepton and light neutrino mass matrices, one can diagonalize the corresponding Hermitian matrices, $MM^\dagger$, in the usual manner to obtain the mass eigenvalues and the unitary transformations, $U$, effecting the diagonalizations. The Cabibbo-Kobayashi-Maskawa (CKM) quark mixing matrix and the corresponding Pontecorvo-Maki-Nakagawa-Sakata (PMNS) lepton mixing matrix then follow as \begin{equation} \begin{array}{rl} V_{CKM} = U^\dagger_U U_D, \qquad\qquad V_{PMNS} = U^\dagger_L U_\nu. \end{array} \end{equation} To obtain numerical results for the model predictions, we evaluate the mass matrices at the top quark mass scale and use only real prefactors, in order to limit the number of fit parameters and obtain good fit convergence. There are 6 prefactors each for the symmetric up quark and Majorana matrices, and 9 each for the lopsided down quark and Dirac neutrino matrices, but 5 of them do not appear in the light neutrino mass matrix, making a total of 25 parameters. In addition, we have one parameter for the right-handed neutrino scale, $\Lambda_R$, plus a value for $\epsilon$, which we fix at $\epsilon = 1/6.5^2 = 0.0237$, again for good fit convergence, for a grand total of 26 adjustable fit parameters. To avoid correlations among the data, we make use of the 9 quark and charged lepton masses plus the 3 neutrino $\Delta m^2$'s and the 18 CKM and PMNS mixing parameters taken to be real, for a total of 30 data points. We refer the reader to our published paper~\cite{afksu12} for full details of the fitting procedure, where we have included the latest best value for the reactor neutrino mixing angle, $\theta_{13}$.
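The diagonalization step itself can be made concrete with a short numerical sketch. It uses the fitted $M_U$ and $M_D$ prefactors quoted in \eqref{eq:fitmassmatrices}, together with an assumed electroweak VEV ${\rm v} = 174$ GeV (the VEV value and the diagonalization code are our illustration of the procedure, not a substitute for the full fit):

```python
import numpy as np

eps = 1 / 6.5**2          # 0.0237, the value fixed in the fit
v = 174.0                 # assumed electroweak VEV in GeV

# Fitted O(1) prefactors for M_U and M_D, eq. (eq:fitmassmatrices)
hu = np.array([[-1.1, 7.1, 5.6], [7.1, -6.2, -0.10], [5.6, -0.10, -0.95]])
hd = np.array([[-6.3, 8.0, -1.9], [-4.5, 0.38, -1.3], [0.88, -0.23, -0.51]])
M_U = hu * eps**np.array([[4, 3, 2], [3, 2, 1], [2, 1, 0]]) * v
M_D = hd * eps**np.array([[4, 3, 3], [3, 2, 2], [2, 1, 1]]) * v

def diagonalize_left(M):
    # Diagonalize the Hermitian combination M M^T (real prefactors, so
    # M^dagger = M^T); eigh returns eigenvalues ascending, hence the
    # columns of U run from lightest to heaviest generation
    w, U = np.linalg.eigh(M @ M.T)
    return np.sqrt(np.abs(w)), U

m_up, U_U = diagonalize_left(M_U)
m_dn, U_D = diagonalize_left(M_D)
V_CKM = U_U.T @ U_D       # real prefactors -> orthogonal rotations
print(m_up)               # hierarchical: m_u << m_c << m_t, in GeV
```

By construction $V_{CKM}$ is orthogonal, and the up-quark eigenvalues show the expected strong hierarchy generated by the $\epsilon$ powers.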
In \cite{afksu12} one can find a table giving the phenomenological mass and mixing data entering the fit, as well as the theoretical mass and mixing results obtained from the fitting procedure. The best fit was obtained with a normal neutrino mass hierarchy with $\Lambda_R = 7.4 \times 10^{14}$ GeV and the following mass matrices: \begin{equation} \label{eq:fitmassmatrices} \begin{array}{rlrl} M_U =& \left(\matrix{ -1.1\epsilon^4 & 7.1\epsilon^3 & 5.6\epsilon^2 \\ 7.1\epsilon^3 & -6.2\epsilon^2 & -0.10\epsilon \\ 5.6\epsilon^2 & -0.10\epsilon & -0.95 \\}\right) \!{\rm v}\:,\qquad\qquad & M_D =& \left(\matrix{ -6.3\epsilon^4 & 8.0\epsilon^3 & -1.9\epsilon^3 \\ -4.5\epsilon^3 & 0.38\epsilon^2 & -1.3\epsilon^2 \\ 0.88\epsilon^2 & -0.23\epsilon & -0.51\epsilon \\}\right) \!{\rm v}\: = M^T_L, \\[0.3in] M_{DN} =& \left(\matrix{ h^{dn}_{11}\epsilon^3 & 0.21\epsilon^2 & -2.7\epsilon \\ h^{dn}_{21}\epsilon^2 & -0.28\epsilon & -0.15 \\ h^{dn}_{31}\epsilon^2 & 2.1\epsilon & 0.086 \\}\right)\!{\rm v}\:,\qquad\qquad & M_{MN} =& \left(\matrix{ -0.72 & -1.5\epsilon & h^{mn}_{13}\epsilon^2 \\ -1.5\epsilon & 0.95\epsilon^2 & h^{mn}_{23}\epsilon^3 \\ h^{mn}_{13}\epsilon^2 & h^{mn}_{23}\epsilon^3 & 0.093 \\}\right)\!\Lambda_R,\\[0.3in] M_\nu =& \left(\matrix{ -81\epsilon^2 & -4.3\epsilon & 2.4\epsilon \\ -4.3\epsilon & -0.25 & 0.28 \\ 2.4\epsilon & 0.28 & -1.1 \\}\right)\ {\rm v}^2/{\Lambda_R}.\\[0.3in] \end{array} \end{equation} \noindent Note that for this best fit all but three of the prefactors in the above matrices lie between 0.1 and 10 in magnitude. The five independent prefactors, $h^{dn}_{11},\ h^{dn}_{21},\ h^{dn}_{31},\ h^{mn}_{13}$ and $h^{mn}_{23}$, do not influence the fit and remain undetermined as noted earlier. For this best fit we find the neutrino mass values \begin{equation} m_1 = 0,\quad m_2 = 8.65,\quad m_3 = 49.7 {\rm \ meV};\qquad M_1 = 1.67 \times 10^{12}, \quad M_2 = 6.85 \times 10^{13},\quad M_3 = 5.30 \times 10^{14}\ {\rm GeV}.
\end{equation} \noindent In addition, the best fit favors $\delta_{CP} = \pi$ for the leptonic CP Dirac phase. The value of $\epsilon$ used then implies that the \SU{12} GUT scale is about $M_{\SU{5}}/\epsilon = 8.4 \times 10^{17}$ GeV, just below the reduced Planck scale, where we have used $2 \times 10^{16}$ GeV for the \SU{5} unification scale. All remaining mass and mixing parameters are fit quite well by the model; however, since $M_L$ is just the transpose of $M_D$ to leading order in $\epsilon$, the Georgi-Jarlskog relations~\cite{gj} are not satisfied for the down quarks and charged leptons. We have checked that the addition of an adjoint \higgs{143} Higgs field, whose VEV points in the $B - L$ direction, contributes to $M_D$ and $M_L$ at one higher order of $\epsilon$, so that the $M_L = M^T_D$ relation is broken, and more accurate values can be obtained for the down quark and charged lepton mass eigenvalues. \section{Summary} A unified \SU{12} SUSY GUT model was obtained by a brute-force computer scan over many \SU{12} anomaly-free sets of irreps containing 3 \SU{5} chiral families under the assumption that the symmetry is broken in stages from \SU{12} $\rightarrow$ \SU{5} $\rightarrow$ SM. In doing so, we looped over all \SU{12} fermion and Higgs assignments and required good fits to the input data. For this purpose an effective theory approach was used to determine the leading order tree-level diagrams for the dim-(4 + n) matrix elements in powers of $\epsilon^n$, where $\epsilon$ is the ratio of the \SU{5} to the \SU{12} scale. The best fit was obtained by requiring all prefactors to be \order{1}, but the large number of them implies just a few predictions. Since no discrete flavor symmetry is adopted, problems with its breaking by gravity, with domain walls, and with the explanation of its origin are avoided~\cite{adv1,adv2,adv3}.
On the other hand, with such a large \SU{N} gauge group, a host of heavy fermions is predicted, which are integrated out at the \SU{5} scale. The \SU{12} model considered is just one of many possibilities (including other assignments and larger \SU{N} groups), but its features were among the most attractive found: Each \SU{5} family supermultiplet can be assigned to a different \SU{12} multiplet in the anomaly-free set. In the model considered, only one diagram appears for each matrix element for all 5 mass matrices, but some additional contribution is needed to obtain the Georgi-Jarlskog relations. Among the less attractive features, we point out that the prefactors are determined at the top quark scale. They should be run to the \SU{5} unification scale to test their naturalness. The fit considers only real prefactors, so CP violation is not accommodated, but the fit preferred $\delta_{CP} = \pi$ over $\delta_{CP} =0$ for the leptonic CP phase. The complete breaking of \SU{12} $\rightarrow$ \SU{5} while preserving supersymmetry needs to be worked out in more detail and is under further study. \begin{theacknowledgments} One of us (CHA) thanks Kaladi Babu, Rabi Mohapatra, and Barbara Szczerbinska for the kind invitation to attend and present a talk on this work at the CETUP* Workshop on Neutrino Physics and Unification in Lead, SD, July 23--29, 2012. He especially appreciated some constructive suggestions by participants at the Workshop. He thanks the Fermilab Theoretical Physics Department for its kind hospitality, where part of this work was carried out. The work of RPF was supported by a fellowship within the Postdoc-Programme of the German Academic Exchange Service (DAAD). The work of RPF and TWK was supported by US DOE grant E-FG05-85ER40226. Fermilab is operated by Fermi Research Alliance, LLC under Contract No. De-AC02-07CH11359 with the U.S. Department of Energy. \end{theacknowledgments} \bibliographystyle{aipproc}
\section{Introduction} Even though the word {\em scattering} appears to imply {\em transport}, studies of Helmholtz scatterers have shown effects of {\em trapped} periodic orbits \cite{Predrag}. As there is nothing special about the Helmholtz equation, similar effects are expected for the wave equations of other media, such as dielectric or elastodynamic ones. We shall discuss the relation between periodic trapped rays and the scattering determinant corresponding to a medium with several polarizations, each with their own velocity. The example to be treated is the case of elastic wave propagation in a solid punctured by a finite number of voids. These systems have the feature that a ray hitting a boundary can either reflect or refract. In particular, ray splitting occurs when the polarization changes. This leads to a ray dynamics which is no longer unique, since -- in general -- a single polarized ray evolves into a tree of rays. A similar behaviour is observed in microwave resonators with dielectrics: characteristically, rays can either be reflected or transmitted at the boundaries of the dielectrics. \section{Scalar case} \label{sect:ScalarCase} The Helmholtz equation \begin{equation} (\Delta + k^2) \psi = 0 \end{equation} describes the wave propagation of a scalar field $\psi$ of wave number $k$ in a homogeneous and isotropic medium. Furthermore, if obstacles are embedded in this background, the scalar field typically has to satisfy Dirichlet ($ \psi = 0$) or Neumann ($\partial \psi/\partial n = 0$) boundary conditions on the surfaces of the obstacles. For an exterior problem, i.e.\ a scattering problem, the geometry of the scatterers and the corresponding boundary conditions are usually specified and the typical goal is to calculate the scattering matrix $\multiScat$. In short, this matrix contains the information on how an incoming wave transforms into a superposition of outgoing scattering solutions.
From the knowledge of the scattering matrix several interesting quantities can be calculated: cross-sections, resonances, phase shifts, time delays, etc. One way of finding the scattering matrix is via the so-called {\em null-field method} \cite{lloyd,lloyd_smith,berry81,gaspard,AW_report,PetersonStrom,bostrom,hwg97,cavityThesis}. A given field on the boundary of one scatterer gives rise to secondary fields on the full set of boundaries, including the one at infinity. These fields are calculated via boundary integral identities. If a basis is chosen for each boundary, the initial and the secondary fields can be expanded in these bases. The matrices that describe their relationships can be used to construct the scattering matrix~\cite{gaspard}. In this way the application of the standard null-field method determines the $\multiScat$ matrix as \begin{equation} \label{scatCMD} \multiScat = \identityMat - i \, {\sf C} \, \cluster^{-1} {\sf D}, \end{equation} where the ${\sf C}$ and ${\sf D}$ matrices connect the incoming, respectively, outgoing waves to the interior scattering boundaries and where the matrix $\cluster$ relates the waves at one interior scattering boundary to the waves at a different one. $\cluster$ itself can be written in terms of a transfer matrix $\kernel$ as \begin{equation} \cluster=\identityMat + \kernel . \end{equation} The multi-scattering expansion arises when \begin{equation} \cluster^{-1} = \identityMat - \kernel + \kernel^2 - \dots \end{equation} is inserted in \refeq{scatCMD} \cite{lloyd,lloyd_smith,berry81,gaspard}. This signals that $\cluster$ has the role of an inverse multi-scattering matrix. In some situations the exact matrix elements of these infinite-dimensional matrices, $\kernel$, ${\sf C}$, and ${\sf D}$, can be derived. Of course these matrices depend crucially on the scattering problem in question, but the overall structure is the same.
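The role of $\cluster$ as an inverse multi-scattering matrix can be illustrated on a finite toy model: whenever the spectral radius of $\kernel$ is below one, the alternating multi-scattering series reproduces $\cluster^{-1}$. A minimal sketch, with a random matrix standing in for the physical kernel (an assumption made purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
G = rng.standard_normal((6, 6))
A = G / (2 * np.linalg.norm(G, 2))   # toy kernel, spectral radius <= 1/2
M = np.eye(6) + A                    # M = 1 + A

# Multi-scattering (Neumann) series: M^{-1} = 1 - A + A^2 - ...
series = np.zeros((6, 6))
term = np.eye(6)
for _ in range(60):
    series += term          # accumulate (-A)^n
    term = term @ (-A)

print(np.max(np.abs(series - np.linalg.inv(M))))  # tiny residual
```

For the physical, infinite-dimensional $\kernel$ the analogous convergence question is exactly the trace-class issue discussed in the second half of this paper.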
In general, the null--field method can be applied to solve numerous multi-scattering problems in various media. An explicit example is the scattering of a scalar wave, propagating in the two-dimensional plane, off a finite number of non-overlapping hard discs that have fixed positions and Dirichlet boundary conditions. For the discussion here only the matrix $\cluster$ is of interest. The fields at two different discs $j$ and $j'$, expanded in basis states $\exp(i m \phi_j)$ and $\exp(i m' \phi_{j'})$ (in polar coordinates, with $m,m'=0,\pm 1,\pm 2,\cdots$ the two-dimensional angular momentum quantum numbers), are connected via the (transfer) kernel~\cite{gaspard,AW_report,aw_chaos,aw_nucl,wh98,threeInARow} \begin{equation} \label{scalarCluster} [\kernel]_{m m'}^{j j'} = (1-\delta_{j j'}) \frac{J_m (k a_j)}{H_{m'}^{(1)}(k a_{j'})} \, H_{m-m'}^{(1)}(k R_{j j'}) \, e^{i m \alpha_{j'}^{(j)}- i m'(\alpha_{j}^{(j')}-\pi)} \,. \end{equation} Here $R_{jj'}$ is the center-to-center distance of the discs of radii $a_j$, and $\alpha_{j'}^{(j)}$ is the angle to the center of cavity $j'$ in the coordinate system of cavity $j$. All this follows from the application of the above-sketched null--field method to the scalar-wave problem. Less familiar is the ray limit of the pertinent multi-scattering determinant $\mbox{\rm Det}\, \cluster(k)$ \cite{gaspard,AW_report}. As the various building blocks of the scattering matrix are known in analytical form, it is possible to calculate the short-wavelength limit explicitly. The main result is that asymptotically~\cite{AW_report} \begin{equation} \label{ScalarFredGeom} \mbox{\rm Det}\, \cluster(k) \approx F =\left.
\exp \left(- \sum_p \sum_{r = 1}^\infty \frac{1}{r} \, \frac{e^{i r \,( k L_p-\nu_p\pi/2) }}{|\mbox{\rm Det}\, (\identityMat-\stab_p^r)|^{1/2}}\, z^{r n_p} \right)\right|_{z=1} , \end{equation} where $p$ is a prime orbit (a periodic orbit that cannot be split into shorter ones), $r$ is the number of its repeats, $L_p$ is its length, $\nu_p$ is its Maslov index, $\stab_p$ is its monodromy matrix, and $n_p$ is its number of bounces on the scatterers. The right-hand side of \refeq{ScalarFredGeom} is the semiclassical {\em spectral} determinant~\cite{Predrag,gutbook}. The parameter $z$, which is put equal to unity at the end, serves for the formal expansion of $F$ in powers of $z$ up to a sufficiently high order $N_{max} \geq n_p$. A {\em periodic orbit} is defined here as a closed ray in phase space obeying the law of reflection at the scatterers. Our goal is to generalize these types of problems to two-dimensional elastodynamics. \section{Elastodynamics} In isotropic elasticity the wave equation in the frequency domain $\omega$ is \begin{equation} \mu \Delta\, {\bf u} + (\lambda + \mu) {\bf \nabla ( \nabla \cdot u)} + \rho \omega^2 {\bf u} = 0 \,, \label{pde} \end{equation} where ${\bf u}({\bf x})$ is the displacement vector field in the body, $\lambda$, $\mu$ are the material-dependent Lam\'e coefficients and $\rho$ is the density \cite{lAndl,auld}. This wave equation admits two different polarizations: longitudinal L and transverse T with velocities \begin{equation} c_L = \sqrt{\frac{\lambda + 2 \mu}{\rho}} \qquad \mathrm{and} \qquad c_T = \sqrt{\frac{\mu}{\rho}} \,. \end{equation} The longitudinal and transverse waves correspond to {\em pressure} and {\em shear} deformations, respectively (or to the {\em primary} and {\em secondary} arriving pulses of seismology).
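For a quick numerical feel, the two wave speeds can be evaluated for assumed material constants; the Lam\'e coefficients below are illustrative round numbers, not data from the text. Since $\lambda + \mu > 0$ for any stable material, $c_L > c_T$ always holds, and the ratio $c_T/c_L$ controls how a ray changes angle when it converts polarization:

```python
import math

# Illustrative material constants (assumed): lambda, mu in Pa, rho in kg/m^3
lam, mu, rho = 4.0e9, 1.0e9, 1.0e3

c_L = math.sqrt((lam + 2 * mu) / rho)   # pressure (longitudinal) speed
c_T = math.sqrt(mu / rho)               # shear (transverse) speed
assert c_L > c_T                        # guaranteed by lam + mu > 0

# A longitudinal ray incident at theta_L converts partly into a
# transverse ray at a smaller angle, since c_T < c_L
theta_L = math.radians(30.0)
theta_T = math.asin(math.sin(theta_L) * c_T / c_L)
print(math.degrees(theta_T))
```

This velocity mismatch between the two polarizations is precisely what drives the refraction and ray splitting discussed next.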
This leads to the law of refraction for incoming plane waves \be{Snell} \frac{c_L}{c_T}=\frac{\sin \theta_L}{\sin \theta_T}, \end{equation} where $\theta_L$, $\theta_T$ denote the angle of incidence or reflection of the longitudinal and transverse wave, respectively, measured with respect to the normal to the surface. The stress tensor in elasticity has the form \begin{equation} \sigma_{ij} = \lambda \, \partial_k u_k \delta_{ij} + \mu \left(\partial_i u_j + \partial_j u_i\right) \,. \end{equation} The boundary conditions considered here are {\it free}. Hence \be{freebound} {\bf t}({\bf u}) \equiv \bm{\sigma}({\bf u}) \cdot {\bf n} = \biggl[\lambda \Bigl({\bf \nabla} \cdot \bf{u}\Bigr) {\bf 1} + \mu \Bigl\{ \bigl({\bf \nabla} \bf{u}\bigr) + \bigl({\bf \nabla} \bf{u}\bigr)^{\mathsf{T}}\Bigr\}\biggr] \cdot {\bf{n}} = {\bf 0} \end{equation} for the displacement field at the boundary where $\mathbf{ n}$ denotes the normal to the boundary and $\mathsf{T}$ indicates a transposition. The operator $\mathbf{ t}$ refers to the traction. \section{Scattering determinant} \begin{figure}[t] \rotatebox{-90}{ \includegraphics[width=.5\textwidth]{scatZone.eps}} \caption{\label{scatZonePl} General zone of two--dimensional cavity scattering. Here the unit vectors ${\bf n}$ denote the outside normals to the surfaces $\partial_i$ ($i=1,2,3$) of the cavities or to the surface $\partial_\infty$ of the total scattering zone. Furthermore, an incoming plane wave and a scattered cylindrical wave are shown.} \end{figure} As mentioned, in this treatment the medium corresponds to a cylindrical solid made, for simplicity, of an isotropic and homogeneous elastic material. The scattering geometry consists of parallel cylindrical voids, which are perpendicular to the endcaps of the overall cylinder, see \reffig{scatZonePl}. If the fields are stimulated by in-phase line-sources parallel to the voids, this symmetry is respected and the problem reduces to one of two-dimensional elasticity, see Fig. 
1, referred to as {\em plane strain}. This scattering problem generalizes the simpler scalar scattering off discs in two dimensions, mentioned in \refsect{sect:ScalarCase}. Similar to the scalar case \cite{AW_report,wh98,hwg97}, the scattering determinant may be factorized into an incoherent single-scatterer part (in terms of the determinants over the single-scattering matrices ${\sf S}^{(1)j}(\omega)$, $j=1,2,\cdots$) and the genuine multi-scatterer part (in terms of the determinant of the inverse multi-scattering matrix $\cluster(\omega)$) \cite{cavityThesis} \begin{equation} \label{detFactor} {\det \,} {\multiScat}(\omega) = \left\{\! \prod_{{j} \in \mathrm{Cavities}}\det{} {\bigl[ {\sf S}^{(1)j}(\omega) \bigr]} \!\right \} \frac{ {\mbox{\rm Det}\, }\bigl[{\cluster}(\omega^\ast)^\dagger \bigr ]} { {\mbox{\rm Det}\, }\bigl[{\cluster}(\omega) \bigr]}. \end{equation} When the {\em relative} positions of the cavities are changed, only the latter factor, composed of the {\em cluster} determinant $\mbox{\rm Det}\, \, {\sf M}(\omega)$, changes: \begin{equation} \label{clusterDef} {\sf M} = {\bf 1}+ {\sf A}\,, \end{equation} \begin{eqnarray*} &&\Bigl[{\kernel}^{jj'}_{ll'}\Bigr]_{i i'} = (1\!-\!\delta_{jj'}) \frac{a_j}{a_{j'}} \sum_{\sigma=L}^T\sum_{\sigma'=L}^T \bigl[ {\sf t}_l^{(J) j} \bigr]_{i \sigma} \Bigl[{\sf T}^{(+) jj'}_{ll'}\Bigr]_{\sigma\sigma'} \bigl[ {\sf t}_{l'}^{(+) j'} \bigr]^{-1}_{\sigma' i'}\,, \\ &&\Bigl[ {\sf T}^{(+) jj'}_{ll'}\Bigr]_{\sigma \sigma'}= \delta_{\sigma \sigma'} H_{l-l'}^{(+)}(k_{\sigma} R_{jj'}) e^{il \alpha_{j'}^{(j)}- il'(\alpha_{j}^{(j')}-\pi)} . \end{eqnarray*} The indices $i,i' \in \{ r,\phi \}$ are the labels of the two-dimensional polar coordinates and the indices $\sigma,\sigma' \in \{ L,T \}$ refer to the two polarization states.
The diagonal matrix $\Bigl[ {\sf T}^{(+) jj'}_{ll'}\Bigr]$ may be interpreted as a translation matrix acting on the polarized scattering states, where $k_L=\omega/c_L$ and $k_T=\omega/c_T$ are the pertinent wave numbers~\cite{PetersonStrom,bostrom}. As in \refeq{scalarCluster}, $R_{jj'}$ is the center-to-center distance of the circular cavities of radii $a_j$ and $\alpha_{j'}^{(j)}$ is the angle to the center of cavity $j'$ in the coordinate system of cavity $j$. The single-cavity scattering matrices (enumerated by the cavity index $j=1,2,\dots$) are separable in the (two-dimensional) angular momentum $l=0,\pm 1,\pm 2, \cdots$ due to the rotational symmetry. They have the general form: \begin{equation} \label{singleCavityDef} \singleScat{j}{l} = -\tracMat{l}{+}{j}^{-1} \cdot \tracMat{l}{-}{j} \end{equation} in terms of the $2\times 2$ traction {\em matrices} $\tracMat{l}{Z}{j}$ which incorporate the {\em free} boundary conditions \refeq{freebound}. Here the ``type'' $Z \in \{+,-,J \}$ refers to outgoing, incoming or regular scattering states and involves $H_l^{(1)}$, $H_l^{(2)}$ or $J_l$ Bessel functions of argument $z_{L,T}\equiv a k_{L,T}$, respectively. Thus we have, e.g., for the outgoing case: \begin{equation} \label{tracOut} \small {\tracMat{l}{+}{j}}=\frac{2 \mu}{a_j^2}\,\MatrixII{(l^2 -z_T^2/2)H^{(1)}_l(z_L)-z_L \frac{d H^{(1)}_l(z_L)}{dz_L}}{i\, l \left(H^{(1)}_l(z_T) -z_T \frac{d H^{(1)}_l(z_T)}{dz_T}\right)}{i \,l \left(H^{(1)}_l(z_L) -z_L \frac{d H^{(1)}_l(z_L)}{dz_L}\right)}{-(l^2 -z_T^2/2) H^{(1)}_l(z_T)+z_T \frac{d H^{(1)}_l(z_T)}{dz_T}} \,. \end{equation} Note that the single-cavity scattering matrix connects different polarizations. For a full discussion, see \cite{cavityThesis,izbicki,paoAndmow,cavityLetter}. The connection to the interior problem of a single disc is described in \cite{disc}. 
\begin{figure}[t] \includegraphics[width=.85\textwidth]{scatResoPl.eps} \caption{\label{resoPl} Elastodynamic scattering resonances from \cite{cavityLetter} for two cavities of common radius $a$ ($A_1$-representation, center-to-center-separation $R=6a$) in the complex longitudinal Helmholtz number--plane: $k_L a = \omega a /c_L $. } \end{figure} As first shown for the scalar problem~\cite{AW_report}, the poles of the cluster determinant $\mbox{\rm Det}\, \cluster(\omega)$ cancel -- by construction -- the poles of the single-scattering determinants. Likewise, the poles of $\mbox{\rm Det}\, \cluster(\omega^*)$ are canceled by the zeros of the single-scattering determinants. Thus all scattering resonances defined by the {\em poles} of the scattering determinant ${\det \,} {\multiScat}(\omega)$ can be found from the {\em zeros} of the cluster determinant $\mbox{\rm Det}\, \cluster(\omega)$ \cite{AW_report,wh98}. As an example, consider the resonances in \reffig{resoPl}, see \cite{cavityLetter}, of a two--cavity system made of polyethylene \cite{izbicki}, a material with $c_L = 1950$ m/s and $c_T = 540$ m/s. Furthermore, it is assumed that the cavity radii are equal, i.e.\ $a_1=a_2\equiv a$, and that the inter-cavity separation $R$, measured from the centers, is 6 times the radius $a$. Note that the regularly spaced horizontal set of resonances in \reffig{resoPl} is placed below an irregular set. This is opposite to the scalar Helmholtz case for the same geometry, where the regularly spaced resonances are above the irregular ones \cite{vwr94,vattay94,threeInARow,rvw96}. The regular resonances particular to the fundamental $A_1$-representation are well described by the following condition \cite{aw_chaos} \begin{equation} 0 = 1 + \exp(i k_L L)/\sqrt{\Lambda} \end{equation} with the length $L = 4 a$ and instability $\Lambda =5 + 2 \sqrt{6} $ \cite{Predrag}.
$L$ corresponds to the length of the shortest periodic orbit moving in a symmetry--reduced domain spanned by the surface of the cavity and the center-of-mass of the two cavities. $\Lambda$ is obtained from the product of ray matrices as the leading eigenvalue of the monodromy matrix \cite{gutbook,Predrag} of the corresponding (geometric acoustic) ray system. This raises the question of the effect of the remaining set of orbits. \section{Orbits in time--delay} For real frequencies the total scattering phase $\Theta$ is given by the sum of the cluster phase $\Theta_{c}$ and the single-cavity phases $\Theta_j$: \begin{equation} \Theta(\omega) = \frac{1}{2 i} \ln \, \det \, S(\omega) \qquad \mathrm{and} \qquad \Theta_c(\omega)=\frac{1}{2 i} \ln \frac{\mbox{\rm Det}\, \, M(\omega^\ast)^\dagger} {\mbox{\rm Det}\, \, M(\omega) } \, , \end{equation} see \refeq{detFactor}. Likewise, the derivative with respect to frequency $d\Theta/d\omega$, the Wigner-Smith time delay, can be decomposed into single-scatterer and cluster contributions. The numerics of the cluster time--delay $d\Theta_c/d\omega$ show fluctuations which are related to trapped periodic orbits in the scattering geometry, see \reffig{timeSpec}, where the results for two identical cavities (the same system as in \reffig{resoPl}) are presented. Due to the symmetry of the system the cluster delay decomposes further into a sum over four irreducible representations of the symmetry group $C_{2v}$ \cite{hamermesh}, of which the representations $A_1$ and $B_2$ are shown. Some of these orbits are diffractive (see \cite{cavityThesis,cavityLetter} for more details), including segments of surface propagation of Rayleigh type, which are also important in earthquakes \cite{viktorov}. For these proceedings we restrict the discussion to purely non-diffractive contributions, called geometrical ray-splitting orbits.
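The regular-resonance condition $0 = 1 + \exp(i k_L L)/\sqrt{\Lambda}$ quoted above can be solved in closed form: the zeros sit at $k_L L = (2n+1)\pi - i\ln\Lambda/2$, i.e.\ equally spaced along the real axis with spacing $\pi/L$, at the fixed depth $\ln\Lambda/(2L)$ below it. A short numerical check, in units where the cavity radius is $a = 1$:

```python
import cmath, math

a = 1.0                        # cavity radius sets the unit of length
L = 4 * a                      # length of the shortest periodic orbit
Lam = 5 + 2 * math.sqrt(6)     # its instability (leading monodromy eigenvalue)

def f(k):
    # regular-resonance condition for the A1 representation
    return 1 + cmath.exp(1j * k * L) / math.sqrt(Lam)

# Closed-form zeros: k_n = (2n+1) pi / L - i ln(Lam) / (2 L)
for n in range(5):
    k_n = (2 * n + 1) * math.pi / L - 1j * math.log(Lam) / (2 * L)
    assert abs(f(k_n)) < 1e-12
```

The constant imaginary part and the $\pi/L$ spacing are what produce the regular horizontal band of resonances in \reffig{resoPl}.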
\begin{figure} \includegraphics[width =0.9\textwidth ]{TimeSpectrum.bw.eps} \caption{\label{timeSpec} Power spectrum of the time--delays derived from the cluster phase shift of two cavities of common radius $a$ and center-to-center separation $R=6a$. The thick and thin lines denote the $A_1$- and $B_2$-representation, respectively. The symbols P$i$ and S$i$ label, in turn, periodic orbits with $i$ legs of pressure or shear polarization. The circular arcs indicate Rayleigh--surface waves. } \end{figure} \section{Expanding the cluster determinant} As in the scalar case \cite{AW_report}, a central point in the orbit construction is the definition of the cluster determinant in terms of traces: \begin{equation} \label{eq:fredExp} \fredholm(z)=\mbox{\rm Det}\, (\identityMat + z\,\kernel) \equiv \exp \left(- \sum_{n=1}^\infty \frac{z^n \, \mbox{Tr}\, (-\kernel)^n}{n} \right) \,, \end{equation} where $z$ is again a formal expansion parameter which is put equal to one at the end. Equation \refeq{eq:fredExp} holds if $\kernel$ is trace--class, and its Taylor expansion is called the cumulant expansion \cite{aw_chaos,aw_nucl, AW_report,wh98}. Trace-class operators (or matrices) are those, in general, non-Hermitian operators (matrices) of a separable Hilbert space which have an absolutely convergent trace in {\em every} orthonormal basis \cite{reed_simon,simon}. In particular, the determinant $\mbox{\rm Det}\, (\identityMat + z \kernel)$ exists and is an entire function of $z$ if $\kernel$ is trace-class. Presently the trace-class property of $\kernel$ has only been proved in detail in the two-dimensional scalar case \cite{AW_report,wh98} and sketched for the three-dimensional case in \cite{hwg97}. Nevertheless, we shall proceed as if this were true also in our elastodynamic case.
This is supported by the following numerical evidence: (i) the sum over the moduli of the eigenvalues of the matrix $\kernel$ from \refeq{clusterDef} converges absolutely, in agreement with the expected trace-class property of this matrix; (ii) the determinant \refeq{eq:fredExp} converges to a finite result as the dimension of $\kernel$ is increased beyond a minimal number $N_{\rm dim} > 2\times \left(\frac{e}{2} (c_L/c_T) |k_L| a \right)$, see \cite{cavityLetter} and also \cite{AW_report,berry81}; and (iii) the resonances of the cumulant expansion truncated at fourth order $z^4$ agree very well with the exact ones plotted in \reffig{resoPl}, see \cite{cavityLetter}. \section{Ray limit and orbits} The expansion \refeq{eq:fredExp} indicates that the cluster determinant can be obtained from the knowledge of an increasing number of traces. Moreover, each trace $\mbox{Tr}\, \kernel^n$ is given by the sum over all exact periodic {\em itineraries} of topological length $n$, see \cite{AW_report}. In the saddle-point approximation the periodic itineraries become the periodic orbits of topological length $n$, which means that they bounce $n$ times between the cavities \cite{AW_report}. These orbits fulfill the laws of reflection and refraction and have phases corresponding to their time periods of revolution $T_p$. A periodic itinerary of topological length $n$ corresponds to a cyclic product of $n$ terms involving one operator of the type \begin{equation} \tracMat{l}{+}{j}^{-1} \cdot \tracMat{l}{J}{j} \end{equation} (which corresponds to the ${\sf T}$-matrix-part (times $-1/2$) of a single-cavity scattering matrix \refeq{singleCavityDef}) followed by one translation operator $\Bigl[ {\sf T}^{(+) jj'}_{ll'}\Bigr]$ (the free propagation). The pruning rule that two successive scatterings must take place at different cavities is automatically built in (see the term $(1-\delta_{jj'})$ in the kernel $\kernel$).
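The convergence in point (ii) can be mimicked in a few lines for a generic matrix: whenever the spectral radius is below one, the truncated cumulant expansion \refeq{eq:fredExp} converges to the exact determinant. The matrix below is a small random stand-in for $\kernel$, not the cavity kernel itself; its size and scale are arbitrary choices that keep the spectral radius well below one.

```python
import numpy as np

# Spot-check of the cumulant expansion Det(1 + z A) at z = 1 for a small
# random matrix standing in for the kernel; size and scale are illustrative
# assumptions chosen so the spectral radius stays below one.
rng = np.random.default_rng(0)
n = 6
A = 0.15 * (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))

exact = np.linalg.det(np.eye(n) + A)

# exp(-sum_{m>=1} Tr((-A)^m) / m), truncated at 30 terms.
s = 0.0
P = np.eye(n, dtype=complex)
for m in range(1, 31):
    P = P @ (-A)
    s += np.trace(P) / m
approx = np.exp(-s)
print(abs(approx - exact))
```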
The ray limit of the single cavity ${\sf T}$-matrix gives unitary reflection coefficients similar to those of the scattering from an infinite half--plane \cite{lAndl,disc}. This leads to an overall amplitude $\alpha_p$ defined as a product over all reflection coefficients along the orbit. This amplitude describes the leakage from the orbit due to ray splitting. The calculation of the geometric amplitudes of the orbits requires more work. See \cite{shudo} for a general discussion with respect to the interior scalar case and \cite{AW_report} for the exterior counterpart. Asymptotic wave theory indicates \cite{kellerPTD,KellerElasto,rulf,achenbach} that for {\it open} trajectories in two dimensions the amplitude scales as $(k R)^{-1/2}$ where $k$ is the wave number in question and $R$ is the radius of curvature of the wave front at the observer. This radius is studied in e.g.\ geometric optics. It is possible to keep track of its evolution in the free-propagation period between the scatterers and at impacts, including possible refractions, with the help of suitable ray matrices \cite{cavityThesis}. Indeed, for our problem it can be shown that all those open segments that have fixed end points, but intermediate points (variables) determined by saddle-point integrations, have such an amplitude evolution. This comes about by calculating the accompanying sparse Hessian of this restricted integration. For a full saddle-point integration over all variables, in other words for a {\it periodic} orbit $p$, the amplitude turns out to be expressible as yet another sparse Hessian that can be expanded into Hessians of the type of the previously considered open pieces; see \cite{AW_report} for the scalar case.
The use of the previous information then allows the full calculation with the amplitude evolving as \begin{equation} \mathcal{A}_p = \frac{\alpha_p}{|\mbox{\rm Det}\, ({\identityMat - \stab_p})|^{1/2}} \,\, z^{n_p}\,, \end{equation} where $\stab_p$ is the product of the ray matrices and $\alpha_p$ is the product of the reflection coefficients, calculated along the orbit $p$ with its $n_p$ bounces. This form is precisely part of the conventional semiclassical density of states \cite{gutbook,brack,Stock}. However, the formal parameter $z$ is also present and can be seen as a counting and ordering parameter of the various orbits in the expansion over infinitely many orbits \cite{Predrag,artuso1,artuso2}. Incorporating the results of the geometric ray-splitting orbits gives the following factor of the ray-dynamical approximation of the cluster determinant: \begin{equation} \label{fredGeom} \fredholm_G(z) = \exp \left(- \sum_p \sum_{r = 1}^\infty \frac{1}{r} \, {\alpha}_p^r \, \frac{e^{i r \, \omega T_p }}{|\mbox{\rm Det}\, (\identityMat-\stab_p^r)|^{1/2}}\, z^{r n_p} \right) \, . \end{equation} The sum over $r$ counts the repeats of the primary periodic orbits, the {\it prime} cycles $p$. If the logarithmic derivative with respect to $\omega$ is taken, a result very similar to the spectral density for the interior problem is obtained. This is in agreement with the general result for the density of states for ray-splitting systems described in \cite{couch}. Similar results for the case of flexural vibrations in the interior case are given in \cite{HB}.
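As a consistency check of the instability weights in \refeq{fredGeom}: for a two-dimensional symplectic matrix $J$ with leading eigenvalue $\Lambda>1$ one has $|\mbox{\rm Det}\,(\identityMat-J^r)|^{-1/2}=\sum_{k\ge 0}\Lambda^{-r(k+1/2)}$, the expansion behind the resummation into zeta functions. A numerical sketch with an arbitrary hyperbolic matrix (an assumed example, not a cavity monodromy matrix):

```python
import numpy as np

# A hyperbolic symplectic matrix standing in for a monodromy matrix J_p:
# the "cat map" matrix, det = 1, leading eigenvalue Lambda > 1.  This is an
# illustrative assumption, not computed from the cavity geometry.
J = np.array([[2.0, 1.0], [1.0, 1.0]])
Lam = max(abs(np.linalg.eigvals(J)))

for r in (1, 2, 3):
    lhs = 1.0 / np.sqrt(abs(np.linalg.det(np.eye(2) - np.linalg.matrix_power(J, r))))
    # Expansion of the instability denominator in inverse powers of Lambda:
    rhs = sum(Lam ** (-r * (k + 0.5)) for k in range(200))
    print(r, lhs, rhs)
```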
As the orbits are unstable and $\stab_p$ is symplectic, it is possible to expand, for each orbit, the instability denominator in \refeq{fredGeom} in the inverse of its leading eigenvalue $1/\Lambda_p$ and to obtain a so-called Gutzwiller--Voros resummed zeta function similar to those of two-dimensional Hamiltonian flows~\cite{Predrag}: \begin{equation} \fredholm_G(z) =\prod_{k=0}^\infty \, \zeta_k^{-1}(z) \, , \end{equation} where \begin{equation} 1/\zeta_k(z) = \prod_p (1-t_p^{(k)} ) \quad \mbox{with} \quad t_p^{(k)} = {\alpha}_p \, \frac{e^{i \omega T_p}}{\sqrt{ |\Lambda_p|}\Lambda_p^k} \, z^{n_p} \, . \end{equation} \section{Summary} Detailed studies of Helmholtz scattering determinants at small wavelengths have shown the influence of periodic orbits. The case of scattering from voids in two--dimensional elastodynamics was considered here with a discussion of the analytical contribution of periodic ray-splitting orbits to the scattering determinant. \begin{theacknowledgments} N.S. acknowledges discussions with J.~D.~Achenbach and funding from the European Network on {\it Mathematical aspects of Quantum Chaos}, the Crafoord Foundation and the Swedish Research Council. \end{theacknowledgments} \bibliographystyle{aipproc} \IfFileExists{\jobname.bbl}{} {\typeout{} \typeout{******************************************} \typeout{** Please run "bibtex \jobname" to obtain} \typeout{** the bibliography and then re-run LaTeX} \typeout{** twice to fix the references!} \typeout{******************************************} \typeout{} }
\section{Introduction}\label{intro1} It all boils down to the following elementary inequality named after W. H. Young: if $p>1$ and $1/p+1/q=1$, then for any $\alpha,\beta\in\mathbb R^+$, $$\alpha\beta\le \frac{1}{p}\alpha^p+\frac{1}{q}\beta^q $$ with equality if and only if $\alpha^p=\beta^q$. \medskip Operator analogues of this elegant fact are considered, following the fundamental paper by T. Ando \cite{ando} for $n\times n$ matrices, and an extension for compact operators by J. Erlijman, D. R. Farenick, R. Zeng \cite{efz}. If $a,b$ are compact operators on Hilbert space then for all $k\in\mathbb N_0$, $$ \lambda_k(|ab^*|)\le \lambda_k\left(\frac1p |a|^p +\frac1q |b|^q\right) $$ where each eigenvalue is counted with multiplicity. This allows one to construct a partial isometry $u$ such that $$ u |ab^*|u^*\le \frac1p |a|^p +\frac1q |b|^q $$ for the partial order of operators. This raised the natural question of whether $$ \|u |ab^*|u^*\|=\| \frac1p |a|^p +\frac1q |b|^q\| $$ implies $|a|^p=|b|^q$. It is easy to construct examples where this is false, if $\|\cdot\|$ is the operator norm. But O. Hirzallah and F. Kittaneh showed with an elegant inequality \cite{hk} that it is true if $a,b$ are Hilbert-Schmidt operators, and the norm is the Hilbert-Schmidt norm. Another nice paper, this time by M. Argerami and D. Farenick \cite{af}, proved that it is also true for $|a|^p,|b|^q$ nuclear operators, that is, when the norm is the trace norm $\|\cdot\|_1=Tr|\cdot|$. \medskip In this paper, we prove that the necessary and sufficient condition is in fact the equality of all singular numbers, which enables us to characterize exactly for which norms the assertion above is true (Theorem \ref{elteo}). \section{Young's inequality for compact operators} Let $\mathcal H$ be a complex Hilbert space, and let us denote with ${\cal B}({\cal H})$ the bounded linear operators acting in ${\cal H}$.
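The scalar inequality and its equality case are easy to spot-check numerically; the exponent and the sampled values below are arbitrary choices.

```python
import numpy as np

# Spot-check of Young's inequality and of its equality case alpha^p = beta^q.
# The exponent p and the sampled values are arbitrary choices.
rng = np.random.default_rng(0)
p = 1.7
q = p / (p - 1.0)

for alpha, beta in rng.uniform(0.01, 10.0, size=(1000, 2)):
    assert alpha * beta <= alpha ** p / p + beta ** q / q + 1e-12

# Equality: pick beta with beta^q = alpha^p, i.e. beta = alpha^(p/q).
alpha = 2.0
beta = alpha ** (p / q)
print(alpha * beta - (alpha ** p / p + beta ** q / q))  # ~ 0
```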
For $y\in{\cal B}({\cal H})$, with $|y|=\sqrt{y^*y}$ we denote the positive square root, and then $y=\nu|y|$ is the polar decomposition of $y$. With $\nu:\overline{{\rm{Ran\, }}|y|}\to\overline{{\rm{Ran\, }} y}$ we denote its partial isometry; when necessary, the projection $\nu\nu^*$ onto the closure of the range of $y$ will be denoted by $p_y$. \medskip In the following lemma we collect some results that will be used throughout this paper (and will help us fix the notation): \begin{lem}\label{ellema} Let $a,b,x\in {\cal B}({\cal H})$, \begin{enumerate} \item If $b=\nu |b|$ then $\nu^*\nu$ is the orthogonal projection onto the closure of the range of $|b|$, $|b^*|=\nu|b|\nu^*$ and $\nu\nu^*$ is the orthogonal projection onto the closure of the range of $|b^*|$. \item $|ab^*|=\nu | |a| |b| |\nu^*$ and $\nu^*|ab^*|\nu=||a||b||$. \item If $p$ is a projection, then $x=pxp$ implies $x=px$ and in particular $p_xp=p_x$ (equivalently $p_x\le p$). \item If $p$ is a projection, $pxp=p$ and either $x\ge p$ or $0\le x\le p$, then $xp=p$. In particular if ${\rm{Ran\, }}(p)=span(\xi)$ for some $\xi \in\mathcal H$, $$ \langle x\eta,\eta \rangle\ge \langle p\eta,\eta\rangle \textit{ for any } \eta\in\mathcal H $$ and $\langle x \xi ,\xi\rangle =\langle \xi,\xi\rangle$ imply $x\xi=\xi$. There is a similar assertion for the other case. \end{enumerate} \end{lem} \begin{proof} 1. is trivial. To prove 2., write the polar decompositions $a=u|a|$, $b=\nu|b|$. Note that $|ab^*|^2=\nu|b||a|^2|b|\nu^*$; since $\nu^*\nu|b|=|b|$ then $(\nu|b||a|^2|b|\nu^*)^n=\nu(|b||a|^2|b|)^n\nu^*$ for any $n\in\mathbb N$, and an elementary functional calculus argument shows that $$ |ab^*|=(\nu|b||a|^2|b|\nu^*)^{\frac12}=\nu(|b||a|^2|b|)^{\frac12}\nu^*=\nu | |a||b||\nu^*. $$ On the other hand, since $\nu\nu^*\nu=\nu$, then $\nu\nu^*|ab^*|=|ab^*|=|ab^*|\nu\nu^*$; therefore from $\nu^*|ab^*|^2\nu=||a||b||^2$, taking square roots and using a similar argument, we derive $\nu^*|ab^*|\nu=||a||b||$. 3.
If $pxp=x$ then ${\rm{Ran\, }} x\subset {\rm{Ran\, }} p$, therefore $p_x\le p$ or equivalently $pp_x=p_x$. Multiplying both sides by $x$ gives $x=px$. 4. Assume $x\ge p$ (the case $0\le x\le p$ can be treated in a similar fashion, therefore its proof is omitted). Since $x-p\ge 0$, we have, for each $\eta\in\mathcal H$, $$ \|(x-p)^{1/2}p\eta\|^2=\langle p(x-p)p\eta,\eta\rangle=0, $$ thus $(x-p)^{1/2}p=0$ and multiplying by $(x-p)^{1/2}$ on the left we obtain $(x-p)p=0$ which shows that $xp=p$. \end{proof} \subsection{Singular values} Denote with ${\cal K}({\cal H})$ the compact operators on $\mathcal H$. Let $\lambda_k(x)$ ($k\in\mathbb N_0$) denote the $k$-th eigenvalue of the positive compact operator $x\in{\cal B}({\cal H})$, arranged in decreasing order, $$ \|x\|=\lambda_0\ge \lambda_1\ge\cdots\ge \lambda_k\ge\lambda_{k+1}\ge \cdots $$ where we allow equality because each singular value is counted with multiplicity. Clearly $ \lambda_k(f(x))=|f|(\lambda_k(x))$ for any function defined in $\sigma(x)$. \begin{rem}\label{autov} For given $a,b\in {\cal B}({\cal H})$ and $x\in {\cal K}({\cal H})$, the min-max characterization of the singular values \cite[Theorem 1.5]{simon} and Lemma \ref{ellema}.2 easily imply that \begin{enumerate} \item $\lambda_k(axb)\le \|a\|\|b\|\lambda_k(x)$, \item $\lambda_k(|ab^*|)=\lambda_k(||a||b||)$. \end{enumerate} \end{rem} \subsection{Unitarily invariant norms} For a given vector $a=(a_i)_{i\in\mathbb N_0}$ with $a_i\in\mathbb R$, we will denote with $a^{\downarrow}$ the rearrangement of $a$ in decreasing order, that is, $a^{\downarrow}$ is a permutation of $a$ such that $$ a_0^{\downarrow}\ge a_1^{\downarrow}\ge \cdots\ge a_k^{\downarrow}\ge a_{k+1}^{\downarrow}\ge \cdots. $$ \medskip Let $\|\cdot\|_{\phi}$ stand for a unitarily invariant norm in ${\cal B}({\cal H})$, and $\phi:{\mathbb R}_+^{\mathbb N_0}\to\mathbb R_+$ its associated permutation invariant gauge \cite{simon}.
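Both facts of Remark \ref{autov} can be spot-checked for random complex matrices, where $|x|$ is computed from a singular value decomposition; the sizes and the random seed below are arbitrary choices.

```python
import numpy as np

# Spot-check, for random complex matrices, of the two facts in the remark on
# singular values: lambda_k(a x b) <= ||a|| ||b|| lambda_k(x), and the
# singular values of a b* and of |a||b| coincide.  Sizes/seed are arbitrary.
rng = np.random.default_rng(1)
n = 7

def absval(x):
    # |x| = (x* x)^(1/2) from the SVD x = U diag(s) V*:  |x| = V diag(s) V*.
    u, s, vh = np.linalg.svd(x)
    return (vh.conj().T * s) @ vh

def sv(x):
    # singular values, in decreasing order
    return np.linalg.svd(x, compute_uv=False)

for _ in range(20):
    a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    b = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    x = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

    assert np.all(sv(a @ x @ b) <= sv(a)[0] * sv(b)[0] * sv(x) + 1e-8)
    assert np.allclose(sv(a @ b.conj().T), sv(absval(a) @ absval(b)))
```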
\begin{defi} We say that the norm is \textbf{strictly increasing} if, whenever two sequences $a=(a_i),b=(b_i)$ satisfy $0\le a_i\le b_i$ for all $i\in\mathbb N_0$ and $\phi(a)=\phi(b)$, it follows that $a_i=b_i$ for all $i\in\mathbb N_0$ (see Hiai's paper \cite{hiai}; it is also property (3) in Simon's paper \cite{s2}). Examples of these norms on ${\cal K}({\cal H})$ are the Schatten $p$-norms $1\le p<\infty$, and examples of non-strictly increasing norms are the supremum norm and the Ky-Fan norms. Note that we can always define \begin{equation}\label{normastr} \phi(a)=\sum\limits_{k\ge 0} a_k^{\downarrow} 2^{-k}, \end{equation} which is a strictly increasing norm defined in the whole of ${\cal K}({\cal H})$. \end{defi} \medskip \begin{rem} If ${\cal I}_{\phi}$ is not equivalent to the supremum norm $\|x\|=\sup\limits_{\|\xi\|_H=1}\|x\xi\|_H$, then ${\cal I}_{\phi}=\{x\in{\cal K}({\cal H}): \|x\|_{\phi}<\infty\}$ is a proper bilateral ideal in ${\cal K}({\cal H})$ according to Calkin's theory. Assume that a symmetric norm has the Radon-Riesz property $$ \|x_n\|_{\phi}\to \|x\|_{\phi} \,\textit{ and } \, x_n\rightarrow x \,\textit{ weakly }\, \Longrightarrow\, \|x-x_n\|_\phi\to 0 $$ (see Arazy's paper \cite{a} on the equivalence for sequences and compact operators). Simon proved in \cite{s2} that in that case the norm is strictly increasing according to our definition. It is unclear to us whether the assertion can be reversed. \end{rem} \subsection{Inequality} \begin{rem}\label{nota} For given $a,b\in{\cal K}({\cal H})$ we will always denote $$ \alpha_k=\lambda_k(|a|),\quad \beta_k=\lambda_k(|b|), \quad \gamma_k=\lambda_k(|ab^*|),\quad \delta_k=\lambda_k\left(\frac1p |a|^p +\frac1q |b|^q\right). $$ Moreover, we will denote $$ |a|=\sum_k \alpha_k a_k,\quad |b|=\sum_k\beta_k b_k,\quad |ab^*|=\sum_k \gamma_k p_k,\quad \frac1p |a|^p +\frac1q |b|^q=\sum_k \delta_k q_k $$ the spectral decompositions of each operator, with $a_k,b_k$, etc.\ one-dimensional projections.
Note that we allow multiplicity, and if $\gamma_1=\cdots=\gamma_j$ for some finite $j$, the choice of the first $q_j$ is arbitrary (i.e., it amounts to selecting an orthonormal basis of that span). \end{rem} \begin{rem}\label{aef} Concerning $a,b\in {\cal K}({\cal H})$, the following was proved in \cite{efz} by Erlijman, Farenick and Zeng: for each $k\in\mathbb N_0$, $$ \lambda_k(|ab^*|)\le \lambda_k\left(\frac1p |a|^p +\frac1q |b|^q\right) $$ hence there exists a partial isometry $u$ such that $q_k=up_ku^*$ and $u^*u=\sum_k p_k$ the projection on the (closure of the) range of $|ab^*|$. Then, for any $a,b\in {\cal K}({\cal H})$, $$ u|ab^*|u^*\le \frac1p |a|^p +\frac1q |b|^q. $$ This extended the original result of T. Ando \cite{ando}, which was stated for positive matrices. \end{rem} \medskip From their result, it can be deduced that the relevant condition to deal with the equality is $\gamma_k=\delta_k$ for all $k$; to be more precise: \begin{lem}\label{gammaks}Let $a,b\in{\cal K}({\cal H})$, $p>1$, $1/p+1/q=1$. \begin{enumerate} \item If $|a|^p=|b|^q$, then $\alpha_k^p=\beta_k^q=\gamma_k=\delta_k$ for each $k\in\mathbb N_0$. \item If either $$ z|ab^*|z^* =\frac1p |a|^p +\frac1q |b|^q $$ for some contraction $z\in {\cal B}({\cal H})$, or $$ \|z|ab^*|w\|_\phi=\|\frac1p |a|^p +\frac1q |b|^q\|_\phi, $$ for a pair of contractions $z,w\in{\cal B}({\cal H})$ and a strictly increasing norm, then $\gamma_k=\delta_k$ for each $k\in\mathbb N_0$ and $$ u|ab^*|u^* =\frac1p |a|^p +\frac1q |b|^q, $$ where $u$ is the partial isometry of the result in Remark \ref{aef}, i.e.\ $up_ku^*=q_k$ for each $k$. \end{enumerate} \end{lem} \begin{proof} To prove the first assertion, note that clearly $\delta_k=\alpha_k^p=\beta_k^q$. By Lemma \ref{ellema}.2, $|ab^*|=\nu|a|^p\nu^*$, and in particular $$ \lambda_k(|ab^*|)\le\lambda_k(|a|^p)=\lambda_k(|b|^q) $$ by Remark \ref{autov}.1.
Now since $\nu$ is the partial isometry of $b$, then also $\nu^*|ab^*|\nu=\nu^*\nu|a|^p\nu^*\nu=|a|^p$, which in turn shows the reverse inequality, and then $\gamma_k=\alpha_k^p=\beta_k^q$ follows. Regarding 2., note that if equality is attained by a contraction $z$, then by Remark \ref{aef} and Remark \ref{autov}.1 $$ \gamma_k\le \delta_k=\lambda_k(z|ab^*|z^*)\le \lambda_k(|ab^*|)=\gamma_k. $$ Likewise, if equality is attained for a strictly increasing norm and a pair of contractions $z,w$, since $$ \lambda_k(z|ab^*|w)\le \lambda_k(|ab^*|)=\gamma_k\le \delta_k=\lambda_k\left(\frac1p |a|^p +\frac1q |b|^q\right), $$ then $\gamma_k=\delta_k$ for every $k$. \end{proof} \subsection{Equality} The following result will be crucial to obtain the proof of our main assertion. \begin{prop}\label{pnoes2} Let $0\le a,b\in{\cal K}({\cal H})$. Let $1<p<2$ and $1/p+1/q=1$. If $$ \lambda_k(|ab|)=\lambda_k\left(\frac1p a^p +\frac1q b^q\right)\quad \textit{ for all }k $$ then ${\rm{Ran\, }}|ba|\subset \overline{{\rm{Ran\, }} b}$. \end{prop} \begin{proof} Let $\varepsilon >0$, let $p_b$ stand for the projection onto the closure of the range of $b$, and let $b_{\varepsilon}=b+\varepsilon(1-p_b)$; then $b_{\varepsilon}^q=b^q+\varepsilon^q(1-p_b)\le b^q+\varepsilon^q$ and $b_{\varepsilon}^2=b^2+\varepsilon^2(1-p_b)$. Therefore $$ |b_{\varepsilon} a|^2=ab_{\varepsilon}^2 a= ab^2a +\varepsilon^2 a(1-p_b)a. $$ Let $|ba|=\sum\limits_{k\in\mathbb N_0}\gamma_k e_k\otimes e_k$ with $\gamma_k=\lambda_k(|ba|)$ and $\{e_k\}_k$ an orthonormal basis of ${\rm{Ran\, }}|ba|$.
Then since $\gamma_0=\|ab\|=\|ba\|$, we have $\langle ab^2ae_0,e_0\rangle=\|ba\|^2$ and \begin{eqnarray} \varepsilon^2\|(1-p_b)ae_0\|^2+\|ba\|^2 &\le & \|b_{\varepsilon} a\|^2=\|ab_{\varepsilon}\|^2\le \|\frac1p a^p+\frac1q b_{\varepsilon}^q\|^2\nonumber\\ &= & \|\frac1p a^p+\frac1q b^q+\frac1q \varepsilon^q (1-p_b)\|^2\nonumber\\ &\le &\|\frac1p a^p+\frac1q b^q+\frac1q \varepsilon^q\|^2 = \left[\|\frac1p a^p+\frac1q b^q\|+\frac1q \varepsilon^q\right]^2\nonumber\\ & = & (\|ab\|+\frac1q\varepsilon^q)^2= \|ab\|^2+\frac2q\|ab\|\varepsilon^q+\frac{1}{q^2}\varepsilon^{2q}\nonumber \end{eqnarray} by Remark \ref{aef} and the hypothesis. Therefore, dividing by $\varepsilon^2$ and letting $\varepsilon\to 0$, since $q>2$, we conclude that $(1-p_b)ae_0=0$ or equivalently, $ae_0\in \overline{{\rm{Ran\, }} b}$. We iterate the argument above for all $k$ such that $\gamma_k=\gamma_0$: let us abuse the notation and assume then that $\gamma_1<\gamma_0$. Then for all sufficiently small $\varepsilon\le \varepsilon_1$, $$ \gamma_1^2+\varepsilon^2\|(1-p_b)ae_1\|^2<\gamma_0^2. 
$$ Therefore for all such $\varepsilon$, if $Q=e_0+e_1$ then \begin{eqnarray} \lambda_1\left(Q(\varepsilon^2a(1-p_b)a+ ab^2a)Q\right)&=&\lambda_1\left(\varepsilon^2\|(a(1-p_b)a)^{1/2}e_1\|^2e_1+\gamma_1^2e_1+\gamma_0^2e_0\right)\nonumber\\ &=&\gamma_1^2+\varepsilon^2\|(1-p_b)ae_1\|^2.\nonumber \end{eqnarray} Now for the same reasons as above (and since $\lambda_k(QAQ)\le \lambda_k(A)$ and $\lambda_k(A+tP)\le \lambda_k(A+t1)=\lambda_k(A)+t$ for $A\ge 0$, $t\in \mathbb R_{\ge 0}$ and $P^2=P=P^*$) \begin{eqnarray} \lambda_1\left(Q(\varepsilon^2a(1-p_b)a+ ab^2a)Q\right) &= & \lambda_1\left(Q|ab_{\varepsilon}|^2Q\right)\le \lambda_1\left(|ab_{\varepsilon}|^2\right)\le \lambda_1\left(\frac1p a^p+\frac1q b_{\varepsilon}^q\right)^2\nonumber\\ &\le &\lambda_1\left(\frac1p a^p+\frac1qb^q+\frac1q \varepsilon^q\right)^2\nonumber\\ & = & \left[\lambda_1\left(\frac1p a^p+\frac1qb^q\right) +\frac1q \varepsilon^q\right]^2\nonumber\\ &\le &\left(\lambda_1(|ab|)+\frac1q\varepsilon^q\right)^2=\gamma_1^2+\frac2q\gamma_1\varepsilon^q+\frac{1}{q^2}\varepsilon^{2q}.\nonumber \end{eqnarray} Therefore $$ \gamma_1^2+\varepsilon^2\|(1-p_b)ae_1\|^2\le \gamma_1^2+\frac2q\gamma_1\varepsilon^q+\frac{1}{q^2}\varepsilon^{2q} $$ and again, dividing by $\varepsilon^2$ and letting $\varepsilon\to 0$, we conclude that $ae_1\in \overline{{\rm{Ran\, }} b}$. Proceeding recursively, we conclude that $a({\rm{Ran\, }}|ba|)\subset \overline{{\rm{Ran\, }} b}$. Now if $\xi\in \mathcal H$, then $a|ba|\xi\in\overline{{\rm{Ran\, }}(b)}$, therefore $a^2|ba|\xi=a(a|ba|\xi)\in a\overline{{\rm{Ran\, }}(b)}\subset \overline{{\rm{Ran\, }}(ab)}=\overline{{\rm{Ran\, }}|ba|}$, and $a^3|ba|\xi=a(a^2|ba|\xi)\in a \overline{{\rm{Ran\, }}|ba|}\subset \overline{{\rm{Ran\, }}(b)}$. Iterating this argument, we arrive at the conclusion that $a^{2n+1}({\rm{Ran\, }}|ba|)\subset\overline{{\rm{Ran\, }}(b)}$ for all $n\in\mathbb N_0$.
Using an approximation of $f=\chi_{\sigma(a)}$ by odd functions, we conclude that $p_a({\rm{Ran\, }}|ba|)=f(a)({\rm{Ran\, }}|ba|)\subset\overline{{\rm{Ran\, }}(b)}$, where $p_a$ is the projection onto the closure of the range of $a$. Therefore $|ba|^2\xi=ab^2a\xi=p_aab^2a\xi=p_a|ba|^2\xi\in \overline{{\rm{Ran\, }}(b)}$, which gives $\overline{{\rm{Ran\, }}|ba|}=\overline{{\rm{Ran\, }}(|ba|^2)}\subset \overline{{\rm{Ran\, }}(b)}$. \end{proof} \medskip \begin{rem}\label{proyecciones} Here are two remarks on projections; their verification is left to the reader. \begin{enumerate} \item Let $b=b^*\in {\cal B}({\cal H})$, $\eta\in \mathcal H$. Then $b(\eta\otimes \eta)b=(b \eta)\otimes(b\eta)$ and the projection onto $span(b\eta)$ is given by $\frac{(b\eta)\otimes(b\eta)}{\|b\eta\|^2}$. \item Let $b=b^*\in {\cal B}({\cal H})$ and assume that $b\eta=\xi$, with $\|\xi\|=1$. Denote by $p$ the projection onto $span(\xi)$ and by $p_{\eta}$ the projection onto $span(\eta)$. Then $$ \|\eta\|^2bp_{\eta}b=p\quad \textit{ and } \quad p_{\eta}b^2p_{\eta}=\frac{1}{\|\eta\|^2}p_{\eta}. $$ \end{enumerate} \end{rem} \begin{lem}\label{minij} Let $0\le x\in {\cal K}({\cal H})$, let $\xi\in\mathcal H$ with $\|\xi\|=1$. Then $$ \langle x^r \xi,\xi\rangle \le \langle x\xi,\xi\rangle^r, \qquad 0<r<1 $$ with equality iff $x\xi=\langle x\xi,\xi\rangle\xi$. Also $$ \langle x \xi,\xi\rangle^s \le \langle x^s\xi,\xi\rangle, \qquad 1<s $$ with equality iff $x\xi=\langle x\xi,\xi\rangle\xi$. \end{lem} \begin{proof} Let $x=\sum x_i p_i$ be a spectral decomposition of $x$, with $\sum_i p_i=1$. Let $t_i=\|p_i\xi\|^2$, then $\sum_i t_i =1$. Using H\"older's inequality for sequences, with $p=1/r>1$, we obtain $$ \langle x^r \xi,\xi\rangle =\sum_i x_i^r t_i = \sum_i x_i^r t_i^{1/p}\, t_i^{1/q}\le \left(\sum_i x_i t_i \right)^r\left(\sum_i t_i\right)^{1/q}=\langle x\xi,\xi\rangle^r. $$ If equality holds in H\"older's inequality, then $x_it_i=ct_i$ for all $i$ and some constant $c$; therefore $x\xi=\langle x\xi,\xi\rangle \xi$.
Taking $s=1/r$ and replacing $x$ with $x^s$, the proof of the other case ($s>1$) is straightforward. \end{proof} \medskip A rewriting of the lemma above gives the following: \begin{coro}\label{holderes} Let $q$ be a rank one projection and $0\le x\in {\cal K}({\cal H})$; then $$ qx^rq\le (qxq)^r,\qquad 0<r< 1 $$ with equality iff $xq=cq$ for some $c\ge 0$, and $$ (qxq)^s\le qx^sq,\qquad 1<s, $$ with equality iff $xq=cq$. \end{coro} \medskip What follows is the statement that tells us that the relevant hypothesis is neither the operator equality, nor the norm equality, but the equality of the singular numbers. \begin{teo}\label{igual} Assume that $a,b\in{\cal K}({\cal H})$, $p>1$, $1/p+1/q=1$. If $$ \lambda_k(|ab^*|)=\lambda_k\left(\frac1p |a|^p +\frac1q |b|^q\right) $$ for all $k\in\mathbb N_0$, then $|a|^p=|b|^q$. \end{teo} \begin{proof} Since $\lambda_k(|ab^*|)=\lambda_k(|ba^*|)$, exchanging $a$ with $b$ if necessary we can assume that $1<p\le 2$. It will be easier to deal first with $a,b\ge 0$. We follow the notation of Remark \ref{nota}. Since $ba^2b=|ab|^2=\sum_k\gamma_k^2 p_k$, if $p_0=\xi\otimes\xi$ with $\xi\in\mathcal H$ and $\|\xi\|=1$, then $ba^2b\xi=\gamma_0^2\xi$. Let $\eta=\frac{1}{\gamma_0^2}p_b a^2b\xi$; then $\eta\in \overline{{\rm{Ran\, }} b}$ and $b\eta=\xi$. Let $p_{\eta}$ be the projection onto $span(\eta)$; then $\|\eta\|^2bp_{\eta}b=p_0$ by Remark \ref{proyecciones}.2. Observe that \begin{equation}\label{aaa} ba^2b \ge \gamma_0^2 p_0= \gamma_0^2\|\eta\|^2 bp_{\eta} b. \end{equation} Now we have to deal with two cases separately, regarding whether $p=2$ or $p\ne 2$. \textbf{Case $p\ne 2$}. By Proposition \ref{pnoes2}, we have $p_b|ba|=|ba|$, but ${\rm{Ran\, }}|ba|={\rm{Ran\, }}(ab)$, hence if we name $\overline{a}=p_bap_b$, then $$ b\overline{a}^2b=bp_bap_bp_bap_bb=bap_bab=ba^2b\ge \gamma_0^2\|\eta\|^2 bp_{\eta} b. $$ Therefore $\overline{a}^2\ge \gamma_0^2\|\eta\|^2 p_{\eta}$ as operators acting on $\mathcal H'=\overline{{\rm{Ran\, }} b}$.
Since $1/2<p/2<1$, the operator monotonicity of $t\mapsto t^{p/2}$ implies that in $\mathcal H'$, we have $$ \overline{a}^p\ge \gamma_0^p\|\eta\|^p p_{\eta}. $$ This also implies \begin{equation}\label{ap} \frac{\langle \overline{a}^p\eta ,\eta\rangle}{\|\eta\|^2} \ge \gamma_0^p \|\eta\|^p. \end{equation} On the other hand, by Remark \ref{proyecciones}.2 and Corollary \ref{holderes} with $s=q/2> 1$, \begin{equation}\label{bestr} \frac{1}{\|\eta\|^q}p_{\eta}=\left(\frac{p_{\eta}}{\|\eta\|^2}\right)^{q/2}=\left( p_{\eta} b^2p_{\eta}\right)^{q/2}\le p_{\eta} b^q p_{\eta}, \end{equation} equivalently \begin{equation}\label{bq} \frac{1}{\|\eta\|^q}\langle \eta, \eta\rangle \le \langle b^q \eta,\eta \rangle. \end{equation} By Young's numerical inequality \begin{equation}\label{yn} \gamma_0=\gamma_0\|\eta\|\frac{1}{\|\eta\|}\le \frac1p\gamma_0^p\|\eta\|^p+\frac1q \frac{1}{\|\eta\|^q}. \end{equation} Since $1<p<2$, the map $t\mapsto t^p$ is operator convex \cite[Theorem 2.4]{pedersen}, therefore $\overline{a}^p=(p_bap_b)^p\le p_ba^pp_b$, hence combining this with (\ref{ap}), (\ref{bq}) and (\ref{yn}) gives \begin{eqnarray}\label{t} \gamma_0 &\le & \frac1p \frac{\langle \overline{a}^p \eta,\eta \rangle}{\|\eta\|^2} + \frac1q\frac{\langle b^q \eta,\eta\rangle}{\|\eta\|^2}\le \frac1p \frac{\langle p_b a^p p_b \eta,\eta \rangle}{\|\eta\|^2} + \frac1q\frac{\langle b^q \eta,\eta\rangle}{\|\eta\|^2} \nonumber\\ &= & \frac1p \frac{\langle a^p \eta,\eta \rangle}{\|\eta\|^2} + \frac1q\frac{\langle b^q \eta,\eta\rangle}{\|\eta\|^2}=\frac{1}{\|\eta\|^2}\left\langle\left( \frac1p a^p +\frac1q b^q \right)\eta,\eta\right\rangle\le \gamma_0\nonumber \end{eqnarray} by the hypothesis on the $\lambda_k$. From here we can derive several conclusions. The first one, since there is equality in Young's numerical inequality (\ref{yn}), is that $\gamma_0=\frac{1}{\|\eta\|^q}$.
The second one, since we have equality in (\ref{bestr}), is that $b^2\eta=\frac{1}{\|\eta\|^2}\eta=\gamma_0^{2/q}\eta$ (Lemma \ref{minij}), therefore $\xi=b\eta=\gamma_0^{1/q}\eta$. The third one, since $0\le \frac1p a^p +\frac1q b^q\le \gamma_0 1$ and now $$ \frac{1}{\|\eta\|^2}\left\langle\left( \frac1p a^p +\frac1q b^q \right)\eta,\eta\right\rangle=\frac{1}{\|\xi\|^2}\left\langle\left( \frac1p a^p +\frac1q b^q \right)\xi,\xi\right\rangle=\gamma_0 $$ is that (Lemma \ref{ellema}.4) $$ \left(\frac1p a^p+\frac1q b^q\right) \xi=\gamma_0 \xi $$ and rearranging if necessary the basis of $ker((\frac1p a^p +\frac1q b^q)-\gamma_0 1)$, we conclude $p_0=p_{\eta}=p_{\xi}=q_0$. Note that $$ \gamma_0\xi=\frac1p a^p\xi+\frac1q b^q\xi=\frac1p a^p\xi +\frac1q \gamma_0\xi, $$ which implies that $a^p\xi=\gamma_0\xi$; with a similar argument, and since $$ 0\le \frac1p \overline{a}^p+\frac1q b^q\le \frac1p p_ba^pp_b+\frac1q b^q= p_b( \frac1p p_ba^pp_b+\frac1q b^q)p_b\le \gamma_0 p_b\le \gamma_0 1 $$ we deduce that $\overline{a}^p\xi=\gamma_0\xi$ also, therefore $a\xi=\gamma_0^{1/p}\xi$. We now proceed with an induction argument. Write $$ a=\sum\limits_{\overline{\alpha}_j>\gamma_0^{1/p}}\overline{\alpha}_j \overline{a_j}+ \sum\limits_{\alpha_k\le \gamma_0^{1/p}}\alpha_k a_k, $$ with $a_k,\overline{a}_j$ rank one disjoint projections and $a_k\overline{a}_j=0$ for all $k,j$. Then, rearranging if necessary, $\alpha_0=\gamma_0^{1/p}$, $a_0=p_0$. Write similarly $$ b=\sum\limits_{\overline{\beta}_j>\gamma_0^{1/q}}\overline{\beta}_j \overline{b_j}+ \sum\limits_{\beta_k\le \gamma_0^{1/q}}\beta_k b_k, \quad \beta_0=\gamma_0^{1/q},\, b_0=p_0.
$$ Let $\overline{a}=(1-p_0)a(1-p_0)$ and $\overline{b}=(1-p_0)b(1-p_0)$; then $p_0\overline{a}=p_0\overline{b}=0$, $$ a=\overline{a}+\gamma_0^{1/p} p_0,\qquad b=\overline{b}+\gamma_0^{1/q} p_0, $$ $$ ab=\overline{a}\overline{b}+\gamma_0p_0, \quad |ab|=|\overline{a}\overline{b}|+\gamma_0 p_0, $$ and $$ \frac1p \overline{a}^p +\frac1q \overline{b}^q+\gamma_0p_0=\frac1p a^p +\frac1q b^q. $$ Therefore $$ \lambda_0\left(\frac1p \overline{a}^p+\frac1q \overline{b}^q\right)=\lambda_1\left(\frac1p a^p+\frac1q b^q\right)=\lambda_1(|ab|)=\lambda_0(|\overline{a}\overline{b}|), $$ and iterating the above construction we arrive at $$ a=\overline{a}+\sum_{k}\gamma_k^{1/p} p_k,\quad b=\overline{b}+\sum_k\gamma_k^{1/q} p_k $$ with $\overline{a}p_k=\overline{b}p_k=0$ for each $k\in\mathbb N_0$. Then $$ \frac1p a^p+\frac1q b^q = \sum_k \gamma_k p_k + \frac1p \overline{a}^p+\frac1q \overline{b}^q=|ab|+ \frac1p \overline{a}^p+\frac1q \overline{b}^q=|ab|+T $$ with $T\ge 0$ compact and $T|ab|=0$. Now $\lambda_k(|ab|)=\lambda_k(\frac1p a^p+\frac1q b^q)$ for all $k$, which means equal eigenvalues with equal (and finite) multiplicities, a fact that forces $T=0$; therefore $\overline{a}=\overline{b}=0$, from which the claim $a^p=b^q$ follows for $a,b\ge 0$, assuming $1<p<2$. \medskip \textbf{Case $p=2$}. Let us now return to the case we skipped.
From (\ref{aaa}), we know that $p_ba^2p_b\ge \gamma_0^2\|\eta\|^2p_\eta$ on the whole $\mathcal H$, therefore \begin{eqnarray} \gamma_0 &\le & \frac12\frac{\gamma_0^2\|\eta\|^2}{\|\eta\|^2}+\frac12\frac{1}{\|\eta\|^2}\le \frac12 \frac{\langle p_ba^2p_b \eta,\eta \rangle}{\|\eta\|^2} + \frac12\frac{\langle b^2 \eta,\eta\rangle}{\|\eta\|^2}=\frac{1}{\|\eta\|^2}\left\langle\left( \frac12 p_ba^2p_b +\frac12 b^2 \right)\eta,\eta\right\rangle\nonumber\\ &= & \frac{1}{\|\eta\|^2}\left\langle\left(p_b\left( \frac12 a^2 +\frac12 b^2 \right)p_b\right)\eta,\eta\right\rangle =\frac{1}{\|\eta\|^2}\left\langle\left( \frac12 a^2 +\frac12 b^2 \right)\eta,\eta\right\rangle \le \gamma_0\nonumber \end{eqnarray} since $\eta\in{\rm{Ran\, }}(b)$. Then from the equality in the numerical inequality (\ref{yn}) we derive that $\gamma_0=\|\eta\|^{-2}$, and $(\frac12 a^2 +\frac12 b^2)\eta=\gamma_0\eta$ as before. Since $q=2$, we have lost the strict inequality in (\ref{bestr}) regarding $b$. However, since now $\langle p_ba^2p_b \eta,\eta \rangle\ge \gamma_0^2{\|\eta\|^2}$ must be an equality, from Lemma \ref{ellema}.4 we conclude that $p_ba^2\eta=p_ba^2p_b\eta=\lambda \eta$ for some positive $\lambda$, hence $b^2\eta=(2\gamma_0-\lambda)\eta$ also. Recalling $1=\|\xi\|^2=\|b\eta\|^2=\langle b^2\eta,\eta\rangle=(2\gamma_0-\lambda)\|\eta\|^2=(2\gamma_0-\lambda)\gamma_0^{-1}$, we obtain $\lambda=\gamma_0$. This tells us that $b\eta=\gamma_0^{1/2}\eta=a\eta$. The rest of the argument follows as in the case $p<2$. Returning to the original statement, if for arbitrary compact $a,b$ we have equality of singular values, since $\lambda_k(|ab^*|)=\lambda_k(||a||b||)$ (Remark \ref{autov}.2), we obtain $|a|^p=|b|^q$. \end{proof} \bigskip Let us collect all the results in one clear-cut statement, the main result of this paper: \begin{teo}\label{elteo} Let $a,b\in {\cal K}({\cal H})$. If $p>1$ and $1/p+1/q=1$, then the following are equivalent: \begin{enumerate} \item $|a|^p=|b|^q$.
\item $z|ab^*|z^* =\frac1p |a|^p +\frac1q |b|^q$ for some contraction $z\in {\cal B}({\cal H})$. \item $\|z|ab^*|w\|_{\phi} =\|\frac1p |a|^p +\frac1q |b|^q\|_{\phi}$ for a pair of contractions $z,w\in{\cal B}({\cal H}) $ and $\|\cdot\|_{\phi}$ a \textbf{strictly increasing} symmetric norm. \item $\lambda_k(|ab^*|)=\lambda_k\left(\frac1p |a|^p+\frac1q |b|^q\right)$ for all $k\in\mathbb N_0$. \end{enumerate} \end{teo} \begin{proof} Clearly $1\Rightarrow 2$ with $z=\nu$ (the partial isometry in the polar decomposition of $b=\nu|b|$). If $2$ holds, picking a norm as in equation (\ref{normastr}), we have $2\Rightarrow 3$. By Lemma \ref{gammaks}, we have $3\Rightarrow 4$ and finally, by Theorem \ref{igual}, it follows that $4\Rightarrow 1$. \end{proof} \subsubsection{Final remarks: equality of operators} Assume that we have an equality of operators \begin{equation}\label{igutrcomp} z|ab^*|z^*=\frac{1}{p}|a|^p+\frac{1}{q}|b|^q \end{equation} for some contraction $z\in{\cal B}({\cal H})$. Then from the previous theorem $|a|^p=|b|^q$ and $$ z|b^*|^qz^*=z\nu|b|^q\nu^*z^*=z|ab^*|z^*=|b|^q. $$ \begin{rem} Let $Tr$ stand for the semi-finite trace of ${\cal B}({\cal H})$. Assume for a moment that $Tr|b|^q<\infty$, or equivalently, that $\beta_k=\lambda_k(|b|)\in \ell_q$. Then $$ Tr(|b^*|^q(1-z^*z))=Tr|b^*|^q-Tr(|b^*|^qz^*z)=Tr|b|^q-Tr(z|b^*|^qz^*)=0, $$ which is only possible if $|b^*|^q=|b^*|^qz^*z$, since $z$ is a contraction and the trace is faithful. Then also $$ zz^*|b|^q=zz^*z|b^*|^qz^*=z|b^*|^qz^*=|b|^q, $$ and $$ |b|^qz=z|b^*|^qz^*z=z|b^*|^q $$ or equivalently $|b|z=z|b^*|$, which can be stated as $bz\nu=\nu zb$. The reader can check that these three conditions $$ 1) \; |b^*|z^*z=|b^*|,\qquad 2)\; |b|zz^*=|b|,\qquad 3)\; |b|z=z|b^*| $$ are also sufficient to have equality in (\ref{igutrcomp}). \end{rem} This last fact, for $z$ a partial isometry (and with a different proof), was observed in \cite{af} by Argerami and Farenick.
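For matrices, the equivalences can be illustrated numerically: random $a,b$ give strict domination of the $\gamma_k$ by the $\delta_k$, while building $b\ge 0$ and $a=b^{q/p}$ (so that $a^p=b^q$) makes all of them coincide. The sizes, exponents and random construction below are arbitrary choices.

```python
import numpy as np

# Matrix-case illustration of the theorem: for random a, b the singular
# values gamma_k of a b* are dominated by the eigenvalues delta_k of
# (1/p)|a|^p + (1/q)|b|^q, and taking |a|^p = |b|^q forces gamma_k = delta_k.
# Sizes, exponents and the random seed are arbitrary assumptions.
rng = np.random.default_rng(4)
n, p = 7, 1.4
q = p / (p - 1.0)

def abs_pow(x, r):
    # |x|^r via the SVD x = U diag(s) V*:  |x| = V diag(s) V*.
    u, s, vh = np.linalg.svd(x)
    return (vh.conj().T * s ** r) @ vh

# Generic a, b: domination of the singular values (Erlijman-Farenick-Zeng).
a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
b = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
gamma = np.linalg.svd(a @ b.conj().T, compute_uv=False)
delta = np.linalg.eigvalsh(abs_pow(a, p) / p + abs_pow(b, q) / q)[::-1]
assert np.all(gamma <= delta + 1e-9)

# Equality case: positive b and a = b^(q/p), so that a^p = b^q.
w = rng.uniform(0.5, 2.0, size=n)
v = np.linalg.qr(rng.standard_normal((n, n)))[0]
bpos = (v * w) @ v.T
apos = (v * w ** (q / p)) @ v.T
gamma = np.linalg.svd(apos @ bpos, compute_uv=False)
delta = np.linalg.eigvalsh(abs_pow(apos, p) / p + abs_pow(bpos, q) / q)[::-1]
print(np.max(np.abs(gamma - delta)))  # ~ 0
```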
\medskip \textit{We conjecture that these three conditions are also necessary for (\ref{igutrcomp}) to hold with a contraction $z$ when $b$ is merely compact}. \subsection*{Acknowledgements} I would like to thank Jorge Antezana for pointing me to the nice paper by Argerami and Farenick; I would also like to thank Esteban Andruchow for our valuable conversations on this subject, and Martin Argerami for all his help improving the manuscript.
\section{Introduction} Low-density nuclear matter tends to be much more interesting than a simple zero-density limit of the bulk physics of nuclei. Indeed, new phenomena show up, mainly associated with the formation of bound states or with the emergence of strong correlations~\cite{bal04,sed03,bro00,bar06,hor05,sed06}. For instance, in symmetric nuclear matter, neutrons and protons become strongly correlated as the density decreases, and a deuteron BEC state appears at very low density~\cite{bal95}. The deuteron-type correlations give extra binding to the nuclear equation of state and induce new features at low density~\cite{mar07}. This transition belongs to the BCS-BEC crossover phenomena, which have been extensively studied in several domains of physics and have recently become experimentally accessible in cold atomic gases (see Ref.~\cite{bul05} and references therein). In nuclear matter, neutron pairs are also strongly correlated. Theoretical predictions suggest that, at densities around $\rho_0/10$, where $\rho_0$=0.16~fm$^{-3}$, the $^1$S pairing gap may take a considerably larger value than that at normal nuclear density $\rho_0$~\cite{lom99}. The density dependence of the pairing gap at low density is unfortunately not yet completely clarified and still awaits a satisfactory solution~\cite{hei00,sch02,cao06}. It is therefore interesting to explore pairing interactions based on Brueckner theory and their consequences for the BCS-BEC crossover.
Indeed, pairing at low density is relevant for several purposes: for the understanding of neutron-rich exotic nuclei near the drip line~\cite{ber91,esb92,esb97,dob96,hag05,pil07}, where the long tails of density profiles give rise to ``halo'' or ``skin'' behavior, for the expanding nuclear matter in heavy ion collisions~\cite{cho04}, and for the physics of neutron stars, where several physical phenomena, such as cooling and glitches, are thought to depend very sensitively on the size and the density dependence of the pairing gap~\cite{yak04,sed07,mon07}. In $^{11}$Li, the wave function of the two neutrons participating in the halo has been analyzed with respect to the BCS-BEC crossover~\cite{hag07}. It has been shown that, as the distance between the center of mass of the two neutrons and the core increases, the wave function changes from the weak coupling BCS regime to the strongly correlated BEC regime. This is due to the fact that the pairing correlations are strongly density-dependent~\cite{mat06}, and the distance between the two neutrons and the core provides a measure of the pairing strength. It should be emphasized that the bare nuclear interaction in the particle-particle channel should be corrected by the medium polarization effects~\cite{lom99,cao06} (usually referred to as screening effects). These effects have been neglected for a long time, since the nuclear interaction is already attractive in the $^1$S$_0$ channel without the medium polarization effects, contrary to the Coulomb interaction, for which the medium polarization effects are absolutely necessary to obtain an attractive interaction between electrons. However, several many-body methods have recently been developed to include the medium polarization effects in the calculation of the pairing gap, such as a renormalization group method~\cite{sch03}, Monte Carlo calculations~\cite{fab05,abe07,gez07} and extensions of the Brueckner theory~\cite{lom99,cao06}.
These calculations, except the one presented in Ref.~\cite{fab05}, predict a reduction of the pairing gap in neutron matter. Note that, based on the nuclear field model, it has also been suggested that the medium polarization effects contribute to the pairing interaction in finite nuclei and increase the pairing gap~\cite{bar99,gio02,bar05}. To understand this apparent contradiction between neutron matter and finite nuclei, a Brueckner calculation including the medium polarization effects in both symmetric and neutron matter has been performed in Ref.~\cite{cao06}. It has been shown that the medium polarization effects are different in neutron matter and in symmetric matter. The medium polarization effects do not reduce the pairing gap in symmetric matter, contrary to what happens in neutron matter. Instead, in symmetric matter, the neutron pairing gap is much enlarged at low density compared to that of the bare calculation. This enhancement takes place especially for neutron Fermi momenta $k_{Fn}<0.7$~fm$^{-1}$. This could explain why the medium polarization effects largely increase the pairing correlations in finite nuclei but decrease them in neutron matter. In this paper, we propose an effective density-dependent pairing interaction which reproduces both the neutron-neutron (nn) scattering length at zero density and the neutron pairing gap in uniform matter obtained by a microscopic treatment based on the nucleon-nucleon interaction~\cite{cao06}. The proposed interaction has isoscalar and isovector terms which can simultaneously describe the density dependence of the neutron pairing gap for both symmetric and neutron matter. Furthermore, we devise different density-dependent interactions to describe the ``bare'' and ``screened'' pairing gaps, together with the asymmetry of uniform matter, given in Ref.~\cite{cao06}. Then, we explore the BCS-BEC crossover phenomena in symmetric and asymmetric nuclear matter. This paper is organized as follows.
In Sec.~\ref{sec:int}, we discuss how to determine the isoscalar and isovector density-dependent contact interactions. Applications of those interactions to the BCS-BEC crossover are presented in Sec.~\ref{sec:crossover}. We give our conclusions in Sec.~\ref{sec:conclusions}. \section{Density-dependent pairing interaction} \label{sec:int} Recently, the spatial structure of the neutron Cooper pair in low-density nuclear matter has been studied using both finite range interactions, like Gogny or G3RS, and density-dependent contact interactions properly adjusted to mimic the pairing gap obtained with the former interactions~\cite{mat06}. It was found that the contact interactions provide almost equivalent results to the finite range ones for many properties of the Cooper pair wave functions. It is thus reasonable to investigate the evolution of the Cooper pair wave function with respect to the density and the isospin asymmetry using contact interactions adjusted to realistic interactions. In this paper, we take a contact interaction $v_{nn}$ acting in the singlet $^1$S channel, \begin{eqnarray} \langle k|v_{nn}|k'\rangle=\frac{1-P_\sigma}{2}v_0 \,g[\rho_n,\rho_p] \,\theta(k,k') \; , \label{eq:pairing_interaction} \end{eqnarray} where the cutoff function $\theta(k,k')$ is introduced to remove the ultraviolet divergence in the particle-particle channel. A simple regularization can be done by introducing a cutoff momentum $k_c$: $\theta(k,k')=1$ if $k,k'<k_c$ and 0 otherwise. In finite systems, a cutoff energy $e_c$ is usually introduced instead of a cutoff momentum $k_c$. The relation between the cutoff energy and the cutoff momentum may depend on the physical problem, and it is known that the pairing strength $v_0$ depends strongly on the cutoff. A detailed discussion of the different prescriptions used in the literature is presented in Appendix~\ref{app:co}.
In this paper, we choose prescription 3 of Appendix~\ref{app:co}, so that the adjusted interaction can be directly applied to Hartree-Fock-Bogoliubov calculations. In Eq.~(\ref{eq:pairing_interaction}), the interaction strength $v_0$ is determined by the low energy scattering phase shift, which fixes the relation between $v_0$ and the cutoff energy $e_c$, while the density-dependent term $g[\rho_n,\rho_p]$ is deduced from predictions of the pairing gaps in symmetric and neutron matter based on the nucleon-nucleon interaction~\cite{cao06}. The density-dependent term accounts for the medium effects and satisfies the boundary condition $g\rightarrow 1$ for $\rho\rightarrow 0$. The volume type and surface type pairing interactions have $g=1$ and $g=0$ at $\rho=\rho_0$, respectively. In this paper, we introduce more general types of pairing interactions, the novelty being a dependence on the neutron-to-proton composition of the considered system. We thus define a function \begin{eqnarray} g_1[\rho_n,\rho_p] = 1 -f_s(I)\eta_s\left(\frac{\rho}{\rho_0}\right)^{\alpha_s} -f_n(I)\eta_n\left(\frac{\rho}{\rho_0}\right)^{\alpha_n} , \label{eq:ipa} \end{eqnarray} where $I$ is the asymmetry parameter, defined as $I=(N-Z)/(N+Z)$, and $\rho_0$=0.16~fm$^{-3}$ is the saturation density of symmetric nuclear matter. We insert the function $g_1$ into Eq.~(\ref{eq:pairing_interaction}) as $g=g_1$. The goal of the functional form in Eq.~(\ref{eq:ipa}) is to reproduce the theoretical calculation of the pairing gap in both symmetric and neutron matter and also to allow predictions of the pairing gap in asymmetric matter. It could also be applied to describe pairing correlations in finite nuclei by acquiring an explicit dependence on the coordinate $r$ through the density $\rho(r)$ and the asymmetry parameter $I(r)$.
In Eq.~(\ref{eq:ipa}), the interpolation functions $f_s(I)$ and $f_n(I)$ are not explicitly known but should satisfy the conditions $f_s(0)=f_n(1)=1$ and $f_s(1)=f_n(0)=0$. The density-dependent function $g_1$ is flexible enough that we can obtain an effective pairing interaction which reproduces the density dependence of the pairing gap in uniform matter. It should however be noticed that the interpolation functions $f_s(I)$ and $f_n(I)$ cannot be deduced from the adjustment of the pairing gap in symmetric and neutron matter alone. For that, theoretical calculations in asymmetric nuclear matter or applications to exotic nuclei are necessary. In this paper, we choose $f_s(I)=1-f_n(I)$ and $f_n(I)=I$. Many different interpolation functions could be explored, but we think that this choice has little consequence for the BCS-BEC crossover. We have also explored other density-dependent functionals, introducing an explicit dependence on the isovector density, $1-\eta_{s}\left(\frac{\rho}{\rho_0}\right)^{\alpha_{s}} -\eta_{i}\left(\frac{\rho_n-\rho_p}{\rho_0}\right)^{\alpha_{i}}$, or introducing the neutron and proton densities, $1-\eta_n\left(\frac{\rho_n}{\rho_0}\right)^{\alpha_n} -\eta_p\left(\frac{\rho_p}{\rho_0}\right)^{\alpha_p}$. The isospin dependence of those functionals is fixed, but these functional forms are not flexible enough, so that the adjustment sometimes becomes impossible. For instance, the pairing gap in symmetric and neutron matter including the medium polarization effects, calculated in Ref.~\cite{cao06}, cannot be reproduced with such functionals. In the following, we therefore consider only the density-dependent interaction given in Eq.~(\ref{eq:ipa}). Recently, another attempt has been made to introduce an isospin-dependent volume type pairing interaction in a very different approach~\cite{gor04}.
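As an illustration, the density-dependent factor $g_1$ with the choice $f_s(I)=1-I$, $f_n(I)=I$ can be transcribed in a few lines. The parameter values below are quoted from the bare, $E_c=40$~MeV row of Table~\ref{tab2}; densities are in fm$^{-3}$.

```python
import numpy as np

RHO0 = 0.16  # saturation density of symmetric matter [fm^-3]

# bare interaction, E_c = 40 MeV (values quoted from Table 2)
ETA_S, ALPHA_S = 0.664, 0.522
ETA_N, ALPHA_N = 1.01, 0.525

def g1(rho_n, rho_p):
    """Density-dependent factor of Eq. (eq:ipa) with f_s(I) = 1 - I, f_n(I) = I."""
    rho = rho_n + rho_p
    if rho == 0.0:
        return 1.0                      # boundary condition g -> 1 at zero density
    I = (rho_n - rho_p) / rho
    x = rho / RHO0
    return 1.0 - (1.0 - I) * ETA_S * x**ALPHA_S - I * ETA_N * x**ALPHA_N

print(g1(0.0, 0.0))        # 1.0: free interaction recovered
print(g1(0.08, 0.08))      # symmetric matter at rho0: 1 - eta_s = 0.336
print(g1(0.16, 0.0))       # neutron matter at rho = rho0: 1 - eta_n = -0.01
```

The two limiting cases show the interpolation at work: in symmetric matter only the $\eta_s$ term survives, in pure neutron matter only the $\eta_n$ term.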
This interaction has been adjusted to reproduce empirical masses over thousands of nuclei, but the pairing gap calculated with this pairing interaction compares very badly with realistic calculations in uniform matter. \subsection{The free interaction} \label{ssec:fi} \begin{figure}[tb] \begin{center} \includegraphics[scale=0.33]{art_fig01.eps} \end{center} \caption{ (Color online) Phase shifts for s-wave nucleon-nucleon scattering as a function of the center of mass energy. The left panel shows nn phase shifts obtained from the Argonne $v_{18}$ potential~\cite{esb97} (stars) and the result of the best adjustment obtained with a contact interaction for a set of cutoff energies (from 10 to 120~MeV). The right panel shows the s-wave phase shifts for various channels: nn, np and pp. The np and pp phase shifts have been provided by the Nijmegen group (http://nn-online.org).} \label{fig01} \end{figure} In Ref.~\cite{ber91}, it has been proposed to deduce the free interaction parameter $v_0$ from the low energy phase shift of nucleon-nucleon scattering. The nn, np and pp phase shifts versus the center of mass energy are shown in the right panel of Fig.~\ref{fig01}. It is clear from this figure that these three channels are different and that the interaction parameter $v_0$ should depend on the channel of interest. In this paper, we are interested only in the nn channel. We then express the phase shift as a function of the cutoff momentum $k_c$ and the scattering length $a_{nn}$~\cite{esb97,gar99}: \begin{eqnarray} k \cot \delta&=&-\frac{2}{\alpha \pi} \left[1+\alpha k_c+\frac{\alpha k}{2}\ln\frac{k_c-k}{k_c+k} \right]\\ &=& -\frac{1}{a_{nn}} - \frac{k}{\pi}\ln\frac{k_c-k}{k_c+k} \end{eqnarray} where $\alpha=2a_{nn}/(\pi-2k_c a_{nn})$. In this way, for a given cutoff momentum $k_c$, the phase shift can be adjusted using the scattering length $a_{nn}$ as a variable.
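The two expressions above for $k\cot\delta$ are algebraically identical once $\alpha$ is substituted; a quick numerical check confirms this. The values of $k$, $k_c$ and $a_{nn}$ below (in fm$^{-1}$ and fm) are illustrative, chosen close to the $e_c=120$~MeV row of Table~\ref{tab1}.

```python
import numpy as np

def kcotd_full(k, kc, a):
    """First form of k cot(delta), written with alpha = 2a / (pi - 2 kc a)."""
    alpha = 2.0 * a / (np.pi - 2.0 * kc * a)
    log = np.log((kc - k) / (kc + k))
    return -2.0 / (alpha * np.pi) * (1.0 + alpha * kc + 0.5 * alpha * k * log)

def kcotd_short(k, kc, a):
    """Second form, exposing the scattering length a_nn directly."""
    return -1.0 / a - (k / np.pi) * np.log((kc - k) / (kc + k))

kc, a = 1.70, -12.6                     # roughly the e_c = 120 MeV row of Table 1
for k in (0.05, 0.2, 0.8):              # momenta below the cutoff [fm^-1]
    print(kcotd_full(k, kc, a), kcotd_short(k, kc, a))   # identical pairs
```

In the $k\to 0$ limit both forms reduce to $-1/a_{nn}$, which is what anchors the fit of the low energy phase shift.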
\begin{table}[tb] \begin{center} \renewcommand{\arraystretch}{1.3} \begin{tabular}{ccccccc} \toprule $e_c$ & $a_{nn}$ & $r_{nn}$ & $\alpha$ & $v_0$ & $v_0^*$ & $v_0^\infty$ \\ \,[MeV] & [fm] & [fm] & [fm] & [MeV.fm$^3$] & [MeV.fm$^3$] & [MeV.fm$^3$] \\ \colrule 120 & $-$12.6 & 0.75 & $-$0.55 & $-$448 & $-$458 & $-$481 \\ 80 & $-$13.0 & 0.92 & $-$0.66 & $-$542 & $-$555 & $-$589 \\ 40 & $-$13.7 & 1.30 & $-$0.91 & $-$746 & $-$767 & $-$833 \\ 20 & $-$15.0 & 1.83 & $-$1.25 & $-$1024 & $-$1050 & $-$1178\\ 15 & $-$15.7 & 2.12 & $-$1.43 & $-$1167 & $-$1192 & $-$1360\\ 10 & $-$17.1 & 2.59 & $-$1.72 & $-$1404 & $-$1421 & $-$1666\\ \botrule \end{tabular} \end{center} \caption{For a given cutoff energy $e_c$, the parameters $r_{nn}$, $\alpha$ and $v_0$ are determined by the scattering length $a_{nn}$ which reproduces the phase shift in the low energy region $e_{c.m.}<2$~MeV. The interaction strengths $v_0^*$ and $v_0^\infty$ are obtained from the empirical value $a_{nn}=-18.5$~fm and from the unitary limit, defined as $a_{nn}\rightarrow\infty$.} \label{tab1} \end{table}% The results are shown in Fig.~\ref{fig01} and Table~\ref{tab1} for a set of cutoff energies $e_c=\hbar^2k_c^2/m$ (note that we use the reduced mass $m/2$). We have found that, for cutoff energies larger than 20~MeV, the optimal parameters cannot reproduce the nn phase shift in the low energy region ($e_{c.m.}<$2~MeV) and in the higher energy region simultaneously. We mainly focus on the adjustment of the nn phase shift in the low energy region. At higher energies, or equivalently at higher densities in nuclear matter, the medium effects modify the interaction in any case and generate a density-dependent term in the interaction. Fixing $e_c$ and $a_{nn}$, one can deduce the effective range $r_{nn}=4/(\pi k_c)$, the parameter $\alpha$ and the interaction strength $v_0=2\pi^2\alpha\hbar^2/m$. The values of those parameters are given in Table~\ref{tab1}.
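The chain $e_c \to k_c \to (r_{nn},\alpha,v_0,v_0^\infty)$ can be reproduced directly from the formulas above. The sketch below assumes $\hbar^2/m \simeq 41.44$~MeV\,fm$^2$ for the nucleon and takes the first row of Table~\ref{tab1} as input; it recovers that row to the quoted precision.

```python
import numpy as np

HBAR2_M = 41.44    # hbar^2/m for a nucleon [MeV fm^2] (assumed value)

def strengths(e_c, a_nn):
    """Effective range, alpha and interaction strengths from e_c [MeV], a_nn [fm]."""
    k_c = np.sqrt(e_c / HBAR2_M)                   # from e_c = hbar^2 k_c^2 / m
    r_nn = 4.0 / (np.pi * k_c)                     # effective range
    alpha = 2.0 * a_nn / (np.pi - 2.0 * k_c * a_nn)
    v0 = 2.0 * np.pi**2 * alpha * HBAR2_M          # interaction strength [MeV fm^3]
    v0_inf = -2.0 * np.pi**2 * HBAR2_M / k_c       # unitary limit k_c a_nn -> inf
    return r_nn, alpha, v0, v0_inf

r_nn, alpha, v0, v0_inf = strengths(120.0, -12.6)  # e_c = 120 MeV row of Table 1
print(r_nn, alpha, v0, v0_inf)   # ~0.75 fm, ~-0.55 fm, ~-448 and ~-481 MeV fm^3
```

The same function evaluated on the other rows of Table~\ref{tab1} reproduces them as well, which is a useful consistency check on the unit conventions.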
The value of the free interaction parameter $v_0^*$ deduced from the empirical value of the scattering length $a_{nn}=-18.5$~fm is also indicated. One can see that the difference between $v_0$ and $v_0^*$ is small, at most about 3\%. Indeed, as we are in a regime of large scattering length, one can deduce the interaction strength approximately from the relation $v_0\approx v_0^\infty\left(1+\pi/(2k_ca_{nn})+\dots\right)$, where $v_0^\infty=-2\pi^2\hbar^2/(mk_c)$ is the interaction strength in the unitary limit ($k_c a_{nn}\rightarrow\infty$). \subsection{The density-dependent function $g[\rho_n,\rho_p]$} \label{ssec:dd} The density-dependent function $g$ is adjusted to reproduce the pairing gaps in symmetric and neutron matter obtained from Ref.~\cite{cao06}. Pairing in uniform nuclear matter is evaluated with the BCS ansatz: \begin{eqnarray} |BCS\rangle=\prod_{k>0} (u_k+v_k \hat{a}^\dagger_{k\uparrow} \hat{a}^\dagger_{-k\downarrow})|-\rangle \; , \label{eq:bcs} \end{eqnarray} where $u_k$ and $v_k$ represent the BCS variational parameters and $\hat{a}^\dagger_{k\uparrow}$ are creation operators of a particle with momentum $k$ and spin $\uparrow$ on top of the vacuum $|-\rangle$~\cite{bcs57,gennes,schuck}. The BCS equations are deduced from the minimization of the energy with respect to the variational parameters $u_k$ and $v_k$. For a contact interaction, the equation for the pairing gap $\Delta_n$ takes the following simple form at zero temperature, \begin{eqnarray} \Delta_n = -\frac{v_0 g[\rho_n,\rho_p]}{2(2\pi)^3} \int d^3 k \frac{\Delta_n}{E_n(k)} \theta(k,k) \;, \label{eq:gap} \end{eqnarray} where $\theta(k,k)$ is the cutoff function associated with the contact interaction~(\ref{eq:pairing_interaction}), $E_n(k)=\sqrt{(\epsilon_n(k)-\nu_n)^2+\Delta_n^2}$ is the neutron quasi-particle energy, and $\epsilon_n(k)=\hbar^2k^2/2m_n^*$ is the neutron single particle kinetic energy with the effective mass $m^*_n$.
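For a contact interaction the angular integral in Eq.~(\ref{eq:gap}) is trivial, and at fixed $\nu_n$ the gap can be found by bisection, since the right-hand side divided by $\Delta_n$ decreases monotonically with $\Delta_n$. The sketch below is purely illustrative: the free-mass kinetic energy, the cutoff prescription $\epsilon_n(k)<\nu_n+E_c$, and the numerical values of $\nu_n$ and $v_0\,g$ are assumptions, not the paper's fitted values.

```python
import numpy as np

HBAR2_2M = 20.72   # hbar^2/2m* [MeV fm^2]; free nucleon mass assumed
EC = 40.0          # quasi-particle cutoff energy [MeV] (an assumed prescription)

def gap_fun(delta, nu, v_eff):
    """Gap equation written as F(Delta) = 1; v_eff = v0 * g [MeV fm^3]."""
    k_max = np.sqrt((nu + EC) / HBAR2_2M)    # cutoff: eps_n(k) < nu + E_c (assumed)
    k = np.linspace(1e-6, k_max, 20001)
    E = np.hypot(HBAR2_2M * k**2 - nu, delta)
    f = k**2 / E
    integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(k))   # trapezoid rule
    return -v_eff / (4.0 * np.pi**2) * integral

def solve_gap(nu, v_eff, lo=1e-8, hi=100.0):
    """Bisection in Delta; gap_fun decreases monotonically with Delta."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if gap_fun(mid, nu, v_eff) > 1.0 else (lo, mid)
    return 0.5 * (lo + hi)

delta = solve_gap(nu=10.0, v_eff=-300.0)   # illustrative numbers only
print(delta)                               # a gap of order 1 MeV for these inputs
```

The logarithmic peak of the integrand at $\epsilon_n(k)=\nu_n$ is what makes the $\Delta_n\to 0$ limit of $F$ diverge, guaranteeing a nonzero solution for any attractive $v_0\,g$.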
We define the effective neutron chemical potential $\nu_n=\mu_n-U_n$, where the neutron mean field potential $U_n$ is subtracted from the neutron chemical potential $\mu_n$. The effective neutron chemical potential $\nu_n$ determines the neutron density, \begin{eqnarray} \rho_n=\frac{2}{V}\sum_k n_n(k) \label{eq:rho} \end{eqnarray} where $V$ is the volume and $n_n(k)$ is the occupation probability defined as \begin{eqnarray} n_n(k)= \frac{1}{2}\left[ 1-\frac{\epsilon_n(k)-\nu_n}{E_n(k)}\right] \; . \label{eq:nn} \end{eqnarray} Finally, the neutron Fermi momentum $k_{Fn}$ is defined through $\rho_n\equiv k_{Fn}^3/3\pi^2$. \begin{table}[tb] \begin{center} \setlength{\tabcolsep}{.08in} \renewcommand{\arraystretch}{1.3} \begin{tabular}{cccccc} \toprule & $E_c=e_c/2$ & $\eta_s$ & $\alpha_s$ & $\eta_n$ & $\alpha_n$ \\ \colrule \hbox{bare} & 60~MeV & 0.598 & 0.551 & 0.947 & 0.554 \\ $g=g_1$ & 40~MeV & 0.664 & 0.522 & 1.01 & 0.525 \\ & 20~MeV & 0.755 & 0.480 & 1.10 & 0.485 \\ & 10~MeV & 0.677 & 0.365 & 0.931 & 0.378 \\ \hline \hbox{screened-I} & 60~MeV & 7.84 & 1.75 & 0.89 & 0.380 \\ $g=g_1$ & 40~MeV & 8.09 & 1.69 & 0.94 & 0.350 \\ & 20~MeV & 9.74 & 1.68 & 1.00 & 0.312 \\ & 10~MeV & 14.6 & 1.80 & 0.92 & 0.230 \\ \hline \hbox{screened-II} & 60~MeV & 1.61 & 0.23 & 1.56 & 0.125 \\ $g=g_1+g_2$ & 40~MeV & 1.80 & 0.27 & 1.61 & 0.122 \\ $\eta_2=0.8$ & 20~MeV & 2.06 & 0.31 & 1.70 & 0.122 \\ & 10~MeV & 2.44 & 0.37 & 1.66 & 0.0939 \\ \botrule \end{tabular} \end{center} \caption{Parameters of the function $g$ defined in Eq.~(\ref{eq:ipa}), obtained from fits to the pairing gaps in symmetric and neutron matter: the bare gap with $g=g_1$ (bare), the screened gap with $g=g_1$ (screened-I), and the screened gap including the additional function $g_{2}$ (screened-II). The effective mass is obtained from the SLy4 Skyrme interaction.
Note that $E_c$ is the cutoff for the quasi-particle gap equation~(\ref{eq:gap}) while $e_c$ is that for the two-body scattering, so that $E_c=e_c/2$. See the text for details.} \label{tab2} \end{table}% We have chosen to adjust our interaction to the results of nuclear matter pairing gaps in Ref.~\cite{cao06}, since these are the only calculations performed for both symmetric and neutron matter. We adjust the contact pairing interaction so that it reproduces the position and the absolute values of the maxima of the pairing gaps in symmetric and neutron matter. For the bare pairing gap, the maximum is located at $k_{Fn}=0.87$~fm$^{-1}$ with $\Delta_n$=3.1~MeV for both symmetric and neutron matter, while for the screened pairing gap, the maximum is at $k_{Fn}=0.60$~fm$^{-1}$ with $\Delta_n$=2.70~MeV for symmetric matter and $k_{Fn}=0.83$~fm$^{-1}$ and $\Delta_n$=1.76~MeV for neutron matter. The values of the parameters $\eta_s$ and $\eta_n$ are freely explored on the real axis, while the parameters $\alpha_s$ and $\alpha_n$ are required to be positive to avoid singularities. The neutron effective mass $m_n^*$ is obtained from the SLy4 Skyrme interaction, since it is widely used in nuclear mean-field calculations. The results of the fits are given in Table~\ref{tab2} and the pairing gaps are shown in Fig.~\ref{fig02}. \begin{figure}[tb] \begin{center} \includegraphics[scale=0.35]{art_fig02.eps} \end{center} \caption{(Color online) Pairing gap in symmetric, asymmetric ($x_p=\rho_p/\rho$) and neutron matter adjusted to the ``bare gap'' (upper panel) or to the ``screened gap'' (lower panel) for various cutoff energies $E_c$. The pairing gap calculated from the microscopic treatment presented in Ref.~\cite{cao06} is also shown as star symbols.} \label{fig02} \end{figure} One should note that, for the bare interaction, even if the pairing gap is identical in symmetric and neutron matter, the adjusted contact interaction is not necessarily isoscalar.
Indeed, the transformation from the Fermi momentum, the x-axis of Fig.~\ref{fig02}, to the density is different in symmetric nuclear matter, $\rho/\rho_0=(k_{Fn}/k_{F0})^3$ (where $\rho_0=2/(3\pi^2) k_{F0}^3$=0.16~fm$^{-3}$), and in neutron matter, $\rho/\rho_0=0.5(k_{Fn}/k_{F0})^3$. Therefore, an interaction which depends only on the ratio $\rho/\rho_0$ gives different results if it is plotted as a function of $k_{Fn}$ in symmetric and neutron matter. As the pairing gap calculated with the bare interaction~\cite{cao06} is nearly identical in symmetric and neutron matter when it is plotted versus $k_{Fn}$, one can then deduce the following relations between the parameters of the density-dependent term $g_1$ (neglecting the isospin dependence of the effective mass): $\alpha_s=\alpha_n$ and $\eta_s=\eta_n/2^{\alpha_n}$. For the bare pairing gap and for a given cutoff energy $E_c$, the position and the maximum value of the gap are reproduced well by the contact interaction in Fig.~\ref{fig02}. However, in the high Fermi momentum region $k_{Fn}>1$~fm$^{-1}$, we can see an appreciable difference between the microscopic predictions and the pairing gap obtained from the contact interactions. The best agreement is obtained for a cutoff energy $E_c=40$~MeV. In the screened case, the dependence of the pairing gap on $k_{Fn}$ is poorly reproduced, especially for symmetric nuclear matter. This is because the maximum position of the pairing gap is shifted towards a lower neutron Fermi momentum (about one third of the bare-gap density). Consequently, the density dependence of the function $g_1$ becomes stiffer in the ``screened'' case than in the bare case, and the gap drops faster after the maximum, as shown in Fig.~\ref{fig02}. This may indicate that the screened interaction has a different density dependence and cannot be cast into a simple power law of the density as in Eq.~(\ref{eq:ipa}).
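The Fermi-momentum-to-density mappings quoted above are simple enough to transcribe directly; the snippet below only restates the text's formulas, with $k_{F0}$ the Fermi momentum of symmetric matter at saturation.

```python
import numpy as np

RHO0 = 0.16                                           # fm^-3
KF0 = (3.0 * np.pi**2 * RHO0 / 2.0) ** (1.0 / 3.0)    # Fermi momentum at saturation

def rho_ratio(kFn, neutron_matter=False):
    """rho/rho0 for a given neutron Fermi momentum kFn [fm^-1]."""
    x = (kFn / KF0) ** 3
    return 0.5 * x if neutron_matter else x

print(KF0)                                            # ~1.33 fm^-1
print(rho_ratio(KF0), rho_ratio(KF0, neutron_matter=True))   # 1.0 and 0.5
```

The factor-of-two difference at equal $k_{Fn}$ is exactly what generates the relation $\eta_s=\eta_n/2^{\alpha_n}$ quoted above for the bare interaction.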
Indeed, in Ref.~\cite{cao06}, the medium polarization effects have been analyzed at the level of the interacting potential, and it was shown that the medium polarization effects emerge at very low density and remain relatively constant. To simulate such effects, it seems necessary to introduce a new term, $g_2$, in addition to $g_1$ of Eq.~(\ref{eq:ipa}). We propose for $g_2$ a simple isoscalar constant which switches on at a very low value of the density ($k_F\sim0.15$~fm$^{-1}$) and switches off around the saturation density. The following form satisfies this condition, \begin{eqnarray} g_{2}=\eta_{2}\left[\left(1+e^{\frac{k_F-1.4}{0.05}}\right)^{-1}- \left(1+e^{\frac{k_F-0.1}{0.05}}\right)^{-1}\right] \; . \label{eq:g2} \end{eqnarray} The new pairing interaction with $g=g_1+g_2$, hereafter named screened-II, has then only one new adjustable parameter $\eta_{2}$. As the medium polarization effects could also change the density-dependent term $g_1$, all five parameters have to be re-adjusted. In Tables~\ref{tab2} and \ref{tab4}, we give the new parameters $\eta_s$, $\alpha_s$, $\eta_n$ and $\alpha_n$, obtained for several values of $\eta_{2}$. The cutoff energy is fixed to $E_c$=40~MeV. The corresponding pairing gap is represented in Fig.~\ref{fig03}. \begin{figure}[tb] \begin{center} \includegraphics[scale=0.36]{art_fig03.eps} \end{center} \caption{(Color online) Pairing gap calculated in symmetric, asymmetric and neutron matter with the screened-II interaction ($g=g_1+g_2$) and for several values of $\eta_{2}$ as indicated in the legend. The corresponding parameters are given in Table~\ref{tab4}. See the text for details.} \label{fig03} \end{figure} The best fit is obtained for the value $\eta_{2}$=0.8. Eq.~(\ref{eq:g2}) may not be the unique way to take into account the medium polarization effects. Nevertheless, it is simple enough to apply to the BCS-BEC crossover, which is why we adopt this functional form.
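Equation~(\ref{eq:g2}) is a smooth plateau in $k_F$, built from two Fermi functions; a direct transcription (with $k_F$ in fm$^{-1}$ and the best-fit $\eta_2=0.8$) shows the switch-on/switch-off behaviour:

```python
import numpy as np

def g2(kF, eta2=0.8):
    """Plateau term of Eq. (g2): ~eta2 for 0.15 < kF < 1.3 fm^-1, ~0 well above."""
    return eta2 * (1.0 / (1.0 + np.exp((kF - 1.4) / 0.05))
                   - 1.0 / (1.0 + np.exp((kF - 0.1) / 0.05)))

print(g2(0.7))   # ~0.8: full plateau value
print(g2(2.0))   # ~0: well above the switch-off at kF = 1.4 fm^-1
print(g2(0.0))   # ~0.1: the lower switch at 0.1 fm^-1 is not fully off at kF = 0
```

Note that the lower switch, centered at $k_F=0.1$~fm$^{-1}$ with width 0.05~fm$^{-1}$, leaves a small residual $g_2\simeq 0.12\,\eta_2$ at exactly zero density; in practice this mildly perturbs the $g\to 1$ boundary condition.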
\begin{table}[b] \begin{center} \setlength{\tabcolsep}{.15in} \renewcommand{\arraystretch}{1.3} \begin{tabular}{ccccc} \toprule $\eta_{2}$ & $\eta_s$ & $\alpha_s$ & $\eta_n$ & $\alpha_n$ \\ \colrule 0.2 & 1.90 & 0.72 & 1.08 & 0.24 \\ 0.4 & 1.61 & 0.46 & 1.26 & 0.19 \\ 0.6 & 1.64 & 0.33 & 1.44 & 0.15 \\ 0.8 & 1.80 & 0.27 & 1.61 & 0.122 \\ \botrule \end{tabular} \end{center} \caption{Parameters of the screened-II interaction, for which the density-dependent function of the pairing interaction is taken to be $g=g_1+g_2$. The functional forms $g_1$ and $g_2$ are obtained by fitting the screened pairing gap for several values of $\eta_{2}$. The energy cutoff is taken to be $E_c=40$~MeV and the neutron effective mass is deduced from the SLy4 Skyrme interaction.} \label{tab4} \end{table}% Solving the gap equation~(\ref{eq:gap}), the neutron effective chemical potential $\nu_n$ is determined for a given interaction at a given neutron Fermi momentum $k_{Fn}$. The neutron effective mass $m^*_n$, the effective neutron chemical potential $\nu_n=\mu_n-U_n$ and the difference $\nu_n-\epsilon_{Fn}$ are represented in Fig.~\ref{fig04} as functions of the neutron Fermi momentum in symmetric and neutron matter. \begin{figure}[tb] \begin{center} \includegraphics[scale=0.36]{art_fig04.eps} \end{center} \caption{(Color online) Comparison of the neutron effective mass $m^*_n$, the effective chemical potential $\nu_n=\mu_n-U_n$, and the difference $\nu_n-\epsilon_{Fn}$ calculated with the bare and the screened-II contact interactions. The parameters of the pairing interactions are taken from Table~\ref{tab2} with the cutoff energy $E_c$=40~MeV. The effective mass $m_n^*$ and the neutron potential $U_n$ are taken from the SLy4 Skyrme interaction.
The two arrows indicate the lower and upper limits of the condition $\nu_n<0$ with the screened-II interaction in symmetric nuclear matter.} \label{fig04} \end{figure} The neutron effective mass and the neutron potential $U_n$ are deduced from the SLy4 Skyrme interaction. Note that the neutron density $\rho_n$ is expressed through the neutron Fermi momentum $k_{Fn}$. We have selected from Table~\ref{tab2} the bare and the screened-II pairing interactions for a cutoff energy $E_c=40$~MeV. In the absence of pairing, the effective neutron chemical potential $\nu_n$ is identical to the neutron Fermi kinetic energy, $\nu_n=\epsilon_{Fn}$, where $\epsilon_{Fn}=\epsilon_n(k_{Fn})$. The difference $\nu_n-\epsilon_{Fn}$ is thus zero in the absence of pairing correlations; otherwise it is negative, as shown in Fig.~\ref{fig04}. From this difference, one can estimate the relative importance of the pairing correlations: in neutron matter the screened-II interaction leads to weaker pairing correlations compared to the bare one, while in symmetric matter, the screened-II interaction gives much stronger pairing correlations for $k_{Fn}<0.7$~fm$^{-1}$ and weaker ones for $k_{Fn}>0.7$~fm$^{-1}$. It is easy to show that the gap equation~(\ref{eq:gap}) and the occupation probability~(\ref{eq:nn}) go over into the Schr\"odinger-like equation for the neutron pair wave function $\Psi_{pair}$~\cite{noz85}, \begin{eqnarray} \frac{p^2}{m}\Psi_{pair}+[1-2n_n(k)] \frac{1}{V} {\rm Tr}\; v_{nn}\Psi_{pair} =2\nu_n\Psi_{pair}. \label{eq:sch} \end{eqnarray} See Eq.~(\ref{eq:psi}) in Sec.~\ref{sec:crossover} for a proper definition of the neutron pair wave function $\Psi_{pair}$. Notice that, at zero density where $n_n(k)=0$, Eq.~(\ref{eq:sch}) is nothing but the Schr\"odinger equation for the neutron pair. From Eq.~(\ref{eq:sch}), one usually interprets the effective neutron chemical potential $\nu_n$ as half of the ``binding energy'' of a Cooper pair.
The Cooper pairs may then be considered strongly correlated if $\nu_n$ is negative. Notice from Fig.~\ref{fig04} that the effective neutron chemical potential $\nu_n$ becomes negative with the screened-II interaction in symmetric nuclear matter for $k_{Fn}$=0.05-0.35~fm$^{-1}$, but not at all in neutron matter. One could then expect in this case that the Cooper pair may resemble a closely bound system (BEC) in symmetric nuclear matter at very low density, while in neutron matter it should behave as in the weak coupling BCS regime. However, to go beyond this rough interpretation, we need to study more accurately the BCS-BEC crossover in asymmetric matter. \section{Application to the BCS-BEC crossover} \label{sec:crossover} Going from the weak coupling BCS regime, around the saturation density $\rho_0$, down to the BCS-BEC crossover, for densities $<\rho_0/10$, it has been shown that the spatial structure of the neutron Cooper pair changes~\cite{mat06}. It is indeed expected that the correlations between two neutrons become large as the density decreases and, as a consequence, the BCS-BEC crossover occurs in uniform matter at low density. However, being of second order, this transition is smooth. Hereafter, we clarify the boundaries of the BCS-BEC phase transition by using a regularized gap equation. Although the BCS ansatz~(\ref{eq:bcs}) was developed to describe the Cooper pair formation in the weak BCS regime~\cite{bcs57}, it has been shown that the BCS equations are also valid in the strong BEC condensation regime~\cite{noz85,PS03}. The BCS equations are thus adopted as a useful framework to describe the intermediate BCS-BEC crossover regime at zero temperature~\cite{eng97}. It has been proposed to define the limit of the BCS-BEC phase transition using a regularized model for the pairing gap~\cite{eng97,pap99,mat06}.
In this model, the BCS gap~(\ref{eq:gap}) is combined with the relation between the interaction strength and the scattering length, which has a similar divergent behavior. The difference between those two divergent integrals gives a regularized equation, \begin{eqnarray} \frac{m_n}{4\pi a_{nn}}=-\frac{1}{2V} {\rm Tr}\; \left( \frac{1}{E_n(k)}-\frac{1}{\epsilon_n(k)} \right) \; , \label{eq:reg} \end{eqnarray} which has no divergence. The gap equation~(\ref{eq:gap}) can be solved analytically for the contact interaction with the constraint of particle number conservation~(\ref{eq:rho}). The solution of this regularized gap equation is independent of the strength of the interaction, and the gap is uniquely determined by the value of the scattering length $a_{nn}$. From Eq.~(\ref{eq:reg}), one can study the boundaries of the BCS-BEC phase transition with respect to the dimensionless order parameter $k_{Fn} a_{nn}$. \begin{table}[tb] \begin{center} \setlength{\tabcolsep}{0.02in} \renewcommand{\arraystretch}{1.3} \begin{tabular}{cccccc} \toprule $(k_{Fn} a_{nn})^{-1}$ & $P(d_n)$ & \,$\xi_{rms}/d_n$ & \,$\Delta_n/\epsilon_{Fn}$ & \,$\nu_n/\epsilon_{Fn}$ & \\ \colrule $-$1 & 0.81 & 1.10 & 0.21 & 0.97 & \hbox{BCS boundary}\\ 0 & 0.99 & 0.36 & 0.69 & 0.60 & \hbox{unitarity limit}\\ 1 & 1.00 & 0.19 & 1.33 & $-$0.77& \hbox{BEC boundary}\\ \botrule \end{tabular} \end{center} \caption{Reference values of $(k_{Fn} a_{nn})^{-1}$, $P(d_n)$, $\xi_{rms}/d_n$, $\Delta_n/\epsilon_{Fn}$ and $\nu_n/\epsilon_{Fn}$ characterizing the BCS-BEC crossover in the regularized model for the contact interaction. The values $d_n$, $P(d_n)$, and $\xi_{\rm rms}$ are the average distance between neutrons $d_n=\rho_n^{-1/3}$, the probability for the partner neutrons to be correlated within the relative distance $d_n$, and the rms radius of the Cooper pair, respectively. The numbers have been taken from Refs.~\cite{eng97,mat06}.
See the text for details.} \label{tab5} \end{table}% We give in Table~\ref{tab5} the values of several quantities which specify the phase transition: the probability $P(d_n)$ for the partner neutrons to be correlated within the relative distance $d_n$ ($d_n$ being the average distance between neutrons, $d_n=\rho_n^{-1/3}$), the ratio of the rms radius to the mean neutron distance $\xi_{rms}/d_n$, the ratio of the pairing gap to the Fermi energy $\Delta_n/\epsilon_{Fn}$, and the ratio of the effective neutron chemical potential to the Fermi energy $\nu_n/\epsilon_{Fn}$. As already mentioned, these boundaries are only indicative because the transition, being of second order, is smooth. For instance, even if nuclear matter does not strictly enter the BEC regime, we will show that the Cooper pair wave function is already very similar to the BEC one close to that boundary. A drawback of this regularized model is that the relation between the dimensionless order parameter $k_{Fn}a_{nn}$ and the density of the medium is unknown. To relate the order parameter to the density, one has to re-introduce the pairing strength in the gap equation~(\ref{eq:gap}), considering for instance a contact interaction with a cutoff regularization. The density then triggers the phase transition for a given pairing interaction strength. In the following, we study the BCS-BEC phase diagram in asymmetric nuclear matter for the two pairing interactions discussed in Sec.~\ref{sec:int}. Namely, we explore the properties of the Cooper pair wave function obtained with the bare and the screened-II interactions presented in Table~\ref{tab2} for a fixed cutoff energy $E_c$=40~MeV. The BCS approximation provides the Cooper pair wave function $\Psi_{pair}(k)$~\cite{bcs57,gennes,schuck} \begin{eqnarray} \Psi_{pair}(k) &\equiv& C \langle BCS|\hat{a}^\dagger(k\uparrow) \hat{a}^\dagger(-k\downarrow)|BCS\rangle \\ &\equiv& C u_k v_k \; .
\label{eq:psi} \end{eqnarray} The radial shape of the Cooper pair wave function is deduced from the Fourier transform of $u_k v_k=\Delta_n/2E_n(k)$ in Eq.~(\ref{eq:psi}). The rms radius of Cooper pairs is then given by $\xi_{rms}=\sqrt{\langle r^2\rangle}=\sqrt{\int d r r^4 |\Psi_{pair}(r)|^2}$. The rms radius $\xi_{rms}$ and the Pippard coherence length, $\xi_P=\hbar^2k_{Fn}/m^*_n\pi\Delta_n$, give a similar size for the Cooper pair in the weak coupling regime. The rms radius $\xi_{rms}$ is nevertheless a more appropriate quantity in the BCS-BEC crossover region as well as in the strong BEC coupling region. To estimate the size of Cooper pairs, a reference scale is given by the average distance between neutrons $d_n$. If the rms radius of the Cooper pair is larger than $d_n$, the pair is interpreted as an extended BCS pair, while it is considered a compact BEC pair if the rms radius is smaller than the average distance. Let us introduce another important quantity which also gives a measure of the spatial correlations: the probability $P(r)$ for the partners of the neutron Cooper pair to come close to each other within the relative distance $r$, \begin{eqnarray} P(r) = \int_0^r dr' r'^2 |\Psi_{pair}(r')|^2 \; . \end{eqnarray} The order parameters listed in Table~\ref{tab5} are closely related. For instance, approximating $\xi_{rms}$ by $\xi_P$ and using $\epsilon_{Fn}=\hbar^2k_{Fn}^2/2m^*_n$ together with $d_n=(3\pi^2)^{1/3}/k_{Fn}$, one finds $\xi_P/d_n=2\epsilon_{Fn}/[\pi(3\pi^2)^{1/3}\Delta_n]$: the ratio $\xi_{rms}/d_n$ is thus proportional to $\epsilon_{Fn}/\Delta_n$, and the strong coupling regime is reached if the ratio $\Delta_n/\epsilon_{Fn}$ is large. The order parameter $\nu_n$, the effective neutron chemical potential, can be interpreted as half of the binding energy of Cooper pairs at finite density according to the Schr\"odinger-like Eq.~(\ref{eq:sch}).
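To make the regularized model of Eq.~(\ref{eq:reg}) concrete, the coupled gap and number equations for a contact interaction can be solved numerically in dimensionless form. The following sketch (a Python illustration of our own, not part of the original calculation; variable names are ours) recovers the mean-field reference values quoted in Table~\ref{tab5}, e.g. $\Delta_n/\epsilon_{Fn}\simeq 0.69$ and $\nu_n/\epsilon_{Fn}\simeq 0.6$ at the unitarity limit.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import fsolve

# Dimensionless crossover (Leggett) equations for a contact interaction:
# momenta in units of k_Fn, energies in units of eps_Fn, with the
# quasiparticle energy E(x) = sqrt((x^2 - nu)^2 + Delta^2).
def inv_kfa(nu, delta):
    # regularized gap equation, Eq. (reg): 1/(k_Fn a_nn) = (2/pi) * integral
    E = lambda x: np.sqrt((x**2 - nu)**2 + delta**2)
    val, _ = quad(lambda x: x**2 * (1.0/x**2 - 1.0/E(x)), 0.0, np.inf)
    return (2.0/np.pi) * val

def number(nu, delta):
    # particle-number conservation: this integral must equal 2/3
    E = lambda x: np.sqrt((x**2 - nu)**2 + delta**2)
    val, _ = quad(lambda x: x**2 * (1.0 - (x**2 - nu)/E(x)), 0.0, np.inf)
    return val

def solve_crossover(x):
    # solve for (nu/eps_Fn, Delta/eps_Fn) at a given value of 1/(k_Fn a_nn)
    eqs = lambda p: [inv_kfa(*p) - x, number(*p) - 2.0/3.0]
    return fsolve(eqs, [0.5, 0.7])

nu_u, delta_u = solve_crossover(0.0)   # unitarity limit, (k_Fn a_nn)^{-1} = 0
```

As in the text, the solution depends only on the dimensionless parameter $(k_{Fn}a_{nn})^{-1}$, not on the interaction strength itself.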
As shown in the Appendix~\ref{app:wf}, it is convenient to decompose the Cooper pair wave function into \begin{eqnarray} \Psi_{pair}(r)=\Psi_1(r)+\Psi_2(r) \; , \label{eq:wf} \end{eqnarray} where \begin{eqnarray} \Psi_1(r)&=&C'\int_0^{k_\infty} dk \frac{k^2}{E_n(k)}\frac{\sin(kr)}{kr} \; , \label{eq:wf1} \\ \Psi_2(r)&=&C'\int_{k_\infty}^\infty dk \frac{k^2}{E_n(k)}\frac{\sin(kr)}{kr} \label{eq:wf2} \; . \end{eqnarray} Choosing $k_\infty/k_0\gg 1$, with $k_0=\sqrt{2m^*_n\nu_n}/\hbar$, it is possible to find an analytic expression for $\Psi_2$: \begin{eqnarray} \Psi_2(r)=-\frac{C'}{r} \frac{2 m^*_n}{\hbar^2} \,{\rm si}(k_\infty r) \; , \label{eq:psi2} \end{eqnarray} where ${\rm si}(u)$ is the sine integral defined as ${\rm si}(u)=\int_u^\infty dz \,[\sin(z)/z]$. It is clear from Eq.~(\ref{eq:psi2}) that the term $\Psi_2$ has a $1/r$-type singularity. This singularity is due to the contact interaction, which does not contain a hard core repulsion. With a hard core repulsion, the wave function goes to zero as $r\rightarrow 0$~\cite{mat06}. In the outer region ($r>3$~fm), the wave function behaves in the same way if the contact interaction is deduced properly from the microscopic calculations. We checked the convergence of the wave function~(\ref{eq:wf}) with respect to the parameter $k_\infty$ and found that convergence is reached for $k_\infty\approx 2k_0$, as shown in Fig.~\ref{fig11} in the Appendix~\ref{app:wf}. In Ref.~\cite{mat06}, Matsuo introduced the cutoff momentum $k_c$ to calculate the pair wave function~(\ref{eq:wf}). We have compared the pair wave function $\Psi_{pair}(r)$ with the one obtained by Matsuo. The two wave functions give essentially the same results, except in the low density region. In the worst case, Matsuo's treatment increases the rms radius by about 10\% compared to that obtained from the wave function~(\ref{eq:wf}).
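The analytic tail~(\ref{eq:psi2}) is easy to check numerically. The sketch below (our illustration; units chosen so that $2m^*_n/\hbar^2=1$ and $C'=1$, with illustrative values $\nu_n=1$, $\Delta_n=0.5$, so that $k_0=1$) compares the truncated integral~(\ref{eq:wf2}) with the sine-integral expression, up to the overall sign convention absorbed in $C'$; note that \texttt{scipy.special.sici} returns ${\rm Si}(u)$, and ${\rm si}(u)=\pi/2-{\rm Si}(u)$.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import sici

# Units: 2 m*_n / hbar^2 = 1 and C' = 1, so E_n(k) -> k^2 for k >> k0 = 1
# and the tail of the pair wave function reduces to a sine integral.
def psi2_numeric(r, k_inf, nu=1.0, delta=0.5, k_max=500.0):
    E = lambda k: np.sqrt((k**2 - nu)**2 + delta**2)
    # integrate [k/(E(k) r)] * sin(r k) over the (truncated) tail k > k_inf
    val, _ = quad(lambda k: k / (E(k) * r), k_inf, k_max,
                  weight='sin', wvar=r, limit=200)
    return val

def psi2_tail(r, k_inf):
    Si, _ = sici(k_inf * r)          # Si(u); si(u) = pi/2 - Si(u)
    return (np.pi / 2.0 - Si) / r
```

With $k_\infty\gg k_0$ the two expressions agree to well below a percent of the leading scale, confirming the large-$k$ asymptotics used in Eq.~(\ref{eq:psi2}).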
\begin{figure*}[htb] \begin{center} \includegraphics[scale=0.5]{art_fig05.eps} \end{center} \caption{(Color online) Neutron Cooper pair wave function $r^2|\Psi_{pair}(r)|^2$ as a function of the relative distance $r$ between the pair partners at the Fermi momenta $k_{Fn}$=1.1, 0.8, 0.5 and 0.2~fm$^{-1}$.} \label{fig05} \end{figure*} The neutron Cooper pair wave function $r^2|\Psi_{pair}(r)|^2$ is shown in Fig.~\ref{fig05} as a function of the relative distance $r$ between the pair partners for the Fermi momenta $k_{Fn}$=1.1, 0.8, 0.5 and 0.2~fm$^{-1}$, which correspond respectively to the densities $\rho_n/\rho_0$=0.3, 0.1, 0.03 and 0.002. Calculations in symmetric, asymmetric and neutron matter are shown in the left, middle and right panels, respectively. In Fig.~\ref{fig05}, we observe that the spatial extension and the profile of the Cooper pair vary strongly with the density. A large extension is found close to the saturation density at $k_{Fn}$=1.1~fm$^{-1}$. There, the wave function behaves as an oscillation modulated by a decaying exponential and approaches the well known limit $\sim K_0(r/\pi\xi_P)\sin(k_Fr)/k_Fr$~\cite{bcs57}. This indicates that the Cooper pair is in the weak coupling BCS regime. At lower densities, the Cooper pair shrinks and the oscillation disappears. The wave function now resembles the strong coupling limit (BEC) $\sim\exp(-\sqrt{4m/\hbar^2 \; |\mu|}r)/r$~\cite{PS03}. This is an indication that a possible BCS-BEC crossover may occur in uniform matter. \begin{figure}[b] \begin{center} \includegraphics[scale=0.36]{art_fig06.eps} \end{center} \caption{(Color online) Occupation probability $n_n(k)$ for symmetric matter defined by Eq.~(\ref{eq:nn}) as a function of the ratio of the single particle kinetic energy to the Fermi energy $\epsilon_n(k)/\epsilon_{Fn}$ for a set of Fermi momenta $k_{Fn}$= 1.1, 0.8, 0.5 and 0.2 fm$^{-1}$.
We compare the results of the bare interaction (left panel) with the screened-II interaction (right panel). The values of the effective neutron chemical potential $\nu_n/\epsilon_{Fn}$ are indicated by the filled circles.} \label{fig06} \end{figure} It should be remarked that the latter limit seems particularly pronounced in symmetric matter with the screened gap (see the bottom-left panel of Fig.~\ref{fig05}). We show in Fig.~\ref{fig06} the evolution of the occupation probability~(\ref{eq:nn}) in symmetric matter for the two pairing interactions. For the screened-II interaction, the pairing correlations become strong at low densities, as the occupation probability differs considerably from the step function. In the case of the bare interaction, the correlations are not strong enough to change $n_n(k)$ drastically, even at low densities. It should be noticed that this analysis is independent of the detailed structure of the Cooper pair wave function. This change of the occupation probability shows that the behavior of the Cooper pair wave function is not an artifact induced by the zero range character of the contact interaction but is indeed physical. It is clear that low density symmetric nuclear matter is much more correlated with the screened-II interaction than with the bare one. This is also the case for the BCS-BEC crossover, as will be discussed below. \begin{figure*}[t] \begin{center} \includegraphics[scale=0.5]{art_fig07.eps} \end{center} \caption{(Color online) Probability $P$ for the partner neutrons to be correlated within two typical lengths, 3 fm and $d_n$, as a function of the neutron Fermi momentum $k_{Fn}$ in symmetric (left panel), asymmetric (central panel) and neutron matter (right panel). The boundary of the BCS-BEC crossover is denoted by the dashed line.} \label{fig07} \end{figure*} Let us now discuss the BCS-BEC crossover, which may depend on the pairing interactions and also on the asymmetry of the nuclear medium.
In the following, we study the different order parameters of Table~\ref{tab5} for the boundaries of the BCS-BEC phase transition. Fig.~\ref{fig07} shows the probabilities $P(r)$ for the partner neutrons to be correlated within the typical scales $r$=3 fm and $r$=$d_n$. The former scale is the typical range of the nucleon-nucleon force. For the bare interaction, the probability $P(3~$fm$)$ behaves similarly in symmetric and asymmetric matter as a function of $k_{Fn}$. For the screened-II interaction, there is a noticeable isospin dependence. A low density shoulder appears in symmetric matter, at around $k_{Fn}\sim 0.25$~fm$^{-1}$ ($\rho_n/\rho_0\sim 0.003$). It becomes smaller as the asymmetry increases and eventually disappears in neutron matter. In neutron matter, the strong concentration of the pair wave function within the interaction range 3~fm, $P(3$~fm$)>0.5$, is realized in the density region $k_{Fn}\sim 0.3-1.1$~fm$^{-1}$ (or $\rho_n/\rho_0\sim 0.007-0.3$) for both pairing interactions. For symmetric matter, on the other hand, this region is different for the two pairing interactions: the strong correlation occurs at a much lower density for the screened-II interaction than for the bare one. This property can also be confirmed by the probability $P(d_n)$. For the two pairing interactions, the Cooper pairs in symmetric and asymmetric matter enter the crossover regime at almost the same density. The crossover in neutron matter occurs at somewhat lower density for the screened-II interaction. As the density decreases, a different behavior is observed between the two pairing interactions for symmetric matter. While the probability $P(d_n)$ decreases and goes back to the weak BCS regime for the bare interaction at very small density, below $k_{Fn}\sim 0.1$~fm$^{-1}$, the probability $P(d_n)$ continues to increase towards 1 for the screened-II interaction at very low densities, $k_{Fn}<0.7$~fm$^{-1}$ ($\rho_n/\rho_0<0.07$).
\begin{figure*}[p] \begin{center} \includegraphics[scale=0.48]{art_fig08.eps} \end{center} \caption{(Color online) Top panels: Comparison between the rms radius $\xi_{rms}$ of the neutron pair and the average inter-neutron distance $d_n=\rho_n^{-1/3}$ (thin line) as a function of the neutron Fermi momentum $k_{Fn}$ in symmetric (left panel), asymmetric (central panel) and neutron matter (right panel). Bottom panels: The order parameter $\xi_{rms}/d_n$ as a function of $k_{Fn}$. The boundaries of the BCS-BEC crossover are represented by the two dashed lines, while the unitary limit is shown by the dotted line. The two pairing interactions are used for the calculations.} \label{fig08} \end{figure*} \begin{figure*}[p] \begin{center} \includegraphics[scale=0.48]{art_fig09.eps} \end{center} \caption{(Color online) Ratios $\Delta_n/\epsilon_{Fn}$ and $\nu_n/\epsilon_{Fn}$ plotted as a function of the neutron Fermi momentum $k_{Fn}$ in symmetric (left panel), asymmetric (central panel) and neutron matter (right panel). The boundaries of the BCS-BEC crossover are shown by the two dashed lines, while the unitary limit is given by the dotted line. See the text for details.} \label{fig09} \end{figure*} We study the BCS-BEC crossover further by looking at the rms radius $\xi_{rms}$ and the neutron pairing gap $\Delta_n$. In Fig.~\ref{fig08}, we show the rms radius $\xi_{rms}$ as a function of the neutron Fermi momentum $k_{Fn}$, as well as the order parameter $\xi_{rms}/d_n$. For the bare interaction, the rms radius of the Cooper pair is less than 5~fm in the region $k_{Fn}\sim(0.4-0.9)$~fm$^{-1}$ ($\rho_n/\rho_0\sim 0.01-0.15$) in all three panels. The screened-II interaction acts differently in symmetric and neutron matter: it increases the rms radius in neutron matter, while in symmetric matter the rms radius stays small, around 4~fm, even at very low density, $k_{Fn}\sim 0.15$~fm$^{-1}$ ($\rho_n/\rho_0\sim 0.0007$).
The lower panels show the ratio of the rms radius to the average distance between neutrons $d_n$. For the bare interaction, the size of the Cooper pair generally becomes smaller than $d_n$ for Fermi momenta $k_{Fn}<0.8$~fm$^{-1}$ ($\rho_n/\rho_0\sim 0.1$). In the case of the screened-II interaction, there are substantial differences between neutron matter and matter containing protons: the crossover region shrinks in neutron matter, while it widens in asymmetric ($x_p=0.3$) and symmetric matter. In particular, the correlations become strong in symmetric matter and the Cooper pair almost reaches the BEC boundary at $k_{Fn}\sim 0.2$~fm$^{-1}$ ($\rho_n/\rho_0\sim 0.002$). Notice that the two-neutron system is known experimentally to have a virtual state in the zero density limit. We have shown that, according to the screened-II interaction, this virtual state could lead to a strongly correlated BEC state at low density in symmetric nuclear matter. The two other order parameters, $\Delta_n/\epsilon_{Fn}$ and $\nu_n/\epsilon_{Fn}$, are shown in Fig.~\ref{fig09}. These results confirm the BCS-BEC crossover behavior found in Fig.~\ref{fig08}. Namely, in symmetric matter, the gap $\Delta_n$ is much enhanced by the screened interaction in the low density region, while no enhancement is seen in neutron matter. As expected, the effective chemical potential $\nu_n$ induced by the screened-II interaction becomes negative for $k_{Fn}\sim 0.05-0.3$~fm$^{-1}$ ($\rho_n/\rho_0\sim 0.00002-0.01$) in symmetric nuclear matter. This strong correlation is reduced in asymmetric matter and is almost absent in neutron matter, as can be seen in Fig.~\ref{fig09}. It should also be remarked that the order parameter $\nu_n/\epsilon_{Fn}$, through the Schr\"odinger-like Eq.~(\ref{eq:sch}), gives the same BCS-BEC crossover behavior as the order parameter $\xi_{rms}/d_n$.
The neutron effective chemical potential is thus a good criterion for discussing the BCS-BEC crossover. \section{Conclusions} \label{sec:conclusions} A new type of density-dependent contact pairing interaction was obtained to reproduce the pairing gaps in symmetric and neutron matter given by a microscopic calculation~\cite{cao06}. The contact interactions reproduce the two types of pairing gaps, i.e., the gap calculated with the bare interaction and the gap modified by medium polarization effects. It is shown that the medium polarization effects cannot be cast into the usual density power law form in symmetric nuclear matter, so that a new isoscalar term $g_2$, Eq.~(\ref{eq:g2}), is added to the density dependent term of the pairing interaction in Eq.~(\ref{eq:pairing_interaction}). We have applied these density-dependent pairing interactions to the study of the BCS-BEC crossover phenomenon in symmetric and asymmetric nuclear matter. We found that the spatial di-neutron correlation is generally strong in a wide range of low matter densities, up to $k_{Fn}\sim 0.9$~fm$^{-1}$ ($\rho_n/\rho_0\sim 0.15$). This result is independent of the pairing interaction, whether bare or screened, as well as of the asymmetry of the uniform matter. Moreover, it is shown that the two pairing interactions lead to different features of the BCS-BEC phase transition in symmetric nuclear matter. To clarify the difference, we studied various order parameters, the correlation probability $P(d_n)$, the rms radius of the Cooper pair $\xi_{rms}$, the gap $\Delta_n$ and the effective chemical potential $\nu_n$, as a function of the Fermi momentum $k_{Fn}$, or equivalently of the density. The screened interaction enhances the BCS-BEC crossover phenomena in symmetric matter, while both the pairing correlations and the crossover phenomena are reduced in neutron matter by the medium polarization effects.
For the screened-II interaction, the crossover almost reaches the BEC phase at $k_{Fn}\sim 0.2$~fm$^{-1}$ in symmetric matter. We should notice, however, that this BEC state is very sensitive to the asymmetry of the medium and disappears in neutron matter. \acknowledgments We are grateful to M.~Matsuo, P.~Schuck and N.~Sandulescu for interesting discussions during the completion of this work. This work was supported by the Japanese Ministry of Education, Culture, Sports, Science and Technology through the Grant-in-Aid for Scientific Research under program number 19740115.
\section{Introduction} The major goal of statistical mechanics is to explain the macroscopic properties of complex systems in terms of a very small number of parameters by using probabilistic approaches. As is well known, its primary motivation was the study of the thermodynamical properties of matter based on the random behavior of its very large number of constituents (atoms and molecules). Almost a century after Boltzmann's seminal work \cite{Boltz}, such approaches were extended to dynamical systems theory \cite{Sinai,Bowen,Ruelle}, a branch that is currently known as thermodynamic formalism \cite{Ruelle}. In this scenario, the chaotic dynamics of an ensemble of trajectories plays the role of randomness in the many-body dynamics, even for one degree-of-freedom dynamical systems. The thermodynamic approach has proven to be a powerful tool in the ergodic theory of hyperbolic and expanding dynamical systems \cite{Ruelle}. Later, there has been growing interest, mostly by theoretical physicists, in extending this approach to more general dynamical systems (see \cite{BS} and references therein), particularly those that exhibit fractal sets \cite{FJP,KP,BJ,BP,GBP} or some kind of intermittent behavior \cite{STCK,Wang,Feign,MKHH,SH,Prell}. Here we deal with phase transitions for Pomeau-Manneville (PM) maps $x_{t+1}=f(x_{t})$ where $f$ takes the form \begin{eqnarray} \label{PMmap} f(x)=x(1+ax^{1/\alpha})\,\,\,\mbox{mod}\,1, \end{eqnarray} with $a > 0$ and $\alpha > 0$ \cite{PM}. The remarkable characteristic of such systems is their intermittent behavior due to the presence of the indifferent fixed point $x=0$, i.e., $f(0)=0$ and $f'(0)=1$. It is important to stress that the global form of $f$ far from $x=0$ is less relevant here.
For example, systems behaving like (\ref{PMmap}) on $[0,x_{*})$, where $f(x_{*})=1$, exhibit the same statistical behavior as (\ref{PMmap}) when the map on $[x_{*},1]$ is given by some well-behaved function $f_{1}$ such that $f_{1}(x_{*})=0$ and $f_{1}(1)=1$. Systems of the type (\ref{PMmap}) have a diverging invariant measure $\mu(x)$ near their indifferent fixed points for $0<\alpha<1$. More specifically, the invariant density $\omega(x)$ of map (\ref{PMmap}), where $d\mu(x)=\omega(x)dx$, behaves as \begin{eqnarray} \label{omeg} \omega(x)\sim bx^{-1/\alpha}, \end{eqnarray} near $x=0$ \cite{Thaler}. The diverging-measure regime of (\ref{omeg}) therefore leads to very slow laminar phases near $x=0$ alternating with fast turbulent ones elsewhere. Due to this peculiarity, the dynamics of the system (\ref{PMmap}) exhibits subexponential instability of the type $|\delta x_{t}|\sim|\delta x_{0}| \exp(\lambda_{\alpha} t^{\alpha})$ for $0<\alpha<1$ \cite{SV}. On the other hand, $\alpha>1$ leads to a finite invariant measure, which is naturally related to the usual chaos and ordinary Lyapunov exponents. It is important to point out here the connection between subexponential instability and the so-called ``sporadic randomness,'' a phenomenon initially studied by Gaspard and Wang \cite{GW}. These authors conjectured that the Kolmogorov-Chaitin algorithmic complexity $C_{t}$ for map (\ref{PMmap}) is proportional to the number of entrances $N_{t}$ into a given phase space cell after a large number of iterations. Under this assumption, recently confirmed in \cite{SV} by means of a Pesin-type identity, the statistics of $N_{t}$ is governed by non-Gaussian fluctuations involving Feller's renewal results \cite{Feller}. Subsequently, thermodynamic phase transitions of the PM map (\ref{PMmap}) for $0<\alpha<2$ were studied by Wang \cite{Wang} employing the same approach as \cite{GW}.
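The intermittency is easy to see by direct iteration. The following sketch (a Python illustration, with parameter values of our own choosing: $a=1$, $\alpha=1/2$) iterates map~(\ref{PMmap}); an orbit injected close to the indifferent fixed point stays trapped in a long laminar phase, since near $x=0$ the increment per step is only $ax^{1+1/\alpha}$.

```python
import numpy as np

# Pomeau-Manneville map, Eq. (PMmap): x_{t+1} = x_t (1 + a x_t^(1/alpha)) mod 1.
# Near x = 0 the increment per step is a*x^(1+1/alpha), so orbits injected
# close to the indifferent fixed point escape only after very many steps.
def pm_orbit(x0, t, a=1.0, alpha=0.5):
    x = np.empty(t)
    x[0] = x0
    for k in range(1, t):
        x[k] = (x[k-1] * (1.0 + a * x[k-1]**(1.0 / alpha))) % 1.0
    return x

orbit = pm_orbit(0.01, 10000)   # starts deep inside the laminar region
```

For this choice the escape time from $x_0=0.01$ is of order $1/(2ax_0^2)\sim 5\times10^3$ steps, after which the orbit reaches the fast turbulent region; alternating phases of this kind underlie the renewal statistics of $N_t$ discussed above.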
It is also interesting to note that sporadic randomness has not only been verified in PM intermittent maps (e.g., \cite{SV}), but has also been suggested as a distinguishing feature in weather systems \cite{RPNEB}, noncoding DNA sequences \cite{LK}, and some linguistic texts \cite{EN}. The purpose of this work is twofold. First, we revisit the pioneering results of \cite{Wang}, but now from the point of view of infinite ergodic theory \cite{Aaronson} (for some applications, see also \cite{SV,PSV}). In this first part, some results involving phase transitions of the so-called topological pressure are considerably improved. The topological pressure can be interpreted as a free energy density associated with the ensemble of trajectories. We also discuss the phase transition related to the R\'enyi entropy, extending the results observed in \cite{STCK} to the diverging measure (nonergodic) regime of the PM map (\ref{PMmap}). Finally, we show the connection between Feller's sporadic statistics and the infinite ergodic theory. Second, this work aims at understanding the phase transition problem from a dynamical point of view, since the singular behavior of thermodynamic quantities does not tell everything about the dynamical characteristics of a system. The approach employed here also shows precisely what happens in each phase, particularly in the subexponential regime of map (\ref{PMmap}). \section{Topological Pressure} In the thermodynamic formalism, systems of the type (\ref{PMmap}) exhibit a continuous phase transition, a situation where thermodynamic quantities vary continuously but not analytically when some external parameter of the system is changed.
The paradigmatic example in the usual statistical mechanics is the ferromagnetic material at zero external magnetic field: Ferromagnets lose their spontaneous magnetization when heated above a specific critical temperature $T_{c}$ and the derivative of the magnetization with respect to magnetic field (susceptibility) diverges at $T_{c}$ and zero field. As shown in \cite{Wang}, the many-body model that most closely resembles (\ref{PMmap}) is the Fisher-Felderhof droplet model of condensation \cite{FF}. Let us first consider the topological pressure $P(\beta)$, a kind of negative Helmholtz free energy of thermodynamic formalism, defined as \cite{BS} \begin{eqnarray} \label{Prestop} P(\beta)=\lim_{t\rightarrow\infty}\frac{1}{t}\ln Z_{t}(\beta), \end{eqnarray} with the corresponding partition function $Z_{t}(\beta)$ given by \begin{eqnarray} \label{partic} Z_{t}(\beta)=\sum_{\left\{x_{i}\right\}}\exp\left[-\beta\sum_{k=0}^{t-1}\ln|f'(f^{k}(x_{i}))|\right]. \end{eqnarray} The set of points $\left\{x_{i}\right\}$ in Eq. (\ref{partic}) is chosen as follows. First, consider a partition of phase space into disjoint boxes $\Delta_{i}$ so that transitions between nearest-neighbor configurations $(i,i')$ are possible, i.e., $f(\Delta_{i})\cap\Delta_{i'}\neq\emptyset$. For each allowed sequence $i_{0},\ldots,i_{t-1}$, there is a subset $\Delta_{x}(i_{0},\ldots,i_{t-1})$ of phase space defined by \begin{eqnarray} \label{Delta} \Delta_{x}(i_{0},\ldots,i_{t-1})=\left\{x:f^{k}(x)\in\Delta_{i_{k}},k=0,\ldots,t-1\right\}.\nonumber\\ \end{eqnarray} The size of subsets $\Delta_{i_{k}}$ goes to zero as $t\rightarrow\infty$. Then, for very large but still finite $t$, we pick a representative point $x_{i}$, one from each subset, and collect them as the set $\left\{x_{i}\right\}$. It is important to stress, however, that the analytical determination of this set is not usually a practical task. 
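For a full-branch piecewise-linear map the construction above can be carried out explicitly. The sketch below (our Python illustration, not tied to map (\ref{PMmap})) enumerates one representative point per $t$-cylinder of the doubling map $f(x)=2x \bmod 1$ and evaluates Eqs.~(\ref{Prestop}) and (\ref{partic}) directly; since $|f'|=2$ everywhere, the result is $P(\beta)=(1-\beta)\ln 2$.

```python
import numpy as np

# Topological pressure, Eqs. (Prestop)-(partic), for a full-branch expanding
# map with `branches` monotone pieces of constant slope.  For such maps the
# t-cylinders are the b-adic intervals, so their midpoints serve as the
# representative points {x_i}.
def pressure(f, fprime, branches, beta, t=10):
    reps = (np.arange(branches**t) + 0.5) / branches**t
    log_weight = np.zeros_like(reps)
    x = reps.copy()
    for _ in range(t):                 # Birkhoff sum of -beta * ln|f'|
        log_weight -= beta * np.log(np.abs(fprime(x)))
        x = f(x)
    return np.log(np.exp(log_weight).sum()) / t

# doubling map: |f'| = 2 everywhere, hence P(beta) = (1 - beta) ln 2
f = lambda x: (2.0 * x) % 1.0
fp = lambda x: 2.0 * np.ones_like(x)
```

For nonlinear maps the cylinder representatives are of course no longer equispaced, which is exactly the practical difficulty mentioned in the text.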
We can circumvent this problem by replacing the summation over the representative points $x_{i}$, one per cylinder of size $|\Delta|$, by an integration over the initial conditions ($h$ is an arbitrary function) \begin{eqnarray} \label{sumint} \sum_{\left\{x_{i}\right\}}h(x_{i})\sim\int_{x\in[0,1]}d\sigma(x)|\Delta_{x}(i_{0},\ldots,i_{t-1})|^{-1}h(x), \end{eqnarray} where $\sigma(x)$ represents the measure of the initial condition $x$ and $|\Delta|$ denotes the Lebesgue measure of $\Delta$. As $t\rightarrow\infty$ we have the following property \cite{EcP} \begin{eqnarray} \label{Leb} |\Delta_{x}(i_{0},\ldots,i_{t-1})|=|\Delta_{f(x)}(i_{1},\ldots,i_{t-1})||f'(x)|^{-1}, \end{eqnarray} since $f$ maps the cylinder $\Delta_{x}(i_{0},\ldots,i_{t-1})$ onto $\Delta_{f(x)}(i_{1},\ldots,i_{t-1})$ with local expansion rate $|f'(x)|$. Iterating, we can replace $i_{1}$ by $i_{2}$ and $|f'(x)|^{-1}$ by $|f'[f(x)]|^{-1}|f'(x)|^{-1}$, and so forth, leading to \begin{eqnarray} \label{Lebdel} |\Delta_{x}(i_{0},\ldots,i_{t-1})|=\prod_{k=0}^{t-1}|f'[f^{k}(x)]|^{-1}. \end{eqnarray} Finally, we can rewrite Eq. (\ref{partic}) by means of Eqs. (\ref{sumint}) and (\ref{Lebdel}), yielding \begin{eqnarray} \label{Zint} Z_{t}(\beta)\sim\int d\sigma(x)\exp\left[(1-\beta)\sum_{k=0}^{t-1}\ln|f'[f^{k}(x)]|\right]. \end{eqnarray} Before attempting to estimate Eq. (\ref{Zint}) we will make use of the Aaronson-Darling-Kac (ADK) theorem \cite{Aaronson}, which is precisely applicable to PM systems of type (\ref{PMmap}). For such systems, this theorem ensures that, for a positive function $\vartheta$ integrable over $\mu$ and an arbitrary measure $\sigma$ of initial conditions absolutely continuous with respect to the Lebesgue measure, we have \begin{eqnarray} \label{ADK} \frac{1}{t^{\gamma}}\sum_{k=0}^{t-1}\vartheta[f^{k}(x)]\stackrel{d}{\rightarrow}\xi_{\gamma}c_{\gamma}(t)\int\vartheta d\mu, \end{eqnarray} as $t\rightarrow\infty$, where $\xi_{\gamma}$ is a non-negative Mittag-Leffler random variable of index $\gamma\in(0,1]$ and expected value $E(\xi_{\gamma})=1$.
The corresponding Mittag-Leffler probability density function $\rho_{\gamma}(\xi)$ is given by \cite{Feller,SVml} \begin{eqnarray} \label{ML} \rho_{\gamma}(\xi)=\frac{\Gamma^{1/\gamma}(1+\gamma)}{\gamma\xi^{1+1/\gamma}}g_{\gamma}\left[\frac{\Gamma^{1/\gamma}(1+\gamma)}{\xi^{1/\gamma}}\right], \end{eqnarray} where $g_{\gamma}$ stands for the one-sided L\'evy stable density, whose Laplace transform is $\tilde{g}(u)=\exp(-u^{\gamma})$ (see \cite{SVml,PG} for a detailed discussion). For PM maps of the type (\ref{PMmap}), the index $\gamma$ is \begin{eqnarray} \label{gam} \gamma = \left\{ \begin{array}{ll} \displaystyle \alpha, & 0<\alpha<\displaystyle1,\\ \displaystyle 1, & \displaystyle\alpha\geq1, \end{array} \right. \end{eqnarray} whereas the coefficient $c_{\gamma}(t)$ in Eq. (\ref{ADK}) takes the asymptotic form \cite{RZ} \begin{eqnarray} \label{ct} c_{\gamma}(t)\sim\left\{ \begin{array}{ll} \displaystyle \frac{1}{ba}\left(\frac{a}{\alpha}\right)^{\alpha}\frac{\sin(\pi\alpha)}{\pi\alpha}, & 0<\alpha<\displaystyle1,\\ \displaystyle (b\ln t)^{-1}, & \displaystyle\alpha=1,\\ \displaystyle 1, & \displaystyle\alpha>1, \end{array} \right. \end{eqnarray} as $t\rightarrow\infty$, recalling that $b=\lim_{x\rightarrow0}x^{1/\alpha}\omega(x)$. For $\alpha>1$ we have introduced the Birkhoff ergodic case $\gamma=1$, for which the corresponding Mittag-Leffler density reduces to $\rho_{1}(\xi)=\delta(1-\xi)$, as in the $\alpha=1$ case. Evidently, we can choose $\vartheta=\ln|f'|$ in the ADK formula (\ref{ADK}). Consider now the algorithmic complexity $C_{t}$ of PM map (\ref{PMmap}), valid for all $\alpha>0$ \cite{SV}: \begin{eqnarray} \label{Comp} C_{t}(x)\sim\sum_{k=0}^{t-1}\ln|f'[f^{k}(x)]|, \end{eqnarray} as $t\rightarrow\infty$. 
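The Laplace transform of the normalized Mittag-Leffler density~(\ref{ML}) is the Mittag-Leffler special function, which can be evaluated directly from its power series $\sum_n [\Gamma(1+\gamma)u]^n/\Gamma(1+n\gamma)$. A sketch (our Python illustration) with two sanity checks: for $\gamma=1$ the series sums to $e^{u}$, consistent with $\rho_1(\xi)=\delta(1-\xi)$, while for $\gamma=1/2$ it matches the known closed form $e^{z^{2}}\,{\rm erfc}(-z)$ with $z=\Gamma(3/2)u$.

```python
import numpy as np
from scipy.special import gamma, erfc

# Laplace transform of the normalized Mittag-Leffler density rho_gamma:
# E_gamma(u) = sum_{n>=0} [Gamma(1+gamma) u]^n / Gamma(1 + n*gamma)
def ml_laplace(u, g, nmax=150):
    n = np.arange(nmax)
    return np.sum((gamma(1.0 + g) * u) ** n / gamma(1.0 + g * n))

# gamma = 1: density is delta(1 - xi), so the transform is exp(u)
e1 = ml_laplace(1.0, 1.0)

# gamma = 1/2: closed form e^{z^2} erfc(-z) with z = Gamma(3/2) * u
z = gamma(1.5) * 1.0
e_half = ml_laplace(1.0, 0.5)
```

The truncation order is harmless here because $\Gamma(1+n\gamma)$ grows faster than any power of the argument for the moderate $u$ relevant near $\beta=1$.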
Equations (\ref{ADK}) and (\ref{ct}) lead to (see also \cite{SV}) \begin{eqnarray} \label{CompML} \frac{C_{t}}{\left\langle C_{t}\right\rangle}\stackrel{d}{\rightarrow}\xi_{\gamma}, \end{eqnarray} as $t\rightarrow\infty$, where the ADK average $\left\langle C_{t}\right\rangle=h_{\gamma}t^{\gamma}$ is given in terms of the average of the generalized Kolmogorov-Sinai entropy \cite{SV} \begin{eqnarray} \label{hgam} h_{\gamma}=c_{\gamma}\int d\mu\ln|f'|. \end{eqnarray} Going back to Eq. (\ref{Zint}), we can overcome the integration problem over arbitrary $\sigma$ by considering it absolutely continuous with respect to the Lebesgue measure. Such a condition is broad enough to ensure that our results involving phase transitions typically do not depend on the distribution of initial conditions. This is somewhat surprising in the case of nonergodic regimes, i.e., $0<\alpha<1$ in the present case. After applying Eqs. (\ref{CompML}) and (\ref{hgam}) in the ADK formula (\ref{ADK}), we have \begin{eqnarray} \label{zml} Z_{t}(\beta)\sim\int_{0}^{\infty}d\xi\rho_{\gamma}(\xi)\exp[-(\beta-1)h_{\gamma}t^{\gamma}\xi]. \end{eqnarray} Note that Eq. (\ref{zml}) is just the Laplace transform of $\rho_{\gamma}$, which is given by the Mittag-Leffler special function $E_{\gamma}(u)$, namely \cite{SVml} \begin{eqnarray} \label{MLsf} \tilde{\rho_{\gamma}}(u)=E_{\gamma}(u)=\sum_{n=0}^{\infty}\frac{[\Gamma(1+\gamma)u]^{n}}{\Gamma(1+n\gamma)}, \end{eqnarray} with $u=(1-\beta)h_{\gamma}t^{\gamma}$. Now, considering the asymptotes $E_{\gamma}(u)\sim\gamma^{-1}\exp(u^{1/\gamma})$ as $u\rightarrow\infty$ \cite{ML} and $E_{\gamma}(u)\sim0$ as $u\rightarrow-\infty$ \cite{note}, we have finally, for all $\alpha>0$ and $\beta$ near $1$, \begin{eqnarray} \label{PTph} P(\beta) \sim \left\{ \begin{array}{ll} \displaystyle [h_{\gamma}(1-\beta)]^{1/\gamma}, & \displaystyle \beta<1,\\ \displaystyle 0, & \displaystyle\beta\geq1, \end{array} \right.
\end{eqnarray} observing that $h_{\gamma}=0$ ($c_{\gamma}\rightarrow0$) for $\alpha=1$. Note that Eq. (\ref{PTph}) is in accordance with the results first obtained in \cite{Wang} for $0<\alpha<2$ and later extended for all $\alpha>0$ in \cite{Prell}. It is noteworthy here that, unlike these approaches, the prefactor $h_{\gamma}$ in Eq. (\ref{PTph}) is obtained exactly, given by Eq. (\ref{hgam}) for all $\alpha>0$. \section{R\'enyi Entropy} Let us consider now the phase transition related to the R\'enyi entropy \cite{BS}: \begin{eqnarray} \label{Renk} K(\beta)=\frac{1}{1-\beta}\lim_{t \rightarrow\infty}\frac{1}{t}\ln\sum_{j=0}^{t-1}\left(p_{j}^{(t)}\right)^{\beta}, \end{eqnarray} where $p_{j}^{(t)}=p_{j}(i_{0}, \ldots, i_{t-1})$ usually denotes the probability that a randomly chosen initial condition ($\sigma$ distributed) on the phase space falls into $\Delta_{j}$ at time $t-1$. In view of the fact that we are also dealing with nonergodic regimes ($0<\alpha<1$), we set $p_{j}^{(t)}$ such that \begin{eqnarray} \label{pbet} p_{j}^{(t)}(q)=\frac{\tau_{j}^{q}}{\sum_{k=0}^{t-1}\tau_{k}^{q}}, \end{eqnarray} where $p_{j}^{(t)}(q=1)=p_{j}^{(t)}$. In Eq. (\ref{pbet}) we consider the amount of time $\tau_{j}$ spent in state $\Delta_{j}$ instead of its length $|\Delta_{j}|$, which is usually considered for ergodic systems (for which $\tau_{j}\propto|\Delta_{j}|$). Then the partition function $Z_{t}$ takes the asymptotic form \begin{eqnarray} \label{ztq} Z_{t}(q)\sim\exp[tP(q)]\sim\sum_{k=0}^{t-1}\tau_{k}^{q}. \end{eqnarray} Recalling that $\sum_{j}\left[p_{j}^{(t)}(q)\right]^{\beta}\sim\exp[(1-\beta)K(\beta,q)t]$, we have for $q=1$ \begin{eqnarray} \label{kpb} K(\beta)=\frac{P(\beta)-\beta P(1)}{1-\beta}, \end{eqnarray} also valid for ergodic systems \cite{BS}. 
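The step leading to Eq.~(\ref{kpb}) is short enough to spell out. Setting $q=1$ in Eq.~(\ref{pbet}) and using the asymptotic form~(\ref{ztq}),

```latex
\sum_{j=0}^{t-1}\left(p_{j}^{(t)}\right)^{\beta}
  =\frac{\sum_{j}\tau_{j}^{\beta}}{\big(\sum_{k}\tau_{k}\big)^{\beta}}
  =\frac{Z_{t}(\beta)}{Z_{t}(1)^{\beta}}
  \sim\exp\left\{t\left[P(\beta)-\beta P(1)\right]\right\},
```

and comparing with $\sum_{j}\big(p_{j}^{(t)}\big)^{\beta}\sim\exp[(1-\beta)K(\beta)t]$ immediately gives Eq.~(\ref{kpb}).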
From the topological pressure (\ref{PTph}) we then have \begin{eqnarray} \label{Reny} K(\beta) \sim \left\{ \begin{array}{ll} \displaystyle h_{\gamma}^{1/\gamma}(1-\beta)^{-1+1/\gamma}, & \displaystyle \beta<1,\\ \displaystyle 0, & \displaystyle\beta\geq1. \end{array} \right. \end{eqnarray} Note that for $\gamma=1$, i.e., ergodic regimes, $K(\beta)=h_{KS}$ for $\beta<1$, where $h_{1}=h_{KS}$ is the Kolmogorov-Sinai entropy, whereas $K(\beta)=0$ for $\beta\geq1$. Therefore, Eq. (\ref{Reny}) extends to nonergodic regimes ($0<\alpha<1$) the nonanalytic behavior of the R\'enyi entropy at $\beta=1$ observed in \cite{STCK}. \section{Algorithmic Complexity Satisfies the ADK Theorem} In \cite{Wang}, as well as in \cite{GW}, the algorithmic complexity $C_{t}$ of a piecewise version of the PM map was considered as the random number of entrances $N_{t}$ into a given phase space cell ($A_{0}$) during $t$ iterations of the map, i.e., $C_{t}\sim N_{t}$. The statistics $p_{\alpha}$ of $N_{t}$ employed there are well known from Feller's renewal theorems \cite{Feller} and were applied in the estimation of $P(\beta)$. The agreement with Eq. (\ref{PTph}) can be understood by observing that $p_{\alpha}$ is, in fact, a Mittag-Leffler probability density function. The statistics of $N_{t}$ for the case $0<\alpha<1$ are given by \cite{Feller} \begin{eqnarray} \label{pg1} P_{\alpha}\left(N_{t}\geq c_{1}\frac{t^{\alpha}}{q^{\alpha}}\right)\sim G_{\alpha}(q), \end{eqnarray} as $t\rightarrow\infty$, where $P_{\alpha}$ and $G_{\alpha}$ stand for the cumulative distribution functions of $p_{\alpha}$ and $g_{\alpha}$, respectively.
Applying the change of variable $q=r\xi^{-1/\alpha}$, with $r^{\alpha}=\alpha\Gamma(\alpha)$ \cite{SVml}, and after introducing the normalized random variable $\xi=N_{t}/\left\langle N_{t}\right\rangle$, we have \begin{eqnarray} \label{pg2} p_{\alpha}(N_{t})dN_{t}\sim\rho_{\alpha}(\xi)d\xi, \end{eqnarray} as $t\rightarrow\infty$, where $\left\langle N_{t}\right\rangle=c_{1}t^{\alpha}/\alpha\Gamma(\alpha)$ and $\rho_{\alpha}$ is the Mittag-Leffler density (\ref{ML}). For the case $\alpha>1$, but different from $2$, we have \cite{Feller} \begin{eqnarray} \label{pg3} P_{\alpha}\left(N_{t}\geq c_{2}t-c_{3}t^{1/\kappa}q\right)\sim G_{\kappa}(q), \end{eqnarray} where $\kappa=\alpha$ for $1<\alpha<2$ and $\kappa=2$ for $\alpha>2$. Now $G_{\kappa}$ stands for the cumulative distribution function of the two-sided stable density $g_{\kappa}$ \cite{Wang}. We can consider the same normalized variable $\xi$ of Eq. (\ref{pg2}), but now with $\left\langle N_{t}\right\rangle=c_{2}t$. Then Eq. (\ref{pg3}) becomes \begin{eqnarray} \label{pg4} p_{\alpha}(N_{t})dN_{t}\sim\frac{1}{\epsilon}g_{\kappa}\left(\frac{1-\xi}{\epsilon}\right)d\xi, \end{eqnarray} where $\epsilon=(c_{3}/c_{2})t^{-1+1/\kappa}$ goes to $0$ as $t\rightarrow\infty$. This leads to $\delta(1-\xi)d\xi$ on the right-hand side of Eq. (\ref{pg4}). In the Gaussian case $\kappa=2$, $\epsilon^{2}$ is proportional to the variance, which also reduces Eq. (\ref{pg4}) to the same Dirac $\delta$ function. The same occurs for $\alpha=2$, where Eq. (\ref{pg4}) also holds with $\kappa=2$ after replacing $c_{3}$ by $c_{3}\sqrt{\ln t}$, again leading to $\epsilon\rightarrow0$ as $t\rightarrow\infty$. \section{Final Remarks} We revisit here the problem of thermodynamic phase transitions for PM maps (\ref{PMmap}) by using infinite ergodic theory, in particular the ADK theorem.
The topological pressure $P(\beta)$ and R\'enyi entropy $K(\beta)$ are calculated exactly for such systems, both exhibiting phase transitions at the same critical value $\beta_{c}=1$. Our results also shed some light on the role of the measure of initial conditions $\sigma$ in the calculation of these thermodynamic functions. Such quantities are invariant under the choice of $\sigma$, provided it is absolutely continuous with respect to the Lebesgue measure. This result is somewhat surprising in the case of the nonergodic regime of the PM map (\ref{PMmap}), showing once more the strength of the ADK theorem. From a dynamical point of view, the thermodynamic formalism allows us to obtain important quantities that characterize nonlinear systems. We can mention, for instance, the Pesin formula relating the Lyapunov exponent $\Lambda$ to the Kolmogorov-Sinai entropy $h_{1}=h_{KS}$, namely $h_{KS}=[P(\beta)-P'(\beta)]_{\beta\rightarrow1_{-}}=\Lambda$ \cite{BS}. For nonergodic regimes $0<\alpha<1$, however, the topological pressure (\ref{PTph}) gives us the trivial relation $h_{KS}=\Lambda=0$. In fact, the dynamic instability is stretched exponential for such cases, of the form $|\delta x_{t}|\sim|\delta x_{0}| \exp(\lambda_{\alpha} t^{\alpha})$, rather than exponential ($\alpha=1$). For such cases we can consider, once more, the infinite ergodic theory approach. It has recently been shown that the Pesin relation can be extended in a nontrivial way provided one introduces a convenient subexponential generalization of the Lyapunov exponent and Kolmogorov-Sinai entropy \cite{SV}. Moreover, the generalizations of such quantities behave like Mittag-Leffler random variables with $h_{\alpha}$ as the first moment \cite{SV,PSV}. A quest for new constitutive relations involving $P(\beta)$ that lead directly to these results in the thermodynamic formalism probably deserves further investigation. \acknowledgments The author thanks A. Saa for enlightening discussions.
This work was supported by the Brazilian agency CNPq.
\section{Introduction} Bitcoin is widely used, with a current market capitalization of over 120 billion USD. Given the massive level of economic activity and the potential for future growth, it is natural to ask: how confident can we be in the security and reliability of this system and its variants? While practical experience suggests that Bitcoin and related systems are robust against various kinds of misuse, their security rests on a combination of features whose interactions are not fully understood. For example, could advances in computing change the balance of power between honest miners and adversaries? Initial studies of blockchain security \cite{garay_bitcoin_2015, li_continuous-time_2020} have proved some fundamental relationships involving security parameters and the probabilities of desired properties. In this paper, we develop a formal model based on the \emph{Bitcoin Backbone Protocol} abstraction \cite{garay_bitcoin_2015} and use a statistical model checking tool (UPPAAL-SMC) to study its security. We focus on how the properties of the backbone protocol vary as a function of concrete parameters, in a network where an adversary is capable of selfish mining (delaying the release of malicious blocks). The main contributions of this paper are: \begin{itemize}[leftmargin=0.25cm] \item We demonstrate a way to model the backbone protocol in the presence of a selfish-mining adversary. \item We quantitatively analyze a concrete trade-off between different security properties, based on how honest miners act on receiving new blocks. \item We demonstrate how the failure rate of the backbone properties varies with different values of $f$, where $f$ is the probability that at least one honest party mines a block in a round, a parameter that differs across cryptocurrencies. \end{itemize} The paper is divided into the following sections. In Section 2, we provide an overview of the Bitcoin Backbone Protocol, which is the basis for our UPPAAL-SMC model.
In Section 3 we describe how we model the backbone protocol using the tool. In Section 4 we describe our results. Finally, Section 5 provides an overview of related work, Section 6 outlines potential future work, and Section 7 concludes. \section{Overview of the Backbone Protocol} The Bitcoin Backbone Protocol was introduced in \cite{garay_bitcoin_2015} and subsequently improved in \cite{journals/iacr/GarayKL16, li_continuous-time_2020}. The aim of the backbone protocol is to analyze the core mechanisms of proof-of-work consensus protocols and provide more detailed security guarantees than those provided by Nakamoto's whitepaper \cite{nakamoto2008bitcoin}. The protocol captures key elements of Bitcoin and related Proof-of-Work (PoW) consensus protocols that are used in other blockchains. The protocol represents time as a series of discrete \textit{rounds}, each short enough so that the probability of any party completing the work needed to write a new block is low. Each round proceeds as follows: \begin{enumerate} \item \textit{Start:} Each miner starts the round with a preferred current chain. \item \textit{Check:} Each miner begins the round by checking for new chains. \item \textit{Adopt:} Each honest miner adopts the best chain visible to it, using a selection criterion defined by the blockchain. \item \textit{Mine:} Each miner queries a cryptographic hash function. If they probabilistically succeed with proof of work in this round, they append a block to their chain. \item \textit{Broadcast:} Any miner that modifies its local chain will broadcast its new chain to other parties. \end{enumerate} In modeling the backbone protocol, a party may be designated as honest or adversarial. Honest parties will immediately share blocks they find, and select their chain based on the protocol’s designated chain selection algorithm. They will not deviate from the steps outlined above. The protocol adversary represents a possible coalition of malicious miners.
The adversary is therefore able to query the cryptographic hash function and produce blocks, with a success probability per round that may differ from that of a single honest miner. In addition, a possible network advantage of the adversary is represented by allowing the adversary to inject messages into any miner's input channel and reorder any input channel at will.\footnote{We use ``input channel'' to mean the same thing as ``RECEIVE()'' from \cite{garay_bitcoin_2015}.} The adversary can also select a preferred chain arbitrarily (rather than using the honest selection criterion given in step 3 above) and withhold blocks in order to transmit them at a later round. In \cite{garay_bitcoin_2015}, the authors make certain assumptions about the system: \begin{itemize}[leftmargin=0.25cm] \item The protocol is executed by a fixed number of parties \textit{n}. \item Parties do not know the source of messages. \item All messages are delivered by the end of a round.\footnote{This assumption is for the synchronous model of \cite{garay_bitcoin_2015}.} \item All parties involved are allowed the same number of queries to a cryptographic hash function, for the PoW computation. \end{itemize} The protocol parameters from \cite{garay_bitcoin_2015} that are relevant to our work are shown in Figure \ref{fig:parameters}; the reader may consult \cite{garay_bitcoin_2015} for further detail. \begin{figure}[h!]
\centering \fbox{\begin{minipage}{0.5\textwidth} \scriptsize $\lambda$: security parameter\\ \textit{n}: number of parties \\ \textit{t}: number of parties controlled by the adversary\\ \textit{f}: probability at least one honest party finds a PoW in a round\\ $\epsilon$: concentration of random variables in typical executions\\ $\mu$: proportion of blocks in honest chains that were mined by honest parties\\ \textit{k}: number of blocks for common prefix\\ $\ell$: number of blocks for chain quality \\ \textit{s}: rounds for chain growth property \\ $\tau:$ chain growth parameter \end{minipage}} \caption{Positive integers \textit{n}, \textit{t}, \textit{s}, $\ell$, \textit{k}; positive reals \textit{f}, $\epsilon, \mu, \tau, \lambda$, with $\textit{f}, \epsilon, \mu \in (0,1).$ } \label{fig:parameters} \end{figure} If the modeling assumptions listed above hold and the majority of parties are honest, the behavior of the protocol can be described using the concept of typical execution, introduced in \cite{garay_bitcoin_2015}. By definition, a \textit{typical execution} is a sequence of rounds in which the random variables are close to their expected values. A straightforward calculation shows that a typical execution occurs with probability $1-e^{-\Omega({\epsilon}^2\lambda\textit{f})}$. The desired properties of the backbone protocol are: \begin{itemize}[leftmargin=0.25cm] \item \textit{Chain Quality:} Chain quality is the proportion of honest blocks in the chain of an honest participant. A subsection of $\ell$ blocks in an honest chain will have at least $\mu\ell$ blocks that were mined by honest parties. Note that $\ell \geq 2\lambda\textit{f}$ for provable security. \item \textit{Common Prefix:} The common prefix property holds if honest parties that prune $k$ blocks from their chain share a common view with another honest party. In a typical execution the common prefix holds for $k \geq 2\lambda\textit{f}$.
\item \textit{Chain Growth:} The chain growth property measures how quickly the chains of honest parties grow. Given that $\tau$ = (1 - $\epsilon$)\textit{f}, honest chains grow at least as fast as $\tau$ $\cdot$ \textit{s} in a typical execution. Note that \textit{s} $\geq \lambda$. \end{itemize} The authors of \cite{garay_bitcoin_2015} show that if these properties hold, then persistence and liveness, which are crucial to the robustness of the system, also hold. Persistence states that once a transaction reported by an honest party becomes ‘deep enough’ in a blockchain, it is present in the chain of every other honest party at the same position. Liveness states that any transaction that comes from an honest party, and is provided to all other honest parties, will be inserted into all honest ledgers. The authors note that persistence and liveness are not proof that Bitcoin meets all of its objectives, as the analysis assumes the number of parties is fixed and there is an honest majority. \cite{garay_bitcoin_2015} also provides a similar analysis for a network setting which is not highly synchronous, meaning there is an upper bound on the number of rounds a message takes to be delivered. We omit a discussion of this because it is beyond the scope of our study, but extending our analysis to this model remains a potential direction for future work. \section{Modeling the Backbone Protocol} In this section we describe our formalization of the backbone protocol for model checking. We first define the following for each participant in the protocol: \begin{itemize}[leftmargin=0.25cm] \item Input channel: Chains from other participants will be sent to a participant's input channel. \item Output channel: When a participant successfully mines a block, it sends its newly mined block to its output channel, to be broadcast to the rest of the network. \end{itemize} We now present how honest and adversarial participants are modeled in the protocol.
Honest participants are modeled by Algorithm \ref{algo:honest_mining} (above), and the adversary is modeled by Algorithm \ref{algo:adv_mining} (above). \begin{algorithm}[t] \scriptsize \KwIn{Honest miner m} check input channel and update bestBlock[m] \If{Mining success} { create newBlock publish newBlock with newBlock $\mapsto$ bestBlock[m] set bestBlock[m] = newBlock } \caption{Honest Mining} \label{algo:honest_mining} \end{algorithm} \begin{algorithm}[t] \scriptsize \KwIn{Adversarial miner a} decide strategy and update arbitraryBlock[a] \If{advMiningSuccess} { create badBlock $\mapsto$ arbitraryBlock[a] publish badBlock or keep private } route published and private blocks to honest miners, at will \caption{Adversarial Mining} \label{algo:adv_mining} \end{algorithm} As shown in Algorithm \ref{algo:honest_mining}, $bestBlock$ is a global array that stores the head of each miner's blockchain. At the beginning of each round, an honest miner checks its input channel, checking its local chain against any chains published in the network. The miner then attempts to mine a block. If mining was successful, a new block will be created and published to the rest of the network. The miner will also update its local blockchain by appending its newly-mined block. In contrast to Algorithm \ref{algo:honest_mining}, Algorithm \ref{algo:adv_mining} allows an adversarial miner to decide the best strategy to use in the current round, and keep blocks private\footnote{Private blocks are not broadcast to other miners.}. In the first execution round, everyone adopts the genesis block. In all subsequent rounds, honest participants select their block by referring to their input channels. The adversary may select their block arbitrarily. Figure \ref{fig:broadcast} shows how the input and output channels are updated at each round. Assume \textit{A}, \textit{B}, and \textit{C} are honest miners and \textit{A} successfully mined a block in the most recent round.
Participant \textit{A} will send its block to its output channel. The block is then sent to the input channel of all other participants on the network. In the following round, the honest parties will check their input channels and use a selection algorithm to determine whether to adopt \textit{A's} block. \begin{figure}[t] \centering \includegraphics[width=0.5\textwidth, height = 5cm]{input_output.png} \caption{Participant \textit{A} broadcasts a block} \label{fig:broadcast} \end{figure} In the presence of an adversary, input channels can be manipulated. The adversary can stand by until all honest parties have completed mining. This allows the adversary to maximize the amount of information they can account for when deciding their propagation strategy. When all honest miners have completed a round, the adversary checks each participant's output channel for blocks and sets the order of blocks on each honest input channel (Figure \ref{fig:adversary}). By default, honest parties will adopt the first chain they receive when encountering chains of equal length. This allows the adversary to rearrange input channels to win head-to-head ties. The adversary also decides whether or not to share blocks they have mined with honest participants. \begin{figure}[t] \centering \includegraphics[width=0.5\textwidth]{setchain.png} \caption{Example of an adversary reorganizing an input channel. $B_H$ is a block mined by an honest party and $B_A$ was mined by the adversary.} \label{fig:adversary} \end{figure} At the end of this process, the output channels of each participant will include any blocks not sent to the input channels by the adversary, but mined by honest participants in the most recent round. At this point all parties have attempted to solve the PoW, and been informed of all blocks found in the most recent round. The participants continue the process above for a finite number of rounds.
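The round structure just described can be condensed into a toy Python sketch (ours, not the UPPAAL-SMC model): chains are reduced to integer lengths, mining to independent per-round coin flips, and the adversary's strategy to simple block withholding; all parameter names are hypothetical.

```python
import random

def run_rounds(n_honest=4, p_honest=0.005, alpha=0.33, rounds=2000, seed=1):
    """Toy sketch of the round loop: check/adopt, mine, then broadcast.
    Chains are just lengths; the adversary withholds blocks and releases
    its private chain only once it is strictly ahead (a simplification)."""
    random.seed(seed)
    honest = [0] * n_honest            # length of each honest miner's chain
    public = 0                         # longest published chain length
    private = 0                        # adversary's withheld chain length
    # adversary holds an alpha fraction of total mining power
    p_adv = p_honest * n_honest * alpha / (1 - alpha)
    for _ in range(rounds):
        # check/adopt: honest miners take the best published chain
        honest = [max(c, public) for c in honest]
        # mine: each party succeeds independently with small probability
        for i in range(n_honest):
            if random.random() < p_honest:
                honest[i] += 1
        if random.random() < p_adv:
            private = max(private, public) + 1
        # broadcast: honest blocks are published at the end of the round;
        # the adversary publishes its private chain only when it leads
        public = max(public, max(honest))
        if private > public:
            public, private = private, 0
    return public

print(run_rounds())
```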
\subsection{Differences from Backbone Protocol} We now review the key ways that our UPPAAL-SMC model differs from the backbone protocol. \begin{itemize}[leftmargin=0.25cm] \item Parties: We model the adversary as one party. The adversary has a mining power of $\alpha$ and the honest parties collectively have a mining power of (1-$\alpha$). This is equivalent to \cite{garay_bitcoin_2015}, where the adversary controls a subset of the parties. \item Transactions: We omit the inclusion of transactions in blocks. This does not affect the outcome of the backbone properties. \item Mining: Our mining process relies on probabilistic transitions to either a success or failure state. This avoids modeling hash functions. Our work does not account for honest mining pools or a single party mining multiple blocks in one round. \item Message Propagation: We assume that all messages will be delivered at the \textit{end} of a round. This means no blocks are sent or received during the mining phase of a round. \item Non-Determinism: Whereas the Bitcoin Backbone Protocol quantifies over all possible adversaries, we only model a selfish mining adversary. This is because UPPAAL-SMC does not handle non-determinism. \end{itemize} \section{Results and Analysis} We now present the results we obtained. UPPAAL-SMC simulates many runs of our system, and then computes the probability of a property holding. We use a 95\% confidence interval. This means that 95\% of the time the true probability of a failure event is within $\pm0.05$ of the value computed. \subsection{Fork Resolution Rules} In Bitcoin, when honest miners receive multiple longest chains, they adopt the one they received first. We refer to this fork resolution rule as the \say{First-Received Rule}. Under the first-received rule, a well-connected adversary can gain a large advantage. If they are able to route their blocks quickly, more parties will adopt their chain.
The assumption in \cite{garay_bitcoin_2015} is that all honest participants, except for one, will adopt the adversary's preferred block in the case of a fork. We also make this assumption. We begin by testing uniform tie breaking, an alternative where an honest party adopts one longest chain at random. This modification can limit the network advantage that an adversary can obtain. No matter the adversary's propagation strategy, uniform tie breaking will cause about 50\% of parties to adopt the block preferred by the adversary. For each resolution rule, we use our model to measure the probability of each backbone property failing. For the following experiments, we let $n = 8, \alpha = 0.33 , f = 0.02, \text{and } \mu =0.39$. Like \cite{garay_bitcoin_2015}, we assume $f \approx 0.02$ for Bitcoin. \begin{figure}[t] \centering \begin{tikzpicture} \begin{axis}[ title={}, xlabel={Blocks for chain quality ($\ell$)}, ylabel={Failure Rate}, xmin=25, xmax=175, ymin=0, ymax=1, xtick={40,70,100,130,160}, ytick={0,.20,.40,.60,.80,1}, legend pos= north east, ymajorgrids=true, grid style=dashed, width=0.5\textwidth, height = 5.2cm, ] \addplot+[smooth][ color=blue, mark=square, ] coordinates { (40, 0.50) (50, 0.32) (60, 0.23) (70, 0.13) (80, 0.12) (120, 0.05) (160, 0.05) }; \addplot+[smooth][ color=red, mark=triangle, ] coordinates { (40, 0.80) (50, 0.66) (60, 0.52) (70, 0.46) (80, 0.34) (110, 0.29) (120, 0.18) (144, 0.12) (160, 0.05) }; \legend{Uniform Tie Breaking, First-Received Rule } \end{axis} \end{tikzpicture} \caption{Failure rate of chain quality given two fork resolution rules} \label{cq_selection} \end{figure} In agreement with \cite{Sapirshtein2015OptimalSM}, we find that the failure rate of the chain quality property is consistently lower when uniform tie breaking is used (Figure \ref{cq_selection}). Still, the failure rate under both resolution rules decreases towards zero for large $\ell$. 
Large $\ell$ means a longer subsection of blocks is checked against the chain quality property, and prevents the adversary from breaking the property by mining more blocks than expected over a short time period. We extend the work of \cite{Sapirshtein2015OptimalSM} and test uniform tie breaking against the common prefix and chain growth properties. Our results show that uniform tie breaking, assuming the adversary follows the selfish mining strategy we implemented, has a negative impact on the common prefix of honest blockchains (Figure \ref{cp_selection}). \begin{figure}[t] \centering \begin{tikzpicture} \begin{axis}[ title={}, xlabel={Blocks for common prefix (\textit{k})}, ylabel={Failure Rate}, xmin=3.5, xmax=10.5, ymin=0, ymax=1, xtick={4,6,8,10,12,14}, ytick={0,.20,.40,.60,.80,1}, legend pos=north east, ymajorgrids=true, grid style=dashed, width=0.5\textwidth, height = 5.2cm, ] \addplot+[smooth][ color=blue, mark=square, ] coordinates { (4, 0.95) (5, 0.79) (6, 0.51) (7, 0.29) (8, 0.10) (10, 0.05) }; \addplot+[smooth][ color=red, mark=triangle, ] coordinates { (4, 0.10) (5, 0.05) (6, 0.05) (7, 0.05) (8, 0.05) (10, 0.05) }; \legend{Uniform Tie Breaking, First-Received Rule} \end{axis} \end{tikzpicture} \caption{Failure rate of common prefix given two fork resolution rules} \label{cp_selection} \end{figure} Under uniform tie breaking, the selfish mining strategy from \cite{garay_bitcoin_2015} is similar to the common prefix attack described in \cite{eprint-2015-26827}. Because uniform tie breaking roughly splits the honest participants onto two competing chains, our selfish mining attacker produces long forks with higher probability. Forks are caused when the adversary releases a private block or two honest parties find a block in the same round. The forks take longer to resolve, on average, because the parties equally mine on each branch. 
\begin{figure}[t] \centering \begin{tikzpicture} \begin{axis}[ title={}, xlabel={Expected blocks ($\tau \cdot$ \textit{s})}, ylabel={Failure Rate}, xmin=35, xmax=265, ymin=0, ymax=1, xtick={50,100,150,200,250}, ytick={0,.20,.40,.60,.80,1}, legend pos= north east, ymajorgrids=true, grid style=dashed, width=0.5\textwidth, height = 5.2cm, ] \addplot+[smooth][ color=blue, mark=square, ] coordinates { (19, 0.95) (38, 0.95) (57,0.89) (76,0.79) (95,0.70) (114,0.60) (133, 0.49) (152, 0.37) (171, 0.31) (190, 0.27) (228, 0.23) (247, 0.07) (266, 0.05) }; \addplot+[smooth][ color=red, mark=triangle, ] coordinates { (19, 0.95) (38, 0.95) (57,0.93) (76,0.87) (95,0.67) (114,0.59) (133, 0.48) (152, 0.40) (171, 0.29) (190, 0.28) (228, 0.25) (247, 0.09) (266, 0.05) }; \legend{Uniform Tie Breaking, First-Received Rule} \end{axis} \end{tikzpicture} \caption{Failure rate of chain growth given two fork resolution rules} \label{fig:cg_selection} \end{figure} We believe this is a practical implementation of a common prefix attack because it is not clear how an adversary could consistently force honest participants to equally mine on competing chains otherwise. Given that blocks are spread using a gossip protocol, equally dividing the network onto different chains would be difficult. In \cite{eprint-2015-26827}, the authors state that uniform tie breaking would not help against their common prefix attack. They did not, however, consider that the rule could make long-fork attacks easier to execute. Our adversary follows the selfish mining strategy from \cite{garay_bitcoin_2015} and lets uniform tie breaking do the rest. This suggests that protocol modifications to Bitcoin may have unintended consequences when only measured against one property of the backbone. The chain growth property is also used to compare each resolution rule. We find that over long periods the difference in performance between the two fork resolution rules with respect to chain growth is negligible (Figure \ref{fig:cg_selection}).
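The two fork resolution rules compared above differ only in how a miner chooses among equal-length chains on its input channel; a minimal sketch (our illustration, with hypothetical names) makes the trade-off concrete: an adversary who reorders channels wins every first-received tie, but only about half of the uniform ties.

```python
import random

def adopt(channel, rule, rng=random):
    """Choose among the longest chains on an input channel.
    `channel` lists chains in arrival order; each chain is a
    (length, miner_id) pair in this toy sketch."""
    best = max(length for length, _ in channel)
    tied = [c for c in channel if c[0] == best]
    if rule == "first-received":
        return tied[0]            # arrival order decides the tie
    if rule == "uniform":
        return rng.choice(tied)   # uniform tie breaking
    raise ValueError(rule)

rng = random.Random(0)
channel = [(5, "adversary"), (5, "honest")]  # adversary put itself first
assert adopt(channel, "first-received") == (5, "adversary")
wins = sum(adopt(channel, "uniform", rng) == (5, "adversary")
           for _ in range(10_000))
print(wins / 10_000)  # close to 0.5
```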
\subsection{Calibrating $f$ for security and speed} Recall that $f$ is the probability that at least one honest party finds a PoW solution in a round. The importance of calibrating $f$ is made clear by Garay et al.\ \cite{garay_bitcoin_2015}. If the PoW puzzle is too difficult ($f$ is too small) then chain growth suffers. Too few blocks are produced by honest parties, so liveness is hurt. If the PoW puzzle is too easy ($f$ is too large), then the common prefix suffers. There are not enough rounds where only one party finds a PoW solution, so persistence suffers. Fluctuations in $f$ can be caused by variations in the network's hash rate or propagation speed. To keep $f$ in a small range, Bitcoin applies a PoW difficulty adjustment every 2016 blocks. The difficulty of PoW is increased or decreased depending on the network’s hash rate. If the hash rate increases, then more blocks are produced within a round ($f$ increases). If the hash rate decreases, then fewer blocks are produced within a round ($f$ decreases). Bitcoin adjusts its difficulty so that blocks, on average, are produced every 10 minutes. Assuming a full round of propagation takes up to 20 seconds, adjustments keep $f$ between 2--3\% \cite{garay_bitcoin_2015}. Note that this only accounts for changes in the network’s hash rate. \begin{figure}[h!]
\centering \begin{tikzpicture} \begin{axis}[ title={}, xlabel={$\ell$}, ylabel={Failure Rate}, xmin=35, xmax=305, ymin=0, ymax=1, xtick={40, 80, 120, 160, 200, 240, 280}, ytick={0,.20,.40,.60,.80,1 }, legend pos=outer north east, ymajorgrids=true, grid style=dashed, width=0.9\linewidth, height = 5.5cm ] \addplot+[smooth][ color=blue, mark=square, ] coordinates { (40, 0.233) (60, 0.141) (80, 0.049) }; \addplot+[smooth][ color=red, mark=triangle, ] coordinates { (40, 0.947) (60, 0.640) (80, 0.446) (100,0.216) (120, 0.149) (140,0.079) }; \addplot+[smooth][ color=orange, mark=*, ] coordinates { (60, 0.938) (80, 0.645) (100,0.474) (120, 0.239) (140, 0.138) (160, 0.062) }; \addplot+[smooth][ color=teal, mark=oplus, ] coordinates { (80,0.901) (100,0.727) (120,0.525) (140,0.278) (160,0.183) (180,0.087) }; \addplot+[smooth][ color=pink, mark=*, ] coordinates { (100,0.924) (120,0.759) (140,0.422) (160,0.334) (180,0.249) (200,0.154) (220,0.075) }; \addplot+[smooth][ color=purple, mark=square* ] coordinates{ (120,0.951) (140,0.895) (160,0.704) (180,0.605) (200,0.462) (220,0.344) (240,0.243) (260,0.123) (280,0.098) (300,0.075) }; \legend{$f=0.01$, $f=0.08$, $f=0.15$, $f=0.30$, $f=0.50$, $f=0.80$} \end{axis} \end{tikzpicture} \caption{Failure rate of chain quality for various $f$} \label{cq_f} \end{figure} \begin{figure}[h!] 
\centering \begin{tikzpicture} \begin{axis}[ title={}, xlabel={$k$}, ylabel={Failure Rate}, xmin=1.8, xmax=12.2, ymin=0, ymax=1, xtick={2,4,6,8,10,12}, ytick={0,.20,.40,.60,.80,1}, legend pos=outer north east, ymajorgrids=true, grid style=dashed, width=0.9\linewidth, height=6cm ] \addplot+[smooth][ color=blue, mark=square, ] coordinates { (2, 0.921) (3, 0.272) (4, 0.050) (5, 0.049) (6, 0.049) (7, 0.049) (8, 0.049) (9, 0.049) (10, 0.049) (11, 0.049) (12, 0.049) }; \addplot+[smooth][ color=red, mark=triangle, ] coordinates { (2, 0.951) (3, 0.938) (4, 0.3613375) (5, 0.084) (6, 0.049) (7, 0.049) (8, 0.049) (9, 0.049) (10, 0.049) (11, 0.049) (12, 0.049) }; \addplot+[smooth][ color=blue, mark=*, ] coordinates { (2, 0.951) (3, 0.951) (4, 0.712) (5, 0.206) (6, 0.049) (7, 0.049) (8, 0.049) (9, 0.049) (10, 0.049) (11, 0.049) (12, 0.049) }; \addplot+[smooth][ color=magenta, mark=x, ] coordinates { (2, 0.951) (3, 0.951) (4, 0.950) (5, 0.446) (6, 0.102) (8, 0.049) (9, 0.049) (10, 0.049) (11, 0.049) (12, 0.049) }; \addplot+[smooth][ color=teal, mark=oplus, ] coordinates { (2, 0.951) (3,0.951) (4, 0.951) (5, 0.792) (6, 0.300) (7, 0.067) (8, 0.049) (9, 0.049) (10, 0.049) (11, 0.049) (12, 0.049) }; \addplot+[smooth][ color=purple, mark=star, ] coordinates { (2, 0.951) (3, 0.951) (4, 0.951) (5, 0.950) (6,0.655) (7, 0.197) (8, 0.091) (9, 0.049) (10, 0.049) (11, 0.049) (12, 0.049) }; \addplot+[smooth][ color=teal, mark=|, ] coordinates { (2, 0.951) (3, 0.951) (4, 0.951) (5, 0.951) (6, 0.938) (7, 0.537) (8, 0.170) (9, 0.050) (10, 0.049) (11, 0.049) (12, 0.049) }; \addplot+[smooth][ color=cyan, mark=o, ] coordinates{ (2,0.951) (3,0.951) (4,0.951) (5,0.951) (6,0.951) (7,0.951) (8,0.796) (9,0.443) (10,0.152) (11,0.087) (12,0.049) }; \legend{$f=0.01$, $f=0.08$, $f=0.15$, $f=0.23$, $f= 0.30$, $f= 0.39$, $f= 0.48$, $f= 0.60$} \end{axis} \end{tikzpicture} \caption{Failure rate of common prefix for various $f$} \label{cp_f} \end{figure} Changes in block propagation speed are not considered. 
Since a round is a period of complete block propagation, $f$ is subject to change when network speeds change. In Bitcoin, block propagation speeds have rapidly increased over the last few years \cite{KIT2020}. For example, the time for a block to reach 99\% of nodes has decreased from 11 seconds to 2 seconds.\footnote{Over the period from 2017 to 2020.} It follows that $f$, a key parameter, has decreased as well. Because $f$ can change unexpectedly, measuring a cryptocurrency’s behavior over a range of $f$ values is important. For the following experiments, we let $n = 8$, $\alpha = 0.33$, and $\mu = 0.39$. Over a range of $f$ values, we find that Bitcoin has very different security properties. For smaller values of $f$, the failure rates of the chain quality and common prefix properties converge faster (Figures \ref{cq_f}, \ref{cp_f}, \ref{3d_f}). When the failure rate of chain quality converges faster, we can look at a shorter stretch of blocks and be confident it contains at least $\mu\ell$ honest blocks. When the common prefix converges faster, we can wait for fewer block confirmations and be confident that double spending will not occur. While these behaviors help with security, they do not help with transaction processing speed. To improve transaction processing speed, a cryptocurrency must increase its block generation rate \cite{eprint-2015-26827}.
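The 2--3\% range quoted above can be reproduced with a back-of-the-envelope calculation (our simplification, assuming Poisson block arrivals rather than the backbone paper's exact per-round analysis): with blocks arriving every 10 minutes on average and a round of roughly 20 seconds,

```python
from math import exp

def f_per_round(round_seconds, block_interval_seconds):
    """Probability of at least one block in a round, assuming Poisson
    block arrivals at the network's average rate (a simplification:
    it counts all blocks, not only honest ones)."""
    return 1.0 - exp(-round_seconds / block_interval_seconds)

# Bitcoin: ~10-minute blocks and ~20 s of full propagation per round
# give f in the 2-3% range quoted above.
f = f_per_round(20, 600)
assert 0.02 < f < 0.04
print(round(f, 4))  # ~ 0.0328
```

The same function with a 2-second round gives $f \approx 0.0033$, consistent with the observation that $f$ drops as propagation speeds up while the block interval stays fixed.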
\begin{figure}[t] \begin{tikzpicture} \begin{axis}[ xtick={2,3,4,5,6,7,8,9,10}, ytick={0.25, 0.50, 0.75}, yticklabel style={ /pgf/number format/fixed, /pgf/number format/precision=2 }, scaled y ticks=false, xlabel=$k$, ylabel=$f$, xmax = 10.5, zlabel={Failure Rate}, width=0.9\linewidth, height=6cm ] \addplot3[surf,mesh/rows=9] coordinates { (2,0.013,0.92) (2,0.081,0.951) (2,0.152,0.951) (2,0.226,0.951) (2,0.304,0.951) (2,0.386,0.951) (2,0.475,0.951) (2,0.6,0.951) (2,0.7,0.951) (2,0.8,0.951) (3,0.013,0.272) (3,0.081,0.887) (3,0.152,0.951) (3,0.226,0.951) (3,0.304,0.951) (3,0.386,0.951) (3,0.475,0.951) (3,0.6,0.951) (3,0.7,0.951) (3,0.8,0.951) (4,0.013,0.049) (4,0.081,0.311) (4,0.152,0.711) (4,0.226,0.95) (4,0.304,0.951) (4,0.386,0.951) (4,0.475,0.951) (4,0.6,0.951) (4,0.7,0.951) (4,0.8,0.951) (5,0.013,0.049) (5,0.081,0.033) (5,0.152,0.205) (5,0.226,0.446) (5,0.304,0.792) (5,0.386,0.95) (5,0.475,0.951) (5,0.6,0.951) (5,0.7,0.951) (5,0.8,0.951) (6,0.013,0.048) (6,0.081,0.048) (6,0.152,0.048) (6,0.226,0.101) (6,0.304,0.299) (6,0.386,0.655) (6,0.475,0.937) (6,0.6,0.951) (6,0.7,0.951) (6,0.8,0.951) (7,0.013,0.048) (7,0.081,0.048) (7,0.152,0.048) (7,0.226,0.048) (7,0.304,0.066) (7,0.386,0.197) (7,0.475,0.537) (7,0.6,0.951) (7,0.7,0.951) (7,0.8,0.951) (8,0.013,0.048) (8,0.081,0.048) (8,0.152,0.048) (8,0.226,0.048) (8,0.304,0.048) (8,0.386,0.091) (8,0.475,0.169) (8,0.6,0.796) (8,0.7,0.951) (8,0.8,0.951) (9,0.013,0.048) (9,0.081,0.048) (9,0.152,0.048) (9,0.226,0.048) (9,0.304,0.048) (9,0.386,0.048) (9,0.475,0.049) (9,0.6,0.443) (9,0.7,0.916) (9,0.8,0.951) (10,0.013,0.048) (10,0.081,0.048) (10,0.152,0.048) (10,0.226,0.048) (10,0.304,0.048) (10,0.386,0.048) (10,0.475,0.048) (10,0.6,0.152) (10,0.7,0.721) (10,0.8,0.951) }; \end{axis} \end{tikzpicture} \caption{Overview of parameter space for common prefix property} \label{3d_f} \end{figure} Decreasing block generation times provide an opportunity for increased transaction throughput. 
If a cryptocurrency can improve its block propagation speed enough, then the block generation rate can be decreased such that $f$ stays constant. The behavior of the chain quality and common prefix properties will be the same, but blocks and transactions will be produced more rapidly. In the case of Bitcoin, this raises a question: should cryptocurrencies aim for ideological or security-based consistency? While Bitcoin's block generation rate is kept at 10 minutes, this does not guarantee consistency from the backbone perspective. Bitcoin, from the perspective of the backbone protocol, has changed greatly over the last few years. \section{Related Work} \cite{Chaudhary_2015} and \cite{10.1007/978-3-319-77935-5_11} use UPPAAL SMC to model Bitcoin, but do not build off the backbone protocol. \cite{Chaudhary_2015} analyzes double spending, whereas \cite{10.1007/978-3-319-77935-5_11} analyzes an Andresen attack. Neither models chain quality nor chain growth. \cite{gervais2016security} provide a framework for modeling the security of PoW blockchains using a Markov Decision Process. They also show that selfish mining is not always a rational strategy. Still, their rational attacker model does not account for incentives outside of the blockchain. For example, an attacker may lose bitcoin during an attack, but have a payoff in USD.\footnote{Through financial derivatives or increased market share.} In \cite{Sapirshtein2015OptimalSM} the authors measure the effects of uniform tie breaking on the profit threshold for selfish mining. By looking at the revenue of the adversary, \cite{Sapirshtein2015OptimalSM} implicitly measures the chain quality property from \cite{garay_bitcoin_2015}. Their results showed that uniform tie breaking limited the power of strongly communicating attackers, but enhanced the power of poorly communicating attackers. Still, no work was done to directly measure the effects of this modification on the common prefix and chain growth properties.
\section{Future Work} Some avenues for future work are: \begin{itemize}[leftmargin=0.25cm] \item \textbf{Other adversarial strategies}: Our model can be used to model other adversarial strategies, besides selfish mining. \item \textbf{Delay-bounded model}: Our model can be extended to the delay-bounded version of the backbone protocol, where rounds are not highly synchronous, but there is a bound on the time it takes for messages to be delivered. \item \textbf{Honest Mining Pools}: Our work assumes all honest parties have the same hashing capabilities. In reality, different parties and pools have different hashing capabilities. Future work could account for this, which would capture the importance of certain parties in terms of block propagation. For example, it would be more beneficial for a selfish miner to quickly relay his block to a large mining pool than to a single miner with a negligible percentage of the total hash rate. \end{itemize} \section{Conclusion} This paper presents a case study of model checking PoW cryptocurrencies using the Bitcoin Backbone Protocol as a foundation. We show how to model the protocol using Statistical Model Checking tools, and identify concrete security properties of the protocol. We use the model to demonstrate how design decisions can impact different concrete backbone protocol properties in different ways, in a manner that is not obvious from prior asymptotic analysis. This paper attempts to explain the value of applying the foundation introduced by \cite{garay_bitcoin_2015} to practice. The experiments above map out the effectiveness of a selfish mining strategy against various deployment parameters. By doing this, we are able to derive results that lay out a direction for further work in the design and analysis of PoW protocols. \section{Acknowledgments} The authors thank Marco Patrignani (Stanford University) and Avradip Mandal, Hart Montgomery, Arnab Roy (Fujitsu Laboratories of America Inc.)
for their assistance and insight in formulating the ideas underlying our model and results. The authors thank the Office of Naval Research for support through grant N00014-18-1-2620, Accountable Protocol Customization. \printbibliography \clearpage \newpage \section{Modeling the Bitcoin Backbone Protocol in UPPAAL} In this section, we provide details on how we modeled the Bitcoin Backbone Protocol with UPPAAL-SMC. \subsection{Global Declarations} To model the backbone protocol, we designed three custom data structures: Block, Global\_Ledger, and Diffusion. \begin{declaration} typedef struct \{ \\ $\;$ int[0, node\_max] id; \\ $\;$ int[0, total\_run\_time] rd; \\ $\;$ int parent; \\ $\;$ int block\_num; \\ $\;$ int length; \\ $\;$ bool sent\_to[node\_max]; \\ $\;$ int[0, 1] is\_private; \\ $\;$ int num\_adv\_blocks; \\ \} Block; \caption{Block} \label{block} \end{declaration} \begin{declaration} typedef struct \{ \\ $\;$ Block blockchain[block\_max]; \\ $\;$ int best\_block[node\_max]; \\ $\;$ int max\_len; \\ \} Global\_Ledger; \caption{Global\_Ledger} \label{ledger} \end{declaration} \begin{declaration} typedef struct \{ \\ $\;$ int receive[node\_max][node\_max]; \\ $\;$ int receive\_len[node\_max]; \\ $\;$ int to\_be\_diffused[node\_max]; \\ $\;$ int[0, node\_max] to\_be\_diffused\_len; \\ \} Diffusion; \caption{Diffusion} \label{diffusion} \end{declaration} Declaration \ref{block} shows the structure of a block in our model. Each block has a unique identifier, a round number (indicating the round it was created in), a parent id that indicates the previous block in the chain, and a block number indicating the number of blocks ever created at the time of its creation. Each block also stores information such as its depth from the genesis block, an array of participants that the block was sent to, a flag indicating whether it is held private, and the number of blocks created by adversaries in its chain.
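To preview how such Block records are used by the property checks described later, here is a hypothetical Python mirror of a chain-quality check (plain code for illustration only, not the UPPAAL model; all names are ours):

```python
from dataclasses import dataclass

@dataclass
class Block:
    block_id: int         # unique identifier
    parent: int           # id of the previous block (-1 for genesis)
    is_adversarial: bool  # analogue of counting adversarial blocks

def chain_quality_holds(chain, ell, mu):
    """Backbone chain quality: every window of ell consecutive
    blocks must contain at least mu*ell honestly mined blocks."""
    for start in range(len(chain) - ell + 1):
        window = chain[start:start + ell]
        honest = sum(1 for b in window if not b.is_adversarial)
        if honest < mu * ell:
            return False
    return True
```

A chain of six blocks with a single adversarial block satisfies the check for $\ell=3$, $\mu=0.5$, while a chain whose first window holds two adversarial blocks does not.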
Declaration \ref{ledger} shows the structure of the global blockchain ledger. Global\_Ledger keeps track of every block created in the network with blockchain[]. Miners maintain their local chain by pointing to a block in best\_block[]. Finally, max\_len tracks the length of the longest chain(s) in the network. Declaration \ref{diffusion} shows the structure of the diffusion model. Diffusion models the diffusion functionality used in the backbone protocol. A 2D array, mimicking the RECEIVE() tape from the backbone, is used to keep track of block propagation. \subsection{Honest and Adversarial Parties} As shown in Figure \ref{fig:honest} and Figure \ref{fig:adv}, we modeled honest parties and the adversary separately. Each state diagram consists of five non-trivial states: \begin{enumerate} \item \textbf{Start}: the start state of each round \item \textbf{End}: the end state of each round \item \textbf{Protocol Failure}: indicates a failure of one of the backbone properties \item \textbf{No Block}: indicates a party's mining outcome is unsuccessful \item \textbf{Found Block}: indicates a party's mining outcome is successful \end{enumerate} Each round begins at the \textbf{start} state and ends at the \textbf{end\_of\_round} state. The \textbf{protocol\_failure} state represents a failure in the backbone protocol. The \textbf{no\_block} and \textbf{found\_block} states correspond to a miner's PoW outcome in the current round. We note features of the state diagrams: \begin{itemize} \item One or more backbone properties are verified at the end of each round. In this example, the expression \textbf{check\_common\_prefix} will force a miner to enter the failure state when the common prefix is broken. \item The probability that at least one honest party succeeds in finding a PoW solution in a round ($f$) is captured with probabilistic edges. A weight assigned to each edge is used to vary $f$.
\item The synchronization channels \textbf{mine!} and \textbf{mine?} prevent miners from starting a new round before everyone has finished the previous round. \end{itemize} \begin{figure}[h!] \centering \includegraphics[width=0.5\textwidth]{honestLTS.png} \caption{An honest party's state diagram} \label{fig:honest} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=0.5\textwidth]{adversaryLTS.png} \caption{The adversary's state diagram} \label{fig:adv} \end{figure} \subsection{Property Checking Algorithms} At the end of each round, we check each honest chain against one or more of the backbone properties. If a party's chain does not satisfy a backbone property, the party enters the failure state of the model. UPPAAL will terminate and consider this run a failure. We illustrate how each property is checked: \begin{itemize} \item \textbf{Common Prefix}: Common prefix checks that the block $k$ deep in an honest chain is in every other honest chain. This search is pruned by ignoring parties that point to the same \textbf{best\_block}, since this means they have identical chains. \item \textbf{Chain Quality}: Chain quality is checked by counting the blocks in honest chains that were contributed by honest parties. The share of these contributions over any set of $\ell$ blocks should be at least $\mu\ell$. \item \textbf{Chain Growth}: Chain growth is checked by iterating over the blocks of an honest chain until a block $s$ rounds or older is found. Chain growth is satisfied if at least $\tau \cdot s$ blocks were found in this time frame. \end{itemize} \subsection{Selfish Mining} As shown in Table 6, each block has a data field \textbf{is\_private}. Our adversary deterministically chooses whether to keep their blocks private. There are two cases: \begin{enumerate} \item If the adversary's private chain is at least one block ahead of the longest honest chain, it will release one block from its private chain for every honest block that is published.
\item When the adversary's private branch is depleted, it will return to mining on the public branch. \end{enumerate} \end{document}
\section{Introduction} The present study deals with an empirical analysis and mapping of Swiss franc (CHF) interest rates (IR). The complete empirical analysis includes a comprehensive quantitative analysis and characterisation of the daily behaviour of CHF interest rates from October 1998 to December 2005. The mapping part of the paper considers the application of spatial interpolation models for IR mapping in a feature space ``maturity-date''. In a more general setting, interest rates can be considered as functional data (IR curves are formed by different maturities) having specific internal structures (term structure) and parameterised by econometric models. The main issues related to IR mapping are the following: 1) empirical analysis of IR spatio-temporal patterns; 2) reconstruction and prediction of interest rate curves; 3) incorporation of economical/financial hypotheses into the IR prediction process; 4) development of what-if scenarios for financial engineering and risk management. Some of the preliminary ideas elaborated in this study were first presented in \citep{ref1}. In general there are two principal approaches to make term-structure predictions \citep{ref2}: 1) no-arbitrage models and 2) equilibrium models. The no-arbitrage models focus on fitting the term structure at a point in time (a one-dimensional model depending on maturity) to ensure that no arbitrage possibilities exist. This is important for pricing derivatives. The equilibrium models focus on modelling the dynamics of the instantaneous (short) rate using affine models, after which rates at other maturities can be derived under various assumptions about the risk premium. A detailed discussion along with corresponding references can be found in \citep{ref2}. An important and interesting approach complementary to the classical empirical analysis of interest rate time series was developed in \citep{ref3, ref4, ref5}, where both traditional econophysics studies (power law distributions, etc.)
and a coherent hierarchical structure of interest rates were considered in detail. An empirical quantitative analysis of multivariate interest rate time series and their increments (carried out but not presented in this paper) includes the study of autocorrelations, cross-correlations, detrended fluctuation analysis, embedding, analysis of the distribution of tails, etc. \citep{ref1, ref5, ref6, ref7}. The most important part of the current study deals with IR mapping in a two-dimensional feature space \{maturity (months), time (date/days)\} using spatial interpolation/extrapolation models (inverse distance weighting - IDW, geostatistical kriging models), nonlinear artificial neural network models (multilayer perceptron - MLP) and robust approaches based on recent developments in Statistical Learning Theory (Support Vector Regression) \citep{ref10}. Embedding the IR data into a two-dimensional space brings us to the application of spatial statistics and its modelling tools. Higher dimensional feature spaces can be considered and applied as well. Simple models, like linear models and inverse distance weighting, were used mainly for comparison and visualisation purposes. \begin{figure} \centering \includegraphics[width=6.5cm]{Kanevski_mapFig1.eps} \includegraphics[width=6.5cm]{Kanevski_mapFig2.eps} \caption{Left: Evolution of IR time series for different maturities: 1, 6, 12 months and 5 and 10 years. Right: Examples of the observed CHF interest rate curves for different days.} \label{fig1} \end{figure} The evolution of the CHF interest rate data is given in Figure~\ref{fig1}, where the temporal behaviour of different maturities is presented. The IRCs are composed of LIBOR interest rates (maturities up to 1 year) and of swap interest rates (maturities from one year to 10 years). Such information is available on specialised terminals like Reuters, Bloomberg, etc.
and is usually provided for some fixed time intervals (daily, weekly, monthly) and for some definite maturities (in this research we use the following maturities: 1 week; 1, 2, 3, 6 and 9 months; 1, 2, 3, 4, 5, 7 and 10 years). There are some important stylized facts that have to be considered when modelling IR curves \citep{ref2}: the average yield curve is increasing and concave; the yield curve assumes a variety of shapes through time, including upward sloping, downward sloping, humped, and inverted humped; IR dynamics is persistent, and spread dynamics is much less persistent; the short end of the curve is more volatile than the long end; long rates are more persistent than short rates. There is coherence in the evolution of interest rate curves (see some typical examples in Figure~\ref{fig1}). In the present research the application of IR mapping is concentrated on: 1) visualisation and perception of complex data (IR data represented as maps are easier to analyse and interpret); 2) reconstruction of interest rates at any time and for any maturity by using spatial prediction models; and 3) prediction of IR curves. The last problem can be considered from two different points of view: a) predictions without any a priori information (a problem of extrapolation using historical IR data) and b) forecasting under some prior hypotheses about the future evolution of the market or interest rates (some banks provide information about the future level of interest rates at several maturities). In the latter case one can consider different ``what-if'' scenarios and prepare information on IR curves for dynamic simulations of portfolio development, asset-liability management, etc. \citep{ref1}. \section{Data description and interest rates mapping} Daily CHF IR data from October 1998 to December 2005 were prepared as a training data set (data used to develop a model).
The study of this period is quite interesting because during this time different market states (bullish, bearish), and even the historically lowest LIBOR rates, were observed. Training data were used for IR mapping (IRC reconstruction and in-sample predictions) and to develop a prediction model, which was then used to make predictions for January 2006 (out-of-sample predictions). Thus, real data for 2006 were used only for validation purposes. Time series for several maturities are presented in Figure~\ref{fig1} (left). It is evident that the behaviour of the curves is rather coherent but very complex in time, reflecting financial market evolution. Figure~\ref{fig1} (right) demonstrates the term structure of several IR curves: the dependence of IR on maturity for fixed dates. On December 21 an inversion of the curve, when very short (weekly) interest rates are higher than monthly interest rates, was observed. Despite some coherence in temporal behaviour between different maturities, the relationship between short (1 month) and long (10 years) maturities is nontrivial, which can be demonstrated by analysing the correlation matrix. Sometimes multi-valued relationships can be detected. In \citep{ref2} monthly IR were modelled using the parametric Nelson-Siegel model based on 3 factors corresponding to long-term, short-term and medium-term IR behaviour. These parameters can be interpreted in terms of level, slope ($\textrm{maturity}_{10years} - \textrm{maturity}_{3months}$) and curvature ($2\textrm{maturity}_{2years} - \textrm{maturity}_{3months} - \textrm{maturity}_{10years}$). First, these parameters were modelled as a time series using historical data and then they were applied to make IR forecasts. Such an approach seems quite interesting and transparent from both scientific and practical points of view. The results obtained using linear time series models have demonstrated the efficiency of this methodology.
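The Nelson-Siegel parameterisation mentioned above can be sketched as follows; the decay parameter is held fixed, as is common practice, and its numeric value here is purely illustrative:

```python
import math

def nelson_siegel(tau, beta0, beta1, beta2, lam=0.0609):
    """Nelson-Siegel yield at maturity tau (in months).  beta0 acts
    as the long-term level, beta1 as the slope factor and beta2 as
    the curvature factor; lam (illustrative value) controls how fast
    the slope and curvature loadings decay with maturity."""
    x = lam * tau
    loading1 = (1 - math.exp(-x)) / x   # slope loading, -> 1 as tau -> 0
    loading2 = loading1 - math.exp(-x)  # curvature loading, humped in tau
    return beta0 + beta1 * loading1 + beta2 * loading2
```

For very long maturities the yield tends to $\beta_0$ (the level), while the short end behaves like $\beta_0+\beta_1$, matching the level/slope interpretation above.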
Possible extensions to nonlinear models were not considered and are a topic of ongoing research. Let us consider interest rate mapping in a two-dimensional feature space described by maturity (X-axis, in months) and time (Y-axis, in days). In this presentation, along the X-axis one can observe IR curves for a fixed date (when Y is fixed), and along the Y-axis one can observe the temporal behaviour of IR for fixed maturities. The distance between points in this space is rather synthetic and should be taken into account both during model development and in the interpretation of the results. The geostatistical approach (the family of kriging models) is based on the empirical analysis and modelling of variograms \citep{ref9}. Variography was efficiently used to quantify the quality of machine learning algorithm modelling by estimating the spatial structure of the MLA residuals: good models have to demonstrate pure nugget effects (no spatial correlations) with a variance fluctuating around the raw data noise level \citep{ref9}. Let us consider the interest rate mapping procedure. The value at each prediction (unsampled) point $Z(x,y)$ can, in general, be assessed in two ways: 1) model $Z_{1}$ is a weighted sum of the measured/observed neighbouring data $Z_{i}$ or 2) model $Z_{2}$ is a weighted sum of kernel functions $K_{j}$: \[ \begin{array}{l} Z_{1}(x,y) = \sum\limits_{i=1}^{n} w_{i}(x,y)\, Z_{i}(x_{i},y_{i}) \\[4pt] Z_{2}(x,y) = \sum\limits_{j=1}^{m} \alpha_{j}\, K_{j}(x,y;x_{j},y_{j}) + b \end{array} \] where $n$ and $m$ are the numbers of points or kernels used for the prediction, and $w_{i}$ and $\alpha_{j}$ are the corresponding weight coefficients.
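A minimal numerical sketch of the kernel-expansion model $Z_2$ with Gaussian kernels: here the weights $\alpha_j$ are obtained by ridge regression, which stands in for SVR's quadratic programme (a deliberate simplification; the bias $b$ is omitted, and the function names and regularisation value are ours):

```python
import numpy as np

def kernel_fit(X, z, sigma=1.0, reg=1e-3):
    """Fit Z(x) = sum_j alpha_j K(x, x_j) with Gaussian kernels by
    solving (K + reg*I) alpha = z (ridge regression in place of
    SVR's quadratic optimisation)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))
    return np.linalg.solve(K + reg * np.eye(len(X)), z)

def kernel_predict(X, alpha, x, sigma=1.0):
    """Evaluate the fitted kernel expansion at a new point x."""
    d2 = ((X - x) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2)) @ alpha
```

With a small regularisation the fitted surface nearly interpolates the observations, while larger values of `reg` smooth it, loosely mirroring the role of the SVR hyper-parameters discussed below.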
In the case of inverse distance weighting mapping, the weights are inversely proportional to a power of the distance between the observations and the prediction point; in the case of kriging models, they are the solution of a system of linear equations derived from the principle of the best linear unbiased predictor; and in the case of Support Vector Regression, they are the solution of a quadratic optimization problem following Statistical Learning Theory \citep{ref8, ref9, ref10}. In the case of MLP, the kernels are replaced by combinations of nonlinear transfer functions, e.g. sigmoids. IDW, kriging and MLP are well known and widely applied models for spatial data analysis and mapping. Statistical Learning Theory is a general framework for solving classification, regression and probability density estimation problems using a finite number of empirical data. SVR is a non-parametric regression method which exploits a kernel expansion. It attempts to minimize the empirical risk (the residuals on the training data) while simultaneously keeping the complexity of the model low. In this way over-fitting on the training data can be avoided and one may expect promising predictive abilities. Unlike MLP, SVR, after fixing a few hyper-parameters, has a unique solution of a quadratic optimisation problem \citep{ref10}. All models (IDW, kriging, MLP, SVR) depend on some hyper-parameters (number of neighbours, IDW power, variogram model, number of hidden neurons, regularization and kernel parameters, etc.) which can be tuned using different statistical techniques, e.g. data splitting (training-testing-validation), cross-validation, jack-knife. \begin{figure} \centering \includegraphics[width=6.5cm]{Kanevski_mapFig3.eps} \includegraphics[width=6.5cm]{Kanevski_mapFig4.eps} \caption{Interest rates mapping using inverse distance weighting (left) and multilayer perceptron model (right).} \label{fig3} \end{figure} The result of IDW mapping with a power equal to two is presented in Figure~\ref{fig3} (left).
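For instance, an inverse distance weighting predictor with power two can be sketched in a few lines; points are (maturity, day) pairs and all names are ours:

```python
def idw_predict(points, values, x, y, power=2):
    """Inverse distance weighting: the predicted rate is a weighted
    average of the observed rates Z_i with weights 1/d^power, where
    d is the distance in the (maturity, date) feature space."""
    num = den = 0.0
    for (xi, yi), zi in zip(points, values):
        d2 = (x - xi) ** 2 + (y - yi) ** 2
        if d2 == 0.0:
            return zi  # exact hit on an observation
        w = d2 ** (-power / 2)
        num += w * zi
        den += w
    return num / den
```

At an observation the predictor returns the observed value exactly, and between observations it returns a distance-weighted average, which is why IDW is an exact but potentially artifact-prone interpolator.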
This map gives a quite consistent reproduction of the evolution of interest rates in time. The IDW model is fast to tune, easy to apply and useful for visualisation purposes. But it does not take into account the spatial structure of the data and can produce some artifacts. The second approach presented in this paper is based on MLP. It is a well known nonlinear modelling tool having both advantages and some drawbacks as a black-box tool (interpretability of the results, overfitting of data, multiple local minima, etc.). Nevertheless, MLP is a flexible and very powerful approach for data mining and data modelling. The efficiency and quality of MLP mapping can be controlled using (geo)statistical tools both for the raw data and for the residuals \citep{ref8, ref9}. More details and a generic methodology of MLP application for spatial data analysis and mapping can be found, for example, in \citep{ref9} and for different financial applications in \citep{ref11}. The MLP map is given in Figure~\ref{fig3} (right). Following data modelling traditions, we split the original IR data into training (80\% of data) and testing (20\% of data) subsets. The testing data set was used to control the quality of different MLPs, to avoid overfitting, and to find the optimal number of hidden layers and hidden neurons. As the optimal architecture, a network with 2 hidden layers, each consisting of 25 hidden neurons, was applied. By comparing the results of MLP mapping with a visualisation of the original data, it can be noted that MLP was able to detect and to model large and medium scale structures using a nonlinear smoothing approach. Small scale and low level variability were ignored and treated as insignificant noise. The reconstruction of two different interest rate curves (in-sample prediction) is given in Figure~\ref{fig5}; these curves were extracted from the training data set and then modelled using MLP. Both curves are quite well reconstructed, including the inversion of the IRC.
\begin{figure} \centering \includegraphics[width=6cm]{Kanevski_mapFig5.eps} \includegraphics[width=7.4cm]{Kanevski_mapFig6.eps} \caption{Left: CHF interest rate curves reconstruction using MLP. Right: CHF interest rate curve prediction using MLP.} \label{fig5} \end{figure} The developed MLP model was used for one-month-ahead IRC prediction. An example of the predicted IRC along with the real/observed data is given in Figure~\ref{fig5}. The result is quite promising, and more simulations under different market conditions and time horizons are in progress to validate the predictability of this mapping approach. Finally, Support Vector Regression was applied for CHF IR mapping. The same methodology as for MLP was applied to tune the SVR hyper-parameters: the epsilon-insensitive zone of the robust loss-function, the regularisation parameter C, and the shape of the Gaussian kernel used. It should be noted that the SVR approach is quite computationally intensive and tuning of its parameters is not a trivial task. The IR map produced by SVR is presented in Figure~\ref{fig7} (left). In comparison with MLP, SVR smooths the data less and reproduces the spatio-temporal pattern well. \begin{figure} \centering \includegraphics[width=6cm]{Kanevski_mapFig7.eps} \includegraphics[width=7cm]{Kanevski_mapFig8.eps} \caption{Left: Interest rates mapping using Support Vector Regression. Right: Testing of Support Vector Regression model.} \label{fig7} \end{figure} In order to validate the quality of SVR, the testing data predictions (data not used to develop the model) are compared with the observed values (Figure~\ref{fig7}, right). The results obtained are quite promising. An important improvement for all the models proposed can be achieved by applying the same analysis (development and retraining of models) within a moving window, which can be related to the market conditions. This can also partly help to avoid the problem of spatial nonstationarity.
Another approach can be based on hybrid MLA+geostatistics models \citep{ref9}, already successfully applied to spatial environmental data. All spatial data analysis was carried out using the Geostat Office software \citep{ref9}. \section{Conclusions} \label{Conclusions} The paper presents the results on mapping of CHF interest rates. Particular attention was paid to the visualisation of IR curves in a two-dimensional maturity-date feature space. Tools and models from spatial statistics and machine learning were applied to produce the maps. The developed interest rate maps can be easily interpreted and give a good summary view of IRC evolution. Some promising results on forecasting were obtained using artificial neural networks. An important improvement can be achieved by using hybrid models based on geostatistics and machine learning and by incorporating time series tools into the modelling and forecasting procedures. In the future the IR mapping methodology will incorporate economical/financial hypotheses of market behaviour in order to elaborate ``what-if'' scenarios for financial risk management.
\section{Motivations, metric problem and overview of past and new results} \label{sec:intro} This paper continues the related work \cite{bright2021easily}, which provided practical motivations for a metric map of the Lattice Isometry Space and then focused on 2-dimensional lattices. Briefly, since crystal structures are determined in a rigid form, their most fundamental equivalence is rigid motion (any composition of translations and rotations in $\mathbb R^3$). The concept of an isometry (any map preserving Euclidean distances) also includes mirror reflections. It is a bit more convenient to study the isometry equivalence. We can easily detect if an isometry preserves an orientation. \medskip Isometry is the fundamental equivalence of lattices due to rigidity of most crystals. The resulting Lattice Isometry Space (LISP) consists of infinitely many classes, where every class includes all lattices isometric to each other. Then any transition between lattices is a continuous path in the LISP. The two square lattices in the top left corner of Fig.~\ref{fig:lattice_classification} have different bases (related by a rotation) but belong to the same isometry class of unit square lattices. A past approach to uniquely represent any isometry class was to choose a reduced basis (Niggli's reduced cell). Any such reduction is discontinuous under perturbations of a basis, see \citeasnoun[Theorem~15]{widdowson2022average}. Metric Problem~\ref{pro:metric} is stated below for any dimension $n\geq 2$. The main contribution is the extension of the solution for $n=2$ from \cite{bright2021easily} to $n=3$. 
\begin{pro}[metric on lattices] \label{pro:metric} Find a metric $d(\Lambda,\Lambda')$ on lattices in $\mathbb R^n$ such that \smallskip \noindent (\ref{pro:metric}a) $d(\Lambda,\Lambda')$ is independent of given primitive bases of lattices $\Lambda,\Lambda'$; \smallskip \noindent (\ref{pro:metric}b) the function $d(\Lambda,\Lambda')$ is preserved under any isometry or rigid motion of $\mathbb R^n$; \smallskip \noindent (\ref{pro:metric}c) $d$ satisfies the metric axioms: $d(\Lambda,\Lambda')=0$ if and only if $\Lambda,\Lambda'$ are isometric, symmetry $d(\Lambda,\Lambda')=d(\Lambda',\Lambda)$ and triangle inequality $d(\Lambda,\Lambda')+d(\Lambda',\Lambda'')\geq d(\Lambda,\Lambda'')$; \smallskip \noindent (\ref{pro:metric}d) $d(\Lambda,\Lambda')$ continuously changes under perturbations of primitive bases of $\Lambda,\Lambda'$; \smallskip \noindent (\ref{pro:metric}e) $d(\Lambda,\Lambda')$ is computed from reduced bases of $\Lambda,\Lambda'$ in a constant time. \hfill $\blacksquare$ \end{pro} \citeasnoun[section~2]{bright2021easily} has reviewed many past attempts to solve Problem~\ref{pro:metric}, especially those based on Niggli's reduced cell \cite{niggli1928krystallographische}, whose discontinuity \cite{andrews1980perturbation} fails condition (\ref{pro:metric}d). We should certainly mention the celebrated efforts of Larry Andrews and Herbert Bernstein \cite{andrews1988lattices,andrews2014geometry,mcgill2014geometry,andrews2019selling}, whose latest advance is the $DC^7$ function comparing lattices by the seven ordered distances from the origin to its closest neighbours \cite{andrews2019space}. This function $DC^7$ turns out to be a nearly ideal solution to Problem~\ref{pro:metric}, see details in Example~\ref{exa:dc7=0}.
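Following the verbal description above, a $DC^7$-style vector (the seven smallest distances, counted with multiplicity, from the origin to other lattice points) can be sketched directly from a basis; the finite coefficient window and all names are our own assumptions, and the precise convention of $DC^7$ is given in the cited work:

```python
import itertools, math

def dc7_sketch(basis, search=3):
    """Seven smallest distances from the origin to other points of
    the lattice spanned by the rows of a 3x3 basis.  Assumes the
    coefficient window [-search, search] captures the nearest
    neighbours, which holds for reasonably reduced bases."""
    rng = range(-search, search + 1)
    dists = []
    for c in itertools.product(rng, rng, rng):
        if c == (0, 0, 0):
            continue
        v = [sum(c[i] * basis[i][k] for i in range(3)) for k in range(3)]
        dists.append(math.sqrt(sum(x * x for x in v)))
    return sorted(dists)[:7]
```

For the unit cubic lattice this returns the six unit distances to the nearest neighbours followed by $\sqrt{2}$.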
\medskip \begin{figure} \label{fig:lattice_classification} \caption{ The LISP is bijectively and bi-continuously mapped to root forms of lattices, which are triples of root products between vectors of an obtuse superbase in $\mathbb R^2$.} \includegraphics[width=1.0\textwidth]{images/lattice_classification3.png} \end{figure} Section~\ref{sec:definitions} formally defines key concepts, most importantly Voronoi domains. Following \cite{conway1992low}, we recall that an obtuse superbase consists of vectors $v_1,v_2,v_3$ and $v_0=-v_1-v_2-v_3$ in $\mathbb R^3$ such that all vectors have pairwise non-acute angles, equivalently non-positive scalar products $v_i\cdot v_j\leq 0$. Section~\ref{sec:invariants3d} introduces the root products $r_{ij}=\sqrt{-v_i\cdot v_j}$, which serve as coordinates on the space of root forms of lattices (RFL). \medskip This space $\mathrm{RFL}$ provides a complete and continuous parameterisation of the Lattice Isometry Space (LISP) as follows. Theorem~\ref{thm:superbases/isometry} substantially reduces the ambiguity of lattice representations by infinitely many bases to only very few obtuse superbases, see the bottom right corner in Fig.~\ref{fig:lattice_classification}. Theorem~\ref{thm:classification3d} proves the completeness of root forms by establishing an invertible 1-1 map $\mathrm{LISP}\leftrightarrow\mathrm{RFL}$. Theorems~\ref{thm:superbases->root_forms} and~\ref{thm:root_forms->superbases} prove that this 1-1 map is continuous in both directions. As a result, we have a complete and continuous metric map on the isometry space of lattices ($\mathrm{LISP}$) in $\mathbb R^3$. \section{Basic definitions and Conway-Sloane's results for lattices} \label{sec:definitions} Any point $p$ in Euclidean space $\mathbb R^n$ can be represented by the vector from the origin $0\in\mathbb R^n$ to $p$. So $p$ may also denote this vector, though an equal vector $p$ can be drawn at any initial point. The \emph{Euclidean} distance between points $p,q\in\mathbb R^n$ is $|p-q|$.
\begin{dfn}[a lattice $\Lambda$, a unit cell $U$] \label{dfn:periodic_set} Let vectors $v_1,\dots,v_n$ form a linear {\em basis} in $\mathbb R^n$ so that if $\sum\limits_{i=1}^n c_i v_i=0$ for some real $c_i$, then all $c_i=0$. Then a {\em lattice} $\Lambda$ in $\mathbb R^n$ consists of all linear combinations $\sum\limits_{i=1}^n c_i v_i$ with integer coefficients $c_i\in\mathbb Z$. The parallelepiped $U(v_1,\dots,v_n)=\left\{ \sum\limits_{i=1}^n c_i v_i \,:\, c_i\in[0,1) \right\}$ is a \emph{primitive unit cell} of $\Lambda$. \hfill $\blacksquare$ \end{dfn} The (signed) volume $V$ of a unit cell $U(v_1,\dots,v_n)$ equals the determinant of the $n\times n$ matrix with columns $v_1,\dots,v_n$. The sign of $V$ is used to define an \emph{orientation}. \begin{dfn}[isometry, orientation and rigid motion] \label{dfn:isometry} An \emph{isometry} is any map $f:\mathbb R^n\to\mathbb R^n$ such that $|f(p)-f(q)|=|p-q|$ for any $p,q\in\mathbb R^n$. For any basis $v_1,\dots,v_n$ of $\mathbb R^n$, the volumes of $U(v_1,\dots,v_n)$ and $U(f(v_1),\dots,f(v_n))$ have the same absolute non-zero value. If these volumes are equal, the isometry $f$ is \emph{orientation-preserving}, otherwise $f$ is \emph{orientation-reversing}. Any orientation-preserving isometry $f$ is a composition of translations and rotations, and can be included into a continuous family of isometries $f_t$, where $t\in[0,1]$, $f_0$ is the identity map and $f_1=f$, which is also called a \emph{rigid motion}. Any orientation-reversing isometry is a composition of a rigid motion and a single reflection in a linear subspace of dimension $n-1$. \hfill $\blacksquare$ \end{dfn} \begin{comment} a periodic point set $S=M+\Lambda$ A \emph{periodic point set} $S\subset\mathbb R^n$ is the \emph{Minkowski sum} $S=M+\Lambda=\{u+\vec v \,:\, u\in M, \vec v\in \Lambda\}$, so $S$ is a finite union of translates of the lattice $\Lambda$. 
A unit cell $U$ is \emph{primitive} if $S$ remains invariant under shifts by vectors only from $\Lambda$ generated by $U$ (or the basis $\vec v_1,\dots,\vec v_n$). Any lattice $\Lambda$ can be considered as a periodic set with a 1-point motif $M=\{p\}$. This single point $p$ can be arbitrarily chosen in a unit cell $U$. The lattice translate $p+\Lambda$ is also considered as a lattice, because the point $p$ can be chosen as the origin of $\mathbb R^n$. \medskip A lattice $\Lambda$ of a periodic set $S=M+\Lambda\subset\mathbb R^n$ is not unique in the sense that $S$ can be generated by a sublattice of $\Lambda$ and a motif larger than $M$. If $U$ is any unit cell of $\Lambda$, the sublattice $2\Lambda=\{2\vec v \,:\, \vec v\in \Lambda\}\subset\Lambda$ has the $2^n$ times larger unit cell $2^n U$, twice larger along each of $n$ basis vectors of $U$. Then $S=M'+2\Lambda$ for the motif $M'=S\cap 2^nU$ containing $2^n$ times more points than $M$. If we double the unit cell $U$ only along one of $n$ basis vectors, the new motif of $S$ has only twice more points than $M$. \end{comment} The Voronoi domain defined below is also called the \emph{Wigner-Seitz cell}, \emph{Brillouin zone} or \emph{Dirichlet cell}. We use the word \emph{domain} to avoid confusion with a unit cell, which is a parallelepiped spanned by a vector basis. Though the Voronoi domain can be defined for any point of a lattice, it will suffice to consider only the origin $0$.
A vector $v\in\Lambda$ is called a \emph{Voronoi vector} if the bisector hyperplane $H(0,v)=\{p\in\mathbb R^n \,:\, p\cdot v=\frac{1}{2}v^2\}$ between 0 and $v$ intersects $V(\Lambda)$. If $V(\Lambda)\cap H(0,v)$ is an $(n-1)$-dimensional face of $V(\Lambda)$, then $v$ is called a \emph{strict} Voronoi vector. \hfill $\blacksquare$ \end{dfn} Theorem~\ref{thm:reduction} proves that any lattice in $\mathbb R^n$ for $n=2,3$ has an obtuse superbase of vectors whose pairwise scalar products are non-positive and are called \emph{Selling parameters}. For any superbase in $\mathbb R^n$, the opposite parameters $p_{ij}=-v_i\cdot v_j$ can be interpreted as conorms of lattice characters, functions $\chi: \Lambda\to\{\pm 1\}$ satisfying $\chi(u+v)=\chi(u)\chi(v)$, see \citeasnoun[Theorem~6]{conway1992low}. Hence $p_{ij}$ will be briefly called \emph{conorms}. \begin{dfn}[obtuse superbase and its conorms $p_{ij}$] \label{dfn:conorms} For any basis $v_1,\dots,v_n$ in $\mathbb R^n$, the \emph{superbase} $v_0,v_1,\dots,v_n$ includes the vector $v_0=-\sum\limits_{i=1}^n v_i$. The \emph{conorms} $p_{ij}=-v_i\cdot v_j$ are equal to the negative scalar products of the vectors above. The superbase is called \emph{obtuse} if all conorms $p_{ij}\geq 0$, so all angles between vectors $v_i,v_j$ are non-acute for distinct indices $i,j\in\{0,1,\dots,n\}$. The superbase is called \emph{strict} if all $p_{ij}>0$. \hfill $\blacksquare$ \end{dfn} \citeasnoun[formula (1)]{conway1992low} has a typo initially defining $p_{ij}$ as exact Selling parameters, but their Theorems 3,7,8 explicitly use non-negative $p_{ij}=-v_i\cdot v_j\geq 0$. \medskip The indices of a conorm $p_{ij}$ are distinct and unordered, so we assume that $p_{ij}=p_{ji}$. A 1D lattice generated by a vector $v_1$ has the obtuse superbase of $v_0=-v_1$ and $v_1$, so the only conorm $p_{01}=-v_0\cdot v_1=v_1^2$ is the squared norm of $v_1$.
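The conorms and the obtuse/strict distinction of the definition above are easy to check mechanically. In this sketch the function names are our own; the square superbase below is obtuse but not strict ($p_{12}=0$), while the hexagonal superbase is strict with all conorms equal to $\frac{1}{2}$.

```python
import numpy as np


def conorms(superbase):
    """All conorms p_ij = -v_i . v_j of a superbase, for unordered pairs i < j."""
    V = [np.asarray(v, dtype=float) for v in superbase]
    return {(i, j): -float(np.dot(V[i], V[j]))
            for i in range(len(V)) for j in range(i + 1, len(V))}


def is_obtuse(superbase, strict=False):
    """True if all conorms are non-negative (strictly positive if strict=True)."""
    values = conorms(superbase).values()
    return all(p > 0 for p in values) if strict else all(p >= -1e-12 for p in values)


# Superbases (v0, v1, v2) of the unit square and hexagonal lattices.
square = [(-1, -1), (1, 0), (0, 1)]
hexagonal = [(-0.5, -3 ** 0.5 / 2), (1, 0), (-0.5, 3 ** 0.5 / 2)]
print(conorms(square), is_obtuse(square, strict=True), is_obtuse(hexagonal, strict=True))
```

The non-strict conorm $p_{12}=0$ of the square superbase corresponds to the right angle between the two basis vectors.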
Any superbase in $\mathbb R^n$ has $\dfrac{n(n+1)}{2}$ conorms $p_{ij}$, for example three conorms $p_{01},p_{02},p_{12}$ in dimension 2. Definition~\ref{dfn:partial_sums} introduces partial sums $v_S$ for any superbase $\{v_i\}_{i=0}^n$ of a lattice $\Lambda$. \begin{dfn}[partial sums $v_S$ and their vonorms] \label{dfn:partial_sums} Let a lattice $\Lambda\subset\mathbb R^n$ have any superbase $v_0,v_1,\dots,v_n$ with $v_0=-\sum\limits_{i=1}^n v_i$. For any proper subset $S\subset\{0,1,\dots,n\}$ of indices, consider its complement $\bar S=\{0,1,\dots,n\}-S$ and the \emph{partial sum} $v_S=\sum\limits_{i\in S} v_i$ whose squared lengths $v_S^2$ are called \emph{vonorms} of the superbase $\{v_i\}_{i=0}^n$. Since $v_S=-v_{\bar S}$, the \emph{vonorms} can be expressed as $v_S^2=(\sum\limits_{i\in S} v_i)\cdot(-\sum\limits_{j\in\bar S}v_j)=-\sum\limits_{i\in S,j\in\bar S}v_{i}\cdot v_j=\sum\limits_{i\in S,j\in\bar S}p_{ij}$. \hfill $\blacksquare$ \end{dfn} \citeasnoun{conway1992low} call lattices $\Lambda\subset\mathbb R^n$ that have an obtuse superbase \emph{lattices of Voronoi's first kind}, which are all lattices in dimensions 2 and 3 by Theorem~\ref{thm:reduction}. \begin{thm}[obtuse superbase existence] \label{thm:reduction} Any lattice $\Lambda$ in dimensions $n=2,3$ has an obtuse superbase $v_0,v_1,\dots,v_n$ so that $p_{ij}=-v_i\cdot v_j\geq 0$ for any $i\neq j$. \hfill $\blacksquare$ \end{thm} Section~7 of \citeasnoun{conway1992low} attempted to prove Theorem~\ref{thm:reduction} for $n=3$ by an example, which turned out to be wrong, see corrections in Fig.~\ref{fig:forms3d_reduction}. Theorem~\ref{thm:reduction} will be proved in section~\ref{sec:invariants3d} by reducing a basis to an obtuse superbase and correcting key details from pages 60-63 in \cite{conway1992low}. \medskip Lemma~\ref{lem:partial_sums} will later help to prove that a lattice is uniquely determined up to isometry by an obtuse superbase, hence by its vonorms or, equivalently, conorms.
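The vonorm formula $v_S^2=\sum_{i\in S,j\in\bar S}p_{ij}$ of Definition~\ref{dfn:partial_sums} can be verified numerically for every proper subset $S$; the helper name below is our own. The example superbase $v_0=(-2,-3)$, $v_1=(3,0)$, $v_2=(-1,3)$ has conorms $3,6,7$ and vonorms $9=6+3$, $10=7+3$, $13=6+7$.

```python
from itertools import combinations

import numpy as np


def vonorm_identity_holds(superbase):
    """Check v_S^2 == sum of conorms p_ij over i in S, j in the complement,
    for every proper non-empty subset S of superbase indices."""
    V = [np.asarray(v, dtype=float) for v in superbase]
    m = len(V)
    p = {(i, j): -np.dot(V[i], V[j]) for i in range(m) for j in range(m) if i != j}
    for size in range(1, m):
        for S in combinations(range(m), size):
            Sbar = [j for j in range(m) if j not in S]
            vS = sum(V[i] for i in S)
            if not np.isclose(vS @ vS, sum(p[(i, j)] for i in S for j in Sbar)):
                return False
    return True


print(vonorm_identity_holds([(-2, -3), (3, 0), (-1, 3)]))
```

The check only uses the fact that the superbase vectors sum to zero, so it holds for non-obtuse superbases as well.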
\begin{lem}[Voronoi vectors $v_S$, Theorem~3 in \cite{conway1992low}] \label{lem:partial_sums} For any obtuse superbase $v_0,v_1,\dots,v_n$ of a lattice, all partial sums $v_S$ from Definition~\ref{dfn:partial_sums} split into $2^n-1$ symmetric pairs $v_S=-v_{\bar S}$, which are Voronoi vectors representing distinct $2\Lambda$-classes in $\Lambda/2\Lambda$. All Voronoi vectors $v_S$ are strict if and only if all $p_{ij}>0$. \hfill $\blacksquare$ \end{lem} \begin{comment} \section{Voforms and coforms are isometry invariants of lattices in dimension 2} \label{sec:invariants2d} Definition~\ref{dfn:forms2d} introduces voforms and coforms, which are triangular cycles whose three nodes are marked by vonorms and conorms, respectively. Though we start from any obtuse superbase $B$ of a lattice $\Lambda$ to define a voform $\mathrm{VF}$, a coform $\mathrm{CF}$ and root form $\mathrm{RF}$, Lemma~\ref{lem:isometric_superbases} will justify that they are independent of $B$ up to isomorphism. \begin{figure} \caption{\textbf{1st}: a voform $\mathrm{VF}(B)$ of a 2D lattice with an obtuse superbase $B=(v_0,v_1,v_2)$. \textbf{2nd}: nodes of a coform $\mathrm{CF}(B)$ are marked by conorms $p_{ij}$. \textbf{3rd and 4th}: $\mathrm{VF}$ and $\mathrm{CF}$ of the hexagonal and square lattices with a minimum vector length $a$.} \label{fig:forms2d} \includegraphics[height=40mm]{images/voform2d.png} \includegraphics[height=40mm]{images/coform2d.png} \hspace*{2mm} \includegraphics[height=40mm]{images/hexagonal2d.png} \hspace*{2mm} \includegraphics[height=40mm]{images/square2d.png} \end{figure} \begin{dfn}[2D \emph{voforms}, \emph{coforms}, \emph{root forms} and \emph{isomorphisms}] \label{dfn:forms2d} Any obtuse superbase $B=(v_0,v_1,v_2)$ of a lattice $\Lambda\subset\mathbb R^2$ has three pairs of partial sums $\pm v_0=\mp(v_1+v_2)$, $\pm v_1$, $\pm v_2$.
The formula $v_S^2=\sum\limits_{i\in S,j\in\bar S}p_{ij}$ in Definition~\ref{dfn:partial_sums} implies that $$ v_0^2=p_{01}+p_{02},\qquad v_1^2=p_{01}+p_{12},\qquad v_2^2=p_{02}+p_{12}. \leqno{(\ref{dfn:forms2d}a)}$$ The conorms are conversely expressed from the above formulae as $$ p_{12}=\dfrac{1}{2}(v_1^2+v_2^2-v_0^2),\quad p_{01}=\dfrac{1}{2}(v_0^2+v_1^2-v_2^2),\quad p_{02}=\dfrac{1}{2}(v_0^2+v_2^2-v_1^2). \leqno{(\ref{dfn:forms2d}b)}$$ Briefly, $p_{ij}=\dfrac{1}{2}(v_i^2+v_j^2-v_k^2)$ for any distinct $i,j\in\{0,1,2\}$ and $\{k\}=\{0,1,2\}-\{i,j\}$. \medskip The \emph{voform} $\mathrm{VF}(B)$ is the cycle on three nodes marked by the vonorms $v_0^2,v_1^2,v_2^2$, see Fig.~\ref{fig:forms2d}. The \emph{coform} $\mathrm{CF}(B)$ is the cycle on three nodes marked by the conorms $p_{12},p_{02},p_{01}$. An \emph{isomorphism} (denoted by $\sim$) of voforms (coforms, respectively) is any permutation $\sigma$ of the three vonorms (conorms, respectively) from the permutation group $S_3$ on the indices 0,1,2. This isomorphism is \emph{orientation-preserving} (denoted by $\stackrel{o}{\sim}$) if $\sigma$ is one of the even permutations that form the alternating subgroup $A_3\subset S_3$. \medskip Since all conorms $p_{ij}$ are non-negative, we can define the \emph{root products} $r_{ij}=\sqrt{p_{ij}}$, which have the same units as the original coordinates, for example in Angstroms: $1\AA=10^{-10}$m. Up to isomorphism of coforms, these root products can be ordered: $r_{12}\leq r_{01}\leq r_{02}$, which is equivalent to $v_1^2\leq v_2^2\leq v_0^2$ by formulae (\ref{dfn:forms2d}a). This unique ordered triple $(r_{12},r_{01},r_{02})$ is called the \emph{root form} $\mathrm{RF}(B)$. If we consider only orientation-preserving isomorphisms (even permutations), the last two entries may be swapped; then the triple is called the \emph{orientation-preserving root form} $\mathrm{RF}^o(B)$.
\hfill $\blacksquare$ \end{dfn} \begin{lem}[equivalence of $\mathrm{VF},\mathrm{CF},\mathrm{RF}$] \label{lem:forms2d_equiv} For any obtuse superbase $B$ in $\mathbb R^2$, its voform $\mathrm{VF}(B)$, coform $\mathrm{CF}(B)$ and unique $\mathrm{RF}(B)$ are reconstructible from each other. \end{lem} \begin{proof} The conorms $p_{12},p_{02},p_{01}$ are uniquely expressed via the vonorms $v_0^2,v_1^2,v_2^2$ by formulae (\ref{dfn:forms2d}ab) and vice versa. If we apply a permutation of indices $0,1,2$ to the conorms, the same permutation applies to the vonorms. Hence we have a bijection $\mathrm{CF}(\Lambda)\leftrightarrow\mathrm{VF}(\Lambda)$ up to (orientation-preserving) isomorphism. The root form $\mathrm{RF}(\Lambda)$ is uniquely defined by ordering root products without any need for isomorphisms. \end{proof} \begin{lem}[isometry$\to$isomorphism] \label{lem:forms2d_invariants} Any (orientation-preserving) isometry of obtuse superbases $B\to B'$ induces an (orientation-preserving, respectively) isomorphism of voforms $\mathrm{VF}(B)\sim\mathrm{VF}(B')$, coforms $\mathrm{CF}(B)\sim\mathrm{CF}(B')$ and keeps $\mathrm{RF}(B)=\mathrm{RF}(B')$. \hfill $\blacksquare$ \end{lem} \begin{proof} Any isometry preserves lengths and scalar products of vectors. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:reduction} for $n=2$] For any lattice $\Lambda\subset\mathbb R^2$, permuting vectors of a superbase $B=(v_0,v_1,v_2)$ allows us to order the conorms: $p_{12}\leq p_{01}\leq p_{02}$. Our aim is to reduce $B$ so that all conorms become non-negative. Assuming that $p_{12}=-v_1\cdot v_2=-\varepsilon<0$, we change the superbase: $u_1=-v_1$, $u_2=v_2$, $u_0=v_1-v_2$ so that $u_0+u_1+u_2=0$. \medskip Two vonorms remain the same: $u_1^2=v_1^2$ and $u_2^2=v_2^2$. The third vonorm decreases by $4\varepsilon>0$ as follows: $u_0^2=(v_1-v_2)^2=(v_1+v_2)^2-4v_1\cdot v_2=v_0^2-4\varepsilon$. One conorm changes its sign: $q_{12}=-u_1\cdot u_2=-p_{12}=\varepsilon>0$.
The two other conorms decrease: $$q_{01}=-u_0\cdot u_1=-(v_1-v_2)\cdot(-v_1) =-(-v_1-v_2)\cdot v_1-2v_1\cdot v_2=p_{01}-2\varepsilon,$$ $$q_{02}=-u_0\cdot u_2=-(v_1-v_2)\cdot v_2 =-(-v_1-v_2)\cdot v_2-2v_1\cdot v_2=p_{02}-2\varepsilon.$$ If one of the new conorms becomes negative, we apply the above reduction again. To prove that all conorms become non-negative in finitely many steps, notice that every reduction makes superbase vectors only shorter, but not shorter than a minimum distance between points of $\Lambda$. The angle between $v_i,v_j$ can have only finitely many values when the lengths of $v_i,v_j$ are bounded. Hence the scalar product $\varepsilon=v_i\cdot v_j>0$ cannot converge to 0. Since every reduction makes one of the superbase vectors shorter by a positive constant, the reductions should finish in finitely many steps. \end{proof} By Theorem~\ref{thm:reduction} any lattice $\Lambda\subset\mathbb R^2$ has an obtuse superbase with all $p_{ij}\geq 0$. At least two conorms $p_{ij}$ should be positive, otherwise a vonorm would be zero, but there are no other restrictions on values of conorms. The vonorms $v_0^2,v_1^2,v_2^2>0$ should satisfy three triangle inequalities such as $v_0^2\leq v_1^2+v_2^2$, at most one of which can be an equality. \begin{lem} \label{lem:isometric_superbases} All obtuse superbases of any lattice $\Lambda\subset\mathbb R^2$ are isometric. Hence $\mathrm{VF}(\Lambda)$, $\mathrm{CF}(\Lambda)$, $\mathrm{RF}(\Lambda)$ are independent of a superbase (well-defined up to isomorphism). \hfill $\blacksquare$ \end{lem} \begin{proof} By Lemma~\ref{lem:partial_sums} for $n=2$, if a lattice $\Lambda$ has a strict obtuse superbase $v_0,v_1,v_2$, then all Voronoi vectors of $\Lambda$ are three pairs of symmetric partial sums $\pm v_0,\pm v_1,\pm v_2$. Hence the superbase is uniquely determined by the strict Voronoi vectors up to a sign. So $\Lambda$ has symmetric obtuse superbases $\{v_0,v_1,v_2\}$ and $\{-v_0,-v_1,-v_2\}$, see Fig.~\ref{fig:Voronoi2D}.
\medskip If a superbase of $\Lambda$ is non-strict, one conorm vanishes, say $p_{12}=0$. Then $v_1,v_2$ span a rectangular cell and $\Lambda$ has four non-strict Voronoi vectors $\pm v_1\pm v_2$ with all combinations of signs. Hence $\Lambda$ has four obtuse superbases $\{v_1,v_2,-v_1-v_2\}$, $\{-v_1,v_2,v_1-v_2\}$, $\{v_1,-v_2,v_2-v_1\}$, $\{-v_1,-v_2,v_1+v_2\}$, which are related by reflections, see Fig.~\ref{fig:Voronoi2D}. \medskip Any (even) permutation of vectors $v_0,v_1,v_2$ induces an (orientation-preserving) isomorphism of voforms and coforms and keeps the root form invariant. Lemma~\ref{lem:forms2d_invariants} implies that $\mathrm{VF}(\Lambda)$, $\mathrm{CF}(\Lambda)$, $\mathrm{RF}(\Lambda)$ are independent of a superbase $B$ of $\Lambda$. \end{proof} \section{The simpler space of obtuse superbases up to isometry in dimension 2} \label{sec:superbases/isometry2d} \begin{dfn}[space $\mathrm{OSI}$ of obtuse superbases up to isometry] \label{dfn:OSI} Let $B=\{v_i\}_{i=0}^n,B'=\{u_i\}_{i=0}^n$ be any obtuse superbases in $\mathbb R^n$. Let $L_{\infty}(B,B')=\min\limits_{R\in\mathrm{O}(\mathbb R^n)}\max\limits_{i=0,\dots,n}|R(u_i)-v_i|$ be the maximum Euclidean length of vector differences minimised over all orthogonal maps from the compact group $\mathrm{O}(\mathbb R^n)$. Let $\mathrm{OSI}$ denote the space of \emph{obtuse superbases up to isometry}, which we endow with the metric $L_{\infty}$. For orientation-preserving isometries, the metric $L_{\infty}^o$ can be similarly defined by minimising over rotations from the group $\mathrm{SO}(\mathbb R^n)$, though other metrics on $\mathrm{OSI}$ are also possible. \hfill $\blacksquare$ \end{dfn} Theorem~\ref{thm:superbases/isometry} substantially reduces the representation ambiguity for the lattice isometry space by the 1-1 map $\mathrm{LISP}\to\mathrm{OSI}$.
Indeed, any fixed lattice $\Lambda\subset\mathbb R^2$ has infinitely many (super)bases but only a few obtuse superbases, at most four as shown in Fig.~\ref{fig:Voronoi2D}. \begin{thm}[lattices up to isometry $\leftrightarrow$ obtuse superbases up to isometry] \label{thm:superbases/isometry} Lattices in $\mathbb R^2$ are isometric if and only if any of their obtuse superbases are isometric. \hfill $\blacksquare$ \end{thm} \begin{proof} Part \emph{only if} ($\Rightarrow$): any isometry $f$ between lattices $\Lambda,\Lambda'$ maps any obtuse superbase $B$ of $\Lambda$ to the obtuse superbase $f(B)$ of $\Lambda'$, which should be isometric to any other obtuse superbase of $\Lambda'$ by Lemma~\ref{lem:isometric_superbases}. Part \emph{if} ($\Leftarrow$): any isometry between obtuse superbases of $\Lambda,\Lambda'$ linearly extends to an isometry between the lattices $\Lambda,\Lambda'$. \end{proof} \begin{dfn}[signs of lattices] \label{dfn:sign_lattice} A lattice $\Lambda\subset\mathbb R^2$ is called \emph{neutral} (or \emph{achiral}) if two of its three vonorms are equal, so $\Lambda$ maps to itself under a mirror reflection: if $v_1^2=v_2^2$, then map $v_1\leftrightarrow v_2$, $v_0\mapsto v_0$. A non-neutral lattice $\Lambda$ is called \emph{positive} if its orientation-preserving root form $\mathrm{RF}^o(\Lambda)$ has its root products in increasing order starting from the smallest. If the last two root products are not in increasing order, then $\Lambda$ is called \emph{negative}. \hfill $\blacksquare$ \end{dfn} \begin{exa} \label{exa:lattices2D} {\bf (a)} The square lattice $S\subset\mathbb R^2$ with a side length $a$ has four obtuse superbases in Fig.~\ref{fig:Voronoi2D}, for example $v_1=(a,0)$, $v_2=(0,a)$, $v_0=(-a,-a)$ and $u_1=(a,a)$, $u_2=(0,-a)$, $u_0=(-a,0)$. These superbases have the vonorms $v_0^2=2a^2$, $v_1^2=a^2$, $v_2^2=a^2$ and ${u_0}^2=a^2$, ${u_1}^2=2a^2$, ${u_2}^2=a^2$. The conorms are $p_{12}=0$, $p_{01}=a^2=p_{02}$ and $q_{12}=a^2=q_{01}$, $q_{02}=0$.
Both superbases have the same root form $\mathrm{RF}(S)=(0,a,a)$ of ordered root products, because the coforms $(0,a^2,a^2)$ and $(a^2,a^2,0)$ are isomorphic by a cyclic permutation, so any square lattice $S$ is neutral. \medskip \noindent {\bf (b)} The hexagonal lattice $H\subset\mathbb R^2$ with a side length $a$ has two obtuse superbases in Fig.~\ref{fig:Voronoi2D}, which are symmetric with respect to the origin: $v_1=(a,0)=-u_1$, $v_2=(-\frac{a}{2},\frac{\sqrt{3}}{2}a)=-u_2$, $v_0=(-\frac{a}{2},-\frac{\sqrt{3}}{2}a)=-u_0$. These superbases have the same vonorms $v_i^2=a^2=u_i^2$ and the same conorms $p_{jk}=\frac{a^2}{2}=q_{jk}$ for any $i$ and any $j\neq k$. Both superbases have the same root form $\mathrm{RF}(H)=(\frac{a}{\sqrt{2}},\frac{a}{\sqrt{2}},\frac{a}{\sqrt{2}})$, so any hexagonal lattice is neutral. \medskip \noindent {\bf (c)} At first sight the lattice $\Lambda$ with the basis $v_1=(3,0)$, $v_2=(-1,3)$ looks non-isometric to the lattice $\Lambda'$ with the basis $v_1$, $v'_2=(-2,3)$. However, they have extra superbase vectors $v_0=(-2,-3)$, $v'_0=(-1,-3)$ leading to the coforms $\mathrm{CF}(\Lambda)=(3,6,7)$ and $\mathrm{CF}(\Lambda')=(6,3,7)$ written as $(p_{12},p_{01},p_{02})$. Up to isomorphism, we can shift 3 to the first place, so $\mathrm{CF}(\Lambda)\sim\mathrm{CF}(\Lambda')$ and $\mathrm{RF}(\Lambda)=(\sqrt{3},\sqrt{6},\sqrt{7})=\mathrm{RF}(\Lambda')$. If we aim to preserve orientation, the orientation-preserving root forms are $\mathrm{RF}^o(\Lambda)=(\sqrt{3},\sqrt{6},\sqrt{7})$ and $\mathrm{RF}^o(\Lambda')=(\sqrt{3},\sqrt{7},\sqrt{6})$, which remain different up to orientation-preserving isomorphism. The lattices $\Lambda,\Lambda'$ are related by the mirror reflection $x\leftrightarrow -x$, see Fig.~\ref{fig:root_forms2d_reflection}. By Definition~\ref{dfn:sign_lattice} the lattice $\Lambda$ is positive, while its mirror image $\Lambda'$ is negative.
\hfill $\blacksquare$ \end{exa} \begin{figure} \caption{Root forms of the lattices $\Lambda,\Lambda'$, which differ by a reflection in Example~\ref{exa:lattices2D}(c).} \label{fig:root_forms2d_reflection} \includegraphics[width=\textwidth]{images/root_forms2d_reflection.png} \end{figure} \section{Unique root forms classify all lattices up to isometry in dimension 2} \label{sec:classification2d} \begin{lem}[superbase reconstruction] \label{lem:superbase_reconstruction} For any lattice $\Lambda\subset\mathbb R^2$, an obtuse superbase $B$ of $\Lambda$ can be reconstructed up to isometry from $\mathrm{VF}(\Lambda)$ or $\mathrm{CF}(\Lambda)$ or $\mathrm{RF}(\Lambda)$. \hfill $\blacksquare$ \end{lem} \begin{proof} Since $\mathrm{VF}(\Lambda),\mathrm{CF}(\Lambda),\mathrm{RF}(\Lambda)$ are expressible via each other by Lemma~\ref{lem:forms2d_equiv}, it suffices to consider $\mathrm{VF}(\Lambda)$. Choosing any two vonorms from $\mathrm{VF}(\Lambda)=(v_0^2,v_1^2,v_2^2)$, say $v_1^2,v_2^2$, we can find the lengths $|v_1|,|v_2|$ and the angle $\alpha=\arccos\dfrac{v_1\cdot v_2}{|v_1|\cdot|v_2|}\in[0,\pi)$ between the basis vectors $v_1,v_2$ from $v_1\cdot v_2=-p_{12}=\dfrac{1}{2}(v_0^2-v_1^2-v_2^2)$. Hence an obtuse superbase $(v_0,v_1,v_2)$ of $\Lambda$ is reconstructed and should be unique up to isometry by Lemma~\ref{lem:isometric_superbases}. If a cyclic order of vonorms is fixed, say $v_0^2$ goes after the ordered pair $v_1^2,v_2^2$, we draw the angle $\alpha$ from $v_1$ to $v_2$ counterclockwise, otherwise clockwise. \end{proof} \begin{thm}[isometry classification: 2D lattices $\leftrightarrow$ root forms] \label{thm:classification2d} Lattices $\Lambda,\Lambda'\subset\mathbb R^2$ are isometric if and only if their root forms coincide: $\mathrm{RF}(\Lambda)=\mathrm{RF}(\Lambda')$ or, equivalently, their coforms and voforms are isomorphic: $\mathrm{CF}(\Lambda)\sim\mathrm{CF}(\Lambda')$, $\mathrm{VF}(\Lambda)\sim\mathrm{VF}(\Lambda')$.
The existence of an orientation-preserving isometry is equivalent to $\mathrm{RF}^o(\Lambda)=\mathrm{RF}^o(\Lambda')$. \hfill $\blacksquare$ \end{thm} \begin{proof} The part \emph{only if} ($\Rightarrow$) means that any isometric lattices $\Lambda,\Lambda'$ have $\mathrm{RF}(\Lambda)=\mathrm{RF}(\Lambda')$. Lemma~\ref{lem:forms2d_invariants} implies that the root form $\mathrm{RF}(B)$ of an obtuse superbase $B$ is invariant under isometry. Lemma~\ref{lem:isometric_superbases} implies that $\mathrm{RF}(\Lambda)$ is independent of $B$. \medskip The part \emph{if} ($\Leftarrow$) follows from Lemma~\ref{lem:superbase_reconstruction} by reconstructing a superbase of $\Lambda$. \end{proof} Though both vonorms and conorms are complete continuous invariants, we use the unique root form $\mathrm{RF}$ based on conorms, because formulae (\ref{dfn:forms2d}a) are easier than (\ref{dfn:forms2d}b). \begin{figure} \label{fig:triangle2d} \caption{\textbf{Left}: the \emph{octant} $\mathrm{Oct}=\{(r_{12},r_{01},r_{02})\in\mathbb R^3 \mid \text{ all } r_{ij}\geq 0, \text{ at most one can be }0\}$. \textbf{Middle}: under the scaling (projection from the origin) $r_{ij}\mapsto \bar r_{ij}=r_{ij}(r_{12}+r_{01}+r_{02})^{-1}$, the octant $\mathrm{Oct}$ projects to the \emph{full triangle} $\mathrm{FT}=\mathrm{Oct}\cap\{r_{12}+r_{01}+r_{02}=1\}$.
\textbf{Right}: the quotient triangle $\mathrm{QT}=\mathrm{FT}/S_3$ consists of ordered triples $\bar r_{12}\leq\bar r_{01}\leq\bar r_{02}$ and can be parameterised by $x=\frac{1}{2}(\bar r_{02}-\bar r_{01})\in[0,\frac{1}{2}]$ and $y=\bar r_{12}\in[0,\frac{1}{3}]$.} \includegraphics[height=36mm]{images/octant2d.png}\hspace*{1mm} \hspace*{1mm} \includegraphics[height=36mm]{images/full_triangle2d.png} \hspace*{1mm} \includegraphics[height=36mm]{images/quotient_triangle2d.png} \end{figure} \begin{dfn}[full triangle $\mathrm{FT}$ and quotient triangle $\mathrm{QT}$] \label{dfn:triangle2d} All isometry classes of lattices $\Lambda\subset\mathbb R^2$ with root forms $\mathrm{RF}(\Lambda)=(r_{12},r_{01},r_{02})$ are in a 1-1 correspondence with all ordered triples $r_{12}\leq r_{01}\leq r_{02}$. If we allow any permutations (all isomorphisms of coforms), these triples fill the octant $\mathrm{Oct}=[0,+\infty)^3$ excluding the axes. Project any point $(r_{12},r_{01},r_{02})\in\mathrm{Oct}$ from the origin $(0,0,0)$ to the plane $r_{12}+r_{01}+r_{02}=1$ using the scaling factor $(r_{12}+r_{01}+r_{02})^{-1}$. The image of $\mathrm{Oct}$ is the \emph{full triangle} $\mathrm{FT}=\mathrm{Oct}\cap\{r_{12}+r_{01}+r_{02}=1, r_{ij}\geq 0\}$ without the three vertices. The scaled root products $\bar r_{ij}=r_{ij}(r_{12}+r_{01}+r_{02})^{-1}$ are barycentric coordinates on $\mathrm{FT}$. Take the quotient of $\mathrm{FT}$ by the permutation group $S_3$ to get the smaller \emph{quotient triangle} $\mathrm{QT}$ with the extra conditions $\bar r_{12}\leq\bar r_{01}\leq\bar r_{02}$ (as in the root form $\mathrm{RF}$). Then $\mathrm{QT}$ is parameterised by $x=\frac{1}{2}(\bar r_{02}-\bar r_{01})\in[0,\frac{1}{2}]$ and $y=\bar r_{12}\in[0,\frac{1}{3}]$.
A mirror image of a lattice $\Lambda$ considered up to rigid motion (orientation-preserving isometry) is represented by the point $(-x,y)$ in the reflection of $\mathrm{FT}$ by $x\mapsto -x$, so $\mathrm{FT}/A_3$ is the quotient of the full triangle $\mathrm{FT}$ by rotations through $\pm\frac{2\pi}{3}$ around the centre $(\frac{1}{3},\frac{1}{3},\frac{1}{3})$. \hfill $\blacksquare$ \end{dfn} \begin{exa}[Bravais lattices] \label{exa:Bravais2d} \textbf{(a)} Any square lattice $S$ with a side length $a$ in Fig.~\ref{fig:forms2d} has the root form $(0,a,a)$ projected to $(0,\frac{1}{2},\frac{1}{2})$ in the full triangle $\mathrm{FT}$ and represented by $(0,0)$ in the quotient triangle $\mathrm{QT}$. Any hexagonal lattice $H$ has the root form $(\frac{a}{\sqrt{2}},\frac{a}{\sqrt{2}},\frac{a}{\sqrt{2}})$ projected to $(\frac{1}{3},\frac{1}{3},\frac{1}{3})\in\mathrm{FT}$ and represented by $(0,\frac{1}{3})\in\mathrm{QT}$. \medskip \noindent \textbf{(b)} The vertical side of the quotient triangle has $x=0$, $y\in[0,\frac{1}{3}]$ and represents all lattices with $r_{12}\leq r_{01}=r_{02}$. Choosing $v_0=(a,0)$, we get $v_1=(-\frac{a}{2},b)$, $v_2=(-\frac{a}{2},-b)$ with $0\leq p_{12}=b^2-\frac{a^2}{4}\leq p_{01}=\frac{a^2}{2}$, so $\frac{a}{2}\leq b\leq a\frac{\sqrt{3}}{2}$. In the lower case $b=\frac{a}{2}$, the point $(x,y)=(0,0)$ represents all square lattices with $p_{12}=0$. In the upper case $b=a\frac{\sqrt{3}}{2}$, the point $(x,y)=(0,\frac{1}{3})$ represents all hexagonal lattices with $p_{12}=p_{01}=p_{02}$. The vertical side of the quotient triangle $\mathrm{QT}$ represents lattices whose centred rectangular cells have a short side $a$ and a longer side $2b$ within the interval $(a,a\sqrt{3})$. \medskip \noindent \textbf{(c)} The hypotenuse of $\mathrm{QT}$ has $p_{12}=p_{01}\leq p_{02}$ and represents lattices whose centred rectangular cells have a short side $a$ and a longer side $2b\geq a\sqrt{3}$.
Indeed, for the superbase $v_1=(a,0)$, $v_2=(-\frac{a}{2},b)$, $v_0=(-\frac{a}{2},-b)$, we get $p_{01}=\frac{a^2}{2}\leq p_{02}=b^2-\frac{a^2}{4}$, so $b\geq a\frac{\sqrt{3}}{2}$. The excluded vertex of $\mathrm{QT}$ represents the limit case $b\to+\infty$. \medskip \noindent \textbf{(d)} The horizontal side of $\mathrm{QT}$ has $p_{12}=0$, $p_{01}\leq p_{02}$ and represents all lattices with rectangular cells with sides $a\leq b$. We approach the excluded vertex when $b\to+\infty$. \hfill $\blacksquare$ \end{exa} \section{Easily computable continuous metrics on root forms in dimension 2} \label{sec:metrics2D} \begin{dfn}[space $\mathrm{RFL}$ with root metrics $\mathrm{RM}_d(\Lambda,\Lambda')$] \label{dfn:metrics2d} Let $S_3$ be the group of all six permutations of three conorms of a lattice $\Lambda\subset\mathbb R^2$, $A_3$ be its subgroup of three even permutations. For any metric $d$ on $\mathbb R^3$, the \emph{root metric} is $\mathrm{RM}_d(\Lambda,\Lambda')=\min\limits_{\sigma\in S_3} d(\mathrm{RF}(\Lambda),\sigma(\mathrm{RF}(\Lambda')))$, where a permutation $\sigma$ applies to $\mathrm{RF}(\Lambda')$ as a vector in $\mathbb R^3$. The \emph{orientation-preserving} root metric $\mathrm{RM}_d^o(\Lambda,\Lambda')=\min\limits_{\sigma\in A_3}d(\mathrm{RF}(\Lambda),\sigma(\mathrm{RF}(\Lambda')))$ is minimised over even permutations. If we use the Minkowski $L_q$-norm $||v||_q=(\sum\limits_{i=1}^n |x_i|^q)^{1/q}$ of a vector $v=(x_1,\dots,x_n)\in\mathbb R^n$ for any parameter $q\in[1,+\infty]$, the root metric is denoted by $\mathrm{RM}_q(\Lambda,\Lambda')$. The limit case $q=+\infty$ means that $||v||_{+\infty}=\max\limits_{i=1,\dots,n}|x_i|$. Let $\mathrm{RFL}$ denote the space of \emph{Root Forms of Lattices} $\Lambda\subset\mathbb R^2$, where we can use any of the above metrics satisfying all necessary axioms by Lemma~\ref{lem:metric_axioms}.
\hfill $\blacksquare$ \end{dfn} \begin{exa} \label{exa:metrics2d} The lattices $\Lambda,\Lambda'$ from Example~\ref{exa:lattices2D}(c) with orientation-preserving root forms $\mathrm{RF}^o(\Lambda)=(\sqrt{3},\sqrt{6},\sqrt{7})$ and $\mathrm{RF}^o(\Lambda')=(\sqrt{3},\sqrt{7},\sqrt{6})$ differ by a reflection, so $\mathrm{RM}_d(\Lambda,\Lambda')=0$ for any metric $d$ on $\mathbb R^3$, but the orientation-preserving metric for the $L_q$-norm gives $\mathrm{RM}_q^o(\Lambda,\Lambda')=2^{1/q}(\sqrt{7}-\sqrt{6})$ for $q\in[1,+\infty)$ and $\mathrm{RM}_{+\infty}^o(\Lambda,\Lambda')=\sqrt{7}-\sqrt{6}$. \hfill $\blacksquare$ \end{exa} \begin{lem}[metric axioms for $\mathrm{RM}_d$] \label{lem:metric_axioms} For any metric $d$ on $\mathbb R^3$, the root metrics $\mathrm{RM}_d$, $\mathrm{RM}_d^o$ from Definition~\ref{dfn:metrics2d} satisfy the metric axioms in Problem~\ref{pro:metric}c. \hfill $\blacksquare$ \end{lem} \begin{proof} We prove the metric axioms for $\mathrm{RM}_d$; the orientation-preserving case is similar. The first axiom requires that $\mathrm{RM}_d(\Lambda,\Lambda')=0$ if and only if $\Lambda,\Lambda'$ are isometric. By Definition~\ref{dfn:metrics2d} the equality $\mathrm{RM}_d(\Lambda,\Lambda')=0$ means that there is a permutation $\sigma\in S_3$ such that $d(\mathrm{RF}(\Lambda),\sigma(\mathrm{RF}(\Lambda')))=0$, equivalently $\mathrm{RF}(\Lambda)=\sigma(\mathrm{RF}(\Lambda'))$ since the metric $d$ on $\mathbb R^3$ satisfies the first axiom. The last equality implies that $\mathrm{RF}(\Lambda)=\mathrm{RF}(\Lambda')$, because any root form is an ordered triple. Then $\Lambda,\Lambda'$ are isometric by Theorem~\ref{thm:classification2d}.
\medskip Since $d$ is symmetric, the symmetry follows by taking the inverse permutation: $$\mathrm{RM}_d(\Lambda,\Lambda')=\min\limits_{\sigma\in S_3} d(\mathrm{RF}(\Lambda),\sigma(\mathrm{RF}(\Lambda'))) =\min\limits_{\sigma^{-1}\in S_3} d(\sigma^{-1}(\mathrm{RF}(\Lambda)),\mathrm{RF}(\Lambda')) =\mathrm{RM}_d(\Lambda',\Lambda).$$ To prove the triangle inequality, let permutations $\sigma,\tau\in S_3$ minimise the distances: $\mathrm{RM}_d(\Lambda,\Lambda')=d(\mathrm{RF}(\Lambda),\sigma(\mathrm{RF}(\Lambda')))$ and $\mathrm{RM}_d(\Lambda',\Lambda'')=d(\mathrm{RF}(\Lambda'),\tau(\mathrm{RF}(\Lambda'')))$. The triangle inequality for the auxiliary metric $d$ implies that $$d(\mathrm{RF}(\Lambda),\sigma\circ\tau(\mathrm{RF}(\Lambda'')))\leq d(\mathrm{RF}(\Lambda),\sigma(\mathrm{RF}(\Lambda')))+d(\sigma(\mathrm{RF}(\Lambda')),\sigma\circ\tau(\mathrm{RF}(\Lambda''))).$$ In the final term the common outer permutation $\sigma$ can be dropped, because $\sigma$ permutes the coordinates of both root forms $\mathrm{RF}(\Lambda')$ and $\tau(\mathrm{RF}(\Lambda''))$ in the same way. Hence $d(\mathrm{RF}(\Lambda),\sigma\circ\tau(\mathrm{RF}(\Lambda'')))\leq \mathrm{RM}_d(\Lambda,\Lambda')+\mathrm{RM}_d(\Lambda',\Lambda'')$. The left hand side contains one permutation $\sigma\circ\tau\in S_3$ and can become only smaller after minimising over all permutations from $S_3$. So the triangle inequality is proved: $\mathrm{RM}_d(\Lambda,\Lambda'')\leq\mathrm{RM}_d(\Lambda,\Lambda')+\mathrm{RM}_d(\Lambda',\Lambda'')$. \end{proof} \begin{lem}[continuity of products] \label{lem:continuity_products} Let vectors $u_1,u_2,v_1,v_2\in\mathbb R^n$ have a maximum Euclidean length $l$, scalar products $u_1\cdot u_2,v_1\cdot v_2\leq 0$ and be $\delta$-close in the Euclidean distance so that $|u_i-v_i|\leq\delta$, $i=1,2$. Then $|\sqrt{-u_1\cdot u_2}-\sqrt{-v_1\cdot v_2}|\leq\sqrt{2l\delta}$.
\hfill $\blacksquare$ \end{lem} \begin{proof} If $\sqrt{-u_1\cdot u_2}+\sqrt{-v_1\cdot v_2}\leq\sqrt{2l\delta}$, the difference of square roots is at most $\sqrt{2l\delta}$ as required. Assuming that $\sqrt{-u_1\cdot u_2}+\sqrt{-v_1\cdot v_2}\geq\sqrt{2l\delta}$, it suffices to estimate the difference $|u_1\cdot u_2-v_1\cdot v_2|=|\sqrt{-u_1\cdot u_2}-\sqrt{-v_1\cdot v_2}|(\sqrt{-u_1\cdot u_2}+\sqrt{-v_1\cdot v_2})\leq 2l\delta$. \medskip We estimate the scalar product $|u\cdot v|\leq |u|\cdot|v|$ by using Euclidean lengths. Then we apply the triangle inequality for scalars and replace vector lengths by $l$ as follows: \noindent $|u_1\cdot u_2-v_1\cdot v_2|=|(u_1-v_1)\cdot u_2+v_1\cdot(u_2-v_2)|\leq |(u_1-v_1)\cdot u_2|+|v_1\cdot(u_2-v_2)|\leq$ \noindent $\leq |u_1-v_1|\cdot |u_2|+|v_1|\cdot|u_2-v_2|\leq \delta(|u_2|+|v_1|)\leq 2l\delta$ as required. \end{proof} Theorems~\ref{thm:superbases->root_forms} and~\ref{thm:root_forms->superbases} show that the 1-1 map $\mathrm{OSI}\leftrightarrow\mathrm{LISP}\leftrightarrow\mathrm{RFL}$ established by Theorems~\ref{thm:superbases/isometry} and \ref{thm:classification2d} is continuous in both directions. \begin{thm}[continuity of $\mathrm{OSI}\to\mathrm{RFL}$] \label{thm:superbases->root_forms} Let lattices $\Lambda,\Lambda'\subset\mathbb R^2$ have obtuse superbases $B=(v_0,v_1,v_2)$, $B'=(u_0,u_1,u_2)$ whose vectors have a maximum length $l$ and $|u_i-v_i|\leq\delta$ for some $\delta>0$, $i=0,1,2$. Then $\mathrm{RM}_q(\mathrm{RF}(\Lambda),\mathrm{RF}(\Lambda'))\leq 3^{1/q}\sqrt{2l\delta}$ for any $q\in[1,+\infty]$, where $3^{1/q}$ is interpreted for $q=+\infty$ as $\lim\limits_{q\to+\infty}3^{1/q}=1$. The same upper bound holds for the orientation-preserving metric $\mathrm{RM}_q^o$. 
\hfill $\blacksquare$ \end{thm} \begin{proof} Lemma~\ref{lem:continuity_products} implies that the root products $r_{ij}=\sqrt{-v_i\cdot v_j}$ and $\sqrt{-u_i\cdot u_j}$ of the superbases $B,B'$ differ by at most $\sqrt{2l\delta}$ for any pair $(i,j)$ of indices. Then the $L_q$-norm of the difference $\mathrm{RF}(\Lambda)-\mathrm{RF}(\Lambda')$ in $\mathbb R^3$ is at most $3^{1/q}\sqrt{2l\delta}$ for any $q\in[1,+\infty]$. By Definition~\ref{dfn:metrics2d}, the root metric $\mathrm{RM}_q$ is minimised over permutations of $S_3$ (or $A_3$ for the orientation-preserving metric $\mathrm{RM}_q^o$), so the upper bound still holds. \end{proof} Theorem~\ref{thm:superbases->root_forms} is proved for the $L_q$ norm only to give the explicit upper bound for $\mathrm{RM}_q$. A similar argument proves continuity for $\mathrm{RM}_d$ with any metric $d$ on $\mathbb R^3$ satisfying $d(u,v)\to 0$ when $u\to v$ coordinate-wise. Theorem~\ref{thm:root_forms->superbases} is stated for the maximum norm with $q=+\infty$ only for simplicity, because all Minkowski norms in $\mathbb R^n$ are topologically equivalent due to $||v||_r\leq ||v||_{q}\leq n^{\frac{1}{q}-\frac{1}{r}}||v||_r$ for any $1\leq q\leq r$ \cite{norms}. \begin{thm}[continuity of $\mathrm{RFL}\to\mathrm{OSI}$] \label{thm:root_forms->superbases} Let lattices $\Lambda,\Lambda'\subset\mathbb R^2$ have $\delta$-close root forms, so $\mathrm{RM}_{\infty}(\mathrm{RF}(\Lambda),\mathrm{RF}(\Lambda'))\leq\delta$. Then $\Lambda,\Lambda'$ have obtuse superbases $B$, $B'$ that are close in the $L_{\infty}$ metric on the space $\mathrm{OSI}$ so that $L_{\infty}(B,B')\to 0$ as $\delta\to 0$. The same conclusion holds for the orientation-preserving metrics $\mathrm{RM}_{\infty}^o$ and $L_{\infty}^o$. \hfill $\blacksquare$ \end{thm} \begin{proof} Superbases $B=(v_0,v_1,v_2),B'=(u_0,u_1,u_2)$ can be reconstructed from the root forms $\mathrm{RF}(\Lambda),\mathrm{RF}(\Lambda')$ by Lemma~\ref{lem:superbase_reconstruction}.
By applying a suitable isometry of $\mathbb R^2$, one can assume that $\Lambda,\Lambda'$ share the origin and the first vectors $v_0,u_0$ lie on the positive horizontal axis. Let $r_{ij},s_{ij}$ be the root products of $B,B'$ respectively. Formulae~(\ref{dfn:forms2d}a) imply that $v_i^2=r_{ij}^2+r_{ik}^2$ and $u_i^2=s_{ij}^2+s_{ik}^2$ for distinct indices $i,j,k\in\{0,1,2\}$; for example, if $i=0$ then $j=1$, $k=2$. For any continuous transformation from $\mathrm{RF}(\Lambda)$ to $\mathrm{RF}(\Lambda')$, all root products have a finite upper bound $M$, which is used below: $$|v_i^2-u_i^2|=|(r_{ij}^2+r_{ik}^2)-(s_{ij}^2+s_{ik}^2)|\leq |r_{ij}^2-s_{ij}^2|+|r_{ik}^2-s_{ik}^2|\leq $$ $$(r_{ij}+s_{ij})|r_{ij}-s_{ij}|+(r_{ik}+s_{ik})|r_{ik}-s_{ik}|\leq (r_{ij}+s_{ij})\delta+(r_{ik}+s_{ik})\delta\leq 4M\delta.$$ Since at least two continuously changing conorms should be strictly positive to guarantee positive lengths of basis vectors by formula~(\ref{dfn:forms2d}a), there is a minimum length $a>0$ of all basis vectors during a transformation $\Lambda'\to\Lambda$. Then $||v_i|-|u_i||\leq\dfrac{4M\delta}{|v_i|+|u_i|}\leq\dfrac{2M}{a}\delta$. Since the first basis vectors $v_0,u_0$ lie on the positive horizontal axis, the lengths can be replaced by vectors: $|v_0-u_0|\leq\dfrac{2M}{a}\delta$, so $|v_0-u_0|\to 0$ as $\delta\to 0$. \medskip For other indices $i=1,2$, the basis vectors $v_i,u_i$ can have a non-zero angle equal to the difference $\alpha_i-\beta_i$ of the angles from the positive horizontal axis to $v_i,u_i$, respectively. These angles are expressed via the root products as follows: $$\alpha_i=\arccos\dfrac{v_0\cdot v_i}{|v_0|\cdot|v_i|} =\arccos\dfrac{-r_{0i}^2}{\sqrt{r_{01}^2+r_{02}^2}\sqrt{r_{ij}^2+r_{ik}^2}},$$ $$ \beta_i=\arccos\dfrac{u_0\cdot u_i}{|u_0|\cdot|u_i|} =\arccos\dfrac{-s_{0i}^2}{\sqrt{s_{01}^2+s_{02}^2}\sqrt{s_{ij}^2+s_{ik}^2}} ,$$ where $j\neq k$ differ from $i=1,2$.
If $\delta\to 0$, then $s_{ij}\to r_{ij}$ and $\alpha_i-\beta_i\to 0$ for all indices, because all the above functions are continuous for $|u_j|,|v_j|\geq a$, $j=0,1,2$. \medskip Then we estimate the squared length of the difference by using the scalar product: $$|v_i-u_i|^2=v_i^2+u_i^2-2u_iv_i =(|v_i|^2-2|u_i|\cdot |v_i|+|u_i|^2)+2|u_i|\cdot |v_i|-2|u_i|\cdot |v_i|\cos(\alpha_i-\beta_i)=$$ $$ =(|v_i|-|u_i|)^2+2|u_i|\cdot |v_i|(1-\cos(\alpha_i-\beta_i)) =(|v_i|-|u_i|)^2+|u_i|\cdot |v_i|4\sin^2\dfrac{\alpha_i-\beta_i}{2} \leq$$ $$\leq (|v_i|-|u_i|)^2+|u_i|\cdot |v_i|4\left(\dfrac{\alpha_i-\beta_i}{2}\right)^2 =(|v_i|-|u_i|)^2+|u_i|\cdot |v_i|(\alpha_i-\beta_i)^2,$$ where we have used that $|\sin x|\leq|x|$ for any real $x$. The upper bound $M$ of all root products guarantees a fixed upper bound for lengths $|u_i|,|v_i|$. If $\delta\to 0$, then $|v_i|-|u_i|\to 0$ and $\alpha_i-\beta_i\to 0$ as proved above, so $v_i-u_i\to 0$ and $L_{\infty}(B,B')\to 0$. \end{proof} \section{Large families of 2D lattices from the Cambridge Structural Database} \label{sec:CSDlattices2D} The Cambridge Structural Database contains about 145K periodic crystals whose lattices are primitive orthorhombic. By orthogonally projecting such a lattice along a longer side to $\mathbb R^2$, we get a rectangular lattice with root products $r_{12}=0$ and $r_{01}\leq r_{02}$. \begin{figure} \label{fig:CSDorthorhombic2D} \caption{ Density plot of rectangular 2D lattices extracted from primitive orthorhombic crystals in the CSD and represented by the two non-zero root products $r_{01}\leq r_{02}$.} \includegraphics[width=1.0\textwidth]{images/CSDorthorhombic2D25A_145199_200x200.png} \end{figure} To represent such a large number of naturally existing lattices, we subdivide the upper triangle $\{0<r_{01}\leq r_{02}\}$ into small squares (pixels) and count lattices whose root products $(r_{01},r_{02})$ fall into each pixel. 
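This binning step can be sketched in a few lines; a minimal illustration in Python, where the $25\AA$ cut-off for outliers follows the text, while the pixel size and the sample pairs below are ad hoc:

```python
# Minimal sketch of the pixel binning behind the density plot: count
# root-product pairs (r01, r02) with r01 <= r02 per square pixel.
# The pixel size and sample pairs are hypothetical; the 25 Angstrom
# cut-off for outliers is taken from the text.
def bin_root_products(pairs, pixel=0.5, cutoff=25.0):
    counts = {}
    for r01, r02 in pairs:
        if not (0 < r01 <= r02) or r02 > cutoff:  # skip invalid pairs and outliers
            continue
        cell = (int(r01 // pixel), int(r02 // pixel))
        counts[cell] = counts.get(cell, 0) + 1
    return counts

pairs = [(7.1, 12.2), (7.3, 12.4), (3.0, 3.1), (8.0, 30.0)]
counts = bin_root_products(pairs)  # the last pair is dropped as an outlier
```

A density plot then colours each pixel by its count, as in Fig.~\ref{fig:CSDorthorhombic2D}.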
These counts (from 0 to 75) are represented by the colour bar on the right hand side of Fig.~\ref{fig:CSDorthorhombic2D}. The resulting plot shows a high density cluster close to the diagonal, where lattices are nearly square, and another, less expected, cluster with root products $(r_{01},r_{02})\approx (7,12)\AA$. To make these clusters better visible, we removed about 6\% of outliers with large root products $r_{02}>25\AA$. \medskip Any monoclinic 3D lattice can be orthogonally projected to a generic 2D lattice. The CSD contains about 374K monoclinic lattices, whose projected 2D lattices have generic root forms $r_{12}\leq r_{01}\leq r_{02}$. Any such triple is projected to the quotient triangle $\mathrm{QT}$ from Definition~\ref{dfn:triangle2d}. The projection of a triple has the coordinates $x=\frac{1}{2}(\bar r_{02}-\bar r_{01})$ and $y=\bar r_{12}$, which become unitless after scaling by $r_{12}+r_{02}+r_{01}$. \begin{figure} \label{fig:CSDmonoclinic2D} \caption{ Density plot of all monoclinic 2D lattices extracted from primitive monoclinic crystals in the CSD and represented in the quotient triangle $\mathrm{QT}$. } \includegraphics[width=1.0\textwidth]{images/CSDmonoclinic2D_374255.png} \end{figure} The density plot in Fig.~\ref{fig:CSDmonoclinic2D} shows that projected monoclinic lattices fill the quotient triangle $\mathrm{QT}$ almost completely with white spots only for small $\bar r_{12}<0.01$ and near the vertex $(x,y)=(\frac{1}{2},0)$ representing infinitely long cells. The high density pixels are close to (but not exactly at) the vertex $(0,\frac{1}{3})$ representing hexagonal lattices. \section{Current conclusions and justifications for a continuous crystallography} \label{sec:conclusions} This paper finally resolves Metric Problem~\ref{pro:metric} for 2D lattices, whose continuity and computability conditions remained unfulfilled despite years of persistent attempts.
The follow-up paper will extend the presented tools to 3D lattices whose space will be bijectively and bi-continuously parameterised by root forms consisting of six root products. \medskip The density plot in Fig.~\ref{fig:CSDmonoclinic2D} shows that all reasonable primitive monoclinic 2D lattices appear in known crystals from the CSD. In other words, real crystal lattices fill a continuous space, which should be studied by continuous invariants and metrics that only slightly change under ever-present thermal vibrations of atoms. Using a geographic analogy, the proposed solution to Problem~\ref{pro:metric} draws a complete and continuous map for efficient navigation on the space (new planet) of all 2D lattices. \medskip Using a biological analogy, classical crystallography developed ever more sophisticated ways to distinguish crystals by discrete properties similar to anatomical characteristics of biological species. A new continuous crystallography allows us to compare lattices and crystals of different symmetries (species) and see similarities at a deeper level, as in molecular biology. The proposed root form of a lattice is a direct analogue of the DNA of an individual, which also allows a full reconstruction. \end{comment} \section{Voforms and coforms of an obtuse superbase of a lattice in dimension 3} \label{sec:forms3d} For a lattice $\Lambda\subset\mathbb R^3$ with an obtuse superbase $B$, Definition~\ref{dfn:forms3d} introduces the voform $\mathrm{VF}(B)$ and the coform $\mathrm{CF}(B)$. These forms are Fano planes marked by vonorms and conorms, respectively. The \emph{Fano} projective plane of order 2 consists of seven non-zero classes (called \emph{nodes}) of the space $\Lambda/2\Lambda$, arranged in seven triples (called \emph{lines}). If we mark these nodes by 3-digit binary numbers $001$, $010$, $011$, $100$, $101$, $110$, $111$, the digit-wise sum of any two numbers in each line equals the third number modulo 2, see Fig.~\ref{fig:forms3d}.
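This incidence structure can be checked directly; a minimal sketch in Python, labelling the seven nodes by the integers $1,\dots,7$ so that the digit-wise sum modulo 2 becomes the bitwise XOR:

```python
# The seven non-zero 3-bit labels 001..111 are the integers 1..7;
# a line of the Fano plane is a triple {a, b, a XOR b}.
nodes = range(1, 8)
lines = {frozenset((a, b, a ^ b)) for a in nodes for b in nodes if a != b}
```

Exactly seven lines arise, and every pair of distinct nodes lies on exactly one of them, as expected for the projective plane of order 2.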
Lemma~\ref{lem:forms3d_invariants} will justify that $\mathrm{VF},\mathrm{CF}$ are well-defined for any lattice $\Lambda\subset\mathbb R^3$. \begin{figure} \label{fig:forms3d} \caption{\textbf{Left}: the Fano plane is a set of seven nodes arranged in triples shown by six lines and one circle. \textbf{Middle}: nodes of the voform $\mathrm{VF}(\Lambda)$ are marked by vonorms $v_i^2$ and $v_{ij}^2$. \textbf{Right}: nodes of the coform $\mathrm{CF}(\Lambda)$ are marked by conorms $p_{ij}$ and 0.} \includegraphics[height=31mm]{images/Fano_plane.png} \hspace*{0mm} \includegraphics[height=31mm]{images/voform3d.png} \hspace*{0mm} \includegraphics[height=31mm]{images/coform3d.png} \end{figure} \begin{dfn}[voform and coform of an obtuse superbase] \label{dfn:forms3d} Any obtuse superbase $B=(v_0,v_1,v_2,v_3)$ in $\mathbb R^3$ has seven pairs of partial sums $\pm v_0=\mp(v_1+v_2+v_3)$, $\pm v_1$, $\pm v_2$, $\pm v_3$, $\pm(v_1+v_2)$, $\pm(v_1+v_3)$, $\pm(v_2+v_3)$. Definition~\ref{dfn:partial_sums} expresses their vonorms as $v_i^2=p_{ij}+p_{ik}+p_{il}$ for the unordered triple $\{j,k,l\}=\{0,1,2,3\}-\{i\}$, for instance $v_0^2=p_{01}+p_{02}+p_{03}$. Similarly, $v_{ij}^2=(v_i+v_j)^2=(-v_k-v_l)^2=p_{ik}+p_{il}+p_{jk}+p_{jl}$ for the unordered pair $\{k,l\}=\{0,1,2,3\}-\{i,j\}$. The seven vonorms above have the linear relation $v_0^2+v_1^2+v_2^2+v_3^2=v_{01}^2+v_{02}^2+v_{03}^2$. The six conorms are conversely expressed as $p_{ij}=\dfrac{1}{2}(v_i^2+v_j^2-v_{ij}^2)$ for any distinct indices $i,j\in\{0,1,2,3\}$. \medskip The \emph{voform} $\mathrm{VF}(B)$ is the Fano plane in Fig.~\ref{fig:forms3d} with four nodes marked by $v_0^2,v_1^2,v_2^2,v_3^2$ and three nodes marked by $v_{12}^2, v_{23}^2,v_{13}^2$ so that $v_0^2$ is in the centre, $v_1^2$ is opposite to $v_{23}^2$, etc. 
The \emph{coform} $\mathrm{CF}(B)$ is the dual Fano plane in Fig.~\ref{fig:forms3d} with three nodes marked by $p_{12},p_{23},p_{13}$ and three nodes marked by $p_{01},p_{02},p_{03}$, the centre is marked by $0$. \hfill $\blacksquare$ \end{dfn} The \emph{zero} conorm $p_0=0$ at the centre of the coform $\mathrm{CF}(B)$ seems mysterious, because \cite{conway1992low} gave no formula for $p_0$, which also wrongly became non-zero in their Fig.~5. This past mystery is explained by Lemma~\ref{lem:vonorms<->conorms}. The proof of Theorem~\ref{thm:reduction} for $n=3$ will correct more details in \citeasnoun[Fig.~5]{conway1992low}. \begin{lem}[6 conorms $\leftrightarrow$ 7 vonorms] \label{lem:vonorms<->conorms} For any distinct indices $i,j\in\{0,1,2,3\}$, the conorm $p_{ij}$ in $\mathrm{CF}(B)$ of any superbase $B$ defines the dual line in the voform $\mathrm{VF}(B)$ through the nodes marked by $v_{ij}^2,v_k^2,v_l^2$ for $\{k,l\}=\{0,1,2,3\}-\{i,j\}$. Then $$4p_{ij}=v_i^2+v_j^2+v_{ik}^2+v_{jk}^2-v_{ij}^2-v_k^2-v_l^2, \leqno{(\ref{lem:vonorms<->conorms}a)}$$ where the vonorms with negative signs are in the line of the voform $\mathrm{VF}(B)$ dual to $p_{ij}$. The zero conorm $p_0=0$ in $\mathrm{CF}(B)$ can be computed by the similar formula $$4p_{0}=v_0^2+v_1^2+v_2^2+v_3^2-v_{01}^2-v_{02}^2-v_{03}^2=0, \leqno{(\ref{lem:vonorms<->conorms}b)}$$ where the line dual to the zero conorm $p_0$ is the `circle' through $v_{01}^2,v_{02}^2,v_{03}^2$. \hfill $\blacksquare$ \end{lem} \begin{proof} Since all indices $i,j,k,l\in\{0,1,2,3\}$ are distinct, formula (\ref{lem:vonorms<->conorms}a) is symmetric in $k,l$ due to $v_{ik}^2+v_{jk}^2=v_{il}^2+v_{jl}^2$ following from $v_{ik}=v_i+v_k=-(v_j+v_l)=-v_{jl}$ and $v_{jk}=v_j+v_k=-(v_i+v_l)=-v_{il}$. 
To prove (\ref{lem:vonorms<->conorms}a), we simplify its right hand side: $$v_i^2+v_j^2+v_{ik}^2+v_{jk}^2-v_{ij}^2-v_k^2-v_l^2 =v_i^2+v_j^2+(v_i+v_k)^2+(v_j+v_k)^2-(v_i+v_j)^2-v_k^2-$$ $$-(-v_i-v_j-v_k)^2 =v_i^2+v_j^2+(v_i^2+2v_iv_k+v_k^2)+(v_j^2+2v_jv_k+v_k^2)-$$ $$-(v_i^2+2v_iv_j+v_j^2)-v_k^2-(v_i^2+v_j^2+v_k^2+2v_i v_j+2v_iv_k+2v_jv_k)=-4v_iv_j=4p_{ij} .$$ Formula~(\ref{lem:vonorms<->conorms}b) follows from $v_0^2+v_1^2+v_2^2+v_3^2=v_{01}^2+v_{02}^2+v_{03}^2$ in Definition~\ref{dfn:forms3d}. \end{proof} \begin{dfn}[isomorphisms of voforms and coforms] \label{dfn:isomorphisms3d} An \emph{isomorphism} of voforms is a permutation $\sigma\in S_4$ of indices $0,1,2,3$, which maps vonorms as follows: $v_i^2\mapsto v_{\sigma(i)}^2$, $v_{ij}^2\mapsto v_{\sigma(i)\sigma(j)}^2$, where $v_{ij}^2=v_{ji}^2$. If we swap $v_1^2,v_2^2$, then we also swap only $v_{13}^2=v_{02}^2$ and $v_{23}^2=v_{01}^2$. If we swap $v_0^2,v_1^2$, then we also swap only $v_{12}^2=v_{03}^2$ and $v_{02}^2=v_{13}^2$, see Fig.~\ref{fig:forms3d_permutations}. An \emph{isomorphism} of coforms is a permutation $\sigma\in S_4$ of $0,1,2,3$, which maps conorms as follows: $p_{ij}\mapsto p_{\sigma(i)\sigma(j)}$, where $p_{ij}=p_{ji}$. An isomorphism above is called \emph{orientation-preserving} if the permutation $\sigma$ of the indices $0,1,2,3$ is even (or positive) meaning that $\sigma$ decomposes into an even number of transpositions $i\leftrightarrow j$. \hfill $\blacksquare$ \end{dfn} \begin{figure} \label{fig:forms3d_permutations} \caption{Actions of permutations $1\leftrightarrow 2$ and $0\leftrightarrow 1$ on voforms (top) and coforms.} \includegraphics[width=\textwidth]{images/forms3d_permutations.png} \end{figure} \begin{exa}[voforms and coforms as matrices] \label{exa:forms3dmat} Any voform can be written as the $2\times 3$ matrix $\mathrm{VF}(\Lambda)=\mat{v_{23}^2}{v_{13}^2}{v_{12}^2}{v_1^2}{v_2^2}{v_3^2}$, where $v_{23}^2=v_{01}^2$ is above $v_{1}^2$ and so on. 
The 7th vonorm can be found as $v_0^2= v_{23}^2 + v_{13}^2 + v_{12}^2 - v_1^2 - v_2^2- v_3^2$ and need not be included. Similarly, any coform can be written as $\mathrm{CF}(\Lambda)=\mat{p_{23}}{p_{13}}{p_{12}}{p_{01}}{p_{02}}{p_{03}}$. \medskip The permutations $1\leftrightarrow 2$ and $0\leftrightarrow 1$ affect the voforms and coforms as follows: $$\mat{ \color[rgb]{0,0.5,0.5}{v_{13}^2}}{ \color[rgb]{0,0.5,1}{v_{23}^2}}{ v_{12}^2}{ \color[rgb]{0,0.5,0}{v_2^2}}{ \color[rgb]{0,0,1}{v_1^2}}{ v_3^2} \stackrel{1\leftrightarrow 2}{\longleftrightarrow}\mathrm{VF}(\Lambda)= \mat{ \color[rgb]{0,0.5,1}{v_{23}^2}}{ \color[rgb]{0,0.5,0.5}{v_{13}^2}}{ \color[rgb]{0.5,0,0}{v_{12}^2}}{ \color[rgb]{0,0,1}{v_1^2}}{ \color[rgb]{0,0.5,0}{v_2^2}}{ v_3^2} \stackrel{0\leftrightarrow 1}{\longleftrightarrow} \mat{ v_{23}^2}{ \color[rgb]{0.5,0,0}{v_{12}^2}}{ \color[rgb]{0,0.5,0.5}{v_{13}^2}}{ \color[rgb]{1,0.5,0}{v_0^2}}{ v_2^2}{ v_3^2} \leqno{(\ref{exa:forms3dmat}a)}$$ $$\mat{ \color[rgb]{0.5,0.5,0}{p_{13}}}{ \color[rgb]{0,0.5,1}{p_{23}}}{ p_{12}}{ \color[rgb]{0,0.5,0}{p_{02}}}{ \color[rgb]{0,0,1}{p_{01}}}{ p_{03}} \stackrel{1\leftrightarrow 2}{\longleftrightarrow}\mathrm{CF}(\Lambda)= \mat{ \color[rgb]{0,0.5,1}{p_{23}}}{ \color[rgb]{0.5,0.5,0}{p_{13}}}{ \color[rgb]{0.5,0,0}{p_{12}}}{ \color[rgb]{0,0,1}{p_{01}}}{ \color[rgb]{0,0.5,0}{p_{02}}}{ \color[rgb]{1,0,0}{p_{03}}} \stackrel{0\leftrightarrow 1}{\longleftrightarrow} \mat{ p_{23}}{ \color[rgb]{1,0,0}{p_{03}}}{ \color[rgb]{0,0.5,0}{p_{02}}}{ p_{01}}{ \color[rgb]{0.5,0,0}{p_{12}}}{ \color[rgb]{0.5,0.5,0}{p_{13}}} \leqno{(\ref{exa:forms3dmat}b)}$$ Since an action on the voform may require the 7th vonorm $v_0^2$, we will mainly use conorms for classifying lattices and defining metrics on their isometry classes. \medskip In general, any transposition of non-zero indices $i\leftrightarrow j$ swaps the columns $i$ and $j$ in $\mathrm{CF}(\Lambda)$.
Any transposition $0\leftrightarrow i$ for $i\neq 0$ diagonally swaps two pairs in the columns different from $i$. In all cases, two conorms from one column remain in one column. \hfill $\blacksquare$ \end{exa} Permutations~(\ref{exa:forms3dmat}ab) show that coforms of six conorms are easier than voforms, which essentially require seven vonorms, since $v_0^2$ appears after the transposition $0\leftrightarrow 1$. \begin{exa}[non-isometric lattices with $DC^7(\Lambda,\Lambda')=0$] \label{exa:dc7=0} Fig.~\ref{fig:DC7examples} shows that we cannot arbitrarily permute conorms or vonorms without changing our lattice. Only $4!=24$ permutations are allowed for isomorphisms in Definition~\ref{dfn:isomorphisms3d}. The voforms in Fig.~\ref{fig:DC7examples} differ by a single swap of the values $10\leftrightarrow 12$ of the vonorms $v_{12}^2$ and $v_{23}^2$. The coforms in Fig.~\ref{fig:DC7examples} are computed from the voforms by the formulae in Definition~\ref{dfn:forms3d}. Since the coforms consist of different numbers, they are not isomorphic and give rise to non-isometric lattices $\Lambda,\Lambda'$, see an explicit reconstruction in Lemma~\ref{lem:superbase_reconstruction}. \medskip In these lattices $\Lambda,\Lambda'\subset\mathbb R^3$ the origin $0$ has the same distances $|v_0|$, $|v_1|$, $|v_2|$, $|v_3|$, $|v_{12}|$, $|v_{23}|$, $|v_{13}|$ to its seven closest Voronoi neighbours. Hence the $DC^7$ function taking the Euclidean distance between these 7-dimensional distance vectors \cite{andrews2019space} vanishes for $\Lambda,\Lambda'$. Our colleagues Larry Andrews and Herbert Bernstein quickly checked that $\Lambda,\Lambda'$ can be distinguished by the distance from the origin to its 8th closest neighbour. However, the example in Fig.~\ref{fig:DC7examples} can be extended to an infinite 6-parameter family of pairs $\Lambda,\Lambda'$ with $DC^7(\Lambda,\Lambda')=0$ as follows.
\medskip Add an arbitrary coform of any conorms $q_{ij}\geq 0$ to $\mathrm{CF}(\Lambda),\mathrm{CF}(\Lambda')$ `conorm-wise'. Definition~\ref{dfn:forms3d} implies that the voforms consist of the same 7 numbers, e.g. $$\Lambda:\quad v_0^2=(p_{01}+q_{01})+(p_{02}+q_{02})+(p_{03}+q_{03}) =1+4+1+q_{01}+q_{02}+q_{03},$$ $$\Lambda':\quad v_0^2=(p'_{01}+q_{01})+(p'_{02}+q_{02})+(p'_{03}+q_{03}) =2+1+3+q_{01}+q_{02}+q_{03}.$$ The coforms will remain non-isomorphic if we exclude the singular case when $q_{23}+q_{01} = q_{12}+q_{03}$. This 6-parameter family of non-isometric lattices $\Lambda,\Lambda'$ might be distinguished by 8 or more distances from the origin to its neighbours, but this conclusion requires a theoretical argument. The root metric in Definition~\ref{dfn:metrics3d} will provably satisfy the first metric axiom: $\mathrm{RM}_d(\Lambda,\Lambda')=0$ if and only if $\Lambda,\Lambda'$ are isometric. \hfill $\blacksquare$ \end{exa} \begin{figure} \label{fig:DC7examples} \caption{The lattices defined by non-isomorphic coforms $\mathrm{CF}(\Lambda)\not\sim\mathrm{CF}(\Lambda')$ are not isometric but the origin $0$ has the same distances to its seven closest neighbours.} \includegraphics[width=\textwidth]{images/DC7examples.png} \end{figure} \section{Unique root forms are isometry invariants of lattices in dimension 3} \label{sec:invariants3d} Isomorphisms from Definition~\ref{dfn:isomorphisms3d} help unambiguously order the six conorms within a coform and define a unique root form, which will classify lattices up to isometry. \begin{dfn}[the root form $\mathrm{RF}(\Lambda)$ of an obtuse superbase] \label{dfn:root_form3d} Since any obtuse superbase $B$ has only non-negative conorms, the six \emph{root products} $r_{ij}=\sqrt{p_{ij}}$ are well-defined for all distinct indices $i,j\in\{0,1,2,3\}$ and have the same units as original coordinates of basis vectors, for example in Angstroms: $1\AA=10^{-10}$m.
\medskip For any matrix of root products $\mat{r_{23}}{r_{13}}{r_{12}}{r_{01}}{r_{02}}{r_{03}}$, a permutation of indices 1,2,3 as in (\ref{exa:forms3dmat}a) allows us to arrange the three columns in any order. The composition of transpositions $0\leftrightarrow i$ and $j\leftrightarrow k$ for distinct $i,j,k\neq 0$ vertically swaps the root products in columns $j$ and $k$; for example, apply the transposition $2\leftrightarrow 3$ to the result of $0\leftrightarrow 1$ in (\ref{exa:forms3dmat}b). So we can put the minimum value $r_{min}$ into the top left position ($r_{23}$). Then we consider the four root products in columns 2 and 3. Keeping column 1 fixed, we can put the minimum of these four into the top middle position ($r_{13}$). The root products in the top row are then in increasing order. \medskip If the top left and top middle root products are accidentally equal ($r_{23}=r_{13}$), we can put their counterparts ($r_{01}$ and $r_{02}$) in the bottom row of columns 1,2 in increasing order. If the top middle and top right root products are accidentally equal ($r_{13}=r_{12}$), we can put their counterparts ($r_{02}$ and $r_{03}$) in the bottom row of columns 2 and 3 in increasing order. The resulting matrix is called the \emph{root form} $\mathrm{RF}(B)$ and can be visualised as in the last picture of Fig.~\ref{fig:forms3d} with root products instead of conorms. \medskip For orientation-preserving isomorphism, we have only 12 available permutations of 0,1,2,3 from the group $A_4$, such as the cyclic permutations of the three columns and vertical swaps in two columns, for example realised by the composition of $0\leftrightarrow 1$ and $2\leftrightarrow 3$. These positive permutations still allow us to put the minimum of the six root products into the top left position. The top row cannot be put in increasing order if $r_{13}>r_{12}$ and $r_{02}>r_{03}$.
The vertical swap in columns 2 and 3 can put the pairs $(r_{13},r_{12})$ and $(r_{02},r_{03})$ in lexicographic order: $r_{13}<r_{02}$, or $r_{12}\leq r_{03}$ if $r_{13}=r_{02}$. \medskip The only unresolved ambiguity may appear in the case when all root products in the top row equal the minimum value $r_{min}$ of all six. Then we put the minimum of the three remaining root products at the left position in the bottom row. If five root products equal the minimum value $r_{min}$, the 6th one can be put in the bottom right position. We obtain a unique \emph{root form} $\mathrm{RF}^+(\Lambda)$ up to orientation-preserving isomorphism. \hfill $\blacksquare$ \end{dfn} Geometrically, any root product $r_{ij}$ measures the non-orthogonality of the vectors $v_i,v_j$. \begin{lem}[equivalence of $\mathrm{VF},\mathrm{CF},\mathrm{RF}$] \label{lem:forms3d_equiv} For any obtuse superbase $B$ in $\mathbb R^3$, its voform $\mathrm{VF}(B)$, coform $\mathrm{CF}(B)$ and unique $\mathrm{RF}(B)$ are reconstructible from each other. \end{lem} \begin{proof} The six conorms $p_{ij}$ are uniquely expressed via the seven vonorms $v_i^2,v_{ij}^2$ by formulae (\ref{dfn:forms3d}ab) and vice versa. If we apply a permutation of indices $0,1,2,3$ to the conorms, the same permutation applies to the vonorms. Hence we have a bijection $\mathrm{CF}(\Lambda)\leftrightarrow\mathrm{VF}(\Lambda)$ up to (orientation-preserving) isomorphism. The root form $\mathrm{RF}(\Lambda)$ is uniquely defined by ordering root products without any need for isomorphisms. \end{proof} \begin{lem}[isometry$\to$isomorphism] \label{lem:forms3d_invariants} Any (orientation-preserving) isometry of obtuse superbases $B\to B'$ induces an (orientation-preserving, respectively) isomorphism of voforms $\mathrm{VF}(B)\sim\mathrm{VF}(B')$, coforms $\mathrm{CF}(B)\sim\mathrm{CF}(B')$ and keeps $\mathrm{RF}(B)=\mathrm{RF}(B')$. \hfill $\blacksquare$ \end{lem} \begin{proof} Any isometry preserves lengths and scalar products of vectors.
\end{proof} Lemma~\ref{lem:reduction} will help find an obtuse superbase for any lattice $\Lambda\subset\mathbb R^3$. \begin{lem}[reduction] \label{lem:reduction} Let $B=(v_0,v_1,v_2,v_3)$ be any superbase of a lattice $\Lambda\subset\mathbb R^3$. For any distinct $i,j,k,l\in\{0,1,2,3\}$, let the new superbase vectors be $u_i=-v_i$, $u_j=v_j$, $u_k=v_{ik}=v_i+v_k$, $u_l=v_{il}=v_i+v_l$. Then all vonorms remain the same or swap their places, and the only change is $u_{ij}^2=v_{ij}^2-4\varepsilon$, where $\varepsilon=v_i\cdot v_j$. The conorms $q_{\bullet}$ of the new vectors $u_{\bullet}$ are updated as in Fig.~\ref{fig:forms3d_reduction} for $(i,j)=(1,3)$, $(k,l)=(0,2)$. $$q_{ij}=\varepsilon,\; q_{jk}=p_{jk}-\varepsilon,\; q_{jl}=p_{jl}-\varepsilon,\; q_{ik}=p_{il}-\varepsilon,\; q_{il}=p_{ik}-\varepsilon,\; q_{kl}=p_{kl}+\varepsilon. \quad \blacksquare \leqno{(\ref{lem:reduction})}$$ \end{lem} \begin{proof} If the initial vectors $v_{\bullet}$ form a superbase, which means that $v_i+v_j+v_k+v_l=0$, then so do the new vectors: $u_i+u_j+u_k+u_l=(-v_i)+v_j+(v_i+v_k)+(v_i+v_l)=0$. \begin{figure} \label{fig:forms3d_reduction} \caption{Lemma~\ref{lem:reduction} for $i=1$, $k=2$, $j=3$, $l=0$ says that the new superbase $u_1=-v_1$, $u_2=v_{12}$, $u_3=v_3$, $u_0=v_{01}$ has the new voform $\mathrm{VF}$ and coform $\mathrm{CF}$ shown above.} \includegraphics[width=\textwidth]{images/forms3d_reduction.png} \end{figure} For the new superbase $u_i=-v_i$, $u_j=v_j$, $u_k=v_{ik}$, $u_l=v_{il}$, two vonorms remain the same: $u_i^2=v_i^2$ and $u_j^2=v_j^2$. Two pairs of vonorms swap their places: $u_k^2=v_{ik}^2$, $u_{ik}^2=(u_i+u_k)^2=v_k^2$ and $u_l^2=v_{il}^2$, $u_{il}^2=(u_i+u_l)^2=v_l^2$. The final vonorm is $$u_{ij}^2=u_{kl}^2=(v_j-v_i)^2=(v_i+v_j)^2-4v_i\cdot v_j=v_{ij}^2+4p_{ij}=v_{ij}^2-4\varepsilon, \text{ see Fig.~\ref{fig:forms3d_reduction}.}$$ We similarly check formulae (\ref{lem:reduction}) illustrated in Fig.~\ref{fig:forms3d_reduction} for $i=1$, $k=2$, $j=3$, $l=0$.
\noindent $q_{ij}=-u_i\cdot u_j=v_i\cdot v_j=-p_{ij}=\varepsilon$ \noindent $q_{jk}=-u_j\cdot u_k=-v_j\cdot(v_i+v_k)=-v_i\cdot v_j-v_j\cdot v_k=p_{jk}-\varepsilon$ \noindent $q_{jl}=-u_j\cdot u_l=-v_j\cdot(v_i+v_l)=-v_i\cdot v_j-v_j\cdot v_l=p_{jl}-\varepsilon$ \noindent $q_{ik}=-u_i\cdot u_k=v_i\cdot(v_i+v_k)=v_i\cdot(-v_j-v_l)=-v_i\cdot v_l-v_i\cdot v_j=p_{il}-\varepsilon$ \noindent $q_{il}=-u_i\cdot u_l=v_i\cdot(v_i+v_l)=v_i\cdot(-v_j-v_k)=-v_i\cdot v_k-v_i\cdot v_j=p_{ik}-\varepsilon$ \noindent $q_{kl}=-u_k\cdot u_l=-(v_i+v_k)\cdot(v_i+v_l)=v_i\cdot(-v_i-v_k-v_l)-v_k\cdot v_l =v_i\cdot v_j+p_{kl} =p_{kl}+\varepsilon$. \medskip Notice that the conorm $p_0$ at the centre of $\mathrm{CF}$ remains zero by formula~(\ref{lem:vonorms<->conorms}b): $$4p_{0}=u_i^2+u_j^2+u_k^2+u_l^2-u_{ij}^2-u_{ik}^2-u_{il}^2 =v_i^2+v_j^2+(v_i+v_k)^2+(v_i+v_l)^2-(v_j-v_i)^2-v_k^2-v_l^2=$$ $$=v_i^2+v_j^2+(v_i^2+2v_iv_k+v_k^2)+(v_j+v_k)^2-(v_i^2-2v_iv_j+v_j^2)-v_k^2-(v_i+v_j+v_k)^2=$$ $$=v_i^2+v_j^2+v_k^2+2v_iv_j+2v_jv_k+2v_iv_k-(v_i+v_j+v_k)^2 =0.$$ Hence all central conorms $p_0$ in \citeasnoun[Fig.~5]{conway1992low} should be 0. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:reduction} for $n=3$] We will reduce any superbase $B=(v_0,v_1,v_2,v_3)$ of a lattice $\Lambda\subset\mathbb R^3$ to make all conorms $p_{ij}$ non-negative. Starting from any negative conorm $p_{ij}=-\varepsilon<0$, we change the superbase by Lemma~\ref{lem:reduction}. This reduction leads to the positive conorm $q_{ij}=\varepsilon$, not zero as in \citeasnoun[Fig.~4(b)]{conway1992low}. \medskip Four other conorms decrease by $\varepsilon>0$ and can potentially become negative, which requires a new reduction by Lemma~\ref{lem:reduction} and so on. To prove that the reduction process always finishes, notice that six vonorms keep or swap their values, but one vonorm always decreases by $4\varepsilon>0$.
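The symbolic checks above can also be confirmed numerically; a minimal sketch in Python with an ad hoc superbase (all coordinates below are hypothetical), comparing the conorms of the new superbase against the predicted update:

```python
# Check formulae (lem:reduction) on a hypothetical superbase v0+v1+v2+v3=0
# with one negative conorm p_13 < 0, taking i=1, j=3, k=2, l=0.
def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def conorms(b):
    # the six conorms p_ij = -v_i . v_j for unordered pairs i < j
    return {(i, j): -dot(b[i], b[j]) for i in range(4) for j in range(i + 1, 4)}

def key(i, j):
    return (min(i, j), max(i, j))

v = {1: (1.0, 0.2, 0.0), 2: (-0.2, -0.1, 1.0), 3: (0.1, 1.0, 0.0)}
v[0] = tuple(-(v[1][t] + v[2][t] + v[3][t]) for t in range(3))  # superbase condition
p = conorms(v)
i, j, k, l = 1, 3, 2, 0
eps = dot(v[i], v[j])                       # eps = -p_ij > 0
u = {i: tuple(-x for x in v[i]), j: v[j],
     k: tuple(v[i][t] + v[k][t] for t in range(3)),
     l: tuple(v[i][t] + v[l][t] for t in range(3))}
q = conorms(u)                              # conorms of the new superbase
predicted = {key(i, j): eps,
             key(j, k): p[key(j, k)] - eps, key(j, l): p[key(j, l)] - eps,
             key(i, k): p[key(i, l)] - eps, key(i, l): p[key(i, k)] - eps,
             key(k, l): p[key(k, l)] + eps}
```

Here the six computed conorms agree with formulae (\ref{lem:reduction}) up to rounding; note that some new conorms may still be negative, which is exactly why the reduction may need several steps.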
Every reduction can make superbase vectors only shorter, but not shorter than the minimum distance between points of $\Lambda$. The angle between $v_i,v_j$ can have only finitely many values when the lengths of $v_i,v_j$ are bounded. Hence the scalar product $\varepsilon=v_i\cdot v_j>0$ cannot converge to 0. Since every reduction makes one partial sum $v_S$ shorter by a positive constant, while the other six vectors $v_S$ keep or swap their lengths, the reductions by Lemma~\ref{lem:reduction} must finish in finitely many steps. \end{proof} A reduction of lattice bases for real crystals has many efficient implementations. Theoretical estimates for reduction steps are discussed in \cite{nguyen2009low}. \medskip \begin{lem} \label{lem:isometric_superbases} All obtuse superbases of any lattice $\Lambda\subset\mathbb R^3$ are isometric. Hence $\mathrm{VF}(\Lambda)$, $\mathrm{CF}(\Lambda)$, $\mathrm{RF}(\Lambda)$ are independent of a superbase (well-defined up to isomorphism). \hfill $\blacksquare$ \end{lem} \begin{proof} By Lemma~\ref{lem:partial_sums} for $n=3$, if $\Lambda$ has a strict obtuse superbase $v_0,v_1,v_2,v_3$, all Voronoi vectors of $\Lambda$ are the 7 pairs of partial sums $\pm v_S$ for the vectors $v_S$ from the list $$v_0,\quad v_1,\quad v_2,\quad v_3,\quad v_2+v_3=-(v_0+v_1),\quad v_3+v_1=-(v_0+v_2),\quad v_1+v_2=-(v_0+v_3).$$ In this generic case, the Voronoi domain $V(\Lambda)$ is a truncated octahedron. First, $V(\Lambda)$ has four pairs of opposite hexagonal faces obtained by cutting corners in four pairs of opposite triangular faces in an octahedron. The normal vectors of these hexagons are the Voronoi vectors $\pm v_i$, $i=0,1,2,3$. Second, $V(\Lambda)$ has three pairs of opposite parallelogram faces obtained by cutting three pairs of opposite vertices in an octahedron. The normal vectors of these faces are the Voronoi vectors $v_i+v_j$ for distinct $i,j\in\{0,1,2,3\}$.
Hence a superbase $\{v_0,v_1,v_2,v_3\}$ of any generic $\Lambda$ is determined up to a sign by the four pairs of opposite hexagonal faces in $V(\Lambda)$. \medskip If a superbase of $\Lambda$ is non-strict, one conorm vanishes, say $p_{12}=0$, so the basis vectors $v_1,v_2$ become orthogonal. If $v_1$ or $v_2$ has strictly obtuse angles with both other vectors $v_3$ and $v_0$, there are still only two symmetric superbases $\pm\{v_0,v_1,v_2,v_3\}$. If (say) $v_1$ becomes orthogonal to both $v_2,v_3$, we get the new pair of symmetric superbases $\pm\{v_1-v_2-v_3,-v_1,v_2,v_3\}$ related to $\pm\{-v_1-v_2-v_3,v_1,v_2,v_3\}$ by the mirror reflection with respect to the plane orthogonal to $v_1$. If two more vectors $v_2,v_3$ become orthogonal, the Voronoi domain $V(\Lambda)$ is a rectangular box with four pairs of symmetric superbases, which are all related by mirror reflections in $\mathbb R^3$. \medskip Any (even) permutation of vectors $v_0,v_1,v_2,v_3$ induces an (orientation-preserving) isomorphism of voforms and coforms and keeps the root form invariant. Lemma~\ref{lem:forms3d_invariants} implies that $\mathrm{VF}(\Lambda)$, $\mathrm{CF}(\Lambda)$, $\mathrm{RF}(\Lambda)$ are independent of a superbase $B$ of $\Lambda$. \end{proof} \begin{exa}[root forms of orthorhombic lattices] \label{exa:orthorhombic_lattices} Scaling any lattice $\Lambda$ by a factor $s\in\mathbb R$ multiplies all root products in $\mathrm{RF}(\Lambda)$ by $s$. The primitive orthorhombic lattice ($oP$) with side lengths $0<a\leq b\leq c$ has the obtuse superbase $v_1=(a,0,0)$, $v_2=(0,b,0)$, $v_3=(0,0,c)$, $v_0=(-a,-b,-c)$ and the root form $\mathrm{RF}(oP)=\mat{0}{0}{0}{a}{b}{c}$. \medskip Let a Base-centred Orthorhombic lattice ($oS$) have the underlying cuboid with side lengths $2a\leq 2b\leq 2c$.
The obtuse superbase $v_1=(2a,0,0)$, $v_2=(-a,b,0)$, $v_3=(0,0,c)$, $v_0=(-a,-b,-c)$ gives the root form $\mathrm{RF}(oS)=\mat{0}{0}{a\sqrt{2}}{a\sqrt{2}}{\sqrt{b^2-a^2}}{c}$, where the first two columns should be swapped if $a\sqrt{2}>\sqrt{b^2-a^2}$, equivalently $3a^2>b^2$. \medskip In the above notations, a Face-centred Orthorhombic lattice ($oF$) has the obtuse superbase $v_1=(a,b,0)$, $v_2=(a,-b,0)$, $v_3=(-a,0,c)$, $v_0=(-a,0,-c)$. If $b^2<2a^2$, the root form is $\mathrm{RF}(oF)=\mat{\sqrt{b^2-a^2}}{a}{a}{\sqrt{c^2-a^2}}{a}{a}$, otherwise the first column should be swapped with the last column. For a Body-centred Orthorhombic lattice ($oI$) on the same cuboid above, assume the triangle with side lengths $a,b,c$ is acute to guarantee non-negative conorms. This lattice has the obtuse superbase $v_1=(a,b,-c)$, $v_2=(a,-b,c)$, $v_3=(-a,b,c)$, $v_0=(-a,-b,-c)$ and the root form $\mathrm{RF}(oI)=\mat{\sqrt{a^2+b^2-c^2}}{\sqrt{a^2-b^2+c^2}}{\sqrt{-a^2+b^2+c^2}}{\sqrt{a^2+b^2-c^2}}{\sqrt{a^2-b^2+c^2}}{\sqrt{-a^2+b^2+c^2}}$, where the root products in each row are in increasing order as expected due to $a\leq b\leq c$. \hfill $\blacksquare$ \end{exa} \section{The simpler space of obtuse superbases up to isometry in dimension 3} \label{sec:superbases/isometry3d} \begin{dfn}[space $\mathrm{OSI}^{(3)}$ of obtuse superbases up to isometry] \label{dfn:OSI} Let $B=\{v_i\}_{i=0}^3$ and $B'=\{u_i\}_{i=0}^3$ be any obtuse superbases in $\mathbb R^3$. The maximum Euclidean length of vector differences $L_{\infty}(B,B')=\min\limits_{R\in\mathrm{O}(\mathbb R^3)}\max\limits_{i=0,1,2,3}|R(u_i)-v_i|$ is minimised over all orthogonal maps $R$ from the compact group $\mathrm{O}(\mathbb R^3)$. Let $\mathrm{OSI}^{(3)}$ denote the space of all \emph{obtuse superbases up to isometry} in $\mathbb R^3$, which we equip with the metric $L_{\infty}$.
For orientation-preserving isometries, we have the space $\mathrm{OSI}^{(3)+}$ with the metric $L_{\infty}^+$ defined by minimising over all 3-dimensional rotations from the group $\mathrm{SO}(\mathbb R^3)$. \hfill $\blacksquare$ \end{dfn} Theorem~\ref{thm:superbases/isometry} substantially reduces the ambiguity of basis representations due to the 1-1 map $\mathrm{LISP}^{(3)}\to\mathrm{OSI}^{(3)}$. Any fixed lattice $\Lambda\subset\mathbb R^3$ has infinitely many (super)bases but only a few obtuse superbases, at most eight for rectangular Voronoi domains. \begin{thm}[lattices up to isometry $\leftrightarrow$ obtuse superbases up to isometry] \label{thm:superbases/isometry} Lattices in $\mathbb R^3$ are isometric if and only if any of their obtuse superbases are isometric. \hfill $\blacksquare$ \end{thm} \begin{proof} Part \emph{only if} ($\Rightarrow$): any isometry $f$ between lattices $\Lambda,\Lambda'$ maps any obtuse superbase $B$ of $\Lambda$ to the obtuse superbase $f(B)$ of $\Lambda'$, which should be isometric to any other obtuse superbase of $\Lambda'$ by Lemma~\ref{lem:isometric_superbases}. Part \emph{if} ($\Leftarrow$): any isometry between obtuse superbases of $\Lambda,\Lambda'$ linearly extends to an isometry between the lattices $\Lambda,\Lambda'$. \end{proof} \begin{lem}[special lattices and their root forms] \label{lem:special_forms} \textbf{(a)} If the root form $\mathrm{RF}(\Lambda)$ of a lattice $\Lambda\subset\mathbb R^3$ has two identical columns, for example $r_{12}=r_{13}$ and $r_{02}=r_{03}$, then $\Lambda$ is a mirror reflection of itself. \medskip \noindent \textbf{(b)} If the rows of $\mathrm{RF}(\Lambda)$ coincide, then $\Lambda$ is a Face-centred Orthorhombic lattice.
\hfill $\blacksquare$ \end{lem} \begin{proof} \textbf{(a)} If $r_{12}=r_{13}$ and $r_{02}=r_{03}$, then the vectors $v_2,v_3$ have the same length by formulae of Definition~\ref{dfn:forms3d}: $v_2^2=p_{02}+p_{12}+p_{23}=p_{03}+p_{13}+p_{23}=v_3^2.$ Then $v_2,v_3$ are mirror images with respect to their bisector plane $P$. The identity $p_{02}=p_{03}$ implies that $v_0$ has the same angles with the vectors $v_2,v_3$ of equal lengths, and so does $v_1$ due to $p_{12}=p_{13}$. Then both $v_0,v_1$ belong to the bisector plane $P$ between $v_2,v_3$. Hence the superbase is invariant under the mirror reflection with respect to $P$. \medskip \noindent \textbf{(b)} If $p_{01}=p_{23}$, $p_{02}=p_{13}$, $p_{03}=p_{12}$, the formulae of Definition~\ref{dfn:forms3d} imply that the vectors $v_0,v_1,v_2,v_3$ have the same squared length equal to $p_{01}+p_{02}+p_{03}$. The three other partial sums $v_0+v_i$, $i=1,2,3$, are orthogonal to each other. Indeed, $$(v_0+v_i)\cdot (v_0+v_j)=v_0^2+v_0\cdot v_i +v_0\cdot v_j+v_i\cdot v_j=(p_{01}+p_{02}+p_{03})-p_{0i}-p_{0j}-p_{ij}=0,$$ because $p_{ij}=p_{0k}$ when all indices $i,j,k\in\{1,2,3\}$ are distinct. Hence the vectors $v_0+v_i$ form a non-primitive orthogonal basis of $\Lambda$. Parameters $a,b,c$ of a Face-centred Orthorhombic lattice ($oF$) can be found from Example~\ref{exa:orthorhombic_lattices}. \end{proof} \begin{dfn}[sign of a lattice] \label{dfn:sign_lattice} A lattice $\Lambda\subset\mathbb R^3$ is called \emph{neutral} (or \emph{achiral}) if $\Lambda$ maps to itself under a mirror reflection. If $\Lambda$ is not neutral, we define its positive/negative sign from the orientation-preserving root form $\mathrm{RF}^+(\Lambda)$ as follows. \medskip If the root products in the top row of $\mathrm{RF}^+(\Lambda)$ are in strictly increasing (decreasing) order, then $\Lambda$ is called \emph{positive} (\emph{negative}, respectively).
In the exceptional case when the rows of $\mathrm{RF}^+(\Lambda)$ coincide, $\Lambda$ is neutral by Lemma~\ref{lem:special_forms}(b). \medskip If the top row contains two zero root products, say $r_{12}=r_{13}=0$, then the vector $v_1$ is orthogonal to both $v_2,v_3$, hence $\Lambda$ can be reflected to itself by $v_1\mapsto -v_1$, so $\Lambda$ is neutral. If two root products in the top row have the same non-zero value, say $r_{13}=r_{12}>0$, then we compare the root products $r_{02}$ and $r_{03}$ below them: if $r_{02}<r_{03}$ then the lattice $\Lambda$ is called \emph{positive}, if $r_{02}>r_{03}$ then $\Lambda$ is called \emph{negative}. \medskip If $r_{12}=r_{13}$ and $r_{02}=r_{03}$, then $\Lambda$ is neutral by Lemma~\ref{lem:special_forms}(a). \hfill $\blacksquare$ \end{dfn} \begin{exa}[neutral lattices] \label{exa:sign_lattice} All orthorhombic lattices from Example~\ref{exa:orthorhombic_lattices} are neutral, because they have a mirror symmetry, which is also visible in their root forms $\mathrm{RF}^+(\Lambda)$ containing at least two zeros in the top row ($oP$ and $oS$) or having identical columns ($oF$) or rows ($oI$). Any monoclinic lattice $\Lambda$ has a superbase $v_1=(a,0,0)$, $v_2=(b\cos\alpha,b\sin\alpha,0)$, $v_3=(0,0,c)$, $v_0=(-a-b\cos\alpha,-b\sin\alpha,-c)$, where $a\leq b$ and a non-acute angle $\alpha$ satisfies $a+b\cos\alpha\geq 0$. Then $\Lambda$ is neutral and has the root form $\mathrm{RF}^+(\Lambda)=\mat{0}{0}{\sqrt{-ab\cos\alpha}}{\sqrt{a^2+ab\cos\alpha}}{\sqrt{b^2+ab\cos\alpha}}{c}$. \hfill $\blacksquare$ \end{exa} \section{Unique root forms classify all lattices up to isometry in dimension 3} \label{sec:classification3d} \begin{lem}[superbase reconstruction] \label{lem:superbase_reconstruction} For any lattice $\Lambda\subset\mathbb R^3$, an obtuse superbase $B$ of $\Lambda$ can be reconstructed up to isometry from $\mathrm{VF}(\Lambda)$ or $\mathrm{CF}(\Lambda)$ or $\mathrm{RF}(\Lambda)$.
\hfill $\blacksquare$ \end{lem} \begin{proof} Since $\mathrm{VF}(\Lambda),\mathrm{CF}(\Lambda),\mathrm{RF}(\Lambda)$ are expressible via each other by Lemma~\ref{lem:forms3d_equiv}, it suffices to reconstruct an obtuse superbase $v_0,v_1,v_2,v_3$ of $\Lambda$ from $\mathrm{RF}(\Lambda)$. The positions of root products $r_{ij}=\sqrt{-v_i\cdot v_j}$ in $\mathrm{RF}(\Lambda)$ allow us to compute the lengths $|v_i|$ from the formulae of Definition~\ref{dfn:forms3d}, for example $|v_0|=\sqrt{r_{01}^2+r_{02}^2+r_{03}^2}$. Up to orientation-preserving isometry, one can fix $v_0$ along the positive $x$-axis in $\mathbb R^3$. The angle $\angle(v_i,v_j)=\arccos\dfrac{v_i\cdot v_j}{|v_i|\cdot|v_j|}\in[0,\pi]$ between the vectors $v_i,v_j$ can be found from the vonorms $v_i^2,v_j^2$ and the root product $r_{ij}=\sqrt{-v_i\cdot v_j}$. A known length $|v_1|$ and angle $\angle(v_0,v_1)$ allow us to fix $v_1$ in the $xy$-plane of $\mathbb R^3$. The vector $v_2$ with a known length $|v_2|$ and two angles $\angle(v_0,v_2)$ and $\angle(v_1,v_2)$ has two symmetric positions with respect to the $xy$-plane spanned by $v_0,v_1$. These positions can be distinguished by an order of root products in $\mathrm{RF}^+(\Lambda)$ if we reconstruct up to orientation-preserving isometry. The resulting superbase is unique up to isometry by Lemma~\ref{lem:isometric_superbases}. \end{proof} \begin{thm}[isometry classification: 3D lattices $\leftrightarrow$ root forms] \label{thm:classification3d} Lattices $\Lambda,\Lambda'\subset\mathbb R^3$ are isometric if and only if their root forms coincide: $\mathrm{RF}(\Lambda)=\mathrm{RF}(\Lambda')$ or, equivalently, their coforms and voforms are isomorphic: $\mathrm{CF}(\Lambda)\sim\mathrm{CF}(\Lambda')$, $\mathrm{VF}(\Lambda)\sim\mathrm{VF}(\Lambda')$. The existence of an orientation-preserving isometry is equivalent to $\mathrm{RF}^+(\Lambda)=\mathrm{RF}^+(\Lambda')$.
\hfill $\blacksquare$ \end{thm} \begin{proof} The part \emph{only if} ($\Rightarrow$) means that any isometric lattices $\Lambda,\Lambda'$ have $\mathrm{RF}(\Lambda)=\mathrm{RF}(\Lambda')$. Lemma~\ref{lem:forms3d_invariants} implies that the root form $\mathrm{RF}(B)$ of an obtuse superbase $B$ is invariant under isometry. Lemma~\ref{lem:isometric_superbases} implies that $\mathrm{RF}(\Lambda)$ is independent of $B$. \medskip The part \emph{if} ($\Leftarrow$) follows from Lemma~\ref{lem:superbase_reconstruction} by reconstructing a superbase of $\Lambda$. \end{proof} Similarly to the 2-dimensional case in \citeasnoun[Definition~7.3]{bright2021easily}, one can visualise root forms of many 3D lattices by projecting two triples $(r_{23},r_{13},r_{12})$ and $(r_{01},r_{02},r_{03})$ from the positive octant to a triangle. Due to the order $r_{23}\leq r_{13}\leq r_{12}$, after scaling by $(r_{23}+r_{13}+r_{12})^{-1}$ the top triple maps to a point in the quotient triangle $\mathrm{QT}$ with coordinates $x=(\bar r_{12}-\bar r_{13})/2\in[0,\frac{1}{2}]$ and $y=\bar r_{23}\in[0,\frac{1}{3}]$. The bottom triple $(r_{01},r_{02},r_{03})$ is not ordered and maps under scaling by $(r_{01}+r_{02}+r_{03})^{-1}$ to a point in the full triangle $\mathrm{FT}$ with coordinates $x=(\bar r_{03}-\bar r_{02})/2\in[-\frac{1}{2},\frac{1}{2}]$ and $y=\bar r_{01}\in[0,1]$. \section{Easily computable continuous metrics on root forms in dimension 3} \label{sec:metrics3D} Any isomorphism on coforms from Definition~\ref{dfn:forms3d} similarly acts on a root form $\mathrm{RF}(\Lambda)$ rearranging root products $r_{ij}$ by one of 24 permutations from the group $S_4$ (for any isometries) or 12 even permutations from the group $A_4$ (for orientation-preserving isometries). The root metric is obtained from any distance $d$ between root forms considered as 6-dimensional vectors by minimising over all such permutations.
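To make the minimisation over permutations concrete, the following sketch (our own illustration, not code from this paper; the function names and the dictionary encoding of a root form are assumptions) evaluates the root metric by brute force over all 24 permutations of $S_4$, or over the 12 even ones for the orientation-preserving variant.

```python
from itertools import permutations

# A root form is stored as {frozenset({i, j}): r_ij} for 0 <= i < j <= 3;
# PAIRS fixes one flattening order of the six root products.
PAIRS = [(2, 3), (1, 3), (1, 2), (0, 1), (0, 2), (0, 3)]

def is_even(perm):
    # parity via the inversion count of the images of 0, 1, 2, 3
    inv = sum(perm[i] > perm[j] for i in range(4) for j in range(i + 1, 4))
    return inv % 2 == 0

def root_metric(rf1, rf2, q=2.0, orientation_preserving=False):
    """Brute-force RM_q: minimise the L_q distance over sigma in S_4 (or A_4)."""
    best = float('inf')
    for sigma in permutations(range(4)):
        if orientation_preserving and not is_even(sigma):
            continue  # A_4: even permutations only
        diffs = [abs(rf1[frozenset(p)] - rf2[frozenset({sigma[p[0]], sigma[p[1]]})])
                 for p in PAIRS]
        dist = max(diffs) if q == float('inf') else sum(d ** q for d in diffs) ** (1 / q)
        best = min(best, dist)
    return best
```

Relabelling the superbase vectors of one lattice by any permutation leaves the value at zero, which is exactly the invariance the definition below requires.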
\begin{dfn}[space $\mathrm{RFL}^{(3)}$ with root metrics $\mathrm{RM}_d(\Lambda,\Lambda')$] \label{dfn:metrics3d} For any metric $d$ on $\mathbb R^6$, the \emph{root metric} is $\mathrm{RM}_d(\Lambda,\Lambda')=\min\limits_{\sigma\in S_4} d(\mathrm{RF}(\Lambda),\sigma(\mathrm{RF}(\Lambda')))$, where a permutation $\sigma$ applies to $\mathrm{RF}(\Lambda')$ as a vector in $\mathbb R^6$. The \emph{orientation-preserving} root metric $\mathrm{RM}_d^+(\Lambda,\Lambda')=\min\limits_{\sigma\in A_4}d(\mathrm{RF}(\Lambda),\sigma(\mathrm{RF}(\Lambda')))$ is minimised over even permutations. \medskip If we use the Minkowski $L_q$-norm $||v||_q=(\sum\limits_{i=1}^n |x_i|^q)^{1/q}$ of a vector $v=(x_1,\dots,x_n)\in\mathbb R^n$ for any real parameter $q\in[1,+\infty]$, the root metric is denoted by $\mathrm{RM}_q(\Lambda,\Lambda')$. The limit case $q=+\infty$ means that $||v||_{+\infty}=\max\limits_{i=1,\dots,n}|x_i|$. Let $\mathrm{RFL}^{(3)}$ denote the space of \emph{Root Forms of 3-dimensional Lattices} $\Lambda\subset\mathbb R^3$, where we can use any of the above metrics satisfying all necessary axioms by Lemma~\ref{lem:metric_axioms}. \hfill $\blacksquare$ \end{dfn} The proof of Lemma~\ref{lem:metric_axioms} is almost identical to \citeasnoun[Lemma~8.3]{bright2021easily}. \begin{lem}[metric axioms for $\mathrm{RM}_d$] \label{lem:metric_axioms} For any metric $d$ on $\mathbb R^6$, the root metrics $\mathrm{RM}_d$, $\mathrm{RM}_d^+$ from Definition~\ref{dfn:metrics3d} satisfy the metric axioms in Problem~\ref{pro:metric}c. \hfill $\blacksquare$ \end{lem} \begin{lem}[Lemma~8.4 in \cite{bright2021easily}] \label{lem:continuity_products} Let vectors $u_1,u_2,v_1,v_2\in\mathbb R^n$ have a maximum Euclidean length $l$, scalar products $u_1\cdot u_2,v_1\cdot v_2\leq 0$ and be $\delta$-close in terms of Euclidean distance: $|u_i-v_i|\leq\delta$, $i=1,2$. Then $|\sqrt{-u_1\cdot u_2}-\sqrt{-v_1\cdot v_2}|\leq\sqrt{2l\delta}$.
\hfill $\blacksquare$ \end{lem} Theorems~\ref{thm:superbases->root_forms} and~\ref{thm:root_forms->superbases} show that the 1-1 map $\mathrm{OSI}\leftrightarrow\mathrm{LISP}\leftrightarrow\mathrm{RFL}$ established by Theorems~\ref{thm:superbases/isometry} and \ref{thm:classification3d} is continuous in both directions. \begin{thm}[continuity of $\mathrm{OSI}\to\mathrm{RFL}$] \label{thm:superbases->root_forms} Let lattices $\Lambda,\Lambda'\subset\mathbb R^3$ have obtuse superbases $B=\{v_i\}_{i=0}^3$, $B'=\{u_i\}_{i=0}^3$ whose vectors have a maximum length $l$ and $|u_i-v_i|\leq\delta$ for some $\delta>0$, $i=0,1,2,3$. Then $\mathrm{RM}_q(\mathrm{RF}(\Lambda),\mathrm{RF}(\Lambda'))\leq 6^{1/q}\sqrt{2l\delta}$ for any $q\in[1,+\infty]$, where $6^{1/q}$ is interpreted for $q=+\infty$ as $\lim\limits_{q\to+\infty}6^{1/q}=1$. The same upper bound holds for the orientation-preserving metric $\mathrm{RM}_q^+$. \hfill $\blacksquare$ \end{thm} \begin{proof} Lemma~\ref{lem:continuity_products} implies that the root products $r_{ij}=\sqrt{-v_i\cdot v_j}$ and $\sqrt{-u_i\cdot u_j}$ of the superbases $B,B'$ differ by at most $\sqrt{2l\delta}$ for any pair $(i,j)$ of indices. Then the $L_q$-norm of the vector difference in $\mathbb R^6$ is $\mathrm{RM}_q(\mathrm{RF}(\Lambda),\mathrm{RF}(\Lambda'))\leq 6^{1/q}\sqrt{2l\delta}$ for any $q\in[1,+\infty]$. By Definition~\ref{dfn:metrics3d}, the root metric $\mathrm{RM}_q$ is minimised over permutations of $S_4$ (or $A_4$ for the orientation-preserving metric $\mathrm{RM}_q^+$), so the upper bound still holds. \end{proof} Theorem~\ref{thm:superbases->root_forms} is proved for the $L_q$ norm only to give the explicit upper bound for $\mathrm{RM}_q$. A similar argument proves continuity for $\mathrm{RM}_d$ with any metric $d$ on $\mathbb R^6$ satisfying $d(u,v)\to 0$ when $u\to v$ coordinate-wise.
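As a numerical sanity check of the upper bound $6^{1/q}\sqrt{2l\delta}$ (a sketch under our own choice of lattice and perturbation, not code from this paper), one can perturb the obtuse superbase of the body-centred orthorhombic lattice from the example above and compare the change of the root products against $\sqrt{2l\delta}$; the identity permutation already gives an upper bound for $\mathrm{RM}_\infty$.

```python
import math

def root_products(basis):
    """Root products r_ij = sqrt(-v_i . v_j) of an obtuse superbase
    (all pairwise scalar products are assumed to be <= 0)."""
    dot = lambda u, v: sum(x * y for x, y in zip(u, v))
    return {(i, j): math.sqrt(-dot(basis[i], basis[j]))
            for i in range(4) for j in range(i + 1, 4)}

# Body-centred orthorhombic superbase with a, b, c an acute triangle,
# so all conorms are strictly positive and stay so under a tiny perturbation.
a, b, c = 2.0, 2.2, 2.5
B = [(-a, -b, -c), (a, b, -c), (a, -b, c), (-a, b, c)]

# Perturb v_1, v_2, v_3 and restore v_0 = -(v_1 + v_2 + v_3),
# which keeps the superbase property exactly.
eps = 1e-3
V = [(a + eps, b, -c), (a, -b + eps, c), (-a, b, c + eps)]
B2 = [tuple(-sum(v[k] for v in V) for k in range(3))] + V

delta = max(math.dist(u, v) for u, v in zip(B, B2))        # max |u_i - v_i|
l = max(math.sqrt(sum(x * x for x in v)) for v in B + B2)  # max vector length
r1, r2 = root_products(B), root_products(B2)
rm_inf = max(abs(r1[p] - r2[p]) for p in r1)  # identity permutation suffices
bound = math.sqrt(2 * l * delta)              # 6^{1/q} -> 1 for q = infinity
```

With these numbers the observed change of the root products stays well below the theoretical bound, as the theorem guarantees.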
Theorem~\ref{thm:root_forms->superbases} is stated for $L_{+\infty}$ only for simplicity, because all Minkowski norms in $\mathbb R^n$ are topologically equivalent due to $||v||_r\leq ||v||_{q}\leq n^{\frac{1}{q}-\frac{1}{r}}||v||_r$ for any $1\leq q\leq r$ \cite{norms}. \begin{thm}[continuity of $\mathrm{RFL}\to\mathrm{OSI}$] \label{thm:root_forms->superbases} Let lattices $\Lambda,\Lambda'\subset\mathbb R^3$ have $\delta$-close root forms, so $\mathrm{RM}_{\infty}(\mathrm{RF}(\Lambda),\mathrm{RF}(\Lambda'))\leq\delta$. Then $\Lambda,\Lambda'$ have obtuse superbases $B$, $B'$ that are close in the $L_{\infty}$ metric on the space $\mathrm{OSI}$ so that $L_{\infty}(B,B')\to 0$ as $\delta\to 0$. The same conclusion holds for the orientation-preserving metrics $\mathrm{RM}_{\infty}^+$ and $L_{\infty}^+$. \hfill $\blacksquare$ \end{thm} \begin{proof} Superbases $B=\{v_i\}_{i=0}^3$, $B'=\{u_i\}_{i=0}^3$ can be reconstructed from the root forms $\mathrm{RF}(\Lambda),\mathrm{RF}(\Lambda')$ by Lemma~\ref{lem:superbase_reconstruction}. By applying a suitable isometry of $\mathbb R^3$, one can assume that $\Lambda,\Lambda'$ share the origin and the first vectors $v_0,u_0$ lie on the positive $x$-axis. Let $r_{ij},s_{ij}$ be the root products of $B,B'$ respectively. Definition~\ref{dfn:forms3d} implies that $v_i^2=r_{ij}^2+r_{ik}^2+r_{il}^2$ and $u_i^2=s_{ij}^2+s_{ik}^2+s_{il}^2$ for distinct indices $i,j,k,l\in\{0,1,2,3\}$, for example if $i=0$ then $j=1$, $k=2$, $l=3$.
For any continuous transformation from $\mathrm{RF}(\Lambda)$ to $\mathrm{RF}(\Lambda')$, all root products have a finite upper bound $M$, which is used below: $$|v_i^2-u_i^2|=|(r_{ij}^2+r_{ik}^2+r_{il}^2)-(s_{ij}^2+s_{ik}^2+s_{il}^2)|\leq |r_{ij}^2-s_{ij}^2|+|r_{ik}^2-s_{ik}^2|+|r_{il}^2-s_{il}^2|\leq $$ $$(r_{ij}+s_{ij})|r_{ij}-s_{ij}|+(r_{ik}+s_{ik})|r_{ik}-s_{ik}|+(r_{il}+s_{il})|r_{il}-s_{il}|\leq (r_{ij}+s_{ij})\delta+(r_{ik}+s_{ik})\delta+(r_{il}+s_{il})\delta\leq 6M\delta.$$ Since at least two continuously changing conorms should be strictly positive to guarantee positive lengths of basis vectors by Definition~\ref{dfn:forms3d}, there is a minimum length $a>0$ of all basis vectors during a transformation $\Lambda'\to\Lambda$. Then $||v_i|-|u_i||\leq\dfrac{6M\delta}{|v_i|+|u_i|}\leq\dfrac{3M}{a}\delta$. Since the first basis vectors $v_0,u_0$ lie on the positive $x$-axis, the lengths can be replaced by vectors: $|v_0-u_0|\leq\dfrac{3M}{a}\delta$, so $|v_0-u_0|\to 0$ as $\delta\to 0$. \medskip Up to orientation-preserving isometry and keeping both $v_0,u_0$ fixed on the positive $x$-axis, one can put the vectors $v_1,u_1$ into the $xy$-plane of $\mathbb R^3$. Then $v_1,u_1$ can have a non-zero angle equal to the difference $\alpha_1-\beta_1$ of the angles from the positive $x$-axis in $\mathbb R^3$ to $v_1,u_1$, respectively. These angles are expressed via the root products as follows: $$\alpha_i=\arccos\dfrac{v_0\cdot v_i}{|v_0|\cdot|v_i|} =\arccos\dfrac{-r_{0i}^2}{\sqrt{r_{01}^2+r_{02}^2+r_{03}^2}\sqrt{r_{0i}^2+r_{ij}^2+r_{ik}^2}},\leqno{(\ref{thm:root_forms->superbases}a)}$$ $$ \beta_i=\arccos\dfrac{u_0\cdot u_i}{|u_0|\cdot|u_i|} =\arccos\dfrac{-s_{0i}^2}{\sqrt{s_{01}^2+s_{02}^2+s_{03}^2}\sqrt{s_{0i}^2+s_{ij}^2+s_{ik}^2}}\leqno{(\ref{thm:root_forms->superbases}b)} $$ for distinct $i,j,k\in\{1,2,3\}$. If $\delta\to 0$, then $s_{ij}\to r_{ij}$ and $\alpha_i-\beta_i\to 0$ for all indices, because all the above functions are continuous for $|u_j|,|v_j|\geq a$, $j=0,1,2,3$.
\medskip Then we estimate the squared length of the difference by using the scalar product: $$|v_i-u_i|^2=v_i^2+u_i^2-2u_iv_i =(|v_i|^2-2|u_i|\cdot |v_i|+|u_i|^2)+2|u_i|\cdot |v_i|-2|u_i|\cdot |v_i|\cos(\alpha_i-\beta_i)=$$ $$ =(|v_i|-|u_i|)^2+2|u_i|\cdot |v_i|(1-\cos(\alpha_i-\beta_i)) =(|v_i|-|u_i|)^2+|u_i|\cdot |v_i|4\sin^2\dfrac{\alpha_i-\beta_i}{2} \leq$$ $$\leq (|v_i|-|u_i|)^2+|u_i|\cdot |v_i|4\left(\dfrac{\alpha_i-\beta_i}{2}\right)^2 =(|v_i|-|u_i|)^2+|u_i|\cdot |v_i|(\alpha_i-\beta_i)^2,$$ where we have used that $|\sin x|\leq|x|$ for any real $x$. The upper bound $M$ of all root products guarantees a fixed upper bound for the lengths $|u_i|,|v_i|$. The above arguments starting from formulae~(\ref{thm:root_forms->superbases}a,b) hold for any index $i=1,2,3$. For $i=1$, if $\delta\to 0$ then $|v_1|-|u_1|\to 0$ and $\alpha_1-\beta_1\to 0$ as proved above, so we conclude that $u_1\to v_1$. \medskip The vectors $u_2,v_2$ may not be in the same $xy$-plane in $\mathbb R^3$. If they lie on the same line, the difference $u_2-v_2$ tends to 0 as $|u_2|-|v_2|\to 0$. Otherwise $v_2,u_2$ span a plane $P_2$ intersecting the $xy$-plane in a line $L_2$. A unit length vector $w_2$ along $L_2$ can be expressed as linear combinations $a_0v_0+a_1v_1=b_0u_0+b_1u_1$ for some coefficients $a_0,a_1,b_0,b_1\in\mathbb R$. The above arguments now work for angles measured from $w_2$ (instead of $u_0$ and $v_0$) to $v_2,u_2$. Since we already know that $u_0\to v_0$ and $u_1\to v_1$ as $\delta\to 0$, we get $a_0\to b_0$ and $a_1\to b_1$. All arguments about continuity of scalar products and angles work when the vectors $u_0,v_0$ in the $x$-axis are replaced by $w_2$ in the $xy$-plane. \medskip So we get $u_2\to v_2$, similarly $u_3\to v_3$, and finally $L_{\infty}(B,B')\to 0$ as $\delta\to 0$.
\end{proof} \section{Visualisation of large families of lattices from the CSD and conclusions} \label{sec:conclusions} The Cambridge Structural Database (CSD) has about 145K crystals whose lattices are primitive orthorhombic. To represent such a large number of real lattices, we subdivide the quotient triangle into a $200 \times 200$ grid and count the lattices whose parameters fall into each pixel. These counts (from 0 to 75) are represented by the colour bar on the right hand side of Fig.~\ref{fig:CSDorthorhombic3D}. The resulting plot shows high density pixels close to the top vertex, which represents cubic lattices. The white region for $\bar r_{01}<0.1$ indicates that there are far fewer orthorhombic lattices with one side considerably shorter than the others. \begin{figure} \label{fig:CSDorthorhombic3D} \caption{ Density plot of all 145,199 primitive orthorhombic lattices in the CSD. Any such lattice is represented by a triple of side lengths $a=r_{01}\leq b=r_{02}\leq c=r_{03}$, which under scaling by $(a+b+c)^{-1}$ are projected to the quotient triangle $\mathrm{QT}$.} \includegraphics[width=1.0\textwidth]{images/ortho_prim_145199_200x200.png} \end{figure} For more generic triclinic lattices $\Lambda$, the root form $\mathrm{RF}(\Lambda)$ consists of two rows: the top ordered triple $r_{23}\leq r_{13}\leq r_{12}$ and the bottom unordered triple $(r_{01},r_{02},r_{03})$. \begin{figure} \label{fig:CSDtriclinic3D} \caption{Scatter plot for all triclinic lattices $\Lambda$ from the CSD. \textbf{Top}: the ordered top rows of root forms $\mathrm{RF}(\Lambda)$ are projected to the quotient triangle $\mathrm{QT}$. \textbf{Bottom}: the unordered bottom rows of root forms $\mathrm{RF}(\Lambda)$ are projected to the full triangle $\mathrm{FT}$.
} \includegraphics[width=0.9\textwidth]{images/triclinic_rij_scatter_219339.png} \includegraphics[width=\textwidth]{images/triclinic_r0i_scatter_219339.png} \end{figure} The large-scale visualisations confirm that real lattices form a continuum, which further motivates a continuous crystallography, see \citeasnoun[section~10]{bright2021easily}.
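For readers who want to reproduce such plots, the projection described above reduces to a few lines; the following is our own illustrative sketch (the function names are assumptions, not the paper's code).

```python
def to_quotient_triangle(r23, r13, r12):
    """Ordered triple r23 <= r13 <= r12, scaled to sum 1, maps to the
    quotient triangle QT: x = (r12 - r13)/2 in [0, 1/2], y = r23 in [0, 1/3]."""
    s = r23 + r13 + r12
    return (r12 / s - r13 / s) / 2, r23 / s

def to_full_triangle(r01, r02, r03):
    """Unordered triple, scaled to sum 1, maps to the full triangle FT:
    x = (r03 - r02)/2 in [-1/2, 1/2], y = r01 in [0, 1]."""
    s = r01 + r02 + r03
    return (r03 / s - r02 / s) / 2, r01 / s

# A cubic lattice (a = b = c) projects to the top vertex (0, 1/3) of QT,
# matching the high-density region near the top vertex of the density plot.
x_cubic, y_cubic = to_quotient_triangle(1.0, 1.0, 1.0)
x, y = to_quotient_triangle(1.0, 2.0, 3.0)  # a generic ordered triple
```

Binning the resulting $(x,y)$ points on a $200\times 200$ grid then gives density plots of the kind shown above.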
\section{Introduction} The ESA gamma-ray observatory INTEGRAL \citep{winkler03} is dedicated to the fine spectroscopy (2.5 keV FWHM @ 1 MeV) and fine imaging (angular resolution: 12 arcmin FWHM) of celestial gamma-ray sources in the energy range 15 keV to 10 MeV with concurrent source monitoring in the X-ray (3-35 keV) and optical (V-band, 550 nm) bands. While the pre-planned scientific observing programme of the mission is driven by the usual Announcements of Opportunity with peer-reviewed observing proposals, important \underline{\emph{serendipitous discoveries}} have also been made by INTEGRAL. Two of them are briefly described in this paper. Ingredients for new discoveries of point sources above 15 keV are: (i) a very large field of view of almost 900 square degrees, (ii) arc-minute source location capability, and (iii) a broad energy band with good sensitivity. In addition, long exposures and/or frequent monitoring of the inner Galaxy are provided through the general open time observing programme, the core programme (\cite{gehrels}, \cite{winkler01}) and via the key programmes introduced recently. Nature, finally, contributes the highly variable high-energy sky. Monitoring the sky in the INTEGRAL energy range is of fundamental importance for Targets of Opportunity, multi-wavelength follow-up observations and serendipitous discoveries. \section{HMXB and New IGR Sources} Recent hard X-ray sky surveys with INTEGRAL (\cite{bird}, \cite{bodaghee}, \cite{krivonos}) have produced source catalogues containing between 400 and 500 detected point sources, covering energy intervals between 15 keV and 100 keV. These surveys differ slightly in sky coverage, energy range, time interval, detection criteria and detection software, but we can conclude that about 17\% of all detected sources (73 sources on average) are HMXB, out of which one third are new INTEGRAL gamma-ray (IGR) sources identified as HMXB.
About 23\% of all detected sources remain unidentified. In the following sections we will briefly describe the new HMXB classes detected by INTEGRAL: highly obscured HMXB and super-giant fast X-ray transients. Both classes are populated by binary systems, where the compact object is orbiting a massive early-type OB super-giant star. These systems make up about 20\% of the Galactic HMXB population, while the remaining 80\% of HMXB are binaries involving a compact object orbiting a Be star. \subsection{Highly Obscured HMXB} \begin{figure*}[htp] \centering \includegraphics[width=9 cm]{winkler_c_fig1.ps} \caption{Spatial distribution (inner Galaxy) of all Galactic sources (mostly HMXB) detected by INTEGRAL, for which N$_H$ has been reported. From \citet{bodaghee}. } \label{hmxb} \end{figure*} \vfil IGR J16318$-$4848 was discovered by INTEGRAL on 29 January 2003 \citep{courvoisier}, shortly after the start of the nominal observing programme. The source is located very close to the Galactic plane (30$^\prime$ off), and it was found that strong absorption below 5 keV (N$_H$ $\sim$ 2$\times$10$^{24}$ cm$^{-2}$) dominates the spectrum of this HMXB, whose compact object is enshrouded in a Compton-thick environment \citep{walter}. Subsequent long term monitoring confirmed that the source remained bright and Compton thick \citep{ibarra}. Using data from XMM-Newton, ASCA and RXTE, more INTEGRAL-detected IGR sources with similar broad-band ``highly absorbed'' spectra could be identified \citep{lutovinov}. How many of these sources had been detected by INTEGRAL up to the end of 2006? The recent survey by \citet{bodaghee} displays, as shown in Fig.~\ref{hmxb}, the spatial distribution of all Galactic sources (mostly HMXB) detected by the imager IBIS on-board INTEGRAL for which N$_H$ has been reported.
If we define ``highly absorbed'' as local absorption in excess of ISM absorption in the line-of-sight, that is N$_H$ $>$ 10$^{23}$ cm$^{-2}$, then we conclude from Table 1 in \citet{bodaghee} that out of 25 HMXB meeting this criterion, 16 sources (64\%) are new IGR/HMXB sources, and 9 sources (36\%) have been known previously. The ``typical'' source geometry and population characteristics are as follows: (i) a compact source embedded in dense material; (ii) the fluorescence region is larger than the orbital radius; (iii) spherical geometry; (iv) unknown or weakly detected in X-ray surveys prior to INTEGRAL; (v) strong low energy absorption N$_H$ $>$ 10$^{23}$ cm$^{-2}$; (vi) predominantly located in the Galactic bulge and along the Norma/Scutum spiral arms; (vii) long spin periods (typically 100 s to 1300 s) characteristic of wind accretion; (viii) short orbital periods $<$ 10 days; (ix) early type stellar super-giant companion (\cite{walter05}, \cite{bodaghee}). In summary, the absorbed systems $-$ a new class of HMXB $-$ form the majority of active super-giant HMXB in our Galaxy. Thanks to the large increase in the number of known HMXB in the inner Galaxy, it is now possible to study the HMXB spatial distribution in the inner Galaxy and to compare it with the location of star forming regions and spiral arm patterns (e.g. \cite{lutovinov}, \cite{bodaghee}). \subsection{Super-Giant Fast X-Ray Transients} Thanks to the ``ingredients'' mentioned in the introduction, another (sub)class of HMXB previously hidden throughout the Galaxy could be identified with INTEGRAL: the super-giant fast X-ray transients (SFXT). While X-ray transients in general exhibit variations on timescales from a few days to weeks or months, the SFXT are characterized by short outbursts lasting typically up to a few hours. Before INTEGRAL, the nature of these transients was largely unknown (see \cite{sguera} for a summary).
With INTEGRAL it was possible to detect new SFXT, to detect recurrent outbursts for the first time, and to associate SFXT with super-giant HMXB known as persistent sources \citep{sguera06}. SFXT form a new sub-class of super-giant HMXB and we can assume that there are many more SFXT in our Galaxy than previously thought. The origin of the fast outburst, during which the persistent luminosity of $\sim$10$^{32}$ erg/s increases by about 3 orders of magnitude, is not firmly established: sudden accretion of small ejections originating in a clumpy wind, outbursts near or at periastron, or due to a second wind component (equatorial disk) of the super-giant donor (see \cite{sidoli} for a discussion). Up to now (October 2007), eight firm detections of SFXT\footnote{IGR J08408$-$4503, IGR J11215$-$5952, IGR J16465$-$4507, XTE J17391$-$3021, IGR J17544$-$2619, SAX J1818.6$-$1703, AX J1841.0$-$0535, and IGR J18450$-$0435} exist with 20 more candidates under current investigation (\cite{sguera06}, \cite{bird}, \cite{bodaghee}, \cite{bazzano}). \section{Conclusions} INTEGRAL discovered new classes of highly absorbed HMXB and SFXT, thanks to its large FOV combined with a broad energy range, fine imaging, good sensitivity and observing strategy. Long term monitoring with a large FOV is crucial to further enlarge the database. The new discoveries also pose new problems to be addressed by future observations: Why do we find highly absorbed HMXB only as slowly rotating neutron stars with modest magnetic fields, but no cyclotron lines and no black hole candidates? How can we explain the fast luminosity increase of SFXT by about 3 orders of magnitude?